This thesis presents a modified RGB-D Simultaneous Localization and Mapping
(SLAM) algorithm that improves camera pose estimation in dynamic environments.
The openly available Real-Time Appearance-Based Mapping (RTAB-Map)
SLAM algorithm is analyzed and extended with a static point weighting algorithm
that handles dynamic objects. The Frame-To-Map (F2M) approach of RTAB-Map
is retained, while the odometry is computed from a modified version of the
static point weighting algorithm. Foreground depth edges are extracted and
matched with the intensity-assisted Iterative Closest Point (IAICP) method. To distinguish
between features of dynamic and static objects, the integrated SLAM algorithm
assigns each feature a static weight based on the spatial distances of its matches.
This weighting is incorporated into the estimation of the camera poses. The influence
of several parameters used in SLAM algorithms is analyzed, with the proposed
algorithm as an example, and parameter values are chosen so that the proposed
SLAM algorithm delivers robust performance in different environments. The proposed
SLAM algorithm is evaluated on two datasets to assess its performance across
varied environments. To analyze how the combination of the two algorithms affects
accuracy, the proposed algorithm is compared against the two baseline algorithms.
This master's thesis was prepared at and in cooperation with ITK Engineering.