Omni-Vision Based Simultaneous Localization and Mapping for Mobile Robot
|School|Shanghai Jiaotong University|
|Course|Control Theory and Control Engineering|
|Keywords|Mobile robot; Simultaneous localization and mapping (SLAM); Uncertainty; Extended Kalman filter (EKF); Panoramic vision (omni-vision)|
To explore unknown regions with full autonomy, a mobile robot must be able to build a reliable map of its environment while using that map to localize itself. The key technology enabling this capability, simultaneous localization and mapping (SLAM), is one of the most active research topics in robot localization. Although SLAM techniques based on laser, sonar, and other range sensors are relatively mature, vision sensors are inexpensive and information-rich and can take full advantage of state-of-the-art computer vision and image processing techniques; vision-based SLAM (vSLAM) is therefore considered to have broader application prospects and research value. Existing vSLAM research commonly relies on stereo or monocular vision sensors. Ordinary vision sensors, however, have a limited field of view, which leads to shortcomings in feature tracking and positioning continuity. Panoramic vision (omni-vision), with its 360-degree field of view, has been widely applied in robot navigation, video surveillance, and multimedia conferencing. It not only retains the advantages of ordinary vision sensors but also exploits its wide field of view to obtain rich and complete information, strengthening the ability to track and locate targets and compensating for the deficiencies of current vSLAM research. From this perspective, this thesis explores the combination of panoramic vision and vSLAM and presents a scheme that realizes vSLAM with a panoramic vision system, referred to as "panoramic vSLAM". The difficulty lies in extracting visual landmarks robust to distortion from a panoramic image, establishing an effective observation model, and correctly analyzing the system uncertainty.
The proposed scheme extracts color regions from the panoramic image as visual landmarks to create the map. Based on the panoramic imaging principle and an uncertainty analysis, an observation model of the system is established and the landmark positions are estimated; an extended Kalman filter (EKF) then simultaneously updates the robot pose and the landmark map, correcting the accumulated positioning error of the odometry. To address the drawbacks of the panoramic image's complex nonlinear imaging model, the scheme employs feature selection and an equivalent transformation of the observations to improve positioning accuracy and algorithm efficiency. Finally, panoramic vSLAM experiments were carried out on the laboratory's self-developed mobile robot Frontier-II, creating a map by recognizing artificial landmarks with salient color features. The experimental results show that the proposed panoramic vSLAM scheme is feasible and effective, with good positioning accuracy and robustness; they further show that the continuity of panoramic vision can enhance the efficiency with which vSLAM tracks targets and creates maps, thereby improving the robot's localization capability.
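The predict/update cycle described above can be sketched in code. This is a minimal, generic EKF-SLAM step, not the thesis's actual implementation: the unicycle motion model, the range-bearing observation model, and all noise parameters (`R_motion`, `Q_obs`) are illustrative assumptions standing in for the odometry model and the panoramic observation model the thesis derives.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_slam_step(mu, Sigma, u, z, landmark_id, R_motion, Q_obs, dt=1.0):
    """One EKF-SLAM predict/update cycle (illustrative sketch).

    State mu = [x, y, theta, lx_1, ly_1, ...]; u = (v, w) is odometry
    (linear/angular velocity); z = (range, bearing) to a known landmark.
    """
    n = mu.size
    x, y, th = mu[:3]
    v, w = u
    # --- Predict: apply the (assumed) unicycle motion model to the pose.
    mu_pred = mu.copy()
    mu_pred[0] = x + v * dt * np.cos(th)
    mu_pred[1] = y + v * dt * np.sin(th)
    mu_pred[2] = wrap(th + w * dt)
    G = np.eye(n)                       # motion Jacobian (landmarks static)
    G[0, 2] = -v * dt * np.sin(th)
    G[1, 2] = v * dt * np.cos(th)
    Sigma_pred = G @ Sigma @ G.T
    Sigma_pred[:3, :3] += R_motion      # odometry noise on the pose block
    # --- Update: range-bearing observation of one mapped landmark.
    j = 3 + 2 * landmark_id
    dx = mu_pred[j] - mu_pred[0]
    dy = mu_pred[j + 1] - mu_pred[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, wrap(np.arctan2(dy, dx) - mu_pred[2])])
    H = np.zeros((2, n))                # observation Jacobian
    H[0, 0], H[0, 1] = -dx / r, -dy / r
    H[1, 0], H[1, 1], H[1, 2] = dy / q, -dx / q, -1.0
    H[0, j], H[0, j + 1] = dx / r, dy / r
    H[1, j], H[1, j + 1] = -dy / q, dx / q
    S = H @ Sigma_pred @ H.T + Q_obs    # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)
    innov = np.array([z[0] - z_hat[0], wrap(z[1] - z_hat[1])])
    mu_new = mu_pred + K @ innov        # joint pose + map correction
    mu_new[2] = wrap(mu_new[2])
    Sigma_new = (np.eye(n) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new
```

Because the robot pose and all landmark positions sit in one joint state vector, each observation corrects both at once, which is how the filter cancels the odometry's accumulated drift.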