Incremental Dense Multi-modal 3D Scene Reconstruction

Acquiring reliable depth maps is an essential prerequisite for accurate and incremental 3D reconstruction used in a variety of robotics applications. Depth maps produced by affordable Kinect-like cameras have become a de facto standard for indoor reconstruction and the driving force behind the success of many algorithms. However, Kinect-like cameras are less effective outdoors, where one must rely on other sensors. A combination of a stereo camera and a lidar is often used; however, the acquired data are typically processed in independent pipelines, which generally leads to sub-optimal performance since each sensor suffers from different drawbacks. In this paper, we propose a probabilistic model that efficiently exploits the complementarity between different depth-sensing modalities for incremental dense scene reconstruction. Our model relies on a piecewise planarity prior, an assumption that holds in both indoor and outdoor scenes. We demonstrate the effectiveness of our approach on the KITTI dataset, and provide qualitative and quantitative results showing high-quality dense reconstruction of a number of scenes.
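To illustrate the kind of complementarity the abstract refers to, the minimal sketch below fuses per-pixel stereo and lidar depth estimates under a simple Gaussian (inverse-variance) model. This is only an illustrative assumption, not the probabilistic model proposed in the paper (which additionally incorporates the piecewise planarity prior); the function name `fuse_depth_maps` and its parameters are hypothetical.

```python
import numpy as np


def fuse_depth_maps(stereo_depth, stereo_var, lidar_depth, lidar_var):
    """Fuse per-pixel stereo and lidar depths by inverse-variance weighting.

    Missing measurements are marked with NaN and contribute zero weight.
    Hypothetical sketch; not the model described in the paper.
    """
    # Precision (inverse variance) per pixel; NaN measurements get zero weight.
    w_stereo = np.where(np.isnan(stereo_depth), 0.0, 1.0 / stereo_var)
    w_lidar = np.where(np.isnan(lidar_depth), 0.0, 1.0 / lidar_var)

    total_w = w_stereo + w_lidar
    fused = (w_stereo * np.nan_to_num(stereo_depth)
             + w_lidar * np.nan_to_num(lidar_depth)) / np.maximum(total_w, 1e-9)

    # Pixels observed by neither sensor remain undefined.
    fused[total_w == 0.0] = np.nan
    fused_var = np.where(total_w > 0.0, 1.0 / np.maximum(total_w, 1e-9), np.nan)
    return fused, fused_var
```

In this toy formulation, dense but noisy stereo depth dominates where lidar returns are absent, while sparse but accurate lidar measurements dominate wherever they are available, which is one simple way the two modalities can compensate for each other's drawbacks.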