Recent progress in computer vision and deep learning has enabled precise depth sensing and realistic 3D modeling from visual data. In this presentation, Prof. Lim will propose, in place of conventional cameras, a multi-camera setup of ultra-wide-angle fisheye lenses, each offering a field of view (FoV) of more than 220 degrees. Such an arrangement can enable 360-degree omnidirectional depth estimation, as well as more robust and accurate visual simultaneous localization and mapping (SLAM). By combining dense depth estimation with the estimated camera trajectory, a full 3D model of the surrounding environment can be reconstructed. The proposed camera system can support autonomous navigation of mobile robots, the generation and updating of HD maps for autonomous vehicles, and city-scale visual mapping using helmet-mounted miniature cameras. The presentation will cover the fundamental challenges, the core algorithmic concepts, and practical applications in robotics and 3D modeling.