Exploring and mapping an environment are fundamental tasks for robotic autonomy. Current approaches to these problems focus on learning geometric models of the world using range sensors and exploration heuristics. In this work we present the concept of the visual map, a representation of the visual structure of the environment, along with a framework for learning that structure. Such maps enable a camera-equipped robot to navigate and localize, and can also serve as a useful tool for visualization and virtual environment construction.
In the first half of this work we develop the map-learning framework, including how visual scene features are initially selected, tracked, and evaluated, and how they can subsequently be used for robot pose estimation, navigation, and scene reconstruction. We take a probabilistic approach to these tasks and present experimental results demonstrating the utility of the framework.
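To give a flavour of what a probabilistic approach to pose estimation with visual features involves, the sketch below shows a single particle-filter reweighting step: candidate robot poses are scored by how well each explains an observed range to a known landmark. This is a generic illustration, not the specific method of the thesis; the function names, the range-only observation model, and the Gaussian noise parameter are all assumptions made for the example.

```python
import math

def gaussian(x, sigma):
    # Gaussian density of the observation error, used as a particle weight.
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update_weights(particles, landmark, observed_range, sigma=0.5):
    """Reweight pose hypotheses (x, y, theta) by how well each explains
    the observed range to a known visual landmark; returns normalized weights."""
    weights = []
    for (x, y, theta) in particles:
        expected = math.hypot(landmark[0] - x, landmark[1] - y)
        weights.append(gaussian(observed_range - expected, sigma))
    total = sum(weights)
    return [w / total for w in weights]

# Two pose hypotheses; the first is consistent with the measurement.
particles = [(0.0, 0.0, 0.0), (5.0, 5.0, 0.0)]
weights = update_weights(particles, landmark=(3.0, 0.0), observed_range=3.0)
```

In a full filter this step alternates with a motion-model prediction and resampling; here it only shows how visual observations concentrate probability mass on consistent poses.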
An important consideration when mapping an environment is what priors, if any, are necessary for constructing an accurate map. The second half of our work poses two questions: first, how can visual maps be constructed using only limited prior information about the exploratory trajectory? Second, how does the exploratory trajectory influence the accuracy of the map? We approach these problems empirically and present experimental results that address them.