Date: Oct 30th, 2008
Room: ICICS/CS 206
Speaker: Jim Little
Title: Spatial search for finding objects
Abstract:
Our visually guided mobile robots map, navigate, and act based on visual landmarks and the geometric structure of their environment, encoded in a map. Maps also record contextual information and play an important role in planning actions.

In the Semantic Robot Vision Challenge, which our robots have won for the past two years, a robot is given a list of object names, downloads images of the named objects from the Web, and constructs classifiers to recognize those objects. It then searches for the objects in a small test room and returns, for each object, its name and the best image matching that name.

I will show how we met this challenge by developing sequential object recognition techniques, termed "informed visual search", to find specific objects within a cluttered environment. The approach employs an active strategy based on identifying potential objects with an attention mechanism and on planning to obtain images of these objects from numerous viewpoints. Many open problems remain, such as how to integrate top-down and bottom-up strategies, how to use object models, and how to incorporate uncertainty in the maps, the actions, and the object models.
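The search strategy described in the abstract — an attention mechanism proposes salient candidate locations, the robot plans several viewpoints of each candidate, and classifiers score the resulting images against each target name — can be sketched as a simple loop. This is a minimal illustrative sketch, not the speakers' actual system: the room representation, the saliency scores, and the classifier stub are all hypothetical stand-ins.

```python
def attention_candidates(room, n=3):
    """Bottom-up attention stub: return the n most salient candidate
    locations in the room (hypothetical saliency scores)."""
    return sorted(room, key=lambda loc: loc["saliency"], reverse=True)[:n]


def classify(image, target):
    """Classifier stub: score how well an image matches a target object
    name. A real system would use classifiers trained on Web images."""
    return image["scores"].get(target, 0.0)


def informed_search(room, targets, views_per_candidate=2):
    """Sequential search: visit salient locations first, view each from
    several viewpoints, and track the best-matching image per target."""
    best = {t: (None, 0.0) for t in targets}
    for loc in attention_candidates(room):
        for view in range(views_per_candidate):
            # Plan a viewpoint of this candidate and capture an image
            # (here: cycle through pre-stored views of the location).
            image = loc["views"][view % len(loc["views"])]
            for t in targets:
                score = classify(image, t)
                if score > best[t][1]:
                    best[t] = (image["id"], score)
    return best
```

A run over a toy two-location "room" would return, for each requested name, the identifier of its best-matching image together with the classifier score, mirroring the challenge's required output of object names paired with their best images.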