Agenda
- Trip planning and itinerary update from Scott
- Robot shipping plan
- Self-administered PC and networking update
- List of pre-known objects posted
- Demo of working image processing nodes
- Finally, the list of completed and in-progress tasks.
Minutes
- Everyone should contribute to the abstract.
- It can be checked out like this: svn co svn+ssh://username@cascade.cs.ubc.ca/lci/project/raid1/srvc/SVN/DOC/abstract
- Bug Scott about planning who's going to Vegas
- Start looking into renting an SUV/minivan to drive the bot to Vegas. In parallel, Catherine will investigate shipping options in case those look better.
- Robot network setup is complete. From any locally administered machine, we should be able to drive the robot
- Put this in your .bashrc.ros file:
    function fraser() {
        export ROS_MASTER_URI=http://fraser:11311
    }
- Then type "fraser" in your shell to have ROS use the robot's roscore
- Test this by typing "rostopic list" and make sure you see the robot's devices
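- As an extra check from Python (beyond "rostopic list"), the master can be queried directly over its XML-RPC API. This is only a rough sketch, assuming a Python 2 environment on a machine that can reach fraser; getSystemState is part of the standard ROS Master API:
    import os, xmlrpclib
    uri = os.environ.get("ROS_MASTER_URI", "http://localhost:11311")
    master = xmlrpclib.ServerProxy(uri)
    # getSystemState returns (code, statusMessage, (publishers, subscribers, services))
    code, msg, (pubs, subs, srvs) = master.getSystemState("/connectivity_check")
    topics = sorted(set(t for t, _ in pubs + subs))
    print "%d topics visible on %s" % (len(topics), uri)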
- This is the list of pre-known objects, grouped by our assessments:
- We can probably do these already:
- laptop
- toy car
- toy Stegosaurus
- table lamp
- These objects are really going to need some contours or structure:
- mug
- bottle
- bowl
- frying pan
- Not sure if there's any point working on:
- We need to obtain a bunch of these objects for testing. We'll go buy more if needed. [ALL]
Completed components:
- Porting of basic drivers for: Bumblebee, Canon, and PowerBot
- Tower design
- gmapping
- Tilting laser drivers
- Robot coordinate transform code
- Network configuration and development environment
- Robot router setup
- Set up self-administered PCs
- ROS instructions
- WG nav stack
- Basic saliency map computation
Current in-progress task list:
- Capture data from robot for testing
- Tower upgrade:
- Order materials for building a new laser/camera mount and assemble it.
- Attention system:
- Stereo + saliency combined to identify interesting regions (see the sketch after this task list) [PV]
- Tilt laser point cloud segmentation [MM]
- Choice of where to look
- High-level control functionality such as planning
- Random walk behavior
- 3 main high-level planners:
- Exploring frontiers
- Find tables [PV]
- Space coverage
- Look back at objects
- Top level state machine to choose between above planners
- Recognition framework (James module directly or something built upon that) [AG and CG]
- Skeleton framework for the recognition system (inputs: robot images; outputs: class guesses); see the skeleton sketch after this task list
- Combining results from different types of detectors (different algorithms)
- Combining results from various viewpoints
- Evaluate on the known objects
- Test data interface
- Felzenszwalb detector
- MB profiled Kenji's Python implementation; most of the time is spent in convolution, which is promising
- Will investigate moving pieces to CUDA
- Helmer detector
- McCann
- Training data interface and additional parameters
- Load balancing between various recognition algorithms
- CUDA on fraser [MB, WW and TH]
- Need to get the code compiling
- GPUSift
- FastHOG
- Web grabbing module [PF and CG]
- Add additional sources of info
- Investigate filtering techniques
- Integrate output data format with classification
- Speed-up of Felzenszwalb training [MB]
- Data transfer costs kill several of the ideas we've had about converting to CUDA
- Kenji suggested several non-GPU speedups which Matt will work on next
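A rough sketch of the "stereo + saliency" combination from the attention-system task above. All names and thresholds here are placeholders rather than our actual code; it only illustrates the intended data flow of picking regions that are both salient and within usable stereo range:
    import numpy as np

    def interesting_regions(saliency, depth, saliency_thresh=0.5, max_range=3.0):
        # saliency: HxW map in [0, 1]; depth: HxW stereo depth in metres (0 = invalid)
        near = (depth > 0) & (depth < max_range)   # valid, nearby stereo returns
        salient = saliency > saliency_thresh       # sufficiently salient pixels
        mask = near & salient
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None
        # Return one bounding box around the candidate pixels for now;
        # a real version would cluster them into multiple regions.
        return (xs.min(), ys.min(), xs.max(), ys.max())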
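Likewise, a minimal sketch of the recognition-framework skeleton (robot images in, class guesses out) with a naive way of combining results from several detectors and viewpoints. The class and method names are hypothetical, not the actual James module interface:
    from collections import defaultdict

    class Detector(object):
        # One recognition algorithm, e.g. a Felzenszwalb- or SIFT-based detector.
        def detect(self, image):
            # Should return a list of (class_name, score) guesses for this image.
            raise NotImplementedError

    class RecognitionFramework(object):
        def __init__(self, detectors):
            self.detectors = detectors

        def classify(self, images):
            # Combine guesses over all viewpoints and all detectors by simply
            # summing scores per class; smarter fusion can replace this later.
            totals = defaultdict(float)
            for image in images:            # multiple viewpoints of one object
                for det in self.detectors:  # multiple algorithms
                    for cls, score in det.detect(image):
                        totals[cls] += score
            return sorted(totals.items(), key=lambda kv: -kv[1])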
Future tasks pending completion of others:
- Use of 3D models in recognition
- Use of 3D information and context in attention system
- Real time result reporting
- Feeding back classification results to robot planner
- Investigate new cameras which might be faster than the Canon
- Prioritizing classifier computation towards images that look most promising to the attention system, and based on which classes have already been recognized.
--
DavidMeger - 14 Oct 2009