Minutes:
- 2-page abstract due for this year. We'll develop it in DOC/abstract inside SVN. Dave will start it and email it around [DM]
- Qualification video - record it whenever it's convenient
- Start planning itinerary
- 12 possibly interested {TH, AG, MM, CG, PF, MB, SH, DM, PV, SM, WW, TS}
- SH to email the 4 supervisors {JL, DL, BW, AM}, who will figure it out for us
- For budgeting reasons, the costs are roughly:
- >= $380 US round-trip for flights
- ~ $55/night per room at the Monte Carlo (cheaper elsewhere)
- ~ $2000 for robot shipping
- Potentially some conference registrations ($400.00 for students)
- Update from Catherine on shipping:
- Cost for shipping is about the same as cost to rent a car and drive
- MM might be OK with driving his own car. Open question: how much would UBC/SRVC compensate him for putting 6K on his new Jetta?
- Debate on driving vs. shipping on several axes (a rough cost comparison is sketched below, after this discussion):
- Shipping:
- Pros: Doesn't take our time; if we invest in a crate (~$800), we keep it for future trips
- Cons: Potential for damage, extra downtime for robot with packaging
- Driving:
- Pros: We keep the robot in our possession, saves cost on flights
- Cons: Takes a lot of our time
- Major issue seems to be how long the shipping downtime would be. CG to check this out.
- Also would like to know if it's possible to rent a truck with unlimited km. SH to check this out.
- Don't forget that loading the robot into anything without a ramp is difficult.
- Need to decide soon.
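- Rough cost comparison: a minimal Python sketch using the figures above. The flight, shipping, and crate numbers come from these minutes; the number of drivers and the truck rental + fuel figure are placeholders to be replaced once CG and SH report back.

      # Rough shipping-vs-driving comparison for the trip.
      # Figures taken from the minutes:
      FLIGHT = 380.0            # >= $380 US round-trip per person
      SHIPPING = 2000.0         # quoted robot shipping cost
      CRATE = 800.0             # one-time crate investment (reusable later)
      # Placeholder assumptions, NOT from the minutes:
      DRIVERS = 3               # people who would drive instead of flying
      RENTAL_AND_FUEL = 900.0   # guess at truck rental + fuel, pending SH's unlimited-km quote

      ship_option = SHIPPING + CRATE + DRIVERS * FLIGHT  # robot ships; would-be drivers fly
      drive_option = RENTAL_AND_FUEL                     # drivers take the robot; no crate or shipping
      print("Ship the robot:  $%.0f" % ship_option)
      print("Drive the robot: $%.0f" % drive_option)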
- Theakston and Lapinkulta are self-administered and can be used for development. Log in with lciuser... and sudo adduser to make your own account.
- Finally, the list of completed and in-progress tasks.
Completed components:
- Porting of basic drivers for: Bumblebee, Canon, and PowerBot
- Tower design
- gmapping
- Tilting laser drivers
- Robot coordinate transform code
- Network configuration and development environment
- Robot router setup
- Set up self-administered PCs
- ROS instructions
Current in-progress task list:
- Capture data from robot for testing
- Basic robot functions based on ROS, with the aim of performing a preliminary test run of navigation and mapping (see the first sketch after this list) [MM and DM]
- WG nav stack
- Tower upgrade:
- Order material for building a new laser/camera mount and assemble it.
- Saliency maps and visual attention
- Basic saliency map computation (an example algorithm is sketched after this list) [DM]
- Stereo + saliency combined to identify interesting regions [PV]
- High-level control functionality such as planning
- Random walk behavior
- 3 main high-level planners:
- Exploring frontiers
- Find tables [PV]
- Space coverage
- Look back at objects
- Top level state machine to choose between above planners
- Choice of "where to look", a.k.a. the attention system
- Recognition framework (James module directly or something built upon that) [AG and CG]
- Combining results from different types of detectors (different algorithms)
- Combining results from various viewpoints
- We'll meet on the previous two topics tomorrow
- Collect data for 5 "given" object classes once they're published
- Test data interface
- Felzenszwalb detector
- MB profiled Kenji's Python implementation - most of the time is spent in convolution - promising (see the profiling sketch after this list)
- Will investigate moving pieces to CUDA
- Helmer detector
- McCann
- Training data interface and additional parameters
- CUDA on fraser [MB, WW and TH]
- Need to get the code compiling
- GPUSift
- FastHOG
- Web grabbing module [PF and CG]
- Add additional sources of info
- Investigate filtering techniques
- Integrate output data format with classification
- Speed-up of Felzenszwalb training [MB]
- Initial investigation to verify this is a doable task: profiling the current code, ensuring good performance on web data, and investigating potential speedups such as GPU feature extraction and SVM learning
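For the preliminary navigation/mapping test run, a minimal rospy node might look like the sketch below. This is only a sketch under assumptions: the topic names (/scan, /cmd_vel), the node name, and the 0.5 m stop threshold are placeholders, not our actual configuration.

      #!/usr/bin/env python
      # Minimal smoke-test node: drive forward slowly, stop if the laser sees
      # anything within half a metre. Topic names and threshold are assumptions.
      import rospy
      from sensor_msgs.msg import LaserScan
      from geometry_msgs.msg import Twist

      def scan_cb(scan, pub):
          cmd = Twist()                                              # zero velocity by default
          ranges = [r for r in scan.ranges if r > scan.range_min]    # drop invalid returns
          if ranges and min(ranges) > 0.5:                           # nothing closer than 0.5 m
              cmd.linear.x = 0.2                                     # creep forward at 0.2 m/s
          pub.publish(cmd)

      if __name__ == '__main__':
          rospy.init_node('srvc_nav_smoke_test')                     # node name is a placeholder
          pub = rospy.Publisher('/cmd_vel', Twist)
          rospy.Subscriber('/scan', LaserScan, scan_cb, callback_args=pub)
          rospy.spin()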
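The minutes do not say which algorithm DM will use for the basic saliency map. As one concrete illustration (not necessarily the method we'll adopt), the spectral-residual approach of Hou & Zhang can be computed in a few lines of NumPy/SciPy:

      import numpy as np
      from scipy.ndimage import uniform_filter, gaussian_filter

      def spectral_residual_saliency(gray):
          """Saliency map for a 2-D grayscale array via the spectral-residual method."""
          spectrum = np.fft.fft2(gray)
          log_amplitude = np.log(np.abs(spectrum) + 1e-8)
          phase = np.angle(spectrum)
          # The "residual" is the log amplitude minus its local (3x3) average.
          residual = log_amplitude - uniform_filter(log_amplitude, size=3)
          saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
          saliency = gaussian_filter(saliency, sigma=3)   # smooth before picking regions
          return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

One simple way to combine this with stereo (PV's item) could be to re-weight the resulting map by the disparity image so that nearby, salient regions are flagged as interesting.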
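To reproduce MB's profiling of the Felzenszwalb detector (and to check which pieces are worth moving to CUDA), something like the following can be run. The module and function names are hypothetical stand-ins for Kenji's implementation:

      import cProfile
      import pstats
      import felzenszwalb_detector   # hypothetical module name standing in for Kenji's code

      # Profile one detection run and dump the stats to a file.
      cProfile.run('felzenszwalb_detector.detect("test_image.png")', 'detector.prof')

      # Show the 15 most expensive calls by cumulative time; if MB's measurement
      # holds, the convolution routines should dominate the listing.
      pstats.Stats('detector.prof').sort_stats('cumulative').print_stats(15)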
Future tasks pending completion of others:
- Use of 3D models in recognition
- Use of 3D information and context in attention system
- Real time result reporting
- Feeding back classification results to robot planner
- Investigate new cameras which might be faster than the Canon
- Prioritize classifier computation towards images that look most promising to the attention system, taking into account the classes which have already been recognized.