SRVC Meeting minutes for September 30th, 2009

Discussion:

  • Report that Mechanical Turk will not be allowed this year.
  • Reminder to lock up the laptops because of the recently reported break-in attempts.
  • Matt Baumann will start on the team tomorrow, and we discussed several ideas for things he can help with. These are mainly related to speeding up various classifiers, including the Felzenszwalb detector, and performing fast feature extraction on the GPU.
  • Next we went over the current status of each component. Going forward, I'll record things with a past-present-future view: what we've completed, what tasks are active, and what we still need to start down the road.

Completed components:

  • Porting of basic drivers for the Bumblebee camera and the PowerBot base
  • Tower design

Current in-progress task list:

  • Basic robot functions based on ROS, with the aim of performing a preliminary test run of navigation and mapping [MM and DM]
    • Willow Garage (WG) nav stack
    • gmapping
    • Coordinate transform code (see the tf sketch after this list)
    • Tilting laser driver
    • Network configuration and development environment
      • Robot router setup
      • Set up self-administered PCs
      • ROS instructions
    • Tower upgrade:
      • Order materials for a new laser/camera mount and assemble it.

  • Recognition framework (James's module directly, or something built on top of it) [AG and CG]
    • Test data interface
    • Felzenszwalb detector
    • Helmer
    • McCann
    • Training data interface and additional parameters
  • CUDA on fraser [WW and TH]
  • Web grabbing module [PF and CG]
    • Add additional sources of info
    • Investigate filtering techniques (a simple filtering sketch appears after this list)
    • Integrate output data format with classification
  • Speed-up of Felzenszwalb training
    • Initial investigation to verify that this is feasible (profiling the current code, ensuring good performance on web data, and investigating potential speedups such as GPU feature extraction and SVM learning)
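
For the coordinate transform work, below is a minimal sketch of how a fixed base_link -> laser transform for the tower-mounted laser could be broadcast with ROS tf in Python. The node name, publish rate, and offsets are placeholder values, not measurements from the actual tower.

    # Hypothetical sketch (not the team's actual code): broadcast a fixed
    # base_link -> laser transform for the tower-mounted laser using ROS tf.
    # Node name, publish rate, and offsets are made-up placeholder values.
    import rospy
    import tf

    def broadcast_laser_transform():
        rospy.init_node('tower_tf_broadcaster')
        br = tf.TransformBroadcaster()
        rate = rospy.Rate(20)  # placeholder publish rate in Hz
        while not rospy.is_shutdown():
            # translation (x, y, z) in metres, rotation as a quaternion
            br.sendTransform((0.1, 0.0, 1.2),
                             tf.transformations.quaternion_from_euler(0.0, 0.0, 0.0),
                             rospy.Time.now(),
                             'laser',      # child frame
                             'base_link')  # parent frame
            rate.sleep()

    if __name__ == '__main__':
        broadcast_laser_transform()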

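Related to the web grabbing module's filtering task, the following is a rough sketch of the kind of filtering that could be applied to downloaded images before classification: discard files that are very small or have extreme aspect ratios. It uses PIL purely for illustration, and the thresholds are arbitrary placeholders.

    # Hypothetical sketch of a simple pre-classification filter for web-grabbed
    # images: reject files that are tiny or have extreme aspect ratios.
    # Thresholds are arbitrary placeholders, not tuned values.
    import os
    from PIL import Image

    MIN_SIDE = 100          # pixels
    MAX_ASPECT_RATIO = 3.0  # longest side : shortest side

    def filter_images(image_dir):
        kept = []
        for name in os.listdir(image_dir):
            path = os.path.join(image_dir, name)
            try:
                width, height = Image.open(path).size
            except IOError:
                continue  # skip files that are not readable images
            if min(width, height) < MIN_SIDE:
                continue
            if max(width, height) > MAX_ASPECT_RATIO * min(width, height):
                continue
            kept.append(path)
        return kept
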
Future tasks pending completion of others:

  • High-level control functionality such as planning [DM]
    • Random walk behavior
    • Three main high-level planners
    • Top-level state machine to choose between the above planners (see the sketch after this list)
    • Choice of "where to look", i.e., the attention system
  • Use of 3D models in recognition
  • Use of 3D information and context in attention system
  • Real time result reporting
  • Feeding back classification results to robot planner
  • Investigate new cameras which might be faster than the Canon
  • Prioritizing classifier computation toward images that look most promising to the attention system, and based on which classes have already been recognized (see the prioritization sketch below)
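
As a rough illustration of the top-level state machine item above, here is a minimal sketch in Python. The planner names, state names, and switching conditions are placeholders; the actual set of planners and their triggers have not been decided yet.

    # Hypothetical sketch of a top-level state machine that chooses between
    # high-level planners.  States, planner names, and switching rules are
    # placeholders, not decisions made at this meeting.
    class TopLevelPlanner(object):
        def __init__(self, planners):
            # planners: dict mapping a state name to a callable returning an action
            self.planners = planners
            self.state = 'explore'  # placeholder initial state

        def step(self, world_state):
            # Placeholder switching logic: explore (e.g. random walk) until the
            # attention system reports a promising view, then approach it.
            if self.state == 'explore' and world_state.get('promising_view'):
                self.state = 'approach'
            elif self.state == 'approach' and world_state.get('target_reached'):
                self.state = 'explore'
            return self.planners[self.state](world_state)

    # Example wiring with stand-in planner functions:
    #   planner = TopLevelPlanner({'explore': random_walk, 'approach': approach_object})
    #   action = planner.step({'promising_view': False})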

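For the prioritization item above, one simple realization is a priority queue over candidate images ordered by attention score, skipping classes that have already been found. This is only a sketch of the idea; the scoring rule and data layout are made up for illustration.

    # Hypothetical sketch: order candidate images for classification by an
    # attention score, and skip object classes that are already recognized.
    import heapq

    def prioritize(candidates, recognized_classes):
        # candidates: iterable of (attention_score, image_id, likely_class) tuples
        heap = []
        for score, image_id, likely_class in candidates:
            if likely_class in recognized_classes:
                continue  # do not spend classifier time on classes already found
            # heapq is a min-heap, so push negated scores to pop best-first
            heapq.heappush(heap, (-score, image_id, likely_class))
        while heap:
            neg_score, image_id, likely_class = heapq.heappop(heap)
            yield image_id, likely_class, -neg_score
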
-- DavidMeger - 30 Sep 2009
