Panoptes: A Scalable Architecture for Video Sensor Networking Applications

Wu-chi Feng, Brian Code, Ed Kaiser, Mike Shea, Wu-chang Feng

Presented by Gary Huang


Paper Summary

The paper introduces Panoptes, a video-based sensor networking architecture, described below in terms of the Panoptes platform, its implementation, and its performance.  The paper makes three significant contributions: (1) a low-power, high-quality video capture platform; (2) a priority-based buffer management algorithm that saves power; (3) a bit-mapping algorithm for efficient querying and retrieval of video data.

(I) Panoptes Platform

The Panoptes video sensor has three design requirements: low power, flexible adaptive buffering, and power management.  The Panoptes hardware consists of an Intel StrongARM 206 MHz embedded platform, a Logitech 3000 USB video camera, 64 Mbytes of memory, and an 802.11-based networking card, running the Linux 2.4.19 operating system kernel.  The Panoptes sensor software architecture consists of four components: Video Capture, Filtering, Compression, and Buffering and Adaptation.
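The four software components form a capture-to-buffer pipeline on the sensor.  Below is a minimal sketch of that flow; the queue-based handoff and all function names are illustrative assumptions, not the paper's actual implementation:

```python
from queue import Queue

# Hypothetical sketch of the four-stage Panoptes software pipeline:
# capture -> filter -> compress -> buffer/adapt.  The stage names follow
# the paper; everything else here is a stand-in for illustration.

def capture_frame(i):
    # Stand-in for grabbing a frame from the USB camera.
    return {"seq": i, "pixels": f"frame-{i}"}

def passes_filter(frame):
    # Stand-in for the filtering stage (e.g. change detection);
    # pretend every other frame contains a change worth keeping.
    return frame["seq"] % 2 == 0

def compress(frame):
    # Stand-in for JPEG compression.
    return {"seq": frame["seq"], "jpeg": f"compressed({frame['pixels']})"}

def run_pipeline(n_frames):
    buffered = Queue()
    for i in range(n_frames):
        frame = capture_frame(i)
        if not passes_filter(frame):
            continue              # filtered-out frames never reach compression
        buffered.put(compress(frame))
    return [buffered.get() for _ in range(buffered.qsize())]

out = run_pipeline(6)             # frames 0, 2, 4 survive the filter
```

The point of the structure is that filtering happens before compression, so frames without interesting content never cost CPU time to encode.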

(II) Panoptes Implementation

The application that the paper implemented is called the Little Sister Sensor Networking Application.  The application consists of three parts: the User Interface, the Video Sensor Software, and the Video Aggregation Software.  The User Interface allows clients to communicate with the Panoptes system to request video events.  Two algorithms are associated with the Video Sensor Software: a simple change detection filtering algorithm for event recognition, and a simple bit-mapping algorithm for efficient querying of and access to the stored video data.  The Video Aggregation Software is responsible for the storage and retrieval of video data between the video sensors and the clients.
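The bit-mapping idea can be sketched as follows.  This is an illustrative reconstruction in the spirit of the paper's scheme, not its actual code: each stored frame gets one bit per image block, a set bit meaning the block changed, and a region query ANDs a query mask against each frame's bitmap.

```python
# Hypothetical sketch of bit-mapped indexing for stored video.
# Block layout, mask size, and function names are assumptions.

def make_bitmap(changed_blocks, n_blocks=16):
    # One bit per image block; set bits mark blocks where motion occurred.
    bits = 0
    for b in changed_blocks:
        bits |= 1 << b
    return bits

def query(index, region_blocks):
    # Return ids of frames whose motion overlaps the queried region.
    mask = make_bitmap(region_blocks)
    return [fid for fid, bits in index.items() if bits & mask]

index = {
    0: make_bitmap([1, 2]),     # motion in blocks 1-2
    1: make_bitmap([7]),        # motion in block 7
    2: make_bitmap([2, 7, 8]),  # motion in blocks 2, 7, 8
}
hits = query(index, [7])        # frames with activity in block 7
```

Because the per-frame index is a single integer, a query over thousands of stored frames reduces to cheap bitwise AND operations rather than decoding any video.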

(III) Panoptes Performance

The USB performance results show that, for frames of different sizes, smaller frames transfer faster under the same compression settings.  The compression results indicate that the Panoptes sensor uses an optimized compression routine to reduce CPU time.  The component interaction results show that JPEG encoding takes the longest time, and the power measurements show that the maximum power consumption of the Panoptes sensor is about 5 watts, roughly that of a night light.


Group Discussions

Here are some questions discussed in class:

(1) How does the Panoptes sensor determine the priorities of video frames, and how does this work?

Incoming video data is mapped to a number of priorities defined by the application.  The priorities can be used to manage both frame rate and frame quality.  It is important to note that the priority mapping can change dynamically over time.  When the buffer fills, the flexible priority-based streaming mechanism discards data from the lowest-priority layer toward the highest-priority layer until the amount of buffered data falls below the low-water mark.
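The discard policy above can be sketched as a minimal example.  The priority levels, water-mark values, and "drop oldest first within a level" choice are illustrative assumptions; only the overall lowest-priority-first discard down to a low-water mark comes from the paper.

```python
# Hypothetical sketch of priority-based buffer discard with water marks.
# Lower priority number = less important frame.  When buffered data
# exceeds HIGH_WATER, frames are dropped from the lowest-priority level
# upward until total usage falls below LOW_WATER.

LOW_WATER, HIGH_WATER = 4, 8   # frame counts, chosen for illustration

def total(buffers):
    return sum(len(q) for q in buffers.values())

def discard_low_priority(buffers):
    for prio in sorted(buffers):                      # lowest priority first
        while buffers[prio] and total(buffers) > LOW_WATER:
            buffers[prio].pop(0)                      # drop oldest frame here
        if total(buffers) <= LOW_WATER:
            return

def enqueue(buffers, frame, priority):
    buffers.setdefault(priority, []).append(frame)
    if total(buffers) > HIGH_WATER:
        discard_low_priority(buffers)

bufs = {}
for i in range(10):
    enqueue(bufs, f"f{i}", priority=i % 3)            # rotate priorities 0..2
```

After the run, the highest-priority level still holds all of its frames while the lower levels have been trimmed, which is exactly the behavior the water-mark scheme is meant to guarantee.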

(2) Is Panoptes application a real-time video sensor application?

The answer is NO.  Before clients can request video events via the User Interface of the Panoptes application, the system must first have finished capturing those events; they happened in the past.  Therefore the application cannot be used in real time.  Buck told us a very funny story about one of the authors, who ran the Panoptes sensor to capture what happened in the lab at night.  This story shows that the system has to record events before they can be queried.

(3) Can some Panoptes software be integrated in hardware or built in some other networking server?

Either way is achievable.  In the paper, the Panoptes sensor takes the first approach, i.e. it integrates all of the software into the platform and the application.  The Panoptes sensor also runs its own operating system, so once its hardware is integrated it is a self-contained system that can manage itself and serve clients over the Internet.


Presentation Slides

Here is the link to the PowerPoint file: < Panoptes.ppt >.