Hi and Welcome
This is the page for my research. What follows is a dated, chronological log of the activities completed.
Benchmarks conducted for the MMCN paper (submitted on 1st August 2006; acceptance notification due 1st September 2006)
Most of the work reported here was completed before 1st August 2006.
- We instrumented both mplayer and VLC to measure jitter, FPS and CPU-load parameters. In particular, we modified the video display modules by placing printf() calls at appropriate places to print the current time and the frame jitter at that time. From this basic information, we later calculate the FPS by running a separate script on the dump files. In all experiments, we allow mplayer and VLC to drop frames when needed.
- If X instances of mplayer and Y instances of VLC together just saturate the CPU while playing a specific XviD (MPEG-4) video (all of them play the same video at the same position), we measure their individual FPS and frame jitter. At the same time, we measure the overall CPU load on the system for the duration of playback. We run these experiments on a Dell Inspiron 1300 laptop.
- Next, we perform the same experiment with 2X mplayer instances and 2Y VLC instances and measure the same parameters, again on the Dell Inspiron 1300 laptop.
- From the above two experiments, we show that VLC performs the worse of the two under heavy CPU load.
- Next, we patch the instrumented mplayer to play SPEG video. We perform the same experiment as above and note the new values for X and 2X. Let the new value of X be X'; we note that X' < X. We run these experiments on a P-IV 3 GHz desktop with 1 GB RAM.
- We perform the same experiments using QStream, this time with a single QStream process playing multiple videos.
- We used Qmon to dump the performance data to text files and plot them. We note the threshold number of videos that just saturates the CPU, say Z. We run an increasing number of videos up to Z to check how QStream adapts to increasing load, and then continue up to 2Z. We run the QStream client on a P-IV 3 GHz desktop with 1 GB RAM and 100 Mbps Ethernet. We run the server and the monitor on a different machine on the same subnet. Later, we plot the FPS, CPU load and jitter values for each of the videos.
- We run another set of similar experiments, this time boosting the temporal quality of one video among the rest (meaning it drops fewer frames). We show that when all videos have equal priority, all of them degrade gracefully; in the boosted case, one of them definitely performs better than the rest.
- Lastly, we run a single video with QStream and plot the video bitrate against CPU load. We show that the bitrate of an MPEG-4 video varies greatly over time and that the CPU load varies accordingly, indicating a strong correlation between the two. This clearly shows that the entropy of a video varies greatly over time, and so does the number of bits used to encode a single frame. Thus, when we play multiple videos on a single machine, the CPU requirement is the combination of the requirements of all the videos taken together, which is very difficult to predict, much less provision for beforehand. This supports our belief that graceful quality adaptation across multiple videos is a difficult problem, exacerbated by the fact that hardware capability varies greatly from device to device, from handhelds with low processing power (to save battery life) to high-end dual-core desktop machines.
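The FPS calculation from the instrumented players' dump files can be sketched as follows. This is a hypothetical illustration, not the actual script we used: it assumes each printf() line in the dump carries two numbers, a display timestamp in seconds and the frame jitter in milliseconds.

```python
# Hypothetical sketch: recover average FPS and mean jitter from an
# instrumented player's dump. Assumes (not the real format) that each
# data line holds "<timestamp-seconds> <jitter-ms>".

def parse_dump(lines):
    """Return (timestamps, jitters) parsed from dump lines."""
    times, jitters = [], []
    for line in lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip non-data output mixed into the dump
        times.append(float(parts[0]))
        jitters.append(float(parts[1]))
    return times, jitters

def fps_and_jitter(times, jitters):
    """Average FPS = frames displayed / elapsed time; mean jitter in ms."""
    if len(times) < 2:
        return 0.0, 0.0
    elapsed = times[-1] - times[0]
    fps = (len(times) - 1) / elapsed
    mean_jitter = sum(jitters) / len(jitters)
    return fps, mean_jitter

# Four frames over 120 ms at ~25 fps (made-up sample data).
dump = ["0.000 1.2", "0.040 0.8", "0.081 2.1", "0.120 0.9"]
fps, jit = fps_and_jitter(*parse_dump(dump))
print(round(fps, 1), round(jit, 2))  # prints: 25.0 1.25
```

A windowed variant of the same calculation would also expose FPS variation over time rather than a single average.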
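The bitrate/CPU-load relationship in the last experiment can be quantified with a Pearson correlation between the two time series. The sketch below uses made-up sample values, not our measured data:

```python
# Illustrative only: Pearson correlation between two equally spaced
# time series, e.g. per-second video bitrate and per-second CPU load.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up samples: bitrate in kbit/s and CPU load in percent.
bitrate = [800, 1200, 1500, 900, 1100, 1600]
cpuload = [22, 35, 46, 25, 31, 49]
print(round(pearson(bitrate, cpuload), 2))  # a value near 1.0
```

A coefficient close to 1 is what the plot suggests visually: CPU demand tracks the instantaneous bitrate of the video.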
Experiments performed as a continuation of the ones in the MMCN paper but not reported in the submitted version (they will perhaps be reported in the camera-ready copy once the paper is accepted).
The work reported here was completed before Aug 15th 2006
- QStream tests as before, but this time we run N instances of QStream, each playing one video.
- We do the same set of tests as before, but now each QStream process plays one video, and we also skip each video ahead by a different amount to create a heterogeneous environment in which each video has different CPU requirements. We disable ALSA sound output because we find that contention among the QStream processes for the single sound device causes major stalls in the videos. We compare the results with those of the previous benchmarks and find that:
- Boosting has no effect. This is understandable because the videos do not communicate their deadlines to one another, and the scheduler knows nothing about the boosted status of any one video.
- Unfairness among the videos is more pronounced, since the scheduler knows nothing about the deadlines of the players.
- We intend to perform the same set of experiments with mplayer, with sound disabled, for a fairer comparison in the final version of the paper.
Along the way, we made some minor bug fixes to QStream and modified the decoder for more fine-grained frame-level CPU adaptation (frame dropping, etc.).
Readings completed (as of 23/08/2006)
- Major portions of Buck's thesis; the chapters on performance analysis and multicast were skipped.
- Read the rejected NOSSDAV paper and the soft timers paper.
- Read the QStream qsf source code and the shared-memory IPC code.
- Currently in the process of coding the cooperative scheduler.
Future Directions
The EDF scheduler
- We propose to design a cooperative EDF scheduler that yields the CPU based not only on its own deadline but also on the deadlines of the other EDF processes that might be running. Details of the tests to be performed are given in Buck's email.
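The core yielding rule can be sketched as follows. This is an illustrative toy model, not the actual QStream design (whose details are in Buck's email): each process publishes its next deadline to a shared table, and at every scheduling point it yields the CPU if any other process has an earlier deadline.

```python
# Toy model of cooperative EDF yielding. Names and structure are
# hypothetical; a real implementation would share the deadline table
# between processes (e.g. via shared memory) rather than in one object.

class CooperativeEDF:
    def __init__(self):
        self.deadlines = {}  # pid -> next deadline (seconds)

    def publish(self, pid, deadline):
        """A process announces (or updates) its next deadline."""
        self.deadlines[pid] = deadline

    def should_yield(self, pid):
        """True if some other published process has an earlier deadline."""
        mine = self.deadlines[pid]
        return any(d < mine for p, d in self.deadlines.items() if p != pid)

    def next_to_run(self):
        """Earliest-deadline-first pick among the published processes."""
        return min(self.deadlines, key=self.deadlines.get)

sched = CooperativeEDF()
sched.publish("video-a", 0.040)  # next frame due in 40 ms
sched.publish("video-b", 0.025)  # next frame due in 25 ms
print(sched.should_yield("video-a"))  # True: video-b's deadline is earlier
print(sched.next_to_run())            # video-b
```

The point of the cooperative variant is exactly what the earlier benchmarks motivated: because the kernel scheduler knows nothing about player deadlines, the players themselves must exchange deadlines and yield accordingly.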