Dynamic Behavior of Slowly-Responsive Congestion Control Algorithms
Deepak Bansal, Hari Balakrishnan, Sally Floyd and Scott Shenker
Presented By: Poovappa Subbaiah (Dinu)
Summary:
Applications such as best-effort, unicast streaming video and audio require steady transmission rates for optimal performance with the available bandwidth. Based on the TCP-compatible paradigm, which requires that any mechanism use the same bandwidth as TCP in the presence of a constant loss rate, several slowly responsive congestion control algorithms have been proposed. By responding less drastically than AIMD, these algorithms attempt to provide a smooth data rate curve. While the proposed algorithms are TCP-compatible in a static setting, this paper attempts to determine their behavior in dynamic situations.
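For reference, TCP compatibility is usually checked against the simple steady-state TCP throughput model, T ≈ 1.22 · MSS / (RTT · √p). A minimal sketch of this calculation (the constant and the example numbers are illustrative, not taken from the paper):

```python
from math import sqrt

def tcp_friendly_rate(mss_bytes, rtt_s, loss_rate):
    """Steady-state TCP throughput model (bytes/sec):
    T ~ 1.22 * MSS / (RTT * sqrt(p)).

    A TCP-compatible flow should use no more bandwidth than this
    under the same steady loss rate p.
    """
    return 1.22 * mss_bytes / (rtt_s * sqrt(loss_rate))

# Example: 1460-byte segments, 100 ms RTT, 1% steady loss
rate = tcp_friendly_rate(1460, 0.1, 0.01)
print(f"{rate:.0f} bytes/sec")  # roughly 178 KB/s under this model
```

Note the 1/√p dependence: quadrupling the loss rate halves the compatible sending rate, which is why behavior under changing loss rates, rather than a fixed p, is the interesting case.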
Several TCP-compatible algorithms, such as Binomial, TEAR, TFRC and AIMD (b < 0.5), are considered for testing. The dynamic test scenarios reveal some interesting behavior of these algorithms. A plot of stabilization time as a function of the algorithm parameter shows that TFRC (self-clocked) and SQRT respond extremely well in comparison to TCP. A test for long-term fairness reveals that TFRC and SQRT do not compete unfairly with TCP in the presence of varying bandwidth. It is further determined that the slowly responsive mechanisms lose some throughput in comparison to TCP in the same environments. A test for smoothness of data rates in dynamic conditions shows that TFRC achieves the required property. Based on all these factors, the paper concludes that most of the TCP-compatible algorithms studied are safe for deployment. An important contribution of this paper is to show that incorporating a self-clocking mechanism based on packet conservation prevents persistent loss rates in the network even with the more slowly responsive algorithms.
Group Discussions:
The analysis of the paper resulted in several interesting questions and discussions.
- One question discussed was whether the tests conducted were too dynamic. Since optical cable is used for data transmission, data rate fluctuations on wired lines are fairly smooth and packet drops are minimal. Would this kind of extreme dynamic testing ignore the real Internet and fail to identify the correct behavior of the slowly responsive mechanisms? Contrary to this was the argument that the tests conducted were probably not dynamic enough. In support, it was suggested that the single bottleneck used in the paper's tests did not emulate the real environment.
- Would it be possible to emulate slow start with TFRC? While the response to congestion needs to be slow, we also need a faster method to utilize newly available bandwidth. This would prevent the slowly responsive mechanisms from losing bandwidth to TCP. This question is being discussed within the networking community, and in fact, when the data rate reaches a certain low point, slow start is used to push up TFRC data rates. However, it is surprising that there is no mention of it in the paper.
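As a toy illustration of the idea discussed above (a hypothetical sketch, not the actual TFRC mechanism), a slow-start-like probe doubles the sending rate each RTT until it would overshoot the bottleneck, then hands control back to the equation-based rate:

```python
def slow_start_probe(link_capacity, start_rate=1.0):
    """Hypothetical slow-start-like startup: double the rate each RTT
    until the next doubling would exceed the bottleneck capacity
    (standing in for the first loss event), then stop probing.

    Returns the rate reached and the number of RTTs it took.
    """
    rate, rtts = start_rate, 0
    while rate * 2 <= link_capacity:
        rate *= 2
        rtts += 1
    return rate, rtts

print(slow_start_probe(100.0))  # reaches 64.0 in 6 RTTs
```

The point of the discussion is the contrast: a multiplicative probe reaches the link capacity in logarithmically many RTTs, whereas a slowly responsive increase rule would take far longer to claim newly available bandwidth.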
- Instead of suggesting a new slowly responsive algorithm, wouldn't it be better to slow down the TCP congestion control algorithm itself? This would minimize concerns about deployability. It is possible to slow down TCP's congestion control, and in fact this is also discussed in the paper: AIMD with a decrease parameter b less than 0.5 is slowly responsive. The paper also discusses the Binomial algorithms, and several tests are conducted on the SQRT algorithm, which is a nonlinear generalization of AIMD.
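The binomial family generalizes AIMD's linear increase and multiplicative decrease with window-dependent exponents k and l. A minimal sketch of the update rules (the alpha and beta values here are illustrative defaults):

```python
def binomial_update(w, k, l, alpha=1.0, beta=0.5, loss=False):
    """One binomial congestion-control step (Bansal & Balakrishnan).

    Increase (per loss-free RTT): w += alpha / w**k
    Decrease (on loss):           w -= beta * w**l
    k = 0, l = 1 recovers AIMD; k = l = 0.5 is SQRT.
    """
    if loss:
        return max(1.0, w - beta * w**l)
    return w + alpha / w**k

# At the same window, SQRT backs off far less drastically than AIMD:
w = 64.0
print(binomial_update(w, k=0, l=1, loss=True))      # AIMD: 64 - 0.5*64 = 32.0
print(binomial_update(w, k=0.5, l=0.5, loss=True))  # SQRT: 64 - 0.5*8  = 60.0
```

The gentler decrease is exactly what makes SQRT "slowly responsive", and also why its dynamic behavior under sudden congestion needed the testing the paper performs.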
- Do queuing algorithms like RED help in reducing packet losses for the slowly responsive algorithms? RED tries to avoid congestion by controlling the average queue size at the gateway or router. The first problem when the network gets congested is that the router's queue overflows and begins to drop packets. This situation can be prevented if the sender reduces its sending rate before congestion happens, and the slowly responsive mechanisms could benefit from this. Unfortunately, the performance of RED itself is in question; by and large, the networking community considers RED to have been unsuccessful in practice.
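RED's congestion signal is an exponentially weighted moving average of the instantaneous queue length, so short bursts barely move it. A minimal sketch of just the averaging step (the weight wq follows a commonly cited default; the drop-probability ramp between the min and max thresholds is omitted):

```python
def red_avg_queue(samples, wq=0.002):
    """RED's EWMA of the queue length, sampled at each packet arrival:
    avg <- (1 - wq) * avg + wq * q.

    Marking/dropping probability then ramps up as avg moves between
    min_th and max_th (not modeled here).
    """
    avg = 0.0
    for q in samples:
        avg = (1 - wq) * avg + wq * q
    return avg

# A 10-packet burst into a full queue of 100 barely moves the average,
# so RED reacts to persistent congestion, not transient spikes:
print(red_avg_queue([100] * 10))  # ~1.98
```

This early, probabilistic signal is precisely what could give a slowly responsive sender time to back off before the queue actually overflows.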
- What about the sensitivity of smoothness? In other words, how smooth must the data rate curve be to be reasonably good? The paper "Equation-Based Congestion Control for Unicast Applications" by Sally Floyd, Mark Handley, Jitendra Padhye and Jörg Widmer discusses this aspect briefly. The authors suggest that a 0.15-second interval seems to be a plausible candidate for the minimum interval over which bandwidth variations would begin to be noticeable to multimedia users. However, this is very relative and depends on different aspects, including the content of the stream itself.
- A couple of observations made in the discussions deserve mention. The basic assumption that multimedia streams need a smooth data transfer rate could be questionable, given that many variable bit-rate encoding schemes have been proposed and found to perform better than constant-rate encoding. Also, the smoothness of the curve is a property of the time interval over which the rates are averaged: a highly fluctuating curve with a high frequency of oscillations can yield a fairly smooth curve for a certain measurement interval. This is an interesting point and possibly needs further study.
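The last observation, that smoothness is a property of the averaging interval, can be illustrated with a toy rate series:

```python
def avg_over(rates, interval):
    """Average an instantaneous rate series over non-overlapping
    windows of the given length (partial trailing window dropped)."""
    return [sum(rates[i:i + interval]) / interval
            for i in range(0, len(rates) - interval + 1, interval)]

# A rate oscillating rapidly between 2 and 10 looks bursty sample by
# sample, but perfectly smooth once averaged over each full period:
rates = [2, 10] * 8            # high-frequency oscillation
print(avg_over(rates, 1))      # bursty: [2.0, 10.0, 2.0, ...]
print(avg_over(rates, 2))      # smooth: [6.0, 6.0, ...]
```

Any claim that an algorithm is "smooth" is therefore only meaningful relative to a stated measurement interval, such as the 0.15-second candidate mentioned above.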
Additional References:
- The following paper discusses the Binomial congestion control algorithms in greater detail. As seen in the tests for TCP compatibility in a dynamic environment, SQRT performs extremely well.
Bansal, D., and Balakrishnan, H. Binomial Congestion Control Algorithms. In Proceedings of the Conference on Computer Communications (IEEE Infocom) (Anchorage, AK, April 2001), pp. 631-640.