This seems to suggest that error-limited performance is very roughly
 bandwidth = (sqrt(mss) * C) / (rtt * sqrt(ber))
where the units are:
:mss = maximum segment size, in bits
:C = dimensionless constant, approximately 0.9 (see the paper for more details)
:rtt = round-trip time, in seconds
:ber = bit error rate, per bit
Note that this is a small-ber approximation, which assumes that loss is dominated by full-length packets. Needless to say, the bandwidth does not go to infinity as the ber goes to zero: packet drops will occur when the bandwidth tries to exceed the physical link bandwidth. This model appears to fit reality pretty well, according to the paper.
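As a quick sanity check of the formula above, here is a small Python sketch; the function name and the example numbers are illustrative choices of mine, not taken from the paper:

```python
from math import sqrt

def error_limited_bandwidth(mss_bits, rtt_s, ber, C=0.9):
    """Rough error-limited TCP bandwidth in bits/s:
    bandwidth = (sqrt(mss) * C) / (rtt * sqrt(ber))."""
    return (sqrt(mss_bits) * C) / (rtt_s * sqrt(ber))

# 1500-byte segments, 100 ms round trip, one bit error per 10^7 bits:
bw = error_limited_bandwidth(mss_bits=1500 * 8, rtt_s=0.100, ber=1e-7)
print(round(bw / 1e6, 1), "Mbit/s")
```

With these (assumed) numbers the model predicts on the order of 3 Mbit/s; improving the bit error rate by a factor of 100 would raise that by a factor of 10, since the bandwidth scales as 1/sqrt(ber).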
An interesting fact is that the poor design of the error-correction protocol stack of the Internet forces a requirement for bit error rates of about 1x10^-11. This is often achieved by tunneling the Internet protocols through a more reliable protocol such as ATM (asynchronous transfer mode).
Each packet carries a checksum: in IP, TCP, and UDP this is the ones'-complement sum of the packet's contents taken as 16-bit words, rather than a simple sum of its 8-bit bytes.
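For what it's worth, that checksum (specified in RFC 1071) fits in a few lines of Python; the function name here is mine:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: ones'-complement sum of the data
    taken as 16-bit big-endian words, with carries folded back in."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF
```

A handy property for checking it: if you append the computed checksum to the data as a 16-bit word, the checksum of the whole thing comes out to zero, which is how receivers verify packets.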
In the Internet, ICMP "pings" are sent by routers every 30 seconds or so; when a ping fails, the router updates its routing table.
-- The Anome
Thanks for the corrections. I think I need to qualify the remarks about the packet error rate. If an error rate above 1x10^-11 is coupled with a delay of three hundred milliseconds or more (i.e. a satellite link), I've heard reports that a storm of rebroadcast packets can occur, paralyzing the failing link until routers begin to avoid the congestion. A number of experimental and optionally-deployed protocols use more selective packet retransmission to avoid this problem, which is a known defect in TCP caused by its windowed packet-retransmission policy. Ray Van De Walker
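The difference between windowed and selective retransmission can be illustrated with a toy count (the function and numbers below are illustrative, not any real TCP implementation): under a go-back-N style window policy a single loss triggers retransmission of the whole remainder of the window, while selective retransmission resends only the missing segment.

```python
def segments_resent(window: int, lost_offset: int, selective: bool) -> int:
    """Toy count of segments re-sent after one loss at position
    lost_offset (0-based) within a window of `window` segments.
    Selective retransmission resends just the lost segment; a
    go-back-N style policy resends it and everything after it."""
    return 1 if selective else window - lost_offset

# With a 32-segment window and a loss near the front of the window:
print(segments_resent(32, 2, selective=False))  # 30 segments re-sent
print(segments_resent(32, 2, selective=True))   # 1 segment re-sent
```

On a long-delay link, those multiplied retransmissions arrive well before the sender learns they were unnecessary, which is what feeds the storm described above.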
See http://www.psc.edu/networking/tcp_friendly.html#performance and specifically the paper http://citeseer.nj.nec.com/mathis97macroscopic.html for more detail.
-- The Anome