Network protocol design principles


This entry describes the design principles applied when specifying network protocols. The entry needs rework and was moved here from Systems engineering.

Usually, protocols are layered. For example, one layer might describe how to encode text (with ASCII, say), while another describes how to send messages (with the internet's Simple Mail Transfer Protocol, for example). Another may correct errors (with the internet's Transmission Control Protocol), another handles addressing (say with IP, the Internet Protocol), another handles error detection on a link (with the internet's Point-to-Point Protocol), and another handles the physical form of the bits (with a V.42 modem, for example).

Layering allows the parts of a protocol to be designed and tested without a combinatorial explosion of cases, keeping each design relatively simple. Layering also permits familiar protocols to be adapted to unusual circumstances. For example, the mail protocol above can be adapted to send messages to aircraft: just replace the V.42 modem protocol with the INMARS LAPD data protocol used by the international marine radio satellites.
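
As an illustration, here is a minimal sketch of layering as nested encapsulation, written in Python. Each layer wraps the data from the layer above with its own header and strips it again on receipt; the layer names and header format are toy placeholders, not the real SMTP, TCP, IP or PPP encodings.

 # A minimal sketch of protocol layering: each layer wraps the data from the
 # layer above with its own header and unwraps it again on the way back up.
 # The headers are toy placeholders, not real SMTP/TCP/IP/PPP formats.
 class Layer:
     def __init__(self, name, lower=None):
         self.name = name
         self.lower = lower              # the next layer down the stack, if any
 
     def send(self, payload: bytes) -> bytes:
         framed = self.name.encode() + b"|" + payload   # add this layer's header
         return self.lower.send(framed) if self.lower else framed
 
     def receive(self, framed: bytes) -> bytes:
         if self.lower:
             framed = self.lower.receive(framed)
         header, _, payload = framed.partition(b"|")    # strip this layer's header
         assert header == self.name.encode(), "layer mismatch"
         return payload
 
 # Build a toy stack: application text over transport, network and link layers.
 link      = Layer("PPP")
 network   = Layer("IP", lower=link)
 transport = Layer("TCP", lower=network)
 app       = Layer("SMTP", lower=transport)
 
 wire = app.send(b"Hello, world")    # headers are added top-down
 print(wire)                         # b'PPP|IP|TCP|SMTP|Hello, world'
 print(app.receive(wire))            # b'Hello, world'

Adapting the stack to a different medium then amounts to replacing only the bottom layer object, which is the point of the aircraft example above.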

It is a truism that communication media are always faulty. The conventional measure of quality is the bit error rate: the number of failed bits per bit transmitted. This has the wonderful feature of being a dimensionless figure of merit that can be compared across communication media of any speed or type.

Conventionally, error rates of 1×10^-4 bits per bit or worse are considered faulty (they interfere with telephone conversations), while rates of 1×10^-5 bits per bit or worse call for non-urgent maintenance (the errors can be heard).
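
As a worked example of the arithmetic (the bit counts below are invented), the bit error rate is simply failed bits divided by bits transmitted, which is why it can be compared across links of any speed:

 # Bit error rate (BER) as a dimensionless figure of merit: failed bits per
 # bit sent. The counts below are invented purely to show the arithmetic.
 bits_sent   = 1_000_000_000     # e.g. one second of a 1 Gbit/s link
 bits_failed = 250               # hypothetical count of corrupted bits
 
 ber = bits_failed / bits_sent
 print(f"BER = {ber:.1e}")       # BER = 2.5e-07
 
 # Compare against the conventional thresholds mentioned above.
 if ber >= 1e-4:
     print("faulty: interferes with telephone conversations")
 elif ber >= 1e-5:
     print("audible: should prompt maintenance")
 else:
     print("acceptable")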

Communication systems correct errors by selectively resending bad parts of a message. For example, in TCP (the internet's Transmission Control Protocol), messages are divided into packets, each of which has a checksum. When a checksum is bad, the packet is discarded. When a packet is lost, the receiver acknowledges all of the packets up to, but not including, the failed packet. Eventually, the sender sees that too much time has elapsed without an acknowledgement, so it resends all of the packets that have not been acknowledged. At the same time, the sender backs off its sending rate, in case the packet loss was caused by saturation of the path between sender and receiver. (Note: this is an oversimplification; see TCP for more detail.)
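
The following toy simulation in Python sketches that behaviour under heavy simplification: packets carry a checksum, the receiver acknowledges cumulatively, and the sender resends everything beyond the last acknowledgement and halves its window when no progress is made. It is not the real TCP state machine; see TCP for that.

 import random
 import zlib
 
 # Toy sketch of TCP-style recovery: checksummed packets, cumulative ACKs,
 # retransmission when no progress is made, and backing off the send rate.
 # This is a deliberate oversimplification, not the real TCP state machine.
 LOSS_RATE = 0.2                     # probability a packet is corrupted in transit
 random.seed(1)
 
 def make_packet(seq, data):
     return {"seq": seq, "data": data, "checksum": zlib.crc32(data)}
 
 def transmit(packet):
     """Deliver a packet, sometimes corrupting its data in transit."""
     if random.random() < LOSS_RATE:
         packet = dict(packet, data=b"garbled" + packet["data"])
     return packet
 
 def receive(packets, expected_seq):
     """Accept in-order packets with good checksums; return a cumulative ACK."""
     received = []
     for p in packets:
         if p["seq"] != expected_seq or zlib.crc32(p["data"]) != p["checksum"]:
             break                   # discard the bad packet and everything after it
         received.append(p["data"])
         expected_seq += 1
     return expected_seq, received   # ACK = next sequence number expected
 
 def send(message_parts):
     next_to_ack = 0                 # first unacknowledged sequence number
     window = 4                      # packets sent per round
     delivered = []
     while next_to_ack < len(message_parts):
         last = min(next_to_ack + window, len(message_parts))
         burst = [make_packet(seq, message_parts[seq])
                  for seq in range(next_to_ack, last)]
         ack, data = receive([transmit(p) for p in burst], next_to_ack)
         delivered.extend(data)
         if ack == next_to_ack:      # no progress: treat it as a timeout
             window = max(1, window // 2)   # back off the sending rate
         next_to_ack = ack           # unacknowledged packets are resent next round
     return b"".join(delivered)
 
 parts = [f"part{i} ".encode() for i in range(10)]
 print(send(parts))                  # prints the reassembled message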

In general, the performance of TCP is severely degraded in conditions of high packet loss (more than 0.1%), due to the need to resend packets repeatedly. For this reason, TCP/IP connections are typically run either over highly reliable fibre networks or over a lower-level protocol with added error-detection and correction features (such as modem links with ARQ). These connections typically have uncorrected bit error rates of 1×10^-9 to 1×10^-12, ensuring high TCP/IP performance.
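
A rough calculation shows why these raw error rates matter so much to TCP. Assuming independent bit errors and a 1500-byte packet, any packet containing even one bad bit fails its checksum and must be resent:

 # Rough estimate of packet loss from the raw bit error rate, assuming
 # independent bit errors and a 1500-byte (12,000-bit) packet. A packet
 # with even one bad bit fails its checksum and must be resent.
 PACKET_BITS = 1500 * 8
 
 for ber in (1e-4, 1e-5, 1e-9, 1e-12):
     p_loss = 1 - (1 - ber) ** PACKET_BITS
     print(f"BER {ber:.0e}: about {p_loss:.2%} of packets need resending")
 
 # BER 1e-04: about 69.88% of packets need resending
 # BER 1e-05: about 11.31% of packets need resending
 # BER 1e-09: about 0.00% of packets need resending
 # BER 1e-12: about 0.00% of packets need resending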

Another form of network failure is topological failure, in which a communications link is cut. Most modern communication protocols periodically send messages to test a link. On T1 phone lines, for example, a framing bit is sent with every 193-bit frame (one framing bit per 24 eight-bit channel samples). When sync is lost, fail-safe mechanisms reroute the signals around the failing equipment.

In packet-switched networks, the equivalent functions are performed using router update messages to detect loss of connectivity.
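
As a sketch of the same idea in software (a hypothetical keepalive scheme, not any particular routing protocol), a link is declared down after several consecutive test messages go unanswered:

 import time
 
 # Sketch of link-failure detection by periodic test messages: declare the
 # link down after several consecutive probes go unanswered. The probe
 # function is a stand-in for a real hello/keepalive exchange.
 PROBE_INTERVAL = 0.1            # seconds between test messages (short for the demo)
 MAX_MISSED = 3                  # consecutive misses before declaring failure
 
 def monitor_link(send_probe, on_failure):
     missed = 0
     while missed < MAX_MISSED:
         if send_probe():        # True means the far end answered
             missed = 0
         else:
             missed += 1
         time.sleep(PROBE_INTERVAL)
     on_failure()                # e.g. reroute traffic around the dead link
 
 # Demo: a fake link that stops answering after two probes.
 answers = iter([True, True])
 monitor_link(lambda: next(answers, False),
              lambda: print("link declared down; rerouting"))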
