Network Protocols

Most modern computers are interconnected with other computers in one way or another, whether by a dialup connection or over a local area network (LAN). For interconnections that cover distances greater than a few meters, serial connections are economical. In serial communications, the information bits are sent one at a time over a single communications channel. This stands in contrast to parallel communications, where information is sent one byte (or word) at a time over eight or more communications channels between the machines.

As in all communications, the problem normally focuses on establishing ways for the receiver (meaning the destination computer) to interpret and decode the transmitted information correctly. Communications protocols are designed to facilitate this in serial communications; they are especially important in this case because the receiver must be able to process correctly each bit that it receives.

Protocol Functions

For serial communications to take place correctly, several functions have to be possible. First, the receiving and sending computers must be able to coordinate their actions, to enable flow control, error control, addressing, and connection management. Flow control manages the rate of information flow between machines (note that this may be different from the data rate of the network connecting the machines); error control enables transmission errors to be corrected; addressing allows information to be routed to the correct destination; and connection management is the set of functions associated with setting up and maintaining connections (where needed).
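
These functions are easiest to see in a concrete, if simplified, form. The sketch below (in Python; the lossy channel and the stop-and-wait rule are illustrative inventions, not part of any specific protocol discussed here) shows the simplest kind of error control: send one frame, wait for an acknowledgment, and retransmit if none arrives. Because only one frame is outstanding at a time, the same rule also acts as a crude form of flow control.

    import random

    random.seed(7)  # fixed seed so the demo is reproducible

    def lossy_channel(frame, loss_rate=0.3):
        # Deliver a frame, or lose it with probability loss_rate.
        return frame if random.random() > loss_rate else None

    def send_stop_and_wait(frames, max_tries=20):
        delivered = []
        for seq, payload in enumerate(frames):
            for _ in range(max_tries):
                arrived = lossy_channel((seq, payload))  # send the data frame
                if arrived is None:
                    continue                   # lost: timeout fires, retransmit
                if lossy_channel(seq) == seq:  # wait for the acknowledgment
                    delivered.append(arrived)
                    break                      # acknowledged: next frame
            else:
                raise RuntimeError("link appears to be down")
        return delivered

    # Both frames arrive, after any needed retransmissions:
    print(send_stop_and_wait([b"first frame", b"second frame"]))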

Second, the receiver must be able to determine when a message (or data packet) begins and ends and distinguish control information from the information that the user is transmitting.

Protocol Mechanisms

These functions are implemented through communications protocols. A protocol is a strict set of rules that both the sending and the receiving computers follow when communicating. These rules include the format of information to be sent, as well as rules defining how a machine (sender or receiver) is to behave when an event occurs. An event can be externally created (such as the occurrence of an error) or internally generated (such as a connection request). These behaviors are written into the communications software that runs in both the sender and receiver.

Generally, protocols break the information that the user wishes to send into packets . These packets normally consist of a header, the user information (message), and often a trailer. The trailer is most commonly a checksum generated by a cyclic redundancy check (CRC) coder and is used for error detection. The header carries information fields that the sender and receiver use to communicate with each other so that they can implement the necessary functions as defined by the protocol. Figure 1 is a graphical illustration of the sequence of bits that would be transmitted in a hypothetical packet.
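
Such a packet can be mimicked in a few lines of code. In this sketch (Python; the particular header fields chosen, source address, destination address, and length, are an assumption for illustration), the trailer is a CRC-32 checksum that lets the receiver detect corrupted packets:

    import struct
    import zlib

    def make_packet(src, dst, message):
        # Header: one-byte source and destination addresses plus a two-byte
        # length field, packed in network (big-endian) byte order.
        header = struct.pack("!BBH", src, dst, len(message))
        # Trailer: a CRC-32 checksum over everything that precedes it.
        trailer = struct.pack("!I", zlib.crc32(header + message))
        return header + message + trailer

    def packet_ok(packet):
        # The receiver recomputes the CRC; a mismatch signals an error.
        body, (received_crc,) = packet[:-4], struct.unpack("!I", packet[-4:])
        return zlib.crc32(body) == received_crc

    pkt = make_packet(src=1, dst=2, message=b"user data")
    print(packet_ok(pkt))                    # True
    corrupted = pkt[:5] + b"\x00" + pkt[6:]  # flip one payload byte
    print(packet_ok(corrupted))              # False: the error is detected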

Network Architectures

Although the general functions of protocols are as stated earlier, it turns out to be convenient and efficient to optimize specific protocols for specific classes of functions, and then to use multiple protocols to get the overall job done rather than designing a “one size fits all” system. To aid in this task, protocol developers created an approach to classify and organize the different functions that have to be performed in data communications. The most common approach to this is called the Open Systems Interconnection Reference Model (OSI Reference Model), which was developed and standardized by the International Organization for Standardization (ISO). The OSI Reference Model is a seven-layer model, with each layer representing a particular set of functions, and for which specific protocols have been developed.

Here are the layers and their functions as seen in Figure 2 (a short code sketch of the layering follows the list):

  • The physical layer is concerned with moving bits, and includes physical and electrical connections.
  • The link layer is concerned with reliable bit transfer, as well as local (as opposed to global) addressing. This involves synchronization, error control, and some flow control.
  • The network layer is concerned with routing packets through network elements (called nodes) interconnected by reliable links. This layer deals with addresses that are global in scope. Examples of global addresses are a telephone number or a postal address, whereas a local address might be an office telephone extension or mailbox.
  • The transport layer ensures end-to-end reliability, connection control, and flow control. Its task is to make a network meet the special requirements of end user machines. End-to-end means that network elements are not involved in implementing transport layer functionality.
  • The session layer is concerned with the establishment and maintenance of connections for the communicating end nodes.
  • The presentation layer is concerned with ensuring that information can be transmitted between different types of computer systems (for example, Apple Macintosh computers and Windows PCs).
  • Finally, the application layer is concerned with providing the functions needed by networked applications. A networked application may be a mail program (such as Eudora or Microsoft Outlook), and the application layer function would be the mail transport protocols.
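
A key consequence of this layered design is encapsulation: on the way down the stack, each layer treats everything handed to it from above as opaque data and prepends its own header, and the receiving stack peels the headers off in reverse order. The toy sketch below (Python; the bracketed text headers are purely illustrative, real headers are binary fields like those in Figure 1) makes the nesting visible:

    LAYERS = ["application", "presentation", "session",
              "transport", "network", "link"]  # the physical layer moves the
                                               # resulting bits; no header

    def encapsulate(user_data):
        packet = user_data
        for layer in LAYERS:               # walk down the sending stack
            packet = f"[{layer}-hdr]".encode() + packet
        return packet                      # the link header ends up outermost

    def decapsulate(packet):
        for layer in reversed(LAYERS):     # walk up the receiving stack
            header = f"[{layer}-hdr]".encode()
            assert packet.startswith(header), "malformed packet"
            packet = packet[len(header):]
        return packet

    wire = encapsulate(b"hello")
    print(wire)               # b'[link-hdr][network-hdr]...[application-hdr]hello'
    print(decapsulate(wire))  # b'hello'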

Quite a number of protocols have been developed under the guidelines of the OSI reference model. Many of these have not met with commercial success; today they most often serve as the basis for newer protocols. Still, the OSI reference model serves as a convenient way to organize the necessary functionality of data communications systems.

TCP/IP Protocols

The effort to garner commercial support of OSI protocols has been undermined in large part by the protocols that are used in the Internet, which are called the Transmission Control Protocol (TCP) and the Internet Protocol (IP). These protocols were developed outside of the OSI effort, and were designed to provide minimal services. This design strategy moved much of the “intelligence” (i.e., the processing requirements) needed to implement networked applications to the user machines. TCP/IP was developed and debugged before similar OSI protocols got off the ground, so many users adopted them as an interim measure until the OSI protocols were available. They took hold, and users never saw the need to switch protocols, which is a costly procedure that often results in unreliable systems for a time.

Two principal protocols exist in this system. The Internet Protocol (IP) was designed to provide an adaptive (though sometimes unreliable) network that could interconnect independently owned and managed sub-networks. It is called unreliable because the network makes no assurance that packets are delivered to their destinations, although it makes a best effort to do so. Allowing independently owned and managed networks is also important because different organizations have different local requirements, yet they still might want to communicate with other users. IP corresponds fairly closely to the OSI network layer in functionality.

To provide some confidence to end users, the Transmission Control Protocol (TCP) was developed to work with IP. TCP provides end-to-end error control as well as flow control. Thus, if IP loses some packets along the way, TCP will recover them transparently to the user, so the network will seem reliable. TCP also performs some session management functions. Thus, it is a mix between OSI’s transport and session layers (though it does not contain all session layer functions).
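
The recovery idea can be sketched in miniature. In the toy below (Python; a drastic simplification of what TCP actually does), each segment carries a sequence number, so the receiver can put late arrivals back in order and identify the gaps that the sender must fill by retransmission before any data reaches the user:

    def reassemble(segments):
        # segments: (sequence_number, data) pairs in order of *arrival*.
        buffer = dict(segments)       # out-of-order arrivals wait here
        stream, next_seq = b"", 0
        while next_seq in buffer:     # hand over only contiguous data
            stream += buffer.pop(next_seq)
            next_seq += 1
        # Return the delivered bytes, the first missing sequence number,
        # and any later segments still held waiting for the gap to be filled.
        return stream, next_seq, sorted(buffer)

    # Segment 1 arrives late and is reordered; segment 3 was lost in transit.
    print(reassemble([(0, b"He"), (2, b"lo "), (1, b"l"), (4, b"world")]))
    # (b'Hello ', 3, [4]): segment 3 must be retransmitted before the
    # buffered segment 4 can be delivered to the user.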

By pushing the processing requirements for services out of the network, TCP/IP has set the stage for rapid innovation in applications and services. Adding a new service means developing the necessary application protocol, constructing a server, and advertising the service. What is significant is that no network changes are necessary for new services, which sustains this rapid rate of innovation. Thus, TCP/IP networks can carry electronic mail, images, World Wide Web traffic, even video images and telephone calls, all without changes to the underlying network systems (with the exception, of course, of the capacity increases needed to handle all of the packets that these services generate).
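
To see how little is involved, the sketch below (Python; the trivial "shout" protocol and the port number are invented for illustration) deploys a complete, if tiny, new service: a server whose application protocol is simply "reply with the uppercased request." It runs over TCP/IP on the local machine without any change to the network itself:

    import socket
    import threading
    import time

    PORT = 50007  # an arbitrary port chosen for this demo

    def new_service():
        # The entire "application protocol": uppercase the request.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("127.0.0.1", PORT))
            srv.listen()
            conn, _ = srv.accept()    # serve a single client, then exit
            with conn:
                conn.sendall(conn.recv(1024).upper())

    threading.Thread(target=new_service, daemon=True).start()
    time.sleep(0.2)                   # crude wait for the server to come up

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", PORT))
        cli.sendall(b"hello, new service")
        print(cli.recv(1024))         # b'HELLO, NEW SERVICE'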

Future Evolution of TCP/IP

Despite its success, TCP/IP faces some challenges. One challenge, ironically a product of that very success, is that the pool of available IP addresses is being exhausted. Another is that many of the new services are making demands on the network that would be best served if some additional capabilities existed; for example, telephone calls and interactive video would benefit from “real time” capabilities that minimize network delays.

To address these concerns, a new version of IP was developed, version 6 (called IPv6). When deployed, IPv6 will support a vastly larger address space (128-bit addresses, versus the 32-bit addresses of the current IPv4) and offer a broader range of services to network users, including real-time support and security services. Conversion to IPv6 will eventually require that all network elements and all end-user machines be converted to this new protocol. To ease this pain, a transition strategy exists that calls for parallel (but interconnected) networks to be operated for a number of years.
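
The difference in address space is easy to quantify with Python's standard ipaddress module (the two addresses below come from the reserved documentation ranges): IPv4 addresses are 32 bits long, while IPv6 addresses are 128 bits, a factor of 2 to the 96th power more addresses.

    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.1")     # an IPv4 documentation address
    v6 = ipaddress.ip_address("2001:db8::1")   # an IPv6 documentation address
    print(v4.max_prefixlen, v6.max_prefixlen)  # 32 128 (address sizes in bits)
    print(2 ** 32)    # about 4.3 billion possible IPv4 addresses
    print(2 ** 128)   # about 3.4 x 10**38 possible IPv6 addresses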

see also Bridging Devices; Internet; Network Design; Network Topologies; Telecommunications.

Martin B. Weiss
