Network Layer: Introduction and Service Models

4.1 Introduction and Network Service Models

We saw in  the previous chapter that the transport layer provides
communication service between two processes running on two different
hosts. In order to provide this service, the transport layer relies on
the services of the network layer, which provides a communication service
between hosts. In particular, the network layer moves transport-layer
segments from one host to another. At the sending host, the transport-layer
segment is passed to the network layer. The network layer then “somehow”
gets the segment to the destination host and passes the segment up the
protocol stack to the transport layer.  Exactly how the network layer
moves a segment from the transport layer of an origin host to the transport
layer of the destination host is the subject of this chapter. We will see
that, unlike the transport layer, the network layer requires the
coordination of each and every host and router in the network.
Because
of this, network layer protocols are among the most challenging (and therefore
interesting!) in the protocol stack.

Figure 4.1-1 shows a simple network with two hosts (H1 and H2) and four
routers (R1, R2, R3 and R4).  The role of the network layer in a sending
host is to begin the packet on its journey to the receiving host.
For example, if H1 is sending to H2, the network layer in host H1 transfers
these packets to its nearby router, R2.  At the receiving host (e.g.,
H2), the network layer receives the packet from its nearby router (in
this case, R3) and delivers the packet up to the transport layer at H2.
The primary role of the routers is to “switch” packets from input links
to output links.  Note that the routers in Figure 4.1-1 are shown
with a truncated protocol stack,  i.e., with no upper layers above
the network layer, since routers do not run transport and application layer
protocols  such as those we examined in Chapters 2 and 3.



Figure 4.1-1: The network layer

The role of the network layer is thus deceptively simple — to transport
packets from a sending host to a receiving host.  To do so, 
three important network layer functions can be identified:

  • Path Determination.  The network layer must determine the route
    or path taken by packets as they flow from a sender to a receiver. 
    The algorithms that calculate these paths are referred to as routing
    algorithms.  A routing algorithm would determine,
    for example, whether packets from H1 to H2 flow along the path R2-R1-R3
    or path R2-R4-R3 (or any other path between H1 and H2).   Much
    of this chapter will focus on routing algorithms.  In Section 4.2
    we will study the theory of routing algorithms, concentrating on the two
    most prevalent classes of routing algorithms: link state routing and distance
    vector routing.  We will see that the complexity of a routing algorithm
    grows considerably as the number of routers in the network increases.
    This motivates the use of hierarchical routing, a topic we cover in section
    4.3.  In Section 4.8 we cover multicast routing —  the routing
    algorithms, switching function, and call setup mechanisms that allow a
    packet that is sent just once by a sender to be delivered to multiple destinations.
  • Switching.  When a packet arrives at the input to a router,
    the router must move it to the appropriate output link.  For example,
    a packet arriving from host H1 to router R2 must be forwarded towards
    H2 either along the link from R2 to R1 or along the link from R2 to R4.
    In Section 4.6, we look inside a router and examine how a packet is actually
    switched (moved) from an input link to an output link.
  • Call Setup. Recall that in our study of TCP,  a three-way handshake
    was required before data actually flowed from sender to receiver. 
    This allowed the sender and receiver to set up the needed state information
    (e.g., sequence number and initial flow control window size).  In
    an analogous manner, some network layer architectures (e.g., ATM) require
    that the routers along the chosen path from source to destination
    handshake with each other in order to set up state before data actually
    begins to flow.  In the network layer, this process is referred to
    as call setup.  The network layer of the Internet architecture
    does not perform any such call setup.
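To make the path-determination function concrete, the sketch below (in Python, not from the text) runs a link-state shortest-path computation over the topology of Figure 4.1-1. The link costs are invented for illustration; a real routing protocol would learn them from the network.

```python
import heapq

def shortest_path(graph, source, dest):
    """Dijkstra's algorithm: return (cost, path) from source to dest."""
    pq = [(0, source, [source])]   # (cost so far, current node, path taken)
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dest:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(pq, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Topology of Figure 4.1-1, with hypothetical link costs.
graph = {
    "H1": {"R2": 1},
    "R2": {"H1": 1, "R1": 1, "R4": 1},
    "R1": {"R2": 1, "R3": 1},
    "R4": {"R2": 1, "R3": 2},
    "R3": {"R1": 1, "R4": 2, "H2": 1},
    "H2": {"R3": 1},
}

cost, path = shortest_path(graph, "H1", "H2")
# With these costs, the algorithm selects the path H1-R2-R1-R3-H2.
```

Changing the cost of the R4-R3 link to a value below that of the R2-R1-R3 route would shift the computed path through R4, exactly the kind of choice described in the path-determination bullet above.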

Before delving into the details of the theory and implementation of the
network layer, however,  let us first take the broader view and consider
what different types of service might be offered by the network
layer.

4.1.1 Network Service Model

When the transport layer at a sending host transmits a packet into the
network (i.e., passes it down to the network layer at the sending host),
can the transport layer count on the network layer to deliver the packet
to the destination? When multiple packets are sent, will they be delivered
to the transport layer in the receiving host in the order in which they
were sent?  Will the amount of  time between the sending of two
sequential packet transmissions be the same as the amount of time between
their  reception?  Will the network provide any feedback about
congestion in the network?  What is the abstract view (properties)
of the channel connecting the transport layer in the two hosts? The answers
to these questions and others are determined by the service model
provided by the network layer.  The network service model defines
the characteristics of  end-to-end transport of data between one “edge”
of the network and the other, i.e., between sending and receiving end systems.

Datagram or Virtual Circuit?

Perhaps the most important abstraction provided by the network layer to
the upper layers is whether or not the network layer uses virtual circuits
(VCs). You may recall from Chapter 1 that a virtual-circuit
packet network behaves much like a telephone network, which uses “real
circuits” as opposed to “virtual circuits”.  There are three identifiable
phases in a virtual circuit:

  • VC setup. During the setup phase, the sender contacts the network
    layer, specifies the receiver address, and waits for the network to set
    up the VC.  The network layer determines the path between sender and
    receiver, i.e., the series of links and switches through which all packets
    of the VC will travel. As discussed in Chapter 1, this typically involves
    updating tables in each of the packet switches in the path. During VC setup,
    the network layer may also reserve resources (e.g., bandwidth) along the
    path of the VC.
  • Data transfer.  Once the VC has been established, data can begin
    to flow along the VC.
  • Virtual circuit teardown.  This is initiated when the sender
    (or receiver)  informs the network layer of its desire to terminate
    the VC.  The network layer will then typically inform the end system
    on the other side of the network of the call termination, and update the
    tables in each of the packet switches on the  path to indicate that
    the VC no longer exists.
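The three phases above can be sketched as operations on the per-switch VC table that setup installs and teardown removes. This is a toy model in Python; the interface numbers, VC numbers, and method names are all invented for illustration.

```python
class PacketSwitch:
    """Toy model of a VC packet switch's state (names/numbers invented)."""

    def __init__(self, name):
        self.name = name
        # Maps (incoming interface, incoming VC number) ->
        #      (outgoing interface, outgoing VC number).
        self.vc_table = {}

    def setup_vc(self, in_if, in_vc, out_if, out_vc):
        """Called by the signaling protocol during VC setup."""
        self.vc_table[(in_if, in_vc)] = (out_if, out_vc)

    def teardown_vc(self, in_if, in_vc):
        """Called during VC teardown; the per-VC state disappears."""
        del self.vc_table[(in_if, in_vc)]

    def forward(self, in_if, in_vc):
        """Data-transfer phase: look up the outgoing link and VC number."""
        return self.vc_table[(in_if, in_vc)]

# VC setup installs state in every switch along the path...
r2 = PacketSwitch("R2")
r2.setup_vc(in_if=0, in_vc=12, out_if=2, out_vc=22)
# ...data transfer uses that state to switch each cell...
assert r2.forward(0, 12) == (2, 22)
# ...and teardown removes it.
r2.teardown_vc(0, 12)
```

The key point the sketch makes concrete: every switch on the path holds an entry for every VC passing through it, which is exactly the state a datagram network does not keep.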

There is a subtle but important distinction between VC setup at the network
layer and connection setup at the transport layer (e.g., the TCP 3-way
handshake
we studied in Chapter 3).  Connection setup at the transport layer
only involves the two end systems.  The two end systems agree to communicate
and together determine the parameters (e.g., initial sequence number, flow
control window size) of their transport level connection before data actually
begins to flow on the transport level connection. Although the two end
systems are aware of the transport-layer connection, the switches within
the network are completely oblivious to it. On the other hand, with a virtual-circuit
network layer, packet switches are involved in virtual-circuit
setup, and each packet switch is fully aware of all the VCs passing through
it.

The messages that the end systems send to the network to indicate the
initiation or termination of a VC, and the messages passed between the
switches to set up the VC (i.e., to modify switch tables), are known as
signaling messages, and the protocols used to exchange these messages
are often referred to as signaling protocols.  VC setup is shown pictorially
in Figure 4.1-2.

Figure 4.1-2: Virtual circuit service model

We mentioned in Chapter 1 that ATM uses virtual circuits, although virtual
circuits in ATM jargon are called virtual channels. Thus ATM packet switches
receive and process VC setup and teardown messages, and they also maintain
VC state tables. Frame relay and X.25, which will be covered in Chapter
5, are two other networking technologies that use virtual circuits.

With a datagram network layer, each time an end system wants
to send a packet, it stamps the packet with the address of the destination
end system, and then pops the packet into the network. As shown in Figure
4.1-3, this is done without any VC setup. Packet switches (called “routers”
in the Internet)  do not maintain any state information about VCs
because there are no VCs! Instead, packet switches route a packet towards
its destination by examining the packet’s destination address,  indexing
a routing table with the destination address, and forwarding the packet
in the direction of the destination. (As discussed in Chapter 1, datagram
routing is similar to routing ordinary postal mail.) Because routing tables
can be modified at any time, a series of packets sent from one end system
to another may follow different paths through the network and may arrive
out of order. The Internet uses a datagram network layer.
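The datagram forwarding step described above can be sketched in a few lines of Python. The table contents and link names are hypothetical; the point is that the router keeps no per-connection state and treats every packet independently.

```python
def forward_datagram(routing_table, dest_addr, default=None):
    """Index the routing table with the packet's destination address
    and return the outgoing link; no per-VC state is consulted."""
    return routing_table.get(dest_addr, default)

# Hypothetical routing table at router R2: destination -> output link.
r2_table = {"H2": "link-to-R1", "H1": "link-to-H1"}

# Each packet is forwarded independently; if the routing table changes
# between packets, later packets may take a different path (and so may
# arrive out of order at the destination).
out = forward_datagram(r2_table, "H2")
r2_table["H2"] = "link-to-R4"   # a routing update arrives
out2 = forward_datagram(r2_table, "H2")
```

Real IP routers index their tables by address prefixes (longest-prefix match) rather than by complete host addresses; the exact-match dictionary here is a simplification.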



Figure 4.1-3: Datagram service model

You may recall from Chapter 1 that a packet-switched network typically
offers either a VC service or a datagram service to the transport layer,
and not both services. For example, an ATM network offers only a VC service
to the ATM transport layer (more precisely, to the ATM adaptation layer),
and the Internet offers only a datagram service to the transport layer.
The transport layer in turn offers services to communicating processes
at the application layer. For example, TCP/IP networks (such as the Internet)
offer a connection-oriented service (using TCP) and a connectionless service
(using UDP) to their communicating processes.

An alternative terminology for VC service and datagram service is network-layer
connection-oriented service and network-layer connectionless service,
respectively. Indeed, the VC service is a sort of connection-oriented
service, as it involves setting up and tearing down a connection-like entity,
and maintaining connection state information in the packet switches. The
datagram service is a sort of connectionless service in that it doesn’t
employ connection-like entities. Both sets of terminology have advantages
and disadvantages, and both sets are commonly used in the networking literature.
In this book we use the “VC service” and “datagram service”
terminology for the network layer, and reserve the “connection-oriented
service” and “connectionless service” terminology for the transport layer.
We believe this decision will be useful in helping the reader delineate
the services offered by the two layers.

The Internet and ATM Network Service Models

Network Architecture | Service Model | Bandwidth Guarantee      | No-Loss Guarantee | Ordering           | Timing         | Congestion Indication
---------------------|---------------|--------------------------|-------------------|--------------------|----------------|-------------------------------
Internet             | Best effort   | None                     | None              | Any order possible | Not maintained | None
ATM                  | CBR           | Guaranteed constant rate | Yes               | In order           | Maintained     | Congestion will not occur
ATM                  | VBR           | Guaranteed rate          | Yes               | In order           | Maintained     | Congestion will not occur
ATM                  | ABR           | Guaranteed minimum       | None              | In order           | Not maintained | Congestion indication provided
ATM                  | UBR           | None                     | None              | In order           | Not maintained | None

Table 4.1-1: Internet and ATM Network Service Models

The key aspects of the service models of the Internet and ATM
network architectures are summarized in Table 4.1-1.  We do not want
to delve deeply into the details of the service models here (they can be
quite “dry,” and detailed discussions can be found in the standards themselves
[ATM Forum 1997]). A comparison between the Internet and ATM service models
is, however, quite instructive.

The current Internet architecture provides only one service model, 
the datagram service, which is also known as “best effort service.” 
From Table 4.1-1, it might appear that best effort service is a euphemism
for “no service at all.” With best effort service,  timing between
packets is not guaranteed to be preserved,  packets are not guaranteed
to be received in the order in which they were sent, nor is the  eventual
delivery of transmitted packets guaranteed.  Given this definition,
a network that delivered no packets to the destination would satisfy
the definition of best-effort delivery service.  (Indeed, today’s congested
public Internet might sometimes appear to be an example of a network that
does so!)  As we will discuss shortly, however, there are sound reasons
for such a minimalist network service model.  The Internet’s best-effort
service model is currently being extended to include so-called “integrated
services” and “differentiated services.”  We will cover these still-evolving
service models in Chapter 6.

Let us next turn to the ATM service models. As noted in our overview
of ATM in Chapter 1, there are two ATM standards bodies (the ITU
and the ATM Forum). Their network
service model definitions contain only minor differences, and we adopt here
the terminology used in the ATM
Forum standards.  The ATM architecture provides for multiple service
models (that is, each of the two ATM standards has multiple service
models).  This means that within the same network, different connections
can be provided with different classes of service.

Constant bit rate (CBR) network service was the first ATM service
model to be standardized, probably reflecting the fact that telephone
companies were the early prime movers behind ATM, and CBR network service
is ideally suited for carrying real-time, constant-bit-rate, streaming
audio (e.g., a digitized telephone call) and video traffic.
The goal of CBR service is conceptually simple — to make the network connection
look like a dedicated copper or fiber connection between the sender and
receiver.  With CBR service, ATM cells are carried across the network
in such a way that the end-end delay  experienced by a cell (the so-called
cell transfer delay, CDT), the variability in the end-end delay (often
referred to as “jitter” or “cell delay variation,” CDV), and the fraction
of cells that are lost or delivered late (the so-called cell loss rate, CLR)
are guaranteed to be less than some specified values.  Also, an allocated
transmission rate (the peak cell rate, PCR) is defined for the connection
and the sender is expected to offer data to the network at this rate. 
The values for the PCR, CDT, CDV, and CLR are  agreed upon by the
sending host  and the ATM network when the CBR connection is first
established.
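The parameters agreed upon at CBR connection setup can be pictured as a small traffic contract. The sketch below is illustrative Python, not part of the ATM standards: the field names mirror the PCR, CDT, CDV, and CLR parameters from the text, but the structure and numeric values are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CBRContract:
    """Illustrative CBR traffic contract (structure and values invented)."""
    pcr_cells_per_sec: float   # peak cell rate (PCR), sender's offered rate
    max_ctd_ms: float          # bound on cell transfer delay (CDT)
    max_cdv_ms: float          # bound on cell delay variation (CDV)
    max_clr: float             # bound on cell loss rate (CLR)

# Hypothetical values agreed between the sending host and the network
# when the CBR connection is first established.
contract = CBRContract(pcr_cells_per_sec=10_000,
                       max_ctd_ms=5.0, max_cdv_ms=1.0, max_clr=1e-6)

def conforms(observed_ctd_ms, observed_cdv_ms, observed_clr, c):
    """The network's guarantee: observed delay, jitter, and loss all
    stay below the agreed bounds."""
    return (observed_ctd_ms <= c.max_ctd_ms and
            observed_cdv_ms <= c.max_cdv_ms and
            observed_clr <= c.max_clr)
```

The point of the contract view is that CBR service is defined entirely by these guaranteed bounds, which is what makes the connection "look like" a dedicated circuit to the endpoints.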

A second conceptually simple ATM service class is Unspecified Bit
Rate (UBR) network service.  Unlike CBR service, which guarantees
rate, delay, delay jitter, and loss, UBR makes no guarantees at all other
than in-order delivery of cells (that is, cells that are fortunate enough
to make it to the receiver).  With the exception of in-order delivery,
UBR service is thus equivalent to the Internet best effort service model. 
As with the Internet best effort service model, UBR also provides no feedback
to the sender about whether or not a cell is dropped within the network. 
For reliable transmission of data over a UBR network, higher layer protocols
(such as those we studied in the previous chapter) are needed.  UBR
service might be well suited for non-interactive data transfer applications
such as email and newsgroups.

If UBR can be  thought of as a “best effort” service, then Available
Bit Rate (ABR) network service
might best be  characterized as
a “better” best effort service model.   The two most important
additional features of ABR service over UBR service are:

  • A minimum cell transmission rate (MCR)  is guaranteed to a connection
    using ABR service.  If, however,  the network has enough free
    resources at a given time, a sender may actually be able to successfully
    send traffic at a higher rate than the MCR.
  • Congestion feedback from the network.  An ATM network provides feedback
    to the sender (in terms of a congestion notification bit, or a lower rate
    at which to send) that controls how the sender should adjust its rate between
    the MCR  and some peak cell rate (PCR).  ABR senders must decrease
    their transmission rates in accordance with such feedback.

ABR provides a minimum bandwidth guarantee, but on the other hand will
attempt to transfer data as fast as possible (up to the limit imposed by
the PCR).  As such, ABR is well suited for data transfer where it
is desirable to keep the transfer delays low (e.g., Web browsing).
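The ABR behavior described above, a rate that floats between the MCR and the PCR under congestion feedback, can be sketched as follows. This is a simplified illustration in Python; the actual ATM ABR rules (carried in resource-management cells, with specified increase and decrease factors) are considerably more detailed.

```python
def adjust_rate(current, mcr, pcr, congested,
                increase=1.0, decrease_factor=0.5):
    """Sketch of an ABR-style sender: cut the rate on congestion
    feedback, probe upward otherwise, and always stay within the
    guaranteed minimum (MCR) and the peak (PCR)."""
    if congested:
        new_rate = current * decrease_factor
    else:
        new_rate = current + increase
    return max(mcr, min(pcr, new_rate))

rate = 4.0   # Mbps, hypothetical starting rate
rate = adjust_rate(rate, mcr=1.0, pcr=10.0, congested=True)   # -> 2.0
rate = adjust_rate(rate, mcr=1.0, pcr=10.0, congested=False)  # -> 3.0
```

Note how the `max(mcr, ...)` clamp captures ABR's distinguishing guarantee: no matter how much congestion feedback arrives, the sender is never forced below its minimum cell rate.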

The final ATM service model is Variable Bit Rate (VBR) network service.
VBR service comes in two flavors (and in the ITU specification, VBR-like
service comes in four flavors, perhaps indicating a service class
with an identity crisis!). In real-time VBR service, the acceptable cell
loss rate, delay, and delay jitter are specified as in CBR service.  
However, the actual source rate is allowed to vary according to parameters
specified by the user to the network.  The declared variability in
rate may be used by the network (internally) to more efficiently allocate
resources to its connections, but in terms of the loss, delay and jitter
seen by the sender, the service is essentially the same as CBR service. 
While early efforts in defining a VBR service model were clearly targeted
towards real-time services (e.g., as evidenced by the PCR, CDT, CDV and
CLR parameters), a second flavor of VBR service is now targeted towards
non-real-time services and provides a cell loss rate guarantee.  An
obvious question with VBR is what advantages it offers over CBR (for real-time
applications) and over UBR and ABR for non-real-time applications. 
Currently, there is not enough (any?) experience with VBR service to answer
this question.

An excellent discussion of the rationale behind various aspects of the
ATM Forum’s Traffic Management Specification 4.0 [ATM
Forum 1996] for CBR, VBR, ABR, and UBR service is
[Garrett 1996].

4.1.2 Origins of Datagram and Virtual Circuit Service

The evolution of the Internet and ATM network service models reflects their
origins.  With the notion of a virtual circuit as a central organizing
principle, and an early focus on CBR services, ATM reflects its roots in
the telephony world (which uses “real circuits”).  The subsequent
definition of UBR and ABR service classes acknowledges the importance of 
the types of data applications developed in the data networking community. 
Given the VC architecture and a focus on supporting real-time traffic with
guarantees about the level of received performance (even with data-oriented
services such as ABR), the ATM network layer is significantly more complex
than that of the best-effort Internet. This, too, is in keeping with ATM’s telephony
heritage.  Telephone networks, by necessity, had their “complexity”
within the network, since they were connecting “dumb” end-system
devices such as a rotary
telephone. (For those too young to know, a rotary phone is a non-digital
telephone with no buttons, only a dial.)

The Internet, on the other hand, grew out of the need to connect computers
(i.e., more sophisticated end devices) together. With sophisticated end-systems
devices, the Internet architects chose to make the network service model
(best effort) as simple as possible and to implement any additional functionality
(e.g., reliable data transfer), as well as any new application level network
services at a higher layer, at the end systems. This inverts the model
of the telephone network, with some interesting consequences:

  • The resulting network service model, which made minimal (no!) service guarantees
    (and hence posed minimal requirements on the network layer), also made it
    easier to interconnect networks that used very different link-layer
    technologies (e.g., satellite, Ethernet, fiber, or radio) with
    very different characteristics (transmission rates, loss characteristics).
    We will address the interconnection of IP networks in detail in Section
    4.4.
  • As we saw in Chapter 2,  applications such as email, the Web, and
    even a network-layer-centric service such as the DNS are implemented in
    hosts (servers) at the edge of the network.  The ability to add a
    new service simply by attaching a host to the network and defining a new
    higher layer protocol (such as HTTP) has allowed new services such as the
    WWW to be adopted in a breathtakingly short period of time.

As we will see in Chapter 6, however, there is considerable debate in the
Internet community about how the network layer architecture must evolve
in order to support real-time services such as multimedia. An interesting
comparison of the ATM and the proposed next-generation Internet architecture
is given in [Crowcroft 1995].

 

References

[ATM Forum 1996]  ATM Forum,
“Traffic Management 4.0,” ATM Forum document af-tm-0056.0000.  On-line

[ATM Forum 1997] ATM Forum. “Technical
Specifications: Approved ATM Forum Specifications.” On-line.

[Crowcroft 1995] J. Crowcroft, Z.
Wang, A. Smith, J. Adams, “A Comparison of the IETF and ATM Service Models,”
IEEE
Communications Magazine,
Nov./Dec. 1995, pp. 12 – 16.  Compares
the Internet Engineering Task Force int-serv service model with the ATM
service model. On-line.

[Garrett 1996] M. Garrett, “A Service
Architecture for ATM: From Applications to Scheduling,” IEEE Network
Magazine,
May/June 1996, pp. 6 – 14. A thoughtful discussion of
the ATM Forum’s recent TM 4.0 specification of CBR, VBR, ABR, and UBR service.

Copyright Keith W. Ross and Jim Kurose,
1996-2000. All rights reserved.