What is Round-trip Time and How Does it Relate to Network Latency?

Round-trip time is an important metric that can indicate the quality of communication between two endpoints. It’s a metric our team often discusses with customers because it directly relates to the service quality experienced by users. Round-trip time is directly affected by a range of design decisions, especially those concerning network topology. However, there is some confusion around what exactly round-trip time is, how it affects your service, and how you can improve it.

What is Round-trip Time?

One of our most viewed dashboard metrics, round-trip time (RTT) is the time it takes for a packet to go from the sending endpoint to the receiving endpoint and back. There are many factors that affect RTT, including propagation delay, processing delay, queuing delay, and encoding delay. These factors are generally constant for a given pair of communicating endpoints. In addition, network congestion can add a dynamic component to RTT.  
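As a rough illustration, application-level RTT can be measured by timing a packet's round trip yourself. The sketch below is a minimal, self-contained example: the `measure_rtt` helper and the in-process loopback echo server are hypothetical stand-ins for a real remote endpoint, so the measured time here is dominated by processing delay rather than propagation.

```python
import socket
import threading
import time

def run_echo_server(sock):
    """Echo a single packet back to its sender, then exit."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)

def measure_rtt(server_addr, payload=b"ping"):
    """Send one UDP packet and time the echoed reply (application-level RTT)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        start = time.perf_counter()
        sock.sendto(payload, server_addr)
        sock.recvfrom(1024)  # block until the echo arrives
        return time.perf_counter() - start

# Demo on loopback: propagation distance is effectively zero,
# so this RTT consists almost entirely of processing delay.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # let the OS pick a free port
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

rtt = measure_rtt(server.getsockname())
print(f"loopback RTT: {rtt * 1000:.3f} ms")
server.close()
```

A real measurement would target a remote endpoint, where propagation and queuing delays dominate instead.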

Propagation delay is usually the dominant component in RTT. It ranges from a few milliseconds to hundreds of milliseconds depending on whether the endpoints are separated by a few kilometers or by an entire ocean.
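A back-of-the-envelope estimate makes this concrete. Assuming signals travel through optical fiber at roughly 200 km per millisecond (about two thirds of the speed of light in a vacuum), the propagation contribution to RTT is easy to approximate; the `propagation_rtt_ms` helper below is purely illustrative.

```python
# Assumption: light in optical fiber covers roughly 200 km per millisecond
# (about 2/3 of c). Real paths are longer than straight-line distance.
FIBER_SPEED_KM_PER_MS = 200.0

def propagation_rtt_ms(path_km):
    """Minimum RTT contribution from propagation over a fiber path (out and back)."""
    return 2 * path_km / FIBER_SPEED_KM_PER_MS

print(propagation_rtt_ms(100))   # nearby city: 1.0 ms
print(propagation_rtt_ms(6000))  # transatlantic: 60.0 ms
```

These are lower bounds: routing detours, processing, and queuing only add to them.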

The remaining components (processing, queuing, and encoding delays) vary with the number of nodes in the network path connecting the endpoints. When only a few router hops separate the endpoints, these factors are usually negligible.

In real-time communications, we must consider the impact of network topology on RTT. Any infrastructure-based topology introduces incremental delay compared to a peer-to-peer connection. When media is anchored by an MCU, SFU, or TURN server, additional processing, queuing, and encoding delays occur. But, more importantly, an infrastructure topology can add significant propagation delay depending on where the server is located relative to the endpoints.


Figure 2: Infrastructure Topology

Hairpinning occurs when media is anchored in a location geographically remote from an endpoint, adding significant propagation delay compared to a direct peer connection. This is why infrastructure placement is critical to delivering low RTT and a high-quality user experience. The farther the media server is from the sending and receiving endpoints, the higher the RTT and the lower the service quality.


Figure 3: The media server is located further away than necessary from the sending and receiving endpoints, resulting in a high round-trip time.


Figure 4: The media server is located between the sending and receiving endpoints, resulting in a lower round-trip time.
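The effect shown in Figures 3 and 4 can be put in rough numbers. Assuming fiber propagation at about 200 km per millisecond and the hypothetical distances below, anchoring media at a geographically remote server inflates RTT dramatically compared to a well-placed one.

```python
# Assumption: ~200 km per millisecond propagation speed in fiber.
FIBER_SPEED_KM_PER_MS = 200.0

def rtt_via_server_ms(a_to_server_km, server_to_b_km):
    """Propagation RTT when media is relayed by a server: out and back on each leg."""
    return 2 * (a_to_server_km + server_to_b_km) / FIBER_SPEED_KM_PER_MS

# Two endpoints 500 km apart (hypothetical figures):
direct = 2 * 500 / FIBER_SPEED_KM_PER_MS  # peer-to-peer: 5.0 ms
nearby = rtt_via_server_ms(300, 300)      # server between the endpoints: 6.0 ms
hairpin = rtt_via_server_ms(4000, 4200)   # server on another continent: 82.0 ms
```

The well-placed server costs only a millisecond of extra propagation delay, while the hairpinned path multiplies RTT more than sixteen-fold.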

Clearing Up a Few Misconceptions

Round-trip time and ping time are often considered synonymous. While ping time may provide a good RTT estimate, it differs in that most ping tests are executed at the network layer using ICMP packets. In contrast, RTT as discussed here is measured at the application layer and includes the additional processing delay introduced by higher-level protocols and applications (e.g., HTTPS).
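To illustrate the difference, application-layer RTT can be measured by timing a full request-response exchange rather than an ICMP echo. The sketch below is self-contained: it spins up a hypothetical local HTTP server on loopback, so the measured time is almost entirely processing delay. Against a real service, the same measurement would also include propagation, queuing, and any TLS processing along the path.

```python
import http.server
import threading
import time
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    """Minimal handler that answers every GET with a tiny body."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo's output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Time one full HTTP request-response: this is an application-layer RTT.
url = f"http://127.0.0.1:{server.server_port}/"
start = time.perf_counter()
with urllib.request.urlopen(url) as resp:
    resp.read()
app_rtt = time.perf_counter() - start

print(f"application-layer RTT: {app_rtt * 1000:.3f} ms")
server.shutdown()
```

Comparing this figure against a plain ping to the same host shows how much the higher protocol layers contribute.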

Network latency is closely related to, but distinct from, RTT. Latency is the time it takes for a packet to travel from the sending endpoint to the receiving endpoint. Many factors may affect the latency of a service. Latency is not simply half of RTT, because delay can be asymmetric between any two endpoints, and RTT also includes processing delay at the echoing endpoint.

How Does RTT Affect Your Real-time Communications Service?

When traffic is highly localized, RTT values segregate by continent: intracontinental calls show markedly lower RTT than intercontinental ones.


Figure 5: Round-trip time for intercontinental vs intracontinental video bridge conferences collected from one month of data monitoring by callstats.io.

To provide your customers with the highest call quality, it is important to ensure that your real-time communications service places infrastructure close to the communicating endpoints.
