Network Latency and Throughput
Network latency and network throughput are often confused or even used interchangeably. They are in fact entirely different measures: you can have high latency (bad) with high throughput (good), or low latency (good) with low throughput (bad). As these examples show, even "high" and "low" mean opposite things for latency and throughput. Let us explain what each term means and what its effects are.
What is Latency?
Some synonyms might help: delay, wait period, response time. Latency, in other words, is a measure of response. When you send a request for data (by clicking on a link, for example), the request goes out via your local network device (router, modem, etc.). From there it hops to the next device (a router or switch), then the next, and the next. The sequence of devices traversed is called the "path". Latency increases with each hop, as each device needs some time to process the data and look up the correct route to forward it on. Per-device latency varies greatly by manufacturer and model; it is usually lower for higher-speed equipment such as fibre or gigabit switches and routers.
The Good, the Bad and the Ugly
Latency is expressed as the round-trip response time in milliseconds (1,000 ms in a second) and can be judged by the following table:
LAN (Local Network): Good: under 1ms, Average: 1-3ms, Bad: over 5ms.
WAN (National connections): Good: under 30ms, Average: 30-50ms, Bad: over 50ms.
WWAN (International): Good: under 100ms, Average: 100-200ms, Bad: over 300ms.
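As a rough illustration of how figures like those above are measured, the Python sketch below times a TCP handshake to a host and reports it in milliseconds. This is a simplified stand-in for proper tools such as ping; the host and port are placeholders you would swap for your own target.

```python
# Rough latency probe: time a TCP connect (handshake) to a host.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the time taken to complete a TCP connection, in ms.

    The three-way handshake takes roughly one round trip, so this is
    a crude proxy for network latency to the host.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only wanted the timing
    return (time.perf_counter() - start) * 1000.0

# Example usage (requires network access):
# print(f"{tcp_rtt_ms('example.com'):.1f} ms")
```

Note that this measures the handshake only; sustained transfers add the other delay components discussed below.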
Networks also have other delay factors affecting latency. These are:
- Propagation delay: Amount of time required for a message to travel from the sender to receiver, which is a function of distance over speed.
- Transmission delay: Amount of time required to push all the packet’s bits into the link, which is a function of the packet’s length and data rate of the link.
- Processing delay: Amount of time required to process the packet header, check for bit-level errors, and determine the packet’s destination.
- Queuing delay: Amount of time the packet is waiting in the queue until it can be processed.
The total latency between the client and the server is the sum of all the delays just listed. At HOSTAFRICA, we keep a constant eye on our latency and throughput to ensure the best experience for our users.
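The sum of the four delay components can be sketched as a back-of-the-envelope calculation. The figures used here (signal propagation at roughly 200,000 km/s in fibre, a 1500-byte packet, and sample processing and queuing times) are illustrative assumptions, not measurements.

```python
# Estimate total one-way latency as the sum of the four delay
# components: propagation + transmission + processing + queuing.

def total_latency_ms(distance_km: float, packet_bytes: int,
                     link_mbps: float, processing_ms: float,
                     queuing_ms: float) -> float:
    # Propagation: distance divided by signal speed (~200,000 km/s in fibre)
    propagation_ms = distance_km / 200_000 * 1000
    # Transmission: packet size in bits divided by the link's data rate
    transmission_ms = (packet_bytes * 8) / (link_mbps * 1_000_000) * 1000
    return propagation_ms + transmission_ms + processing_ms + queuing_ms

# A 1500-byte packet over 1000 km of fibre on a 100 Mb/s link,
# with assumed 0.05 ms processing and 0.2 ms queuing delays:
print(round(total_latency_ms(1000, 1500, 100, 0.05, 0.2), 2))  # 5.37
```

Notice that at this distance the propagation delay (5 ms) dominates; the transmission delay of a single packet on a fast link is tiny by comparison.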
When does it matter?
Latency has almost no effect on normal internet activity such as web browsing, email and even streaming video until it gets in excess of 300-500ms. Voice over IP (VoIP) is more sensitive and needs latencies under 200ms to work properly. Gamers are the most in need of low latency, as are Forex traders. Latency on international links is driven by distance, so it pays to find a server closer to your target market if latency is a concern.
What is Throughput?
Throughput is a measure of data transfer over time. This is often referred to as "link speed" or "connection speed". The unit used here is bits per second (b/s), usually in multiples: Kilobits per second (Kb/s) (1,000 b/s), Megabits per second (Mb/s) (1,000,000 b/s), Gigabits per second (Gb/s) (1,000,000,000 b/s) and Terabits per second (Tb/s) (1,000,000,000,000 b/s). Some very large networks also talk about Petabits per second (Pb/s) (1,000 Tb/s).
Throughput is largely independent of latency. If latency is purely a factor of distance or many hops, throughput will not be greatly affected. If latency is caused by bad connections, defects or bad configurations, however, it will reduce the throughput of TCP traffic (which must retransmit lost packets), with far less effect on UDP traffic (which does not).
Bits and Bytes
What are these bits and bytes? A BIT is a single SIGNAL or FLAG. It can be ON (1) or OFF (0). This is part of the binary code system on which computers run. A BYTE is the smallest number of BITs needed to represent a character: 8 BITs. Note that link speed is always quoted in BITS (b) per second, while download speed is often referenced in BYTES (B) per second. As there are 8 bits per byte, 100Mb/s = 12.5MB/s.
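The bits-to-bytes conversion above is a one-line division by 8, sketched here:

```python
# Convert an advertised link speed in megabits per second (Mb/s)
# to the megabytes per second (MB/s) shown in download dialogs.
# There are 8 bits in a byte, so divide by 8.

def mbps_to_MBps(megabits_per_second: float) -> float:
    return megabits_per_second / 8

print(mbps_to_MBps(100))  # 12.5, matching the 100Mb/s = 12.5MB/s above
```

This is why a "100 meg" line never downloads files at 100 MB/s: the advertised figure is in bits, the download dialog's in bytes.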
Why can latency affect throughput?
When latency is high enough, or a network is faulty, packets can be lost. Packet loss is when data packets (usually around 1500 bytes each) are dropped before reaching their target and therefore have to be resent. This triggers a phenomenon called TCP BACKOFF (part of TCP's congestion control), which immediately halves the current send rate and tries again. Speed then increases slowly while there are no errors, but degrades again on further errors. This results in the following scenario:
Speed=100Mb/s -> Error -> Speed=50Mb/s -> Error -> Speed=25Mb/s -> Success -> Speed=30Mb/s -> Error -> Speed=15Mb/s, and so on.
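The halve-on-error, creep-up-on-success pattern above can be sketched as a toy model. The 20% growth per success is an illustrative assumption chosen to reproduce the 25 -> 30 step in the scenario; real TCP congestion control is more nuanced.

```python
# Toy model of TCP backoff: halve the rate on every error,
# grow it gradually (here: +20%, an assumed figure) on success.

def simulate(start_mbps: float, events: list[str]) -> list[float]:
    speed = start_mbps
    history = [speed]
    for event in events:
        if event == "error":
            speed /= 2        # multiplicative decrease on packet loss
        else:
            speed *= 1.2      # gradual recovery while error-free
        history.append(round(speed, 1))
    return history

print(simulate(100, ["error", "error", "success", "error"]))
# [100, 50.0, 25.0, 30.0, 15.0] -- the scenario described above
```

On a lossy link this sawtooth keeps average throughput well below the nominal link speed.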
A 100Mb/s link will almost never allow you the full 100Mb/s as roughly 5-8% is used for packet overhead. This overhead contains information such as routing data, frame tags, Quality Of Service (QoS) tags, VLAN tags and Packet index to name a few. As a result, the full packet is never available and neither is the full speed of the link.
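The effective payload rate (sometimes called "goodput") after overhead can be sketched as follows; the 6% default is a mid-range assumption within the 5-8% quoted above.

```python
# Effective payload rate after protocol overhead (headers, frame
# tags, QoS tags, VLAN tags, etc.). The 6% default is an assumed
# mid-range value from the 5-8% overhead discussed above.

def goodput_mbps(link_mbps: float, overhead_fraction: float = 0.06) -> float:
    return link_mbps * (1 - overhead_fraction)

print(goodput_mbps(100))  # roughly 94 Mb/s of actual payload
```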
Hopefully, this will give you a better picture of your network connection.