The Obsession with Internet Bandwidth: Part 1

Published 31/01/2024
Speed, in terms of Internet connectivity, isn't real. More bandwidth isn't the answer. Read now to find out what affects the user experience online.

Over the past 20 years, the primary solution suggested for resolving network experience issues has been to 'get more bandwidth'. Yet the advice persists even though bandwidth has increased dramatically and poor user experience remains the top complaint.


Today, our global obsession with bandwidth means that when more of it is suggested, the decision to invest is usually swift. More critically, our quest for 'speed' has turned constantly measuring our bandwidth into a habit, almost as if our lives depended on it. In the United States alone, millions of bandwidth tests, commonly known as speed tests, are run every day.


The prevailing marketing message has convinced the world that more bandwidth equates to faster speeds, and faster is invariably better than slower. A prevalent belief is that 100Mbps is definitively faster than 50Mbps, thus ensuring a superior user experience. But is this assumption accurate?


Consider purchasing a new PC from Amazon. It's widely accepted that 'next day' delivery is quicker than 'second day' delivery. From the buyer's perspective, 'faster' is determined by how soon the PC arrives. It's challenging to argue that arriving earlier is slower than arriving later.


When evaluating the buyer's experience, does knowing Amazon's maximum shipping rate of 20,000 packages per hour (pph) add any value? The answer is 'No', as this rate limit has no direct link to the actual delivery time. What about the distance to the customer? For instance, does a package sent to Denver experience a different delivery time compared to one sent to Sydney from the same dispatch point? The answer is likely 'Yes'.


Similar to packages per hour, bandwidth is measured as a rate, specifically bits per second (bps). So, should we expect bps and pph to have comparable characteristics? For example, is sending a network packet to an online user analogous to sending a package to a customer? Considering geography, is it reasonable to assume that sending data over a greater distance will take longer than over a shorter one? If so, then variations in network connection times, influenced by factors like wireless vs. wired connections or the performance of a $60 router versus a $10,000 one, should also be expected. Interestingly, network time, known as latency, is well-defined but rarely considered. Latency measures the time taken for a packet to travel to its destination and back (round trip time, RTT), and is expressed in milliseconds (ms).
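
To make latency concrete, here is a minimal Python sketch that estimates RTT by timing TCP connection handshakes. The host name is only a placeholder, and the result is an approximation of what a dedicated tool such as ping would report.

```python
import socket
import time

def estimate_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Rough RTT estimate: a TCP connect completes after roughly one round trip."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass  # handshake complete: about one round trip has elapsed
        timings.append((time.perf_counter() - start) * 1000.0)
    return min(timings)  # the minimum sample is closest to the base (uncongested) RTT

if __name__ == "__main__":
    # 'example.com' is a placeholder host, not a recommended test endpoint
    print(f"approximate RTT: {estimate_rtt_ms('example.com'):.1f} ms")
```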


Latency time defines one of the most important aspects of packet delivery.


Consider a cloud application service provider serving two customers: Customer 'a' with a connection latency of 10ms and Customer 'b' with a latency of 20ms. If a packet is sent to both at the same moment, it will reach Customer 'a' first, simply because 10ms is less than 20ms. Now, if Customer 'a' has a bandwidth of 50Mbps and Customer 'b' has 100Mbps, would that alter the result? Almost certainly not, despite what our obsession with bandwidth would suggest. In essence, time is a direct factor in user experience, whereas the relevance of bits per second is debatable.
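
To put rough numbers on the example, here is a simple back-of-the-envelope model in Python. The 1500-byte packet size is an assumption (a typical Ethernet frame), and the model deliberately ignores everything except one-way delay and serialization time.

```python
def delivery_time_ms(payload_bits: int, latency_ms: float, bandwidth_mbps: float) -> float:
    """One-way delivery time: half the round-trip latency plus time to push the bits out."""
    one_way_ms = latency_ms / 2
    serialization_ms = payload_bits / (bandwidth_mbps * 1_000_000) * 1000
    return one_way_ms + serialization_ms

packet_bits = 1500 * 8  # one 1500-byte packet (assumed typical size)

# Customer 'a': lower latency, lower bandwidth
print(f"a (10 ms, 50 Mbps):  {delivery_time_ms(packet_bits, 10, 50):.2f} ms")   # ~5.24 ms
# Customer 'b': higher latency, double the bandwidth
print(f"b (20 ms, 100 Mbps): {delivery_time_ms(packet_bits, 20, 100):.2f} ms")  # ~10.12 ms
```

For a single packet, the extra bandwidth buys Customer 'b' almost nothing; latency dominates.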


To further explain this disconnect between time and rate, imagine a car at a fork in the road, with both paths leading to the airport. The left path is a 70mph, 8-lane freeway, and the right is a 30mph, single-lane road. If speed is the primary concern, the left freeway seems the obvious choice. However, if the left path is 35 miles to the airport and the right is only 10 miles, the journey takes less time via the right path: 30 minutes on the freeway versus 20 minutes on the single-lane road. The 'cars per hour' rate of the road doesn't directly determine the travel experience. What matters is the 'time to destination', which is the factor the driver actually feels. Arriving earlier is faster than arriving later, and in terms of experience, a late arrival could mean missing a flight, clearly a negative outcome.


So, why measure bandwidth at all? Bandwidth defines capacity. It is one of the two parameters, the other being latency, that set the limit of a network's capacity. Under supply and demand principles, the stability and performance of a connection rely on demand never exceeding supply; exceeding it is akin to driving in rush hour traffic, when demand surpasses road capacity. For instance, a 50Mbps connection with a 20ms latency equates to an end-to-end capacity of 1 million bits (50,000 bits per millisecond x 20ms).


Considering demand, 1 million bits translates to just 125,000 bytes or 125KB, roughly the size of a low-quality photo. A typical webpage contains multiple images, often of higher quality, along with graphics, text, tables, animations, and videos. Therefore, the demand from a single user's web browser can quickly surpass the capacity of a 50Mbps connection with 20ms latency, causing a period where the connection becomes unavailable to others.
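
The same arithmetic can be expressed as the bandwidth-delay product. The sketch below computes it for the 50Mbps/20ms example and compares it with an assumed page weight of roughly 2MB; the page size is an illustrative assumption, not a measurement.

```python
def bandwidth_delay_product_bits(bandwidth_mbps: float, rtt_ms: float) -> float:
    """The most data that can be 'in flight' on the path at any instant."""
    return bandwidth_mbps * 1_000_000 * (rtt_ms / 1000)

capacity_bits = bandwidth_delay_product_bits(50, 20)   # 1,000,000 bits
capacity_kb = capacity_bits / 8 / 1000                 # 125 KB
page_kb = 2_000                                        # assumed ~2 MB page weight
print(f"in-flight capacity: {capacity_kb:.0f} KB")
print(f"pipe-fulls needed for a {page_kb} KB page: {page_kb / capacity_kb:.0f}")
```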


Latency is important because time is everything.


Understanding the size of a delay and the number of users becomes essential in determining the quality of network service. While both the bandwidth rate and the latency affect the transaction time for a single user, it is the concurrent user demand that determines how severe the delay penalty becomes when supply is limited. Hence, managing a network on bits per second alone, without considering capacity, concurrency, and demand, often ends in user complaints about slow response times or even disconnects.
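
A rough way to see the supply/demand effect is to watch how a fixed amount of bandwidth is shared as concurrent users arrive. In the sketch below, the per-user demand figure is an assumption chosen purely for illustration.

```python
supply_mbps = 50
demand_per_user_mbps = 8   # assumed burst demand per user while a page loads

for users in (1, 3, 6, 10):
    total_demand = users * demand_per_user_mbps
    fair_share = min(demand_per_user_mbps, supply_mbps / users)
    congested = total_demand > supply_mbps
    print(f"{users:2d} users: demand {total_demand:3d} Mbps vs supply {supply_mbps} Mbps, "
          f"per-user share {fair_share:.1f} Mbps{' -> queueing delay' if congested else ''}")
```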


Increasing bandwidth is key to handling concurrent usage. Similar to adding lanes to a highway to manage rush hour traffic, increasing bandwidth can allow more data to flow simultaneously. However, this doesn't necessarily reduce travel time (or data transmission time) if congestion wasn't the initial problem. Most bandwidth or speed test services overlook the crucial metric of delay caused by poor supply/demand ratios, leading to inaccurate assessments of bandwidth.
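
To see why more bandwidth doesn't always mean less time, consider fetching a small object over the 20ms connection from earlier. The 10KB object size is an assumption, and the model below is a simplification that charges one round trip plus transfer time.

```python
def fetch_time_ms(object_bits: int, rtt_ms: float, bandwidth_mbps: float) -> float:
    """Simplified fetch time: one round trip for the request plus transfer time."""
    return rtt_ms + object_bits / (bandwidth_mbps * 1_000_000) * 1000

obj_bits = 10_000 * 8  # an assumed 10 KB object
print(f"50 Mbps:  {fetch_time_ms(obj_bits, 20, 50):.1f} ms")   # ~21.6 ms
print(f"100 Mbps: {fetch_time_ms(obj_bits, 20, 100):.1f} ms")  # ~20.8 ms
```

Doubling the bandwidth saves less than a millisecond here; the fetch is latency-bound, not bandwidth-bound.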


The main issues with most Internet bandwidth tests are:


1. Many users misinterpret bandwidth measurements as 'speed.' However, if an increase in bitrate doesn't reduce latency, the perceived improvement is questionable, especially if latency increases.


2. Several bandwidth measurement solutions report bandwidth rates as speed without referencing capacity or time, exacerbating the confusion between speed and capacity.


3. Many bandwidth tests don't measure data accurately. Methods that require filling the network 'pipe' continuously throughout the test are flawed, as they presuppose knowledge of the bandwidth, which is the very thing being tested. Simply adding data and dividing by time, without considering latency, yields meaningless results that are often misinterpreted as significant.


Consider a highway test in which 10 cars each cover 6 miles in an hour. A flawed test adds the distances together and reports 60 miles per hour, when in reality each car is travelling at 6 miles per hour. This illustrates the importance of keeping time in the equation. Similarly, in network terms, a reported 60Mbps might look impressive, but if each flow is actually moving data at 6Mbps, that is a problem. Popular bandwidth tests, which tend to favor providers, often mask poor network quality in exactly this way, which is one reason poor user experiences remain so common.
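
Here is a minimal sketch of that flaw with made-up numbers: a test that adds up everything moved across parallel streams and divides by elapsed time reports a headline figure no single flow ever experienced.

```python
# Made-up figures to mirror the highway analogy: 10 parallel streams,
# each actually achieving 6 Mbps over a 10-second test window.
streams, per_stream_mbps, seconds = 10, 6, 10

total_megabits = streams * per_stream_mbps * seconds   # everything transferred
reported_mbps = total_megabits / seconds               # the flawed headline: 60 "Mbps"
experienced_mbps = per_stream_mbps                     # what any one flow actually saw

print(f"reported: {reported_mbps:.0f} Mbps, experienced per flow: {experienced_mbps} Mbps")
```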


So, how should bandwidth be measured? Any test that requires continuously reading data with no gaps has several fundamental problems:


1. The test must keep the pipe full of data throughout.

2. The amount of data needed to do this (bandwidth x latency) means the test has to know the bandwidth beforehand, which is circular: it presupposes the very answer it is trying to measure.

3. Demanding 100% of the pipe can negatively impact user experience.

4. Accuracy is compromised unless the network is completely unused.

5. Saturating high-capacity networks harms the experience of many users.


Running such tests infrequently, for example at night, doesn't provide a realistic assessment of typical usage times. Focusing instead on user experience by frequently testing equilibrium throughput (timeliness) offers more benefits (a rough sketch of such a test follows the list below):


1. It accurately reflects the user experience as it's a single-user test.

2. Frequent, non-destructive testing during peak hours doesn't affect user experience.

3. It enhances performance management strategies by providing more representative data.

4. It quickly identifies network delays impacting user experience.

5. Capacity-specific issues are only tested when there's clear evidence of problems.
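
As a rough illustration of what such a lightweight, non-destructive test could look like, the sketch below times one small HTTP fetch at intervals and records the elapsed time rather than trying to fill the pipe. The URL and the interval are placeholders, not a recommendation of any particular endpoint or tool.

```python
import time
import urllib.request

def timed_fetch_ms(url: str) -> float:
    """Time one small fetch end to end; no attempt is made to saturate the link."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    url = "https://example.com/"   # placeholder: ideally a small, stable test object
    for _ in range(3):             # in practice this would run periodically all day
        print(f"{timed_fetch_ms(url):.0f} ms")
        time.sleep(60)             # placeholder interval between samples
```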


In conclusion, recognizing the role of bandwidth in defining capacity and concurrency is vital for a network measurement strategy focused on user experience. The key is shifting the focus from abstract metrics like bits per second to tangible 'user experience' factors like timeliness and efficiency. This doesn't guarantee a good experience, but prioritizing metrics that actually affect users allows problems to be detected and resolved early. That proactive approach improves the quality and consistency of the overall customer experience, helps reveal experience trends, and improves future network planning. The old mindset of simply needing more bandwidth is inadequate without considering the user experience. Treating latency as central, and bandwidth as a capacity rather than a speed, greatly strengthens a network assessment strategy that puts the user first. In network terms, time is indeed everything.
