New FCC Broadband Standards Should Consider Latency
The FCC sought public input in November on including latency metrics in reporting requirements.
Jericho Casper
WASHINGTON, February 27, 2024 – The Federal Communications Commission is being told that the focus on speed in its draft order released Thursday is obscuring other important broadband metrics, namely latency, the time it takes for traffic to reach its destination.
Comments cited in a draft report and order published by the FCC last week suggest that, as the agency considers increasing the national broadband speed standard, it should also strengthen its broadband criteria with regard to latency.
Up for consideration by the FCC on March 14, the draft report would raise the current national broadband speed definition of 25/3 Megabits per second (Mbps), established in 2015, to 100/20 Mbps. If adopted, the report would also set a long-term national broadband speed target of 1 Gigabit per second downstream and 500 Mbps upstream.
While the FCC's draft report doesn't specify explicit latency requirements for defining "broadband", the agency did ask for feedback on creating benchmarks for network latency reporting, which it plans to revisit in future inquiries.
The FCC’s draft report, on page 70, indicates that several respondents to the agency’s initial inquiry, launched in November, support incorporating service quality elements, particularly latency, into the commission's annual Section 706 report. That report evaluates whether advanced telecommunications capability, including broadband, is being deployed to all Americans in a reasonable and timely manner.
For more than a decade, the commission has required recipients of the Universal Service Fund’s high-cost funding to keep 95 percent or more of all observations of network round-trip latency at or below 100 milliseconds. This is the same standard the Department of Commerce requires Broadband Equity, Access and Deployment funding recipients to meet.
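In practice, that high-cost standard amounts to a 95th-percentile check on measured round-trip times. A minimal sketch of such a check, using invented sample values rather than any real carrier data, might look like this:

```python
# Hypothetical check of the high-cost latency standard: at least 95 percent
# of round-trip time (RTT) observations must be at or below 100 ms.
rtt_samples_ms = [23, 31, 28, 95, 102, 34, 41, 29, 88, 110,
                  27, 33, 30, 45, 38, 26, 97, 36, 42, 31]  # example data only

threshold_ms = 100
within_threshold = sum(1 for rtt in rtt_samples_ms if rtt <= threshold_ms)
share = within_threshold / len(rtt_samples_ms)

print(f"{share:.0%} of observations at or below {threshold_ms} ms")
print("Meets standard" if share >= 0.95 else "Does not meet standard")
```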
In the FCC’s draft report, the commission states the majority of commenters call for the FCC to maintain this standard, consistent with language in the Infrastructure Act.
On the other hand, some commenters, including USTelecom and WISPA, argue that considering service quality factors like latency would broaden the scope of the commission’s Section 706 assessment beyond its original purpose. The FCC, however, rejected this assertion in its draft report.
Among those supporting the FCC’s proposal to investigate latency as a service attribute are ADTRAN, whose comments are detailed on pages 15-17; ASSIA, noted on pages 2-4; and Dr. William Hawkins, III, an assistant professor of computer science at the University of Cincinnati, whose comments argue that “working latency must be evaluated alongside throughput when determining whether a consumer’s internet access qualifies as broadband”.
Another response, backed by 63 signatures, suggests that Congress and the FCC should not raise the broadband standard beyond 100/20 Mbps at this time without also considering latency and the technology and software that exist to mitigate it.
The comment submitted by Dave Taht, chief science officer of LibreQoS, argues that today’s applications are not typically bandwidth-limited, but are instead significantly limited by working latency. LibreQoS is a software tool designed to reduce the delay or lag experienced by internet users when accessing the web.
Taht argues the commission should balance its near-term efforts to increase speed and bandwidth with the goal of minimizing latency. Taht’s comment emphasizes that web page load time is almost entirely bound by latency, not by bandwidth, stating “it is rare that a typical web page will use more than 20 megabits at any instant in time.”
The comment references studies demonstrating that a 10 gigabit per second link with 50 ms of latency is readily outperformed by a 10 megabit per second link with 1 ms of latency for most interactive traffic.
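A back-of-the-envelope sketch, not taken from the comment itself, helps show why: an interactive exchange is a chain of dependent round trips, so total time scales with RTT long before bandwidth becomes the bottleneck. The round-trip count and payload size below are illustrative assumptions.

```python
# Illustrative model of an interactive exchange: a chain of dependent round trips
# (DNS lookup, TCP and TLS handshakes, then small request/response pairs) plus the
# time to transfer the bytes involved. The round-trip count and payload size are
# assumptions for illustration, not figures from the FCC record.

def interactive_time_ms(rtt_ms: float, bandwidth_mbps: float,
                        payload_kb: float = 200, sequential_rtts: int = 12) -> float:
    """Estimate completion time: sequential round trips plus transfer time."""
    transfer_ms = (payload_kb * 8 / 1000) / bandwidth_mbps * 1000  # KB -> Mb, then ms
    return sequential_rtts * rtt_ms + transfer_ms

# 10 Gbps with 50 ms RTT versus 10 Mbps with 1 ms RTT
print(f"10 Gbps, 50 ms RTT: {interactive_time_ms(50, 10_000):.0f} ms")  # ~600 ms, RTT-bound
print(f"10 Mbps,  1 ms RTT: {interactive_time_ms(1, 10):.0f} ms")       # ~172 ms, with 1,000x less bandwidth
```

Under these assumptions, the slower link finishes the exchange in roughly a quarter of the time, because almost all of the faster link's 600 milliseconds is spent waiting on round trips rather than moving bytes.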
“Calls for further bandwidth increases are analogous to calling for cars to have top speeds of 100, 500, or 1000 miles per hour. Without calling also for better airbags, bumpers, brakes, or roads designed to minimize travel delay,” the comment reads. “Increasing the ‘speed limit’ of the link without actually making the road navigable at the higher speed is a waste of effort.”
The comment highlights that real-time applications like video conferencing, online gaming, and voice over Internet Protocol (VoIP) are particularly sensitive to latency. High or variable latency (jitter) in these applications can cause delays, echoes, distortion, or frozen video.
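Jitter is commonly summarized as the variation between successive latency samples. A minimal sketch, using invented measurements and a simple mean-successive-difference definition (RFC 3550 defines a smoothed variant for RTP streams), could look like this:

```python
# A simple way to quantify jitter: the average absolute difference between
# consecutive latency samples. The samples below are invented for illustration.
latency_ms = [22, 24, 21, 60, 23, 25, 22, 58, 24, 23]  # hypothetical measurements

diffs = [abs(b - a) for a, b in zip(latency_ms, latency_ms[1:])]
jitter_ms = sum(diffs) / len(diffs)

print(f"Mean latency: {sum(latency_ms) / len(latency_ms):.1f} ms")
print(f"Jitter (mean successive difference): {jitter_ms:.1f} ms")
```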
The comment further asks why the federal government never inquires about the type of fiber a subrecipient plans to deploy, noting that active fiber offers considerably greater transit capacity than GPON.
Latency refers to the time it takes for data to travel from its source to its destination. It is typically measured as round-trip time in milliseconds (ms). Both upload and download traffic benefit from lower latency, and certain technologies offer lower latency than others. Fiber optic cables offer the lowest latency among wired connections, typically ranging from 1 to 10 ms, due to the high speed at which light travels through the glass fiber.
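For a rough sense of how round-trip time is measured in practice, the sketch below times a TCP handshake to an example host as a proxy for RTT; the hostname is only a placeholder, and dedicated tools such as ping report latency more precisely.

```python
# Rough RTT estimate: time a TCP connection handshake to a remote host.
# The host and port are examples; dedicated tools (ping, traceroute) are more precise.
import socket
import time

def estimate_rtt_ms(host: str = "example.com", port: int = 443) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection established; the handshake takes roughly one round trip
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    samples = [estimate_rtt_ms() for _ in range(5)]
    print(f"RTT samples (ms): {[round(s, 1) for s in samples]}")
    print(f"Minimum RTT: {min(samples):.1f} ms")
```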