Metrics Workshop: Measuring Current Network Versus Internet Users' Needs

WASHINGTON, September 2, 2009 – The Federal Communications Commission’s Wednesday workshop on how best to benchmark broadband for evaluating the various dimensions of service across geographic areas highlighted the difference between measuring the current network and focusing on internet users’ needs.

Richard Clarke, assistant vice president of public policy at AT&T, said that the FCC should benchmark broadband very broadly. This would allow the agency to account for different classes of user needs and for service differentiation across user capabilities and times of day.

Clarke also argued that the FCC must establish benchmarks that do not vary over time.

Taking a different point of view were Harold Feld, legal director of Public Knowledge, and Catherine Sandoval, assistant professor of law at Santa Clara University. Feld and Sandoval said that benchmarks should focus on American citizens’ right to use broadband, and should not be limited by usage availability or cost.

They also said that FCC benchmarks must be somewhat adaptive to the changing needs of consumers, and will inevitably change over time.

Where Clarke said that broadband should be tailored to different service levels depending upon the needs of different types of consumers, Feld, Sandoval and Scott Berendt said that it will take superior levels of broadband, beyond what is currently used in low-usage areas, for internet usage in rural and low-income areas to progress. Berendt is director of research, evaluation and documentation for the non-profit group One Economy.

The three argued that broadband must be benchmarked by types of technology, and by gaps of service, as well as by speed and by ZIP code-based locations of service.

In particular, Sandoval’s presentation urged the FCC not only to focus on traditional metrics like speed, but also on internet service providers’ restrictions on downloading applications, application use, computer tethering and device attachment, as well as their congestion policies and practices.

She also urged consideration of the different types of broadband available when determining where improvements in broadband service are necessary.

Sandoval gave examples of how the available types of broadband differ because of “application restrictions, bandwidth limits, usage policies, slowdown policies, device attachment prohibitions, peak, average and slowdown speeds.”

Issues discussed during the question-and-answer session included the most meaningful way to measure the price of broadband, why “average use” broadband speeds are so low, and how to most effectively collect data about where broadband is actually being used.

On the question of how to collect insightful broadband data, Jon Peha, the Chief Technology Officer of the FCC, asked how to ensure that the data collected about broadband is not “slanted” in a certain direction.

In response, Santa Clara University Computer Science Director Jon Eisenberg said that there are several ways to collect data, including how Apple tracks iTunes download performance. Feld jumped in to mention that many independent companies already track broadband usage data for profit.

A questioner from the audience asked whether it might be possible to collect information about broadband access in the 2010 Census. Most of the workshop participants liked the idea, but Sandoval and Feld said that such questions must be phrased strategically for those who have little knowledge of how broadband access works and is defined.

Ironically, the workshop participants themselves did not have a definitive answer to that definition.