Panelists Consider Pros and Cons of Alternatives to Internet's Transport Protocol
Editor’s Note: This is one in a series of panelist summary articles that BroadbandCensus.com is reporting from the Telecommunications Policy Research Conference, September 25-27, at George Mason University School of Law in Arlington, Va.
ARLINGTON, Va., September 26, 2009 – Whether internet service providers will accelerate early efforts to prioritize bandwidth, and what impact such measures might have upon the open internet, were actively discussed by panelists at the Telecommunications Policy Research Conference here.
Traditionally, internet traffic has been managed by the Transmission Control Protocol (TCP), the engineering standard governing almost all internet transmissions. When there is greater demand for internet content than the network can carry at any given point in time, “each flow of the network gets a roughly equal share of the bottleneck capacity,” according to Steve Bauer, a professor of computer science at MIT.
Bauer was presenting a paper, “The Evolution of Internet Congestion,” co-authored with David Clark and William Lehr, also of MIT.
This approach to allocating internet traffic has been dubbed “TCP Fair,” and it remains the standard way of dealing with congestion. However, a variety of internet providers, including Comcast – which was sanctioned by the FCC for interfering with traffic from the BitTorrent application – have been experimenting with alternatives.
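To illustrate the “TCP Fair” outcome Bauer describes, here is a minimal simulation sketch – not drawn from the panelists’ work, and with purely illustrative capacities and rates – of TCP’s additive-increase, multiplicative-decrease (AIMD) rule pushing competing flows toward equal shares of a bottleneck:

```python
# A minimal sketch of AIMD convergence to fair shares. All numbers
# (capacity, starting rates, increments) are illustrative assumptions.

BOTTLENECK = 100.0   # link capacity, in arbitrary rate units
ROUNDS = 2000

# Start four flows at deliberately unequal rates.
rates = [5.0, 20.0, 40.0, 1.0]

for _ in range(ROUNDS):
    if sum(rates) > BOTTLENECK:
        # Congestion: every flow backs off multiplicatively (halves).
        rates = [r / 2.0 for r in rates]
    else:
        # No congestion: every flow probes upward by a fixed increment.
        rates = [r + 1.0 for r in rates]

total = sum(rates)
print("final rates:", [round(r, 1) for r in rates])
print("shares of bottleneck:", [round(r / total, 2) for r in rates])
# The shares converge toward 1/4 each: the "roughly equal share of the
# bottleneck capacity" that defines the TCP Fair outcome.
```

Because multiplicative decrease shrinks the differences between flows while additive increase leaves them unchanged, repeated congestion cycles drive even very unequal starting rates toward the same share.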
Bauer said that Comcast and other broadband providers have been experimenting with changes to the “TCP Fair” approach because of changing expectations among end users, the changing composition of internet traffic, and new ideas – ideas challenging the traditional notion of the “end-to-end” internet – emerging in the technical community.
Alternatives or additions to the “TCP Fair” approach include re-ECN, a “re-feedback” of Explicit Congestion Notification; LEDBAT, or Low Extra Delay Background Transport; and P4P, an approach to peer-to-peer (P2P) communications that lets network providers guide P2P traffic so as to maximize the effectiveness of their networks.
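For a sense of how a delay-based alternative such as LEDBAT differs, the sketch below caricatures its core control idea: backing off when measured queuing delay exceeds a small target, rather than waiting for packet loss. The constants and function here are illustrative assumptions, not the protocol’s full specification:

```python
# A minimal sketch of LEDBAT-style delay-based window control.
# Constants and the update rule are simplified for illustration.

TARGET_DELAY_MS = 100.0   # target queuing delay to stay below
GAIN = 1.0                # how aggressively the window reacts

def ledbat_step(cwnd: float, base_delay_ms: float, current_delay_ms: float) -> float:
    """Return the updated congestion window after one round trip."""
    queuing_delay = current_delay_ms - base_delay_ms
    # Positive when the queue is below target (speed up),
    # negative when the queue is above target (slow down).
    off_target = (TARGET_DELAY_MS - queuing_delay) / TARGET_DELAY_MS
    return max(1.0, cwnd + GAIN * off_target)

cwnd = 10.0
# An idle path (no queuing) lets the window grow...
cwnd = ledbat_step(cwnd, base_delay_ms=40.0, current_delay_ms=40.0)
# ...while a building queue shrinks it, yielding to other traffic.
cwnd = ledbat_step(cwnd, base_delay_ms=40.0, current_delay_ms=200.0)
print(round(cwnd, 2))
```

The design goal is that background transfers detect congestion early, from rising delay, and yield to interactive traffic before loss-based TCP flows even notice the queue.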
Considering the validity of different approaches is particularly significant in light of Federal Communications Commission Chairman Julius Genachowski’s announcement, this past Monday, that the agency will begin implementing network neutrality requirements.
Bauer recommended that the academic community “obtain more data about traffic patterns, congestion and usage [while also] ensuring that transparency requirements don’t discourage experimentation with new congestion management techniques.”
Also speaking on the panel was Nicholas Weaver, a software expert at the International Computer Science Institute at the University of California at Berkeley. Weaver highlighted the unusual economics of P2P communications: content providers save enormous amounts on delivery, and he said CNN has saved up to 30 percent of its bandwidth costs by aggressively using P2P.
“But the internet service provider sees a magnification of costs,” said Weaver. The economics can be changed, however, by the introduction of peer-to-peer “edge caches” that are offered free of charge.
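Weaver’s point about shifted costs can be seen with back-of-the-envelope arithmetic. The figures below are hypothetical, not his data; the sketch only shows how peer-assisted delivery moves bytes, and therefore costs, from the content provider’s transit bill onto access networks:

```python
# A back-of-the-envelope sketch of the P2P cost shift Weaver describes.
# Every figure here is an assumed, hypothetical number.

TOTAL_DELIVERY_GB = 1_000_000   # monthly content volume (assumed)
TRANSIT_COST_PER_GB = 0.05      # content provider's transit rate (assumed)
ISP_COST_PER_GB = 0.03          # access ISP's marginal upstream cost (assumed)

def cost_split(peer_fraction: float) -> tuple[float, float]:
    """Provider and ISP monthly costs when `peer_fraction` of delivery
    is served from end users' uplinks instead of the provider's servers."""
    provider = (1 - peer_fraction) * TOTAL_DELIVERY_GB * TRANSIT_COST_PER_GB
    isp = peer_fraction * TOTAL_DELIVERY_GB * ISP_COST_PER_GB
    return provider, isp

for f in (0.0, 0.3, 0.7):
    provider, isp = cost_split(f)
    print(f"peer share {f:.0%}: provider pays ${provider:,.0f}, ISPs bear ${isp:,.0f}")
```

As the peer-served share rises, the content provider’s bill falls while the access providers’ costs grow, which is why edge caches, keeping popular content inside the ISP’s own network, can realign the incentives.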
Panelists for this session included:
- Marius Schwartz, Georgetown University (Moderator)
- Steve Bauer, David Clark, William Lehr: Massachusetts Institute of Technology
- Günter Knieps, Albert-Ludwigs-Universität Freiburg
- Nicholas Weaver, ICSI