Dateline Ashburn: The Interplay Between IXPs and Data Centers
Data centers and internet exchange points in 'Data Center Alley' can look indistinguishable, but they serve different purposes.
Jake Neenan, Jericho Casper
ASHBURN, Va., Sept. 9, 2025 – Ashburn, Virginia, rose to prominence as the internet’s capital in part because of its unusually dense cluster of internet exchange points, the neutral hubs where networks trade traffic.
Today, with hundreds of data centers crowding the same corridor, those exchanges and the facilities that host them have become nearly inseparable, with many of Ashburn’s largest data centers now operating exchange fabrics of their own.
As artificial intelligence fuels a fresh wave of hyperscale construction, modern exchange points are being tested and are adapting in new ways to keep pace with soaring workload demands.
Ashburn’s rise
Ashburn has by far the largest data center capacity in the world, a dominance seeded in the 1990s when early IXPs like MAE-East were established in Ashburn, Reston, Vienna, and Washington, D.C.
By 1998, MAE-East had been relocated from Tysons Corner to Loudoun County, cementing the area’s role as a neutral meet-point for carriers and content providers. But as internet traffic exploded in the late 1990s, the aging exchange struggled with congestion and reliability issues.
Into that gap stepped Equinix, which had launched in 1998 with the explicit goal of creating carrier-neutral interconnection hubs. The company built large facilities in Ashburn that combined colocation space with its own switching fabric, allowing networks, enterprises, and content companies to interconnect directly under one roof.
At the same time, other commercial players like Digital Realty began scaling up wholesale data center campuses across Loudoun County. Together, these operators offered a more reliable and scalable alternative, and many networks that had previously connected at MAE-East or smaller sites migrated their interconnection into Equinix Ashburn. Within a few years, the town had become the de facto core of the U.S. internet exchange system.
By the mid-2000s, this concentration snowballed into what locals now call “Data Center Alley.” Loudoun County’s permissive zoning, tax incentives, abundant land, and robust fiber routes – much of it laid along utility and rail corridors – made it the natural choice for hyperscalers like AOL, Yahoo, Google, Microsoft, and later Amazon Web Services to build at scale.
Today, thousands of networks connect through Ashburn’s exchange points – with new participants joining so often that the exact number is hard to pin down. That density is unrivaled anywhere else in the U.S. But it also means a huge share of the nation’s internet, data center, and AI traffic leans on a single metro area.
TeleGeography counts 17 buildings in Ashburn hosting exchange switches, run by four different operators. Megaport, Digital Realty, and LINX also have a presence in the community.
The AI stress test
Industry experts have warned that advanced AI applications will push latency and interconnection requirements beyond what current hubs were built to handle.
The interconnection and traffic sharing these operators enable is called peering.
In what’s called public peering, networks agree to exchange traffic with one another over shared Ethernet switches. That fabric can sit in a single data center or be spread across several of them.
In private peering, networks reach one-on-one agreements to connect with each other. It can be advantageous for large networks exchanging a lot of traffic, like a major ISP and a major streaming service, to share that data directly rather than route it through an intermediary.
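For readers who prefer to see the distinction in code, a toy sketch follows; the class and network names are hypothetical stand-ins, not any operator's actual systems, and real peering involves BGP sessions, route servers, and commercial agreements well beyond this model.

```python
# Toy model of the two peering arrangements described above.
# Purely illustrative: the names are hypothetical, and real peering involves
# BGP sessions, route servers, and commercial agreements beyond this sketch.

from dataclasses import dataclass, field


@dataclass
class PublicPeeringFabric:
    """A shared switch fabric: every connected network can exchange traffic with every other."""
    name: str
    members: set = field(default_factory=set)

    def join(self, network: str) -> None:
        self.members.add(network)

    def can_exchange(self, a: str, b: str) -> bool:
        return a in self.members and b in self.members


@dataclass
class PrivatePeering:
    """A one-on-one link between two networks, e.g. a large ISP and a streaming service."""
    network_a: str
    network_b: str


# Public peering: many networks meet on one fabric, possibly spanning several data centers.
fabric = PublicPeeringFabric("Example-IX")
for net in ("ISP-A", "CDN-B", "Cloud-C"):
    fabric.join(net)
print(fabric.can_exchange("ISP-A", "CDN-B"))  # True

# Private peering: two heavy-traffic networks connect directly instead.
link = PrivatePeering("Major-ISP", "Streaming-Service")
print(link.network_a, "<->", link.network_b)
```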
Bandwidth purchases for data center connectivity have risen nearly 330 percent since 2020, according to Zayo, and McKinsey & Company projects demand for AI-ready data center capacity will grow more than 30 percent annually through 2030.
Meeting that growth won’t just require more servers. An August report from the Fiber Broadband Association predicted that 573 new data centers would be built in the next three years, with most of those centers requiring fiber connections to at least three different Internet exchange points.
New entrants and adaptation
In response, new players are moving into Northern Virginia. On Aug. 26, Community IX announced the launch of CIX-NoVA, its first internet exchange in the region. Service will start in Ashburn and Reston data centers, with more sites under review.
Industry analysts say modern IXPs are evolving into enterprise-focused platforms, with stricter service level agreements, better security, and more efficient use of power as they scale to 400G, 800G, and beyond.
One way exchanges are adapting is by pushing closer to the edge. New projects increasingly place IXPs inside data centers to reduce latency for emerging applications like AI, gaming, IoT, and telemedicine.
To keep pace with enterprise AI workloads, IXPs are layering in new capabilities that go well beyond basic traffic peering. Operators are deploying optical data center interconnects to link hyperscale campuses over short, high-capacity routes, and adding programmable network processors to boost throughput while trimming power consumption.
They are also introducing multi-layer automation across IP and optical networks to detect problems early and meet stricter service-level agreements.
At the same time, IXPs are being pushed into new territory on security and compliance: enterprise customers expect built-in defenses against DDoS and other large-scale attacks, and industries like banking and healthcare are demanding localized data sovereignty so sensitive information never leaves the region.
“As a result, this most recent evolution of the IXP looks very different than its predecessor,” Greg Hankins, senior product line manager for Nokia’s routing product lines, wrote in a column. “While it shares some of the DNA, it is much more focused on adding value and sustainability than peering volumes of traffic cheaply.”
A profile of Equinix
Operators often have several individual data centers that serve as exchange points in a given city, and that network of IXPs is generally considered a single exchange, said Stefan Raab, who leads business development in the Americas for Equinix, a leading operator of data centers and IXPs.

“Generally, I would define an exchange as constrained to a single metro geography made up of one or more data centers that it’s built around,” he told Broadband Breakfast. “The terms have been fairly fluid, and there’s no real need for precision.”
There are currently between 126 and 200 major exchanges in the United States; different databases have different tallies.
Equinix started its Ashburn exchange in the early 2000s in response to major networks looking for a neutral facility – one not owned by any of the participants – to share traffic, Raab said.
He said northern Virginia’s combination of land, available power, and existing network presence – the internet grew out of Defense Department initiatives – primed the area to be a hub for internet infrastructure.
Equinix now operates one of the largest commercial exchanges in Ashburn, with more than 350 networks connected, according to PeeringDB. That’s a number Raab called “not 100 percent accurate, but very directionally correct.”
The rural-urban divide
While Ashburn’s exchange points grow denser, fourteen U.S. states have no IXP at all, forcing their internet traffic to backhaul through centralized hubs like Ashburn, New York, or Chicago.
In regions without a nearby exchange, traffic may traverse hundreds of fiber miles before reaching the closest IXP.
Because optical networks add roughly 1 millisecond of latency for every 50 miles, that detour can introduce dozens of milliseconds of delay, which some experts say poses a potential barrier to future economic development of these regions.
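The arithmetic behind that estimate is simple; the sketch below applies the 1-millisecond-per-50-miles rule of thumb to a hypothetical 600-mile detour and ignores routing and equipment delay, so it is an approximation rather than a measurement.

```python
# Back-of-the-envelope estimate of backhaul latency, using the rule of thumb
# cited above (~1 ms of added latency per 50 fiber miles). Illustrative only;
# real delay also depends on routing, equipment, and the exact fiber path.

def added_latency_ms(fiber_miles: float, round_trip: bool = True) -> float:
    one_way = fiber_miles / 50.0  # ~1 ms per 50 miles
    return 2 * one_way if round_trip else one_way


# Example: a 600-mile backhaul detour to the nearest IXP
print(added_latency_ms(600))         # ~24 ms round trip
print(added_latency_ms(600, False))  # ~12 ms one way
```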
“All of our network traffic backhauls out of the state. All of it,” said Tonya Witherspoon, a former official at Wichita State University in Kansas and the founder of Mindscapes. “It just does not work for the middle of the country to be devoid of critical infrastructure.”
She spoke on a Broadband Breakfast webinar on August 20, 2025.
Witherspoon pointed to precision agriculture and autonomous manufacturing as use cases that will require the lower latency connections enabled by closer exchange points.
She worked with WSU on securing funding and planning for a carrier-neutral IXP on the university’s campus, a partnership with nonprofit Connected Nation and Newby Ventures.
The project broke ground in May and is expected to be up and running by spring of 2026. It benefited from a $5 million grant from Kansas’s Lasting Infrastructure and Network Connectivity program.
Connected Nation and Newby Ventures are aiming to construct another 125 IXPs across the country, but it’s been relatively slow going so far.
“If we have to wait for grants like we did with the state, that will take the rest of our lives and our kids’ lives,” Hunter Newby, who runs Newby Ventures, said of the Kansas project at a Broadband Breakfast event in Washington in March. “Not gonna happen. We’ve got to find another way to do it.”
Brian Smith, a telecom expert who spoke at the event with Witherspoon, agreed the country “definitely” needs more IXPs, but said keeping them running in less populated areas could be a challenge.
Smith noted they can be expensive to operate, north of $100,000 per year, and require additional infrastructure of their own.
“The other problem is you have LECs and cable companies that probably won’t interconnect at those IXPs, because they don’t today,” he said. “They could, but they choose not to because they want to independently offer their routes.”
In an exchange with Broadband Breakfast, Equinix’s Raab pointed to an IXP in Minneapolis, which he said had an “outsized number of participants” given its proximity to existing hubs in Chicago and Ashburn.
“That crew there has just done a good job of building a case and a value proposition and bringing networks together,” he said. “There’s a bunch of science to it, but then there’s some art to it as well.”
First Dateline Ashburn story: How to Break the Internet
Second Dateline Ashburn story: How do IXPs contribute to infrastructure resiliency?
Third Dateline Ashburn story: What is the energy impact of data centers?
Fourth Dateline Ashburn story: What is the water impact of data centers?
