Arthur Sidney: Broadband Is Becoming AI Infrastructure. Who Actually Controls It?

As broadband becomes the delivery layer for AI-driven public services, agencies must build audit rights, explainability, and override authority into procurement contracts.

The author of this Expert Opinion is Arthur Sidney. His bio is below.

Broadband is no longer just connectivity infrastructure. It is becoming the delivery layer for AI-enabled public services.

That means federal and state broadband programs should treat AI governance as an infrastructure requirement, not a downstream compliance issue. Agencies should use procurement to require audit rights, system logs, explainability, override authority, and change-control obligations before AI systems are embedded into broadband-supported services.

Without these controls, agencies may be responsible for automated decisions they cannot explain or control.

As billions of dollars continue to flow into broadband buildout across the United States, government agencies are simultaneously integrating artificial intelligence into platforms that manage eligibility, detect fraud, allocate resources, and deliver services. These developments are often discussed as separate policy tracks, but in practice they are converging in ways that fundamentally reshape where decision-making authority resides.

As infrastructure becomes software-driven, authority begins to shift from institutions to systems. Decisions that were once made through administrative processes are now mediated by automated tools that operate at scale, often with limited transparency and evolving behavior.

Many of these systems are developed and maintained by external vendors, hosted on infrastructure that agencies do not fully control, and updated over time in ways that can alter outputs without corresponding changes in formal approval. The result is not simply a technical transformation, but a structural one, in which the relationship between responsibility and control becomes increasingly difficult to maintain.

This shift becomes most visible after deployment, when the central governance questions are no longer about whether a system functions as intended, but whether the institution relying on it retains the ability to understand and intervene in its operation.

The critical questions are straightforward, even if the answers often are not: who can audit the system’s behavior, who can explain the basis of its outputs, and who has the authority to pause or override it when conditions change or outcomes cannot be defended? In many cases, these capabilities are incomplete, fragmented, or absent altogether.

Broadband policy has encountered analogous challenges before, particularly in moments where infrastructure expands more rapidly than oversight mechanisms can adapt. Responsibility becomes distributed across agencies, contractors, and technical systems, and when failures occur, accountability becomes difficult to trace. Artificial intelligence intensifies this dynamic by introducing systems that are not static, but continuously evolving.

Models are retrained, data inputs shift, and outputs change over time, meaning that the system initially approved is not necessarily the system that is operating in practice. This creates a growing disconnect between formal authorization and operational reality.

In this context, an agency may fund the infrastructure, execute the contract, and remain formally accountable for outcomes, while lacking the practical ability to interrogate system behavior or intervene in real time. That gap between formal responsibility and operational control is not merely a technical inconvenience; it is a governance problem with direct implications for public accountability.

Broadband infrastructure, after all, is no longer just a conduit for connectivity. It has become the delivery layer for a wide range of public services, including subsidies, telehealth, education, and workforce programs, all of which are increasingly shaped by automated decision-making systems embedded within that infrastructure.

Once those systems begin to produce consequential outcomes, the nature of failure changes. If an automated system denies benefits, flags individuals for enforcement, or restricts access to services, the relevant question is not simply whether the system performed as designed, but whether the agency retains the ability to examine, challenge, and, if necessary, override the decision. Where that capacity is lacking, institutions are placed in the position of being responsible for decisions they do not fully control, a condition that raises both operational and legal concerns.

The tendency in policy discussions is to treat these issues as downstream problems to be addressed after systems are deployed and their effects become visible. By that point, however, the systems are often deeply embedded in institutional processes, with significant switching costs and limited flexibility. The more effective point of intervention lies earlier, at the stage of procurement, where agencies define the conditions under which systems are adopted and integrated into public operations.

Procurement decisions determine whether agencies will have access to system logs sufficient to reconstruct decision pathways, whether model behavior can be examined and explained, and whether clear authority exists to intervene when outcomes cannot be justified.

These are not merely technical specifications; they are the terms that determine whether institutions retain meaningful control over the systems on which they depend. Without such conditions, infrastructure policy risks repeating a familiar cycle in which systems scale quickly, oversight develops more slowly, and accountability emerges only after harm has occurred.

Broadband is now too central to public service delivery for that pattern to persist without consequence. It carries not only connectivity, but decisions, and as those decisions become increasingly automated, the question of control becomes unavoidable.

The issue is no longer who built the system or even how it was initially approved, but whether the institutions responsible for its use retain the authority to intervene when it matters most. In the emerging landscape of AI-enabled infrastructure, that authority is what ultimately defines where power resides.

Arthur Sidney is a public policy strategist and attorney working at the intersection of technology, governance, and institutional risk. He is a former Congressional Chief of Staff and Chief Counsel in the U.S. House of Representatives, and for more than 25 years he has advised Members of Congress, federal agencies, global companies, and mission-driven organizations on technology policy, regulatory strategy, and government affairs. He is available for short-term advisory and client work on AI governance, regulatory risk, federal and state tech policy, procurement strategy, and rapid-turn policy briefs. This Expert Opinion is exclusive to Broadband Breakfast.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.
