Artificial Intelligence
Congress Should Focus on Tech Regulation, Says Former Tech Industry Lobbyist
Congress should shift focus from speech debates to regulation on emerging technologies, says expert.
WASHINGTON, March 9, 2023 – Congress should focus on technology regulation, particularly for emerging technologies, rather than speech debates, said Adam Conner, vice president of technology policy at American Progress, at Broadband Breakfast’s Big Tech and Speech Summit on Thursday.
Conner challenged the view of many in the industry who assume that any change to current laws, including Section 230, would only make the internet worse.
Conner, who aims to build a progressive technology policy platform and agenda, spent the past 15 years working in Washington for several Silicon Valley companies, including Slack Technologies and Brigade. In 2007, he founded Facebook’s Washington office.
That mindset, Conner argued, traps industry leaders in the assumption that the internet is currently the best it could ever be, which he called a fallacy. To escape it, he suggested the industry focus on regulating new and emerging technologies like artificial intelligence.
Recent AI innovations like ChatGPT create the most human-readable AI experience ever made, spanning text, images, and video, Conner said. The spread of AI will completely change the discussion about protecting free speech, he said, urging Congress to draft laws now to ensure its safe use in the United States.
Congress should start its AI regulation with privacy, antitrust, and child safety laws, he said. Doing so would prove to American citizens that the internet can, in fact, be better than it is now, and would pave the way for future policy amendments, he said.
Artificial Intelligence
Will Rinehart: Unpacking the Executive Order on Artificial Intelligence
Most are underweighting the legal challenges and the problems for the rule of law.

If police are working on an investigation and want to tap your phone lines, they’ll effectively need to get a warrant. They will also need to get a warrant to search your home, your business, and your mail.
But if they want to access your email, all they need to do is wait 180 days.
Because of a 1986 law called the Electronic Communications Privacy Act, people using third-party email providers, like Gmail, only get 180 days of warrant protection. It’s an odd quirk of the law that only exists because no one in 1986 could imagine holding onto emails longer than 180 days. There simply wasn’t space for it back then!¹
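Illustratively, the 180-day rule reduces to a single date comparison. Below is a minimal sketch of the rule as described above; the function and constant names are my own, not anything drawn from the statute:

```python
from datetime import date, timedelta

# ECPA's 1986 cutoff: stored email older than this loses warrant protection.
ECPA_WINDOW = timedelta(days=180)

def warrant_required(received: date, today: date) -> bool:
    """Toy check of the 180-day rule described above: within the window,
    investigators need a warrant; past it, they don't."""
    return today - received <= ECPA_WINDOW

# An email received in early January has lost its protection by August.
print(warrant_required(date(2023, 1, 5), date(2023, 8, 1)))  # False
```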
ECPA is a stark illustration of a consistent phenomenon in government: policy choices, especially technical requirements, have durable and long-lasting effects. There are more mundane examples as well. GPS could be dramatically more accurate, but when its optical system was recently upgraded, the improvement was held back by a technical requirement in the Federal Enterprise Architecture Framework (FEAF) of 1999. More accurate headlights have been shown to reduce night crashes, yet adaptive headlights were only approved last year, nearly 16 years after Europe, because of technical requirements in FMVSS 108. All it takes is one law or regulation to crystallize an idea into an enduring framework that fails to keep up with developments.
I fear the approach pushed by the White House in its recent executive order on AI might represent another crystallization moment. ChatGPT has been public for a year, the models on which it is based are only five years old, and yet the administration is already working to set the terms for regulation.
The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” is sprawling. It spans 13 sections, extends over 100 pages, and lays out nearly 100 deliverables across every major agency. While there are praiseworthy elements to the document, there is also a lot of cause for concern.
Among the biggest changes is the new authority the White House has claimed over newly designated “dual use foundation models.” As the EO defines it, a dual-use foundation model is
- an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.
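Read as a checklist, the definition contains only one measurable criterion. The sketch below is my own hypothetical rendering of the EO’s text as a screening predicate; the dataclass, field names, and parameter floor are illustrative assumptions, not anything the EO prescribes:

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    parameters: int                # the only hard number in the definition
    trained_on_broad_data: bool    # everything below is a judgment call
    uses_self_supervision: bool
    broadly_applicable: bool
    serious_risk_capability: bool  # weapons, cyber, or evading oversight

# One reading of "at least tens of billions of parameters."
PARAM_FLOOR = 10_000_000_000

def is_dual_use_foundation_model(m: ModelProfile) -> bool:
    """Hypothetical screen mirroring the EO definition quoted above."""
    return (m.parameters >= PARAM_FLOOR
            and m.trained_on_broad_data
            and m.uses_self_supervision
            and m.broadly_applicable
            and m.serious_risk_capability)
```

Everything but the parameter floor turns on qualitative judgment, which makes the designation’s eventual reach hard to predict.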
While the designation seems to be common sense, it is new and without provenance. Until last week, no one had talked about dual-use foundation models. Still, the designation does comport with the power the president has over the export of military tech.
As the EO explains it, the administration is especially interested in those models with the potential to
- lower the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear weapons;
- enable powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
- permit the evasion of human control or oversight through means of deception or obfuscation.
The White House is justifying its regulation of these models under the Defense Production Act, a federal law first enacted in 1950 to respond to the Korean War. Modeled after World War II’s War Powers Acts, the DPA was part of a broad civil defense and war mobilization effort that gave the President the power to requisition materials and property, expand government and private defense production capacity, ration consumer goods, and fix wage and price ceilings, among other powers.
The DPA is reauthorized every five years, which has allowed Congress to expand the set of presidential powers in the DPA. Today, the allowable use of the DPA extends far beyond U.S. military preparedness and includes domestic preparedness, response, and recovery from hazards, terrorist attacks, and other national emergencies. The DPA has long been intended to address market failures and slow procurement processes in times of crisis. Now the Biden administration is using the DPA to force companies to open up their AI models.
The administration’s invocation of the Defense Production Act is clearly a strategic maneuver to use the full extent of its DPA power in service of Biden’s AI policy agenda. The difficult part of this process now sits with the Department of Commerce, which has 90 days to issue regulations.
In turn, the department will likely use the DPA’s industrial base assessment power to force companies to disclose various aspects of their AI models. Soon enough, companies behind dual-use foundation models will have to report test results to the government based on guidance developed by the National Institute of Standards and Technology (NIST). But that guidance won’t be available for another 270 days. In other words, Commerce will be regulating companies before anyone knows what they will be beholden to.
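To make the timing mismatch concrete: the EO was signed on October 30, 2023, so the two clocks can be computed directly. The arithmetic below is mine; the EO supplies only the day counts:

```python
from datetime import date, timedelta

EO_SIGNED = date(2023, 10, 30)  # signing date of the AI executive order

commerce_deadline = EO_SIGNED + timedelta(days=90)   # Commerce regulations due
nist_deadline = EO_SIGNED + timedelta(days=270)      # NIST test guidance due

print(commerce_deadline)                         # 2024-01-28
print(nist_deadline)                             # 2024-07-26
print((nist_deadline - commerce_deadline).days)  # 180
```

Commerce’s rules come due a full 180 days before the NIST guidance those rules are supposed to rest on.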
Recent news from the United Kingdom suggests that all of the major players in AI are going to be included in the new regulation. In closing out a two-day summit on AI, British Prime Minister Rishi Sunak announced that eight companies were going to give deeper access to their models under an agreement signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, Korea, Singapore, the U.S. and the U.K. Those eight companies were Amazon Web Services, Anthropic, Google (along with its subsidiary DeepMind), Inflection AI, Meta, Microsoft, Mistral AI, and OpenAI.
Thankfully, the administration isn’t pushing for a pause on AI development, isn’t denouncing more advanced models, and isn’t suggesting that AI needs to be licensed. But this is probably because doing so would face a tough legal challenge. Indeed, it seems little appreciated by the AI community that the demand to report on models is a kind of compelled speech, which has typically triggered First Amendment scrutiny. But the courts have occasionally recognized that compelled commercial speech may actually advance First Amendment interests more than undermine them.
The EO clearly marks a shift in AI regulation because of what will come next. In addition to the countless deliverables, the EO encourages agencies to use their full power to advance rulemaking.
For example, the EO explains that,
- the Federal Trade Commission is encouraged to consider, as it deems appropriate, whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act, 15 U.S.C. 41 et seq., to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.
Innocuous as this encouragement may seem, the Federal Trade Commission, along with all of the other agencies the administration has encouraged to use their power, could come under court scrutiny. In West Virginia v. EPA, the Supreme Court made it more difficult for agencies to expand their power when it established the major questions doctrine. This new line of legal reasoning takes an ax to agency delegation: unless there’s explicit, clear-cut authority granted by Congress, an agency cannot regulate a major economic or political issue. Agency efforts to push rules on AI could get caught up in the courts.
To be fair, there are a lot of positive actions that this EO advances.² But details matter, and it will take time for the critical details to emerge.
Meanwhile, we need to be attentive to the creep of power. As Adam Thierer described this catch-22,
- While there is nothing wrong with federal agencies being encouraged through the EO to use NIST’s AI Risk Management Framework to help guide sensible AI governance standards, it is crucial to recall that the framework is voluntary and meant to be highly flexible and iterative—not an open-ended mandate for widespread algorithmic regulation. The Biden EO appears to empower agencies to gradually convert that voluntary guidance and other amorphous guidelines into a sort of back-door regulatory regime (a process made easier by the lack of congressional action on AI issues).
In all, the EO is a mixed bag that will take time to shake out. On this, my colleague Neil Chilson is right: some of it is good, some is bad, and some is downright ugly.
Still, the path we are navigating with the executive order on AI parallels those of ECPA, GPS, and adaptive headlights. It underscores a fundamental truth about legal decisions: even the technical rules we set today will shape the landscape for years, perhaps decades, to come. As we move forward, we must tread carefully, ensuring that our legal frameworks are adaptable and resilient, capable of evolving alongside the very technologies they seek to regulate.
Will Rinehart is a senior research fellow at the Center for Growth and Opportunity, where he specializes in telecommunication, internet and data policy, with a focus on emerging technologies and innovation. He was formerly the Director of Technology and Innovation Policy at the American Action Forum and before that a research fellow at TechFreedom and the director of operations at the International Center for Law & Economics. This piece originally appeared in the Exformation Newsletter on November 9, 2023, and is reprinted with permission.
Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.
Artificial Intelligence
Senators Pitch New Agency for Tech Regulation to Address FTC Shortcomings
Democratic Sens. Michael Bennet of Colorado and Peter Welch of Vermont are urging the creation of a new tech regulatory agency.

WASHINGTON, November 2, 2023 – Sen. Michael Bennet, D-Colorado, and Sen. Peter Welch, D-Vermont, reiterated at a Brookings event Tuesday the need for the United States to form a new agency to oversee tech regulation.
The senators, alongside former Federal Communications Commission Chairman Tom Wheeler, argued that the government’s approach to regulating AI, social media and big tech does not match the speed at which those industries are changing.
Bennet and Welch both outlined how the Federal Trade Commission and the Department of Justice, two entities that are heavily involved in regulating large tech companies, govern so broadly that they are unable to properly deal with specific cases.
The two added that those agencies lack the specific expertise in tech fields needed to address key issues.
“Despite their work to enforce existing antitrust and consumer protection laws, they lack the expert staff and resources necessary for robust oversight,” Bennet said in an earlier press release. “Moreover, both bodies are limited by existing statutes to react to case-specific challenges raised by digital platforms, when proactive, long-term rules for the sector are required.”
The conversation comes after the two senators introduced a digital technology regulatory bill in May 2023 outlining how a proposed new agency would regulate the tech industry in consultation with the FTC and the DOJ.
Their bill would establish a five-person agency to address tech regulation and antitrust cases, as well as protections against harms such as harmful algorithms.
“For far too long, these companies have largely escaped regulatory scrutiny, but that can’t continue. It’s time to establish an independent agency to provide comprehensive oversight of social media companies,” said Welch in the same press release.
Wheeler, who moderated the event, echoed their concerns; his book Techlash argues that innovators drive tech development and that government follows their lead in regulation.
Artificial Intelligence
U.S. and Singapore to Strengthen AI and Tech Partnership
The nations held their first Critical Emerging Technology Dialogue in D.C. on Thursday.

WASHINGTON, October 13, 2023 – The United States and Singapore announced on Thursday a new partnership to strengthen ties on artificial intelligence and other technological research. The nations launched the initiative, called the Critical Emerging Technology Dialogue, in D.C. on the same day.
Building on a 2022 meeting between U.S. President Joe Biden and Singaporean Prime Minister Lee Hsien Loong, senior officials from both governments – including Deputy Prime Minister Lawrence Wong from Singapore and National Security Advisor Jake Sullivan from the U.S. – met in Washington for discussions on six areas of focus.
Artificial intelligence
The countries intend to launch a joint AI governance group, according to a White House statement. The group would focus on ensuring “safe, trustworthy, and responsible AI innovation,” the statement said.
The Commerce Department’s National Institute of Standards and Technology recently completed an exercise with the Singapore Infocomm Media Development Authority on AI risk management. Both nations are looking to expand on that and collaborate on research into AI security, the statement said.
AI regulation has been a subject of discussion in Washington. Biden announced in September he plans to issue an executive order on the issue by the end of the year, and a group of Congressional Democrats pushed him on Thursday to use their proposed AI Bill of Rights to inform that policy.
Quantum computing
American and Singaporean agencies are planning to collaborate on post-quantum cryptography methods and standards. While current quantum computers are rudimentary, the technology is in theory capable of cracking current encryption methods.
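The threat is easy to see in miniature. RSA, which secures much of today’s internet traffic, depends on factoring being infeasible at real key sizes; the toy sketch below brute-forces a deliberately tiny modulus to recover a private key, which is roughly what Shor’s algorithm on a large quantum computer would do to a real 2048-bit key. The numbers are textbook toys, not real parameters:

```python
# Toy illustration: RSA is safe only because factoring n is hard at real
# key sizes. At toy sizes, trial division suffices; Shor's algorithm on a
# large quantum computer would do the equivalent at real sizes.

def crack_toy_rsa(n: int, e: int) -> int:
    """Recover the RSA private exponent d by factoring a tiny modulus n."""
    p = next(f for f in range(2, n) if n % f == 0)  # brute-force a factor
    q = n // p
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)  # modular inverse of e = the private exponent

n, e = 3233, 17               # classic textbook key (p = 61, q = 53)
d = crack_toy_rsa(n, e)
message = 42
assert pow(pow(message, e, n), d, n) == message  # cracked key decrypts
print(f"recovered private exponent: d = {d}")    # d = 2753
```

Post-quantum methods replace the factoring assumption with problems, such as those based on lattices, that are believed to resist quantum attack as well.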
Biotechnology
The countries plan to convene universities, private and public research institutions, and government agencies on advancing research into gene therapies and delivery systems for those therapies. The nations also expressed an intent to connect their biotechnology startup communities to exchange best practices on scaling, as well as research and development.
Officials also discussed defense technology, data governance, and climate resilience. The next CET Dialogue is planned for 2024 in Singapore.