Artificial Intelligence
‘Watershed Moment’ Has Experts Calling for Increased Federal Regulation of AI
New AI developments could impact jobs that have traditionally been considered safe from technological displacement.
WASHINGTON, April 28, 2023 — As artificial intelligence technologies continue to rapidly develop, many industry leaders are calling for increased federal regulation to address potential technological displacement, algorithmic discrimination and other harms — while other experts warn that such regulation could stifle innovation.
“It’s fair to say that this is a watershed moment,” said Reggie Townsend, vice president of the data ethics practice at the SAS Institute, at a panel hosted Wednesday by the Brookings Institution. “But we have to be honest about this as well, which is to say, there will be displacement.”

Screenshot of Reggie Townsend, vice president of the data ethics practice at the SAS Institute, at the Brookings Institution event
While some AI displacement is comparable to that caused by earlier technological advances, such as self-checkout machines and ATMs, Townsend argued that the current moment “feels a little bit different… because of the urgency attached to it.”
Recent AI developments have the potential to impact job categories that have traditionally been considered safe from technological displacement, agreed Cameron Kerry, a distinguished visiting fellow at Brookings.
In order to best equip people for the coming changes, experts emphasized the importance of increasing public knowledge of how AI technologies work. Townsend compared this goal to the general baseline knowledge that most people have about electricity. “We’ve got to raise our level of common understanding about AI similar to the way we all know not to put a fork in the sockets,” he said.
Some potential harms of AI may be mitigated by public education, but a strong regulatory framework is critical to ensure that industry players adhere to responsible development practices, said Susan Gonzales, founder and CEO at AIandYou.
“Leaders of certain companies are coming out and they’re communicating their commitment to trustworthy and responsible AI — but then meanwhile, the week before, they decimated their ethical AI departments,” Gonzales added.
Some experts caution against overregulation in low-risk use cases
However, some experts warn that the regulations themselves could cause harm. Overly strict regulations could hamper further AI innovation and limit the benefits that have already emerged — which range from increasing workplace productivity to more effectively detecting certain types of cancer, said Daniel Castro, director of the Center for Data Innovation, at a Broadband Breakfast event on Wednesday.
“We should want to see this technology being deployed,” Castro said. “There are areas where it will likely have lifesaving impacts; it will have very positive impacts on the economy. And so part of our policy conversation should also be, not just how do we make sure things don’t go wrong, but how do we make sure things go right.”
Effective AI oversight should distinguish between the different risk levels of various AI use cases before determining the appropriate regulatory approaches, said Aaron Cooper, vice president of global policy for the software industry group BSA.
“The AI system for [configuring a] router doesn’t have the same considerations as the AI system for an employment case, or even in a self-driving vehicle,” he said.
There are already laws that govern many potential cases of AI-related harms, even if those laws do not specifically refer to AI, Cooper noted.
“We just think that in high-risk situations, there are some extra steps that the developer and the deployer of the AI system can take to help mitigate that risk and limit the possibility of it happening in the first place,” he said.
Multiple entities considering AI governance
Very little legislation currently governs the use of AI in the United States, but the issue has recently garnered significant attention from Congress, the Federal Trade Commission, the National Telecommunications and Information Administration and other federal entities.
The National Artificial Intelligence Advisory Committee on Tuesday released a draft report detailing recommendations based on its first year of research, concluding that AI “requires immediate, significant and sustained government attention.”
One of the report’s most important action items is increasing sociotechnical research on AI systems and their impacts, said EqualAI CEO Miriam Vogel, who chairs the committee.
Throughout the AI development process, Vogel explained, each human touchpoint presents the risk of incorporating the developer’s biases — as well as a crucial opportunity for identifying and fixing these issues before they become embedded.
Vogel also countered the idea that regulation would necessarily stifle future AI development.
“If we don’t have more people participating in the process, with a broad array of perspectives, our AI will suffer,” she said. “There are study after study that show that the broader diversity in who is… building your AI, the better your AI system will be.”
Wednesday, April 26, 2023, 12 Noon ET – Should AI Be Regulated?
The recent explosion in artificial intelligence has generated significant excitement, but it has also amplified concerns about how the powerful technology should be regulated — and highlighted the lack of safeguards currently in place. What are the potential risks associated with artificial intelligence deployment? Which concerns are likely just fearmongering? And what are the respective roles of government and industry players in determining future regulatory structures?
Panelists
- Daniel Castro, Vice President, Information Technology and Innovation Foundation and Director, Center for Data Innovation
- Aaron Cooper, Vice President of Global Policy, BSA | The Software Alliance
- Rebecca Klar (moderator), Technology Policy Reporter, The Hill
Panelist resources
- Ten Principles for Regulation That Does Not Harm AI Innovation, Daniel Castro, February 8, 2023
Daniel Castro is vice president at the Information Technology and Innovation Foundation and director of ITIF’s Center for Data Innovation. Castro writes and speaks on a variety of issues related to information technology and internet policy, including privacy, security, intellectual property, Internet governance, e-government and accessibility for people with disabilities. In 2013, Castro was named to FedScoop’s list of the “top 25 most influential people under 40 in government and tech.”
Aaron Cooper serves as vice president of Global Policy for BSA | The Software Alliance. In this role, Cooper leads BSA’s global policy team and contributes to the advancement of BSA members’ policy priorities around the world that affect the development of emerging technologies, including data privacy, cybersecurity, AI regulation, data flows and digital trade. He testifies before Congress and is a frequent speaker on data governance and other issues important to the software industry.
Rebecca Klar is a technology policy reporter at The Hill, covering data privacy, antitrust law, online disinformation and other issues facing the evolving tech world. She is a native New Yorker and graduated from Binghamton University. She previously covered local news at The York Dispatch in York, Pa. and The Island Now in Nassau County, N.Y.

Artificial Intelligence
Experts Debate Artificial Intelligence Licensing Legislation
Licensing requirements would distract from wide-scale testing and limit competition, experts said at an event.

WASHINGTON, May 23, 2023 – Experts on artificial intelligence disagree on whether licensing is the right regulatory approach for the technology.
If adopted, licensing rules would require companies to obtain a federal license before developing AI technology. Last week, OpenAI CEO Sam Altman testified that Congress should consider a series of licensing and testing requirements for AI models above a certain threshold of capability.
At a Public Knowledge event Monday, Aalok Mehta, head of U.S. public policy at OpenAI, added that licensing is a means of ensuring that AI developers put safety practices in place. Establishing licensing rules creates external validation tools that will improve the consumer experience, he said.
Generative AI — the model used by chatbots including OpenAI’s widely popular ChatGPT and Google’s Bard — is AI designed to produce content rather than simply process information, which could have widespread effects on copyright disputes and disinformation, experts have said. Many industry experts have called for more federal AI regulation, warning that widespread AI applications could lead to broad societal risks, including an uptick in online disinformation, technological displacement, algorithmic discrimination and other harms.
Some industry leaders, however, are concerned that calls for licensing are a way of shutting the door to competition and new startups by large companies like OpenAI and Google.
B Cavello, director of emerging technologies at the Aspen Institute, said Monday that licensing requirements place burdens on competition, particularly on small start-ups.
Implementing licensing requirements can create a threshold that defines which players are allowed in the AI space and which are not, Cavello said. Licensing can make it more difficult for smaller players to gain traction in the competitive space, Cavello added.
The resources required to support these systems already create a barrier that can be tough to break through, Cavello continued. While there should be mandates for greater testing and transparency, licensing can also present unique challenges that regulators should seek to avoid, Cavello said.
Austin Carson, founder and president of SeedAI, said a licensing model would not get to the heart of the issue, which is to make sure AI developers are consciously testing and measuring their own models.
The most important thing is to support the development of an ecosystem that revolves around assurance and testing, Carson said. Although no mechanisms currently exist for wide-scale testing, such testing will be critical to supporting the technology, he said.
Base-level testing at this scale will require that all parties participate, Carson emphasized. All parties need to feel a sense of accountability for the systems they host, he said.
In testimony last week, Christina Montgomery, chair of IBM’s AI ethics board, urged Congress to adopt a “precision regulation” approach that would govern AI in specific use cases rather than regulate the technology itself.
Artificial Intelligence
Senate Witnesses Call For AI Transparency
Transparency requirements for AI would make federal agencies and companies more accountable to the public.

WASHINGTON, May 16, 2023 – Congress should increase regulatory requirements for transparency in artificial intelligence while adopting the technology in federal agencies, said witnesses at a Senate Homeland Security and Governmental Affairs Committee hearing on Tuesday.
Many industry experts have called for more federal AI regulation, warning that widespread AI applications could lead to broad societal risks, including an uptick in online disinformation, technological displacement, algorithmic discrimination and other harms.
The hearing addressed implementing AI in federal agencies. Congress is concerned with ensuring that the United States government is prepared to capitalize on the capabilities afforded by AI technology while also protecting the constitutional rights of citizens, said Sen. Gary Peters, D-Mich.
The United States “is suffering from a lack of leadership and prioritization on these topics,” said Lynne Parker, director of the AI Tennessee Initiative at the University of Tennessee, in her comments.
In a separate hearing Tuesday, OpenAI CEO Sam Altman said it is “essential that powerful AI is developed with democratic values in mind, which means U.S. leadership is critical.”
Applications of AI are immensely beneficial, Altman said. However, “we think that regulatory intervention by governments will be crucial to mitigate the risks of increasingly powerful models.”
To do so, Altman suggested that the U.S. government consider a combination of licensing and testing requirements for the development and release of AI models above a certain threshold of capability.
Companies like OpenAI can partner with governments to ensure AI models adhere to a set of safety requirements, facilitate efficient processes, and examine opportunities for global coordination, he said.
Building accountability into AI systems
Seizing this moment to modernize the government’s systems will strengthen the country, said Daniel Ho, a professor at Stanford Law School, who encouraged Congress to lead by example in implementing accountable AI practices.
An accountable system ensures that agencies are answerable to the public and to those whom AI algorithms directly affect, added Richard Eppink of the American Civil Liberties Union of Idaho Foundation.
A serious risk of implementing AI is that it can conceal how the systems work, including the bad data they may be trained on, Eppink said. This can prevent accountability to the public and put citizens’ constitutional rights at risk, he said.
To prevent this, the federal government should implement transparency requirements and governance standards, including transparency throughout the implementation process, Eppink said. Citizens have the right to the same information that the government has so that accountability can be maintained, he concluded.
Parker suggested that Congress appoint a Chief AI Director at each agency to help develop that agency’s AI strategy, and establish an interagency Chief AI Council to govern the use of the technology across the federal government.
Getting technical talent into the government workforce is the predicate to addressing a range of issues we face today, agreed Ho, who noted that less than two percent of AI personnel work in government. He urged Congress to establish pathways and trajectories for technical agencies to attract AI talent to public service.
Congress considers AI regulation
Congress’s attention has been captured by growing AI regulatory concerns.
In April, Sen. Chuck Schumer, D-N.Y., proposed a high-level AI policy framework focused on ensuring transparency and accountability by requiring companies to allow independent experts to review and test AI technologies and to make the results publicly available.
Later in April, Rep. Yvette Clarke, D-N.Y., introduced a bill that would require the disclosure of AI-generated content in political ads.
The Biden administration announced on May 4 that it will invest $140 million to launch seven new National AI Research Institutes, bringing the total number of institutes to 25 across the country.
Antitrust
Google CEO Promotes AI Regulation, GOP Urges TikTok Ban for Congress Members, States Join DOJ Antitrust Suit
Widespread AI applications could lead to a dramatic uptick in online disinformation, Pichai warned.

April 18, 2023 — Google CEO Sundar Pichai on Sunday called for increased regulation of artificial intelligence, warning that the rapidly developing technology poses broad societal risks.
“The pace at which we can think and adapt as societal institutions compared to the pace at which the technology’s evolving — there seems to be a mismatch,” Pichai said in an interview with CBS News.
Widespread AI applications could lead to a dramatic uptick in online disinformation, as it becomes increasingly easy to create and spread fake news, images and videos, Pichai warned.
Google recently released a series of recommendations for regulating AI, advocating for “a sectoral approach that builds on existing regulation” and cautioning against “over-reliance on human oversight as a solution to AI issues.”
But the document also noted that “while self-regulation is vital, it is not enough.”
Pichai emphasized this point, calling for broad multisector collaboration to best determine the shape of AI regulation.
“The development of this needs to include not just engineers, but social scientists, ethicists, philosophers and so on,” he said. “And I think these are all things society needs to figure out as we move along — it’s not for a company to decide.”
Republicans call to ban members of Congress from personal TikTok use
A group of Republican lawmakers on Monday urged the House and Senate rules committees to ban members of Congress from using TikTok, citing national security risks and the need to “lead by example.”
Congress banned use of the app on government devices in late 2022, but several elected officials have maintained accounts on their personal devices.
In Monday’s letter, Republican lawmakers argued that the recent hearing featuring TikTok CEO Shou Zi Chew made it “blatantly clear to the public that the China-based app is mining data and potentially spying on American citizens.”
“It is troublesome that some members continue to disregard these clear warnings and are even encouraging their constituents to use TikTok to interface with their elected representatives – especially since some of these users are minors,” the letter continued.
TikTok is facing hostility from the other side of the aisle as well. On Thursday, Rep. Frank Pallone, D-N.J., sent Chew a list of questions about the app’s privacy and safety practices that House Democrats claimed were left unanswered at the March hearing.
Meanwhile, Montana lawmakers voted Friday to ban TikTok on all personal devices, becoming the first state to pass such legislation. The bill now awaits the signature of Gov. Greg Gianforte — who was one of several state leaders last year to mimic Congress in banning TikTok from government devices.
Nine additional states join DOJ’s antitrust lawsuit against Google
The Justice Department announced on Monday that nine additional states joined its antitrust lawsuit over Google’s alleged abuse of the digital advertising market.
The attorneys general of Arizona, Illinois, Michigan, Minnesota, Nebraska, New Hampshire, North Carolina, Washington and West Virginia joined the existing coalition of California, Colorado, Connecticut, New Jersey, New York, Rhode Island, Tennessee and Virginia.
“We look forward to litigating this important case alongside our state law enforcement partners to end Google’s long-running monopoly in digital advertising technology markets,” said Doha Mekki, principal deputy assistant attorney general of the Justice Department’s Antitrust Division.
The lawsuit alleges that Google monopolizes digital advertising technologies used for both buying and selling ads, said Jonathan Kanter, assistant attorney general of the Justice Department’s Antitrust Division, when the suit was filed in January.
“Our complaint sets forth detailed allegations explaining how Google engaged in 15 years of sustained conduct that had — and continues to have — the effect of driving out rivals, diminishing competition, inflating advertising costs, reducing revenues for news publishers and content creators, snuffing out innovation, and harming the exchange of information and ideas in the public sphere,” Kanter said.