Risks of Open Source AI Also Exist in Closed Systems: Panelists

Public policy and industry experts call for forward-looking AI regulations.

Screenshot of Tim DeStefano, Jessie Wang, Mike Linksvayer, Frank Nagle, and Karl Zhao (left to right) speaking at “Discussions on AI: Open vs. Closed AI Development” on Monday, June 9, 2025.

WASHINGTON, June 10, 2025 – Many of the risks associated with open-source AI also exist in closed systems, panelists at a Georgetown University event said Monday, challenging the common perception that openness inherently makes AI more dangerous.

The discussion hosted by Georgetown’s Center for Business and Public Policy brought together academics, policy experts, and industry voices to explore how the design and accessibility of AI systems, whether open-source or proprietary, could shape innovation, governance, and global competitiveness.

“Many researchers have shown that we can get around the guardrails for the closed systems quite easily,” said Frank Nagle, assistant professor in the Strategy Unit at Harvard Business School. “I think that the risks that we see for both open and closed, many of them are overlapping, and many are more overlapping than what people generally think.” 

Open AI models are publicly available and can be inspected, modified, or reused by anyone. Closed models, by contrast, are proprietary systems controlled by companies like OpenAI, Google, or Anthropic.

“In some ways, the closed AI is a bigger risk simply because of the massive distribution,” said Mike Linksvayer, vice president of developer policy at GitHub. “There’s a whole range of actors – from the scariest state-sponsored actors – who have access to their own capabilities, so open models don’t add anything marginal to that.”

“A closed [system] with a nice user interface that’s cheap is the biggest risk there,” Linksvayer emphasized.

Optimism about potential of open-source AI

Throughout the event, panelists expressed optimism about the potential of open-source AI, while noting that many of the risks associated with open AI are present with closed AI as well.

“Frankly, the cat’s already out of the bag; it’s too late,” Nagle said, emphasizing that open-source AI is already widespread and can’t be rolled back.

“The real question is, do we want to be at the cutting edge?” Nagle remarked. 

“Today in AI [and] data analytics, open source is the leading edge and has been for a decade,” Nagle explained. “Therefore, outlawing open source AI in the US would be a massive competitive disadvantage and would give our competitors and our allies a big step up in the race to be at that cutting edge.”

Policymakers need to be ready to act quickly

The panelists urged policymakers to be ready to respond quickly to new developments in the industry.

“This is a really fast moving field, so promptness of addressing challenges associated with AI use, open or closed, is important,” said Jessie Wang, economist and professor of policy analysis at RAND School of Public Policy. “At the same time the openness of the model means once this material is out there, it’s really difficult to call back so it’s more important that we think ahead of time about our response.”

“A model comes every day, if not every hour,” said Karl Zhao, generative AI consultant at DeepSeek. “That really makes tracking it more difficult, so I think it’s key to have a certain set of evaluation tools where you can really filter out the good and the bad.” 

“The most important thing policymakers can do in policy to prepare for AI including its benefits and risks is just improve governance generally,” Linksvayer said. “Getting the boring stuff right to create inclusive good governance, that is actually across all societies what’s going to produce good outcomes from broad diffusion of AI capabilities whether they’re open or closed.”

“I don’t think openness per se is the problem,” Wang said. “I think it’s the risk, uncertainty, and unintended consequences that come with it.”

“Careful policy design can help us get to that sweet spot where we have a good balance between the two,” Wang said.
