AI Testing Nonprofit Flags Reliability Gaps Amid Push for National Standards

Former Meta executive said foreign actors may shape training data to influence model outputs.

Photo of Campbell Brown (left), co-founder and CEO of Forum AI, speaking with Michal Lev-Ram (moderator) at the AI in America Summit in Washington on Wed., Dec. 3, 2025.

WASHINGTON, Dec. 3, 2025 — A nonprofit focused on testing artificial intelligence models said Wednesday that generative systems remained unreliable in high-stakes situations and required independent accuracy standards.

Campbell Brown, co-founder and CEO of Forum AI and a former Meta executive, told The Hill’s AI in America Summit that large language models still struggle to separate credible reporting from unverified material, sometimes giving think-tank analyses and Reddit threads equal weight.

Brown said the systems “have a hard time finding signal in the noise,” a weakness that she said can mislead teenagers seeking mental-health guidance or adults relying on AI during major news events.
