Panel Discusses Responsible AI Implementation in Local Government

Hosted by the CDT, representatives discussed recommendations and examples in state and local government use of AI.

Screenshot of CDT Policy Analyst Maddy Dwyer, RGS Deputy Executive Director Rich Oppenheim, NLC Senior Specialist on Urban Innovation Christopher Jordan, and NACo Legislative Director Seamus Dowdall from the webinar on Tuesday, Feb. 3.

WASHINGTON, Feb. 3, 2026 – Local and state governments are increasingly using generative AI to meet constituent needs.

That was the message at a Tuesday Center for Democracy & Technology (CDT) panel that centered on CDT’s five core areas in the use and governance of AI, which include public transparency and stakeholder engagement, accuracy and reliability, governance and coordination, privacy and security, and legal compliance. 

While the panelists aligned on these priorities for AI in the public sector, Regional Government Services Authority Deputy Executive Director Rich Oppenheim stressed the importance of human-centered AI that aligns with how governmental organizations actually operate. Human oversight is also essential when implementing AI, both to prevent errors and to build public trust in these systems, he said. 

“Responsible AI is about building systems, including training, establishing decision rights, accountability and transparency — that assume imperfect humans working with imperfect AI, operating under real conditions, but still working to produce outcomes the public can trust,” Oppenheim said.  

Rural communities may face gaps in AI access

The panelists also discussed the role of AI in rural communities that may lack sufficient broadband access or dedicated information technology departments, creating a divide in who AI can serve. While AI is not always perfect, Oppenheim emphasized that agencies shouldn’t wait for perfect capacity or a full-blown AI strategy, but should start small and gradually expand and refine their AI use and policy from there. 

While some resident needs are met simply by asking generative AI about election information or public works schedules, the panelists also shared how local governments are using AI to meet larger constituent needs. 

National League of Cities Senior Specialist on Urban Innovation Christopher Jordan highlighted Lebanon, New Hampshire’s online AI registry as an example of responsible AI use through the city’s transparency and accountability. If the city uses an AI-related tool or software, it is automatically logged and added to the list. 

National Association of Counties Telecommunications and Technology Legislative Director Seamus Dowdall explained how Maryland’s Calvert County created training videos on generative AI and how the county plans to use it for employees. This internal education initiative was intended to ensure the technology was understood before its broad use in government operations. 

Dowdall also highlighted California’s Alameda County, which is using AI to modernize legacy code. This helps update outdated software and digital systems that were built years ago but remain critical to daily government operations. 

“There’s so many AI vendors… popping up and promising to do all kinds of new and exciting things, but it's important for cities and all governments to think about what is the problem that these new tools and platforms are trying to fix, and again, how is it affecting residents overall?” Jordan said. 

CDT Policy Analyst Maddy Dwyer moderated the webinar, “How Can State and Local Governments Responsibly Use and Govern AI in 2026.”
