ACLU Hosts Summit on Potential Pitfalls, Promises of AI

Panelists note AI’s dual potential to dismantle or deepen discrimination

Screenshot of the ACLU AI Summit on July 10, 2025. From left to right: Marissa Gerchick, Renika Moore, Nathan Wessler, and Cody Venzke.

July 11, 2025 – Artificial intelligence has the power to amplify marginalized voices, or silence them even further, depending on how it is developed and governed.

That was the message panelists shared during the American Civil Liberties Union Summit on Civil Rights in a Digital Age. The event, which ran Thursday from 10:15 a.m. to 3 p.m. ET, was dedicated to exploring the perils and promises of AI and its potential effects on minority groups within the U.S. The first two hours of the event were streamed online and featured two panels of speakers.

During the first panel, moderated by Ijeoma Mbamalu, chief technology officer for the ACLU, participants focused on the role of social justice organizations in AI governance. Vilas Dhar, president of the Patrick J. McGovern Foundation, expressed optimism about an AI-filled future.

“One of the things that AI does so incredibly well is to actually flatten the distance of power between those who have the ability to create and those who are forced to be consumers,” Dhar said. “There is a world emerging and emerging quickly where everybody in this room can use an AI tool that supercharges their capacities.”

Dhar cautioned that achieving that world would require a proactive approach.

“Innovation by itself, discovery for its own sake, doesn’t lead to better human futures,” he said. “It requires an architecture that both undergirds it and governs it.”

Deborah Archer, Margaret B. Hoppin Professor of Clinical Law at the New York University School of Law, explained what that architecture might look like.

“What we really need to do is bring community in the loop,” Archer said. “We can’t continue to claim that we are building responsible and equitable AI if the people who have the most to lose are excluded from the process, are not enabled and supported to engage in the process in meaningful ways, not just at the end, as testers of this technology, but really involved in the design process.”

She recommended implementing participatory design processes, creating community advisory boards, and contributing financially to community groups and organizations seeking to shape the development of AI.

“Without that, it’s not really an even playing field,” she said.

Dhar agreed, and argued that society must play a key role in deciding AI’s future.

“I don’t need to be able to talk to my refrigerator every day,” Dhar said. “I don’t need a commercial product that makes it easier for me to buy something online – it’s already pretty easy…I need AI systems that let me have more agency in this world, that let me express myself in a political forum, that make sure I have access to the healthcare that serves my needs and those of my parents and my grandparents.”

Archer expressed a similar sentiment.

“I have heard a story about a foundation that said they kept getting approached by people who wanted to develop AI tools that would help poor folks budget,” Archer said. “Poor folks were like, ‘I don’t need you to help me budget, I am a budget ninja’…we don’t need you for that. What we need is a tool that helps us figure out creatively how to spread the SNAP budget, or to spend the WIC dollars. Give me a tool that helps me get my kids to read earlier and better and encourages them to read more often.”

AI systems reinforcing discrimination

The second panel, moderated by Marissa Gerchick, data science manager and algorithmic justice specialist at the ACLU, focused on justice, particularly racial justice, and AI. Participants cited cases in which AI was allegedly used to further discrimination, even when it was advertised as bias-free.

Renika Moore, director of the ACLU’s Racial Justice Program, told attendees about a client she represents who was affected by such software.

“These systems are really built on prior human decisions that show real evidence of discrimination,” she said. “We represent a client who is neurodiverse…and was not moved forward [during a job hiring process] after taking an assessment that included questions that we know mirror the kind of clinical questions that can be included for autism and other neurodivergent conditions…it look[ed] like a medical exam, which is prohibited under federal law.”

Nathan Wessler, deputy director for the ACLU’s Speech, Privacy & Technology Project, raised concerns about how law enforcement has been using AI. He noted that some law enforcement agencies have become overly reliant on facial recognition technology, which can often misidentify suspects, particularly members of racial minorities.

“It’s no surprise that we’ve seen a number of cases of wrongful arrest around the country, almost all of which have involved wrongful arrest of Black people after police have relied on what turned out to be incorrect results of this technology,” Wessler said. He explained that he recently represented a man who was wrongfully arrested for shoplifting even though he was miles away at the time of the incident. His arrest was due almost entirely to an erroneous identification by facial recognition technology.

Cody Venzke, senior policy counsel at the ACLU, pointed to cases where AI systems arbitrarily cut Medicaid benefits in Idaho and discriminated against racial minorities in determining eligibility for early release under the Justice Department’s PATTERN risk-assessment program. He explained that the organization was aiming to pass federal legislation guarding against the misuse of AI.

“Our ultimate goal in many of these instances is passing legislation at the federal level that provides a civil rights/civil liberties baseline for all people in the United States and then of course permitting states to build on that. That would be true in privacy as well as uses of artificial intelligence,” he said. Venzke also praised recent efforts by Colorado to regulate AI, though he noted that “we certainly believe that that law in Colorado can be significantly improved.”
