Panelists Debate Federal Role in Digital Privacy, But Agree Upon Need to Minimize Algorithmic Bias
Panel on privacy and bias at the Next Century Cities event

WASHINGTON, January 24, 2020 – Panelists at a Next Century Cities event on Thursday clashed on the appropriate federal role in the regulation of digital privacy, but generally agreed on the importance of minimizing algorithmic bias in targeting a particular racial or ethnic group.

The panel began with a spirited debate on who should take the lead on protecting Americans’ digital privacy.

One of the two moderators, former head of the National Telecommunications and Information Administration David Redl (now president of Salt Point Strategies), said California had set the standard for other states by passing the most sweeping data privacy legislation to date. The law went into effect on January 1, 2020.

Laura Moy, executive director of Georgetown Law’s Center on Privacy and Technology, pushed back against the premise that California is an appropriate leader in data privacy. She said that the California law did not go far enough. Moy argued for a blend of federal legislation while also granting cities the flexibility to deal with local issues as they arise.

A moderator then asked the panel whether the federal government has the expertise to handle the issue. Noting that only 40 to 50 staff members at the Federal Trade Commission work on data privacy, the moderator emphasized that the agency was not equipped to tackle a problem of that significance.

Tom Lenard, senior fellow at the Technology Policy Institute, countered that the FTC would be the most expert entity in the country on privacy. Moy, by contrast, argued that average citizens would know best how to handle their own data if empowered by greater disclosure and knowledge.

Sean Perryman of the Internet Association referenced the FTC’s fumbling of the data breach at credit reporting agency Equifax, and argued that the agency was ill-equipped to address privacy issues.

The conversation shifted to the issue of discrimination in algorithmic targeting, particularly in regard to cities’ role in banning facial recognition.

K.J. Bagchi, senior counsel of Asian Americans Advancing Justice, criticized the inclusion of civil rights language in data privacy legislation.

The country needed algorithmic accountability and zero tolerance for bias, he said.

Lenard referenced an MIT study demonstrating that discriminatory advertising can occur even when advertisers have not targeted a specific demographic. More proactive measures would therefore be needed to get at the root of discriminatory AI, he said.

The panel concluded by considering the most significant privacy issues of the next two years, particularly in light of the 2020 Census and the inauguration of a president on January 20, 2021.

Jeremy Greenberg, policy fellow at the Future of Privacy Forum, suggested de-identification; Lenard highlighted the role of platforms in upgrading privacy; Moy answered discrimination; Bagchi alluded to transparency; and Perryman mentioned equity.