WASHINGTON, June 6, 2019 - One of the greatest mistakes policymakers can make as they begin to make rules governing artificial intelligence technologies is a “failure of imagination,” said Purdue University Fellow Lorraine Kisselburgh at a panel hosted by the Electronic Privacy Information Center at the National Press Club on Wednesday.
The panel, entitled “AI and Human Rights: The Future of AI Policy in the U.S.,” took a cautious tone towards AI, with Kisselburgh pointing out that we may not even be able to imagine the changes and risks that these technological developments will bring to our society.
Several speakers emphasized that artificial intelligence is not actually “intelligent,” but rather reflects whatever flaws and biases it is designed with. For example, programs designed to decide whether a person should receive a mortgage or a jail sentence often end up demonstrating racial discrimination because of the data used to “teach” them.
MIT Professor Sherry Turkle criticized the common view that there is no solution for these biased technologies, calling this perspective an “abdication of responsibility.” Rather than relying on technology to fix society and then giving up when it fails, she emphasized the importance of working to fix the “real world” by challenging assumptions and investing in ethics curricula on a national level.
Turkle also raised the issue of young children bonding with AI devices. Unlike other toys, which become healthy vehicles for projection, AI devices engage children by taking on a more human role. This new kind of relationship could have unforeseen long-term consequences.
The panel came four months after an executive order launched the American AI Initiative, which invests in AI research and development and sets AI governance standards. These standards are important to ensure consistency across regulatory and non-regulatory approaches to AI, said Lynne Parker, Assistant Director for Artificial Intelligence within the White House Office of Science and Technology Policy.
Georgetown Law Professor Bilyana Petkova agreed that consistent standards are important and emphasized three principles of human-centered AI development: human autonomy and oversight, prevention of harm, and fairness.
Editor's Note: This story was updated on June 8, 2019, with additional information from the event.
(Photo of Professor Lorraine Kisselburgh from ikawnoclastic blog.)