WASHINGTON, June 6, 2019 - One of the greatest mistakes policymakers can make as they begin to make rules governing artificial intelligence technologies is a “failure of imagination,” said Purdue University Fellow Lorraine Kisselburgh at a panel hosted by the Electronic Privacy Information Center at the National Press Club on Wednesday.
The panel, entitled “AI and Human Rights: The Future of AI Policy in the U.S.,” took a cautious tone towards AI, with Kisselburgh pointing out that we may not even be able to imagine the changes and risks that these technological developments will bring to our society.
Several speakers emphasized the crucial point that artificial intelligence is not actually “intelligent,” but rather subject to whatever flaws and biases it is designed with. For example, programs designed to decide whether a person should receive a mortgage or a jail sentence often end up exhibiting racial discrimination because of the data used to “teach” them.
MIT Professor Sherry Turkle criticized the common view that there is no solution for these biased technologies, calling this perspective an “abdication of responsibility.” Rather than relying on technology to fix society and then giving up when it fails, she emphasized the importance of working to fix the “real world” by challenging assumptions and investing in ethics curricula on a national level.
Turkle also raised the issue of young children bonding with AI devices. Unlike other toys, which serve as healthy vehicles for projection, AI devices engage children by taking on a more human role. This new kind of relationship could have unforeseen long-term consequences.
The panel comes four months after an executive order launched the American AI Initiative, which invests in AI research and development and sets AI governance standards. These standards are important to ensure consistency across regulatory and non-regulatory approaches to AI, said Lynne Parker, Assistant Director for Artificial Intelligence within the White House Office of Science and Technology Policy.
Georgetown Law Professor Bilyana Petkova agreed that consistent standards are important and emphasized three principles for human-centered AI development: human autonomy and oversight, prevention of harm, and fairness.
Editor's Note: This story was updated on June 8, 2019, with additional information from the event.
(Photo of Professor Lorraine Kisselburgh from ikawnoclastic blog.)