WASHINGTON, June 6, 2019 - One of the greatest mistakes policymakers can make as they begin to make rules governing artificial intelligence technologies is a “failure of imagination,” said Purdue University Fellow Lorraine Kisselburgh at a panel hosted by the Electronic Privacy Information Center at the National Press Club on Wednesday.
The panel, entitled “AI and Human Rights: The Future of AI Policy in the U.S.,” took a cautious tone towards AI, with Kisselburgh pointing out that we may not even be able to imagine the changes and risks that these technological developments will bring to our society.
Several speakers emphasized the crucial point that artificial intelligence is not actually “intelligent,” but rather subject to whatever flaws and biases it is designed with. For example, programs designed to decide whether a person should receive a mortgage or a jail sentence often end up demonstrating racial discrimination because of the information used to “teach” the program.
MIT Professor Sherry Turkle criticized the common view that there is no solution for these biased technologies, calling this perspective an “abdication of responsibility.” Rather than relying on technology to fix society and then giving up when it fails, she emphasized the importance of working to fix the “real world” by challenging assumptions and investing in ethics curricula on a national level.
Turkle also raised the issue of young children bonding with AI devices. As opposed to other toys, which become healthy vehicles for projection, AI devices would engage children by taking a more human role. This new kind of relationship could have unforeseen long-term consequences.
The panel comes four months after an executive order launched the American AI Initiative, which invests in AI research and development and sets AI governance standards. These standards are important to ensure consistency across regulatory and non-regulatory approaches to AI, said Lynne Parker, assistant director for artificial intelligence within the White House Office of Science and Technology Policy.
Georgetown Law Professor Bilyana Petkova agreed that consistent standards were important and emphasized the importance of three principles within human-centered AI development: human autonomy and oversight, prevention of harm, and fairness.
Editor's Note: This story was updated on June 8, 2019, with additional information from the event.
(Photo of Professor Lorraine Kisselburgh from ikawnoclastic blog.)