WASHINGTON, June 6, 2019 - One of the greatest mistakes policymakers can make as they begin to make rules governing artificial intelligence technologies is a “failure of imagination,” said Purdue University Fellow Lorraine Kisselburgh at a panel hosted by the Electronic Privacy Information Center at the National Press Club on Wednesday.
The panel, entitled “AI and Human Rights: The Future of AI Policy in the U.S.,” took a cautious tone towards AI, with Kisselburgh pointing out that we may not even be able to imagine the changes and risks that these technological developments will bring to our society.
Several speakers emphasized the crucial point that artificial intelligence is not actually “intelligent,” but rather subject to whatever flaws and biases it is designed with. For example, programs designed to decide whether a person should get a mortgage or receive a jail sentence often end up demonstrating racial discrimination because of the information used to “teach” the program.
MIT Professor Sherry Turkle criticized the common view that there is no solution for these biased technologies, calling this perspective an “abdication of responsibility.” Rather than relying on technology to fix society and then giving up when it fails, she emphasized the importance of working to fix the “real world” by challenging assumptions and investing in ethics curricula on a national level.
Turkle also raised the issue of young children bonding with AI devices. Unlike other toys, which become healthy vehicles for projection, AI devices engage children by taking on a more human role. This new kind of relationship could have unforeseen long-term consequences.
The panel comes four months after an executive order launched the American AI Initiative, which invests in AI research and development and sets AI governance standards. These standards are important to ensure consistency across regulatory and non-regulatory approaches to AI, said Lynne Parker, Assistant Director for Artificial Intelligence within the White House Office of Science and Technology Policy.
Georgetown Law Professor Bilyana Petkova agreed that consistent standards were important and emphasized the importance of three principles within human-centered AI development: human autonomy and oversight, prevention of harm, and fairness.
Editor's Note: This story was updated on June 8, 2019, with additional information from the event.
(Photo of Professor Lorraine Kisselburgh from ikawnoclastic blog.)