WASHINGTON, June 6, 2019 - One of the greatest mistakes policymakers can make as they begin to make rules governing artificial intelligence technologies is a “failure of imagination,” said Purdue University Fellow Lorraine Kisselburgh at a panel hosted by the Electronic Privacy Information Center at the National Press Club on Wednesday.
The panel, entitled “AI and Human Rights: The Future of AI Policy in the U.S.,” took a cautious tone towards AI, with Kisselburgh pointing out that we may not even be able to imagine the changes and risks that these technological developments will bring to our society.
Several speakers emphasized the crucial point that artificial intelligence is not actually “intelligent,” but rather subject to whatever flaws and biases it is designed with. For example, programs designed to decide whether a person should get a mortgage or be sentenced to jail often end up demonstrating racial discrimination because of the data used to “teach” the program.
MIT Professor Sherry Turkle criticized the common view that there is no solution for these biased technologies, calling this perspective an “abdication of responsibility.” Rather than relying on technology to fix society and then giving up when it fails, she emphasized the importance of working to fix the “real world” by challenging assumptions and investing in ethics curricula on a national level.
Turkle also raised the issue of young children bonding with AI devices. Unlike other toys, which become healthy vehicles for projection, AI devices engage children by taking on a more human role. This new kind of relationship could have unforeseen long-term consequences.
The panel comes four months after an executive order launched the American AI Initiative, which invests in AI research and development and sets AI governance standards. These standards are important to ensure consistency across regulatory and non-regulatory approaches to AI, said Lynne Parker, assistant director for artificial intelligence within the White House Office of Science and Technology Policy.
Georgetown Law Professor Bilyana Petkova agreed that consistent standards are important and emphasized three principles for human-centered AI development: human autonomy and oversight, prevention of harm, and fairness.
Editor's Note: This story was updated on June 8, 2019, with additional information from the event.
(Photo of Professor Lorraine Kisselburgh from ikawnoclastic blog.)