House Democrats Grill Facebook Witness, Tech Officials on Social Media Disinformation
Adrienne Patton
WASHINGTON, January 8, 2020 – House Democrats on Wednesday pressed a Facebook executive and other technology experts on why tech companies aren’t doing more to prevent the spread of “deepfakes” and other forms of digital manipulation online.
At an Energy and Commerce subcommittee hearing on “Manipulation and Deception in the Digital Age,” Chairwoman Jan Schakowsky, D-Illinois, set the stage by claiming that Congress had taken a “laissez-faire” approach to online protection.
Americans can be harmed as easily online as in the physical world, Schakowsky said, arguing that protections common in in-person commerce are widely lacking in the virtual realm.
Full committee Chairman Frank Pallone, D-N.J., said the danger of subtle manipulation means Americans can no longer trust their own eyes.
But Rep. Cathy McMorris Rodgers, R-Washington, stressed innovation over government regulation. She said that Congress needs to be careful not to harm practices people enjoy.
The four witnesses and committee members turned their attention to the danger of deepfakes, cheap fakes, and other deceptive online practices that are difficult to detect and affect millions of people globally.
Monika Bickert, vice president of global policy management at Facebook, said that Facebook had improved its relationship with third-party fact-checkers.
Under questioning, Bickert said Facebook would label videos that were false, and emphasized its more active role in content moderation: Whereas Facebook removed only one network in 2016, last year it removed more than 50 such networks from its platform.
Working alongside academics, professionals, and fact-checkers, she said, Facebook would make 2020 a good year for going after misinformation.
Joan Donovan, research director at Harvard’s Kennedy School, testified that the multi-million-dollar deception industry was a threat to national security. She said the country hasn’t quantified the cost of misinformation, but it needs regulatory guardrails. Otherwise, she said, the “future is forgery.”
Justin Hurwitz, a law professor at the University of Nebraska College of Law, said design is powerful because people are predictable and programmable, and that dark patterns harm consumers.
Tristan Harris, executive director for the Center for Humane Technology, said the country has a “dark infrastructure,” and that it needed to protect its digital borders as much as it protects its physical borders.
Rather than form new federal agencies to deal with misinformation challenges, Harris said Congress needed to give existing agencies a digital update.
Rep. Kathy Castor, D-Florida, asked Harris to expound upon the possible harm that children face from addictive algorithms. Harris said YouTube’s autoplay feature automatically plays videos that lean toward extremes. For example, he said, if you watch a video on 9/11, autoplay will queue up videos on 9/11 conspiracy theories to play immediately afterward.