Social Media
House Democrats Grill Facebook Witness, Tech Officials on Social Media Disinformation

WASHINGTON, January 8, 2020 – House Democrats on Wednesday pressed a Facebook executive and other technology witnesses on why tech companies aren’t doing more to prevent the spread of “deepfakes” and other forms of digital manipulation online.
At an Energy and Commerce subcommittee hearing on “Manipulation and Deception in the Digital Age,” Chairwoman Jan Schakowsky, D-Illinois, set the stage by claiming that Congress had taken a “laissez-faire” approach to online protection.
Americans can be harmed as easily online as in the visible world, Schakowsky said, arguing that protections common in in-person commerce are widely lacking in the virtual realm.
Full committee Chairman Frank Pallone, D-N.J., said that the danger of subtle manipulation now means we can no longer trust our eyes.
But Rep. Cathy McMorris Rodgers, R-Washington, stressed innovation over government regulation, saying that Congress needs to be careful not to harm practices people enjoy.
The four witnesses and committee members turned their attention to the danger of deepfakes, cheap fakes, and other deceptive online practices that are difficult to detect and affect millions of people globally.
Monika Bickert, vice president of global policy management at Facebook, said that Facebook had improved its relationship with third-party fact-checkers.
Under questioning, Bickert said Facebook would label videos that were false, and emphasized the company’s more active role in content moderation: Whereas Facebook removed only one network in 2016, last year it removed more than 50 networks from the platform.
Working alongside academics, professionals, and fact-checkers, she said, the company expects 2020 to be a good year in going after misinformation.
Joan Donovan, research director at Harvard’s Kennedy School, testified that the multi-million-dollar deception industry was a threat to national security. She said the country hasn’t quantified the cost of misinformation, but it needs regulatory guardrails. Otherwise, she said, the “future is forgery.”
Justin Hurwitz, a law professor at the University of Nebraska College of Law, said that design is powerful because people are predictable and programmable, and that dark patterns harm consumers.
Tristan Harris, executive director of the Center for Humane Technology, said the country has a “dark infrastructure,” and that it needed to protect its digital borders as much as it protects its physical borders.
Rather than form new federal agencies to deal with misinformation challenges, Harris said Congress needed to give existing agencies a digital update.
Rep. Kathy Castor, D-Florida, asked Harris to expound upon the possible harm that children face from addictive algorithms. Harris said the autoplay feature on YouTube automatically plays videos that lean toward extremes. For example, he said, if you watch a video on 9/11, autoplay will surface videos on 9/11 conspiracy theories to play immediately afterward.