Social Media

House Democrats Grill Facebook Witness, Tech Officials on Social Media Disinformation


Photo of witness table at social media disinformation hearing by Adrienne Patton

WASHINGTON, January 8, 2020 – House Democrats on Wednesday pressed Facebook and other technology observers on why tech companies aren’t doing more to prevent the spread of “deepfakes” and other forms of digital manipulation online.

At an Energy and Commerce subcommittee hearing on “Manipulation and Deception in the Digital Age,” Chairwoman Jan Schakowsky, D-Illinois, set the stage by claiming that Congress had taken a “laissez-faire” approach to online protection.

Americans can be harmed as easily online as in the physical world, Schakowsky said, arguing that protections taken for granted in in-person commerce are widely lacking in the virtual realm.

Full committee Chairman Frank Pallone, D-N.J., said that the danger of subtle manipulation now means we can no longer trust our eyes.

But Rep. Cathy McMorris Rodgers, R-Washington, stressed innovation over government regulation, saying Congress needs to be careful not to harm practices people enjoy.

The four witnesses and committee members turned their attention to the danger of deepfakes, cheap fakes, and other deceptive online practices that are difficult to detect and affect millions of people globally.

Monika Bickert, vice president of global policy management at Facebook, said that Facebook had improved its relationship with third-party fact-checkers.

Under questioning, Bickert said Facebook would label videos that were false, and emphasized the company’s more active role in content moderation: Whereas Facebook removed only one deceptive network in 2016, last year it removed more than 50.

Working alongside academics, professionals, and fact-checkers, she said, Facebook expects 2020 to be a good year for going after misinformation.

Joan Donovan, research director at Harvard’s Kennedy School, testified that the multi-million-dollar deception industry was a threat to national security. She said the country hasn’t quantified the cost of misinformation, but it needs regulatory guardrails. Otherwise, she said, the “future is forgery.”

Justin Hurwitz, a law professor at the University of Nebraska College of Law, said design is powerful because people are predictable and programmable, and that “dark patterns” in design harm consumers.

Tristan Harris, executive director for the Center for Humane Technology, said the country has a “dark infrastructure,” and that it needed to protect its digital borders as much as it protects its physical borders.

Rather than form new federal agencies to deal with misinformation challenges, Harris said Congress needed to give existing agencies a digital update.

Rep. Kathy Castor, D-Florida, asked Harris to expound upon the possible harm that children face from addictive algorithms. Harris said the autoplay feature on YouTube automatically plays videos that lean toward extremes. For example, he said, if you watch a video on 9/11, autoplay will queue up videos on 9/11 conspiracy theories to play immediately afterward.

Adrienne Patton was a Reporter for Broadband Breakfast. She studied English rhetoric and writing at Brigham Young University in Provo, Utah. She grew up in a household of journalists in South Florida. Her father, the late Robes Patton, was a sports writer for the Sun-Sentinel who covered the Miami Heat, and is for whom the press lounge in the American Airlines Arena is named.
