February 8, 2021— The spread of disinformation and misinformation can be controlled if the same rules on transparency required of the broadcast industry are applied to social media, an Atlantic Council webinar heard Wednesday.
That includes making changes to Section 230 of the Communications Decency Act governing liability of internet intermediaries to include a requirement that social media companies make clear who paid for ads that are displayed, said Pablo Breuer, co-author of the Adversarial Misinformation and Influence Tactics and Techniques framework.
Breuer’s framework, which was co-authored with Sara-Jayne Terp, seeks to identify the best means to detect and discuss what Terp referred to as “disinformation behaviors.”
The webinar focused on the critical issue of misinformation and disinformation and the roles and responsibilities of social media companies, the government and citizens.
Breuer noted that just four years ago, the attitude surrounding misinformation and disinformation campaigns was very different.
“When Sara-Jayne and I started talking about this, people thought we were crazy—they thought there was no disinformation problem,” he said. “Now you see it covered on the nightly news.”
When asked why the issue has only come to the forefront of society within the last couple of years, Breuer pointed out that in the past, disseminating information required a lot of capital. With the advent of social media, that was no longer the case.
“We’ve democratized the ability to reach a mass audience. Now we live in a world where an entertainer has twice the number of followers as the President of the United States,” said Breuer. “They don’t have to clear their message with anyone—they can say something completely false.”
For a long time, social media was a largely unregulated wild west of commentary, news and opinions.
But then the data-harvesting exploits of firms like Cambridge Analytica, which exposed how personal information was used to mold citizens’ thinking on issues that impacted elections around the world, began to put things into focus.
We may be approaching the end of non-regulation, as the banning of former President Donald Trump and other right-wing political commentators from Twitter and other social media platforms may lead to renewed scrutiny on the power of tech companies.
Breuer conceded that while more attention on the issue is a step in the right direction, there are still huge dangers associated with the spread of fraudulent information and the many channels available to malevolent actors.
Following the banning of the aforementioned figures, many of their followers gravitated toward other, more receptive platforms, including Parler and Gab.
Countermeasures to social media disinformation?
Terp and Breuer compiled a list of what they regard as effective countermeasures to mitigate misinformation. Terp noted that many people have been unknowingly co-opted as “unwitting agents.” Moreover, these agents are not necessarily being influenced by external entities.
“Disinformation is coming from inside the house. What we are seeing is this move past, ‘the Russians are coming,’ to a more honest discussion about financial motivations, political motivations and reputational drivers of misinformation,” she said.
Terp also noted a strong relationship between privacy, democracy and disinformation: greater consumer privacy, she explained, reduces outside entities’ ability to target the content a consumer is exposed to.
In the aftermath of Facebook’s move to wholly integrate WhatsApp into its social media ecosystem, for example, Signal, a privacy-by-design messaging app, saw its adoption skyrocket. End-to-end encrypted messaging has also been a problem for law enforcement, officials say, because it inhibits their ability to access the messages of criminals.
Terp described disinformation as merchandise and said that one of the primary goals of anyone trying to curb its spread should be to take the money out of it. According to Terp, countermeasures deployed by social media platforms to make disinformation less profitable have had a mitigating effect.
Tackling bad behavior, not combatting people
In her conclusion, Terp made it clear that the only way to make policies that are effective at combatting the spread of disinformation is to tackle the behavior and not people. More needs to be done to spot behaviors early so that social media and government can engage in more preventative action, she said, rather than simply reacting to things as they happen.
Breuer offered some advice for the average person: He encouraged the audience to engage with those they disagree with, and to avoid trapping themselves in a virtual echo chamber.
He added that the government needs to reexamine Section 230 and be more proactive in crafting policy to address the demands of modern technology.