Social Media
Panelists Call for Stakeholder Collaboration to Establish Trust in Content Moderation Processes

June 24, 2020 — Four members of the Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression detailed the group’s newly released report, “Freedom and Accountability: A Transatlantic Framework for Moderating Speech Online,” during a Tuesday webinar hosted by the Aspen Institute.
The report is the culmination of two years of investigation and input by 28 legislators, government officials, tech executives, civil society leaders and academics from across North America and Europe.
“Members of the group held the common goal of seeking and promoting the best practices to tackle hate speech, violent extremism, and disinformation online, without chilling freedom of expression or further fracturing the Internet,” said Susan Ness, distinguished fellow at the Annenberg Public Policy Center.
In the report, the group calls for greater transparency in content moderation and offers a framework to lead to greater platform accountability.
“Transparency is the way towards accountability,” said Jeff Jarvis, a professor at the Craig Newmark Graduate School of Journalism at the City University of New York. “Data about what content is moderated by platforms must be made transparent to the research community in order to create evidence-based policy.”
According to the panel, increased transparency would not only generate more quantitative data for future policy decisions but would also help establish trust among stakeholders, a step the panelists deemed necessary for building more democratic platforms.
Panelists argued that restoring trust among government, tech companies and the public was necessary to tackle future moderation challenges, as finding solutions will require all of these entities to work together.
Eileen Donahoe, executive director of Stanford’s Global Digital Policy Incubator, called for increased collaboration among all stakeholders, noting that netizens’ voices particularly need to be amplified in the conversation around moderation decisions.
Donahoe argued that a lack of understanding is affecting the institutions involved, stating that “governments are confused about whether they want the private sector to take down more content, while private sector companies are confused on the rules that apply to them, what they should prohibit, and what the best course of action is.”
The Transatlantic Working Group’s methods
To create its framework, the group held roundtable hearings with individual tech companies. Through this process, the group established what it hopes will be a long tradition of using research and evidence to foster more democratic speech environments online.
The members quickly realized that there is no one-size-fits-all approach to moderating content. As a result, the report’s framework focuses on transparency and accountability in content moderation, standards that are upheld in democracies across the globe.
While the group’s report found that transparency is the most crucial element of future moderation efforts, members pointed out that platforms are extremely wary of releasing this data.
To overcome this, one of the report’s five central recommendations calls for establishing a multi-tier disclosure system, so that data is released only to the parties who need it and only at the level of detail required.
“What information do researchers need to know? What does the public need to know? These are extremely difficult questions to ask, but the present system is no system at all,” Jarvis said.
A second recommendation called for platforms to establish effective redress mechanisms, such as social media councils. Facebook’s Oversight Board is a prime example of this type of external oversight body, panelists said.
A third recommendation calls for targeting regulation at individual malicious actors rather than content at large, as “going against bad actors is more effective than targeting content and has less of a chilling effect,” Donahoe said.
The search for solutions going forward
“The solution will have to be a combination of automation, human moderation and platform initiatives,” Donahoe said. “There is no silver bullet solution.”
“Revoking section 230 is not the solution, as imposing liability on platforms for user generated speech would have consequences for expression that are gigantic,” she continued.
“We need to encourage platforms to do more to protect democracy in the name of their own free expression,” she said. “The private sector has immense power — they should choose to accept it.”
“This is just the beginning of lots of research that needs to be done alongside tech companies,” Ness concluded. “It cannot be done by just the government or stakeholders alone.”
Many of the arguments made in the Transatlantic Working Group’s report resonated with panelists participating in a webinar hosted by New America on Tuesday evening.
Kate Klonick, an assistant professor of law at St. John’s University Law School, echoed the report’s ideals, saying that “we have to move towards creating transparency and accountability.”
David Kaye, a clinical professor of law at the University of California, Irvine, similarly urged that it was critical to know “what set of principles is being used to make moderation decisions.”
Klonick additionally suggested creating better user-participation buttons to collect data on how netizens think certain content should be moderated and to improve moderation systems.
Social Media
Senate Commerce Committee Passes Two Bills To Protect Children Online
The bills failed to make headway in a previous Congress.

WASHINGTON, July 27, 2023 – The Senate Commerce Committee on Thursday swiftly passed two pieces of legislation aimed at protecting the safety and privacy of children online, exactly one year after the same bills passed the committee but failed to advance further.
The first bill to clear the committee was the Kids Online Safety Act, which would require social media sites to put in place safeguards protecting users under the age of 17 from content that promotes harmful behaviors, such as suicide and eating disorders. KOSA was first introduced in 2022 by Sen. Richard Blumenthal, D-Conn., and Sen. Marsha Blackburn, R-Tenn. It previously won bipartisan support but ultimately failed to become law.
The current version of the bill was reintroduced in May, gained traction in several hearings, and picked up more than 30 co-sponsors. Several changes were made to the text, including a specific list of online harms and exemptions for support services, such as substance abuse groups, that might unintentionally be affected by the bill’s requirements.
The bill was also amended Thursday to include a provision proposed by Sen. John Thune, R-S.D., that would require companies to disclose their use of algorithms for content filtering and give users the choice to opt out.
Critics of the bill, however, said the revised version largely resembled the original one and failed to address issues raised before. These concerns included sections that would require tech companies to collect more data to filter content and verify user age, as well as an infringement on children’s free speech.
Sen. Ted Cruz, R-Texas, supported the bill but agreed that more work needs to be done before it moves to the floor. Since the committee’s last markup of KOSA, several states have approved measures concerning children’s online safety that might be inconsistent with the bill’s provisions, he noted, proposing a preemption provision to ensure the bill would take precedence over state laws.
The Children and Teens’ Online Privacy Protection Act, or COPPA 2.0, introduced by Sen. Edward Markey, D-Mass., and Sen. Bill Cassidy, R-La., was the second bill passed out of the committee. It expands on existing legislation, in effect since 2000, that protects children from harmful marketing. The bill would make it illegal for websites to collect data on children under the age of 16, outlaw marketing specifically aimed at kids, and allow parents to erase their kids’ information from the websites.
“It is time for Congress to meet this moment and to act with the urgency that these issues demand,” said Sen. Markey.
The bills are among many pieces of legislation seeking to protect children from online harms, none of which has so far made headway in Congress.
Free Speech
UK’s Online Safety Bill Likely to Impact American User Experience
The bill will affect the tone and content of discussion on U.S.-owned platforms that wish to continue offering UK services.

WASHINGTON, July 21, 2023 – The United Kingdom’s Online Safety Bill will impact American-based users’ experience on various platforms, said panelists at a Broadband Breakfast Live Online event Wednesday.
The Online Safety Bill is the UK’s response to concerns about the negative impact of various internet platforms and applications. The core of the bill addresses illegal content and content that is harmful to children. It places a duty of care on internet sites, including social media platforms, search engines, and online marketplaces, to provide risk assessments for their content, prevent access to illegal content, protect privacy, and prevent children from accessing harmful content.
The legislation would apply to any business that has a substantial user base in the UK, which could have unforeseen impacts on the end-user experience, said Amy Peikoff, Chief Policy Officer of the UK-based video-streaming platform BitChute.
Even though the legislation is not U.S. legislation, it will affect the tone and content of discussion on U.S.-owned platforms that wish to continue offering their services in the jurisdictions where it is enacted, said Peikoff. Already, the European Union’s Digital Services Act is affecting Twitter, which is “throttling its speech” to turn out statistics that say a certain percentage of its content is “healthy,” she claimed.
Large social media companies as we know them are finished, Peikoff said.
Ofcom, the UK’s communications regulator, will be responsible for providing guidelines and best practices as well as conducting investigations and audits. It will be authorized to impose revenue-based fines on companies that fail to adhere to the law, and it may enact rules requiring companies to provide user data to the agency and/or screen user messages for harmful content.
Peikoff claimed that the legislation could set off a chain of events, “namely, that platforms like BitChute would be required to affirmatively, proactively scan every single piece of content – comments, videos, whatever posted to the platform – and keep a record of any flags.” She added that U.S.-based communication would not be exempt.
Meta-owned WhatsApp, a popular messaging app, has warned that it will exit the UK market if the legislation requires it to release data about its users or screen their messages, claiming that doing so would “compromise” the privacy of all users and threaten the encryption on its platform.
Matthew Lesh, director of public policy and communications at the UK think tank Institute of Economic Affairs, said that the bill is a “recipe for censorship on an industrial, mechanical scale.” He warned that many companies will choose to simply block UK-based users from using their services, harming UK competitiveness globally and discouraging investors.
In addition, Lesh highlighted privacy concerns introduced by the legislation. Because it levies fines on platforms that host harmful content accessible by children, companies may have to screen for children by requiring users to present government-issued IDs, a major privacy concern for users.
The primary issue with the bill and similar policies, said Lesh, is that they apply the same moderation requirements to all online platforms, which can limit certain speech and stifle healthy discussion and interaction across political lines.
The bill is currently in the final phase of its committee stage in the House of Lords, the UK’s second chamber of parliament. Following its passage there, the bill will return to the House of Commons, where it will either be amended or accepted and become law. General support for the bill in the UK’s parliament suggests it will be implemented sometime next year.
This follows considerable debate in the United States regarding content moderation, much of it centered on possible reform of Section 230. Section 230 protects platforms from being treated as the publisher or speaker of information provided by a third party, shielding them from liability for users’ posts.
Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.
Wednesday, July 19, 2023 – The UK’s Online Safety Bill
The UK’s Online Safety Bill, which seeks to make the country “the safest place in the world to be online,” has seen as much upheaval as the nation itself over the last four years. Four prime ministers, one Brexit and one pandemic later, it’s just a matter of time until the bill finally passes the House of Lords and eventually becomes law. Several tech companies, including WhatsApp, Signal, and Wikipedia, have argued against its age restrictions and its breach of end-to-end encryption. Will this legislation serve as a model for governments worldwide to regulate online harms? What does it mean for the future of U.S. social media platforms?
Panelists
- Amy Peikoff, Chief Policy Officer, BitChute
- Matthew Lesh, Director of Public Policy and Communications, Institute of Economic Affairs
- Drew Clark (moderator), Editor and Publisher, Broadband Breakfast
Panelist resources
- An Unsafe Bill: How the Online Safety Bill threatens free speech, innovation and privacy, Institute of Economic Affairs
- Big Tech Behind Bars? The UK’s Online Safety Bill Explained, CNET, January 19, 2023
- The hidden harms in the Online Safety Bill, The Spectator, August 20, 2022
Amy Peikoff is Chief Policy Officer for BitChute. She holds a BS in Math/Applied Science and a JD from UCLA, as well as a PhD in Philosophy from the University of Southern California. Her academic work and legal activism have focused on issues related to the proper legal protection of privacy. In 2020, she became Chief Policy Officer for the free speech social media platform Parler, where she served until Parler was purchased in April 2023.
Matthew Lesh is the Director of Public Policy and Communications at the Institute of Economic Affairs. Matthew often appears on television and radio, is a columnist for London’s CityAM newspaper, and a regular writer for publications such as The Times, The Telegraph and The Spectator. He is also a Fellow of the Adam Smith Institute and Institute of Public Affairs.
Drew Clark is CEO of Breakfast Media LLC. He has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

Illustration from the Spectator
As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.
SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.
See a complete list of upcoming and past Broadband Breakfast Live Online events.
Free Speech
New Tool Measures Economic Impact of Internet Shutdowns
The calculator is being called a ‘major step forward’ for those pushing back against such shutdowns.

July 10, 2023 – NetLoss, a new measuring tool launched by the Internet Society, shows the impact of internet shutdowns on economies including Iraq, Sudan and Pakistan, where government-mandated outages have cost millions of dollars in a matter of hours or days.
NetLoss, launched on June 28, calculated that a four-hour shutdown in Iraq in July, implemented by the government to prevent cheating during high school exam season, resulted in an estimated loss of $1.6 million. In May, a shutdown in Pakistan cost more than $13 million over the span of four days, while a five-day internet outage in Sudan in April cost the economy more than $4 million and resulted in the loss of 560 jobs.
NetLoss is unique among internet assessment tools in that it also accounts for subsequent economic impacts on the unemployment rate, foreign direct investment, and the risk of future shutdowns, claimed the advocacy group Internet Society. It provides data on both ongoing and anticipated shutdowns, drawing from a historical dataset covering more than 90 countries dating back to 2019.
“The calculator is a major step forward for the community of journalists, policymakers, technologists and other stakeholders who are pushing back against the damaging practice of Internet shutdowns,” said Andrew Sullivan, CEO of the Internet Society. “Its groundbreaking and fully transparent methodology will help show governments around the world that shutting down the Internet is never a solution.”
The tool relies on open-access databases, including the Internet Society Pulse’s Shutdown data, the World Bank’s economic indicators, the Armed Conflict Location and Event Data Project’s civil unrest data, Yale University’s election data, and other relevant socioeconomic factors. To stay up to date with real-time changes, the data will be updated quarterly.
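The press release does not spell out the calculator’s formula, but the arithmetic behind estimates like those above can be illustrated with a simple model in which losses scale with a shutdown’s duration and the share of national output that depends on the internet. The sketch below is a hypothetical back-of-the-envelope calculation, not the NetLoss methodology; the GDP figure and digital-economy share are assumed inputs chosen only to show the order of magnitude.

```python
# Hypothetical back-of-the-envelope estimate of an internet shutdown's
# economic cost. This is NOT the NetLoss methodology; it only illustrates
# how duration and the digital share of GDP drive estimates of the kind
# quoted above.

def shutdown_cost_usd(annual_gdp_usd: float,
                      digital_share: float,
                      hours_offline: float) -> float:
    """Estimate the output lost during a full internet shutdown.

    annual_gdp_usd -- annual GDP in U.S. dollars (assumed input)
    digital_share  -- fraction of GDP tied to internet-dependent activity
    hours_offline  -- duration of the shutdown in hours
    """
    # Spread the digital economy's annual output evenly across the year,
    # then count the hours it was unavailable.
    hourly_digital_output = annual_gdp_usd * digital_share / (365 * 24)
    return hourly_digital_output * hours_offline

# Assumed inputs for illustration only (roughly Iraq-scale GDP).
print(f"${shutdown_cost_usd(250e9, 0.015, 4):,.0f}")  # ~$1.7 million for 4 hours
```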
According to the press release, internet shutdowns worldwide peaked in 2022, with governments increasingly blocking internet services due to concerns over civil unrest or cybersecurity threats. These disruptions are extremely damaging to the economy, the release said, as they impede online commercial activity and expose companies and the economy to financial and reputational risks.