Social Media

Social Media Companies Can Block and Control Harmful Content Amidst Current Coronavirus Disinformation

Photo of Ryan Calo, principal investigator at the Center for an Informed Public, courtesy of Cornell University

June 9, 2020 — Though disinformation is rampant online, there is still hope that social media companies can control it, said Ryan Calo, principal investigator at the Center for an Informed Public, in an interview on KUOW Public Radio Tuesday.

While social media platforms have recently seen vast amounts of false information about the coronavirus pandemic and national protests, social media companies like Facebook have been able to block similarly harmful content in the past, Calo said.

“You just don’t see online gambling advertisements the way you used to; you don’t see jihadi recruitment videos the way you used to,” he pointed out.

However, Calo said that companies must be placed under pressure before they will move to cut down on harmful content.

“What you see over time is that when a company is highly motivated to put an end to something…they’ve been really good at it,” he said.

Calo also argued for increased responsibility for those in positions of authority who knowingly share false or misleading material. If they recognize that the content they shared was misleading, he said, they should take steps to clarify and correct the mistake.

“The best practice is that they should come clean about it, and they should take a screenshot of it,” he said. “Delete the actual tweets so it can’t continue to propagate.”

Calo said that these measures are crucial in an age of uncertainty about social media platforms’ responsibility for misleading content on their websites.

These concerns reached a high point in late May when President Donald Trump tweeted that mail-in ballots would be “substantially fraudulent.”

“Mail boxes [sic] will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone living in the state, no matter who they are or how they got there, will get one,” he tweeted.

Twitter added warning labels to the tweets, saying that they were misleading and urging users to “Get the facts about mail-in ballots.”

In response, Trump signed an executive order attempting to roll back legal protections for Twitter and other platforms that engage in content moderation.

However, Calo said that Twitter’s actions were not only constitutional but also part of the cost of opting into their service in the first place.

“The First Amendment limits what the government can do, not what Twitter can do as a private company,” he said. “…The President can’t stop them from commenting on what he’s saying.”

The interview can be viewed here.

Elijah Labby was a Reporter with Broadband Breakfast. He was born in Pittsburgh, Pennsylvania and now resides in Orlando, Florida. He studies political science at Seminole State College, and enjoys reading and writing fiction (but not for Broadband Breakfast).

Free Speech

Improved Age Verification Allows States to Consider Restricting Social Media

The constitutional issues that have led courts to strike down similar age verification laws are still present, said the EFF.

WASHINGTON, November 20, 2023 — A Utah law requiring age verification for social media accounts is likely to face First Amendment lawsuits, experts warned during an online panel Wednesday hosted by Broadband Breakfast.

The law, set to take effect in March 2024, mandates that all social media users in Utah verify their age and imposes additional restrictions on minors’ accounts.

The Utah law raises the same constitutional issues that have led courts to strike down similar laws requiring age verification, said Aaron Mackey, free speech and transparency litigation director at the non-profit Electronic Frontier Foundation.

“What you have done is you have substantially burdened everyone’s First Amendment right to access information online that includes both adults and minors,” Mackey said. “You make no difference between the autonomy and First Amendment rights of older teens and young adults” versus young children, he said.

But Donna Rice Hughes, CEO of Enough is Enough, contended that age verification technology has successfully restricted minors’ access to pornography and could be applied to social media as well.

“Utah was one of the first states [to] have age verification technology in place to keep minor children under the age of 18 off of porn sites and it’s working,” she said.

Tony Allen, executive director of the Age Check Certification Scheme, agreed that age verification systems had progressed considerably since a generation ago, when the Supreme Court, in 2002’s Ashcroft v. American Civil Liberties Union, struck down the 1998 Child Online Protection Act. The law had been designed to shield minors from indecent material, but the court ruled that age-verification methods often failed at that task.

Andrew Zack, policy manager at the Family Online Safety Institute, said that his organization welcomed interest in youth safety policies from Utah.

But Zack said, “We still have some concerns about the potential unintended consequences that come with this law,” particularly for teen privacy and expression rights.

Taylor Barkley, director of technology and innovation at the Center for Growth and Opportunity, highlighted the importance of understanding the specific problems the law aims to address. “Policy solutions have trade-offs,” he said, urging that solutions be tailored to the problems identified.

Panelists generally agreed that comprehensive data privacy legislation could help address social media concerns without facing the same First Amendment hurdles.

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, November 15, 2023 – Social Media for Kids in Utah

In March 2023, Utah became the first state to adopt laws regulating kids’ access to social media. This legislative stride was rapidly followed by several states, including Arkansas, Illinois, Louisiana, and Mississippi, with numerous others contemplating similar measures. For nearly two decades, social media platforms enjoyed unbridled growth and influence. The landscape is now changing as lawmakers become more active in shaping the future of digital communication. This transformation calls for a nuanced evaluation of the current state of social media in the United States, particularly in light of Utah’s pioneering role. Is age verification the right way to go? What are the broader implications of this regulatory trend for the future of digital communication and online privacy across the country?

Panelists

  • Andrew Zack, Policy Manager, Family Online Safety Institute
  • Donna Rice Hughes, President and CEO of Enough Is Enough
  • Taylor Barkley, Director of Technology and Innovation, Center for Growth and Opportunity
  • Tony Allen, Executive Director, Age Check Certification Scheme
  • Aaron Mackey, Free Speech and Transparency Litigation Director, Electronic Frontier Foundation
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources

Andrew Zack is the Policy Manager for the Family Online Safety Institute, leading policy and research work relating to online safety issues, laws, and regulations. He works with federal and state legislatures, relevant federal agencies, and industry leaders to develop and advance policies that promote safe and positive online experiences for families. Andrew joined FOSI after five years in Senator Ed Markey’s office, where he worked primarily on education, child welfare, and disability policies. Andrew studied Government and Psychology at the College of William and Mary.

Donna Rice Hughes, President and CEO of Enough Is Enough is an internationally known Internet safety expert, author, speaker and producer. Her vision, expertise and advocacy helped to birth the Internet safety movement in America at the advent of the digital age. Since 1994, she has been a pioneering leader on the frontlines of U.S. efforts to make the internet safer for children and families by implementing a three-pronged strategy of the public, the technology industry and legal community sharing the responsibility to protect children online.

Taylor Barkley is the Director of Technology and Innovation at the Center for Growth and Opportunity, where he manages the research agenda and strategy and represents the technology and innovation portfolio. His primary research and expertise are at the intersection of culture, technology, and innovation. Prior roles in tech policy have been at Stand Together, the Competitive Enterprise Institute, and the Mercatus Center at George Mason University.

Tony Allen is a Chartered Trading Standards Practitioner and an acknowledged specialist in age-restricted sales law and practice. He is the Chair of the UK Government’s Expert Panel on Age Restrictions and Executive Director of a UKAS accredited conformity assessment body specialising in age and identity assurance testing and certification. He is the Technical Editor of the current international standard for Age Assurance Systems.

Aaron Mackey is EFF’s Free Speech and Transparency Litigation Director. He helps lead cases advancing free speech, anonymity, and privacy online while also working to increase public access to government records. Before joining EFF in 2015, Aaron was in Washington, D.C., where he worked on speech, privacy, and freedom of information issues at the Reporters Committee for Freedom of the Press and the Institute for Public Representation at Georgetown Law.

Breakfast Media LLC CEO Drew Clark has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.


Social Media

Senate Commerce Committee Passes Two Bills To Protect Children Online

The bills failed to make headway in a previous Congress.

Screenshot of Sen. Edward Markey, D-Mass., during the markup Thursday

WASHINGTON, July 27, 2023 – The Senate Commerce Committee on Thursday swiftly passed two bills aimed at protecting the safety and privacy of children online, exactly one year after the same bills passed the committee but failed to advance further.

The first bill to clear the committee was the Kids Online Safety Act, which requires social media sites to put in place safeguards protecting users under the age of 17 from content that promotes harmful behaviors, such as suicide and eating disorders. KOSA was first introduced in 2022 by Sen. Richard Blumenthal, D-Conn., and Sen. Marsha Blackburn, R-Tenn. It previously won bipartisan support but ultimately failed to become law.

The current version of the bill was reintroduced in May, gaining traction in several hearings, and picked up more than 30 co-sponsors. Several changes were made to the text, including a specific list of online harms and certain exemptions for support services, such as substance abuse groups that might unintentionally suffer from the bill’s requirements.

The bill was also amended Thursday to include a provision proposed by Sen. John Thune, R-S.D., that would require companies to disclose their use of algorithms for content filtering and give users the choice to opt out.

Critics of the bill, however, said the revised version largely resembled the original one and failed to address issues raised before. These concerns included sections that would require tech companies to collect more data to filter content and verify user age, as well as an infringement on children’s free speech.

Sen. Ted Cruz, R-Texas, supported the bill but agreed that more work needs to be done before it moves to the floor. Since the committee’s last markup of KOSA, several states have approved measures concerning children’s online safety that might be inconsistent with the bill’s provisions, he noted, proposing a preemption provision to ensure the bill would be enforced regardless of state laws.

The Children and Teens’ Online Privacy Protection Act, or COPPA 2.0, introduced by Sens. Edward Markey, D-Mass., and Bill Cassidy, R-La., was the second bill passed out of the committee. It expands on existing legislation, in effect since 2000, that protects children from harmful marketing. The bill would make it illegal for websites to collect data on children under the age of 16, outlaw marketing specifically aimed at kids, and allow parents to erase their kids’ information from the websites.

“It is time for Congress to meet this moment and to act with the urgency that these issues demand,” said Sen. Markey.

This pair of bills is among many seeking to protect children from online harms, none of which has made headway in Congress so far.


Free Speech

UK’s Online Safety Bill Likely to Impact American User Experience

The bill will affect the tone and content of discussion on U.S.-owned platforms that wish to continue offering UK services.

Screenshot of Amy Peikoff of BitChute

WASHINGTON, July 21, 2023 – The United Kingdom’s Online Safety Bill will impact American-based users’ experience on various platforms, said panelists at a Broadband Breakfast Live Online event Wednesday.

The Online Safety Bill is the UK’s response to concerns about the negative impact of various internet platforms and applications. The core of the bill addresses illegal content and content that is harmful to children. It places a duty of care on internet sites, including social media platforms, search engines, and online shopping centers, to provide risk assessments for their content, prevent access to illegal content, protect privacy, and prevent children from accessing harmful content. 

The legislation would apply to any business that has a substantial user base in the UK, which could have unforeseen impacts on the end-user experience, said Amy Peikoff, Chief Policy Officer of UK-based video-streaming platform BitChute.

Even though it is not U.S. legislation, the bill will affect the tone and content of discussion on U.S.-owned platforms that wish to continue offering their services in the jurisdictions where it is enacted, said Peikoff. Already, the European Union’s Digital Services Act is affecting Twitter, which is “throttling its speech” to turn out statistics saying a certain percentage of its content is “healthy,” she claimed.

Large social media companies as we know them are finished, Peikoff said.  

Ofcom, the UK’s communications regulator, will be responsible for providing guidelines and best practices as well as conducting investigations and audits. It will be authorized to levy fines against a company that fails to adhere to the law, and may enact rules that require companies to provide user data to the agency and/or screen user messages for harmful content.

Peikoff claimed that the legislation could set off a chain of events, “namely, that platforms like BitChute would be required to affirmatively, proactively scan every single piece of content – comments, videos, whatever posted to the platform – and keep a record of any flags.” She added that U.S.-based communication would not be exempt.

Meta-owned WhatsApp, a popular messaging app, has warned that it will exit the UK market if the legislation requires it to release data about its users or screen their messages, claiming that doing so would “compromise” the privacy of all users and threaten the encryption on its platform. 

Matthew Lesh, director of public policy and communications at the UK think tank Institute of Economic Affairs, said that the bill is a “recipe for censorship on an industrial, mechanical scale.” He warned that many companies will choose to simply block UK-based users from using their services, harming UK competitiveness globally and discouraging investors.  

In addition, Lesh highlighted privacy concerns introduced by the legislation. Because it levies fines on platforms that host harmful content accessible to children, companies may have to screen for children by requiring users to present government-issued IDs, a major privacy concern for users.

The primary issue with the bill and similar policies, said Lesh, is that they apply the same moderation policies to all online platforms, which can limit certain speech and stifle healthy discussion and interaction across political lines.

The bill is currently in the final stages of the committee stage in the House of Lords, the UK’s second chamber of parliament. Following its passage there, the bill will go to the House of Commons, where it will either be amended or accepted and become law. General support in the UK’s parliament suggests that the bill will be implemented sometime next year.

This follows considerable debate in the United States regarding content moderation, much of which centers on possible reform of Section 230. Section 230 protects platforms from being treated as the publisher or speaker of information originating from a third party, shielding them from liability for users’ posts.


Wednesday, July 19, 2023 – The UK’s Online Safety Bill

The UK’s Online Safety Bill, which seeks to make the country “the safest place in the world to be online,” has seen as much upheaval as the nation itself in the last four years. Four prime ministers, one Brexit and one pandemic later, it’s just a matter of time until the bill finally passes the House of Lords and eventually becomes law. Several tech companies including WhatsApp, Signal, and Wikipedia have argued against its age limitation and breach of end-to-end encryption. Will this legislation serve as a model for governments worldwide to regulate online harms? What does it mean for the future of U.S. social media platforms?

Panelists

  • Amy Peikoff, Chief Policy Officer, BitChute
  • Matthew Lesh, Director of Public Policy and Communications, Institute of Economic Affairs
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources

Amy Peikoff is Chief Policy Officer for BitChute. She holds a BS in Math/Applied Science and a JD from UCLA, as well as a PhD in Philosophy from University of Southern California, and has focused in her academic work and legal activism on issues related to the proper legal protection of privacy. In 2020, she became Chief Policy Officer for the free speech social media platform, Parler, where she served until Parler was purchased in April 2023.

Matthew Lesh is the Director of Public Policy and Communications at the Institute of Economic Affairs. Matthew often appears on television and radio, is a columnist for London’s CityAM newspaper, and is a regular writer for publications such as The Times, The Telegraph and The Spectator. He is also a Fellow of the Adam Smith Institute and the Institute of Public Affairs.

Drew Clark is CEO of Breakfast Media LLC. He has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

Illustration from the Spectator

