Big Tech

Big Tech’s Response to Coronavirus: Face Masks, Hiring Binges, Free Web Sites and Cash Donations

Photo collage of Jeff Bezos, Tim Cook, Mark Zuckerberg and Sundar Pichai

April 2, 2020 – Big technology companies have been closely scrutinized by legislators and the executive branch in Washington for at least two years now. But this “techlash” is largely forgotten in the wake of the coronavirus pandemic.

Whether the future looks back on big tech as a savior or a vulture depends largely on how these companies – particularly Amazon, Apple, Facebook and Google – act and react in the coming weeks and months.

To see how they have been acting since the pandemic emerged, Broadband Breakfast has assembled this inventory of their actions since the import of the new coronavirus and COVID-19, the disease it causes, came to light.

A timeline of how the big four of big tech have responded to coronavirus

Amazon has made much of its software available to governments, schools, and health workers and has donated several million dollars to fighting the pandemic. The company has also seemed to pounce on a suddenly vulnerable economy, announcing that it would hire 100,000 new warehouse workers while other companies began mass layoffs. And it has not escaped criticism regarding its handling of those workers’ health and safety.

In general, Apple has focused single-mindedly on donating as many masks as possible.

Facebook has donated several hundred million dollars to aspects of the coronavirus response, ranging from combatting misinformation to donating ad space to health authorities. Its newly unveiled Community Help feature, which connects Facebook users offering help with those requesting it, is untested but widely anticipated.

Of the big four, Google has donated the most to address the coronavirus, with a contribution of $800 million for small businesses, governments, and health workers. It has also suffered in the press after getting caught in the crossfire between the Trump administration and reporters over the coronavirus website affair.

Amazon

March 16th: Amazon announces it will hire 100,000 employees amid the pandemic, according to the New York Times.

March 18th: The Atlantic reports that Amazon confirmed the first case of coronavirus in an American warehouse, in Queens, New York.

March 20th: Amazon announces it will reduce the quality of its streaming service Amazon Prime Video in Europe, following in YouTube’s footsteps, according to The Verge.

March 20th: Amazon donates $20 million to research and development of coronavirus testing, announced in a company blog post.

March 22nd: Trump announces during his daily coronavirus task force press conference that Amazon, as part of a consortium comprising Google, Microsoft, and IBM, will offer hundreds of petaflops worth of computational power for analysis projects in epidemiology, bioinformatics and molecular modeling, according to TechCrunch.

March 25th: Amazon provides its Amazon Web Services “cloud technologies and technical expertise” to help the World Health Organization aggregate epidemiological data.

March 25th: Amazon offers its Future Engineer collection of computer science literacy courses to schools.

March 27th: Amazon donates $4 million to British Red Cross and several British government institutions.

March 28th: Amazon donates its AWS infrastructure technology to help Boston Children’s Hospital track the spread of coronavirus through the use of crowdsourcing maps.

March 30th: Amazon grants $5 million to 400 Seattle small businesses.

March 30th: Amazon provides 73,000 meals to medically vulnerable residents in Seattle in conjunction with local catering business Gourmondo.

March 30th: Amazon remotely upgrades Alexa to answer the question “Alexa, what do I do if I think I have coronavirus?”

March 31st: Amazon fires warehouse worker who staged a walkout at a Staten Island warehouse. New York Attorney General Letitia James, who is already leading an investigation of Facebook with eight other state attorneys general, calls for an investigation.

April 1st: Amazon formally recommends “All corporate office employees who work in a capacity that can be done from home are recommended to do so through April 24.”

April 2nd: Amazon announces it will release a bevy of free webinars “on a variety of remote learning topics,” available on April 6th.

April 2nd: Amazon donates infrastructure technology that will allow the aggregation of information across formerly siloed British National Health Service departments, which will share data such as occupancy levels and wait times at specific hospitals.

Apple

Sometime in early March: Apple issues a work-at-home order to its employees, loosening its notorious policy of secrecy surrounding unreleased products, such as having designers work in rooms with blacked-out windows, according to a report by Bloomberg.

March 13th: Apple closes all retail stores outside of China, Tim Cook announces in a statement to the public. In the same letter, Apple says its donations to coronavirus-related efforts have reached $15 million.

March 19th: Tim Cook announces via Twitter that Apple will be donating an undisclosed amount of money to Protezione Civile, Italy’s official emergency response task force for the coronavirus.

March 20th: 9to5Mac is the first to notice Apple’s unannounced decision to downgrade streaming resolution on Apple TV+ for its European customers.

March 25th: Tim Cook announces via Twitter that Apple has sourced 10 million masks for the U.S. and millions more for Europe.

March 27th: Apple releases COVID-19 Screening website in conjunction with the CDC and the White House that tells users about the disease, testing, and what steps they can take.

April 1st: Apple donates $20 million to China’s coronavirus efforts, Tim Cook announces through Weibo, China’s equivalent of Twitter.

April 1st: Apple donates 2 million masks to the State of New York, according to a tweet by Governor Andrew Cuomo.

April 2nd: Apple teams up with actor Leonardo DiCaprio to raise $15 million for food charity, according to Business Insider. As of this writing, they have already raised $12 million.

Facebook

January 30th: Facebook announces a more aggressive stance on removing misinformation regarding coronavirus following the World Health Organization’s declaration of a public health emergency of international concern. Facebook also begins offering free ad credits to organizations that advertise responsibly about the coronavirus.

February 26th: Facebook begins promoting links to WHO at the top of search results on coronavirus.

March 13th: CEO Mark Zuckerberg announces through a Facebook post that he will match $20 million worth of donations to the WHO and the Centers for Disease Control and Prevention.

March 16th: Facebook, along with Google, Microsoft, and other tech companies, releases a vague statement promising to help “people stay connected,” “[elevate] authoritative content,” and “[share] critical updates,” according to The Verge.

March 17th: Facebook donates $1 million to the International Fact-Checking Network and $1 million in grant money to local news organizations across the U.S. and Canada to support their coverage of coronavirus.

March 17th: Facebook restores all posts that were incorrectly flagged for spam following a report by The Verge, announced through a Twitter post by Guy Rosen, Facebook’s vice president of integrity.

March 18th: Facebook doubles server capacity for WhatsApp.

March 18th: Chief Operating Officer Sheryl Sandberg announces through a Facebook post that the company will offer $100 million in cash grants and ad credits for “up to 30,000 eligible small businesses in over 30 countries where we operate.”

March 18th: Zuckerberg hosts press call where he announces Coronavirus Information Center feature in the Facebook News Feed. He also announces that Facebook will give governments and emergency services free access to work collaboration tool Facebook Workplace for 12 months.

Sometime between March 19th and March 23rd: The New York Times acquires an internal report by Facebook data scientist Ranjan Subramanian showing a huge uptick in news clicks through the Facebook News Feed, which had been declining for years. The uptick was specifically for “high-quality” and local news.

March 19th: Facebook sends its contract content reviewers home, shifting content review work from contractors to full-time employees.

March 22nd: Zuckerberg announces through a Facebook post the donation of 720,000 masks that had been reserved for employees in case of forest fires. Although Facebook has only 45,000 employees, a Facebook spokesman said that “the masks were from our emergency disaster reserve and many had been acquired due to the recent dangerous California wildfires. As recommended, Facebook has emergency supplies like food, water, masks and other supplies on hand like many other companies.”

March 23rd: Facebook announces it will temporarily downgrade video streaming quality on Facebook and Instagram in Europe and Latin America, according to Reuters.

March 23rd – March 26th: Facebook co-hosts a COVID-19 Global Hackathon aimed at fostering the development of software that addresses “some of the challenges related to the current coronavirus pandemic.” Winners will be announced on April 10.

March 26th: Facebook launches Get Digital, an online resource that helps teach kids how to responsibly use the internet.

March 26th: Facebook launches its “Messenger Coronavirus Community Hub,” a webpage that explains how to get the most out of Facebook Messenger.

March 29th: Facebook invests $100 million in news industry to support publishers “at a time when advertising revenue is declining.”

March 30th: Facebook donates $25 million to support healthcare workers.

March 31st: Facebook launches its Community Help feature, making it easier for users to both request and offer services such as delivering groceries or providing transportation.

Google

March 3rd: Google makes its video-conferencing tool Hangouts Meet available for free for G Suite Users and schools and expands its hosting maximum to 250 participants.

March 13th: Trump blindsides Google by announcing a thorough coronavirus screening website. At that point, Google has developed no such thing.

March 15th: Sundar Pichai, CEO of Google’s parent company Alphabet, clarifies the situation through a blog post, insisting that Google is creating a very straightforward database of resources.

March 20th: Google subsidiary YouTube announces it will downgrade its video streaming resolution in Europe following a Twitter plea from European Commissioner Thierry Breton.

March 21st: Google rolls out the coronavirus information website made notorious by Trump’s surprise and confusing announcement in the White House Rose Garden. The website hardly resembles the one Trump and federal coronavirus response coordinator Dr. Deborah Birx described using a big, fictional chart.

March 21st: Google redesigns its search page to highlight content from health authorities like the CDC and WHO.

March 22nd: Trump announces during his daily Coronavirus Task Force press conference that Google, as part of a consortium comprising Amazon, Microsoft, and IBM, will offer hundreds of petaflops worth of computational power for analysis projects in epidemiology, bioinformatics and molecular modeling, according to TechCrunch.

March 24th: YouTube announces it will be lowering streaming quality around the world according to a report from Bloomberg.

March 27th: Pichai announces an $800 million donation to small businesses, governments, and health workers via a blog post.

April 2nd: Google donates $6.5 million to fund fact-checking organizations in an effort to combat misinformation.

April 2nd: Google creates a public health dataset built on the back of its BigQuery data warehouse and opens the dataset to researchers.

Section 230

Tech Groups, Free Expression Advocates Support Twitter in Landmark Content Moderation Case

The Supreme Court’s decision could dramatically alter the content moderation landscape.


Photo of Supreme Court Justice Clarence Thomas courtesy of Stetson University

WASHINGTON, December 8, 2022 — Holding tech companies liable for the presence of terrorist content on their platforms risks substantially limiting their ability to effectively moderate content without overly restricting speech, according to several industry associations and civil rights organizations.

The Computer & Communications Industry Association, along with seven other tech associations, filed an amicus brief Tuesday emphasizing the vast amount of online content generated on a daily basis and the existing efforts of tech companies to remove harmful content.

A separate coalition of organizations, including the Electronic Frontier Foundation and the Center for Democracy & Technology, also filed an amicus brief.

Supreme Court to hear two social media cases next year

The briefs were filed in support of Twitter as the Supreme Court prepares to hear Twitter v. Taamneh in 2023, alongside the similar case Gonzalez v. Google. The cases, brought by relatives of ISIS attack victims, argue that social media platforms allow groups like ISIS to publish terrorist content, recruit new operatives and coordinate attacks.

Both cases were initially dismissed, but an appeals court in June 2021 overturned the Taamneh dismissal, holding that the case adequately asserted its claim that tech platforms could be held liable for aiding acts of terrorism. The Supreme Court will now decide whether an online service can be held liable for “knowingly” aiding terrorism if it could have taken more aggressive steps to prevent such use of its platform.

The Taamneh case hinges on the Anti-Terrorism Act, which says that liability for terrorist attacks can be placed on “any person who aids and abets, by knowingly providing substantial assistance.” The case alleges that Twitter did this by allowing terrorists to utilize its communications infrastructure while knowing that such use was occurring.

Gonzalez is more directly focused on Section 230, a provision under the Communications Decency Act that shields platforms from liability for the content their users publish. The case looks at YouTube’s targeted algorithmic recommendations and the amplification of terrorist content, arguing that online platforms should not be protected by Section 230 immunity when they engage in such actions.

Justice Clarence Thomas tips his hand against Section 230

Supreme Court Justice Clarence Thomas wrote in 2020 that the “sweeping immunity” granted by current interpretations of Section 230 could have serious negative consequences, and suggested that the court consider narrowing the statute in a future case.

Experts have long warned that removing Section 230 could have the unintended impact of dramatically increasing the amount of content removed from online platforms, as liability concerns will incentivize companies to err on the side of over-moderation.

Without some form of liability protection, platforms “would be likely to use necessarily blunt content moderation tools to over-restrict speech or to impose blanket bans on certain topics, speakers, or specific types of content,” the EFF and other civil rights organizations argued.

Platforms are already self-motivated to remove harmful content because failing to do so can risk their user base, CCIA and the other tech organizations said.

There is an immense amount of harmful content to be found online, and moderating it is a careful, costly and iterative process, the CCIA brief said, adding that “mistakes and difficult judgement calls will be made given the vast amounts of expression online.”


Social Media

Twitter Takeover by Elon Musk Forces Conflict Over Free Speech on Social Networks

Transparency laws in Calif. and N.Y. are the ‘liberal’ counterpart to the ‘conservative’ speech laws in Texas and Florida.


WASHINGTON, November 23, 2022 — As the Supreme Court prepares to hear two cases that may decide the future of content moderation, panelists on a Broadband Breakfast Live Online panel disagreed over the steps that platforms can and should take to ensure fairness and protect free speech.

Mike Masnick, founder and editor of Techdirt, argued that both sides of the aisle were attempting to control speech in one way or another, pointing to laws in California and New York as the liberal counterpoints to the laws in Texas and Florida that are headed to the Supreme Court.

“They’re not as blatant, but they are nudging companies to moderate in a certain way,” he said. “And I think those are equally unconstitutional.”

Censorship posed a greater threat to the ideal of free speech than would a law forcing platforms to carry certain content, said Bret Swanson, a nonresident senior fellow at the American Enterprise Institute.

“Free speech and pluralism, as an ethos for the country and really for the West, are in fact more important than the First Amendment,” he said.

At the same time, content moderation legislation is stalled by a sharp partisan divide, said Mark MacCarthy, a nonresident senior fellow in governance studies at the Brookings Institution’s Center for Technology Innovation.

“Liberals and progressives want action to remove lies and hate speech and misinformation from social media and the conservatives want equal time for conservative voices, so there’s a logjam gridlock that can’t move,” he said. “I think it might be broken if, as I predict, the Supreme Court says that the only way you can regulate social media companies is through transparency.”

Twitter’s past and current practices raise questions about bias and free speech

While talking about Elon Musk’s controversial changes to Twitter’s content moderation practices, panelists also discussed the impact of Musk’s rhetoric surrounding the topic more broadly.

“Declaring yourself as a free speech site without understanding what free speech actually means is something that doesn’t last very long,” Masnick said.

When a social media company like Twitter or Parler declares itself to be a “free speech site,” it is really just sending a signal to some of the worst people and trolls online to begin harassment, abuse and bigotry, he said.

That is not a sustainable business model, Masnick argued.

But Swanson took the opposite approach. He called Musk’s acquisition of Twitter “a real seminal moment in the history and the future of free speech,” and called it an antidote to “the most severe collapse of free speech maybe in American history.”

MacCarthy said he didn’t believe the oft-repeated assertion that Twitter was biased against conservatives before Musk took over. “The only study I’ve seen of political pluralism on Twitter — and it was done by Twitter itself back when they had the staff to do that kind of thing — suggested that Twitter’s amplification and recommendation engines actually favored conservative tweets over liberal ones.”

Masnick agreed, pointing to other academic studies: “They seemed to bend over backwards to often allow conservatives to break the rules more than others,” he said.

Randolph May, president of The Free State Foundation, said that he was familiar with the studies but disagreed with their findings.

Citing the revelations from the laptop of Hunter Biden, a story that the New York Post broke in October 2020 about Joe Biden’s son, May said: “To me, that was a consequential censorship action. Then six months later before a congressional committee, [Twitter CEO] Jack Dorsey said, ‘Oops, we made a big mistake when we took down the New York Post stories.’”

Multiple possibilities for the future of content moderation

Despite his criticism of current practices, May said he did not believe platforms should eliminate content moderation practices altogether. He drew a distinction between topics subject to legitimate public debate and those posts that encourage terrorism or facilitate sex trafficking. Those kinds of posts should be subject to moderation practices, he said.

May made three suggestions for better content moderation practices: First, platforms should establish a presumption that they will not censor or downgrade material without clear evidence that their terms of service have been violated.

Second, platforms should work to enable tools that facilitate personalization of the user experience.

Finally, the current state of Section 230 immunity should be replaced with a “reasonableness standard,” he said.

Other panelists disagreed with the subjectivity of such a reasonableness standard. MacCarthy highlighted the Texas social media law, which bans discrimination based on viewpoint. “Viewpoint is undefined: What does that mean?” he asked.

“Does it mean you can’t get rid of Nazi speech, you can’t get rid of hate speech, you can’t get rid of racist speech? What does it mean? No one knows. And so here’s a requirement of government that no one can interpret. If I were the Supreme Court, I’d declare that void for vagueness in a moment.”

MacCarthy predicted that the Supreme Court would reject the content-based provisions in the Texas and Florida laws while upholding the transparency standard, opening the door, he argued, for bipartisan transparency legislation.

But to Masnick, even merely a transparency requirement would be an unsatisfactory result: “How would conservatives feel if the government said, ‘Fox News needs to be transparent about how they make their editorial decision making?’”

“I think everyone would recognize immediately that that is a huge First Amendment concern,” he said.


Wednesday, November 23, 2022, 12 Noon ET – Elon and Ye and Donald, Oh My!

With Elon Musk finally taking the reins at Twitter after a tumultuous acquisition process, what additional new changes will come to the world’s de facto public square? The world’s richest man has already reinstated certain banned accounts, including that of former president Donald Trump. Trump has made his own foray into the world of conservative social media, as has politically polarizing rapper Ye, formerly Kanye West, currently in the process of purchasing right-wing alternative platform Parler. Ye is no stranger to testing the limits of controversial speech. With Twitter in the hands of Musk, Parler in the process of selling and Trump’s Truth Social sort-of-kind-of forging ahead in spite of false starts, is a new era of conservative social media upon us?

Panelists

  • Mark MacCarthy, Nonresident Senior Fellow in Governance Studies, Center for Technology Innovation, Brookings Institution
  • Mike Masnick, Founder and Editor, Techdirt
  • Randolph May, President, The Free State Foundation
  • Bret Swanson, Nonresident Senior Fellow, American Enterprise Institute
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

Mark MacCarthy is a Nonresident Senior Fellow in Governance Studies at the Center for Technology Innovation at Brookings. He is also adjunct professor at Georgetown University in the Graduate School’s Communication, Culture, & Technology Program and in the Philosophy Department. He teaches courses in the governance of emerging technology, AI ethics, privacy, competition policy for tech, content moderation for social media, and the ethics of speech. He is also a Nonresident Senior Fellow in the Institute for Technology Law and Policy at Georgetown Law.

Mike Masnick is the founder and editor of the popular Techdirt blog as well as the founder of the Silicon Valley think tank, the Copia Institute. In both roles, he explores the intersection of technology, innovation, policy, law, civil liberties, and economics. His writings have been cited by Congress and the EU Parliament. According to a Harvard Berkman Center study, his coverage of the SOPA copyright bill made Techdirt the most linked-to media source throughout the course of that debate.

Randolph May is founder and president of The Free State Foundation, an independent, non-profit free market-oriented think tank founded in 2006. He has practiced communications, administrative, and regulatory law as a partner at major national law firms. From 1978 to 1981, May served as Assistant General Counsel and Associate General Counsel at the Federal Communications Commission. He is a past Chair of the American Bar Association’s Section of Administrative Law and Regulatory Practice.

Bret Swanson is president of the technology research firm Entropy Economics LLC, a nonresident senior fellow at the American Enterprise Institute, a visiting fellow at the Krach Institute for Tech Diplomacy at Purdue University and chairman of the Indiana Public Retirement System (INPRS). He writes the Infonomena newsletter at infonomena.substack.com.

Drew Clark (moderator) is CEO of Breakfast Media LLC, the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Under the American Recovery and Reinvestment Act of 2009, he served as head of the State Broadband Initiative in Illinois. Now, in light of the 2021 Infrastructure Investment and Jobs Act, attorney Clark helps fiber-based and wireless clients secure funding, identify markets, broker infrastructure and operate in the public right of way.

Social media controversy has centered around Elon Musk’s Twitter, Ye’s new role in Parler, and former U.S. President Donald Trump



Social Media

Trump’s Twitter Account Reinstated as Truth Social Gets Merger Extension

The merger, delayed by a federal probe, has left Truth Social without expected funding.


Elon Musk next to a phone displaying the Twitter account of Donald Trump, who has said he will continue to post only on Truth Social.
Photo courtesy of Steve Jurvetson. Graphic by Em McPhie.

WASHINGTON, November 22, 2022 — Digital World Acquisition Corp. shareholders voted Tuesday to extend the Dec. 8 deadline for its merger with Truth Social, giving the platform a chance at survival as it faces financial and legal challenges.

The right-wing alternative social media platform championed by former President Donald Trump is currently under federal investigation for potential securities violations, which has delayed the merger and forced Truth Social to operate without $1.3 billion in expected funding.

The DWAC vote was delayed six times in order to raise the necessary support, with the company noting in a securities filing that it would be “forced to liquidate” if the vote was unsuccessful. Private investors have already withdrawn millions in funding.

Trump indicated on Truth Social in September that he was prepared to find alternative funding. “SEC trying to hurt company doing financing (SPAC),” he wrote. “Who knows? In any event, I don’t need financing, ‘I’m really rich!’ Private company anyone???”

Trump’s potential return to Twitter poses another risk for Truth Social

Meanwhile, under the new leadership of Elon Musk, Twitter reinstated Trump’s account, which was banned after then-Twitter executives alleged he stoked the January 6 riot at the Capitol. The reinstatement was made official after Musk asked in a public Twitter poll — which received around 15 million votes — whether he should allow the controversial former president back on the platform.

Trump’s potential return to Twitter could undermine Truth Social’s primary attraction, which could be another blow to the fledgling platform.

On Truth Social, the former president encouraged his followers to vote in the poll while indicating that he would not return to Twitter. But with 87 million followers on Twitter and fewer than 5 million on Truth Social, Trump may be tempted to make use of his newly reinstated account despite statements to the contrary, particularly in light of the official announcement of his 2024 presidential campaign.

The campaign could also allow him to bypass his agreement to first post all social media messages to Truth Social and wait six hours before sharing to other platforms. The agreement makes a specific exception for political messaging and fundraising, according to an SEC filing.

Musk’s decision to bring back Trump was one of many controversial decisions he’s made in his short tenure at the social media company — including a number of high-profile firings and the reinstatement of multiple formerly-banned accounts — which has led several major advertisers to pause spending.

Musk tweeted in October that he would convene a “content moderation council with widely diverse viewpoints” before making any “major content decisions or account reinstatements.” No such council has been publicly announced, and the tweet appeared to have been deleted as of Tuesday.

Ye returns to Twitter while details of Parler acquisition remain uncertain

Trump’s reinstatement seems to have motivated at least one controversial figure to return to Twitter: Ye, formerly Kanye West, whose account was restricted in October after he tweeted that he would go “death con 3 on JEWISH PEOPLE.” The restrictions were lifted prior to Musk’s acquisition of Twitter, but the rapper remained silent on the platform until Nov. 20.

“Testing Testing Seeing if my Twitter is unblocked,” he posted.

Right-wing social media platform Parler announced in October that Ye had agreed to purchase the company. Completion of the acquisition is expected by the end of December, but further details, including financial terms, have yet to be announced.

Twitter draws legislative attention, with changes to the social media landscape on the horizon

One of Musk’s first major changes to Twitter attempted to replace the existing verification system with a process through which anyone could pay $8 per month for a verified account. The initial rollout of paid verification sparked a swarm of accounts impersonating brands and public figures such as Sen. Ed Markey, D-Mass., who responded with a letter demanding answers about how the new verification process would prevent future impersonation.

Markey also co-signed a Nov. 17 letter written by Sen. Richard Blumenthal, D-Conn., asking the Federal Trade Commission to investigate Twitter for consumer protection violations in light of “serious, willful disregard for the safety and security of its users.”

Musk responded to the letter by posting a meme that mocked the senators’ priorities, but he later appeared to be rethinking the new verification process.

“Holding off relaunch of Blue Verified until there is high confidence of stopping impersonation,” Musk tweeted on Monday.

Other changes to the platform may be out of Musk’s hands, as state and federal legislators consider an increasing number of proposals for the regulation of digital platforms.

The Computer and Communications Industry Association on Monday released a summary of the trends in state legislation regarding content moderation. More than 250 such bills have been introduced during the past two years.

“As a result of the midterm elections, a larger number of states will have one party controlling both chambers of the legislature in addition to the governor’s seat,” CCIA State Policy Director Khara Boender said in a press release. “This, coupled with an increased interest in content moderation issues — on both sides of the aisle — leads us to believe this will be an increasingly hot topic.”

