Big Tech

Twitter Whistleblower Says Company Needs to Work to Permanently Delete User Data

Meanwhile, Twitter shareholders approved a deal to sell the company to Elon Musk, who wants out.

Photo of Peiter Zatko at Tuesday's Senate Judiciary hearing

WASHINGTON, September 14, 2022 – Twitter’s former head of security and now company whistleblower told the Senate Judiciary Committee Tuesday that Twitter must put more resources into permanently deleting user data when accounts are eliminated, in order to preserve the security and privacy of users.

Peiter Zatko, who was fired from Twitter in January due to performance issues, blew the whistle on the company last month by alleging Twitter’s lack of sufficient security and privacy safeguards poses a national security risk. He alleged that the company does not delete user data when accounts are deleted.

On Tuesday, Zatko told the Senate Judiciary Committee that the company needs to ensure that the personal information of users is deleted when they delete their accounts.

He alleged company engineers can access any user data on Twitter, including home addresses, phone numbers and contact lists, and sell the data without company executives knowing.

“I continued to believe in the mission of the company and root for its success, but that success can only happen if the privacy and security of Twitter users and the public are protected,” Zatko said.

The Wall Street Journal reported Tuesday that Twitter investors approved SpaceX CEO Elon Musk’s takeover of the company, despite the billionaire trying to back out of the deal allegedly over a lack of information about the number of fake accounts on the platform. The company and Musk are currently in court battling over whether he must follow through on the deal.

Musk’s lawyer has asked the court to delay the trial — scheduled for mid-October — to allow his client to investigate the whistleblower’s claims, according to reporting from Reuters.

Free Speech

Panel Hears Opposing Views on Content Moderation Debate

Some agreed there is egregious information that should be downranked on search platforms.

Screenshot of Renee DiResta, research manager at Stanford Internet Observatory.

WASHINGTON, September 14, 2022 – Panelists wrangled over how technology platforms should handle content moderation at an event hosted by the Lincoln Network Friday, with one arguing that search engines should neutralize misinformation that causes direct, “tangible” harms and another advocating an online content moderation standard that doesn’t discriminate on viewpoint.

Debate about what to do with certain content on technology platforms has picked up steam since former President Donald Trump was removed last year from platforms including Facebook and Twitter for allegedly inciting the January 6, 2021, storming of the Capitol.

Search engines generally moderate content algorithmically, prioritizing certain results over others. Most engines, like Google, prioritize results from institutions generally considered to be credible, such as universities and government agencies.

That can be a good thing, said Renee DiResta, research manager at Stanford Internet Observatory. If search engines allow scams or medical misinformation to headline search results, she argued, “tangible” harms – material or physical – will result.

The internet moved communications from a “one-to-many” broadcast model – e.g., television and radio – to a “many-to-many” model, said DiResta. She argued that “many-to-many” interactions create social frictions and make possible the formation of social media mobs.

At the beginning of the year, Georgia Republican Rep. Marjorie Taylor Greene was permanently removed from Twitter for allegedly spreading Covid-19 misinformation, the same reason Kentucky Sen. Rand Paul was removed from Alphabet Inc.’s YouTube.

Lincoln Network senior fellow Antonio Martinez endorsed a more permissive content moderation strategy that – excluding content that incites imminent, lawless action – is tolerant of heterodox speech. “To think that we can epistemologically or even technically go in and establish capital-T Truth at scale is impossible,” he said.

Trump has said he is committed to a platform of open speech with the creation of his social media website Truth Social. Other platforms, such as social media site Parler and video-sharing website Rumble, have purported to allow more speech than the incumbents. SpaceX CEO Elon Musk previously committed to buying Twitter because of its policies prohibiting certain speech, though he now wants out of that commitment.

Alex Feerst, CEO of digital content curator Murmuration Labs, said that free-speech aphorisms – such as, “The cure for bad speech is more speech” – may no longer hold true given the volume of speech enabled by the internet.

Big Tech

At White House Event, Biden Administration Seeks Regulation of Big Tech

Participants voiced concerns over alleged abuses by big tech companies.

Photo of President Joe Biden

WASHINGTON, September 9, 2022 – President Joe Biden on Thursday called for a federal privacy standard, Section 230 reform, and increased antitrust scrutiny against big tech.

“Although tech platforms can help keep us connected, create a vibrant marketplace of ideas, and open up new opportunities for bringing products and services to market, they can also divide us and wreak serious real-world harms,” according to a White House readout from the administration’s listening session on Thursday.

Participants at the White House event voiced concerns over alleged abuses by big tech companies.

A new data privacy regime?

The Biden administration called for “clear limits on the ability to collect, use, transfer, and maintain our personal data.” It also endorsed bipartisan congressional efforts to establish a national privacy standard.

In June, Rep. Frank Pallone Jr., D-N.J., introduced the American Data Privacy and Protection Act. The bill gained substantial bipartisan support and was advanced by the House Energy and Commerce Committee in July.

In the absence of federal privacy laws, several states have enacted privacy laws of their own. The Golden State, for instance, passed the California Consumer Privacy Act in 2018. The CCPA’s protections were extended by the California Privacy Rights Act of 2020, which goes into effect in January 2023.

Biden maintains his position seeking changes to Section 230

“Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct or materials,” argued the White House document.

Biden’s hostility towards Section 230 is not new. Section 230 protects internet platforms from most legal liability that might otherwise result from third party–generated content. For example, although an online publication may be liable for libel over a news story it publishes, it cannot be held liable for defamatory reader posts in its comments section.

Critics of Section 230 say that it unfairly shields rogue social media companies from accountability for their misdeeds. And in addition to Biden and other Democrats, many Republicans dislike the provision. Sens. Ted Cruz, R-Texas, and Josh Hawley, R-Missouri, argue that platforms such as Twitter, Facebook, and YouTube discriminate against conservative speech and therefore should not benefit from such federal legal protections.

Section 230’s proponents say that it is the foundation of online free speech.

Ramping up antitrust

“Today…a small number of dominant Internet platforms use their power to exclude market entrants,” Thursday’s press release said. This sentiment is consonant with the administration’s antitrust policies to date. Indeed, Lina Khan, chair of the Federal Trade Commission, was a vocal antitrust advocate in academia and has greatly expanded the scope of the agency’s antitrust efforts since her appointment in 2021.

In the Senate, Sen. Amy Klobuchar, D-Minnesota, is sponsoring the American Innovation and Choice Online Act, a bill that bans large online platforms from engaging in putatively “anticompetitive” business practices. The measure was approved by the Judiciary Committee earlier this year, and, though it was stalled over the summer to make way for other Democratic legislative priorities, it may come up for a vote this fall.

Big Tech

Tech Policy Conference Panelists Tackle Challenges of Federal Privacy, Antitrust Laws

Academics were concerned about an anti-preference bill, while one state AG said he’s ‘pragmatic’ about a federal privacy law.

Screenshot of Colorado Attorney General Phil Weiser at the TPI Aspen Forum on Monday

ASPEN, Colorado, August 15, 2022 – Academics expressed concern Monday about antitrust legislation before Congress that would prevent companies from preferencing their own products on their platforms, arguing the legislation targets only certain companies and hasn’t shown it would benefit consumers.

The American Innovation and Choice Online Act, S.2992, which is currently before the Senate and aims to ban discrimination against third-party products on the host platform, defines targeted companies by their value – which effectively narrows the number of affected companies and makes it a problematic piece of legislation, according to some academics.

“I think it’s very difficult to single out specific companies…for specific rules,” Judy Chevalier, a professor of finance and economics at Yale University, said at the TPI Aspen Forum on Monday.

“It’s hard to imagine what is the principle whereby private label band aids are a bad idea at Amazon but they’re a good idea at Walmart,” she added. “The self-preferencing rule can be applied to Amazon in a way that I think can be interpreted to limit their ability to introduce and promote their private label products.

“It’s not very convincing that this behavior has thus far harmed consumers,” she continued. “So I think singling out particular companies in this broad brush way strikes me as problematic.”

Dennis Carlton, a professor of economics at the University of Chicago business school, said the legislation makes him “nervous” because of the impact on innovation of targeting certain industries over others.

“High tech industries are rapidly changing, and whenever we have regulation or try and have regulation of rapidly changing industries, it is just too hard for the regulators to keep track of what’s going on and they wind up causing delays in innovation,” Carlton said.

“Innovation is one of the strongest ways we improve our products and our standard of living. It makes me very nervous when you target specifically an industry or…make exceptions to other industries without…economic criteria or any attempt to show that this would produce a benefit not a harm. So it makes me nervous these proposals.”

Similar sentiments were expressed on a Broadband Breakfast panel in March, in which an association representing large technology companies blasted the legislation introduced by Senator Amy Klobuchar, D-Minn., as unfairly targeting certain online platforms and excluding large retailers.

“The bill very carefully picks winners and losers,” said Arthur Sidney, vice president of public policy at Computer and Communications Industry Association, which includes members like Amazon, Google, and Facebook.

State AGs weigh in on privacy legislation

On a separate panel at the Forum on Monday, the state attorneys general of Colorado and Nebraska discussed the state of privacy legislation – both in their own state and at the federal level.

Introduced in June, the American Data Privacy and Protection Act (H.R. 8152) cleared the House Energy and Commerce Committee last month and awaits a House floor vote. The bill would provide Americans protections against discriminatory use of their data, require covered entities to minimize the data they collect, and prevent customers from needing to pay for privacy.

Despite his state having passed comprehensive privacy laws that some consider a leading model, Colorado AG Phil Weiser said he’s “pragmatic” about a federal law.

“If a federal law is as good and strong as what we worked on in Colorado, I am comfortable with that law preempting Colorado, provided state AGs have the authority to enforce federal law,” he said. “It’s important to me to have that model because, you could imagine a world where the feds are not engaged in active enforcement, then the states can pick up that slack.”

Before the introduction of the legislation, some experts were concerned that having a number of different state privacy laws would harm smaller companies operating across multiple states. One lawyer noted that the longer companies have to wait for a uniform federal law, the greater the burden of compliance on them.

In fact, two Democratic California reps – Anna Eshoo and Nanette Barragan – were concerned that such a federal law would override their own state’s law. Eshoo proposed a provision, which was not included during a markup of the bill, that would have allowed states to add privacy provisions on top of the federal baseline.

“If you do have multiple standards,” Weiser said, “we have to solve for the problem, which is a problem right now of what I call interoperability or harmonization: How do we make sure that different state laws enable compliance across them as opposed to putting businesses in, to me, the unacceptable position of saying, ‘I can either comply with Colorado’s law or California’s law, but not both.’?”

Nebraska AG Doug Peterson said that after a privacy proposal failed in his state’s legislature, the state is taking a wait-and-see approach, including observing how states such as Colorado fare with their own laws.
