February 21, 2021 – “Look, I’m sorry.”
That’s what Robinhood CEO Vladimir Tenev said to those who lost money in the stock frenzy surrounding GameStop that began in late January. The virtual hearing got off to a tense start.
House Financial Services Committee Chairwoman Maxine Waters, D-Calif., grilled Tenev and other chief executives involved in the stock spectacle.
Both Democrats and Republicans on the committee pressed Tenev to explain why he authorized Robinhood to prevent its users from buying into GameStop stock and other popular meme stocks.
The actions, which sparked outrage over allegations that they protected large hedge funds at the expense of small investors, were probed throughout the hearing, titled “Game Stopped? Who Wins and Loses When Short Sellers, Social Media, and Retail Investors Collide.”
Rep. Patrick McHenry, R-N.C., asked why Robinhood restricted the buying, but not the selling, of GameStop, locking traders out of the buy side only.
Tenev replied that the restrictions on GameStop and other securities were driven purely by collateral deposit requirements imposed by its clearinghouses.
Tenev said the company had no choice but to impose the restrictions, and that he sympathized with investors who lost money after GameStop stock fell from its record highs once the buying frenzy ended.
He warned that it would have been “significantly worse” had Robinhood also prevented its users from selling the stock after the frenzy, which began on the social media site Reddit.
Rep. Carolyn Maloney presses Robinhood on transparency
Rep. Carolyn Maloney, D-N.Y., said that Robinhood’s actions to halt buying of certain stocks didn’t just cause confusion and anger, but undermined investor confidence in fundamental fairness.
She asked whether Robinhood owed users more disclosure and transparency, and whether Robinhood’s “lack of candor” with its customers may have led to wild speculation. In reply, Tenev said, “Look, I’m sorry for what happened.”
Tenev said that while Robinhood did not do everything perfectly, the company is committed to learning from the episode and improving in order to prevent similar mistakes in the future.
Maloney in particular targeted Robinhood’s customer agreement, noting its vague wording about when the company can restrict trades.
She said the agreement contains no language mentioning the clearinghouse deposit requirements Robinhood says it was obligated to meet, beyond a reference to “volatility.”
Citadel CEO Kenneth Griffin said his company had no role in Robinhood’s decision to limit trading in GameStop or any other stock. He became aware of Robinhood’s trading restrictions at the same time the public did, he said.
Citadel bet GameStop shares would fall but suffered when the shares rose as millions of small investors bought up the stock. Citadel’s losses were not nearly as bad as those of another hedge fund, Melvin Capital, which took a $2 billion investment from Citadel and some of its employees to shore up its finances.
Echoing Citadel, Melvin Capital also denied pressuring Robinhood to restrict trades. Melvin Capital Management CEO Gabriel Plotkin said he was “humbled by these unprecedented events” and expressed regret for those who lost money.
He also said that Melvin Capital played no role in the trading decisions, noting that the fund closed out its position in GameStop days before the trading restrictions went into effect.
Reddit CEO Steve Huffman said there was no evidence of market manipulation in the specific case surrounding GameStop. He defended Reddit moderators, who are not paid employees.
He said Reddit had an “anti-evil” team composed of engineers, data scientists, and other specialists whose focus is to ensure site integrity and protect against manipulation and spam, among other things.
Huffman did say that the popular forum called WallStreetBets, responsible for fueling the GameStop frenzy, was indeed a real community of users, one that supports members who lose money as quickly as it congratulates them for their successes in the stock market.
Roaring Kitty relishes the attention
Keith Gill, one of the most influential voices pushing GameStop on the WallStreetBets Reddit forum, also testified, saying he was happy to talk about his GameStop stock purchases. He is believed to have made millions from his investment in GameStop, which has enraged others who believe he used Reddit to deceptively manipulate the market in his favor.
Gill clarified that he was first and foremost “not a cat, not an institutional investor, and not a hedge fund.” He maintained he has no clients and does not provide any personalized investment advice for fees or commissions.
“I’m just an individual whose investment in GameStop and posts on social media were based upon my own research and analysis,” he said. Gill previously worked for MassMutual.
Stocks often trade higher or lower, said Cato Institute
Stocks often trade at higher or lower levels than formal analysis of their fundamentals might suggest, said Jennifer Schulp, director of financial regulation studies at the Cato Institute. Even if GameStop is still trading above a fair valuation, she said, that is no cause for concern, as “markets are no strangers to bubbles.”
“I cannot opine on whether any regulatory changes are warranted on this incomplete record,” she said. “By no means, though, should these events lead to restrictions on retail investors’ access to the markets.”
Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230
The Justice Against Malicious Algorithms Act seeks to target large online platforms that push harmful content.
WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities for companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.
The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides companies legal liability protections for the content their users post on their platforms – to remove that shield when a platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release. The legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.
The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or to algorithms that do not rely on that personalization, nor to web hosting or data storage and transfer.
Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it terminates.
Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.
“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”
The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”
From Haugen testimony to legislation
Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.
“The reality that we’ve seen from repeated documents in my disclosures is that Facebook’s AI systems only catch a very tiny minority of offending content, and, best-case scenario, in the case of something like hate speech, at most they will ever get 10 to 20 percent,” Haugen testified.
Haugen was catapulted into the national spotlight after she revealed herself on the television program 60 Minutes to be the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission. The documents reportedly showed that Facebook knew about the mental health harm its photo-sharing app Instagram causes teens but allegedly ignored the findings because acting on them would have hurt its profits.
Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing an Instagram version for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended its development and said it was making changes to its apps to “nudge” users away from content that may be harmful to them.
Haugen’s testimony versus Zuckerberg’s Section 230 vision
In his testimony before the House Energy and Commerce committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”
This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies that fail to build content-removal systems proportional in size and effectiveness to the platform’s scale. In other words, larger sites would face more regulation and smaller sites less.
Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”
Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content the platform is paid to carry and for allowing the spread of vaccine misinformation.
Zuckerberg said companies shouldn’t be held responsible for individual pieces of content that might evade the systems in place, so long as the company has demonstrated “adequate systems to address unlawful content.” That, he said, is predicated on transparency.
But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.
An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.
Where the parties agree
Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.
Both have also called for a third-party body to be created by the legislature which provides oversight on platforms like Facebook.
Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms and research, and that the information be made available to the public, scholars and researchers to interpret, with adequate privacy protections and anonymization in place. Zuckerberg asks that the body take into account the size and scope of the platforms it regulates, that its practices be “fair and clear,” and that unrelated issues “like encryption or privacy changes” be dealt with separately.
With reporting from Riley Steward
Congress Must Force Facebook to Make Internal Research Public, Whistleblower Testifies
Frances Haugen testifies before the Senate subcommittee studying how to protect kids online after revealing herself as the Facebook whistleblower.
WASHINGTON, October 5, 2021 – The former Facebook employee who outed herself as the whistleblower who leaked documents to the Wall Street Journal showing that Facebook knew its photo-sharing app Instagram contributed to harming the mental health of kids told a Senate committee that the company’s alleged profit-driven motives mean its internal research cannot be kept behind closed doors.
Frances Haugen testified Tuesday before the Senate Subcommittee on Consumer Protection, Product Safety and Data Security, which is looking into protecting kids online. She identified herself Sunday on the television program 60 Minutes as the person who gave the Journal and the Securities and Exchange Commission documents showing the company moving forward with development of a kids’ version of Instagram despite knowing the mental health impact its apps have on that demographic. (Facebook has since halted development of the kids’ app after the Journal story and lawmakers’ calls for it to be suspended.)
“We should not expect Facebook to change. We need action from Congress,” Haugen said Tuesday.
That action, she recommended, includes forcing Facebook to make all future internal research fully public because the company cannot be trusted to act on its own commissioned work.
Haugen said the reason the company did not – and does not – take such action, which could include preemptively shutting down development of its Instagram for kids product, is that the company is allegedly driven by a profit-first model.
“Facebook repeatedly encountered conflicts between its own profits and our safety. Facebook consistently resolved those conflicts in favor of its own profits,” alleged Haugen, who now considers herself an advocate for public oversight of social media.
“The result has been a system that amplifies division, extremism, and polarization — and undermining societies around the world. In some cases, this dangerous online talk has led to actual violence that harms and even kills people. In other cases, their profit optimizing machine is generating self-harm and self-hate — especially for vulnerable groups, like teenage girls. These problems have been confirmed repeatedly by Facebook’s own internal research.”
Despite calls to modify Section 230 of the Communications Decency Act, which shields large tech platforms from legal liability for what their users post, Haugen said such changes, along with tweaks to outdated privacy protections, won’t be enough.
Facebook has for months touted that it removes millions of groups and accounts that violate its community guidelines on hate speech and inciting violence. But Haugen alleges that, despite the company’s claims that it actively makes its platforms safer, it actually takes down only three to five percent of those threats.
Asked by Senator Ben Ray Lujan, D-New Mexico, whether Facebook had “ever found a feature on its platform harmed its users, but the feature moved forward because it would also grow users or increase revenue,” Haugen said yes, alleging the company prioritized ease of resharing over the feature’s susceptibility to growing “hate speech, misinformation or violence incitement,” even though the change would only “decrease growth a tiny, little amount.”
She also alleged that those directions came from the head of the company himself, Mark Zuckerberg, who she said chose arbitrary or vague “metrics defined by Facebook, like meaningful social interactions over changes that would have significantly decreased misinformation, hate speech and other inciting content.”
Facebook’s troubles, up to this point
Facebook has been the target of Washington’s ire for months. It has been cited as an alleged enabler of the January 6 Capitol riot, which sought to stop the transition to a Joe Biden presidency, even though the platform banned former president Donald Trump. The platform has also been blamed for allowing the spread of information that has led to violence in parts of the world, including genocide in Myanmar.
A number of public interest groups have accused the platform of suppressing stories from progressive news outlets, censoring information that conflicts with its own interests, and using algorithms that deliver the same kinds of information to people so they are not exposed to different viewpoints.
In 2018, Facebook made worldwide news after reports in the Guardian and the New York Times found that nearly 100 million Facebook profiles had been harvested by a company called Cambridge Analytica, which used the data to build profiles of people and serve them material intended to sway them in a political direction.
Reining in Facebook and other Big Tech companies has been a clear agenda item of the Biden administration. The White House installed Amazon critic Lina Khan as head of the Federal Trade Commission, which recently filed a monopoly complaint against Facebook in court, and appointed other figures, including Google critic Jonathan Kanter, to the Department of Justice’s antitrust division.
Facebook’s week went from bad to worse. Haugen, a former Facebook product manager and Harvard MBA graduate, testified Tuesday in the hearing, titled “Protecting Kids Online,” before the Subcommittee on Consumer Protection, Product Safety, and Data Security. Previous opposition to Facebook’s plans to expand its products to minors had come from external parties such as public interest groups and members of Congress.
Repealing Section 230 Would be Harmful to the Internet As We Know It, Experts Agree
While some advocate for a tightening of language, other experts believe Section 230 should not be touched.
WASHINGTON, September 17, 2021 – Rep. Ken Buck, R-Colorado, advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.
There is common ground in supporting efforts to minimize speech advocating imminent harm, Buck said, even though he noted that Republican and Democratic critics tend to approach changing Section 230 from vastly different directions.
“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He added, however, that the most effective way to combat “bad speech is with good speech,” not by censoring “what one person considers bad speech.”
Antitrust not necessarily the best means to improve competition policy
For companies that are not technically in violation of antitrust law, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that serves as an appropriate alternative to Twitter.
Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would otherwise be unable to thrive on social media.
Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.
“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies would likely find a way to sufficiently address those lawsuits, and the only entities harmed would be the alternative platforms that were meant to serve as competition.
More content moderation needed
Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.
“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”
Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.
“There’s not really a left-right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet,” Keller said.
She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.
What problems do critics of Section 230 want addressed?
Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.
Changing the language of Section 230 would impact more than just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”
Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.
Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.
“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, adding that policymakers need to gain a better appreciation for Section 230. “If you took away 230, you’d give companies two bad options: either turn into Disneyland or turn into a wasteland.”
“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.