WASHINGTON, December 2, 2021 – Facebook whistleblower Frances Haugen told the House Subcommittee on Communications and Technology on Wednesday that the committee must act to investigate Facebook’s social harms to consumers.
Haugen said Congress should be concerned about how Facebook’s products are used to influence vulnerable populations.
Haugen’s testimony, delivered at Wednesday’s subcommittee hearing, urged lawmakers to impose accountability and transparency safeguards on Facebook to prevent it from misleading the public. It comes on the heels of her first testimony in October before the subcommittee on consumer protection, product safety and data security, in which she urged Congress to force Facebook to make its internal research public, arguing the company cannot be trusted to act on that research itself.
That testimony came after she leaked documents to the Wall Street Journal and the Securities and Exchange Commission suggesting Facebook knew about the negative mental health impact its photo-sharing app Instagram had on teen users but allegedly did nothing to combat it.
“No efforts to address these problems are ever going to be effective if Facebook is not required to share data in support of its claims or be subject to oversight of its business decisions,” Haugen said Wednesday. “The company’s leadership keeps vital information from the public, the U.S. government, its shareholders, and governments around the world. The documents I have provided prove that Facebook has repeatedly misled us about what its own research reveals about the safety of children, its role in spreading hateful and polarizing messages, and so much more.”
Facebook’s impact on communities of color
Among the social harms that advocates highlighted, lawmakers were particularly interested in Facebook’s negative impact on communities of color. Rashad Robinson, president of online racial justice organization Color of Change, expressed frustration at technology companies’ disregard for the truth.
“I have personally negotiated with leaders and executives at Big Tech corporations like Facebook, Google, Twitter and Airbnb, including Mark Zuckerberg, over a number of years,” Robinson said. “I sat across the table from him, looking into his eyes, experiencing firsthand the lies, evasions, ignorance and complete lack of accountability to any standard of safety for Black people and other people of color.”
Robinson recalled during the height of the national racial justice protests in 2020 that Zuckerberg told him that the harms Black people were experiencing on Facebook “weren’t reflected in their own internal data.” Now, Robinson said, “we know from the documents shared by Frances Haugen and others that his internal researchers were, in fact, sounding alarms at the exact same time.”
Robinson also highlighted how Facebook’s own data shows that the company disables Black users for less extreme content more often than white users, “often for just talking about the racism they face,” he said.
To foster real solutions for social media consumer protection, Robinson suggested that lawmakers reform Section 230 of the Communications Decency Act to hold companies accountable for minimizing the adverse impact of the content from which they profit.
Currently, Section 230 shields online platforms from liability derived from content posted on their platforms that leads to harm. Conservative advocates for gutting Section 230 say the law should be repealed because it gives social media companies too much power to censor conservative voices, while proponents of keeping Section 230 argue that the law is necessary in some capacity because it allows for the free exchange of thoughts and ideas in our society.
Robinson said reforming Section 230 to impose liability for content on the companies’ sites would “protect people against Big Tech design features that amplify or exploit content that is clearly harmful to the public.”
These recommendations came as the House considered four social media consumer protection bills on Wednesday: H.R. 2154, the “Protecting Americans from Dangerous Algorithms Act”; H.R. 3184, the “Civil Rights Modernization Act of 2021”; H.R. 3421, the “Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms Act” or the “SAFE TECH Act”; and H.R. 5596, the “Justice Against Malicious Algorithms Act of 2021.”
Tech Policy Conference Panelists Tackle Challenges of Federal Privacy, Antitrust Laws
Academics were concerned about an anti-preference bill, while one state AG said he’s ‘pragmatic’ about a federal privacy law.
ASPEN, Colorado, August 15, 2022 – Academics expressed concern Monday about antitrust legislation before Congress that would prevent companies from preferencing their own products on their platforms, arguing the legislation targets only certain companies and hasn’t shown it would benefit consumers.
The American Innovation and Choice Online Act, S.2992, which is currently before the Senate and aims to ban discrimination against third-party products on the host platform, defines covered companies by their market value – a threshold that effectively narrows the number of affected companies and, according to some academics, makes it a problematic piece of legislation.
“I think it’s very difficult to single out specific companies…for specific rules,” Judy Chevalier, a professor of finance and economics at Yale University, said at the TPI Aspen Forum on Monday.
“It’s hard to imagine what is the principle whereby private label band aids are a bad idea at Amazon but they’re a good idea at Walmart,” she added. “The self-preferencing rule can be applied to Amazon in a way that I think can be interpreted to limit their ability to introduce and promote their private label products.
“It’s not very convincing that this behavior has thus far harmed consumers,” she continued. “So I think singling out particular companies in this broad brush way strikes me as problematic.”
Dennis Carlton, a professor of economics at the University of Chicago business school, said the legislation makes him “nervous” because of the impact on innovation of targeting certain industries over others.
“High tech industries are rapidly changing, and whenever we have regulation or try and have regulation of rapidly changing industries, it is just too hard for the regulators to keep track of what’s going on and they wind up causing delays in innovation,” Carlton said.
“Innovation is one of the strongest ways we improve our products and our standard of living. It makes me very nervous when you target specifically an industry or…make exceptions to other industries without…economic criteria or any attempt to show that this would produce a benefit not a harm. So it makes me nervous these proposals.”
Similar sentiments were expressed on a Broadband Breakfast panel in March, in which an association representing large technology companies blasted the legislation introduced by Senator Amy Klobuchar, D-Minn., as unfairly targeting certain online platforms and excluding large retailers.
“The bill very carefully picks winners and losers,” said Arthur Sidney, vice president of public policy at Computer and Communications Industry Association, which includes members like Amazon, Google, and Facebook.
State AGs weigh in on privacy legislation
On a separate panel at the Forum on Monday, the state attorneys general of Colorado and Nebraska discussed the state of privacy legislation – both in their own state and at the federal level.
Introduced in June, the American Data Privacy and Protection Act (H.R. 8152) cleared the House Energy and Commerce Committee last month for a House floor vote. The proposed bill would protect Americans against discriminatory use of their data, require covered entities to minimize the data they collect, and prevent customers from having to pay for privacy.
Despite his state having passed a comprehensive privacy law that some consider a leading model, Colorado AG Phil Weiser said he’s “pragmatic” about a federal law.
“If a federal law is as good and strong as what we worked on in Colorado, I am comfortable with that law preempting Colorado, provided state AGs have the authority to enforce federal law,” he said. “It’s important to me to have that model because, you could imagine a world where the feds are not engaged in active enforcement, then the states can pick up that slack.”
Before the introduction of the legislation, some experts were concerned that having a number of different state privacy laws would harm smaller companies operating across multiple states. One lawyer noted that the longer companies have to wait for a uniform federal law, the greater the burden of compliance on them.
In fact, two Democratic California reps – Anna Eshoo and Nanette Barragan – were concerned that such a federal law would override their own state’s law. Eshoo proposed a provision, which was not included during a markup of the bill, that would have allowed states to add privacy provisions on top of the federal baseline.
“If you do have multiple standards,” Weiser said, “we have to solve for the problem, which is a problem right now of what I call interoperability or harmonization: How do we make sure that different state laws enable compliance across them as opposed to putting businesses in, to me, the unacceptable position of saying, ‘I can either comply with Colorado’s law or California’s law, but not both.’?”
Doug Peterson, AG for Nebraska, said that after a privacy proposal failed to pass in his state’s legislature, Nebraska is taking a wait-and-see approach, including observing how states such as Colorado fare with their own laws.
Americans Should Look to Filtration Software to Block Harmful Content from View, Event Hears
One professor said it is the only way to solve the harmful content problem without encroaching on free speech rights.
WASHINGTON, July 21, 2022 – Researchers at an Internet Governance Forum event Thursday recommended the use of third-party software that filters out harmful content on the internet, in an effort to combat what they say are social media algorithms that feed users content they don’t want to see.
Users of social media sites often don’t know what algorithms are filtering the information they consume, said Steve DelBianco, CEO of NetChoice, a trade association that represents the technology industry. Most algorithms function to maximize user engagement by manipulating users’ emotions, which is particularly worrisome, he said.
But third-party software, such as Sightengine and Amazon’s Rekognition – which moderates what users see by screening out images and videos the user flags as objectionable – could act in place of other solutions to tackle disinformation and hate speech, said Barak Richman, professor of law and business at Duke University.
Richman argued that this “middleware technology” is the only way to solve this universal problem without encroaching on free speech rights. He suggested Americans invest in these technologies – which would be supported by popular platforms including Facebook, Google, and TikTok – to create a buffer between harmful algorithms and the user.
Such technologies already exist in limited applications that offer less personalization and accuracy in filtering, said Richman, but market demand needs to increase to support innovation and expansion in this area.
Americans across party lines believe that there is a problem with disinformation and hate speech, but disagree on the solution, added fellow panelist Shannon McGregor, senior researcher at the Center for Information, Technology, and Public Life at the University of North Carolina.
The conversation comes as debate continues regarding Section 230, a provision in the Communications Decency Act that protects technology platforms from being liable for content their users post. Some say Section 230 only protects “neutral platforms,” while others claim it allows powerful companies to ignore user harm. Experts in the space disagree on the responsibility of tech companies to moderate content on their platforms.
Surveillance Capitalism a Symptom of Web-Dependent Companies, Not Ownership
Former Google executive Richard Whitt critiqued Ben Tarnoff’s argument in ‘Internet for the People’ during Gigabit Libraries discussion.
July 15, 2022 – A former Google executive pushed back against a claim that the privatization of broadband infrastructure has created the world’s current data and privacy concerns, instead suggesting that it’s the companies that rely on the web that have helped fuel the problem.
Richard Whitt, president of technology non-profit GLIA Foundation and former employee of Google, argued that while the World Wide Web is rife with problems, the internet infrastructure underlying the web remains fundamentally sound.
Whitt was responding to claims made by Ben Tarnoff, a journalist and founder of Logic Magazine, at the Libraries in Response event on July 8. Tarnoff argued – as he does in his recent book, “Internet for the People” – that the privatization of broadband infrastructure in the 1990s has allowed the use and commodification of personal data for profit to flourish (known as surveillance capitalism).
Privatization, Tarnoff claims, has raised such issues as polarization of ideologies and the “annihilation of our privacy.” As a result, he said, the American people are losing trust in tech companies that “rule the internet.”
Whitt responded that the internet is working well thanks to its protocols – the standardized rules for routing and addressing packets of data across networks – developed at the internet’s inception.
The World Wide Web, a system built on top of the internet that enables communication through easy-to-understand graphical user interfaces, allowed browsers and other applications to emerge. Those applications have since made surveillance capitalism the governing approach of the web as it exists today, Whitt said, suggesting it is not ownership of the hard infrastructure that is the problem.
The advertising market that encourages surveillance extraction, analysis and manipulation is, and will continue to be, profitable, Whitt continued.
The discussion follows a Pew Research Center study that found only half of Americans believed tech companies had a positive effect in 2019, compared with 71 percent in 2015.