Amazon, Apple, Facebook and Google Respond to and Deflect Manifold Criticism by House Judiciary Panel

Screenshot of Rep. David Cicilline from the webcast

July 30, 2020 — Wednesday’s House Judiciary Antitrust subcommittee hearing marked the first time four tech CEOs appeared before Congress, and the first time Amazon’s head, Jeff Bezos, spoke to U.S. legislators, albeit remotely.

Apple’s Tim Cook, Facebook’s Mark Zuckerberg and Google’s Sundar Pichai also appeared by video conference, while most committee members were present on Capitol Hill, for the sixth hearing in the subcommittee’s year-long investigation into the tech companies’ business practices.

Representatives grilled the tech CEOs on their allegedly anticompetitive business practices.

The CEOs largely contended that size and conglomeration benefit end users by making technology easier to deploy.

Screenshot of Amazon CEO Jeff Bezos participating in the hearing remotely

But the majority of members on the subcommittee said these corporations hold dangerous power, stifling choice and competition and oppressing the next generation of entrepreneurs. They criticized the heads of each of the four companies, calling them “gatekeepers of the digital economy.”

Accusations against the ‘big four’ add up

Each of the four CEOs was accused of stealing ideas, buying up competitors and using consumer data to expand at a near-exponential rate.

“Each platform uses data to protect its power,” said Rep. James Sensenbrenner, R-Wis., asserting that “tech companies abuse their control over current technology to extend their control.”

Yet the CEOs weathered the nearly six-hour questioning session, largely avoiding any admission that their business practices might stifle competition.

The CEOs avoided giving direct answers to the Representatives’ questions, often saying that they were unaware of the situations being referenced or claiming that their words were being taken out of context.

The companies’ heads continued to dispute the idea that they run monopolies, arguing that they face competition both outside the tech sector and globally.

Screenshot of Facebook CEO Mark Zuckerberg participating in the hearing remotely

“When Google bought YouTube, they were able to compete against cable operators,” Zuckerberg said. “When Amazon acquired Whole Foods, they were able to compete against Kroger.”

Below are snapshots of the interactions involving each of the four big tech companies.

Amazon and third-party sellers

Amazon, which controls some 70 percent of the online marketplace, was accused of stifling third-party sellers by Rep. Lucy McBath, D-Ga.

According to McBath, third-party sellers often use the words “bullying, fear and panic” to describe their relationships with Amazon.

McBath cited one third-party seller who said, “we’re stuck and we don’t have a choice,” describing selling on Amazon, however unwelcome, as the only viable option.

Screenshot of Rep. Lucy McBath from the webcast

In response, members of Congress demanded Amazon be more transparent about its use of third-party seller data.

Rep. Joe Neguse, D-Colo., accused Amazon Web Services of purposely stifling competitors, citing instances in which the company identified startups with promising technology and copied their designs.

Rep. Jamie Raskin, D-Md., criticized Amazon for undercutting the prices of its smart home products in order to outsell the competitors listed on its site.

The company was further criticized for promoting the sale of its own products during the pandemic, after vowing it would only sell essential products.

Facebook and its Instagram acquisition

Facebook, the largest global social networking service, which earned $18 billion in profit last year alone, was grilled over its 2012 Instagram acquisition.

Rep. Pramila Jayapal, D-Wash., read a statement from Instagram’s founder, who said he felt pressured to sell the company to Zuckerberg.

Zuckerberg argued back that, at the time of the acquisition, it was “not obvious that Instagram would have reached the scale” it has achieved today.

Rep. Jerrold Nadler, D-N.Y., cited evidence that Facebook saw Instagram as a threat and bought it to avoid competition.

In retrospect, members of Congress called the 2012 merger approval a failure on the part of the Federal Trade Commission.

Rep. Val Demings, D-Fla., criticized the platform for cutting off competitors’ access to Facebook, citing its restriction of Pinterest’s use of the site in 2012.

Screenshot of Rep. Val Demings from the webcast

Members also referenced Facebook copying Snapchat’s “stories” feature after Snapchat refused to be bought out by the company, a move they said nearly rendered the competitor obsolete.

Yet Zuckerberg reacted with surprise when Congressmembers referred to Facebook as a monopoly, saying, “Monopoly? We face a lot of competitors in everything we do.”

Neguse pushed back, citing evidence that, as early as 2012, Facebook accounted for 95 percent of all social media use in the U.S.

Google’s efforts to privilege its own products

Google, the search engine that captures 90 percent of online searches, was criticized by Congressmembers for operating a walled garden, stealing content and privileging its own sites.

Members of Congress referenced two incidents of Google stealing from competitors: restaurant reviews from Yelp and music lyrics from Genius.

When Yelp spoke out against the practice, Google responded by threatening to delist Yelp from its search results entirely.

“Isn’t that anti-competitive?” questioned Rep. David Cicilline, D-R.I.

Pichai responded, maintaining that Google does not steal content.

Demings questioned Pichai on his 2016 decision to combine data sets that Google had promised Congress it would keep separate, insinuating that Pichai no longer felt bound by that commitment after the company gained “exponential” power.

When questioned by members about Google’s ad revenue, Pichai revealed, in a near whisper, that advertising accounts for around $100 billion of the company’s overall revenue.

Apple’s gatekeeper role as guardian of the App Store

Apple, which profits from over 100 million iPhone users in the U.S. alone, was accused by Representatives of picking and choosing what apps are marketed to users.

What Cook referred to as a “seamless integration of software and hardware,” others saw as Apple having the power to exclude apps that compete with the company.

Screenshot of Apple CEO Tim Cook participating in the hearing remotely

Apple is the sole decision-maker in the rules governing the App Store.

One member noted that Apple rendered third-party screen time apps obsolete by shipping iPhones with a similar built-in function in iOS 13.

In response, Cook maintained that, “the app store is accessible” and that “Apple does not have a dominant share in any sector in which they do business.”

See additional story on the hearing.

Former Assistant Editor Jericho Casper graduated from the University of Virginia, where she studied media policy. She grew up in Newport News in an area heavily impacted by the digital divide. She has a passion for universal access and a vendetta against anyone who stands in the way of her getting better broadband. She is now Associate Broadband Researcher at the Institute for Local Self-Reliance’s Community Broadband Networks Initiative.

Frances Haugen, U.S. House Witnesses Say Facebook Must Address Social Harms

The former Facebook employee-turned-whistleblower said the company must be accountable for the social harm it causes.

Facebook whistleblower Frances Haugen

WASHINGTON, December 2, 2021 – Facebook whistleblower Frances Haugen told the House Subcommittee on Communications and Technology on Wednesday that the committee must act to investigate Facebook’s social harms to consumers.

Haugen said Congress should be concerned about how Facebook’s products are used to influence vulnerable populations.

Haugen’s testimony, delivered at Wednesday’s subcommittee hearing, urged lawmakers to impose accountability and transparency safeguards on Facebook to prevent it from misleading the public. It follows her first testimony, in October, before the Senate subcommittee on consumer protection, product safety and data security, in which she urged Congress to force Facebook to make its internal research public, arguing that the company cannot be trusted to act on it.

That testimony came after she leaked documents to the Wall Street Journal and the Securities and Exchange Commission suggesting that Facebook knew about the negative mental health impacts its photo-sharing app Instagram had on teen users but allegedly did nothing to combat them.

“No efforts to address these problems are ever going to be effective if Facebook is not required to share data in support of its claims or be subject to oversight of its business decisions,” Haugen said Wednesday. “The company’s leadership keeps vital information from the public, the U.S. government, its shareholders, and governments around the world. The documents I have provided prove that Facebook has repeatedly misled us about what its own research reveals about the safety of children, its role in spreading hateful and polarizing messages, and so much more.”

Facebook’s impact on communities of color

Among the social harms that advocates highlighted, lawmakers were particularly interested in Facebook’s negative impact on communities of color. Rashad Robinson, president of online racial justice organization Color of Change, expressed frustration at technology companies’ disregard for the truth.

“I have personally negotiated with leaders and executives at Big Tech corporations like Facebook, Google, Twitter and Airbnb, including Mark Zuckerberg, over a number of years,” Robinson said. “I sat across the table from him, looking into his eyes, experiencing firsthand the lies, evasions, ignorance and complete lack of accountability to any standard of safety for Black people and other people of color.”

Robinson recalled during the height of the national racial justice protests in 2020 that Zuckerberg told him that the harms Black people were experiencing on Facebook “weren’t reflected in their own internal data.” Now, Robinson said, “we know from the documents shared by Frances Haugen and others that his internal researchers were, in fact, sounding alarms at the exact same time.”

Robinson also highlighted Facebook’s own data showing that the company disables the accounts of Black users over less extreme content more often than those of white users, “often for just talking about the racism they face,” he said.

To foster real solutions for social media consumer protection, Robinson suggested that lawmakers reform Section 230 of the Communications Decency Act to hold companies accountable for minimizing the adverse impact of the content from which they profit.

Currently, Section 230 shields online platforms from liability for harmful content posted by their users. Conservative advocates of gutting Section 230 say the law should be repealed because it gives social media companies too much power to censor conservative voices, while proponents of keeping it argue that the law is necessary in some capacity because it allows for the free exchange of thoughts and ideas.

Robinson said reforming Section 230 to impose liability for content on the companies’ sites would “protect people against Big Tech design features that amplify or exploit content that is clearly harmful to the public.”

These recommendations came as the House considered four social media consumer protection bills on Wednesday: H.R. 2154, the “Protecting Americans from Dangerous Algorithms Act”; H.R. 3184, the “Civil Rights Modernization Act of 2021”; H.R. 3421, the “Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms Act” or the “SAFE TECH Act”; and H.R. 5596, the “Justice Against Malicious Algorithms Act of 2021.”

Experts Warn Against Total Repeal of Section 230

Panelists note shifting definition of offensive content.

WASHINGTON, November 22, 2021 – Communications experts say action by Congress to essentially gut Section 230 would not truly solve any problems with social media.

Experts emphasized that it is not possible for platforms to remove from their site all content that people may believe to be dangerous. They argue that Section 230 of the Communications Decency Act, which shields platforms from legal liability with respect to what their users post, is necessary in at least some capacity.

During the discussion at Broadband Breakfast’s Live Online event on Wednesday, Alex Feerst, co-founder of the Digital Trust and Safety Partnership and a former content moderator, said it is to some extent impossible for platforms to moderate “dangerous” speech because every person has a different opinion about what speech is dangerous. It is this ambiguity, he said, that Section 230 protects companies from.

Still, Feerst said platforms should bear some degree of liability for the content on their sites, since mitigating the harm of dangerous speech is necessary where possible. Platforms’ use of artificial intelligence, he added, makes some degree of liability even more essential.

Given the sheer volume of online speech that moderators must review in the internet age, Feerst said, enforcing clear-cut moderation standards is too messy and expensive to be viable.

Matt Gerst, vice president for legal and policy affairs at the Internet Association, and Shane Tews, nonresident senior fellow at the American Enterprise Institute, also said that while content moderation is complex, it is necessary. Scott McCollough, attorney at McCollough Law Firm, said large social media companies like Facebook are not the cause of all the problems now in the national spotlight; rather, features of today’s society, such as the extreme prevalence of conflict, are driving the focus on social media.

Proposals for change

Rick Lane, CEO of Iggy Ventures, proposed that Section 230 reform include a requirement that social media platforms make very clear what content is and is not allowed on their sites. McCollough echoed this concern, saying that many moderation actions platforms take today do not appear consistent with their stated terms and conditions, and that individual states should be able to examine such instances case by case to determine whether platforms apply their terms fairly.

Feerst highlighted the nuance of the issue, saying that people’s definitions of “consistent” are naturally subjective, but agreed with McCollough that users who have content removed should be notified, along with the reasoning for the moderators’ action.

Lane also said Section 230 reform should rightfully include a requirement that platforms demonstrate a reasonable standard of care and moderate illegal and other extremely dangerous content on their sites. Tews generally agreed with Lane that such content moderation is complex, as she sees a separation between freedom of speech and illegal activity.

Gerst relayed concerns from the companies the Internet Association represents that government regulation arising from Section 230 reform would require widely varied platforms to standardize their operations, diminishing innovation on the internet.

Wednesday, November 17, 2021, 12 Noon ET — The Changing Nature of the Debate About Social Media and Section 230

Facebook is under fire as never before. In response, the social-networking giant has gone so far as to change its official name to Meta (as in the “metaverse”). What are the broader concerns about social media beyond Facebook? How will concerns about Facebook’s practices spill over to other social media networks, and into the debate about Section 230 of the Communications Decency Act?

Panelists for this Broadband Breakfast Live Online session:

  • W. Scott McCollough, Attorney, McCollough Law Firm
  • Shane Tews, Nonresident Senior Fellow, American Enterprise Institute
  • Alex Feerst, Co-founder, Digital Trust & Safety Partnership
  • Rick Lane, CEO, Iggy Ventures
  • Matt Gerst, VP for Legal & Policy Affairs, Internet Association
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry.  Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Shane Tews is a nonresident senior fellow at the American Enterprise Institute (AEI), where she works on international communications, technology and cybersecurity issues, including privacy, internet governance, data protection, 5G networks, the Internet of Things, machine learning, and artificial intelligence. She is also president of Logan Circle Strategies.

Alex Feerst is a lawyer and technologist focused on building systems that foster trust, community, and privacy. He leads Murmuration Labs, which helps tech companies address the risks and human impact of innovative products, and co-founded the Digital Trust & Safety Partnership, the first industry-led initiative to establish best practices for online trust and safety. He was previously Head of Legal and Head of Trust and Safety at Medium, General Counsel at Neuralink, and currently serves on the editorial board of the Journal of Online Trust & Safety, and as a fellow at Stanford University’s Center for Internet and Society.

Rick Lane is a tech policy expert, child safety advocate, and the founder and CEO of Iggy Ventures. Iggy advises and invests in companies and projects that can have a positive social impact. Prior to starting Iggy, Rick served for 15 years as the Senior Vice President of Government Affairs of 21st Century Fox.

Matt Gerst is the Vice President for Legal & Policy Affairs and Associate General Counsel at Internet Association, where he builds consensus on policy positions among IA’s diverse membership of companies that lead the internet industry. Most recently, Matt served as Vice President of Regulatory Affairs at CTIA, where he managed a diverse range of issues including consumer protection, public safety, network resiliency, and universal service. Matt received his J.D. from New York Law School, and he served as an adjunct professor of law in the scholarly writing program at the George Washington University School of Law.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.


Experts Caution Against One-Size-Fits-All Approach to Content Moderation

The cost of moderation is another reason some experts say standardized content moderation policies may not work for all platforms.

Former President Donald Trump sued Facebook, Twitter and Google earlier this year

WASHINGTON, November 10, 2021 – Some experts say they are concerned about a lack of diversity in content moderation practices across the technology industry because some companies may not be well-served – and could be negatively affected – by uniform policies.

Many say following what other influential platforms do, like banning accounts, could do more harm than good when it comes to protecting free speech on the internet.

Since former President Donald Trump was banned from Twitter and Facebook for allegedly stoking the January 6 Capitol riot, debate has raged about what Big Tech platforms should do when certain accounts cross the generally protected free speech line into promoting violence, disobedience or other illegal behavior.

But panelists at a Knight Foundation event on November 2 said that standardized content moderation policies imply a one-size-fits-all approach that would work across the tech spectrum. In fact, the experts argued, it won’t.

Lawmakers have been calling for commitments from social media companies to agree to content and platform policies, including increasing protections for minors online. But representatives from Snapchat, TikTok, and YouTube who sat before members of the Senate Commerce Subcommittee on Consumer Protection last month did not commit to that.

Facebook itself has an Oversight Board that is independent of the company; the Board earlier this year upheld Trump’s ban from the platform but recommended the company set a standard for the penalty (Trump was banned indefinitely).

Among the proposed solutions is a move toward decentralized content regulation, delegating more moderation to individuals who are not employed by the platforms. Some have even suggested offering immunity from certain antitrust regulation as an incentive for platforms to implement decentralized structures.

Costs of content moderation

At an Information Technology and Innovation Foundation event on Tuesday, experts suggested a level of decentralization built on user tools, as opposed to plowing money into employing content moderators.

Experts noted the expense of hiring content moderators. Global social media platforms must hire employees able to moderate content in all the languages and dialects they serve, and the accumulation of these hiring costs could be crippling for many platforms.
