Big Tech

Facebook Faces A Skeptical Audience on Data Privacy and Terror Transparency

WASHINGTON, June 7, 2018 – A top Facebook policy official on Wednesday defended the social media giant’s new policies on privacy safeguards and data transparency before a skeptical audience at the New America Foundation, a think tank generally friendly to Facebook.

In recent months, Facebook has faced heavy scrutiny from Congress for potential data privacy violations, as well as its role in spreading disinformation during the 2016 elections.

Speaking at New America Foundation, Monika Bickert said that Facebook’s deals with numerous companies – including a recently disclosed data-sharing arrangement with phone manufacturer Huawei – are “completely different” from the deal struck with Cambridge Analytica.

That’s because the data is stored on the Huawei phone held by the consumer, and not on Cambridge Analytica’s servers, said Bickert, the company’s vice president of global policy management.

She stressed that unlike the freewheeling days of Facebook’s earlier years, new policies regarding the sharing of user data have been put in place.

Finding the balance between data privacy and new research initiatives

However, new measures to protect user data privacy may prove difficult to balance with Facebook’s development of research initiatives aimed at counterterrorism.

One question Facebook is examining is how it can conduct research transparently without threatening user privacy.

Facebook has removed 1.9 million pieces of content for violating its policies against terrorist-related speech in the past quarter, she said.

Due to the sheer volume of posts, content reviewers at Facebook do not look at every post that goes live. Rather than relying on users to flag content, the company relies on technical tools to do much of the work.

“We use technical tools to find content likely to violate policies,” she said.
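
As a rough illustration of that triage model, the hypothetical sketch below scores posts with an automated check and routes only the likely violations to human reviewers. The scoring function, threshold, and data structures are stand-ins invented for this example, not Facebook’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str

# Stand-in for a trained policy-violation classifier. A real system would use
# machine-learning models; this keyword check only keeps the example runnable.
def violation_score(text: str) -> float:
    flagged_terms = ("propaganda", "incitement")
    return 1.0 if any(term in text.lower() for term in flagged_terms) else 0.0

REVIEW_THRESHOLD = 0.8  # assumed cutoff for routing a post to human review

def route_to_reviewers(posts: list[Post]) -> list[Post]:
    """Return only the posts that automated tools flag as likely violations."""
    return [post for post in posts if violation_score(post.text) >= REVIEW_THRESHOLD]

queue = route_to_reviewers([
    Post(1, "Photos from my vacation"),
    Post(2, "Sharing extremist propaganda links"),
])
print([post.post_id for post in queue])  # prints [2]
```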

Facebook and others use the hash-sharing database

One of these tools is a “hash” sharing database that Facebook launched in 2016 along with Microsoft, Twitter, and YouTube. This allows companies to share the “hash,” a unique digital fingerprint or signature, of terrorist images or videos with one another, so that social media websites can prevent the content from being uploaded.
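
The underlying mechanics can be sketched in a few lines: each participating company computes a fingerprint of a known piece of terrorist content and contributes it to the shared database, and other platforms compare uploads against that list before publishing them. The sketch below is a simplified, hypothetical illustration; consortium databases of this kind generally rely on perceptual hashes that survive re-encoding, whereas this example uses a plain SHA-256 digest and only catches exact copies.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest serving as the content's 'hash', or digital fingerprint."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared database of fingerprints contributed by participating companies.
shared_hash_database: set[str] = set()

def share_hash(content: bytes) -> None:
    """One company flags a piece of terrorist content and shares its hash."""
    shared_hash_database.add(fingerprint(content))

def should_block_upload(content: bytes) -> bool:
    """Another platform checks an upload against the shared database."""
    return fingerprint(content) in shared_hash_database

# Example: one platform shares the hash of known propaganda; a second platform
# then blocks an identical file at upload time.
known_bad = b"bytes of a known propaganda video"
share_hash(known_bad)
assert should_block_upload(known_bad)
assert not should_block_upload(b"bytes of an unrelated video")
```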

But it is much more difficult to stop hate speech on the platform, she said, because something like hate speech is heavily dependent on context.

While the social media giant faces criticism for its potential monopoly within the industry, there may be advantages to Facebook’s power as an industry authority. “It cannot be a one company approach,” said Bickert, responding to concerns about the spread of terrorist propaganda on social media.

The benefits of bigness in rapidly identifying and removing terrorist propaganda

With ISIS, she said, Facebook observed that the better big companies become at rapidly finding and taking down terrorist propaganda, the more those malicious users move toward and target smaller social media companies, which may not have the technology and manpower necessary to combat those groups.

Companies must work together on counterterrorism. “The sophistication and coordination of the terror groups really brought that lesson home,” she said.

More than 99 percent of the content Facebook removes as terrorist propaganda is flagged by technical tools, Bickert claimed.

Changes in the disclosure and display of political ads

Facebook has also recently launched new policies for how political ads are displayed on the platform. Political ads will be clearly labeled with information about the ad’s sponsor. Viewers can also click an icon to find more information, such as the ad campaign’s budget and aggregate statistics about other people who have viewed the ad.

When asked about how Facebook intends to deal with the disinformation that may increase during the 2018 midterm elections, Bickert said, “We are focused on midterm elections, but there are so many elections around the world where this is a problem.”

In the past German and French elections, she said, Facebook focused on removing fake accounts beforehand in order to prevent those accounts from spreading disinformation.

(Photo of Monika Bickert at SXSW in 2017 by nrkbeta used with permission.)

Big Tech

Frances Haugen, U.S. House Witnesses Say Facebook Must Address Social Harms

The former Facebook employee-turned-whistleblower said the company must be accountable for the social harm it causes.

Facebook whistleblower Frances Haugen

WASHINGTON, December 2, 2021 – Facebook whistleblower Frances Haugen told the House Subcommittee on Communications and Technology on Wednesday that the committee must act to investigate Facebook’s social harms to consumers.

Haugen said Congress should be concerned about how Facebook’s products are used to influence vulnerable populations.

Haugen’s testimony, delivered at Wednesday’s subcommittee hearing, urged lawmakers to impose accountability and transparency safeguards on Facebook to prevent it from misleading the public. It came on the heels of her first testimony in October before the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security, in which she urged Congress to force Facebook to make its internal research public, arguing the company cannot be trusted to act on it.

That testimony came after she leaked documents to the Wall Street Journal and the Securities and Exchange Commission suggesting Facebook knew about the negative mental health effects its photo-sharing app Instagram had on teen users but allegedly did nothing to combat them.

“No efforts to address these problems are ever going to be effective if Facebook is not required to share data in support of its claims or be subject to oversight of its business decisions,” Haugen said Wednesday. “The company’s leadership keeps vital information from the public, the U.S. government, its shareholders, and governments around the world. The documents I have provided prove that Facebook has repeatedly misled us about what its own research reveals about the safety of children, its role in spreading hateful and polarizing messages, and so much more.”

Facebook’s impact on communities of color

Among the social harms that advocates highlighted, lawmakers were particularly interested in Facebook’s negative impact on communities of color. Rashad Robinson, president of online racial justice organization Color of Change, expressed frustration at technology companies’ disregard for the truth.

“I have personally negotiated with leaders and executives at Big Tech corporations like Facebook, Google, Twitter and Airbnb, including Mark Zuckerberg, over a number of years,” Robinson said. “I sat across the table from him, looking into his eyes, experiencing firsthand the lies, evasions, ignorance and complete lack of accountability to any standard of safety for Black people and other people of color.”

Robinson recalled during the height of the national racial justice protests in 2020 that Zuckerberg told him that the harms Black people were experiencing on Facebook “weren’t reflected in their own internal data.” Now, Robinson said, “we know from the documents shared by Frances Haugen and others that his internal researchers were, in fact, sounding alarms at the exact same time.”

Robinson also highlighted how Facebook’s own data shows that the company disables Black users for less extreme content more often than white users, “often for just talking about the racism they face,” he said.

To foster real solutions for social media consumer protection, Robinson suggested that lawmakers reform Section 230 of the Communications Decency Act to hold companies accountable for minimizing the adverse impact of the content from which they profit.

Currently, Section 230 shields online platforms from liability for harmful content posted by their users. Conservative advocates of gutting Section 230 say the law should be repealed because it gives social media companies too much power to censor conservative voices, while proponents of keeping Section 230 argue that the law is necessary in some capacity because it allows for the free exchange of thoughts and ideas.

Robinson said reforming Section 230 to impose liability for content on the companies’ sites would “protect people against Big Tech design features that amplify or exploit content that is clearly harmful to the public.”

These recommendations came as the House considered four social media consumer protection bills on Wednesday: H.R. 2154, the “Protecting Americans from Dangerous Algorithms Act”; H.R. 3184, the “Civil Rights Modernization Act of 2021”; H.R. 3421, the “Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms Act” or the “SAFE TECH Act”; and H.R. 5596, the “Justice Against Malicious Algorithms Act of 2021.”

Section 230

Experts Warn Against Total Repeal of Section 230

Panelists note shifting definition of offensive content.

WASHINGTON, November 22, 2021 – Communications experts say action by Congress to essentially gut Section 230 would not truly solve any problems with social media.

Experts emphasized that it is not possible for platforms to remove from their sites all content that people may believe to be dangerous. They argue that Section 230 of the Communications Decency Act, which shields platforms from legal liability for what their users post, is necessary in at least some capacity.

During the discussion at Broadband Breakfast’s Live Online event on Wednesday, Alex Feerst, co-founder of the Digital Trust and Safety Partnership and a former content moderator, said that to a certain extent it is impossible for platforms to moderate “dangerous” speech, because every person has a different opinion about what speech is dangerous. It is this ambiguity, he said, that Section 230 protects companies from.

Still, Feerst believes platforms should bear some degree of liability for the content on their sites, since mitigating the harm of dangerous speech is necessary where possible. Platforms’ use of artificial intelligence, he said, makes some degree of liability even more essential.

Given the sheer amount of online speech that moderators must review in the internet age, Feerst said clear-cut moderation standards are too messy and expensive to be viable options.

Matt Gerst, vice president for legal and policy affairs at the Internet Association, and Shane Tews, nonresident senior fellow at the American Enterprise Institute, also said that while content moderation is complex, it is necessary. Scott McCollough, attorney at McCollough Law Firm, said large social media companies like Facebook are not the cause of all the problems now in the national spotlight; rather, features of today’s society, such as the extreme prevalence of conflict, are to blame for the focus on social media.

Proposals for change

Rick Lane, CEO of Iggy Ventures, proposed that Section 230 reform include a requirement for social media platforms to make very clear what content is and is not allowed on their sites. McCollough echoed this concern, saying that many moderation actions platforms currently take do not appear consistent with their stated terms and conditions, and that individual states should be able to examine these instances case by case to determine whether platforms apply their terms and conditions fairly.

Feerst highlighted the nuance of the issue, saying that people’s definitions of “consistent” are naturally subjective, but agreed with McCollough that users whose content is removed should be notified, along with the reasoning for the moderators’ action.

Lane also believes Section 230 reform should rightfully include a requirement that platforms demonstrate a reasonable standard of care and moderate illegal and other extremely dangerous content on their sites. Tews generally agreed that such content moderation is complex, as she sees a separation between freedom of speech and illegal activity.

Gerst highlighted concerns from the companies the Internet Association represents that government regulation arising from Section 230 reform would require widely varied platforms to standardize their operational approaches, diminishing innovation on the internet.

Our Broadband Breakfast Live Online events take place on Wednesdays at 12 Noon ET.

Wednesday, November 17, 2021, 12 Noon ET — The Changing Nature of the Debate About Social Media and Section 230

Facebook is under fire as never before. In response, the social-networking giant has gone so far as to change its official name to Meta (as in the “metaverse”). What are the broader concerns about social media beyond Facebook? How will concerns about Facebook’s practices spill over into other social media networks, and into the debate about Section 230 of the Communications Act?

Panelists for this Broadband Breakfast Live Online session:

  • Scott McCollough, Attorney, McCollough Law Firm
  • Shane Tews, Nonresident Senior Fellow, American Enterprise Institute
  • Alex Feerst, Co-founder, Digital Trust & Safety Partnership
  • Rick Lane, CEO, Iggy Ventures
  • Matt Gerst, VP for Legal & Policy Affairs, Internet Association
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry.  Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Shane Tews is a nonresident senior fellow at the American Enterprise Institute (AEI), where she works on international communications, technology and cybersecurity issues, including privacy, internet governance, data protection, 5G networks, the Internet of Things, machine learning, and artificial intelligence. She is also president of Logan Circle Strategies.

Alex Feerst is a lawyer and technologist focused on building systems that foster trust, community, and privacy. He leads Murmuration Labs, which helps tech companies address the risks and human impact of innovative products, and co-founded the Digital Trust & Safety Partnership, the first industry-led initiative to establish best practices for online trust and safety. He was previously Head of Legal and Head of Trust and Safety at Medium, General Counsel at Neuralink, and currently serves on the editorial board of the Journal of Online Trust & Safety, and as a fellow at Stanford University’s Center for Internet and Society.

Rick Lane is a tech policy expert, child safety advocate, and the founder and CEO of Iggy Ventures. Iggy advises and invests in companies and projects that can have a positive social impact. Prior to starting Iggy, Rick served for 15 years as the Senior Vice President of Government Affairs of 21st Century Fox.

Matt Gerst is the Vice President for Legal & Policy Affairs and Associate General Counsel at Internet Association, where he builds consensus on policy positions among IA’s diverse membership of companies that lead the internet industry. Most recently, Matt served as Vice President of Regulatory Affairs at CTIA, where he managed a diverse range of issues including consumer protection, public safety, network resiliency, and universal service. Matt received his J.D. from New York Law School, and he served as an adjunct professor of law in the scholarly writing program at the George Washington University School of Law.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.

Big Tech

Experts Caution Against One Size Fits All Approach to Content Moderation

The cost of moderation is another reason some experts say standardized content moderation policies may not work for all platforms.

Former President Donald Trump sued Facebook, Twitter and Google earlier this year

WASHINGTON, November 10, 2021 – Some experts say they are concerned about a lack of diversity in content moderation practices across the technology industry because some companies may not be well-served – and could be negatively affected – by uniform policies.

Many say following what other influential platforms do, like banning accounts, could do more harm than good when it comes to protecting free speech on the internet.

Since former President Donald Trump was banned from Twitter and Facebook for allegedly stoking the January Capitol riot, debate has raged about what Big Tech platforms should do when certain accounts cross the generally protected free speech line into promoting violence, disobedience, or other illegal behavior.

But speakers at a Knight Foundation event on November 2 said standardized content moderation policies imply a one-size-fits-all approach that would work across the tech spectrum. In fact, experts say, it won’t.

Lawmakers have been calling for commitments from social media companies to agree to content and platform policies, including increasing protections for minors online. But representatives from Snapchat, TikTok, and YouTube who sat before members of the Senate Commerce Subcommittee on Consumer Protection last month did not commit to that.

Facebook itself has an Oversight Board that is independent of the company; the Board earlier this year upheld Trump’s ban from the platform but recommended the company set a standard for the penalty (Trump was banned indefinitely).

Among the solutions proposed for many platforms is a move toward decentralized content regulation, with more moderation delegated to individuals not employed by the platforms. Some even suggest offering immunity from certain antitrust regulations as an incentive for platforms to adopt decentralized structures.

Costs of content moderation

At an Information Technology and Innovation Foundation event on Tuesday, experts suggested a level of decentralization built on user tools, as opposed to pouring money into employing content moderators.

Experts also noted the expense of hiring content moderators. Global social media platforms must hire employees able to moderate content in every language and dialect they serve, and the accumulation of these hiring costs has the potential to be lethal to many platforms.
