WASHINGTON, June 7, 2018 – A top Facebook policy official on Wednesday defended the social media giant’s new policies about safeguarding privacy and data transparency against the doubts of an audience at the New America Foundation, a think tank generally friendly to Facebook.
In recent months, Facebook has faced heavy scrutiny from Congress for potential data privacy violations, as well as its role in spreading disinformation during the 2016 elections.
Speaking at New America Foundation, Monika Bickert said that Facebook’s deals with numerous companies – including a recently disclosed data-sharing arrangement with phone manufacturer Huawei – are “completely different” from the deal struck with Cambridge Analytica.
That’s because the data is stored on the Huawei phone held by the consumer, and not on Cambridge Analytica’s servers, said Bickert, the company’s vice president of global privacy.
She stressed that unlike the freewheeling days of Facebook’s earlier years, new policies regarding the sharing of user data have been put in place.
Finding the balance to protect data privacy with new research initiatives
However, new measures to protect users’ data privacy may prove difficult to balance with Facebook’s development of research initiatives aimed at creating new counterterrorism efforts.
One question Facebook is examining is how it can conduct research transparently without threatening user privacy.
Facebook has removed 1.9 million pieces of content for violating its policies against terrorist-related speech in the past quarter, she said.
Due to the sheer volume of live posts, content reviewers at Facebook do not look at every post that goes live. Rather than relying on users to flag content, the company relies on technical tools to do a large amount of the work.
“We use technical tools to find content likely to violate policies,” she said.
Facebook and others use the hash-sharing database
One of these tools is a “hash” sharing database that Facebook launched in 2016 along with Microsoft, Twitter, and YouTube. This allows companies to share the “hash,” a unique digital fingerprint or signature, of terrorist images or videos with one another, so that social media websites can prevent the content from being uploaded.
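The hash-sharing idea can be illustrated with a short sketch. This is a simplified, hypothetical model, not Facebook’s actual system: the consortium uses perceptual hashing (e.g., PhotoDNA-style fingerprints that survive resizing and re-encoding), whereas the plain cryptographic hash below only matches byte-identical files. The database and function names are illustrative.

```python
import hashlib

# Hypothetical shared database of fingerprints for known violating content.
# In the real consortium this would be a service shared across companies.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Compute a unique digital fingerprint (here, a SHA-256 hex digest)."""
    return hashlib.sha256(content).hexdigest()

def report_content(content: bytes) -> None:
    """A participating company adds a flagged item's hash to the shared DB."""
    shared_hash_db.add(fingerprint(content))

def is_blocked(content: bytes) -> bool:
    """Any member platform checks an attempted upload against shared hashes."""
    return fingerprint(content) in shared_hash_db

# One platform flags an image; every member can then block the same upload.
flagged_image = b"...image bytes..."
report_content(flagged_image)
print(is_blocked(flagged_image))        # True
print(is_blocked(b"unrelated upload"))  # False
```

The key design point is that only the fingerprint is shared, not the image itself, so companies can cooperate on blocking without redistributing the underlying content.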
But it is much more difficult to stop hate speech on the platform, she said, because something like hate speech is heavily dependent on context.
While the social media giant faces criticism for potentially creating a monopoly, there may be advantages to Facebook’s position as an industry leader. “It cannot be a one company approach,” said Bickert, responding to concerns about the spread of terrorist propaganda on social media.
The benefits of bigness in rapidly identifying and removing terrorist propaganda
With ISIS, she said, Facebook observed that as big companies become better at rapidly finding and taking down terrorist propaganda, malicious users move toward smaller social media companies, which may not have the technology and manpower necessary to combat those groups.
Companies must work together on the issue of counterterrorism efforts. “The sophistication and coordination of the terror groups really brought that lesson home,” she said.
More than 99 percent of what they remove for terror propaganda is flagged by technical tools, Bickert claimed.
Changes in the disclosure and display of political ads
Facebook has also recently launched new policies regarding how political ads will be displayed on the platform. Political ads will be clearly labeled with information about the sponsor of the ad. Viewers can also click on an icon to find more information, such as the campaign budget for the ad and statistics about other people who have viewed it.
When asked about how Facebook intends to deal with the disinformation that may increase during the 2018 midterm elections, Bickert said, “We are focused on midterm elections, but there are so many elections around the world where this is a problem.”
In past German and French elections, she said, Facebook focused on removing fake accounts beforehand in order to prevent them from spreading disinformation.
(Photo of Monika Bickert at SXSW in 2017 by nrkbeta used with permission.)
Panel Hears Opposing Views on Content Moderation Debate
Some agreed there is egregious information that should be downranked on search platforms.
WASHINGTON, September 14, 2022 – Panelists wrangled over how technology platforms should handle content moderation at an event hosted by the Lincoln Network Friday, with one arguing that search engines should neutralize misinformation that causes direct, “tangible” harms and another advocating an online content moderation standard that doesn’t discriminate on viewpoints.
Debate about what to do with certain content on technology platforms has picked up steam since former President Donald Trump was removed last year from platforms including Facebook and Twitter for allegedly inciting the January 6, 2021, storming of the Capitol.
Search engines generally moderate content algorithmically, prioritizing certain results over others. Most engines, like Google, prioritize results from institutions generally considered to be credible, such as universities and government agencies.
That can be a good thing, said Renee DiResta, research manager at Stanford Internet Observatory. If search engines allow scams or medical misinformation to headline search results, she argued, “tangible” material or physical harms will result.
The internet moved communications from the “one-to-many” broadcast model – e.g., television and radio – to a “many-to-many” model, said DiResta. She argued that “many-to-many” interactions create social frictions and make possible the formation of social media mobs.
At the beginning of the year, Georgia Republican Rep. Marjorie Taylor Greene was permanently removed from Twitter for allegedly spreading Covid-19 misinformation, the same reason Kentucky Sen. Rand Paul was removed from Alphabet Inc.’s YouTube.
Lincoln Network senior fellow Antonio Martinez endorsed a more permissive content moderation strategy that – excluding content that incites imminent, lawless action – is tolerant of heterodox speech. “To think that we can epistemologically or even technically go in and establish capital-T Truth at scale is impossible,” he said.
Trump has said he is committed to a platform of open speech with the creation of his social media website Truth Social. Other platforms, such as social media site Parler and video-sharing website Rumble, have purported to allow more speech than the incumbents. SpaceX CEO Elon Musk previously committed to buying Twitter because of its policies prohibiting certain speech, though he now wants out of that commitment.
Alex Feerst, CEO of digital content curator Murmuration Labs, said that free-speech aphorisms – such as, “The cure for bad speech is more speech” – may no longer hold true given the volume of speech enabled by the internet.
Twitter Whistleblower Says Company Needs to Work to Permanently Delete User Data
Meanwhile, Twitter shareholders approved a deal to sell the company to Elon Musk, who wants out.
WASHINGTON, September 14, 2022 – Twitter’s former head of security and now company whistleblower told the Senate Judiciary Committee on Tuesday that Twitter must put more resources into permanently deleting user data when accounts are eliminated, in order to preserve users’ security and privacy.
Peiter Zatko, who was fired from Twitter in January due to performance issues, blew the whistle on the company last month by alleging Twitter’s lack of sufficient security and privacy safeguards poses a national security risk. He alleged that the company does not delete user data when accounts are deleted.
On Tuesday, Zatko told the Senate Judiciary Committee that the company needs to ensure that the personal information of users is deleted when they delete their accounts.
He alleged company engineers can access any user data on Twitter, including home addresses, phone numbers and contact lists, and sell the data without company executives knowing.
“I continued to believe in the mission of the company and root for its success, but that success can only happen if the privacy and security of Twitter users and the public are protected,” Zatko said.
The Wall Street Journal reported Tuesday that Twitter investors approved SpaceX CEO Elon Musk’s takeover of the company, despite the billionaire trying to back out of the deal allegedly over a lack of information about the number of fake accounts on the platform. The company and Musk are currently in court battling over whether he must follow through on the deal.
Musk’s lawyer has asked the court to delay the trial — scheduled for mid-October — to allow his client to investigate the whistleblower’s claims, according to reporting from Reuters.
At White House Event, Biden Administration Seeks Regulation of Big Tech
Participants voiced concerns over alleged abuses by big tech companies.
WASHINGTON, September 9, 2022 – President Joe Biden on Thursday called for a federal privacy standard, Section 230 reform, and increased antitrust scrutiny against big tech.
“Although tech platforms can help keep us connected, create a vibrant marketplace of ideas, and open up new opportunities for bringing products and services to market, they can also divide us and wreak serious real-world harms,” according to a White House readout from the administration’s listening session on Thursday.
Participants at the White House event voiced concerns over alleged abuses by big tech companies.
A new data privacy regime?
The Biden administration called for “clear limits on the ability to collect, use, transfer, and maintain our personal data.” It also endorsed bipartisan congressional efforts to establish a national privacy standard.
In June, Rep. Frank Pallone Jr., D-N.J., introduced the American Data Privacy and Protection Act. The bill gained substantial bipartisan support and was advanced by the House Energy and Commerce Committee in July.
In the absence of federal privacy laws, several states drafted privacy laws of their own. The Golden State, for instance, implemented the California Consumer Privacy Act in 2018. The CCPA’s protections were extended by the California Privacy Rights Act of 2020, which goes into effect in January 2023.
Biden maintains his position seeking changes to Section 230
“Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct or materials,” argued the White House document.
Biden’s hostility towards Section 230 is not new. Section 230 protects internet platforms from most legal liability that might otherwise result from third party–generated content. For example, although an online publication may be guilty of libel for a news story it publishes, it cannot be held liable for slanderous reader posts in its comments section.
Critics of Section 230 say that it unfairly shields rogue social media companies from accountability for their misdeeds. In addition to Biden and other Democrats, many Republicans dislike the provision. Sens. Ted Cruz, R-Texas, and Josh Hawley, R-Missouri, argue that platforms such as Twitter, Facebook, and YouTube discriminate against conservative speech and therefore should not benefit from such federal legal protections.
Section 230’s proponents say that it is the foundation of online free speech.
Ramping up antitrust
“Today…a small number of dominant Internet platforms use their power to exclude market entrants,” Thursday’s press release said. This sentiment is consonant with the administration’s antitrust policies to date. Indeed, Lina Khan, chair of the Federal Trade Commission, was a vocal antitrust advocate in academia and has greatly expanded the scope of the agency’s antitrust efforts since her appointment in 2021.
In the Senate, Sen. Amy Klobuchar, D-Minnesota, is sponsoring the American Innovation and Choice Online Act, a bill that bans large online platforms from engaging in putatively “anticompetitive” business practices. The measure was approved by the Judiciary Committee earlier this year, and, though it was stalled over the summer to make way for other Democratic legislative priorities, it may come up for a vote this fall.