June 1, 2020 — Google and Apple are rarely criticized for not collecting enough data on users, but under the state of emergency COVID-19 has brought about, this is no longer the case, said panelists at a webinar hosted Thursday by the Cato Institute.
Because users’ location data could help reduce the spread of coronavirus, there currently exists a tradeoff between privacy and public health.
The technology underlying the app has sparked debate among healthcare officials, media experts, and the public. In an attempt to reduce public anxiety and increase app adoption, Apple and Google banned the use of location tracking in their coronavirus-tracing technology, as GPS-based solutions “can be centrally stored and used against you,” noted Harper Reed, senior fellow at the Annenberg Innovation Lab.
While healthcare officials have pleaded with these companies for access to more accurate GPS location data, the app protects user privacy by instead relying on smartphones’ low-energy Bluetooth antennas, which log data only when people come into close contact with one another.
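The Bluetooth approach the panelists describe can be illustrated with a minimal sketch. This is not the actual Apple/Google Exposure Notification code; the class name, rotation interval, and contact threshold below are illustrative assumptions. It only mimics the general idea: devices broadcast short-lived random identifiers and log only the identifiers they observe nearby, so no location data is ever collected or centrally stored.

```python
import secrets
import time

class ContactLogger:
    """Illustrative sketch of decentralized, Bluetooth-style contact logging.

    Hypothetical example, not the Apple/Google protocol: devices advertise
    rotating random identifiers and record only sustained proximity.
    """

    ROTATION_SECONDS = 15 * 60     # assumed rotation interval for identifiers
    MIN_CONTACT_SECONDS = 5 * 60   # assumed threshold for a logged contact

    def __init__(self):
        self._current_id = secrets.token_hex(16)  # random, unlinkable to identity
        self._observed = {}    # remote identifier -> time first seen
        self.contacts = set()  # identifiers seen long enough to count as contact

    def broadcast_id(self):
        """The random identifier this device is currently advertising."""
        return self._current_id

    def rotate_id(self):
        """Replace the advertised identifier so a device cannot be tracked over time."""
        self._current_id = secrets.token_hex(16)

    def observe(self, remote_id, now=None):
        """Record a sighting of a nearby device; log a contact only after
        sustained proximity, never a location."""
        now = time.time() if now is None else now
        first_seen = self._observed.setdefault(remote_id, now)
        if now - first_seen >= self.MIN_CONTACT_SECONDS:
            self.contacts.add(remote_id)
```

Because each device stores only the random identifiers it has seen, a later exposure notification can be matched locally without any central server learning where anyone has been.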
Panelists argued that building a system that preserves privacy is crucial, and that the companies chose the correct route.
It is not yet clear which tracking technology the public would be more comfortable with. Reed argued that the public may be more comfortable with GPS tracking, as many are already familiar with it.
Furthermore, without the useful location data that would allow for more accurate contact tracing, healthcare officials believe that Apple and Google’s protocol will be of little use.
But while digital contact tracing will not replace shoe-leather contact tracing anytime soon, panelists agreed it can supplement other solutions to help alleviate problems in an already overwhelmed healthcare sector.
Panel Hears Opposing Views on Content Moderation Debate
Some panelists agreed that egregious information should be downranked on search platforms.
WASHINGTON, September 14, 2022 – Panelists wrangled over how technology platforms should handle content moderation at an event hosted by the Lincoln Network Friday, with one arguing that search engines should neutralize misinformation that causes direct, “tangible” harms and another advocating an online content moderation standard that doesn’t discriminate on viewpoints.
Debate about what to do with certain content on technology platforms has picked up steam since former President Donald Trump was removed last year from platforms including Facebook and Twitter for allegedly inciting the January 6, 2021, storming of the Capitol.
Search engines generally moderate content algorithmically, prioritizing certain results over others. Most engines, like Google, prioritize results from institutions generally considered to be credible, such as universities and government agencies.
That can be a good thing, said Renee DiResta, research manager at Stanford Internet Observatory. If search engines allow scams or medical misinformation to headline search results, she argued, “tangible” material or physical harms will result.
The internet shifted communications from the “one-to-many” model of broadcast media – e.g., television and radio – to a “many-to-many” model, said DiResta. She argued that “many-to-many” interactions create social frictions and make possible the formation of social media mobs.
At the beginning of the year, Georgia Republican Rep. Marjorie Taylor Greene was permanently removed from Twitter for allegedly spreading Covid-19 misinformation, the same reason Kentucky Senator Rand Paul was removed from Alphabet Inc.’s YouTube.
Lincoln Network senior fellow Antonio Martinez endorsed a more permissive content moderation strategy that – excluding content that incites imminent, lawless action – is tolerant of heterodox speech. “To think that we can epistemologically or even technically go in and establish capital-T Truth at scale is impossible,” he said.
Trump has said he is committed to a platform of open speech with the creation of his social media website Truth Social. Other platforms, such as social media site Parler and video-sharing website Rumble, have purported to allow more speech than the incumbents. SpaceX CEO Elon Musk previously committed to buying Twitter, citing objections to its policies prohibiting certain speech, though he now wants out of that commitment.
Alex Feerst, CEO of digital content curator Murmuration Labs, said that free-speech aphorisms – such as, “The cure for bad speech is more speech” – may no longer hold true given the volume of speech enabled by the internet.
Twitter Whistleblower Says Company Needs to Work to Permanently Delete User Data
Meanwhile, Twitter shareholders approved a deal to sell the company to Elon Musk, who wants out.
WASHINGTON, September 14, 2022 – Twitter’s former head of security and now company whistleblower told the Senate Judiciary Committee Tuesday that Twitter must put more resources into permanently deleting user data upon the elimination of accounts to preserve the security and privacy of users.
Peiter Zatko, who was fired from Twitter in January due to performance issues, blew the whistle on the company last month by alleging Twitter’s lack of sufficient security and privacy safeguards poses a national security risk. He alleged that the company does not delete user data when accounts are deleted.
On Tuesday, Zatko told the Senate Judiciary Committee that the company needs to take the step of ensuring that the personal information of users is deleted when they delete their accounts.
He alleged company engineers can access any user data on Twitter, including home addresses, phone numbers and contact lists, and sell the data without company executives knowing.
“I continued to believe in the mission of the company and root for its success, but that success can only happen if the privacy and security of Twitter users and the public are protected,” Zatko said.
The Wall Street Journal reported Tuesday that Twitter investors approved SpaceX CEO Elon Musk’s takeover of the company, despite the billionaire trying to back out of the deal allegedly over a lack of information about the number of fake accounts on the platform. The company and Musk are currently in court battling over whether he must follow through on the deal.
Musk’s lawyer has asked the court to delay the trial — scheduled for mid-October — to allow his client to investigate the whistleblower’s claims, according to reporting from Reuters.
At White House Event, Biden Administration Seeks Regulation of Big Tech
Participants voiced concerns over alleged abuses by big tech companies.
WASHINGTON, September 9, 2022 – President Joe Biden on Thursday called for a federal privacy standard, Section 230 reform, and increased antitrust scrutiny against big tech.
“Although tech platforms can help keep us connected, create a vibrant marketplace of ideas, and open up new opportunities for bringing products and services to market, they can also divide us and wreak serious real-world harms,” according to a White House readout from the administration’s listening session on Thursday.
Participants at the White House event voiced concerns over alleged abuses by big tech companies.
A new data privacy regime?
The Biden administration called for “clear limits on the ability to collect, use, transfer, and maintain our personal data.” It also endorsed bipartisan congressional efforts to establish a national privacy standard.
Last June, Rep. Frank Pallone Jr., D-N.J., introduced the American Data Privacy and Protection Act. The bill gained substantial bipartisan support and was advanced by the House Energy and Commerce Committee in July.
In the absence of federal privacy laws, several states have enacted privacy laws of their own. The Golden State, for instance, passed the California Consumer Privacy Act in 2018. The CCPA’s protections were extended by the California Privacy Rights Act of 2020, which goes into effect in January 2023.
Biden maintains his position seeking changes to Section 230
“Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct or materials,” argued the White House document.
Biden’s hostility towards Section 230 is not new. Section 230 protects internet platforms from most legal liability that might otherwise result from third party–generated content. For example, although an online publication may be liable for libel over a news story it publishes, it cannot be held liable for defamatory reader posts in its comments section.
Critics of Section 230 say that it unfairly shields rogue social media companies from accountability for their misdeeds. And in addition to Biden and other Democrats, many Republicans dislike the provision. Sens. Ted Cruz, R-Texas, and Josh Hawley, R-Missouri, argue that platforms such as Twitter, Facebook, and YouTube discriminate against conservative speech and therefore should not benefit from such federal legal protections.
Section 230’s proponents say that it is the foundation of online free speech.
Ramping up antitrust
“Today…a small number of dominant Internet platforms use their power to exclude market entrants,” Thursday’s press release said. This sentiment is consonant with the administration’s antitrust policies to date. Indeed, Lina Khan, chair of the Federal Trade Commission, was a vocal antitrust advocate in academia and has greatly expanded the scope of the agency’s antitrust efforts since her appointment in 2021.
In the Senate, Sen. Amy Klobuchar, D-Minnesota, is sponsoring the American Innovation and Choice Online Act, a bill that would bar large online platforms from engaging in putatively “anticompetitive” business practices. The measure was approved by the Judiciary Committee earlier this year, and, though it was stalled over the summer to make way for other Democratic legislative priorities, it may come up for a vote this fall.