
Big Tech

Google and Apple May Help Us Understand ‘Typhoid Mary’s’ Cell Phone Movements


Screenshot of the Future Tense event on Thursday

April 19, 2020 – Apple and Google are forgoing the use of their massive troves of location data for contact tracing and are instead tinkering with the less invasive “proximity tracing,” according to industry analysts at a Future Tense webinar on Thursday titled “Will the Coronavirus Claim Privacy Among its Victims?”

Proximity tracing uses Bluetooth technology to identify all the devices that have been near a person’s phone, creating a log of all the people whose paths they have crossed.

That way, an individual’s location is not tracked, but if authorities come across someone with a confirmed case of coronavirus, then all the devices that came into contact with the infected person’s device will be notified.
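
For readers curious how such a Bluetooth-based scheme can flag exposures without tracking anyone’s location, the following is a minimal, hypothetical sketch in Python. It is not Apple and Google’s actual protocol; the Phone class, its token handling, and the on-device matching step are simplifying assumptions for illustration only.

```python
# A minimal, hypothetical sketch of decentralized proximity tracing (not the
# real Apple/Google protocol): each phone broadcasts rotating random tokens,
# logs tokens it hears nearby, and later checks that local log against tokens
# published by people with confirmed cases. No locations are ever recorded.
import secrets
from dataclasses import dataclass, field

@dataclass
class Phone:
    heard: set = field(default_factory=set)        # tokens received from nearby phones
    broadcast: list = field(default_factory=list)  # tokens this phone has sent out

    def new_token(self) -> str:
        """Generate and remember a fresh random token to broadcast over Bluetooth."""
        token = secrets.token_hex(16)
        self.broadcast.append(token)
        return token

    def receive(self, token: str) -> None:
        """Record a token heard from a nearby device."""
        self.heard.add(token)

    def check_exposure(self, positive_tokens: set) -> bool:
        """Matching happens on the device, so no server learns who met whom."""
        return bool(self.heard & positive_tokens)

# Usage: Alice and Bob cross paths; Bob later tests positive.
alice, bob = Phone(), Phone()
alice.receive(bob.new_token())          # their phones exchange tokens via Bluetooth
published = set(bob.broadcast)          # Bob uploads only the tokens he broadcast
print(alice.check_exposure(published))  # True: Alice learns she may have been exposed
```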

Apple and Google are trying this approach, which is viewed as “less invasive” because Bluetooth doesn’t track “where you were,” said Kathryn Waldron, the resident fellow of national security and cybersecurity at the R Street Institute.

“The benefit of the Google/Apple approach is that it’s decentralized,” said Al Gidari, consulting director of privacy at the Stanford Center for Internet and Society, adding that proximity tracing is “as close to anonymous as you can get, technically.”

“The beauty” of the Google and Apple approach is that it works at a scale “that works for 3 billion people,” Gidari said.

Gidari also addressed a common critique of proximity tracing: Bluetooth knows no boundaries. The example often used is that of a floormate who lives across a wall from you but is close enough for your phone’s Bluetooth to register theirs. However, Gidari argued that “you would want to know” who in your building and immediate surroundings has the virus.

Gidari contrasted corporation-led proximity tracing with the “centralized approach” Singapore has taken, in which the government holds all of its citizens’ Bluetooth data. He cited the Google and Apple effort as the ideal alternative: it is decentralized, which is good for privacy, and it works across national borders at a scale of up to “3 billion people.”

Not everybody on the call was convinced of the privacy assurances.

Waldron questioned whether this approach would work, because for proximity tracing to succeed Americans would have to voluntarily download an app made by Apple or Google, and doing so “will raise alarms for everyday Americans.”

Perhaps Americans won’t have their phones on them when they move about, Waldron said. She also criticized the Singapore model, noting that “not a high enough percentage of the population was buying in” and downloading the app for it to be successful.

Panelists then turned to the “60 million-dollar question” of whether the government should, or even could, require Americans to download such an app in the interest of public health.

“It’s a red line for a lot of people,” said Gidari. However, he acknowledged that the data would help, and that some data is better than none.

“Twenty percent, that’s better than zero,” Gidari said, referring to the share of the population that would need to download such a hypothetical app for it to be beneficial. He compared the uptake of such an app to voting patterns in the U.S. “Some people show up, others don’t,” he said.

Gidari remained firm on the potential benefit proximity tracing could have in slowing the spread of COVID-19. “If Typhoid Mary had a cellphone,” Gidari said at one point, “we’d definitely want to know where she was.”

David Jelke was a Reporter for Broadband Breakfast. He graduated from Dartmouth College with a degree in neuroscience. Growing up in Miami, he learned to speak Spanish during a study abroad semester in Peru. He is now teaching himself French on his iPhone.

Big Tech

Frances Haugen, U.S. House Witnesses Say Facebook Must Address Social Harms

The former Facebook employee-turned-whistleblower said the company must be accountable for the social harm it causes.


Facebook whistleblower Frances Haugen

WASHINGTON, December 2, 2021 – Facebook whistleblower Frances Haugen told the House Subcommittee on Communications and Technology on Wednesday that the committee must act to investigate Facebook’s social harms to consumers.

Haugen said Congress should be concerned about how Facebook’s products are used to influence vulnerable populations.

Haugen’s testimony, delivered at Wednesday’s subcommittee hearing, urged lawmakers to impose accountability and transparency safeguards on Facebook to prevent it from misleading the public. It comes on the heels of her first testimony in October before the subcommittee on consumer protection, product safety and data security, in which she urged Congress to force Facebook to make its internal research public, arguing that the company cannot be trusted to act on it.

That testimony came after she leaked documents to the Wall Street Journal and the Securities and Exchange Commission suggesting Facebook knew about the negative mental health effects its photo-sharing app Instagram had on teen users but allegedly did nothing to combat them.

“No efforts to address these problems are ever going to be effective if Facebook is not required to share data in support of its claims or be subject to oversight of its business decisions,” Haugen said Wednesday. “The company’s leadership keeps vital information from the public, the U.S. government, its shareholders, and governments around the world. The documents I have provided prove that Facebook has repeatedly misled us about what its own research reveals about the safety of children, its role in spreading hateful and polarizing messages, and so much more.”

Facebook’s impact on communities of color

Among the social harms that advocates highlighted, lawmakers were particularly interested in Facebook’s negative impact on communities of color. Rashad Robinson, president of online racial justice organization Color of Change, expressed frustration at technology companies’ disregard for the truth.

“I have personally negotiated with leaders and executives at Big Tech corporations like Facebook, Google, Twitter and Airbnb, including Mark Zuckerberg, over a number of years,” Robinson said. “I sat across the table from him, looking into his eyes, experiencing firsthand the lies, evasions, ignorance and complete lack of accountability to any standard of safety for Black people and other people of color.”

Robinson recalled during the height of the national racial justice protests in 2020 that Zuckerberg told him that the harms Black people were experiencing on Facebook “weren’t reflected in their own internal data.” Now, Robinson said, “we know from the documents shared by Frances Haugen and others that his internal researchers were, in fact, sounding alarms at the exact same time.”

Robinson also highlighted how Facebook’s own data shows that the company disables Black users for less extreme content more often than white users, “often for just talking about the racism they face,” he said.

To foster real solutions for social media consumer protection, Robinson suggested that lawmakers reform Section 230 of the Communications Decency Act to hold companies accountable for minimizing the adverse impact of the content from which they profit.

Currently, Section 230 shields online platforms from liability derived from content posted on their platforms that leads to harm. Conservative advocates for gutting Section 230 say the law should be repealed because it gives social media companies too much power to censor conservative voices, while proponents of keeping Section 230 argue that the law is necessary in some capacity because it allows for the free exchange of thoughts and ideas in our society.

Robinson said reforming Section 230 to impose liability for content on the companies’ sites would “protect people against Big Tech design features that amplify or exploit content that is clearly harmful to the public.”

These recommendations came as the House considered four social media consumer protection bills on Wednesday: H.R. 2154, the “Protecting Americans from Dangerous Algorithms Act”; H.R. 3184, the “Civil Rights Modernization Act of 2021”; H.R. 3421, the “Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms Act” or the “SAFE TECH Act”; and H.R. 5596, the “Justice Against Malicious Algorithms Act of 2021.”


Section 230

Experts Warn Against Total Repeal of Section 230

Panelists note shifting definition of offensive content.


WASHINGTON, November 22, 2021 – Communications experts say action by Congress to essentially gut Section 230 would not truly solve any problems with social media.

Experts emphasized that it is not possible for platforms to remove from their site all content that people may believe to be dangerous. They argue that Section 230 of the Communications Decency Act, which shields platforms from legal liability with respect to what their users post, is necessary in at least some capacity.

During the discussion at Broadband Breakfast’s Live Online Event on Wednesday, Alex Feerst, co-founder of the Digital Trust and Safety Partnership and a former content moderator, said that to a certain extent it is impossible for platforms to moderate “dangerous” speech because every person has a different opinion about what speech counts as dangerous. It is this ambiguity, he said, that Section 230 protects companies from.

Still, Feerst believes that platforms should bear some degree of liability for the content on their sites, because mitigating the harm of dangerous speech is necessary where possible. He believes platforms’ use of artificial intelligence makes some degree of liability even more essential.

Particularly given the sheer volume of online speech to be reviewed in the internet age, Feerst said, clear-cut moderation standards are too messy and expensive to be viable.

Matt Gerst, vice president for legal and policy affairs at the Internet Association, and Shane Tews, nonresident senior fellow at the American Enterprise Institute, also said that while content moderation is complex, it is necessary. Scott McCollough, attorney at McCollough Law Firm, said large social media companies like Facebook are not the cause of every problem now in the national spotlight; rather, features of today’s society, such as the extreme prevalence of conflict, explain the current focus on social media.

Proposals for change

Rick Lane, CEO of Iggy Ventures, proposed that Section 230 reform should include a requirement for social media platforms to make very clear what content is and is not allowed on their sites. McCollough echoed this concern, saying that many moderation actions platforms take today do not appear consistent with the platforms’ stated terms and conditions, and that individual states should be able to examine such instances case by case to determine whether platforms apply their terms and conditions fairly.

Feerst highlighted the nuance of the issue, saying that people’s definitions of “consistent” are naturally subjective, but agreed with McCollough that users whose content is removed should be notified, along with the reasoning for the moderators’ action.

Lane also believes any Section 230 reform should rightfully include a requirement that platforms demonstrate a reasonable standard of care and moderate illegal and other extremely dangerous content on their sites. Tews generally agreed with Lane that such content moderation is complex, as she sees a separation between freedom of speech and illegal activity.

Gerst highlighted concerns from companies the Internet Association represents that government regulation arising from Section 230 reform would require widely varied platforms to standardize their operational approaches, diminishing innovation on the internet.

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. You can watch the November 17, 2021, event on this page. You can also PARTICIPATE in the current Broadband Breakfast Live Online event. REGISTER HERE.

Wednesday, November 17, 2021, 12 Noon ET — The Changing Nature of the Debate About Social Media and Section 230

Facebook is under fire as never before. In response, the social-networking giant has gone so far as to change its official name to Meta (as in the “metaverse”). What are the broader concerns about social media beyond Facebook? How will concerns about Facebook’s practices spill over into other social media networks, and to the debate about Section 230 of the Communications Act?

Panelists for this Broadband Breakfast Live Online session:

  • Scott McCollough, Attorney, McCollough Law Firm
  • Shane Tews, Nonresident Senior Fellow, American Enterprise Institute
  • Alex Feerst, Co-founder, Digital Trust & Safety Partnership
  • Rick Lane, CEO, Iggy Ventures
  • Matt Gerst, VP for Legal & Policy Affairs, Internet Association
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry.  Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Shane Tews is a nonresident senior fellow at the American Enterprise Institute (AEI), where she works on international communications, technology and cybersecurity issues, including privacy, internet governance, data protection, 5G networks, the Internet of Things, machine learning, and artificial intelligence. She is also president of Logan Circle Strategies.

Alex Feerst is a lawyer and technologist focused on building systems that foster trust, community, and privacy. He leads Murmuration Labs, which helps tech companies address the risks and human impact of innovative products, and co-founded the Digital Trust & Safety Partnership, the first industry-led initiative to establish best practices for online trust and safety. He was previously Head of Legal and Head of Trust and Safety at Medium, General Counsel at Neuralink, and currently serves on the editorial board of the Journal of Online Trust & Safety, and as a fellow at Stanford University’s Center for Internet and Society.

Rick Lane is a tech policy expert, child safety advocate, and the founder and CEO of Iggy Ventures. Iggy advises and invests in companies and projects that can have a positive social impact. Prior to starting Iggy, Rick served for 15 years as the Senior Vice President of Government Affairs of 21st Century Fox.

Matt Gerst is the Vice President for Legal & Policy Affairs and Associate General Counsel at Internet Association, where he builds consensus on policy positions among IA’s diverse membership of companies that lead the internet industry. Most recently, Matt served as Vice President of Regulatory Affairs at CTIA, where he managed a diverse range of issues including consumer protection, public safety, network resiliency, and universal service. Matt received his J.D. from New York Law School, and he served as an adjunct professor of law in the scholarly writing program at the George Washington University School of Law.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.

WATCH HERE, or on YouTube, Twitter and Facebook

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook

See a complete list of upcoming and past Broadband Breakfast Live Online events.


Big Tech

Experts Caution Against One Size Fits All Approach to Content Moderation

The cost of moderation is another reason some experts say standardized content moderation policies may not work for all platforms.


Former President Donald Trump sued Facebook, Twitter and Google earlier this year

WASHINGTON, November 10, 2021 – Some experts say they are concerned about a lack of diversity in content moderation practices across the technology industry because some companies may not be well-served – and could be negatively affected – by uniform policies.

Many say following what other influential platforms do, like banning accounts, could do more harm than good when it comes to protecting free speech on the internet.

Since former President Donald Trump was banned from Twitter and Facebook for allegedly stoking the January 6 Capitol riot, debate has raged about what Big Tech platforms should do when certain accounts cross the line from generally protected free speech into promoting violence, disobedience or other illegal behavior.

But experts at a Knight Foundation event on November 2 said that standardized content moderation policies imply a one-size-fits-all approach that would work across the tech spectrum. In fact, they said, it won’t.

Lawmakers have been calling for commitments from social media companies to agree to content and platform policies, including increasing protections for minors online. But representatives from Snapchat, TikTok, and YouTube who sat before members of the Senate Commerce Subcommittee on Consumer Protection last month did not commit to that.

Facebook itself has an Oversight Board that is independent of the company; the Board earlier this year upheld Trump’s ban from the platform but recommended the company set a standard for the penalty (Trump was banned indefinitely).

Among the proposed solutions for many platforms is a move toward decentralized content regulation, with more moderation delegated to individuals who are not employed by the platforms. Some have even suggested offering immunity from certain antitrust regulation as an incentive for platforms to implement decentralized structures.

Costs of content moderation

At an Information Technology and Innovation Foundation event on Tuesday, experts suggested a level of decentralization built on user tools, as opposed to plowing money into employing content moderators.

Experts noted the expense of hiring content moderators. Global social media platforms must hire employees able to moderate content in every language and dialect, and the accumulation of these hiring costs has the potential to be lethal to many platforms.

