
Big Tech

Big Tech’s Response to Coronavirus: Face Masks, Hiring Binges, Free Web Sites and Cash Donations


Photo collage of Jeff Bezos, Tim Cook, Mark Zuckerberg and Sundar Pichai

April 2, 2020 – Big technology companies have been closely scrutinized by legislators and the executive branch in Washington for at least two years now. But this “techlash” is largely forgotten in the wake of the coronavirus pandemic.

Whether the future looks back on big tech as a savior or a vulture depends much on how these companies – particularly Amazon, Apple, Facebook and Google – act and react in the coming weeks and months.

To see how they have been acting since the pandemic emerged, Broadband Breakfast has assembled this inventory of their actions since the significance of the novel coronavirus and the COVID-19 disease it causes came to light.

A timeline of how the big four of big tech have responded to coronavirus

Amazon has made its software largely available to governments, schools, and health workers and has donated several million dollars to fighting the epidemic. The company also seems to have pounced on a suddenly vulnerable economy, announcing that it will hire 100,000 new warehouse workers while other companies began mass layoffs. And it has not escaped criticism regarding its handling of those workers’ health and safety.

In general, Apple seems to have had a single-minded obsession with donating as many masks as possible.

Facebook has donated several hundred million dollars to aspects of the coronavirus response, ranging from combatting misinformation to donating ad space to health authorities. Its newly unveiled Community Help feature, which connects Facebook users offering help with those requesting it, is untested but widely anticipated.

Of the big four, Google has donated the most to address the coronavirus, with a contribution of $800 million for small businesses, governments, and health workers. It has also suffered in the press after getting caught in the crossfire between the Trump administration and journalists over the coronavirus website affair.

Amazon

March 16th: Amazon announces it will hire 100,000 employees amid pandemic, according to the New York Times.

March 18th: The Atlantic reports that Amazon confirmed the first case of coronavirus in an American warehouse, in Queens, New York.

March 20th: Amazon announces it will reduce the quality of its streaming service Amazon Prime Video in Europe, following in YouTube’s footsteps, according to The Verge.

March 20th: Amazon donates $20 million to the research and development of coronavirus testing, the company announces in a blog post.

March 22nd: Trump announces during his daily coronavirus task force press conference that Amazon, as part of a consortium comprising Google, Microsoft, and IBM, will offer hundreds of petaflops of computing power for analysis projects in epidemiology, bioinformatics and molecular modeling, according to TechCrunch.

March 25th: Amazon provides its Amazon Web Services “cloud technologies and technical expertise” to help the World Health Organization aggregate epidemiological data.

March 25th: Amazon offers its Future Engineer collection of computer science literacy courses to schools.

March 27th: Amazon donates $4 million to British Red Cross and several British government institutions.

March 28th: Amazon donates its AWS infrastructure technology to help Boston Children’s Hospital track the spread of coronavirus through the use of crowdsourcing maps.

March 30th: Amazon grants $5 million to 400 Seattle small businesses.

March 30th: Amazon provides 73,000 meals to medically vulnerable residents in Seattle in conjunction with local catering business Gourmondo.

March 30th: Amazon remotely upgrades Alexa to answer the question “Alexa, what do I do if I think I have coronavirus?”

March 31st: Amazon fires warehouse worker who staged a walkout at a Staten Island warehouse. New York Attorney General Letitia James, who is already leading an investigation of Facebook with eight other state attorneys general, calls for an investigation.

April 1st: Amazon formally recommends: “All corporate office employees who work in a capacity that can be done from home are recommended to do so through April 24.”

April 2nd: Amazon announces it will release a bevy of free webinars “on a variety of remote learning topics,” available beginning April 6th.

April 2nd: Amazon donates infrastructure technology that will allow the aggregation of information across formerly siloed British National Health Service departments. They will share data such as occupancy levels and wait times at specific hospitals.

Apple

Sometime in early March: Apple issues a work-at-home order to its employees, loosening its notorious policy of secrecy surrounding unreleased products, such as having designers work in rooms with blacked-out windows, according to a report by Bloomberg.

March 13th: Apple closes all retail stores outside of China, Tim Cook announces in a statement to the public. In the same letter, Apple says its donations to coronavirus-related efforts have reached $15 million.

March 19th: Tim Cook announces via Twitter that Apple will be donating an undisclosed amount of money to Protezione Civile, Italy’s official emergency response task force for the coronavirus.

March 20th: 9to5Mac is the first to notice Apple’s unannounced decision to downgrade Apple TV+ streaming resolution for its European customers.

March 25th: Tim Cook announces via Twitter that Apple has sourced 10 million masks for the U.S. and millions more for Europe.

March 27th: Apple releases COVID-19 Screening website in conjunction with the CDC and the White House that tells users about the disease, testing, and what steps they can take.

April 1st: Apple donates $20 million to China’s coronavirus efforts, Tim Cook announces through Weibo, China’s equivalent of Twitter.

April 1st: Apple donates 2 million masks to the State of New York, according to a tweet by Governor Andrew Cuomo.

April 2nd: Apple teams up with actor Leonardo DiCaprio to raise $15 million for food charity according to Business Insider. As of this writing, they have already raised $12 million.

Facebook

January 30th: Facebook announces a more aggressive stance on removing misinformation about the coronavirus after the World Health Organization declares the outbreak a public health emergency of international concern. Facebook also begins offering free ad credits to organizations that advertise responsibly about the coronavirus.

February 26th: Facebook begins promoting links to WHO at the top of search results on coronavirus.

March 13th: CEO Mark Zuckerberg announces through a Facebook post that he will match $20 million worth of donations to WHO and the Centers for Disease Control and Prevention.

March 16th: Facebook, along with Google, Microsoft, and other tech companies, releases a vague statement promising to help “people stay connected,” “[elevate] authoritative content,” and “[share] critical updates,” according to The Verge.

March 17th: Facebook donates $1 million to the International Fact-Checking Network and $1 million in grant money to local news organizations across the U.S. and Canada to support their coverage of coronavirus.

March 17th: Facebook restores all posts that were incorrectly flagged for spam following a report by The Verge, announced through a Twitter post by Guy Rosen, Facebook’s vice president of integrity.

March 18th: Facebook doubles server capacity for WhatsApp.

March 18th: Chief Operating Officer Sheryl Sandberg announces through a Facebook post that the company will offer $100 million in cash grants and ad credits for “up to 30,000 eligible small businesses in over 30 countries where we operate.”

March 18th: Zuckerberg hosts press call where he announces Coronavirus Information Center feature in the Facebook News Feed. He also announces that Facebook will give governments and emergency services free access to work collaboration tool Facebook Workplace for 12 months.

Sometime between March 19th and March 23rd: Facebook data scientist Ranjan Subramanian releases an internal report, acquired by The New York Times, showing a huge uptick in news clicks through the Facebook News Feed, which had been declining for years. The uptick in news clicks was specifically for “high-quality” and local news.

March 19th: Facebook sends its content reviewers home, shifting content review work from contractors to full-time employees.

March 22nd: Zuckerberg announces through a Facebook post the donation of 720,000 masks the company had reserved for its employees in case of forest fires. Although Facebook has only 45,000 employees, a Facebook spokesman said that “the masks were from our emergency disaster reserve and many had been acquired due to the recent dangerous California wildfires. As recommended, Facebook has emergency supplies like food, water, masks and other supplies on hand like many other companies.”

March 23rd: Facebook announces it will temporarily downgrade video streaming quality on Facebook and Instagram in Europe and Latin America, according to Reuters.

March 23rd – March 26th: Facebook co-hosts a COVID-19 Global Hackathon with the aim to foster the development of software that addresses “some of the challenges related to the current coronavirus pandemic.” Winners will be announced on April 10.

March 26th: Facebook launches Get Digital, an online resource that helps teach kids how to responsibly use the internet.

March 26th: Facebook launches its “Messenger Coronavirus Community Hub,” a webpage that explains how to get the most out of Facebook Messenger.

March 29th: Facebook invests $100 million in news industry to support publishers “at a time when advertising revenue is declining.”

March 30th: Facebook donates $25 million to support healthcare workers.

March 31st: Facebook launches its Community Help feature, making it easier for users to both request and offer services such as delivering groceries or providing transportation.

Google

March 3rd: Google makes its video-conferencing tool Hangouts Meet available for free for G Suite Users and schools and expands its hosting maximum to 250 participants.

March 13th: Trump blindsides Google by announcing a thorough coronavirus screening website. As of March 13th, Google has developed no such thing.

March 15th: Sundar Pichai, CEO of Google’s parent company Alphabet, clarifies the situation through a blog post, insisting that Google is creating a straightforward database of resources.

March 20th: Google subsidiary YouTube announces it will downgrade its video streaming resolution in Europe following a Twitter plea from European Commissioner Thierry Breton.

March 21st: Google rolls out the coronavirus information website made notorious by Trump’s surprise and confusing announcement in the White House Rose Garden. The site hardly resembles the one Trump and federal coronavirus response coordinator Dr. Deborah Birx described using a big, fictional chart.

March 21st: Google redesigns its search page to highlight content from health authorities like the CDC and WHO.

March 22nd: Trump announces during his daily Coronavirus Task Force press conference that Google, as part of a consortium comprising Amazon, Microsoft, and IBM, will offer hundreds of petaflops of computing power for analysis projects in epidemiology, bioinformatics and molecular modeling, according to TechCrunch.

March 24th: YouTube announces it will be lowering streaming quality around the world according to a report from Bloomberg.

March 27th: Pichai announces $800 million donation to small businesses, governments, and health workers via blog post.

April 2nd: Google donates $6.5 million to fund fact-checking organizations in an effort to combat misinformation.

April 2nd: Google creates a public health dataset built on the back of its BigQuery data warehouse and opens the dataset to researchers.

Section 230

Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230

The Justice Against Malicious Algorithms Act seeks to target large online platforms that push harmful content.


Rep. Anna Eshoo, D-California

WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities for companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.

The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides legal liability protections to companies for the content their users post on their platform – to remove that shield when the platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release, which noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.

The legislation is relatively narrow in its target: algorithms that rely on the user’s personal history to recommend content. It won’t apply to search features or algorithms that do not rely on that personalization and won’t apply to web hosting or data storage and transfer.

Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it terminates.

Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.

“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”

The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”

From Haugen testimony to legislation

Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.

“The reality is that we’ve seen from repeated documents in my disclosures is that Facebook’s AI systems only catch a very tiny minority of offending content and best content scenario in the case of something like hate speech at most they will ever get 10 to 20 percent,” Haugen testified.

Haugen was catapulted into the national spotlight after she revealed herself on the television program 60 Minutes to be the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission that reportedly showed Facebook knew about the mental health harm its photo-sharing app Instagram has on teens but allegedly ignored the findings because they inconvenienced its profit-driven motive.

Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing an Instagram version for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended the app’s development and said it was making changes to its apps to “nudge” users away from content that may be harmful to them.

Haugen’s testimony versus Zuckerberg’s Section 230 vision

In his testimony before the House Energy and Commerce committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”

This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies that fail to create systems for removing violent and hateful content proportional in size and effectiveness to their platform’s scale. In other words, larger sites would face more regulation and smaller sites less.

Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”

Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content that the platform is paid to host and for platforms that allow the spread of vaccine misinformation.

Zuckerberg said companies shouldn’t be held responsible for individual pieces of content that could or would evade the systems in place, so long as the company has demonstrated that it has “adequate systems to address unlawful content.” That, he said, is predicated on transparency.

But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.

An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.

Where the parties agree

Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.

Both have also called for a third-party body to be created by the legislature which provides oversight on platforms like Facebook.

Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms, and research, and that the information be made available to the public, scholars and researchers to interpret, with adequate privacy protection and anonymization in place. Zuckerberg asks that the body take into account the size and scope of the platforms it regulates, that its practices be “fair and clear,” and that unrelated issues “like encryption or privacy changes” be dealt with separately.

With reporting from Riley Steward


Big Tech

OECD Ratifies Global 15% Digital Tax Rate, Aims For 2023 Implementation

The OECD finalized an earlier agreement that would impose a 15% tax on companies operating in 136 member nations.


US Treasury Secretary Janet Yellen.

WASHINGTON, October 11, 2021 – The Organization for Economic Cooperation and Development on Friday finalized an agreement to levy a 15 percent tax rate on digital multinational businesses, like Amazon, Apple, Google, and Facebook, starting in 2023.

The ratification of the tax rate comes after years of negotiations and after individual countries have proposed their own tax systems to keep up with internet businesses that have long skirted the tax laws of nations they operate in because they don’t necessarily have a physical presence inside those borders. The Liberal Party in Canada, for example, had proposed a 3 percent tax on revenues obtained inside the country, while Britain, France, Italy, and Spain had been contemplating digital sales taxes of their own.

The 15 percent tax rate has been signed by 136 member nations, all OECD and G20 countries, out of 140 states (Kenya, Nigeria, Sri Lanka, and Pakistan did not join) and finalizes a July political agreement to reform international tax rules. The United States had proposed the 15 percent global corporate tax rate earlier this year.

Hungary and Ireland, the latter of which is a corporate tax haven for companies like Apple and Google, were two of the last holdouts. Hungary agreed to join Friday after it was guaranteed a ten-year rollout period for the regulation, and Ireland agreed Thursday after receiving guarantees that the rate would not be subsequently increased.

The new tax rate is expected to generate US $150 billion annually for the countries involved and targets companies with revenues of over 750 million Euros. “The global minimum tax agreement does not seek to eliminate tax competition, but puts multilaterally agreed limitations on it,” the OECD said, adding the tax will not only stabilize the international tax system but also provide companies with more certainty as to their obligations.

The regulation would be the first foundational cross-border corporate tax rate regulatory change in over a century. Some are skeptical of President Joe Biden’s and Congress’s ability to ratify the agreement. The OECD hopes to sign a multilateral convention by 2022 and implement the reform by 2023.

The final agreement will be delivered to the G20 finance ministers meeting in Washington, D.C., on Wednesday, then taken to the G20 Leaders’ Summit in Rome at the end of this month, according to an OECD press release.

The United States had taken a defensive posture under former President Donald Trump, threatening tariffs if European nations, particularly France, decided to tax its big homegrown corporations.

French Finance Minister Bruno Le Maire said that the agreement, “opens the path to a true fiscal revolution.” US Treasury Secretary Janet Yellen said that the OECD has “decided to end the race to the bottom on corporate taxation,” referring to the practice of attracting large companies to headquarter in one’s country through purposefully incentivized lower tax rates.


Social Media

Congress Must Force Facebook to Make Internal Research Public, Whistleblower Testifies

Frances Haugen testifies before a Senate subcommittee studying protecting kids online after revealing herself as the Facebook whistleblower.


Facebook whistleblower Frances Haugen testifies in front of Senate committee on October 5.

WASHINGTON, October 5, 2021 – The former Facebook employee who outed herself as the whistleblower who leaked documents to the Wall Street Journal showing Facebook knew its photo-sharing app Instagram contributed to harming the mental health of kids told a Senate committee that the company’s alleged profit-driven motives mean its internal research cannot be kept behind closed doors.

Frances Haugen testified Tuesday in front of the Senate Subcommittee on Consumer Protection, Product Safety and Data Security, which is looking into protecting kids online, after identifying herself Sunday on the television program 60 Minutes as the person who gave the Journal and the Securities and Exchange Commission documents showing the company pressing ahead with development of a kids version of Instagram despite knowing the mental health impact its apps have on that demographic. (Facebook has since halted development of the kids app following the Journal story and calls from lawmakers to suspend it.)

“We should not expect Facebook to change. We need action from Congress,” Haugen said Tuesday.

That action, she recommended, includes forcing Facebook to make all future internal research fully public because the company cannot be trusted to act on its own commissioned work.

Haugen noted that the reason the company did not — and does not — take such action, which could include preemptively shutting down development of its Instagram for kids product, is because the company is allegedly driven by a profit-first model.

“Facebook repeatedly encountered conflicts between its own profits and our safety. Facebook consistently resolved those conflicts in favor of its own profits,” alleged Haugen, who now considers herself an advocate for public oversight of social media.

“The result has been a system that amplifies division, extremism, and polarization — and undermining societies around the world. In some cases, this dangerous online talk has led to actual violence that harms and even kills people. In other cases, their profit optimizing machine is generating self-harm and self-hate — especially for vulnerable groups, like teenage girls. These problems have been confirmed repeatedly by Facebook’s own internal research.”

Despite calls to modify Section 230 of the Communications Decency Act, which shields large tech platforms from legal liability for what their users post, Haugen said that such changes – along with tweaks to outdated privacy protections – won’t be enough.

Facebook has for months touted that it removes millions of groups and accounts that violate its community guidelines on hate speech and inciting violence. But Haugen alleges that despite claims that it actively makes its platforms safer, the company actually takes down only three to five percent of those threats.

Asked by Senator Ben Ray Lujan, D-New Mexico, if Facebook “ever found a feature on its platform harmed its users, but the feature moved forward because it would also grow users or increase revenue,” Haugen said yes, alleging the company prioritized ease of resharing over the feature’s susceptibility to growing “hate speech, misinformation or violence incitement,” even though the feature would only “decrease growth a tiny, little amount.”

She also alleged that those directions came from the head of the company himself, Mark Zuckerberg, who allegedly chose arbitrary or vague “metrics defined by Facebook, like meaningful social interactions over changes that would have significantly decreased misinformation, hate speech and other inciting content.”

Facebook’s troubles, up to this point

Facebook has already been the target of Washington’s ire for months now. It has been cited as an alleged enabler of the January 6 Capitol Hill riot that sought to stop the transition to a Joe Biden presidency, despite the platform banning former president Donald Trump. Its platform had also been blamed for allowing the spread of information that has led to violence in parts of the world, including genocide in Myanmar.

A number of public interest groups have claimed that the platform suppresses stories from progressive news outlets, censors information that conflicts with its own interests, and uses algorithms that deliver the same kinds of information to people so they are not exposed to different viewpoints.

In 2018, Facebook made worldwide news after reports in the Guardian and the New York Times found nearly 100 million Facebook profiles were harvested by a company called Cambridge Analytica, which used the data to build profiles of people and serve them material designed to sway them in a political direction.

Federal regulators have already been looking to deal with Facebook and other Big Tech companies, one clear agenda item of the Biden administration. The White House has installed Amazon critic Lina Khan as head of the Federal Trade Commission, which recently filed a monopoly complaint against Facebook in court, and has appointed other figures, including Google critic Jonathan Kanter, to the Department of Justice’s antitrust division.

Facebook’s week has gone from bad to worse. Haugen, a former Facebook product manager and Harvard MBA graduate, testified in a hearing titled “Protecting Kids Online” before the Subcommittee on Consumer Protection, Product Safety, and Data Security Hearing on Tuesday. Previous opposition to Facebook’s plans to expand its products to minors has come from external parties like public interest groups and Congress.

