May 8, 2020 – Eric Yuan came up with the idea for Zoom as a student while taking 10-hour train rides to visit his girlfriend in China. In 2011 he left Cisco Webex to found Zoom in San Jose, California, with the mission “to make video communications frictionless.” Zoom earned a billion-dollar valuation by 2017 and went public in 2019 in one of the most successful IPOs of that year.
And then the coronavirus appeared in Zoom’s waiting room, and it was not to be ejected from the chat.
As Americans have entered a world riddled with tele-prefixes, Zoom, whether it has wanted to or not, has entered the pantheon of Tide and Alexa to become a household name. By April 1, the number of Zoom’s daily participants skyrocketed from 10 million in 2019 to 200 million.
Indeed, Zoom became the overnight king of an increasingly important industry thrust into new prominence by the pandemic: videoconferencing.
As hundreds of millions of Americans and billions of global citizens adjust to new norms for work, medicine, and education, Zoom has emerged as the go-to application, cutting commute times to zero.
What is Zoom and what propelled it to widespread name recognition? It’s not Webex
The most likely answer to what propelled Zoom to prominence comes from its mission statement: “to make video communications frictionless.”
Rachna Sizemore Heizer, a member at large of the Fairfax County Public Schools Board, highlighted simplicity as an advantage in her initial decision to use Zoom for her school board meetings. “It’s easier to understand if you’re new to the stuff,” Heizer said.
Cynthia Jelke, 18, a sophomore at Tufts University, found Zoom essential to her success. “I genuinely wouldn’t be able to do my education without it,” Jelke said.
Even the Federal Communications Commission, the agency tasked with improving communications, drew criticism on Tuesday for using Cisco Webex video conferencing technology to launch its Rural Digital Opportunity Fund auction webinar.
The web seminar, designed to teach applicants how to apply for more than $20 billion worth of funds, ended up turning away business and media leaders due to clunky audio-capacity limitations.
Commentators in the chat box complained in real time of the frustrations they faced. User “Natee” chirped at 4:10 p.m. on that webinar: “Webex is no good. That is why the original Webex developer created Zoom.”
Workplaces and schools have taken to Zoom
The workforce has also taken quickly to the interface. Patrick McGrath, a software engineer from Chicago, praised Zoom for its Whiteboarding feature, which allows users to sketch concepts in a creative and expressive way. “It allows for collaboration,” McGrath said in an interview with Broadband Breakfast.
Then there are the memes. Perhaps because Zoom resonated with teenagers, many of whom have had to use Zoom for school, it has become an endless generator for viral content and a hub for consolidating a shared experience.
Students from different colleges started saying that they all attend “Zoom University.” Zoom University T-shirt vendors began popping up online.
Zoom also offers the option to easily customize one’s background without a green screen, adding a touch of personalization that is reminiscent of social media.
The videoconferencing service has a “hotter brand” than other teleconferencing companies, Rishi Jaluria, a senior research analyst at D.A. Davidson, told The New York Times. “Younger people don’t want to use the older technology.”
Joshua Rush, 18, a high school senior in Los Angeles, told the Times: “Out of nowhere, I feel like Zoom has clout.”
The memes “help lighten the mood of being kicked out of your school,” Tufts sophomore Jelke told Broadband Breakfast.
If there was any doubt that Zoom had chiseled a frieze in the pantheon of pop culture, Saturday Night Live’s first virtual episode put that skepticism to rest.
“Live from Zoom, it’s Saturday Night Live,” announced the cast of SNL, who used Zoom for large swaths of its episode on April 11.
Tom Hanks, the host of the episode and a popular coronavirus survivor, had fun with the monologue, using video cuts and costumes to play different characters. The episode featured many playful jabs at the ubiquitous platform, and one sketch dedicated to Zoom profiled common videoconferencing personalities.
OK, so why is Zoom suffering?
Almost as quickly as “Zoom” has become a verb, “Zoombombing” has entered the national lexicon. Zoombombing occurs when a meeting’s join URL is left exposed, which, in the world of the internet, can happen in many different ways.
A prankster can then use this neglected link to crash a meeting and broadcast improper material, such as pornography or racist content. The FBI issued a warning about Zoombombing on March 30 — but that hasn’t curbed the rise of this new breed of troll.
#FBI warns of Teleconferencing and Online Classroom Hijacking during #COVID19 pandemic. Find out how to report and protect against teleconference hijacking threats here: https://t.co/jmMxyZZqMv pic.twitter.com/Y3h9bVZG30
— FBI Boston (@FBIBoston) March 30, 2020
The Anti-Defamation League had documented 21 instances of anti-Semitic Zoombombing as of April 6, targeting government meetings, schools and houses of worship.
Journalists Kara Swisher of The New York Times and Jessica Lessin of The Information were forced to shut down their Zoom webinar on feminism in tech on March 15, when trolls broke into their meeting and began broadcasting a shock video. A meeting of the Indiana Election Commission was interrupted by a video of a man masturbating.
The graphic examples don’t stop there:
Okay so someone started screensharing extremely graphic porn during the Lauv and Chipotle + Zoom hangout and it abruptly ended lol. Maybe these platforms need to be thoroughly tested first?
*blurred for obvious reasons* pic.twitter.com/9mBlQSia1U
— Kenneth Takanami (@exitpost) March 17, 2020
Asked about the issue of security, McGrath, the software engineer from Chicago, responded: “We have a definite team to take care of that… It’s totally because of the security concerns that have been going around.”
From Zoombombing to… other privacy and security concerns
Then there’s the issue of privacy.
As early as March 26, Vice reported that Zoom had been sharing its users’ data with Facebook without their knowledge.
The data shared included when the user opened the app, details on the user’s device such as its model, the time zone and city the user was connecting from, which phone carrier the user was using, and information that allowed third-party companies to target the user with advertisements.
And then there’s the issue of the Chinese server.
The University of Toronto’s Citizen Lab published a report showing that some Zoom user data is accessible by the company’s server in China, “even when all meeting participants, and the Zoom subscriber’s company, are outside of China,” the authors of the report wrote.
The Toronto lab also noted that Zoom’s arrangement of owning three China-based companies and employing 700 mainland Chinese software developers “may make Zoom responsive to pressure from Chinese authorities.” These vulnerabilities give the Chinese government a way to tap into Zoom calls, said Bill Marczak, a research fellow at Citizen Lab.
Zoom’s claim to offer end-to-end encryption was scrutinized by The Intercept and found to be false. The company was forced to backtrack and apologize in a blog post by Oded Gal, Zoom’s chief product officer:
“In light of recent interest in our encryption practices, we want to start by apologizing for the confusion we have caused by incorrectly suggesting that Zoom meetings were capable of using end-to-end encryption…. While we never intended to deceive any of our customers, we recognize that there is a discrepancy between the commonly accepted definition of end-to-end encryption and how we were using it. This blog is intended to rectify that discrepancy and clarify exactly how we encrypt the content that moves across our network.”
The attorney general of New York sent the company a letter questioning the privacy shortcomings that allow Zoombombing and its murky data-sharing agreement with Facebook. That was just one of 26 letters Zoom has received from state attorneys general.
A Zoom shareholder is suing the company for overstating its encryption capabilities. Even local school districts, such as the Fairfax County Public Schools, have deemed the technology unsafe and are experimenting with alternatives.
The Era of Self-Repair
Days after Vice’s report, Zoom removed the code that had shared user data with Facebook.
Zoom also began allowing users to deactivate the Chinese server. By April 25, any user who had not expressly opted to keep their data on the Chinese server was automatically removed from its data route.
Such an “opt-in” approach to data sharing is rare in the world of privacy.
And Zoom has been highly communicative about its blunders. Yuan has repeatedly posted blog updates informing users about security and about new, common-sense features, such as more prominent security settings and the ability to report users.
He has also used his blogs to draw attention to the tools that have always existed for dealing with trolls, such as good cyber hygiene and tutorials for using the Zoom Waiting Room to vet join requests.
You can ask Zoom anything, as long as it’s on Zoom
Most notably, Zoom has been hosting a series of weekly webinars since April 8 with Yuan himself, called “Ask Eric Anything.” He’s made himself as available as a CEO can be.
At one of the first of these webcasts, the majority of questions revolved around interface and troubleshooting, but some addressed security concerns.
For “the next 90 days,” Zoom will be “incredibly focused on enhancing our privacy and security,” promised Yuan.
See “Zoom CEO Eric Yuan Pledges to Address Security Shortcomings in ‘The Next 90 Days’,” Broadband Breakfast, April 20, 2020.
In fact, Zoom has branded itself around “The Next 90 Days,” during which it has committed to focusing solely on privacy- and security-related challenges.
Asked about the specifics of its efforts by Broadband Breakfast, a Zoom spokesperson said, “Together, I have no doubt we will make Zoom synonymous with safety and security.”
Zoom has also made a slew of conspicuous hires: Katie Moussouris, a cybersecurity expert who built bug bounty programs for Microsoft and the Pentagon; Lea Kissner, Google’s former head of privacy; and Alex Stamos, director of the Stanford Internet Observatory and Facebook’s former chief security officer.
During Stamos’ time at Facebook, he advocated greater disclosure around Russian interference on Facebook during the 2016 election. His insistence that Facebook do more created internal disagreements that eventually led to his departure.
“To successfully scale a video-heavy platform to such a size, with no appreciable downtime and in the space of weeks,” Stamos said in a blog post explaining his decision to temporarily leave Stanford and join Zoom, “is literally unprecedented in the history of the internet.”
He described the challenge as “too interesting to pass up.”
In the end, the problem that Zoom has faced isn’t specific to Zoom, but a human problem. The real challenge, as Stamos said, “is how to empower one’s customers without empowering those who wish to abuse them.”
Federal Trade Commission Will Likely Not Be Able to Implement Competition Rules, Panelists Say
Panelists at TechFreedom event said judiciary will prevent the FTC from developing proposed antitrust policies.
WASHINGTON, October 22, 2021 – The Federal Trade Commission’s attempts to use rulemaking authority to issue antitrust policy governing technology companies will be struck down in federal courts, said panelists participating in a TechFreedom event on Thursday.
Recently formed conservative majorities on the Supreme Court and other panels have expressed opposition to the idea that the FTC possesses such rulemaking authority, these panelists said.
Hence, unlike past benches, the current Supreme Court is likely to strike down FTC-issued binding rules.
Panelists highlighted former President Donald Trump appointees Brett Kavanaugh and Neil Gorsuch as justices who have opposed legal reasoning often used to permit FTC rulemaking.
Indeed, some panelists said early 20th Century legislation governing the FTC makes the case that the agency was created as an investigative body rather than a regulatory one.
Peter Wallison, senior fellow emeritus at the American Enterprise Institute, said that between five and six Supreme Court justices would ultimately vote to weaken precedents that allow for FTC rulemaking.
The Judiciary Committee of the House of Representatives recently advanced six antitrust bills that attempt to regulate the tech industry and foster greater competition, including the Ending Platform Monopolies Act and the Platform Competition and Opportunity Act.
FTC rules have taken on increased importance in terms of economic regulation due to the frequent inability of Congress to pass major legislation due to partisan gridlock. The FTC has proposed new procedures to ensure competition since Lina Khan was appointed as chair.
However, NERA Economic Consulting on Wednesday concluded that legislative proposals to regulate competition would impose costs of around $300 billion while impacting 13 additional American companies in the near term and more than 100 companies in the next decade.
Study author Christian Dippon contends that the legislation would limit American startup growth and international competitiveness while at the same time increasing costs for Americans.
Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230
The Justice Against Malicious Algorithms Act seeks to target large online platforms that push harmful content.
WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities for companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.
The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides legal liability protections to companies for the content their users post on their platform – to remove that shield when the platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release, which noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.
The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or algorithms that do not rely on that personalization, and it won’t apply to web hosting or data storage and transfer.
Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it terminates.
Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.
“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”
The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”
From Haugen testimony to legislation
Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.
“The reality is that we’ve seen from repeated documents in my disclosures is that Facebook’s AI systems only catch a very tiny minority of offending content and best content scenario in the case of something like hate speech at most they will ever get 10 to 20 percent,” Haugen testified.
Haugen was catapulted into the national spotlight after she revealed herself on the television program 60 Minutes to be the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission. Those documents reportedly showed that Facebook knew about the mental health harms its photo-sharing app Instagram inflicts on teens, but allegedly ignored them because addressing them would have hurt profits.
Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing a version of Instagram for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended the app’s development and said it was making changes to its apps to “nudge” users away from content they may find harmful.
Haugen’s testimony versus Zuckerberg’s Section 230 vision
In his testimony before the House Energy and Commerce committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”
This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies that fail to build systems for removing violent and hateful content that are proportional in size and effectiveness to the platform itself. In other words, larger sites would face more regulation and smaller sites fewer regulations.
Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”
Facebook previously pushed for FOSTA-SESTA, a controversial 2018 law that created an exception to Section 230 for advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content the platform is paid to carry and for allowing the spread of vaccine misinformation.
Zuckerberg said companies shouldn’t be held responsible for individual pieces of content which could or would evade the systems in place so long as the company has demonstrated the ability and procedure of “adequate systems to address unlawful content.” That, he said, is predicated on transparency.
But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.
An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.
Where the parties agree
Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.
Both have also called for a third-party body to be created by the legislature which provides oversight on platforms like Facebook.
Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms, and research, and that the information be made available to the public, scholars, and researchers to interpret, with adequate privacy protections and anonymization in place. Besides taking into account the size and scope of the platforms it regulates, Zuckerberg asks that the practices of the body be “fair and clear” and that unrelated issues “like encryption or privacy changes” be dealt with separately.
With reporting from Riley Steward
OECD Ratifies Global 15% Digital Tax Rate, Aims For 2023 Implementation
The OECD finalized an earlier agreement that would impose a 15% tax on companies operating in 136 member nations.
WASHINGTON, October 11, 2021 – The Organization for Economic Cooperation and Development on Friday finalized an agreement to levy a 15 percent tax rate on digital multinational businesses, like Amazon, Apple, Google, and Facebook, starting in 2023.
The ratification of the tax rate comes after years of negotiations, and after individual countries proposed their own tax systems to keep up with internet businesses that have long skirted the tax laws of the nations they operate in because they don’t necessarily have a physical presence inside those borders. The Liberal Party in Canada, for example, had proposed a 3 percent tax on revenues obtained inside the country, while Britain, France, Italy, and Spain had been contemplating digital sales taxes of their own.
The 15 percent tax rate has been signed by 136 member nations, all OECD and G20 countries, out of 140 states (Kenya, Nigeria, Sri Lanka, and Pakistan did not join) and finalizes a July political agreement to reform international tax rules. The United States had proposed the 15 percent global corporate tax rate earlier this year.
Hungary and Ireland, the latter of which is a corporate tax haven for companies like Apple and Google, were two of the last holdouts. Hungary agreed to join Friday after it was guaranteed a ten-year rollout period for the regulation, and Ireland agreed Thursday after receiving guarantees that the rate would not be subsequently increased.
The new tax rate is expected to generate US $150 billion annually for the countries involved and targets companies with revenues of over 750 million Euros. “The global minimum tax agreement does not seek to eliminate tax competition, but puts multilaterally agreed limitations on it,” the OECD said, adding the tax will not only stabilize the international tax system but also provide companies with more certainty as to their obligations.
The regulation would be the first foundational cross-border corporate tax rate regulatory change in over a century. Some are skeptical of President Joe Biden’s and Congress’s ability to ratify the agreement. The OECD hopes to sign a multilateral convention by 2022 and implement the reform by 2023.
The final agreement will be delivered to the G20 finance ministers meeting in Washington, D.C., on Wednesday, and then carried to the G20 Leaders’ Summit in Rome at the end of this month, according to an OECD press release.
The United States had been in a defensive posture under former President Donald Trump, who threatened tariffs if European nations, particularly France, decided to tax its big homegrown corporations.
French Finance Minister Bruno Le Maire said that the agreement, “opens the path to a true fiscal revolution.” US Treasury Secretary Janet Yellen said that the OECD has “decided to end the race to the bottom on corporate taxation,” referring to the practice of attracting large companies to headquarter in one’s country through purposefully incentivized lower tax rates.