Supreme Court Considers Twitter's Liability for Not Removing Terrorist Content
Many of the arguments in Twitter v. Taamneh hinged on specific interpretations of the Anti-Terrorism Act.
Em McPhie
WASHINGTON, February 22, 2023 — In the second of two back-to-back cases considering online intermediary liability, Supreme Court justices on Wednesday sought the precise definitions of two words — “substantial” and “knowingly” — in order to draw lines that could have major implications for the internet as a whole.
The oral arguments in Twitter v. Taamneh closely examined the text of the Anti-Terrorism Act, considering whether the social media platform contributed to a 2017 terrorist attack by hosting terrorist content and failing to remove ISIS-affiliated accounts — despite the absence of a direct link to the attack. The hearing followed Tuesday’s arguments in Gonzalez v. Google, a case stemming from similar facts but primarily focused on Section 230.
Many of Wednesday’s arguments hinged on specific interpretations of the ATA, which states that liability for injuries caused by international terrorism “may be asserted as to any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.”
Seth Waxman, the attorney representing Twitter, argued that Twitter should not be held liable unless it knew that it was substantially assisting the act of terrorism that injured the plaintiff.
“But [it’s] not enough to know that you’re providing substantial assistance to a group that does this kind of thing?” Justice Ketanji Brown Jackson asked.
“Of course not,” Waxman said.
Jackson was unconvinced, saying that she did not see a clear distinction.
Justice Amy Coney Barrett questioned whether providing a means of communication to individuals planning a terrorist attack would count as “substantial assistance.” Waxman replied that it would depend on how significant and explicit the communications were.
Clashing interpretations of Anti-Terrorism Act left unresolved
At one point, Justice Neil Gorsuch suggested that Waxman was misreading the law by taking the act of terrorism as the object of the “aiding and abetting” clause, rather than the person who committed the act.
The latter reading would help Twitter, the justice said, because the plaintiff would then have to prove that the company aided a specific person, rather than an abstract occurrence.
However, Waxman doubled down on his original reading.
“Are you sure you want to do that?” Gorsuch asked, drawing laughs from the gallery.
Waxman also pushed back against what he characterized as arguments “combining silence or inaction with affirmative assistance.” If Twitter stated that its platform should not be used to support terrorist groups or acts, he argued, the company should not be held liable for any terrorist content that appeared there, even if it did nothing at all to enforce that rule.
Justice Elena Kagan disagreed. “You’re helping by providing your service to those people with the explicit knowledge that those people are using it to advance terrorism,” she said.
Justices expressed concern over broad scope of potential liability
Unlike in the Gonzalez arguments, where the government largely supported increasing platform liability, Deputy Solicitor General Edwin Kneedler defended Twitter, saying that holding the company liable could result in hindering “legitimate and important activities by businesses, charities and others.”
Several justices raised similar concerns about the decision’s potentially far-reaching impacts.
“If we’re not pinpointing cause and effect or proximate cause for specific things, and you’re focused on infrastructure or just the availability of these platforms, then it would seem that every terrorist act that uses this platform would also mean that Twitter is an aider and abettor in those instances,” Justice Clarence Thomas told Eric Schnapper, the attorney representing the plaintiffs.
Schnapper agreed that this would be the case, but proposed setting reasonable boundaries around liability by using a standard of “remoteness in time, weighed together with the volume of activity.”
Justice Samuel Alito proposed a scenario in which a police officer tells phone companies, gas stations, restaurants and other businesses to stop serving individuals broadly suspected of committing a crime. Would those businesses have to comply, Alito asked, to avoid liability for aiding and abetting?
Schnapper did not answer directly. “That’s a difficult question,” he said. “But clearly, at one end of the spectrum… If you provide a gun to someone who you know is a murderer, I think you could be held liable for aiding and abetting.”