
The Supreme Court has sided with social media companies in two lawsuits over whether the companies can be held liable for terrorist-related content posted on their platforms.
The court’s first ruling came in Twitter v. Taamneh. The case centered on the death of Jordanian citizen Nawras Alassaf, who was killed in a 2017 ISIS terrorist attack in Istanbul. After his death, Alassaf’s family sued several social media companies, including Twitter, arguing that by failing to moderate terrorist-related content posted by users on their platforms, the companies had “aided and abetted” the attack in violation of federal antiterrorism law.
In a unanimous decision, the Supreme Court ruled that Twitter cannot be held liable for “aiding and abetting terrorism” solely because the content was posted on its platform. Writing for the court, Justice Clarence Thomas concluded that the “plaintiffs’ allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack.”
The Supreme Court also disposed of a similar case, Gonzalez v. Google, which accused Google of “aiding and abetting” terrorism because YouTube’s recommendation algorithms surfaced videos that promote terrorism. Google argued that it was protected by Section 230. Rather than rule on that question, the court sent the case back to the lower court in a brief unsigned opinion: “We decline to address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief. Instead, we vacate the judgment below and remand the case for the Ninth Circuit to consider plaintiffs’ complaint in light of our decision in Twitter.”
These two Supreme Court decisions are major victories for tech companies, but they left many observers dissatisfied, as the court declined to weigh in directly on the scope of Section 230.
Section 230 of the Communications Decency Act states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Essentially, this means that social media companies are not liable for the content their users post because they are considered platforms and not publishers.
Think of it this way: if The New York Times published an article it knew to be false, the newspaper could be subject to a libel lawsuit. Likewise, if it published an article calling for violence or crime, it could be held liable for aiding and abetting.
Now, imagine a group of individuals planning a crime over the phone and by email, then carrying it out. Could the phone carrier, Verizon for example, be held liable for aiding and abetting because the criminals used its service? Could Google be held liable because the plans were written out over Gmail?
In these scenarios, The New York Times could be held liable because it is a publisher, while Verizon and Google could not, because they are simply communication platforms. Justice Thomas drew a similar analogy in the Supreme Court’s decision.
“It might be that bad actors like ISIS are able to use platforms like defendants’ for illegal – and sometimes terrible – ends,” Thomas explained. “But the same could be said of cell phones, email, or the internet generally.”
This is the distinction Section 230 draws: social media users own the content they post and are responsible for what they say and do, while the tech companies hosting that content do not own it and cannot be held liable for it.
Still, legal experts continue to debate whether the law should remain in place as the internet grows. Many argue that social media platforms and big tech companies should have a legal obligation to regulate the content users post on their sites. Section 230’s silence on algorithms has also drawn scrutiny, leaving many to ask whether a company’s recommendation algorithms are protected in the same way as the user content they surface. By declining to make a definitive ruling on the scope of Section 230, the Supreme Court has left the current policy in place.