Google and other tech giants are not liable for terrorist content


The Supreme Court today ruled that tech companies are not liable for terrorist content posted on their platforms.

The lawsuit, filed by the family of a victim of a 2017 ISIS attack, argued that Twitter, Facebook, and Google should be held accountable for allowing the terrorist organization to use their platforms.

However, the court ruled unanimously that the lawsuit could not proceed.

Justice Clarence Thomas, writing for the unanimous court in Twitter v. Taamneh, clarified that social media platforms are not liable merely because nefarious actors use them for illegal and sometimes horrific purposes.

The court found that the argument put forward by the victim's family, that tech companies should be held liable for failing to prevent ISIS from using their platforms, lacked the connection between the companies and the terrorist attack necessary to establish liability.

Justice Ketanji Brown Jackson, in a brief concurring opinion, stressed that the court's ruling was narrow on key points. She suggested that other cases, with different allegations and records, might lead to different conclusions.

Implications of the Gonzalez v. Google case

Following the Twitter ruling, the Supreme Court considered Gonzalez v. Google, a lawsuit brought by the family of Nohemi Gonzalez, a 23-year-old American woman who was killed in the 2015 ISIS attacks in Paris.

The Gonzalez family argued that through its ownership of YouTube, Google aided ISIS recruitment by allowing the terrorist group to post videos on YouTube that incited violence and attempted to recruit would-be ISIS members.

The family also claimed that Google’s algorithms recommended ISIS videos to users.

The US Court of Appeals for the Ninth Circuit previously ruled that Section 230 of the Communications Decency Act of 1996, which shields technology companies from liability for content posted by users, protects such recommendations.

However, in light of its ruling in Twitter v. Taamneh, the Supreme Court vacated that decision and remanded the case for reconsideration.

The court refrained from deciding the scope of Section 230, suggesting that question is best left to Congress or a future case.

Some members of Congress are keen on reforming Section 230, believing it offers too much protection to tech giants.

The office of Senator Mark Warner, a vocal critic of Section 230 and supporter of reforming it, provided a statement to Search Engine Journal on the decision in Gonzalez v. Google.

He calls Section 230 outdated, arguing it gives big tech companies a "get out of jail free" card.

“For years I have been saying that Congress must take action to address the broad protections that Section 230 affords technology companies. This outdated law has outlived its usefulness, giving the biggest platform companies a ‘get out of jail free’ card as their websites are used by scammers, harassers, and violent extremists.”

However, Senator Warner makes it clear that he does not see Section 230 reform as opening the door to massive liability claims against platform companies.

“Reforming Section 230 does not mean that platforms will automatically face massive liability claims; victims will still have to prove their case in court.”

In summary

Taken together, these cases highlight the ongoing debate about tech companies' responsibility for moderating user-generated content and the extent to which they can be held liable for harmful content shared on their platforms.

The Supreme Court’s rulings suggest that, for the time being at least, a direct link between the actions of tech companies and certain acts of terrorism is required to establish legal liability.

Still, the court’s comments suggest that different circumstances could lead to different outcomes.


Sources: SCOTUS

Featured image created by the author using Midjourney.