What a law designed to protect the internet has to do with abortion
The United States Supreme Court unleashed a political earthquake when it overturned Roe v. Wade in June 2022, reversing nearly fifty years of precedent establishing a constitutional right to abortion.
After the decision, red states moved quickly to ban or severely limit access to the procedure. This made the virtual sphere uniquely important for people seeking information about abortion, especially those living in states that have outlawed the procedure with few or no exceptions.
Google searches for abortion medications increased by 70% in the month following the court's ruling. People flocked to social media platforms and websites with resources about where and how to end a pregnancy, pay for an abortion or seek help obtaining an abortion out of state.
Despite state laws criminalizing abortion, these digital spaces are legally protected from liability for hosting this kind of content. That's thanks to Section 230 of the 1996 Communications Decency Act, the landmark provision whose 26 words are often credited with creating the internet as we know it. Under Section 230, websites of all kinds are protected from lawsuits over material that users post on their platforms. This legal shield allows sites to host speech about all kinds of things that might be illegal, abortion included, without worrying about being sued.
But the future of Section 230 is on shaky ground. Next month, the U.S. Supreme Court will hear oral arguments in a case that challenges the scope of the landmark internet law. The Court's decision could have sweeping consequences for digital speech about abortion and reproductive health in a post-Roe America.
When armed ISIS assailants staged a series of attacks in central Paris in November 2015, an American college student named Nohemi Gonzalez was among the 130 people who lost their lives. Her family has since taken Google (the owner of YouTube) to court. Their lawyers argue that the tech giant aided and abetted terrorism by promoting YouTube videos featuring ISIS fighters and other material that could radicalize viewers and make them want to carry out attacks like the one that killed Nohemi. Central to the case is YouTube's recommendation algorithm, which feeds users a never-ending stream of videos in an effort to keep them hooked. Independent research has shown that the algorithm tends to promote videos that are more "extreme" or shocking than what a person searched for in the first place. Why? Because this kind of material is more likely to capture and sustain users' attention.
Section 230 protects Google from legal liability for the videos it hosts on YouTube. But does it protect Google from legal liability for recommending videos that could inspire a person to join a terrorist group and commit murder? That is the central question of Gonzalez v. Google. If the Supreme Court decides that the legal shield of Section 230 does not apply to the recommendation engine, the outcome could affect all kinds of videos on the platform. Any video that could be illegal under state laws — like abortion-related content in the post-Roe era — could put the company at risk of legal liability and would probably cause Google to more proactively censor videos that might fall afoul of the law. This could end up making abortion and reproductive health-related information much harder to access online.
If this all sounds wonky and technical, that’s because it is. But the Court’s decision has the potential to “dramatically reshape the internet,” according to Eric Goldman, a professor at California’s Santa Clara University School of Law specializing in internet law.
Algorithmic systems are deeply embedded in the architecture of online services. Among other things, websites and social media platforms use algorithms to recommend material to users in response to their online activity. These algorithmic recommendations are behind the personalized ads we see online, the recommended videos and accounts to follow on social media sites and the results that appear when we use search engines. They create a user's newsfeed on social media platforms like Facebook and Twitter. They have become a core feature of how the internet functions.
WHAT ARE THE STAKES IN A POST-ROE AMERICA?
If the Supreme Court rules in the plaintiffs' favor, it could open up a vast world of possible litigation, prompting websites and platforms to move assertively to take down content that could put them at legal risk, including speech about abortion care and reproductive health. Platforms would then face the threat of litigation for recommending content that violates state laws, including, in thirteen states, laws against abortion.
“That’s going to dramatically affect [the] availability of abortion-related material because, at that point, anything that a service does that promotes or raises the profile of abortion-related material over other kinds of content would no longer be protected by Section 230, would be open for all these state criminal laws, and services simply can’t tolerate that risk,” Goldman explained.
In this scenario, technology companies could not only be exposed to lawsuits but could even find themselves at risk of criminal charges for algorithmically recommending content that runs afoul of state abortion bans. One example is Texas' anti-abortion "bounty" law, SB 8, which deputizes private citizens to sue anyone who "aids or abets" another person seeking an abortion. If the Court decides to remove Section 230's shield for algorithmic amplification, websites and platforms could be sued for recommending content that helps a Texas resident obtain an abortion in violation of SB 8. Most sites would likely choose to play it safe and simply remove any abortion-related speech that could expose them to legal or criminal risk.
The abortion information space is just one realm where this could play out if the Court decides that Section 230’s protections do not apply to algorithmic promotion of content. Anupam Chander, a law professor at Georgetown University who focuses on international tech regulation, explained: “Making companies liable for algorithmically promoting speech when they haven’t themselves developed it will lead to the speech that is most controversial being removed from these online services.”
Goldman had similar concerns. “We’ve never had this discussion about what kind of crazy things could a state legislature do if they wanted to hold services liable for third-party content. And that’s because Section 230 basically takes that power away from state legislatures,” he said. “But the Supreme Court could open that up as a new ground for the legislatures to plow. And they’re going to plant some really crazy stuff in that newly fertile ground that we’ve never seen before.”
Consider the #MeToo movement. Section 230 protects platforms against defamation lawsuits for hosting content alleging sexual harassment, abuse or misconduct. Without the law’s shield, the movement could have had a different trajectory. Platforms may have taken down content that could have exposed them to lawsuits from some of the powerful people who were subjects of allegations.
"That kind of speech, which we have seen the internet empower over the last decade in ways that have literally reshaped society, would lead to the kind of liability concerns that would mean that it would be suppressed in the future," Chander added. "So, when someone claims that Harvey Weinstein assaulted them, companies are in a difficult position, having to assess whether or not they can leave that up when Harvey Weinstein's lawyers might be sending cease-and-desist letters and saying, 'we're going to sue you for defamation.'"
Proponents of Section 230, who have long argued that changing or eliminating the law would end up disproportionately censoring the speech of marginalized groups, are hoping to avoid this scenario. But it’s hard to predict how the Supreme Court justices will rule in this case. Section 230 is one of the rare issues in contemporary American politics that doesn’t map neatly onto partisan or ideological lines. As I reported for Coda in 2021, conservative and liberal politicians alike have taken issue with Section 230 in recent years, introducing dozens of bills seeking to change or eliminate it. Both U.S. President Joe Biden and former president Donald Trump have called for the law to be repealed.
“This is not just a left-right issue,” Chander explained. “It has this kind of strange bedfellows character. So I think there’s a real possibility here of an odd coalition both from the left and the right to essentially rewrite Section 230 and remove much of its protections.”
If the Supreme Court decides that platforms are legally on the hook for their recommendation algorithms, it may become harder for people seeking abortions to come across the information they need, say, in a Google search or on a social media platform like Instagram, as those companies will probably take down (or geoblock) any content that could put them at legal risk. It feels almost impossible to imagine this scenario in the U.S., where we expect to find the world at our fingertips every time we look at our phones. But that reality has been constructed, in large part, on the shoulders of Section 230. Without it, the free flow of information we have come to expect in the digital era may become a relic of the time when abortion was a constitutional right and information about it was accessible online. The Supreme Court's decision on this tech policy case could, once again, turn back the clock on abortion rights.