Updated October 20, 2023: Removed two sentences for clarity. 

Are you a young person fighting back against bad bills like KOSA? Become an EFF member at a new, discounted Neon membership level specifically for you--stickers included! 

For the past two years, Congress has been trying to revise the Kids Online Safety Act (KOSA) to address criticism from EFF, human and digital rights organizations, LGBTQ groups, and others that the bill’s core provisions will censor the internet for everyone and harm young people. All of those changes fail to solve KOSA’s inherent censorship problem: As long as the “duty of care” remains in the bill, it will still force platforms to censor perfectly legal content. (You can read our analyses here and here.)

Despite never addressing this central problem, some members of Congress are convinced that one new change will keep KOSA from censoring the internet: the bill’s liability is now theoretically triggered only by content that is recommended to users under 18, not by content they specifically search for. But that’s still censorship—and it fundamentally misunderstands how search works online. 


As a reminder, under KOSA, a platform would be liable for not “acting in the best interests of a [minor] user.” To do this, a platform would need to “tak[e] reasonable measures in its design and operation of products and services to prevent and mitigate” a long list of societal ills, including anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal behaviors. As we have said, this will be used to censor what young people and adults can see on these platforms. The bill’s coauthors agree, writing that KOSA “will make platforms legally responsible for preventing and mitigating harms to young people online, such as content promoting suicide, eating disorders, substance abuse, bullying, and sexual exploitation.” 

Our concern, and the concern of others, is that this bill will be used to censor legal information and restrict minors’ ability to access it, while adding age verification requirements that will push adults off the platforms as well. Additionally, KOSA’s enforcement provisions give state attorneys general the power to decide what is harmful to minors, a recipe for disaster that will exacerbate efforts already underway to restrict access to information online (and offline). The result is that platforms will likely feel pressured to remove enormous amounts of information to protect themselves from KOSA’s crushing liability—even if that information is not harmful.

The ‘Limitation’ section of the bill is intended to clarify that KOSA creates liability only for content that the platform recommends. In our reading, this means content that a platform shows a user that doesn’t come from an account the user follows, that the user didn’t search for, and that the user didn’t deliberately visit (such as by clicking a URL). In full, the ‘Limitation’ section states that the law is not meant to prevent or preclude “any minor from deliberately and independently searching for, or specifically requesting, content,” nor should it prevent the “platform or individuals on the platform from providing resources for the prevention or mitigation of suicidal behaviors, substance use, and other harms, including evidence-informed information and clinical resources.” 

In layman’s terms, minors will supposedly still have the freedom to follow accounts and to search for or request any type of content, but platforms won’t have the freedom to show some types of content to them. Again, that fundamentally misunderstands how social media works—and it’s still censorship. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Courts Have Agreed: Recommendations Are Protected

If, as the bill’s authors write, they want to hold platforms accountable for “knowingly driving toxic, addicting, and dangerous content” to young people, why exempt search—which can also surface toxic, addicting, or dangerous content? We think this section was added for two reasons. 

First, for years members of Congress have attacked social media platforms’ use of automated tools to present content, claiming that it causes any number of issues ranging from political strife to mental health problems. The evidence supporting those claims is unclear (and the reverse may be true). 

Second, and perhaps more importantly, the authors of the bill likely believe pinning liability on recommendations will allow them to square a circle and get away with censorship while complying with the First Amendment. It will not.

Platforms’ ability to “filter, screen, allow, or disallow content;” “pick [and] choose” content; and make decisions about how to “display,” “organize,” or “reorganize” content is protected by 47 U.S.C. § 230 (“Section 230”) and the First Amendment. (We have written about this in various briefs, including this one.) This “Limitation” in KOSA doesn’t make the bill any less censorious. 

Search Results Are Recommendations

Practically speaking, there is also no clear distinction between “recommendations” and “search results.” The coauthors of KOSA seem to think that content shown as a result of a search is not a recommendation by the platform. But of course it is. Search results are generated algorithmically: any modern search system uses an automated process to decide which results are accurate and relevant and the order in which they are presented—and it then recommends that selection to the user. 
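
To make that concrete, here is a minimal sketch of a toy search backend (the posts, the scoring logic, and the rank_results function are all illustrative, not any real platform’s code). Even at this tiny scale, the platform’s own automated scoring decides which items appear and in what order; functionally, that is a recommendation.

```python
# Toy search ranker: the platform's scoring logic, not the user,
# decides what is shown and in what order. All data and scoring
# choices here are hypothetical, for illustration only.

posts = [
    {"id": 1, "text": "Resources for finding transgender healthcare near you"},
    {"id": 2, "text": "Debate over new state healthcare rules"},
    {"id": 3, "text": "Cute cat compilation"},
]

def score(query: str, post: dict) -> float:
    """Crude relevance score: fraction of query words found in the post."""
    query_words = query.lower().split()
    text = post["text"].lower()
    return sum(word in text for word in query_words) / len(query_words)

def rank_results(query: str, posts: list[dict]) -> list[dict]:
    """Return posts ordered by the platform's relevance score."""
    scored = [(score(query, p), p) for p in posts]
    # The cutoff and the ordering are choices made by the platform.
    return [p for s, p in sorted(scored, key=lambda pair: -pair[0]) if s > 0]

print(rank_results("transgender healthcare", posts))
# Posts 1 and 2 are selected and ordered by the platform's own logic.
```

The scoring function, the relevance cutoff, and the ordering are all choices the platform makes; production search systems make the same kinds of choices with far more sophisticated models.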

KOSA’s authors also assume, incorrectly, that content on social media can easily be organized, tagged, or described in the first place, such that it can be shown when someone searches for it, but not otherwise. But content moderation at infinite scale will always fail, in part because whether content fits into a specific bucket is often subjective in the first place.
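
As a rough illustration of why that bucketing fails, consider a naive keyword tagger (entirely hypothetical; the keyword list and example posts are made up). It cannot tell a post that promotes an eating disorder apart from one offering recovery support, because both use the same words.

```python
# Naive keyword tagger: the kind of "just label the harmful content"
# approach that KOSA implicitly assumes is possible. Keywords and
# example posts are hypothetical; context, not keywords, decides meaning.

EATING_DISORDER_KEYWORDS = {"eating disorder", "anorexia", "purging"}

def tag_post(text: str) -> list[str]:
    """Attach a topic tag if any keyword appears in the text."""
    lowered = text.lower()
    tags = []
    if any(keyword in lowered for keyword in EATING_DISORDER_KEYWORDS):
        tags.append("eating-disorder-content")
    return tags

harmful = "Tips for hiding anorexia from your parents"
helpful = "How I recovered from anorexia and where to find support"

print(tag_post(harmful))  # ['eating-disorder-content']
print(tag_post(helpful))  # ['eating-disorder-content'] -- the same tag,
                          # even though this is the kind of recovery
                          # resource the 'Limitation' section says
                          # should remain available
```

A platform facing liability for the first post has no reliable, automated way to keep the second one up; the cheapest safe option is to suppress everything the tag catches.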


For example: let’s assume that, using KOSA, a state attorney general has made it clear that a platform that recommends information related to transgender healthcare will be sued for increasing the risk of suicide in young people. (Because trans people are at a higher risk of suicide, this is one of many ways we expect an attorney general could torture the facts to censor content—by claiming that correlation is causation.) 

If a young person in that state searches social media for “transgender healthcare,” does this mean the platform can or cannot show them any content about “transgender healthcare” as a result? How can a platform know which content is about transgender healthcare, much less whether that content matches the attorney general’s views on the subject, or whether it has to abide by that interpretation in search results? What if the user searches for “banned healthcare”? What if they search for “trans controversy”? (Most people don’t search for the exact name of the piece of content they want to find, and most pieces of content on social media aren’t “named” at all.) 

In this example, and in an enormous number of other cases, platforms can’t know in advance what content a person is searching for—and rather than risk showing something controversial that the person did not intend to find, they will remove it entirely, from recommendations as well as search results. If showing that content creates liability, platforms will cut off users’ access to everything related to a “dangerous” topic rather than risk surfacing it in the occasional instance when they can determine, for certain, that it is what the user is looking for. This blunt response will harm not only children who need access to information, but also adults who may seek the same content online.

“Nerd Harder” to Remove Content Will Never Work

Finally, as we have written before, it is impossible for platforms to know what types of content they would be liable for recommending (or showing in search results) in the first place. Because there is no definition of harmful or depressing content that doesn’t sweep in a vast amount of protected expression, almost any content could fit into the categories that platforms would have to censor. That includes truthful news about what’s going on in the world, such as wars, gun violence, and climate change. 

This Limitation section will have no meaningful effect on the censorial nature of the law. If KOSA passes, the only real options for platforms would be to institute age verification and ban minors entirely, or to all but remove ‘recommendation’ and ‘search’ functions for minors. As we’ve said repeatedly, these efforts will also impact adult users who either lack the ability to prove they are not minors or are deterred from doing so. Most smaller platforms would be pressured to ban minors entirely, while larger ones, with more money for content moderation and development, would likely block minors from finding enormous swathes of content unless they have the exact URL to locate it. In that way, KOSA’s censorship would further entrench the dominant social media platforms.

Congress should be smart enough to recognize that this bait-and-switch fails to solve KOSA’s many faults. We urge anyone who cares about free speech and privacy online to send a message to Congress voicing their opposition. 

TAKE ACTION

TELL CONGRESS YOU WON'T ACCEPT INTERNET CENSORSHIP

Are you a young person fighting back against bad bills like KOSA? Become an EFF member at a new, discounted Neon membership level specifically for you--stickers included! 
