Preventing Big Tech Censorship: How States Can Defend Free Speech Online

James Sherk,  November 15, 2021

EXECUTIVE SUMMARY

Social media technology has given Americans a public voice as never before. Americans increasingly follow the news, debate current events, and keep in touch with friends online. Unfortunately, major technology corporations are using their power to censor Americans’ online speech. 

While Twitter’s deplatforming of President Donald Trump attracted significant attention, many Americans who are not public figures have also been censored. Nearly 100,000 Americans have reported experiencing online censorship to the America First Policy Institute. Nearly half of Americans say they personally know someone who has been temporarily or permanently banned from a social media platform. Three-quarters of Americans believe technology companies intentionally censor views they dislike.

Social media censorship undermines free speech. As Professor Alan M. Dershowitz explains, social media censorship “raise[s] serious, substantial legal issues.” Major corporations should not judge which Americans may speak online or what they can say. The Biden administration has shown no interest in combatting this problem.

Federal inaction means it’s up to the states to defend free speech. Major technology companies and their allies argue these efforts are futile. They contend that Section 230 of the Communications Decency Act and the First Amendment’s free speech protections prevent states from regulating online content moderation. The companies overstate their case. Supreme Court precedents suggest states have considerable room to legislate consistently with the First Amendment and Section 230. 

The government has long enforced common carrier requirements. These laws require certain regulated industries, like telephone companies, to serve all customers. As Justice Clarence Thomas recently noted, the Supreme Court considers these laws consistent with the First Amendment. Courts see common carrier laws as requiring firms to transmit the speech of others—not speak themselves. There are strong constitutional arguments that states can require social media companies to serve all residents, just as telephone companies must do.

Courts have also interpreted Section 230 inconsistently. Some courts construe it to expansively immunize most content moderation. Big Tech points to these precedents. But other courts pay closer attention to the statutory text when interpreting Section 230. Under the textual precedents, states have considerable room to act. The Supreme Court has yet to resolve this dispute, though Justice Thomas has endorsed a textualist approach. States accordingly can legislate and point to the textual precedents. Unless and until the Supreme Court resolves this issue, states have reasonable grounds to act. Indeed, state legislation could give the Supreme Court an opportunity to clarify Section 230’s scope. 

Nonetheless, it is important for states to act carefully. Big Tech will likely sue over any restrictions. States need to draft laws that minimize litigation risk under existing precedents. This report presents model legislation that lawmakers can look to when drafting such laws.

The model bill regulates major social media platforms as common carriers, prohibiting Big Tech from deplatforming state residents, restricting content based on political or philosophical views, or inconsistently applying content moderation standards. It further requires platforms to notify users when their content is restricted, explain why, and give them an opportunity to appeal. These requirements fit within the common carrier legal tradition and the textual reading some courts give Section 230. 

Other courts that interpret Section 230 expansively have still held platforms must honor terms of service governing content moderation. Thus, the model legislation imposes substantial fees on major social media platforms and then exempts platforms that add anti-censorship protections to their terms of service. This approach strongly encourages platforms to stop censoring users, while empowering users to sue for breach of contract if the companies do not honor their commitments. It also provides alternative legal grounds for upholding anti-censorship protections.

These draft common carrier requirements and financial incentives provide complementary and overlapping protections. While litigation is still very likely, this bill is designed to fit within existing precedents. States can prevent Big Tech from censoring their residents.

 

KEY POINTS

  • Social media companies are widely censoring the American people. Nearly half of Americans know someone temporarily or permanently banned from a social media platform. Given the enormous reach of their platforms, Big Tech should not have the power to judge who can or cannot speak online.
  • States can protect their residents’ speech in a manner consistent with the First Amendment and the requirements of Section 230 of the Communications Decency Act.
  • This paper presents model legislation designed to prevent online censorship while fitting within existing precedents. It regulates dominant social media platforms as common carriers, requiring them to serve all state residents. It also incentivizes social media platforms to incorporate free speech protections into their terms of service.
  • States do not have to let Big Tech decide who can speak in the 21st century public square.

 

PREVENTING BIG TECH CENSORSHIP: HOW STATES CAN DEFEND FREE SPEECH ONLINE

Social media technology has given Americans a public voice as never before. Unfortunately, major technology corporations are censoring this online speech. Nearly half of Americans say they personally know someone temporarily or permanently banned from a social media platform. Three-quarters of Americans believe the big technology companies intentionally censor views they dislike. 

When the corporations that control internet communications make judgments on who can speak online, or what they can say, they restrict free speech just as effectively as direct government regulations would. While the Biden administration appears uninterested in preventing online censorship, states can protect their residents’ speech. State legislation needs to be carefully drafted to comport with the First Amendment and federal law. This report presents model legislation that does so.

The model bill regulates Big Tech firms as common carriers, prohibiting major online platforms from deplatforming state residents, restricting content based on political or philosophical views, or inconsistently applying content moderation standards. It also financially incentivizes the platforms to prohibit censorship in their terms of service. This model legislation provides a path for state legislators to protect their constituents from Big Tech censorship.

 

TECH CENSORSHIP WIDESPREAD

As the Supreme Court has recognized (Packingham v. North Carolina, 2017), the internet has become the 21st century public square. Today most Americans keep in contact with friends and relatives, follow the news, and discuss current events online. Indeed, the internet has largely supplanted traditional communication media: most Americans say they prefer to get their news online. Most Americans also regularly get news from social media websites such as Facebook, Twitter, and YouTube (Shearer, 2021). Major online platforms allow Americans to communicate as never before.

These developments have given major technology companies unprecedented private control over American speech without the accountability that robust competition normally provides. When major online platforms ban a user or their content, users have few alternative ways to make their voice heard. As Supreme Court Justice Clarence Thomas recently observed:

When a user does not already know exactly where to find something on the Internet—and users rarely do—Google is the gatekeeper between that user and the speech of others 90% of the time. It can suppress content by deindexing or downlisting a search result or by steering users away from certain content by manually altering autocomplete results. Facebook and Twitter can greatly narrow a person’s information flow through similar means (Biden v. Knight First Amendment Institute, 2021).

By deplatforming users or restricting access to their content—especially when acting in concert—Big Tech can effectively shut Americans out of the public square. Historically, no private party possessed such power. Now a handful of large technology companies do, and they are wielding it to control America’s public discourse. Big Tech frequently censors users and viewpoints that it or the federal government opposes.

In some cases, Big Tech openly censors content at the government’s request. Facebook is working with the Biden administration to remove posts that contain government-determined “COVID misinformation” (Nelson, 2021). Other times the platforms simply prohibit certain viewpoints. For example, Facebook now bans “content in the voice of Donald Trump” and took down videos of Lara Trump interviewing the former President (Feiner, 2021). YouTube prohibits medical professionals from presenting coronavirus treatment or prevention recommendations that differ from CDC guidance (YouTube, 2020). Leaked internal Facebook documents also show the company selectively censoring content and groups (Horwitz & Scheck, 2021).

More frequently, online censorship takes the form of establishing vague and superficially neutral terms of service, then selectively applying them to censor certain viewpoints. Several recent high-profile content takedowns illustrate this phenomenon:

  • Twitter deleted President Donald Trump’s account after he tweeted that “The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!” and “To all of those who have asked, I will not be going to the Inauguration on January 20th.” Twitter stated these tweets violated its “Glorification of Violence” policy by encouraging violent attacks on the Biden Inauguration (Twitter, 2021a). However, Twitter refused to take down tweets by Iranian Supreme Leader Ayatollah Khamenei calling Israel a “cancerous growth” that will be “destroyed” (Bowden, 2020; Khamenei, 2020). Twitter left up Cuban dictator Miguel Mario Díaz-Canel Bermúdez’s account while Cuban security forces violently attacked peaceful anti-communist protesters (Rondon, 2021). Twitter also left up the account of the Taliban’s spokesman as he posted regular updates on the terrorist group’s conquest of Afghanistan following the Biden administration’s botched withdrawal (DeMarche, 2021).
  • During the 2020 presidential election, Twitter froze the New York Post’s account for several weeks for reporting materials recovered from Hunter Biden’s abandoned laptop. Twitter claimed the coverage violated its policy against publishing hacked materials. However, Twitter had not previously restricted tweets covering materials published by Wikileaks or reported by Edward Snowden (Flood, 2020).
  • On August 11, 2020, OutKick.com—a prominent sports and opinion website—ran articles covering the site founder’s interview with President Trump, in which they discussed the importance of not canceling the fall 2020 college football season. The articles proved highly popular, and OutKick web traffic increased substantially. But the next day, and for the next week, OutKick’s Facebook traffic dropped more than two-thirds below normal levels. OutKick’s tech team determined that Facebook had restricted OutKick’s audience following the Trump interview. Over the next several months, OutKick tested positive articles about President Trump and then-candidate Biden. Biden articles had no effect on OutKick’s Facebook traffic, but positive Trump coverage induced traffic collapses, despite high levels of interest in these articles among readers who did come to the site. OutKick concluded that Facebook was restricting its audience when it posted materials friendly towards President Trump (Reviving Competition, 2021).
  • Former House Speaker Newt Gingrich’s Twitter account was suspended for “hateful conduct” after he tweeted: “If there is a covid surge in Texas the fault will not be Governor Abbott’s common sense reforms. The greatest threat of a covid surge comes from Biden’s untested illegal immigrants pouring across the border. We have no way of knowing how many of them are bringing covid with them” (McFall, 2021). However, Twitter did not suspend the account of the digital magazine the Root when it tweeted an article titled “Whiteness is a Pandemic” (The Root, 2021). Similarly, it did not suspend a Twitter account with over 160,000 followers that tweeted “ban bill burr. actually just ban white men. a disgrace” (bora, 2021).

Big Tech also frequently censors less high-profile Americans. A recent survey found that 46% of Americans personally know someone who has been temporarily or permanently banned from a social media platform (Rasmussen, 2021). President Trump recently filed a class-action lawsuit challenging Facebook, YouTube, and Twitter’s content censorship. Within months, nearly 100,000 other Americans also reported stories of online censorship to the America First Policy Institute (Take On Big Tech, 2021).

Online censorship affects all users, not simply those with controversial or disfavored views. It allows the tech companies to determine what users do—or do not—see. Users typically do not know when a platform prevents them from seeing particular content or steers them towards one viewpoint instead of another. But the companies have wide latitude to do so. For example, the first Google result for “Section 230” is an article from a Google-backed organization describing Section 230 as “the most important law protecting internet speech” (Google, 2021; Electronic Frontier Foundation, n.d.). This article appears much lower in search results for that term from Bing, Yahoo, or DuckDuckGo.

The American people widely recognize that technology companies use their power to shape public debate. A Pew poll found that almost three-quarters of Americans (73%) believe social media sites intentionally censor viewpoints they find objectionable. This majority included 90% of Republicans and 59% of Democrats (Vogels, Perrin, and Anderson, 2020). 

 

SECTION 230

Americans opposed to online censorship and discrimination face an immediate legal obstacle. Congress has passed legislation shielding online platforms from liability for restricting content. Section 230 of the Communications Decency Act of 1996 (Section 230) provides in relevant part:

 (c) Protection for “Good Samaritan” blocking and screening of offensive material

  1. Treatment of publisher or speaker. No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
  2. Civil liability. No provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected[.] …

(e)(3) State Law

Nothing in this section shall be construed to prevent any State from enforcing any State law that is consistent with this section. No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.

Section 230 effectively operates as a federal subsidy for online platforms: it absolves them of liability they would otherwise incur. Professor Dershowitz concludes “the question of social media censorship under 47 U.S.C. 230 is an issue of major legal importance” (Dershowitz, 2021).

Congress originally passed the Communications Decency Act (CDA) to combat online pornography when the internet was in its infancy. Other provisions of the law—since held unconstitutional—directly prohibited transmitting some indecent materials to minors online. Congress added Section 230 to also encourage platforms to remove such materials.

Common law historically holds publishers strictly liable for publishing illegal content (e.g., defamatory materials), while distributors are liable only for knowingly distributing illegal content. In 1996, before the creation of Section 230, courts considered online platforms distributors if they did not moderate content. But they considered platforms that moderated content publishers—holding them liable for everything they did not take down (Cubby, Inc. v. CompuServe, Inc., 1991; Stratton Oakmont, Inc. v. Prodigy Services Co., 1995). These court rulings discouraged platforms from removing pornography or other indecent materials (Brannon, 2019a, pp. 1-2).

Section 230 legislatively overruled these cases. Subsection 230(c)(1) ensured courts would not treat platforms as publishers of third-party content. Subsection 230(c)(2) eliminated any liability for removing specific types of indecent material. Section 230 thus eliminated the risk content moderation would convert platforms into publishers and encouraged platforms to remove offensive materials by granting them blanket immunity for doing so. 

Section 230 did encourage platforms to remove indecent content. However, many lower courts have disregarded Section 230’s plain language and interpreted it to immunize content restrictions more broadly—not just those involving obscenity or similar materials (Brannon, 2019b, pp. 14-15).[1] Section 230 has accordingly preempted most legal challenges to online platforms’ content restrictions (Brannon, 2019b, pp. 15, 22). President Trump repeatedly asked Congress to remove Section 230’s protections for online censorship.[2]

 

BATTLE MOVES TO THE STATES

The Biden administration’s early actions indicate that federal efforts to combat Big Tech censorship ended with the Trump administration. President Biden rescinded a Trump executive order combatting online censorship (Exec. Order 13925; Exec. Order 14029). Congressional Democrats and the Biden administration have proposed amending Section 230 to encourage censorship, not reduce it (Kelly, 2021; Bose & Renshaw, 2021; Klein, 2021). And the Biden administration is actively working with major social media platforms to suppress what it considers “misinformation” (Nelson, 2021).

The Biden administration’s stance has moved the battle to protect online free speech to the states. States may be able to enact legislation that withstands legal scrutiny and protects free speech. If a substantial number of states passed legislation protecting online discourse, then the Big Tech companies would need to modify their policies nationwide. Florida and Texas have already enacted legislation prohibiting online censorship. 

 

IMPEDIMENTS TO STATE-BASED LEGISLATION

Critics counter that two principal obstacles prevent states from enforcing such laws: Section 230 and the First Amendment (Soave, 2021).[3] Section 230 has been interpreted to immunize online platforms’ content moderation decisions. Opponents argue that since this federal law supersedes conflicting state legislation, states cannot independently regulate platform content moderation.

Even if Section 230 did not exist, the First Amendment prohibits the government from punishing platforms (or anyone else) for their speech. Critics argue that platforms engage in such constitutionally protected speech when they curate content. They point to Supreme Court decisions holding that the First Amendment protects the right not to communicate ideological views one disagrees with (Wooley v. Maynard, 1977).[4] The Supreme Court similarly held that the government cannot force newspapers to publish opposing views (Miami Herald Publishing Co. v. Tornillo, 1974). 

These arguments persuaded conservative policymakers in both Utah and North Dakota to reject legislation combatting tech censorship (Turley, 2021). And they led a federal district judge to enjoin Florida’s social media law from being implemented (NetChoice v. Moody, 2021).

But these legal obstacles are not insurmountable. To the extent that the government is using Section 230 to indirectly accomplish what it cannot do directly, Section 230 may be unconstitutional. Even assuming Section 230 is lawful, states may have leeway to combat bad faith content moderation within its framework. A textualist reading of the statute shows Section 230 only immunizes removing a few narrowly defined types of content; it does not protect viewpoint-based removals. Additionally, First Amendment precedent holds that while the government generally cannot force entities to express a specific message, it can require them to impartially host third-party speech. While the Supreme Court has not ruled directly on this point with regards to social media, existing precedents give states good arguments for their authority to protect online speech.

 

CONSTITUTIONALITY OF SECTION 230

The constitutionality of Section 230 has not been definitively established. If it is ultimately deemed unconstitutional as currently applied, it has no effect and cannot prevent states from protecting their residents. Congress enacted Section 230(c)(2) immunity to encourage internet platforms to voluntarily remove indecent material, even if that material is constitutionally protected. 

The Supreme Court has held that it is “axiomatic that a state may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish” (Norwood v. Harrison, 1973). The Court has previously held that when the government encourages third-party conduct, including by granting immunity to it, that conduct can become government action subject to constitutional requirements (Railway Employees’ Dep’t v. Hanson, 1956; Skinner v. Railway Labor Executives’ Ass’n, 1989). Consequently, Section 230’s incentives to restrict third-party speech may violate the First Amendment. The Supreme Court has not ruled on this issue with respect to Section 230, but it may find Congress lacks authority to incentivize third-party censorship by granting blanket immunity to it.[5] Section 230 may not preempt state laws protecting online discourse at all.

 

INCONSISTENT RULINGS ON THE SCOPE OF SECTION 230

Even if Section 230 is lawful, it may not preempt all state laws protecting online discourse. Lower courts have interpreted Section 230’s immunity for content moderation inconsistently. Some lower courts have interpreted subsection (c)(1) to sweepingly immunize virtually all content restrictions.[6] Other lower courts have instead held that immunity depends on platforms acting in “good faith” under subsection (c)(2) (National Telecommunications and Information Administration, 2020, pp. 28-30). The Supreme Court has not resolved this question, although Justice Thomas has described the “sweeping immunity” as “questionable precedent” that “reads extra immunity into [Section 230] where it does not belong” (Malwarebytes v. Enigma Software, 2020). Justice Thomas, and many lower courts, conclude that construing subsection (c)(1) to generally immunize content restriction would render subsection (c)(2)’s specific immunity for good faith behavior superfluous. This violates canons of statutory construction that call for giving effect to all parts of a statute.[7]

If Section 230’s immunity for content restrictions depends on “good faith,” then states have room to act.  Section 230 does not clarify what “good faith” means. Commentators have observed that states could flesh out subsection (c)(2)’s “good faith” requirement (Coleman, 2019). A social media platform may act in bad faith if it acts inconsistently with its terms of service, applies those terms unequally, or changes them frequently to target certain classes of people. For example, Facebook exempts VIPs from content moderation standards that apply to ordinary users (Horwitz, 2021). States can reasonably construe such behavior as bad faith, and thus outside Section 230(c)(2)’s protection. States could accordingly pass laws prohibiting selective or inconsistent application of terms of service.

Additionally, there are powerful arguments that the term “otherwise objectionable” in subsection 230(c)(2) does not cover all content platforms consider objectionable. Under the canon ejusdem generis, courts interpret general terms that follow a specific list to only encompass objects like those expressly identified. Similarly, the doctrine noscitur a sociis holds that the meaning of unclear or uncertain terms should be determined by surrounding terms and context (Sutton v. Providence St. Joseph Medical Center, 1999).[8] Applying these canons, the term “otherwise objectionable” only immunizes removing content similar to the expressly identified categories: e.g. lewd, obscene, excessively violent content, etc. (Candeub & Volokh, 2021). Under this interpretation, Section 230 does not protect platforms’ ability to restrict most non-indecent material.[9] If so, Section 230 does not prevent states from passing laws combatting ideological censorship. 

Moreover, even the expansive reading of (c)(1) may leave states some room to act. Some—though not all—lower courts that construe (c)(1) expansively have also held that it does not prevent suits for breach of contract.[10] As the Ninth Circuit reasoned in Barnes v. Yahoo!, Inc. (2009), such suits do “not seek to hold [online platforms] liable as a publisher or speaker of third-party content, but rather as the counter-party to a contract, as a promisor who has breached.” Thus, states may be able to combat online censorship by requiring or inducing platforms to modify their terms of service. States have good reason to believe they can protect online speech consistently with Section 230.

 

COMMON CARRIER LAWS CONSTITUTIONALLY PERMISSIBLE

States also have good reason to believe laws preventing social media companies from censoring their users are constitutional. The First Amendment generally prevents the government from forcing people (or companies) to express a particular message. However, as Justice Stephen Breyer has noted, “requiring someone to host another person’s speech is often a perfectly legitimate thing for the Government to do” (Agency for Int'l Development v. Alliance for Open Society International, 2020). In particular, existing First Amendment precedents suggest the government can regulate social media companies as common carriers. Justice Thomas discussed this possibility in a recent concurrence (Biden v. Knight First Amendment Institute, 2021).

Common carrier laws require regulated businesses to serve all customers. They originally applied to industries like transportation and courier services. Congress subsequently applied them to telecommunications services. Under common carrier laws, for example, telephone companies cannot deny customers service based on their political views. They must transmit all customers’ speech. Justice Thomas noted that such common carrier regulations existed at the time of the founding and were not—and are still not—seen as infringing on free speech.

 

PRECEDENTS SUGGEST SOCIAL MEDIA COMPANIES CAN BE REGULATED AS COMMON CARRIERS

The Supreme Court has not directly considered whether the government can impose common carrier type duties on social media companies. However, Eugene Volokh—a prominent First Amendment legal scholar—observes that several of the Court’s precedents suggest it can (Volokh, 2021). Legal scholar Richard Epstein has also reached a similar conclusion (Varadarajan, 2021).

Three main precedents support this argument. In PruneYard Shopping Center v. Robins (1980), the Court considered a challenge to a California law requiring shopping malls to allow petition signature gathering on their premises. PruneYard Shopping Center argued “that a private property owner has a First Amendment right not to be forced by the State to use his property as a forum for the speech of others.” The Supreme Court unanimously rejected that argument and upheld the California law.

In Rumsfeld v. Forum for Academic and Institutional Rights (2006), the Supreme Court considered a challenge to a law that required universities to provide military recruiters equal access to on-campus recruiting or lose federal funds. Several law schools objected to the military’s then-extant “Don’t Ask, Don’t Tell” policy. They argued that it was unconstitutional for the government to force them to host speech that they opposed. The Supreme Court unanimously disagreed, holding that “students can appreciate the difference between speech a school sponsors and speech the school permits because legally required to do so, pursuant to an equal access policy.”

In Turner Broadcasting System v. FCC (1994), the Justices considered a law requiring cable providers to carry local broadcast stations. Cable providers sued, arguing these requirements unconstitutionally forced them to transmit messages they did not want to convey. The Supreme Court held Congress could constitutionally require them to do so.

Under these precedents, the government can require organizations to host third-party speakers under a few conditions. First, compelled hosting must not prevent the organizations from expressing their own speech (Volokh, 2021, pp. 423-428). Second, compelled hosting must not prevent the organizations from dissociating with or disavowing the speakers’ message (Volokh, 2021, pp. 428-432). Finally, the government generally may not require organizations to convey a specific viewpoint. The government can only require organizations to provide a viewpoint-neutral forum for others to speak, not to communicate specific messages (Volokh, 2021, pp. 445-448).

These conditions apply to social media companies. Common carrier type duties would not prevent social media platforms from communicating their own views, and the platforms could easily disassociate themselves from third-party content. Indeed, Section 230 legally prevents third-party content from being attributed to platforms. And common carrier type duties would only oblige platforms to provide a forum for others, not to communicate a specific message.

Moreover, in Turner, the Supreme Court expressly held that private control over a technological communications “bottleneck” can constitutionally justify requiring private entities to host third-party speech:

When an individual subscribes to cable, the physical connection between the television set and the cable network gives the cable operator bottleneck, or gatekeeper, control over most (if not all) of the television programming that is channeled into the subscriber's home. Hence, simply by virtue of its ownership of the essential pathway for cable speech, a cable operator can prevent its subscribers from obtaining access to programming it chooses to exclude. A cable operator, unlike speakers in other media, can thus silence the voice of competing speakers with a mere flick of the switch.

The potential for abuse of this private power over a central avenue of communication cannot be overlooked. Each medium of expression must be assessed for First Amendment purposes by standards suited to it, for each may present its own problems. The First Amendment's command that government not impede the freedom of speech does not disable the government from taking steps to ensure that private interests not restrict, through physical control of a critical pathway of communication, the free flow of information and ideas … assuring that the public has access to a multiplicity of information sources is a governmental purpose of the highest order, for it promotes values central to the First Amendment. (Turner Broadcasting, 512 U.S. at 656-57, 663) (cleaned up). 

This reasoning applies with equal if not greater force to major social media platforms. Because the value of a social media network increases with its number of users, major social media platforms experience economies of scale that give them the natural ability to accrue market power and “gatekeeper” status (Srinivasan, 2019).

Federal law does not preempt state common carrier type regulations either. Current Federal Communications Commission (FCC) common carrier regulations do not limit states’ ability to regulate internet service providers (ISPs).[11] President Biden plans to change these ISP regulations to resurrect Net Neutrality (Exec. Order 14036). However, the Biden administration has shown no interest in extending these regulations to cover online platforms. Unless the Federal government affirmatively regulates online platforms under Title II of the Communications Act, states can set their own policies.[12] So states have solid legal arguments for their authority to impose common carrier duties on Big Tech.

 

LIABILITY RELIEF WOULD REINFORCE COMMON CARRIER STATUS

Common carriage is often viewed as a regulatory deal. In this framework, the government imposes non-discrimination obligations on an industry. In exchange, the government relieves the industry of liabilities it would otherwise incur (Candeub, 2020, pp. 403-412). States could firmly ground platform regulations in the common carrier framework by adopting a similar “carrot and stick” approach.

Under this approach, states would both impose common carrier duties on major platforms (the “stick”) while relieving them of a significant liability (the “carrot”). However, since Section 230 already provides legal immunity for third-party content distribution, states would need to provide a different source of liability relief. 

State taxes and fees provide a natural alternative. Under the Supreme Court’s holding in South Dakota v. Wayfair, Inc. (2018), states have broad authority to tax online firms without a physical nexus in their state. Accordingly, states can constitutionally impose significant taxes or fees on online platforms. States could then exempt firms that act as common carriers from the tax or fee. This relief would provide platforms a substantial incentive to stop censoring state residents, while placing the state regulations firmly within the common carrier framework.

Imposing common carrier duties on social media companies is not without risk. Industries often end up “capturing” the agencies that regulate them—successfully lobbying the regulators to put their interests first. This can lead the government to shield the industry from competitors and stifle competition (Kenton, 2021). Turning social media companies into common carriers creates some risk of such regulatory capture. However, social media economies of scale already stymie competition. Moreover, common carrier duties could be enforced primarily through private litigation and state courts. Lobbying and capturing the state court system would be much more difficult than capturing a specialized administrative agency.[13] The risk of regulatory capture is thus not as great in the social media sector as in other industries.

 

LITIGATION PROSPECTS

States can in principle prohibit online censorship in a manner consistent with constitutional requirements and federal law. Nonetheless, Big Tech will likely sue over such laws. While there are sound theoretical justifications for states’ ability to act, little case law exists directly on point. Neither the Supreme Court nor any federal appeals court has considered the extent to which platforms’ restriction of third-party content is itself constitutionally protected speech, and only a handful of district courts have done so (Brannon, 2019b, pp. 21-23).[14] These will be cases of first impression. Similarly, almost no case law examines the constitutionality of applying common carrier type duties to social media platforms. First Amendment analysis therefore necessarily extrapolates from related precedents. And lower courts have not agreed on the scope of Section 230 immunity for content moderation.

Lower court judges will thus have considerable discretion over how to rule on challenges to state laws. States have strong arguments under existing precedents for upholding those laws. But the Supreme Court has not yet ruled directly on this question. Judges so inclined can distinguish these precedents and rule against the states.

This happened to Florida’s social media legislation. The case was assigned to Judge Robert Hinkle, a Clinton appointee known for aggressive liberal rulings.[15] Judge Hinkle followed the strand of precedent that reads sweeping immunity into subsection (c)(1) (NetChoice v. Moody, 2021, pp. 15-16). He further distinguished the PruneYard and Rumsfeld precedents and enjoined the law on First Amendment grounds (NetChoice v. Moody, 2021, pp. 16-28). A different judge could have interpreted Section 230 and the applicable precedents differently and upheld the law.

States have reasonable grounds to believe neither Section 230 nor the First Amendment prevent them from combatting online censorship. But the relevant case law will likely remain unsettled until the Supreme Court resolves these issues.  It is consequently important that states carefully draft their laws to maximize success under existing precedents, and to make it more difficult for hostile judges to rule against them.

An additional benefit of tying tax relief to platform non-discrimination is that this approach would be particularly difficult for platforms to litigate against. If the legislation contained a severability clause, a court judgment overturning the exemption would not affect the validity of the underlying tax or fee.[16] Thus, a successful lawsuit could only force the major platforms to pay the tax, not eliminate it altogether.

 

AFPI MODEL LEGISLATION

The America First Policy Institute (AFPI) has developed model language to help states draft legislation that has the strongest prospect of withstanding legal challenges.[17] It is included as an appendix to this report. The model legislation regulates major online platforms as common carriers, requiring them to act in good faith and avoid viewpoint discrimination. The model legislation reinforces the common carrier framework by imposing large, but not debilitating, fees on online platforms to support state universal service funds—funds typically used to improve services and connectivity for disadvantaged individuals or underserved areas. It then relieves platforms of this liability if they contractually commit to not censor users. 

The legislation includes a severability clause and is structured so that if part of the law is struck down the remaining parts will operate effectively. If courts give Section 230 the textual interpretation Justice Thomas considers appropriate, or the courts strike down Section 230 entirely, the entire law would be upheld. If courts interpret Section 230 more expansively, parts of the law would fall, but the remaining provisions would still significantly protect online speech. 

 

COMMON CARRIER DUTIES AND GOOD FAITH CONTENT MODERATION

Section 1 of the model bill imposes common carrier duties on market dominant online platforms. It requires them to furnish their services to state residents upon reasonable request, without discrimination, and upon just and reasonable terms. These terms come from common carriage law.[18] Section 1 and the definitions in Section 5 specify what these duties entail:

 

Open Political Discourse 

Section 1 of the bill makes market dominant online platforms open forums for political debate. It generally prevents them from treating content adversely based on philosophical, political, ideological, or religious views. It contains an exception for content that falls within Section 230(c)(2)’s expressly enumerated categories (e.g., content that is obscene, excessively violent, harassing, etc.). If upheld, this provision would significantly protect Americans’ free speech rights.

However, this provision in the draft bill is vulnerable to legal challenges. Upholding it requires courts to both construe Section 230(c)(1) as inapplicable to content removal, and to apply the ejusdem generis or noscitur a sociis canons to the term “otherwise objectionable” in (c)(2). 

This is a risk worth taking on behalf of the American people. States have strong arguments that this is the best reading of Section 230. If the courts do not accept that reading, the severability clause ensures that striking down this provision does not impair the rest of the act.

One constitutional concern is that requiring platforms to serve as neutral forums for political, philosophical, and religious discourse could be seen as content-based. The Supreme Court heavily scrutinizes content-based laws (Reed v. Town of Gilbert, 2015). There are arguments that such open-forum requirements are not properly considered content-based. Even if they are held to be content-based, Eugene Volokh argues such viewpoint-neutral laws are constitutional (Volokh, 2021, pp. 445-448). Nonetheless, as a safeguard, the severability clause clarifies that if a court invalidates the open forum requirements because they apply specifically to political, philosophical, or religious views, then those limitations would fall and platforms could not treat any content adversely based on viewpoint—not just political or ideological views.[19]

 

User Deplatforming Prohibited

Section 1 further prohibits market dominant platforms from categorically deplatforming state users. Deplatforming is defined as denying someone access to the platform outright, as opposed to moderating their content on a case-by-case basis. This access requirement is a standard common carrier type obligation. Telephone companies must generally serve all customers; they cannot deny service outright. The model bill similarly requires major platforms to serve all comers. 

States arguably have authority to ban deplatforming without qualification because section 230(c)(2) protects only good faith actions “restrict[ing] access to or availability of material” (emphasis added). Subsection (c)(2) does not provide immunity for restricting individual users as such. Of course, individual users can post objectionable material, and (c)(2) immunizes good faith actions restricting that content. But (c)(2) does not on its face protect deplatforming users, regardless of the nature of their future postings.[20]

A categorical ban on deplatforming is also content-neutral. It applies to all users, no matter what they post.[21] This content neutrality significantly reduces constitutional concerns. 

 

Good Faith Content Moderation Required

Section 1 further prohibits platforms from moderating content in bad faith. Section 5 defines this as banning, deleting, demonetizing, or restricting access to content in a pretextual manner or otherwise inconsistently with terms of service (TOS).[22] It also includes platforms “selectively applying [their] terms of service” to restrict access to or availability of content that is “similarly situated to content that the provider intentionally declines to restrict.” Section 1 further requires major platforms to provide meaningful appeals of alleged bad faith content moderation. 

This language allows major platforms to set viewpoint neutral “community standards” defining objectionable material, and to remove content that violates those standards. But it requires them to act honestly and transparently. Major platforms would violate the law if they gave pretextual justifications for restricting content. This can be demonstrated by, for example, showing platforms knowingly apply these standards inconsistently—such as Facebook exempting VIPs from content moderation applicable to regular users (Horwitz, 2021). This provision prevents platforms from representing themselves as open to all users, but deceptively applying terms of service against privately disfavored individuals or views. This language provides additional protection against surreptitious censorship. 

Section 1 also expressly allows platforms to take down illegal or otherwise constitutionally unprotected content, notwithstanding their common carrier duties. For example, the First Amendment does not protect obscenity.[23] This language addresses concerns that common carrier duties would force platforms to disseminate illegal or obscene content.[24]

 

Disclosure Requirements

The bill also imposes several notification and disclosure requirements on major online platforms. First, it requires them to describe their content moderation policies in plain and particular terms of service available at the time of use.[25] Section 1 further clarifies that these TOS cannot allow platforms to take down content for any reason. Instead, users would have to violate specific TOS before their content could be restricted. This prevents platforms from using vague or ill-defined TOS to justify arbitrarily censoring content.

Second, the bill requires platforms to notify users when their content is restricted. The notification must identify the specific TOS the user allegedly violated. This would prohibit practices such as “shadow banning” where platforms surreptitiously restrict content.

Third, it requires platforms to regularly disclose how often they restrict content, how many times those restrictions are appealed, and the outcome of those appeals. Platforms do not currently report this information.

 

PENALTIES AND ENFORCEMENT

Section 2 covers penalties and enforcement. It provides separate statutory damages for deplatforming, adversely treating political discourse, content moderation not taken in good faith, failure to provide notice and an opportunity to appeal content moderation decisions, and failure to disclose content moderation statistics.[26] It also allows litigants to recover actual damages and attorney fees, and courts to provide injunctive relief.

Section 2 initially gives the state Attorney General (AG) the sole right to bring enforcement actions and gives the AG authority to issue civil investigative demands. State AGs will generally have greater capability to enforce these requirements, especially when equipped with civil investigative demands. However, the bill provides plaintiffs a private right of action if the AG does not bring charges.

 

FEE EXEMPTIONS FOR PLATFORMS THAT PROTECT FREE SPEECH

Sections 1 and 2’s legality depends on courts interpreting only Section 230(c)(2)—with its “good faith” requirement—to immunize content moderation. Some courts construe Section 230 in this textual manner. Others also construe subsection (c)(1) to immunize all content moderation decisions, irrespective of good faith (Barnes v. Yahoo!, Inc., 2009; Sikhs for Justice v. Facebook, Inc., 2015). Courts that adopt the latter, more expansive framework would strike down Sections 1 and 2.

Sections 3 and 4 provide additional protections that can operate under an expansive reading of (c)(1). These sections provide financial incentives to eschew censorship, and they do so in a manner that even some courts construing subsection (c)(1) expansively have found permissible. These sections complement Sections 1 and 2’s direct regulations. They serve as a legal backstop, providing alternative legal grounds to protect state residents.

Section 3 charges major online platforms a quarterly fee on their active in-state users.[27] The draft suggests fees of about one-tenth of the major platforms’ revenues per user, ranging from $1.50 to $7.50 per user per quarter.[28] To ensure the fees are not overly burdensome, Section 3 caps platform fees at no more than 15 percent of gross revenues generated through each platform’s services. States would estimate the number of platform users either by using existing commercially available services or by conducting their own surveys, though administrative records could also be used.[29] The fees would fund state universal service programs.
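For illustration, using hypothetical figures rather than numbers drawn from any actual platform: a platform assessed $3.00 per active in-state user per quarter with 2 million such users would owe roughly $6 million for that quarter, unless that amount exceeded 15 percent of the gross revenues generated through its services, in which case the fee would be reduced to that capped amount.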

Section 4 exempts platforms from these fees if they (1) publish the content moderation statistics required by Section 1 and (2) incorporate into their terms of service contractually enforceable commitments to abide by Section 1’s open discourse and fair treatment requirements (e.g. not to deplatform users, not to engage in viewpoint discrimination, not to engage in selective or pretextual content moderation, etc.). The proposed terms of service further entitle users to recover court costs and attorney fees, as well as any actual damages, if they successfully sue for breach of these commitments. These contractual terms would also waive the choice of forum and choice of law provisions that platforms frequently use to channel cases towards preferred judicial forums. Section 4 also states that platforms cannot claim the fee exemption if courts hold any of these commitments judicially unenforceable.

This approach provides strong financial incentives for platforms to avoid censoring their users: continuing to censor costs them about one-tenth of their in-state revenues. It situates the model legislation firmly within the common carrier legal tradition of liability relief in exchange for serving all customers. It comports with the First Amendment for the same reasons Section 1 does. Indeed, Section 4 raises fewer First Amendment concerns because it merely provides financial incentives rather than regulating behavior directly.

This approach also provides a legal backstop against courts that construe Section 230(c)(1) to expansively immunize all content moderation. Some of these courts, including the Ninth Circuit, have held that subsection (c)(1) does not insulate platforms from breach of contract lawsuits.[30] So even if courts strike down Section 1’s direct regulations, they may permit incentivizing platforms to contractually commit to fair treatment and open discourse. This “belt and suspenders” approach maximizes the prospects of protecting state residents from online censorship.

Additionally, the severability clause means that the Section 3 fees will remain if courts strike down the Section 4 exemption for non-censorious platforms. This structure discourages lawsuits seeking to strike down the non-discrimination incentives. A successful lawsuit would simply force the platforms to pay the fees, not eliminate the burden altogether. Consequently, the primary legal challenges would probably be brought against the underlying Section 3 fees.

Section 3 has been drafted to comply with applicable legal requirements. In National Federation of Independent Business v. Sebelius (2012), Chief Justice Roberts held that the government’s taxing power permits large financial charges, so long as the charges are not so “prohibitory” that affected entities are left with no real choice but to avoid them. Fees of approximately one-tenth of revenue attributable to users in a state are large, but they are not coercive in the way fees of 50% or 100% of revenue would be.[31]

Using the fees to fund state universal service programs also avoids preemption under the Internet Tax Freedom Act (ITFA, 2016). The ITFA prohibits states from taxing internet access or imposing discriminatory taxes on online commerce. The ITFA’s definition of “internet access” encompasses most online platforms.[32] The ITFA accordingly prohibits most taxes or fees on online platforms. The Chamber of Commerce is suing to overturn Maryland’s new digital advertising tax under the ITFA (Gaines, 2021). However, the ITFA expressly permits state fees that fund universal service programs.[33] Currently, 42 states have such programs (Lichtenberg, 2019).[34] Fees for universal service programs are also reasonably connected to services states provide the platforms.[35] These programs fund initiatives that make internet access—and thus the platforms’ services—available to all state residents.[36] States that do not have, and do not want to create, universal service funds could avoid ITFA preemption by imposing fees that support alternative services.[37]

 

MARKET DOMINANT VS. MAJOR ONLINE PLATFORMS

Sections 1 and 2 regulate “market dominant” firms as common carriers. The draft defines market dominant firms as those with majority state market share for the distinct services they provide. This definition encompasses the major online platforms, such as Google and Twitter, but excludes smaller platforms such as DuckDuckGo and Parler. For example, Google processes approximately 62 percent of U.S. internet searches, versus 26 percent for Bing and 11 percent for Yahoo! (Seattle Organic SEO, 2020).[38]

The bill regulates only market dominant firms as common carriers because some—but not all—legal precedents hold that common carrier duties can only be imposed on firms with significant market power (Candeub, 2020, pp. 404-407). In this framework, common carrier duties ensure private companies cannot arbitrarily wield their market power to hurt private individuals. 

Additionally, the Supreme Court in Turner held that the government can ensure “the free flow of information and ideas” when companies possess “bottleneck” or “gatekeeper” control over critical communication channels. Limiting coverage to market dominant firms keeps the model bill within those precedents. 

This is especially important because Justice Kavanaugh, while serving on the D.C. Circuit Court of Appeals, argued that “absent a demonstration of a company's market power in the relevant geographic market, the Government may not interfere with a cable operator's or an Internet service provider's First Amendment right to exercise editorial discretion over the content it carries.” This logic applies to social media platforms. Then-Judge Kavanaugh expressly questioned whether the government could impose common carrier duties on companies like Twitter or YouTube (U.S. Telecom Association v. FCC, 2017).

Limiting the bill’s coverage to firms that control most of the relevant market therefore reduces litigation risk. Moreover, virtually all the major online platforms are “market dominant” under this definition. Using this definition accomplishes virtually everything advocates want. States can subsequently expand the law’s coverage if the courts uphold it on grounds unconnected to market power. 

Sections 3 and 4, by contrast, impose and excuse fees on major online platforms: those with gross in-state revenues above a given threshold (e.g., $10 million) whose services are used by more than a small proportion of state residents (e.g., 10 percent). This definition includes large platforms that are not market dominant, as well as market-dominant platforms. For example, Comscore data suggests this definition also encompasses Bing and Yahoo!, as well as Google search, in many states (Seattle Organic SEO, 2020).

Imposing the fees on major platforms grounds them in the states’ taxing authority. Courts could construe fees imposed on only one firm in each market segment as exercises of regulatory authority, not taxing power. At the same time, limiting the fees to major platforms avoids burdening innovative start-ups that have smaller revenues and less ability to pay.

The bill also evaluates firms only on their in-state activities. The Supreme Court generally does not allow states to “impose economic sanctions” based on “conduct in other States” (BMW of North America, Inc. v. Gore, 1996). Evaluating platform revenues, users, and market share on an in-state basis, rather than nationwide, avoids potential constitutional vulnerabilities.

 

ADDITIONAL PROVISIONS

Section 6 clarifies that the law only applies to activity that originates in the state—not activity that state residents engage in while physically located out of the state. This addresses concerns raised in the Florida litigation that platforms could not reliably determine if out-of-state users were in fact state residents.

Section 7 governs severability. Subsection (a) provides that the provisions of the act are generally severable. So, if a court strikes down part of the law (e.g. Section 1 and 2’s common carrier regulations), the rest continues in force (e.g. Section 3 and 4 financial incentives not to censor content). Subsection (b) ensures that if courts strike down the open-forum requirement because it only protects political or ideological views, that action extends open-forum requirements to all subjects, rather than eliminating them altogether. Subsection (c) makes Section 4’s fee exemption components internally inseverable: the entire fee exemption falls if a reviewing court strikes down any part. However, striking down the fee exemption would not affect the rest of the bill—including the fees in Section 3. 

 

STATES CAN PROTECT FREE SPEECH

Major technology companies and their allies argue that Section 230 and the First Amendment prevent states from passing anti-censorship legislation. However, legal analysis suggests states can effectively act within these requirements. Well-crafted legislation can protect state residents from online censorship. 

This report proposes model state legislation that uses common carrier regulations and financial incentives to combat online censorship. It is drafted to provide multiple layers of protection, so that even if courts strike down some provisions, the remaining provisions will continue to operate effectively. 

Americans need these protections to preserve a free society. Major technology companies are using their power to control America’s public debate, arbitrarily ruling some views off limits while promoting others. President Trump saying his supporters will not be disrespected is hardly a “glorification of violence.” It is certainly far less so than Ayatollah Khamenei calling for the destruction of Israel, or the Taliban broadcasting propaganda and updates on their military conquests. Yet Twitter suspended Trump’s account, but not the others. Similarly, Facebook’s terms of service allow content favorable or hostile to both Trump and Biden. But OutKick.com found its Facebook traffic throttled only when it ran articles portraying President Trump positively, not then-candidate Biden. 

Under this model legislation, tech platforms could not categorically deplatform users, could not discriminate against political viewpoints, and would have to moderate content consistently and transparently. They would also have strong financial incentives to adhere to these policies. 

While Congress and the Biden administration seem unlikely to act nationally, states can prevent major tech companies from silencing their residents.

 

APPENDIX – MODEL STATE LEGISLATION COMBATTING ONLINE CENSORSHIP

THE PROTECTING ONLINE FREE SPEECH ACT 

1. Duties of Market Dominant Online Platforms.
  (a) Market dominant online platforms engaged in commerce in this state shall furnish the platform services in which they are market dominant to information content providers within this state upon reasonable request therefor, without discrimination, and upon just and reasonable terms. These duties mean that such online platforms:
    (i) Must describe relevant content moderation policies applicable to information content providers in this state in plain and particular terms of service or use (terms of service) that are available at the time of use;
    (ii) May not deplatform or otherwise categorically deny service to information content providers within this state;
    (iii) May not adversely treat content on the basis of philosophical, political, ideological, or religious views expressed; provided that this does not prevent a platform from removing content that is obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable on similar grounds even though such content also expresses philosophical, political, ideological, or religious views;
    (iv) Shall refrain from content moderation not taken in good faith; and
    (v) When adversely treating content posted, uploaded, or published by an information content provider in this state, must:
      (1) Provide the aggrieved information content provider with a written explanation of the action taken and the reason(s) therefor, including an identification of the specific term(s) of service violated and the specific content deemed in violation of such term(s) of service, within 7 days of the action being taken; and
      (2) Provide the aggrieved information content provider with a timely, meaningful, and good faith opportunity to appeal content moderation decisions allegedly not taken in good faith. With each appeal decision that sustains, in whole or part, an initial decision to adversely treat an information content provider’s content, said market dominant platform shall certify that such decision did not constitute content moderation not taken in good faith, as that phrase is defined under this Act.
  (b) A market dominant online platform shall not assert that content moderation allegedly not taken in good faith was consistent with its terms of service unless it shows that such content was not permitted under its plain and particular terms of service. For the platform services in which the platform is market dominant, such terms of service may not permit the online platform to adversely treat content posted, uploaded, or published in this state without an information content provider in this state or their content violating one or more plain and particular terms of service.
  (c) Market dominant online platforms shall, for the platform services in which they are market dominant, publish statistics on a quarterly basis of the number of posts and information content providers within this state subject to actions described under subparagraph (a)(v) of this section, the number of appeals filed, and the number of appeals granted.
  (d) Exclusions. Nothing in this section shall prevent an online platform from adversely treating content it objectively and reasonably believes to be:
    (i) Constitutionally unprotected content in furtherance of unlawful activity, including but not limited to content in furtherance of human or drug trafficking, terrorism, or cyberstalking;
    (ii) Content in violation of federal intellectual property laws;
    (iii) Content subject to a final judgment of a United States federal or state court directing the removal of such content; or
    (iv) Obscenity.

2. Penalties and Enforcement.

  (a) The [state Office of Attorney General] shall have sole initial authority to bring actions to recover damages on behalf of users affected by violations of section 1 of this Act, as well as injunctive relief on behalf of the state. The [state Office of Attorney General] shall have the authority to issue civil investigative demands, consistent with the authority provided under the [state unfair and deceptive trade practice law], to investigate instances of market dominant online platforms violating the requirements of section 1 of this Act. This Office may bring an action within the ordinary statutory period, irrespective of other suits brought by consumers or users.
  (b) In the event the Attorney General does not bring an action within [X] days of notice of an alleged violation, an information content provider or user may bring an action(s) against a market dominant online platform in [appropriate state court, e.g. district court] in the county in which the plaintiff resides to enforce section 1 of this Act. A plaintiff is not required to exhaust an online platform’s appeals process before bringing such actions.
  (c) As to any cause of action arising under section 1, the [district] court may exercise personal jurisdiction over a nonresident defendant in the same manner as if the defendant were a person domiciled in this state if the defendant:
    (i) Makes the interactive computer service available to residents of this state; or
    (ii) Enters into agreements with residents of this state for the provision of interactive computer services.
  (d) The court shall award a plaintiff that prevails against a market dominant online platform for violation of section 1 of this Act the following:
    (i) Actual damages;
    (ii) Statutory damages of:
      (1) [V] dollars for each day an information content provider is deplatformed or otherwise categorically denied service;
      (2) [W] dollars for each incident of adversely treating an information content provider’s content based on the philosophical, political, ideological, or religious viewpoints expressed;
      (3) [X] dollars for each incident of content moderation not taken in good faith;
      (4) [Y] dollars for each violation of the requirements of section 1(a)(v) of this Act;
      (5) [Z] dollars for the failure to disclose the statistics required by section 1(c) of this Act;
    (iii) Court costs, fees, and reasonable attorney fees; and
    (iv) Injunctive relief, if and as the court deems appropriate.
  (e) The [appropriate state agency] shall issue annual determinations identifying market dominant online platforms in this state and the platform services in which they are market dominant and to which section 1 therefore applies. The [agency] may make this determination based on reputable commercially available data or on a statistically representative survey of residents of this state aged 13 and older with a sample size sufficient to produce a margin of error of less than [2] percent at the 95 percent confidence level.

3. Platform Fees to Support Universal Service Programs.

  (a) A corporation with annual gross revenues attributable to users located in this state of more than [$10 million] that owns or operates an online platform or platforms shall be, in addition to any taxes, fees, or other charges, assessed a quarterly fee on platform services actively used by [10] percent or more of individuals located in this state aged 13 and older (platform fees). Such platform fee shall be equal to:
    (i) [$7.50] per quarter per active state user of the corporation’s general internet search platform services;
    (ii) [$5] per quarter per active state user of the corporation’s personal social networking platform services;
    (iii) [$1.50] per quarter per active state user of the corporation’s microblogging social networking platform services;
    (iv) [$1.50] per quarter per active state user of the corporation’s online video sharing platform services; and
    (v) [$4] per quarter per active state user of the corporation’s online photo sharing platform services.
    Provided further that platform fees on each platform service shall not exceed 15 percent of the annual gross revenues attributable to users located in this state that the corporation generates through such platform service.
  (b) The [appropriate state agency] shall determine the number of applicable platform services’ active state users on which platform fees are owed, and the proportion of individuals located within this state who actively use such platform services, as follows:
    (i) The [agency] shall estimate the quarterly number and proportion of active state users by:
      (1) Utilizing reputable commercially available estimates of the platform services’ active state users aged 13 and older, and computing the proportion of active state users by dividing such number by the most recent annual estimates for the state population aged 13 or older produced by the United States Census Bureau; or
      (2) Conducting a statistically representative survey of individuals located within this state aged 13 and older with a sample size sufficient to produce a margin of error of less than [2] percent at the 95 percent confidence level. Such survey shall estimate, for each corporation subject to online platform fees, the proportion of individuals located within this state who used each applicable platform service three or more times in the previous quarter. The estimated active state users for that quarter shall be the product of that proportion and the most recent annual estimates for the state population aged 13 and older produced by the U.S. Census Bureau.
    (ii) The [appropriate state agency] shall transmit its estimated number of active state users to the applicable corporations within 60 days of the end of the applicable quarter. The proportion and number of active state users shall be the estimated proportion and estimated number of active state users for that quarter, unless the online platform provides administrative records demonstrating by a preponderance of evidence that a different number of users within this state used the applicable platform services three or more times in the previous quarter. In such cases the fee shall be owed on the administratively determined number of active state users, and the proportion shall be calculated by dividing such administratively determined number by the most recent annual estimates for the state population aged 13 and older produced by the U.S. Census Bureau.
  (c) Platform fees collected under this section shall be deposited in [applicable state universal service fund] and used for [specific purpose of the applicable universal service fund, e.g. promoting high speed internet access in rural locations].
  (d) Platform Fee Returns and Filing.
    (i) Each corporation that owns or operates an online platform or platforms subject to fees under this Act shall complete, under oath, and file with [the appropriate state official] a return for quarterly fee payments, along with such fee payment, within 120 days of the completion of the applicable quarter, provided that [appropriate state official] may extend this deadline for good cause related to administratively determining the number of active state users in the applicable quarter.
    (ii) A person, including an officer of a corporation, who willfully files a false return under this section with the intent to evade the payment of fees due under this section is guilty of perjury and, on conviction, is subject to the penalty for perjury.
    (iii) A person, including an officer of a corporation, who is required to file a fee return and who willfully fails to file the return as required under this section is guilty of a misdemeanor and, on conviction, is subject to a fine not exceeding $10,000 or imprisonment not exceeding 5 years, or both.
  (e) Penalties and interest.
    (i) The [appropriate state official] shall assess interest on unpaid platform fees from the due date to the date on which the fee is paid if a person who is required to pay an online platform fee under this section either fails to pay an installment when due or pays less than the amount due.
    (ii) In addition to such interest, the [appropriate state official] shall assess a penalty not exceeding 25 percent of the amount due if a corporation required to pay a platform fee under this section fails to pay such fee within 180 days of the due date of such fee.
  (f) The [Appropriate State Department] shall issue regulations governing the assessment and collection of platform fees under this section, including the process for corporations to provide administrative data on the number of active state users on which fees are owed and for certifying corporations for exemption under section 4 of this Act. Such regulations shall be issued within [X] days of the date of enactment of this Act.

4. Fee Exemption for Platforms that Foster Open Discourse. Notwithstanding section 3 of this Act, a corporation shall not owe any platform fees, nor be required to file a platform fee return, for any platform services for which, whether or not such platform services are market dominant, the corporation:

  (a) Publishes the statistics called for by section 1(c) of this Act; and
  (b) Incorporates into such platform service’s terms of service applicable to users in this state the following contractual terms:

“Section [Appropriate Section Number] – Open Discourse and Fair Treatment
       

Part 1. Coverage and Scope.

This section applies to individuals who are residents of and physically located in the state of [state] and are either users, or desired users, of our service.

In the event of a conflict between the provisions of this section and any other provision in these terms of service, the provisions of this section shall prevail.

Part 2. Definitions.

For the purpose of this section:

The phrase “restricting access to or availability of content” means our restricting, in whole or in part, covertly or overtly, manually or algorithmically, the availability, visibility or distribution of content a user posts, uploads, or publishes; provided that this phrase does not encompass the output of an algorithm we use for presenting or prioritizing content when such algorithm is (i) generally applicable; (ii) viewpoint neutral; and (iii) not designed to restrict the visibility or distribution of content of a specific user.

The term “demonetize” refers to excluding or restricting a user from participating in user advertisement revenue sharing arrangements.

The term “deplatform” means our restricting, in whole or in part, covertly or overtly, a user’s or desired user’s ability to post, upload, or publish content, as opposed to our taking such actions on a case-by-case basis against specific and particular content produced by such individual.

Part 3. Commitments to Open Discourse and Fair Treatment.

We promise:

  1. We will not deplatform or otherwise categorically deny service to you, although this commitment does not prejudice the ability of other users to decide with whom they interact, continue to interact, or accept dialogue from.
  2. We will provide you an open forum for public debate or dialogue, without regard to differing ideological, political, philosophical, or religious perspectives.
  3. We will not demonetize or restrict access to or availability of your content based on ideological, political, philosophical or religious views implied or expressed; provided that nothing in this paragraph prevents us from removing content that is otherwise obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable on similar grounds even though such content may also express philosophical, political, ideological, or religious views.
  4. We will only demonetize or restrict access to, or availability of, your content if it is not permitted under specific and plain and particular provisions of either our community standards or other provisions of our terms of service. We will apply those community standards and terms of service transparently, consistently, in good faith, and without pretext. We will not apply our community standards or terms of service selectively to some users and not others. If we demonetize or restrict access to or availability of your content, evidence that we have intentionally declined to demonetize or restrict access to or availability of similarly situated content from other users may be taken as evidence we have violated our obligations under this paragraph.
  5. If we demonetize or restrict access to or availability of your content, we will give you written notification within seven days of the action being taken. That notification will provide a specific and detailed explanation of the reason(s) that content violated our community standards or terms of service, including a description of the plain and particular provisions of our community standards or terms of service such content violated.
  6. Appeals: Upon any restriction(s), demonetization(s), or content moderation as described above or under relevant law, you will have a meaningful opportunity to appeal to have such actions reversed. The grounds for appeal include, but are not limited to, the fact that our content moderation, whatever form it takes, must be made in good faith, without pretext, and applied consistently to all users.

Part 4. Limitations.

Nothing in this section affects our ability to demonetize or restrict access to or availability of content that is obscene or pornographic. Nor does anything in this section limit our ability to demonetize or restrict access to or availability of any content that is illegal under state or federal law, such as constitutionally unprotected content in furtherance of unlawful activity, content that is in violation of intellectual property laws, or content subject to a final judgment of a United States federal or state court directing the removal of such content.

Part 5. Enforcement and Damages.

The provisions of this section are contractual and are enforceable at law or in equity. We expressly do not contract for any venue, jurisdiction, judicial forum, or choice of law provision for enforcement of this section. Notwithstanding any other provision in these terms of service, we waive said forum and choice of law provisions as applied to this section, allowing you or any proper legal authority to determine those, should the need arise, under all relevant and applicable laws.

If you bring an action against us to enforce the terms of this section and obtain a final judgment prevailing against us, we will, in addition to any other remedies or penalties provided by law:

  1. Reimburse your court costs, fees, and reasonable attorney fees; and
  2. Pay any actual damages you incurred through our failure to abide by the terms of this section.”


Provided that the fee exemption provided under this section shall not apply to any corporation’s platform services if a court of competent jurisdiction issues a final order holding the contractual language set forth in subsection (b) of this section unenforceable, in whole or in part, against such corporation and platform services. In such event the [appropriate state official] shall submit a notice within 30 days informing such corporation that it will be liable for the platform fees of Section 3 of this Act. Such liability shall commence the first full quarter beginning after the [appropriate state official] submits such notice.

5. Definitions. As used within this Act:

  (a) The term “active state user” means an individual who uses a particular online platform’s platform services three or more times in a quarter while located in this state.
  (b) The term “adversely treat content” means to delete, remove, demonetize, or restrict access to, or availability of, such content.
  (c) The term “annual gross revenues” means income or revenue from all sources, before any expenses or taxes, computed according to generally accepted accounting principles.
  (d) The phrase “annual gross revenues attributable to users located in this state” means the part of the annual gross revenues of the corporation that is computed using the apportionment fraction, the numerator of which is the population of residents of this state aged 13 years or older, and the denominator of which is the population of the United States aged 13 years or older, both as reported in the most recent annual estimates produced by the U.S. Census Bureau.
  (e) The term “content moderation not taken in good faith” —
    (i) Means an online platform taking any of the following adverse steps against an information content provider in this state, or lawful content posted, uploaded, or published by an information content provider in this state, in a manner that is pretextual or inconsistent with the online platform’s terms of service:
      (1) Deleting or removing content;
      (2) Demonetizing; or
      (3) Restricting access to, or availability of, content; and
    (ii) Also includes an online platform selectively applying its terms of service to take such adverse steps against content posted, uploaded, or published by an information content provider in this state that is similarly situated to content that the platform intentionally declines to delete, remove, demonetize, or restrict.
  (f) The term “demonetize” refers to excluding or restricting an information content provider from participating in the service’s advertisement revenue sharing arrangements.
  (g) The term “deplatform” means an online platform restricting, in whole or in part, covertly or overtly, the ability of an information content provider to post, upload, or publish content, as opposed to such platform taking such actions on a case-by-case basis against specific and particular content produced by such information content provider.
  (h) The term “information content provider” has the meaning provided under 47 U.S.C. § 230(f)(3).
  (i) The term “interactive computer service” has the meaning provided under 47 U.S.C. § 230(f)(2).
  (j) The term “market dominant” means an online platform
    (i) with one or more platform services that are actively used by a majority of the residents of this state aged 13 and older that actively use such platform services from any provider; or
    (ii) with a majority market share in this state for one or more platform services the platform offers, with market share being defined over total uses of such platform services for general internet search, microblogging social networking, and online photo sharing services, and over total time spent using such services for personal social networking and online video sharing; provided that a firm may be considered market dominant based on its market share in the United States if insufficient data exists to determine whether such platform has majority market share in this state. For the purposes of this subsection, the term “actively use” means to use such platform services an average of at least once a month over a calendar or fiscal year.
  (k) The term “online platform” means any website or application that is open to the public and allows users to create and share content electronically or engage in social networking, or any general search engine, provided that an online platform does not include:
    (i) Electronic mail services; or
    (ii) A website or application that consists primarily of news, sports, entertainment, or other information or content that is not user generated but is created or preselected by the provider and for which any chat, comments, or interactive functionality is incidental to, directly related to, or dependent upon the provision of such information or content.
  (l) The term “platform services” means the distinct category of services an online platform offers to the public for creating and sharing content electronically, engaging in social networking, or searching for content. Distinct platform services consist of:
    (i) General internet search, defined as internet-based software that
      (1) responds to a user’s textual query by using an algorithm or other methods to produce potentially relevant responses to such query, and
      (2) responds to general queries, not simply those confined to a particular subject or featuring results from a specific website; examples of general internet search services include the Bing, DuckDuckGo, Google, and Yahoo! search engines;
    (ii) Personal social networking, defined as an internet-based service that allows users to construct public or semi-public profiles, publish content on such profiles, articulate a list of other users with whom they share a connection, and view or exchange content with such users, without the service being oriented towards a specific interest or service such as career networking or romantic connections; examples of personal social networking services include Facebook, Google+, and MySpace;
    (iii) Microblogging social networking, defined as a combination of blogging and instant messaging focused around users creating short messages to be posted and shared on an online social networking service; examples of microblogging social networking services include Gab, Parler, and Twitter;
    (iv) Online video sharing, defined as an internet-based service that allows users to upload and store videos and share them with other users, and that is primarily focused on the posting and transmission of such user-provided videos; examples of online video sharing services include Dailymotion, Vimeo, and YouTube; and
    (v) Online photo sharing, defined as an internet-based service that allows users to upload and store photographs and share them with other users, and that is primarily focused on the posting and transmission of such user-provided photos; examples of online photo sharing services include Instagram and Flickr.
  (m) The phrase “restricting access to, or availability of content” means an online platform restricting, in whole or in part, covertly or overtly, manually or algorithmically, the visibility or distribution of content posted, uploaded, or published by an information content provider, provided that this phrase does not encompass the output of an algorithm for presenting or prioritizing content when such algorithm is (i) generally applicable; (ii) viewpoint neutral; and (iii) not designed to restrict the visibility or distribution of content of a particular information content provider.

6. Rules of Construction.

  (a) This act shall not be construed as requiring online platforms to verify the state of residency of users of their services. An online platform fulfills its duties under this Act if it satisfies them with regard to conduct that occurs within this state.
  (b) Platform services shall be construed as mutually distinct categories. For example, a personal social networking service shall not be construed to also be an online photo sharing service, even if such personal social networking service includes photo sharing.

7. Severability.

  (a) Subject to the provisions of this section, the provisions of this act are severable. If any section, subsection, or other part of this act is declared invalid or unconstitutional by a court of competent jurisdiction, that declaration shall not affect the part which remains.
  (b) In the event that Section 1(a)(iii) is held invalid because of the inclusion of the qualifying terms “philosophical, political, ideological, or religious,” such terms shall be held inoperable, and such section shall be applied as though no such qualifications were present.
  (c) The provisions of Section 4 of this Act are not severable. If any provision or a part thereof is declared invalid or unconstitutional, that declaration shall invalidate the whole of Section 4; provided that such declaration shall not affect the rest of this Act which remains.

 

AUTHOR BIOGRAPHY

James Sherk is the Director of the Center for American Freedom at the America First Policy Institute. He previously served as a Special Assistant to the President on the White House Domestic Policy Council. In that role he led the White House inter-agency working group on combatting Big Tech censorship, including the development of what became Executive Order 13925 on Preventing Online Censorship.

 

WORKS CITED

Agency for Int'l Development v. Alliance for Open Society International, 591 U.S. ___ (2020) (Breyer, J., dissenting) https://supreme.justia.com/cases/federal/us/591/19-177/#tab-opinion-4267239

Alphabet, Inc. (2021, July 27). Alphabet announces second quarter 2021 results. https://abc.xyz/investor/static/pdf/2021Q2_alphabet_earnings_release.pdf?cache=4db52a1 

Auxier, B. & Anderson, M. (2021, April 7). Social media use in 2021. Pew Research Center. https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2021/04/PI_2021.04.07_Social-Media-Use_FINAL.pdf 

Baker v. Nelson. 409 U.S. 810. (1972). https://www.scribd.com/document/21017674/Baker-v-Nelson-409-U-S-810-1972 

Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009). https://law.justia.com/cases/federal/appellate-courts/ca9/05-36189/05-36189-2011-02-25.html 

Biden v. Knight First Amendment Institute at Columbia University. 593 U.S. _______  (2021) https://www.supremecourt.gov/opinions/20pdf/20-197_5ie6.pdf 

BMW of North America, Inc. v. Gore, 517 U.S. 559. (1996). https://supreme.justia.com/cases/federal/us/517/559/ 

bora [@modooborahae] (2021, March 14) ban bill burr. actually just ban white men. a disgrace. [Tweet]. Twitter. https://twitter.com/modooborahae/status/1371186619177447433 

Bose, N. & Renshaw, J. (2021, Feb. 17). Exclusive: Big Tech's Democratic critics discuss ways to strike back with White House. Reuters. https://www.reuters.com/article/us-usa-tech-white-house-exclusive/exclusive-big-techs-democratic-critics-discuss-ways-to-strike-back-with-white-house-idUSKBN2AH1A4 

Bowden, E. (2020, July 30). Twitter execs refused Israel’s request to remove Iran’s Ayatollah Khamenei tweets. The New York Post. https://nypost.com/2020/07/30/twitter-execs-refused-request-to-remove-ayatollah-khamenei-tweets/ 

Brannon, V.C. (2019a). Liability for content hosts: An overview of the Communication Decency Act’s section 230. Congressional Research Service. Legal Sidebar. https://crsreports.congress.gov/product/pdf/LSB/LSB10306 

Brannon, V.C. (2019b). Free speech and the regulation of social media content. Congressional Research Service report No. 45650. https://crsreports.congress.gov/product/pdf/R/R45650 

Brenner v. Scott. 999 F. Supp. 2d 1278. (2014). https://www.scribd.com/document/237438005/4-14-cv-00107-74-Florida-Preliminary-Injunction 

Butler, H. N. and Johnston, J. S. (2009). Reforming state consumer protection liability: an economic approach. University of Pennsylvania Law School, Public Law Research Paper No. 08-47. Available at SSRN: https://ssrn.com/abstract=1125305 

Butler, H. N. and Wright, J. D. (2010). Are state consumer protection acts really little-FTC Acts? Northwestern University School of Law Faculty Working Paper No. 41. https://scholarlycommons.law.northwestern.edu/facultyworkingpapers/41/ 

Candeub, A. (2020). Bargaining for free speech: Common carriage, network neutrality, and section 230. Yale Journal of Law & Technology. 22, 391-433. https://www.journaloffreespeechlaw.org/candeubvolokh.pdf 

Candeub, A. & Volokh, E. (2021, June 28). Interpreting §230(c)(2). Journal of Free Speech Law. 1(1), 175-190. https://reason.com/wp-content/uploads/2021/06/CandeubAndVolokh.pdf 

Children’s Online Privacy Protection Act, 15 U.S.C. §§ 6501 - 6506 (1998). https://www.law.cornell.edu/uscode/text/15/chapter-91 

Coleman, R. D. (2019, May 6). Corporate censorship in social media, section 230 and a role for the states. https://www.jdsupra.com/legalnews/corporate-censorship-in-social-media-and-89549/ 

Communications Decency Act, 47 U.S.C. § 230 (1996). https://www.law.cornell.edu/uscode/text/47/230 

Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991) https://law.justia.com/cases/federal/district-courts/FSupp/776/135/2340509/ 

DeMarch, E. (2021, Aug. 16). Trump barred from Twitter, but Taliban spokesman tweets away. Fox News. https://www.foxnews.com/tech/trump-barred-from-twitter-but-taliban-spokesman-tweets-away 

Dershowitz, A. M. (2021, July 1). Affidavit in support of motion for preliminary injunction in Donald J. Trump et al. v. Twitter Inc., and Jack Dorsey in the U.S. District Court for the Southern District of Florida. 

Domen v. Vimeo, Inc., No. 20-616 (2d Cir. 2021). https://law.justia.com/cases/federal/appellate-courts/ca2/20-616/20-616-2021-07-21.html 

Electronic Frontier Foundation (n.d.) CDA 230: The most important law protecting internet speech. https://www.eff.org/issues/cda230 

Enigma Software Grp. USA, LLC v. Malwarebytes, Inc., 946 F.3d 1040 (9th Cir. 2019). https://law.justia.com/cases/federal/appellate-courts/ca9/17-17351/17-17351-2019-09-12.html 

Executive Order 13925, 85 Fed. Reg. 34079 (June 2, 2020). https://www.federalregister.gov/documents/2020/06/02/2020-12030/preventing-online-censorship

Executive Order 14029, 86 Fed. Reg. 27025 (May 19, 2021). https://www.federalregister.gov/documents/2021/05/19/2021-10691/revocation-of-certain-presidential-actions-and-technical-amendment

Executive Order 14036, 86 Fed. Reg. 36987 (July 14, 2021). https://www.federalregister.gov/documents/2021/07/14/2021-15069/promoting-competition-in-the-american-economy 

Facebook, Inc. (2021). FB Earnings Presentation Q2 2021. https://s21.q4cdn.com/399680738/files/doc_financials/2021/q2/Q2-2021_Earnings-Presentation.pdf 

Federal Trade Commission v. Facebook, Inc. (2021). First amended complaint for injunctive and equitable relief. Filed in the U.S. District Court for the District of Columbia. Case 1:20-cv-03590-JEB. https://www.ftc.gov/system/files/documents/cases/ecf_75-1_ftc_v_facebook_public_redacted_fac.pdf 

Feiner, L. (2021, Mar. 31). Facebook removes video interview with Trump, citing his ban from the platform. CNBC. https://www.cnbc.com/2021/03/31/facebook-removes-video-interview-with-trump-citing-his-ban-from-the-platform.html 

Flood, B. (2020, Oct. 15). Twitter's double standard emerges after NY Post Hunter Biden story blocked, other media get pass, critics say. Fox News. https://www.foxnews.com/media/twitter-double-standard-hunter-biden-claims-censor 

Gaines, D. (2021, Feb. 18). Tech groups, U.S. Chamber sue to halt Maryland digital ad tax. Maryland Matters. https://www.marylandmatters.org/2021/02/18/tech-groups-u-s-chamber-sue-to-halt-maryland-digital-ad-tax/ 

Google.com. (2021, Oct. 26). Search query: “Section 230.” https://perma.cc/VTX2-FWUA 

Guttman, A. (2020, July 8). Instagram ad revenues in the U.S. 2018-2021. Statista. https://www.statista.com/statistics/1104447/instagram-ad-revenues-usa/ 

Hiam v. HomeAway.com, Inc., 267 F.Supp.3d 338 (D. Mass. 2017). https://docs.justia.com/cases/federal/district-courts/massachusetts/madce/1:2016cv10360/178063/75 

Horwitz, J. (2021, Sept. 13). Facebook says its rules apply to all. Company documents reveal a secret elite that’s exempt. The Wall Street Journal. https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353 

Horwitz, J. & Scheck, J. (2021, Oct. 22). Facebook increasingly suppresses political movements it deems dangerous. The Wall Street Journal. https://www.wsj.com/articles/facebook-suppresses-political-movements-patriot-party-11634937358

Internet Tax Freedom Act, 47 U.S.C. § 151 (Note) (2016). https://www.law.cornell.edu/uscode/text/47/151 

Jane Doe One v. Oliver, 46 Conn. Supp. 406 (2000), aff'd, 68 Conn. App. 902 (2002). https://casetext.com/case/jane-doe-one-v-oliver-2

Kenton, W. (2021, Mar. 1). Regulatory capture. Investopedia. https://www.investopedia.com/terms/r/regulatory-capture.asp

Kelly, M. (2021, Feb. 5). Democrats take first stab at reforming Section 230 after Capitol riots. The Verge. https://www.theverge.com/2021/2/5/22268368/democrats-section-230-moderation-warner-klobuchar-facebook-google 

Khamenei, S.A.  [@khamenei_ir] (2020, May 22) 4. The Zionist regime is a deadly, cancerous growth and a detriment to this region. It will undoubtedly be uprooted and destroyed. Then, the shame will fall on those who put their facilities at the service of normalization of relations with this regime. [Tweet]. Twitter. https://twitter.com/khamenei_ir/status/1263749566744100864 

Klein, B. (2021, July 20). White House reviewing Section 230 amid efforts to push social media giants to crack down on misinformation. CNN. https://www.cnn.com/2021/07/20/politics/white-house-section-230-facebook/index.html 

Lemongello, S. and Rohrer, G. (2020, Sept. 12). Federal court rules that felons cannot vote in Florida if they owe fines or fees. Los Angeles Times. https://www.msn.com/en-us/news/politics/federal-court-rules-that-felons-cannot-vote-in-florida-if-they-owe-fines-or-fees/ar-BB18X0vx 

Lichtenberg, S. (April 2019). State universal service funds 2018: Updating the numbers. National Regulatory Research Institute. https://pubs.naruc.org/pub/3EA33142-00AE-EBB0-0F97-C5B0A24F755A 

Malwarebytes, Inc. v. Enigma Software Group, 592 U.S. _____ (2020). Statement of Justice Thomas respecting the denial of certiorari. https://www.supremecourt.gov/orders/courtorders/101320zor_8m58.pdf 

Marashlian, J. (2021, June 14). FCC USF fee factor decreases from 33.4% to 31.8% for Q3 2021. The CommLaw Group. https://commlawgroup.com/2021/fcc-usf-fee-factor-decreases-from-33-4-to-31-8-for-q3-2021/ 

McFall, C. (2021, March 10). Newt Gingrich says he’s back on Twitter after being locked out for Biden immigration slam. Fox News. https://www.foxnews.com/politics/newt-gingrich-says-hes-back-on-twitter-after-being-locked-out-for-biden-immigration-slam 

Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241. (1974). https://supreme.justia.com/cases/federal/us/418/241/ 

Miller v. California. 413 U.S. 15.  (1973). https://supreme.justia.com/cases/federal/us/413/15/ 

Mozilla Corp. v. Federal Communications Commission, 940 F.3d. 1 (D.C. Cir. 2019). https://www.leagle.com/decision/infco20191002213 

National Federation of Independent Businesses v. Sebelius, 567 U.S. 519. (2012). https://supreme.justia.com/cases/federal/us/567/519/ 

National Telecommunications and Information Administration. (2020). Petition for Rulemaking of the National Telecommunications and Information Administration before the Federal Communications Commission. https://www.ntia.gov/files/ntia/publications/ntia_petition_for_rulemaking_7.27.20.pdf 

Net Choice et al v. Ashley Moody et al. (June 30, 2021). Preliminary Injunction. Case 4:21-cv-00220-RH-MAF in the U.S. District Court for the Northern District of Florida. https://storage.courtlistener.com/recap/gov.uscourts.flnd.371253/gov.uscourts.flnd.371253.113.0_1.pdf 

Nelson, S. (2021, July 15). White House ‘flagging’ posts for Facebook to censor over COVID ‘misinformation.' The New York Post. https://nypost.com/2021/07/15/white-house-flagging-posts-for-facebook-to-censor-due-to-covid-19-misinformation/ 

Norwood v. Harrison, 413 U.S. 455. (1973). https://supreme.justia.com/cases/federal/us/413/455/#tab-opinion-1950417 

Obergefell v. Hodges. 576 U.S. 644. (2015). https://www.supremecourt.gov/opinions/14pdf/14-556_3204.pdf 

Packingham v. North Carolina. 582 U.S. _____. (2017) https://supreme.justia.com/cases/federal/us/582/15-1194/ 

Perrin, A. & Atske, S. (2021, Apr. 2). 7% of Americans don't use the internet. Who are they? Pew Research Center. https://www.pewresearch.org/fact-tank/2021/04/02/7-of-americans-dont-use-the-internet-who-are-they/ 

Prager University v. Google, 951 F.3d 991 (9th Circuit 2020) https://law.justia.com/cases/federal/appellate-courts/ca9/18-15712/18-15712-2020-02-26.html 

PruneYard Shopping Center v. Robins. 447 U.S. 74. (1980). https://supreme.justia.com/cases/federal/us/447/74/ 

Railway Employes v. Hanson. 351 US 225. (1956). https://supreme.justia.com/cases/federal/us/351/225/ 

Rasmussen, S. (2021, Aug. 1). 29% believe social media companies provide neutral platform. SR Poll Results. https://scottrasmussen.com/29-believe-social-media-companies-provide-neutral-platform/ 

Reed v. Town of Gilbert. 576 U.S. 155. (2015) https://supreme.justia.com/cases/federal/us/447/74/ 

Reviving Competition, Part 2: Saving the Free and Diverse Press: Hearings Before the Committee on the Judiciary, U.S. House of Representatives, 117th Congress (2021) (Testimony of Clay Travis)

Rondon, S. M. (2021, July 14). Selective censorship: Twitter blocked Trump for 'inciting violence' but maintains account of Cuba's Diaz-Canel. El American. https://elamerican.com/twitter-blocks-trump-maintains-diaz-canel/ 

Roth v. United States. 354 U.S. 476. (1957). https://supreme.justia.com/cases/federal/us/354/476/ 

Rumsfeld v. Forum for Academic and Institutional Rights, Inc. 547 U.S. 47. (2006). https://supreme.justia.com/cases/federal/us/547/47/ 

Seattle Organic SEO (2020, Dec. 18). Comscore search rankings October 2020 – Bing creeps up the search marketing share ranks. Blog. https://seattleorganicseo.com/comscore-search-rankings-oct-2020-bing-creeps-up-the-search-marketing-share-ranks/ 

Shearer, E. (2021, Jan. 12).  More than eight-in-ten Americans get news from digital devices. Pew Research Center. https://www.pewresearch.org/fact-tank/2021/01/12/more-than-eight-in-ten-americans-get-news-from-digital-devices/ 

Sikhs for Justice Inc. v. Facebook, Inc., 144 F. Supp. 3d 1088 (N.D. Cal. 2015), aff’d, 697 Fed. Appx. 526 (9th Cir. 2017). https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=2089&context=historical 

Skinner v. Railway Labor Executives’ Ass’n, 489 U.S. 602 (1989). https://supreme.justia.com/cases/federal/us/489/602/ 

Soave, R. (2021, March 16). The Texas bill that prohibits social media censorship is a mess. Reason Magazine. https://reason.com/2021/03/16/texas-social-media-bill-sb12-political-censorship/

South Dakota v. Wayfair Inc. 585 U.S. ____. (2018). https://supreme.justia.com/cases/federal/us/585/17-494/ 

Stratton-Oakmont, Inc. v. Prodigy Services Co., 1995 N.Y. Misc. LEXIS 229 (N.Y. Sup. Ct.) https://h2o.law.harvard.edu/cases/4540 

Srinivasan, D. (2019). The antitrust case against Facebook: A monopolist's journey towards pervasive surveillance in spite of consumers' preference for privacy. Berkeley Business Law Journal, 16(1), 39-101. https://lawcat.berkeley.edu/record/1128876/files/fulltext.pdf 

Sutton v. Providence St. Joseph Medical Center, 192 F.3d 826 (9th Cir. 1999). https://law.justia.com/cases/federal/appellate-courts/F3/192/826/594035/ 

Take on Big Tech. (2021). The America First Policy Institute. https://www.takeonbigtech.com/

The Root [@TheRoot] (2021, March 17) Whiteness Is a Pandemic http://dlvr.it/RvqcCm [Tweet]. Twitter. https://twitter.com/theroot/status/1372237187148701699     

Teatotaller, LLC v. Facebook, Inc., 242 A.3d 814 (N.H. 2020). https://www.leagle.com/decision/innhco20200724417 

Turley, J. (2021, April 1). North Dakota Senate kills bill targeting social media companies for censorship. Grand Forks Herald. https://www.grandforksherald.com/news/government-and-politics/6965405-North-Dakota-Senate-kills-bill-targeting-social-media-companies-for-censorship 

Turner Broadcasting System, Inc. v. FCC. 512 U.S. 622. (1994) https://supreme.justia.com/cases/federal/us/512/622   

Twitter Inc. (2021a, Jan. 8). Permanent suspension of @realDonaldTrump. https://blog.twitter.com/en_us/topics/company/2020/suspension.html 

Twitter, Inc. (2021b, July 22). Twitter announces second quarter 2021 results. PR Newswire. https://www.prnewswire.com/news-releases/twitter-announces-second-quarter-2021-results-301339855.html 

U.S. Census Bureau. (2020). National Demographic Analysis Tables: 2020. Table 1. https://www.census.gov/data/tables/2020/demo/popest/2020-demographic-analysis-tables.html 

U.S. Department of Justice. (2020). Department of Justice’s review of section 230 of the Communications Decency Act of 1996. https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996 

U.S. Federal Communications Commission. (2021). Carr calls for ending big tech's free ride on the internet. FCC News. https://docs.fcc.gov/public/attachments/DOC-372688A1.pdf 

U.S. Telecom Association v. FCC, 855 F.3d 381 (D.C. Cir. 2017). https://www.leagle.com/decision/infco20170501305 

Vogels, E., Perrin, A., and Anderson, M. (2020, Aug. 19). Most Americans think social media sites censor political viewpoints. Pew Research Center. https://www.pewresearch.org/internet/2020/08/19/most-americans-think-social-media-sites-censor-political-viewpoints/ 

Volokh, E. (2021). Treating social media platforms as common carriers? Journal of Free Speech Law. 1(1), 377-462. https://www.journaloffreespeechlaw.org/volokh.pdf 

Wooley v. Maynard. 430 U.S. 705. (1977) https://supreme.justia.com/cases/federal/us/430/705/#tab-opinion-1952177 

YouTube.com (2020, May 20). COVID-19 medical misinformation policy. https://support.google.com/youtube/answer/9891785?hl=en 

Varadarajan, T. (2021, Jan. 15). The ‘Common Carrier’ Solution to Social-Media Censorship. The Wall Street Journal. https://www.wsj.com/articles/the-common-carrier-solution-to-social-media-censorship-11610732343

Zauderer v. Office of Disc. Counsel. 471 U.S. 626. (1985). https://supreme.justia.com/cases/federal/us/471/626/
 


[1] The Supreme Court has not considered the scope of Section 230 immunity.

[2] President Trump repeatedly called on Congress to repeal Section 230 outright. By itself such a repeal would encourage platforms to moderate content even more aggressively, by increasing their liability for not removing potentially illegal materials. However, in practice most major platforms could not function under the pre-Section 230 legal framework. Google, for example, would face crushing liability if it could be sued every time it linked to defamatory material. Repealing Section 230 would thus force the platforms to the bargaining table to develop an alternative framework for liability protection and content moderation. 

[3] Some lawyers argue that the Dormant Commerce Clause presents a third legal obstacle. This doctrine broadly holds that states cannot pass protectionist policies that discriminate against interstate commerce unless Congress expressly allows them to do so. Under this argument, states cannot attempt to regulate the content moderation policies of out-of-state firms. While there is no case law directly on point, this is a difficult argument to make against state anti-censorship legislation that only applies to content that appears within the state and that does not differentiate between in-state and out-of-state firms.

[4] In Wooley v. Maynard the Supreme Court struck down a New Hampshire law requiring residents to display the state motto “Live Free or Die” on their vehicle license plates. The Court held that the First Amendment prohibited the government from requiring an individual to participate in the dissemination of an ideological message by displaying it on his private property in a manner and for the express purpose that it be observed and read by the public.

[5] President Trump recently filed a class-action lawsuit against Facebook, Twitter, and YouTube for censoring users. President Trump’s complaint argues that Section 230’s broad immunity for content moderation is itself unconstitutional as applied against the plaintiffs. The America First Policy Institute’s Constitutional Litigation Partnership is assisting with this litigation.

[6] They do so on the theory that under (c)(1) platforms cannot be treated as editors or publishers, and a core function of an editor is to decide what to publish or not publish. Therefore, these courts have held that Section 230 bars any suit that would hold platforms liable for not publishing content.

[7] Additionally, 230(c) is entitled “Protection for ‘Good Samaritan’ Blocking and Screening of Offensive Material.” Construing 230(c) to immunize all content moderation – regardless of good faith or whether pertaining to offensive material – gives no effect to the words “Good Samaritan” or “Offensive Material” in the subject heading. 

[8] As the 9th Circuit Court of Appeals explained in Sutton v. St. Joseph Medical Center (1999): “When a statute contains a list of specific items and a general item, we usually deem the general item to be of the same category or class as the more specifically enumerated items. This interpretation is supported by two principles of statutory interpretation, noscitur a sociis and ejusdem generis. The first means that a word is understood by the associated words, the second, that a general term following more specific terms means that the things embraced in the general term are of the same kind as those denoted by the specific terms.” (cleaned up)

[9] In Enigma Software Group v. Malwarebytes, Inc. (2019) the Ninth Circuit considered applying the ejusdem generis canon to the term “otherwise objectionable”, but decided against it, holding that the enumerated terms lacked a sufficiently similar meaning for the canon to apply. This was the case that prompted Justice Thomas’ observation that lower courts have read much more sweeping immunity into Section 230 than the statute warrants. The Ninth Circuit’s conclusion is belied by subsequent research documenting that the common thread is the types of indecent material Congress legislated against in the Communications Decency Act, and that Congress expressly differentiated between such indecent content and content of a political or religious nature (Candeub & Volokh, 2021). 

[10] See for example the 9th Circuit’s decision in Barnes v. Yahoo!, Inc. (2009), the New Hampshire Supreme Court’s decision in Teatotaller v. Facebook Inc. (2020), and the Massachusetts federal district court’s decision in Hiam v. HomeAway.com, Inc. (2017).

[11] The FCC order rolling back the Obama administration’s net neutrality rule held that internet service providers (ISPs) were “information services” under Title I of the Communications Act, not “telecommunications services” under Title II. That FCC order also categorically preempted state net neutrality regulations. In subsequent litigation the D.C. Circuit Court of Appeals invalidated the preemption provision, holding that if ISPs were not covered by Title II then the FCC had no statutory authority to categorically preempt state laws imposing common carrier duties. See Mozilla Corp. v. Federal Communications Commission (2019). That holding implies that states can generally regulate ISPs under state law.

[12] Under the Mozilla v. FCC holding, the FCC would have to affirmatively classify online platforms as “telecommunications services” and not “information services” for the Communications Act to preempt state legislation. The Biden administration has shown no interest in making this policy shift.

[13] One of the primary ways regulatory capture operates is through administrative agencies hiring industry experts. The agencies need these employees’ specialized expertise, but the employees also necessarily bring their industry’s perspective to the agency (Kenton, 2021). State courts are unlikely to hire many judges or clerks from the tech industry, largely neutralizing this channel for regulatory capture.

[14] These cases have largely been decided on Section 230 grounds, with the courts not reaching the constitutional question.

[15] For example, Judge Hinkle overturned a Florida state law defining marriage as a union of a man and a woman in Brenner v. Scott (2014), defying the then-controlling Supreme Court precedent of Baker v. Nelson (1972) before the U.S. Supreme Court held such laws unconstitutional in Obergefell v. Hodges (2015). In 2020 Judge Hinkle ruled that a Florida law reinstating felons’ right to vote only after they paid all fines, fees, and restitution required by their sentence imposed an unconstitutional poll tax. His decision was overturned by the 11th Circuit Court of Appeals (Lemongello & Rohrer, 2020).

[16] A severability clause provides that if a court overturns part of a law the rest continues in force.

[17] The author of this report led the White House inter-agency working group that developed President Trump’s executive order combatting online censorship (Exec. Order 13925). 

[18] See for example 47 U.S.C. § 201.

[19] By contrast, the exception in section 1(a)(iii) of the model bill for obscene, filthy, harassing, etc. content does not raise similar constitutional concerns unless the courts also hold Section 230(c)(2) unconstitutional. The Supreme Court allows content-based restrictions on speech if they serve a compelling governmental interest and are narrowly tailored using the least restrictive means to achieve that purpose (Reed v. Town of Gilbert, 2015). States have a compelling interest in avoiding preemption under federal law. These exceptions are necessary to comport with 230(c)(2)’s immunity for certain types of content moderation. Further, the provision is narrowly tailored using the least restrictive means possible, as it follows the federal statutory language almost verbatim.

[20] The 2nd Circuit recently held in Domen v. Vimeo (2021) that (c)(2) does immunize deplatforming users. The 2nd Circuit is the first appeals court to reach this conclusion, and its decision is binding precedent only in the 2nd Circuit, which covers Connecticut, New York, and Vermont. If courts struck down the categorical ban on deplatforming users, however, section 1 would still generally prohibit platforms from deplatforming users in bad faith. 

[21] Judge Hinkle enjoined Florida’s ban on deplatforming political candidates in part because it was “about as content-based as it gets.” The deplatforming ban applied to political candidates, but not to otherwise similarly situated Floridians (NetChoice vs. Moody, 2021, p. 24). The model legislation avoids this problem.

[22] Platforms would have to show content violated plain and particular TOS to claim they acted consistently with TOS. So the bill does not allow platforms to institute TOS that permit them to remove content at will, and then argue these TOS made all content restrictions definitionally good faith.

[23] See for example the Supreme Court’s rulings in Roth v. United States (1957) and Miller v. California (1973). Transmitting obscene materials is also generally illegal under federal law. See 18 U.S.C. § 1460 et seq.

[24] This language is consistent with the legal arguments that Section 230 violates the First Amendment by encouraging platforms to take down content the government cannot directly censor. Section 1(d) of the model legislation covers only constitutionally unprotected content. The government can directly restrict these materials, and it may accordingly encourage platforms to take down such content as well.

[25] The Supreme Court has held the government may generally require businesses to disclose their terms of service (Zauderer v. Office of Disc. Counsel, 1985).

[26] The model bill does not provide for separate penalties for failure to maintain plain and particular TOS available to users at the time of use. This is because failure to do so prevents platforms from claiming content moderation was consistent with TOS. The penalty for non-compliance is thus self-enforcing, as it makes it harder for platforms to prevail in bad faith content moderation claims.

[27] The fees are levied on active users aged 13 and older. The Children’s Online Privacy Protection Act (1998) regulates the collection of private information from children under age 13. Many social media platforms ban users under age 13 to avoid the cost of complying with this legislation. Requiring fee payments for users 13 and older aligns fee administration with this existing threshold.

[28] Facebook reported revenues per U.S. and Canadian user of $53.01 in Q2 2021, excluding Instagram revenues (Facebook Inc., 2021, p. 4). The model legislation suggests a fee of $5 per state user of personal social networking services.

Facebook’s annual U.S. Instagram revenues are approximately $18.1 billion, or $4.5 billion per quarter (Guttman, 2020). The Pew Research Center estimates that approximately 40 percent of American adults use Instagram, and 88 percent of those do so at least every few weeks (Auxier & Anderson, 2021, pp. 14, 17). The Census Bureau estimates there were approximately 280 million Americans age 13 and older in 2020 (U.S. Census Bureau, 2020). These figures imply 99 million regular Instagram users and quarterly Facebook revenues of about $45 per Instagram user. The draft legislation suggests fees of $4 per user of photo sharing services.

Twitter reported U.S. revenues of approximately $700 million in Q2 2021 (Twitter, Inc., 2021b). Approximately 23 percent of Americans use Twitter, and of those, 84 percent use it at least once every few weeks (Auxier & Anderson, 2021, pp. 14, 17). These figures and the Census Bureau population estimate imply that approximately 54 million Americans actively use Twitter, with quarterly U.S. revenues of approximately $13 per user. The model suggests a quarterly per-user fee of $1.50 for microblogging platforms.

In Q2 2021 Alphabet, Inc. (the parent company of Google and YouTube) had total U.S. revenues of $28.2 billion (Alphabet, Inc., 2021). If Alphabet’s U.S. revenue sources are similar to the firm’s global distribution, approximately $16.3 billion of its U.S. revenues are attributable to Google search advertising and $3.2 billion to YouTube advertising. Further, 93 percent of American adults use the internet (Perrin & Atske, 2021). Assuming every American who uses the internet also uses Google search, Alphabet’s quarterly search advertising revenue is approximately $60 per user; if some internet users do not regularly use Google, per-user revenues are somewhat higher. The model legislation suggests per-user fees for general search platforms of approximately $7.50 per user.

About 81 percent of American adults use YouTube, and of those, 92 percent use it at least once every few weeks (Auxier & Anderson, 2021, pp. 14, 17). This suggests three-quarters of American adults, approximately 210 million people, regularly use YouTube, implying quarterly Alphabet YouTube revenues of about $15 per user. The draft suggests charging video sharing platforms a quarterly fee of $1.50 per active state user.
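
The arithmetic behind these per-user figures can be reproduced with a short back-of-the-envelope calculation. The Python sketch below is illustrative only: it applies the Pew usage shares cited above to the Census estimate of 280 million Americans age 13 and older, as this footnote does, and divides the cited quarterly revenues by the resulting user counts. The function and variable names are illustrative and are not part of the model legislation or the cited sources.

# Illustrative check of the per-user revenue estimates in footnote 28.
# Inputs are the quarterly revenue and usage figures cited above; the
# division and rounding here are for illustration only.

def per_user_revenue(quarterly_revenue, users):
    """Quarterly revenue divided by estimated active users."""
    return quarterly_revenue / users

population_13_plus = 280e6  # Census estimate, Americans age 13 and older (2020)

# Instagram: ~40% usage share, 88% of users active at least every few weeks
instagram_users = population_13_plus * 0.40 * 0.88   # ~99 million
print(per_user_revenue(4.5e9, instagram_users))      # ~$45 per quarter

# Twitter: ~23% usage share, 84% active at least every few weeks
twitter_users = population_13_plus * 0.23 * 0.84     # ~54 million
print(per_user_revenue(700e6, twitter_users))        # ~$13 per quarter

# Google search: assume every internet user (93%) also uses Google search
search_users = population_13_plus * 0.93             # ~260 million
print(per_user_revenue(16.3e9, search_users))        # ~$60 per quarter

# YouTube: ~81% usage share, 92% active at least every few weeks
youtube_users = population_13_plus * 0.81 * 0.92     # ~210 million
print(per_user_revenue(3.2e9, youtube_users))        # ~$15 per quarter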

[29] Existing commercial services such as Comscore estimate the number of Americans who use different platform services and the amount of time they spend on these platforms. The major platforms use these services to gauge their market share vis-à-vis other platforms (Federal Trade Commission v. Facebook, 2021, pp. 63-66). States could purchase data from these services to assess platforms’ in-state users and the platforms’ respective market shares.

[30] See footnote 10, supra. Note that some district courts have held that Section 230 does render contractual content moderation commitments unenforceable. See for example Jane Doe One v. Oliver (2000).

[31] Note that the federal universal service “contribution factor” is approximately one-third the cost of phone services (Marashlain, 2021). Fees capped at 15 percent of platform revenues therefore fall well within existing uses of the government’s taxing power to fund universal service programs.

[32] See 47 U.S.C. § 151, note, § 1105(5): “The term ‘internet access’ – (A) means a service that enables users to connect to the Internet to access content, information, or other services offered over the Internet … (E) includes a homepage, electronic mail and instant messaging (including voice- and video-capable electronic mail and instant messaging), video clips, and personal electronic storage capacity, that are provided independently or not packaged with Internet access.”

[33] See 47 U.S.C. § 151, note, § 1107: “Nothing in this Act shall prevent the imposition or collection of any fees or charges used to preserve and advance Federal universal service or similar State programs … authorized by section 254 of the Communications Act of 1934 (47 U.S.C. 254).”

[34] The eight states that do not are Alabama, Delaware, Florida, Hawaii, Massachusetts, New Jersey, Tennessee, and Virginia (Lichtenberg, 2019, p. 2).

[35] An FCC commissioner recently proposed extending federal universal service fees to tech platforms on this basis (U.S. Federal Communications Commission, 2021).

[36] For example, state universal service funds subsidize providing telecommunications services to rural areas, support broadband deployment, and provide equipment and connectivity for schools and libraries (Lichtenberg, 2019, pp. 8-12).

[37] The ITFA defines a tax as “any charge imposed by any governmental entity for the purpose of generating revenues for governmental purposes, and is not a fee imposed for a specific privilege, service, or benefit conferred.” See 47 U.S.C. § 151, note, § 1105(8)(A)(i). States can accordingly avoid ITFA preemption by levying fees that support a specific privilege, service, or benefit for affected entities. For example, the ITFA would not preempt large state fees on social media platforms that funded initiatives to educate the platforms’ in-state employees and clients about the benefits of free speech and open discourse, or the dangers of cancel culture.

[38] This figure appears to contradict Justice Thomas’ claim that Google intermediates between webpages and users looking for data approximately 90 percent of the time, but both statements are accurate. Comscore provides data on total internet searches across all platforms and shows Google performs approximately 62 percent of such searches. Google users, however, are much more likely to click through to links than users of other services. As a result, Google is responsible for approximately 90 percent of internet search traffic to other webpages despite performing only about 62 percent of internet searches.
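
A simple calculation illustrates how both figures can hold at once. The Python sketch below is hypothetical: it takes Google’s roughly 62 percent share of searches as given and shows the relative click-through advantage (roughly 5.5-fold, an illustrative value, not a measured one) at which Google would account for about 90 percent of outbound search traffic.

# Illustrative reconciliation of a 62% search share with a ~90% share of
# outbound search traffic. The relative click-through rate is hypothetical.

google_search_share = 0.62
other_search_share = 0.38

def traffic_share(relative_ctr):
    """Share of outbound search traffic attributable to Google, given how
    much more often Google users click through relative to other users."""
    return (google_search_share * relative_ctr) / (
        google_search_share * relative_ctr + other_search_share * 1.0)

# A roughly 5.5x click-through advantage yields ~90% of outbound traffic.
print(traffic_share(5.5))   # ~0.90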
