Optimal social media content moderation and platform immunities

European Journal of Law and Economics

Abstract

This article presents a model of lawmakers’ choice between implementing a new content moderation regime that provides for platform liability for user-generated content and continuing platform immunity for that content. The model demonstrates that lawmakers prefer platform immunity, even when incivility is increasing, if the costs of implementing a platform liability regime exceed the costs of enforcing status quo law. In addition, insofar as implementation of a platform liability regime is coupled with new speech restrictions that are unconstitutional or prohibitively costly, lawmakers prefer immunity, though platforms remain free to set strong content moderation policies consistent with existing law.

Notes

  1. 52 U.S.C. § 30121 and 11 C.F.R. § 110.20.

  2. For instance, the GDPR may raise barriers to entry, keeping smaller platform entrants (with smaller content moderation budgets) out of the market and thereby reducing overall incivility levels.

  3. See, e.g., § 230 of the U.S. Communications Decency Act.

  4. Certainly the GDPR can be understood as a response to the new threats of social media and the Internet, and thus be seen as responsive legislation. However, the GDPR does not provide for platform liability for illegal user-generated speech.

  5. Internal compliance policies are platform policies for compliance with all laws. Under a platform liability regime, these include policies for compliance with rules that hold platforms responsible for user-generated content in addition to all other rules that govern platform behavior. Under immunity, compliance policies only address rules that govern platform behavior and exclude consideration of liability for user-generated speech.

  6. This assumption is predicated upon the reasoning that the higher the level of civility, the more visible the policy outcome.

  7. First-party sanctions take the form of intrapersonal guilt. Second-party sanctions take the form of interpersonal disapproval. Third-party sanctions take the form of state-imposed fines, injunctions, incarceration, expulsion, and so on. The penalty is assumed to be net of any contribution from sources of intrapersonal pride and interpersonal approval conferred by acting incivilly. Thus, the model allows for a user to choose an incivil act that actually provides a net gain. In that case, the penalty would increase that user’s utility and would be better understood as a reward. The term penalty is used here for exposition, given that incivility carries a negative connotation. Note, too, that in jurisdictions with lax policies, the state-imposed penalty is simply 0. In that case, an incivil user immune to guilt and disapproval will experience a net penalty of 0.
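     As a purely illustrative calculation (the numbers here are assumed for exposition, not drawn from the model): if an incivil act triggers guilt of 1 and disapproval of 1 but confers pride and approval worth 3, the net penalty is \(1+1-3=-1\), so the act carries a net reward of 1 and increases the user’s utility.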

  8. The results of the model are identical for any action \(\theta _{p}\) chosen by the platform so long as that action reflects its choice of the location of its internal content moderation regime. For instance, \(\theta _{p}\) can represent a chosen content moderation regime that reflects a platform’s chosen level of advertising revenue, where content regimes further from y can be interpreted as advertising to larger target audiences that include users who post content prohibited by law, and those users’ followers, readers, and viewers. Thus, the model permits platforms to increase profits, for instance, by increasing the distance between y and \(\theta _{p}\).

  9. Given the current legislative proposals referenced in the introduction, the foregoing analysis seeks to evaluate moves from immunity under status quo speech screening policies to platform liability with or without changes to screening policies.

  10. Summary results are presented in Table 1.

Acknowledgements

The author wishes to thank two anonymous referees.

Author information

Correspondence to Frank Fagan.

Appendix

Proof of Proposition 1

Lawmakers prefer liability when \(-\psi _{1}\left| y\right| -r_{1}{\bar{h}}-w_{1}{\bar{k}}+\delta V\left( y-{\bar{k}}\right) >-r_{0}{\bar{h}}-w_{0}{\bar{k}}+\delta V\left( y-{\bar{k}}\right)\). Consider platform penalties o. Recall that platforms set optimal non-compliance so that \(h_{p}=\frac{y-\alpha _{p}}{o_{\lambda }+d_{\lambda }}+ \frac{d_{\lambda }y}{o_{\lambda }+d_{\lambda }}\). Increases (decreases) in o decrease (increase) \(h_{p}\) for \(\lambda =0,1\). It follows that \({\bar{h}}\), given by \(\int h_{p}f\left( \alpha \right) d\alpha\), decreases (increases) as well, for \(\lambda =0,1\). However, \(r_{1}>r_{0}\). Thus, for any o, platform enforcement costs under liability are greater than platform enforcement costs under immunity. As a result, lawmakers prefer liability (immunity) when o is increasing (decreasing), holding other factors constant.
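To make the platform-side comparative static explicit (a derivative the proof asserts but does not display), combine the two terms of \(h_{p}\) over their common denominator and differentiate:

\[ \frac{\partial h_{p}}{\partial o_{\lambda }}=-\frac{\left( y-\alpha _{p}\right) +d_{\lambda }y}{\left( o_{\lambda }+d_{\lambda }\right) ^{2}}<0, \]

which is negative whenever optimal non-compliance \(h_{p}\) is positive. Aggregation over \(f\left( \alpha \right)\) preserves the sign, so \({\bar{h}}\) falls as o rises.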

Lawmakers again prefer liability when \(-\psi _{1}\left| y\right| -r_{1}{\bar{h}}-w_{1}{\bar{k}}+\delta V\left( y-{\bar{k}}\right) >-r_{0}{\bar{h}}-w_{0}{\bar{k}}+\delta V\left( y-{\bar{k}}\right)\). Consider user penalties s. Recall that users set optimal incivility so that \(k_{i}=\frac{y-\sigma _{i}}{1+s_{\lambda }+b_{\lambda }}+ \frac{b_{\lambda }y}{1+s_{\lambda }+b_{\lambda }}-\frac{1}{2 \left( 1+s_{\lambda }+b_{\lambda }\right) }\beta W_{\omega }\). Decreases (increases) in s increase (decrease) \(k_{i}\) for \(\lambda =0,1\). It follows that \({\bar{k}}\), given by \(\int k_{i}f\left( \sigma \right) d\sigma\), increases (decreases) as well, for \(\lambda =0,1\). However, \(w_{1}<w_{0}\). Thus, for any s, user enforcement costs under liability are less than user enforcement costs under immunity. As a result, lawmakers prefer liability (immunity) when s is decreasing (increasing), holding other factors constant. \(\square\)
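The user-side comparative static can be displayed the same way, under the same positivity assumption:

\[ \frac{\partial k_{i}}{\partial s_{\lambda }}=-\frac{\left( y-\sigma _{i}\right) +b_{\lambda }y-\frac{1}{2}\beta W_{\omega }}{\left( 1+s_{\lambda }+b_{\lambda }\right) ^{2}}<0 \]

whenever optimal incivility \(k_{i}\) is positive; aggregation over \(f\left( \sigma \right)\) preserves the sign, so \({\bar{k}}\) moves inversely with s.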

Proof of Proposition 2

When moving from immunity to liability, lawmakers incur costs \(\psi _{1}\left| y\right| +r_{1}{\bar{h}}+w_{1}{\bar{k}}\). Consider, first, the radicalness of the change to the speech screening policy. Enactment costs \(\psi\) are directly proportional to the distance of the new policy from the status quo, which is normalized to 0. Smaller distances from the status quo, \(y-0\), result in smaller \(\psi\).

Consider, second, the distances between the new policy location and the platforms’ and users’ ideal policy locations, \(y-\alpha _{p}\) and \(y-\sigma _{i}\), respectively. Recall that platforms set optimal non-compliance so that \(h_{p}=\frac{y-\alpha _{p}}{o_{\lambda }+d_{\lambda }}+\frac{d_{\lambda }y}{o_{\lambda }+d_{\lambda }}\) and that users set optimal incivility so that \(k_{i}=\frac{y-\sigma _{i}}{1+s_{\lambda }+b_{\lambda }}+\frac{b_{\lambda }y}{1+s_{\lambda }+b_{\lambda }}-\frac{1}{2\left( 1+s_{\lambda }+b_{\lambda }\right) }\beta W_{\omega }\). Decreases in \(y-\alpha _{p}\) and \(y-\sigma _{i}\) reduce \(h_{p}\) and \(k_{i}\), respectively (the relevant partial derivatives are displayed after this proof). It follows that \({\bar{h}}\) and \({\bar{k}}\) decrease, and that lawmakers incur lower enforcement costs r and w. \(\square\)
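As a remark on the monotonicity step above, and holding the remaining terms of each expression fixed (as the proof implicitly does), the first terms give

\[ \frac{\partial h_{p}}{\partial \left( y-\alpha _{p}\right) }=\frac{1}{o_{\lambda }+d_{\lambda }}>0 \qquad \text{and} \qquad \frac{\partial k_{i}}{\partial \left( y-\sigma _{i}\right) }=\frac{1}{1+s_{\lambda }+b_{\lambda }}>0, \]

so shrinking either distance lowers optimal non-compliance and optimal incivility in proportion to the inverse of the relevant penalty terms.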

Proof of Proposition 3

Given a constitutionally fixed speech screening policy, lawmakers cannot change y. Because the policy remains at the status quo, normalized to 0, enactment costs \(\psi _{1}\left| y\right|\), given by \(y-0\), equal 0. Lawmakers, therefore, are faced with \(r_{1}{\bar{h}}+w_{1}{\bar{k}}>r_{0}{\bar{h}}+w_{0}{\bar{k}}\), where \(r_{1}>r_{0}\) and \(w_{1}<w_{0}\), and prefer immunity when \(\left( r_{1}-r_{0}\right) {\bar{h}}>\left( w_{0}-w_{1}\right) {\bar{k}}\) for any level of aggregate incivility \({\bar{k}}\). \(\square\)
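Rearranging the cost comparison makes the preference condition transparent:

\[ r_{1}{\bar{h}}+w_{1}{\bar{k}}>r_{0}{\bar{h}}+w_{0}{\bar{k}} \quad \Longleftrightarrow \quad \left( r_{1}-r_{0}\right) {\bar{h}}>\left( w_{0}-w_{1}\right) {\bar{k}}, \]

with both sides positive given \(r_{1}>r_{0}\) and \(w_{1}<w_{0}\): immunity is preferred when liability’s additional platform-side enforcement cost outweighs its user-side enforcement saving.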

About this article

Cite this article

Fagan, F. Optimal social media content moderation and platform immunities. Eur J Law Econ 50, 437–449 (2020). https://doi.org/10.1007/s10657-020-09653-7
