
Should the Government Regulate Polarizing and Misleading Speech on Social Media?

By Herbert Lin and Marshall Van Alstyne. If you enjoy this piece, you can read more Political Pen Pals debates here.


Social Media Needs Transparency, Not Regulation

By Herbert Lin – Senior Research Scholar, Center for International Security and Cooperation, Stanford University

Government regulation to prevent the spread of misinformation and disinformation is neither desirable nor feasible. It is not desirable because any process developed to address the problem cannot be made immune to political co-optation. Nor is it feasible without significant departures from First Amendment jurisprudence and clear definitions of misinformation and disinformation. Nevertheless, government regulation does have an important role to play in increasing the transparency with which social media companies operate—transparency that would subject such companies to greater public scrutiny and increase the pressure to mitigate the worst effects of polarization.

Defining Misinformation and Disinformation

What are misinformation and disinformation? A common understanding is that misinformation is information that is not true and disinformation is misinformation disseminated with the awareness that it is not true. “The moon is made of green cheese” is misinformation, but the same phrase could count as disinformation when I utter it to my toddler daughter (in fairness, she didn’t believe me for a second).

But information related to online polarization goes far beyond that which is true or false. Exhortations, opinions, and questions are neither true nor false—as examples, consider statements such as “Republicans are more patriotic than Democrats” or “Didn’t I see you at a Nazi rally?” A statement can have entirely different implications depending on which words are emphasized. Speakers can claim that their comments were intended to be taken humorously or sarcastically rather than seriously. Conspiracy theories in general cannot be falsified, as it is impossible to prove the negative.  

Given the wide variety of speech that is potentially misleading depending on the context, developing a precise and narrow definition of speech to be regulated seems a daunting task.

The Causes of Online Polarization

The notion that limiting the spread of misinformation and disinformation will reduce online polarization depends on the idea that partisan affiliation results from exposure to the content of social media messages. In this view, social media encourages people to affiliate with others of similar political perspectives and limits their information to that which is psychologically comfortable to them. The result, according to this perspective, is that they become less willing to accommodate other political views—that is, they become more politically polarized.

This content-based perspective on polarization captures some of the important elements of the polarization process, but it neglects the role of social identity in shaping partisanship. A second perspective, centered on what is often called affective polarization, points to the psychological mechanisms underlying the formation of group identity, which can be driven by small and arbitrary differences in group characteristics. Individuals internalize social media messages because they reflect a group consensus. Thus, the flimsiest of rationales—quite common on social media—often suffices to justify an individual’s political views.

Regulating misinformation and disinformation might plausibly and partially reduce the most extreme forms of such content. But it is unclear how it would address the problem of confirmation bias in how people seek or retain information. To have a significant impact on flimsy rationales for outlandish positions, regulation would have to suppress entirely any support for one point of view or another, and that is a world that should terrify us all.

Departing From First Amendment Jurisprudence 

Under current First Amendment jurisprudence, content-based government regulation of speech is subject to a standard of strict scrutiny. It is permissible only in support of compelling governmental interests with the narrowest means possible so that only “bad” speech is restricted—and “bad” is narrowly defined. While certain types of expression can be regulated (e.g., commercial speech can be regulated to prevent fraudulent advertising, and child pornography can be forbidden), speech regulations must generally clear a high bar. To the extent that the regulated content is political speech, barriers to regulation are even higher, as political speech is among the most protected categories of speech.  

One metaphor frequently used to describe the operation of the First Amendment is the marketplace of ideas, in which good ideas are supposed to push out bad ones. That metaphor, however, is potentially misleading in today’s information environment, pervaded as it is by social media. Markets can fail, and when they do, the government may step in to remediate the failure. Thus, the operative question is this: To the extent that there is failure today in the marketplace of ideas, how should the U.S. government respond? Is the social media environment—with all of its false, misleading, and inauthentic statements that manipulate the political process—so pervasive and destructive that the nation should consider regulating such speech? And if so, can such regulation be tailored to minimize the dangers of undue restrictions on the marketplace of ideas? I believe the answer is no.

Improving Transparency and Targeted Regulation

A government-based process for regulating misinformation and disinformation could make sense if the government were broadly trusted to act in the interests of its constituents. This trust exists in many contexts today—for example, many people trust weather reports. But when the president of the United States has freely extolled the virtues of “truthful hyperbole,” displayed an altered NOAA weather map predicting the path of a hurricane, and then pressured NOAA officials to state that their predictions were consistent with the altered map, it is clear that even weather reports are subject to political pressures. If official U.S. government reports on the weather are subject to such pressures, one can only imagine the impact of regular government investigations of social media content that casts a poor light on government actions.

Still, the reflections above should not be taken to mean that government should take no action at all to regulate the social media information environment. The government could probably establish a compelling interest in ensuring the accuracy of a very narrow category of voting information (e.g., poll locations, hours, and voter eligibility), and perhaps in a few other similarly narrow categories.

As for the social media companies, the government could establish requirements for operating transparency that would at least subject them to greater public scrutiny. The Aspen Commission on Information Disorder, on which I served, offered several possibilities for increasing transparency, two of which I highlight here. Social media companies could be required to regularly publish the content, source account, and reach data for posts that they organically deliver to large audiences, and regularly disclose key information about every digital ad and paid post that runs on their platforms regardless of reach.  
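To make that requirement concrete, the following is a minimal sketch of the kinds of records such a disclosure rule might mandate. The field names and the reach cutoff are illustrative assumptions, not the Aspen Commission’s specification.

```python
from dataclasses import dataclass

@dataclass
class AmplifiedPostDisclosure:
    """Record for a post the platform organically delivered to a large audience."""
    post_id: str
    content: str           # the post itself
    source_account: str    # the account that posted it
    reach: int             # number of accounts the platform delivered it to

@dataclass
class PaidPostDisclosure:
    """Record for a digital ad or paid post, disclosed regardless of reach."""
    ad_id: str
    sponsor: str           # who paid for the placement
    content: str
    spend_usd: float
    reach: int

# Hypothetical cutoff for what counts as a "large audience" in the organic report.
LARGE_AUDIENCE_THRESHOLD = 100_000
```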


Saving §230, Preserving Democracy, and Protecting Free Speech

By Marshall Van Alstyne – Questrom Chair Professor of Information Economics, Boston University School of Management

The status quo is unsustainable. Information spread on social media has been implicated in lynching, vaccine hesitancy, polarization, insurrection, genocide, sex trafficking, drug trafficking, teenage depression, cancer misinformation, and the belief that a sitting president stole an election. Strikingly, another sitting president was de-platformed without due process—despite having been exempted from rules prohibiting the spread of threats and lies, presumably excused because he was newsworthy. The platform that had placed him above the rules when he was an asset denied him our most fundamental laws when he became a liability. Neither is acceptable. Yet, many prefer the status quo, arguing that government intervention is neither desirable nor feasible. Neither is true. Our first recommendation is simple: Require platforms to treat all people equally under the law.

The Legacy of Section 230

The heart of the problem lies at the intersection of a business model and the law. Section 230 of the 1996 Communications Decency Act immunizes platforms from the consequences of their editorial choices regarding user-generated content. It remains a vital component in the development of internet business models. Before §230, and after Stratton Oakmont v. Prodigy (1995), platforms had little incentive to moderate content because even limited moderation subjected them to full publisher liability—as if they had authored the content themselves. Section 230 placed content regulation policy in the hands of internet platforms by allowing them to keep user posts they would otherwise delete for fear of a lawsuit. It spares platforms a task that some say is impossible to do well at scale: checking millions of messages each moment. And it fosters diverse environments in which Apple can make family-friendly spaces or Reddit can allow hate speech while Facebook suppresses it, even when that speech is permitted by the First Amendment.

While §230 provides a safe haven that allows platforms to curb spam and abuse, it also fosters moral hazards. Freed of the consequences of their choices, platforms act to benefit themselves at the expense of society. These consequences are the lynchings, insurrections, and anti-vaccination movements that occur when platforms put profits ahead of people, as one whistleblower testified. This issue is a difficult one—the problem concerns both free speech and profit. One person’s post may provide the spark, but the platform offers the accelerant—targeting, fanning, and magnifying the post 88 million times. While the flames burn down a neighborhood, platforms continue selling ads, and others stand by watching the fire. One scholar has written, “We can have democracy or we can have a surveillance society, but we cannot have both.” Can we save democracy, protect speech, and preserve the Internet all at once?

Transparency Is Insufficient

The fact that a problem is complex should never be a reason to give up on solving it. The status quo solution—offering simple transparency about who placed what ad—is insufficient for at least two reasons. First, as business models go, it is simply unfair. Depending on the format—print, broadcast, or Internet—media face different liabilities for paid advertising. All paid ads are “user-generated content,” and ads supply the principal revenue for all three business models, yet print media face liability for lies in the ads they accept. Broadcasters face liability, too, except for lies in ads placed by federal candidates, which they must accept. Internet platforms face no liability for any ad; they are free to accept or reject ads as they please. Thus, §230 grants internet platforms an immunity that is denied to print and broadcast. Perhaps this made sense when AOL, CompuServe, and Prodigy were but advertising afterthoughts. A quarter-century later, however, it makes no sense when three firms dominate 70% of digital advertising. Transparency alone does not level the liability playing field, so additional intervention is needed to reconcile ad liability across print, broadcast, and Internet media.

Second, transparency is insufficient even for the goal of those who propose it. The theory is that in a market of many ideas, the best ones win. “The remedy for speech that is false is speech that is true,” wrote Justice Anthony Kennedy in United States v. Alvarez (2012). That cannot happen if listeners hear only the falsehood, which appears only in their social media feeds and is insulated from competing messages. Transparency that merely reveals who said what does not enable the necessary counterspeech. Instead, any intervention must provide equivalence of reach to ensure that listeners can hear the truth that voids a lie. Otherwise, transparency only alerts a losing candidate to the lies by which she lost, while giving her asymmetric and inadequate means to change the outcome of her election.

Non-Governmental Oversight of Media Misinformation

Those favoring the status quo also argue that government meddling in the market of ideas cannot be immunized against co-optation. This argument is vital. They are right. I agree. But let us recognize that the choice between private companies legitimizing speech (which we have now) and the government legitimizing speech (which is dangerous) is a false choice between two evils. Instead, let us consider a third option: the design of a missing, not-yet-tried institution. Can we not develop new democratic models, decentralized to defeat centralized co-optation?

To address the governance problem, we might split the definition, adjudication, and remediation of misinformation across different bodies, just as we split the legislative, judicial, and executive branches of government. We empower one group to define “misinformation.” Liberals and conservatives alike might even agree on a definition of “false facts” while disagreeing on which specific facts are false. A second group would judge, but only facts and not opinions. The courts have long drawn this distinction: “under the First Amendment, there is no such thing as a false idea…there is no constitutional value in false statements of fact” (Gertz v. Welch, 1974). Peer juries have shown accuracy comparable to that of fact-checkers and also carry legitimacy. In this approach, peer juries must apply the agreed definitions to the facts in question, much as judges must apply the laws. The last group, composed of the internet platforms, must implement the jury decisions. No group can decide based on self-interest. No money shades the decision. No one party decides. For full decentralization, readers may explore a market mechanism based on Coase, developed in “Free Speech, Platforms, and the Fake News Problem.” Here, our intervention splits and decentralizes governance.

Decentralized oversight might take the form of a standards body or trade association. If industry refuses to self-regulate, then the government might impose “meta regulation,” creating just enough pressure to induce self-regulation without involving the government in content details. There are some who would oppose even this intervention, but fake news is pollution: a market failure that requires interventions such as taxes or the assignment of liability rules.

Addressing the Problem of Scale

Still, we must address the problem of scale. Revisions to Section 230 have faced two main critiques: first, that holding platforms liable for others’ false speech would cause them to take down user speech, and second, that the ambiguity of individual messages makes judging false speech infeasible at scale. A targeted solution could separate original speech from amplified speech, generously protecting the former while reverse-amplifying the latter. The posting and even discovery of false speech would be protected even better than under private enterprise, but amplification would be unprotected.

The second element uses scale as an advantage. Rather than vet every message, the system takes only statistical samples. The Central Limit Theorem (CLT) guarantees that establishing the prevalence of misinformation in amplified speech is feasible to any desired level of accuracy simply by taking larger samples. A doctor testing for cholesterol, for instance, does not test every drop of blood but only a statistically valid sample. Facebook already reports removal statistics for suicide and self-harm, COVID misinformation, terrorist content, and regulated goods. We hold platforms to their own published standards, on a statistical rather than an individual-message basis, and verify that their reports are true. A progressive pollution rate can then be applied to firms of varying sizes: if Facebook is allowed 1% pollution, perhaps startups are allowed 5%.
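As a rough illustration of the sampling logic (not the essay’s specific mechanism), the sketch below estimates a platform’s pollution rate from a random sample of amplified posts and reports a margin of error that shrinks as the sample grows. The labeling function and the compliance threshold are hypothetical placeholders.

```python
import math
import random

def estimate_pollution_rate(amplified_posts, sample_size, is_misinfo, z=1.96):
    """Estimate the share of amplified posts violating a published standard.

    amplified_posts: the population of posts the platform amplified
    sample_size: number of posts to audit (larger sample -> tighter estimate)
    is_misinfo: labeling function, e.g. a peer-jury verdict (assumed available)
    z: normal quantile for the confidence level (1.96 is roughly 95%)
    """
    sample = random.sample(amplified_posts, sample_size)
    violations = sum(1 for post in sample if is_misinfo(post))
    p_hat = violations / sample_size
    # Normal-approximation margin of error; shrinks like 1/sqrt(sample_size)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, margin

# Hypothetical usage: is the platform within an allowed 1% pollution rate?
# rate, moe = estimate_pollution_rate(amplified_posts, 10_000, jury_verdict)
# compliant = (rate + moe) <= 0.01
```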

Fine Platforms for Publishing Misleading Ads

A practical solution is even simpler. If we charge platforms a bit more than the ad revenue they generate when amplifying lies, that amplification becomes unprofitable. Thus, the damage from amplification disappears while the original post remains. Note that, as a matter of free speech, a platform could still magnify misinformation aligned with its views for what is little more than the price of its own ads; the difference under this solution is that the ad revenue is given to society. We can use our CLT tool and the price of ads to build a highly predictable and low-cost instrument that limits the damage and avoids the lawsuits resulting from magnified misinformation. Threading the needle, we have a decentralized and democratic decision mechanism that is fair, predictable, and low in cost.
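Here is a minimal back-of-the-envelope sketch, with hypothetical numbers, of why a fine pegged slightly above the associated ad revenue removes the profit from amplifying flagged content:

```python
def misinfo_slice_profit(ad_revenue_on_misinfo, fine_multiplier=1.1):
    """Profit a platform keeps on ads sold against amplified misinformation
    when it is fined slightly more than that revenue (fine_multiplier > 1)."""
    fine = fine_multiplier * ad_revenue_on_misinfo
    return ad_revenue_on_misinfo - fine

# Hypothetical: $30,000 earned amplifying flagged content, fined at 1.1x.
# The platform nets roughly -$3,000 on that slice, so the amplification is
# a money-loser, while revenue on accurate or unamplified content is untouched.
print(misinfo_slice_profit(30_000))
```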

I don’t claim that my ideas are the best ones. We just need to volunteer and submit our ideas to criticism in the hope of finding better ones. Thank you, Divided We Fall, for hosting my debatable musings. 


In Response to Marshall Van Alstyne: Common Ground to Build a Partnership On

By Herbert Lin – Senior Research Scholar, Center for International Security and Cooperation, Stanford University

When I was asked to write on “Can the government regulate social media to limit the spread of mis/disinformation and reduce online polarization?” I interpreted “regulate” to mean “regulate the content of social media.” I said that government regulation was neither desirable nor feasible, and I stand by that. But Professor Van Alstyne’s piece actually uses “intervene” (intervention), which I take to mean “change the operating environment.” Accepting this change of scope, I agree with much of his argument.

For example, we both like the idea of treating original speech differently from amplified speech, so we might consider limiting the number of people one account can reach with any one rebroadcast, or implementing a cool-down timer that lengthens with each successive rebroadcast. Of course, such actions would make “going viral” more difficult for all content, and non-political marketers might well complain. But maybe that is a price worth paying.
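As an illustration only, a cool-down of this sort might look like the following sketch; the base delay, growth factor, and cap are hypothetical parameters, not anything either author proposes.

```python
from datetime import timedelta

def rebroadcast_cooldown(rebroadcast_count, base_minutes=1, growth=2.0, cap_hours=24):
    """Delay a post must wait before its next rebroadcast.

    rebroadcast_count: how many times the post has already been rebroadcast
    base_minutes: delay after the first rebroadcast
    growth: multiplicative increase per rebroadcast (an exponential slowdown)
    cap_hours: upper bound so the delay never becomes effectively infinite
    """
    minutes = base_minutes * (growth ** rebroadcast_count)
    return timedelta(minutes=min(minutes, cap_hours * 60))

# With one prior rebroadcast the next waits 2 minutes; with ten, about 17 hours.
print(rebroadcast_cooldown(1), rebroadcast_cooldown(10))
```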

Professor Van Alstyne seems to imply that a major component of solving the polarization problem should be appropriate government intervention regarding social media companies (SMCs), and that transparency will be insufficient. On the latter point, I agree entirely with all of the reasons he provides, though I hope he would also agree that transparency remains necessary. Transparency seems more politically achievable than other interventions and is certainly not a bad starting point.

New Interventions Are Needed for Today’s Environment

I agree that we need to think about new ideas for government intervention. We would both agree that in its early days, Section 230 was invaluable in protecting nascent Internet companies from the liability burdens of publishers, but today it protects multi-billion-dollar behemoths as well as three-person startups. Thus, we might consider an intervention that reduces or eliminates Section 230 protections based on company size (perhaps measured by revenue or user base), while at the same time raising the barriers to frivolous or harassing lawsuits. Below a certain threshold, Section 230 protections would remain in full force. Above a second, higher threshold, Section 230 protections would be void. We can work out the scope and nature of protections in the intermediate region.
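A rough sketch of how such a tiered regime might be expressed follows; the thresholds and tier labels are purely hypothetical placeholders, not proposals.

```python
def section_230_tier(annual_revenue_usd,
                     lower_threshold=25_000_000,      # hypothetical lower bound
                     upper_threshold=1_000_000_000):  # hypothetical upper bound
    """Map company size (here, annual revenue) to a Section 230 protection tier.

    Below the lower threshold: full protection. Above the upper threshold:
    no protection. In between: a negotiated, intermediate scope of protection.
    """
    if annual_revenue_usd < lower_threshold:
        return "full protection"
    if annual_revenue_usd > upper_threshold:
        return "no protection"
    return "intermediate protection"

print(section_230_tier(3_000_000))        # a three-person startup keeps full protection
print(section_230_tier(50_000_000_000))   # a multi-billion-dollar behemoth gets none
```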

My substantive disagreement with my colleague is with his focus on ads and on content that is factually true or false. First, I don’t know how to define an advertisement, and in particular what distinguishes an ad from an item of advocacy. While it is true that SMCs obtain revenue from ads, most of those ads are not politically charged. But if I write an essay on my blog, for free and without paying anyone, supporting the election of Donald Trump as president, is that an advertisement? My takeaway is that ads per se are far less insidious than other factors driving polarization.

The same holds for content that is factually false. My piece emphasized that the veracity of a statement is only one component of how it informs or misinforms; context, emphasis, and tone also play roles that can be equally important or more so. Professor Van Alstyne’s essay barely touches on this point, an important omission given that much of the SMC content that might be regarded as polarizing or inflammatory is derived from true facts.

Civil Discourse to the Rescue

Bottom line—my colleague and I share significant common ground when we unpack our apparent differences. Where we disagree substantively, I believe we can engage in constructive dialog. I am pleased to associate myself completely with his last sentences: “I don’t claim that my ideas are the best ones. We just need to volunteer and submit our ideas to criticism in the hope of finding better ones.”



If you liked this post, you can read more of our Encouraging Bipartisanship series here. 


Herbert Lin

Senior Research Scholar, Center for International Security and Cooperation, Stanford University

Dr. Herbert Lin has been a senior research scholar and Hank J. Holland Fellow at Stanford University since 2015. He served from 1990 through 2014 as chief scientist of the Computer Science and Telecommunications Board of the National Academies. Prior to his National Academies service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986–1990). He received his doctorate in physics from MIT. In his new book, "Cyber Threats and Nuclear Weapons," Herbert Lin provides a clear-eyed breakdown of the cyber risks to the U.S. nuclear enterprise.


Marshall Van Alstyne

Questrom Chair Professor of Information Economics, Boston University School of Management

Marshall Van Alstyne is the Questrom Chair Professor of Information Economics at the Boston University School of Management. His work explores how ICT affects firms, products, innovation, and society, with an emphasis on multi-sided platforms. His work and commentary have appeared in outlets such as Science, Nature, Management Science, American Journal of Sociology, Strategic Management Journal, Information Systems Research, MISQ, The Economist, the New York Times, and the Wall Street Journal. He has made significant contributions to platform economics and strategy as a co-developer of the theory of “two-sided” markets. He is co-author of the international bestseller "Platform Revolution."
