THE LEGITIMACY PROJECT — Working Paper No. 2
FROM INFORMATION TO OPTIMIZATION
How the Modern Information Environment Selects for False Belief
The modern information environment selects for beliefs. What it selects for is not, in any straightforward sense, the truth.
By The Legitimacy Project | April 2026 | Working paper
I. NOT MISINFORMATION. SELECTION PRESSURE.
The standard account of epistemic deterioration focuses on false content: fabricated stories and coordinated deception. That account is incomplete in a way that matters. False information has always existed. What has changed is the structure of the environment in which it competes.
The first working paper in this series identified three vectors of epistemic deterioration, among them incentive misalignment, which this paper examines in depth. The argument here is that the information environment is now structured to systematically advantage false and epistemically degraded content over accurate content, independent of any actor’s intentions. The mechanism is optimization, and it operates at scale.
I proceed as follows. In section 2, I describe the incentive structure of contemporary information platforms and explain why it generates selection pressure against epistemic quality. In section 3, I examine how scale transforms what was a local and episodic problem into a global and continuous one. In section 4, I return to the cost structure inversion introduced in working paper 1 and unpack its mechanics. In section 5, I describe the emergent outcome of these combined forces. In section 6, I explain why the standard corrective responses fail. In section 7, I extend the argument to knowledge-producing institutions. In section 8, I state the structural constraint that follows for any political system operating in this environment.
II. WHAT PLATFORMS OPTIMIZE FOR
Contemporary information platforms are optimization systems. They are designed to maximize engagement, understood as the time users spend on platform and the probability they return. Engagement is what platforms sell to advertisers, and it is the metric around which their technical and organizational structures are oriented.1
Engagement and epistemic quality are not the same objective, and the divergence between them is a predictable consequence of optimizing for a behavioral signal that is easier to measure than truth. Content optimized for outrage spreads faster than content optimized for accuracy. Alarming claims outperform measured ones. The platform prefers high-engagement content to low-engagement content, and the correlation between high engagement and epistemic degradation is systematic.2
The result is epistemic selection pressure, a structural condition in which beliefs compete for circulation under platform conditions, and the ones that survive and spread are those that maximize engagement rather than those that accurately represent the world. The selection mechanism is indifferent to truth value in the same way that market competition is indifferent to the welfare of firms it eliminates. What survives is what is fit for the environment. The environment rewards engagement. Accuracy is not part of the fitness function.3
It is worth being precise about what this selection pressure acts on. It acts on the behavioral responses that claims elicit, rather than on their truth value directly. A false claim that triggers outrage spreads. A true claim that is complicated and emotionally flat does not. Over time, the claims that populate a shared information environment are a sample selected for the behavioral responses they produce in an audience operating under conditions of limited attention and high emotional reactivity. That is the population of beliefs from which democratic citizens are drawing their picture of the world.
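The mechanism can be made concrete with a toy simulation, offered for illustration only. Every quantity in it is an assumption rather than an estimate: a pool that is half true and half false, arousal distributions skewed so that false items run hotter (the direction, though not the numbers, is motivated by Vosoughi, Roy, and Aral 2018), and a ranking rule that surfaces the top tenth of the pool by a noisy engagement score.

```python
# Toy model of epistemic selection pressure. Illustrative assumptions
# throughout; nothing here is calibrated to data.
import random

random.seed(0)

def make_item():
    is_true = random.random() < 0.5  # assumed: half the pool is accurate
    # Assumed arousal skew: false items tend to be more emotionally
    # activating than true ones (direction per Vosoughi et al. 2018).
    arousal = random.betavariate(2, 5) if is_true else random.betavariate(5, 2)
    return {"true": is_true, "arousal": arousal}

def engagement(item):
    # The fitness function: a noisy behavioral signal driven by arousal.
    # The item's truth value appears nowhere in this expression.
    return item["arousal"] + random.gauss(0, 0.05)

pool = [make_item() for _ in range(10_000)]
feed = sorted(pool, key=engagement, reverse=True)[:1_000]  # top 10% surfaced

print("share true in pool:", sum(i["true"] for i in pool) / len(pool))
print("share true in feed:", sum(i["true"] for i in feed) / len(feed))
```

The pool is half true; the surfaced feed is almost entirely false. Nothing in the ranking rule refers to truth, and no actor in the model intends deception. The degradation is produced entirely by optimizing a behavioral signal that happens to correlate with falsity.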
III. WHY THIS TIME IS DIFFERENT
Incentive misalignment between truth and profitable content is not new. Yellow journalism, partisan presses, and sensationalist broadcasting all produced versions of this dynamic. What distinguishes the present moment is the scale and continuity at which it operates.4
Historical misinformation was episodic. A false story ran in a newspaper, circulated for a time, and eventually competed with corrections in the same or rival publications. The information environment had natural friction: print runs were finite, distribution was slow, and the number of channels was limited. Epistemic disruptions occurred against a background of relative informational stability.
The contemporary information environment operates under conditions of always-on epistemic competition. Content circulates globally and instantaneously. The number of channels is effectively unbounded. There is no off-cycle, no closing edition, no natural pause in which corrections can catch up with fabrications. Citizens are continuously immersed in an environment in which epistemically degraded content circulates alongside accurate content under conditions that systematically favor the former. The background of informational stability that made episodic disruption manageable no longer exists.5
Scale also changes the population dynamics of belief. When misinformation was episodic and local, its effects were bounded. A false claim might shape the beliefs of a community for a period, but it competed with the lived experience of community members, with local institutions, and with other channels of information not subject to the same selection pressures. At global scale and continuous operation, no such countervailing anchors exist for many of the beliefs that matter most for democratic governance. Citizens forming views about climate policy, electoral integrity, or public health are doing so in an environment where selection pressure operates uniformly and without interruption.
IV. VERIFICATION DOES NOT SCALE
Working paper 1 introduced the observation that the cost structure of the information environment has inverted, so that fabrication has become cheap and verification has become expensive. The mechanics of that inversion are more specific than the general observation suggests.
The cost of producing persuasive false content has declined along several dimensions simultaneously. Large language models generate coherent, authoritative-sounding text at negligible marginal cost. Image synthesis produces visually convincing fabrications without the skill or equipment that photographic manipulation once required. Voice cloning and video synthesis extend the same economics to audio and visual media. The production cost of a convincing false claim, which once required journalistic or technical skill, now approaches zero for anyone with access to widely available tools.6
Verification has not undergone the same cost reduction. Assessing whether a claim is true still requires domain expertise, access to primary sources, and the capacity to coordinate across institutions. Fact-checking organizations require trained staff. Scientific replication requires equipment and sustained time. The labor intensity of verification is a structural feature of what the process requires, one that no organizational improvement can eliminate.7
The result is a verification asymmetry, meaning that fabrication scales at a rate that verification cannot match. A single person with access to contemporary AI tools can produce false content faster than any institution can assess and correct it. At the level of an ecosystem, the ratio of fabrication to verification capacity is not stable. It widens as fabrication tools improve and as the volume of content requiring verification grows.8
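The instability of that ratio can be displayed with back-of-envelope arithmetic. The three parameters in the sketch below are assumptions chosen for legibility, not measurements: a fixed, labor-bound verification capacity, a cheap initial fabrication rate, and a modest compounding improvement in generation tools.

```python
# Back-of-envelope sketch of the verification asymmetry. All three
# parameters are illustrative assumptions, not measurements.
fabrications_per_day = 1_000   # assumed initial output of cheap generation
growth = 1.02                  # assumed 2% daily improvement in tooling
checks_per_day = 1_200         # assumed fixed, labor-bound capacity

backlog = 0.0
for day in range(1, 181):
    backlog = max(backlog + fabrications_per_day - checks_per_day, 0.0)
    fabrications_per_day *= growth
    if day % 45 == 0:
        print(f"day {day:3d}: fabrications/day = {fabrications_per_day:8,.0f}, "
              f"unverified backlog = {backlog:10,.0f}")
```

Verification keeps pace for the first ten days; thereafter the backlog compounds without bound. Raising the fixed capacity changes the date of the crossover, not the qualitative outcome, because one side of the ratio compounds and the other does not.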
There is a further dimension to this asymmetry that Chesney and Citron have called the liar’s dividend, referring to how the mere existence of convincing synthetic media undermines the epistemic value of genuine evidence. When fabricated video is indistinguishable from real video, real video loses its evidential force. The production of false content degrades the epistemic value of true content, so the verification burden increases even for claims that are accurate.9
V. NON-CONVERGENT EPISTEMIC SYSTEMS
Selection pressure, always-on competition, and verification asymmetry compound. Their combined effect is the emergence of non-convergent epistemic systems, meaning populations whose belief-forming processes operate under conditions that provide no stable mechanism for resolving disputes or establishing a shared factual baseline.
In a functional epistemic environment, disagreement is resolvable in principle. Citizens may hold different views, but they share enough of a common informational substrate that evidence and institutional adjudication can narrow disputes over time. This is the picture that democratic theory assumes. Citizen disagreement is structured by a common reality to which appeal is possible.10
Non-convergent epistemic systems do not have this property. When citizens draw their beliefs from information ecosystems structured by different selection pressures, operating on different factual priors, with no shared institutional authority capable of adjudicating between them, disagreement does not respond to evidence in the way democratic deliberation requires. Citizens are rational with respect to the information environments they actually inhabit, and those environments are structured to produce divergence rather than convergence.11
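A toy model makes the dynamic visible, under loudly illustrative assumptions: a single contested claim, honest evidence that supports it seventy percent of the time, and two feeds that differ only in which evidence they tend to surface. Each audience does the individually rational thing and believes the support share of the evidence it is actually shown.

```python
# Toy model of non-convergent epistemic systems. The ground-truth
# support rate and the feed biases are illustrative assumptions.
import random

random.seed(1)

TRUE_RATE = 0.7  # assumed: 70% of honest evidence supports claim X

def feed_sample(pro_bias, n):
    """Draw n evidence items as filtered by a feed's selection rule."""
    shown = []
    while len(shown) < n:
        supports = random.random() < TRUE_RATE
        # Selection step: supporting items surface with probability
        # pro_bias, opposing items with probability 1 - pro_bias.
        if random.random() < (pro_bias if supports else 1 - pro_bias):
            shown.append(supports)
    return shown

def belief(shown):
    # A rational reader's estimate: the support share of what they saw.
    return sum(shown) / len(shown)

for n in (50, 500, 5_000):
    a = belief(feed_sample(pro_bias=0.9, n=n))
    b = belief(feed_sample(pro_bias=0.1, n=n))
    print(f"{n:5d} items seen: ecosystem A believes {a:.2f}, "
          f"ecosystem B believes {b:.2f}")
```

Both audiences process their evidence competently, and more evidence makes the problem worse, not better: each estimate converges with growing confidence to the limit its feed's selection rule fixes, roughly 0.95 in one ecosystem and 0.21 in the other, against a ground truth of 0.7. There is convergence within ecosystems and none between them.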
This is the mechanism that directly undermines the epistemic preconditions identified in working paper 1. Democratic legitimacy presupposes that citizens share enough of a common informational world to deliberate in good faith. Non-convergent epistemic systems make that presupposition false because the environment in which they form beliefs is structured to prevent it.
VI. WHY THE STANDARD RESPONSES DO NOT WORK
The standard response to misinformation is correction: producing accurate information, labeling false content, and deploying expert authority. There is nothing wrong with these interventions in principle. The problem is that they are responses to a content problem, and what is being described here is a structural one. Correcting individual false claims does not alter the selection pressure that produced them.
Several mechanisms explain why correction is systematically insufficient. Corrections spread more slowly than the content they correct. The same engagement dynamics that amplify false claims work against corrections, which tend to be less emotionally activating and more qualified. Research on the persistence of misinformation suggests that corrections often fail to fully displace false beliefs even when they reach the same audience, and they frequently do not reach the same audience.12
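The first of these mechanisms is arithmetic, and a crude branching sketch displays it. The reproduction factors below are assumptions in the spirit of the Vosoughi, Roy, and Aral (2018) findings: each generation of exposure yields more onward shares for the outrage-optimized claim than for its qualified correction.

```python
# Crude branching-process comparison of a viral false claim and its
# correction. The reproduction factors are illustrative assumptions.
def total_reach(r, generations=10, seed_audience=100):
    """Cumulative exposures when each exposed cohort shares at factor r."""
    total, current = 0, seed_audience
    for _ in range(generations):
        total += current
        current = int(current * r)
    return total

false_reach = total_reach(r=1.6)       # assumed: outrage content, r > 1
correction_reach = total_reach(r=0.7)  # assumed: qualified correction, r < 1

print(f"false claim exposures: {false_reach:,}")
print(f"correction exposures:  {correction_reach:,}")
# If the correction's audience were entirely nested inside the claim's
# (a generous assumption), the uncorrected remainder would still be:
print(f"never-corrected (lower bound): {false_reach - correction_reach:,}")
```

Under these assumptions the claim reaches roughly eighteen thousand users while the correction dies out after a few hundred, so even before the continued-influence effect is considered, most of the exposed audience is never reached by the correction at all.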
Expert intervention faces its own structural problem. In an environment where epistemic authority is fragmented and contested, the credibility of expert correction is itself a variable rather than a stable resource. A correction issued by a scientific body, a fact-checking organization, or a government agency does not land as authoritative for audiences whose information ecosystems have systematically cultivated distrust of those institutions. The correction may be accurate. It is not, for that audience, epistemically compelling.13
More information does not solve an optimization problem. In most contemporary democracies, accurate information is widely available. The problem is that the environment in which accurate and inaccurate information compete is structured to disadvantage accuracy. Adding more accurate content to that environment does not change the selection pressure. It increases the volume of content competing under conditions that are already adverse to epistemic quality.
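The claim can be checked directly against the section 2 sketch: hold the ranking rule fixed and flood the pool with additional accurate items. The quantities are again illustrative assumptions.

```python
# Supply-side intervention under a fixed engagement ranking. Same
# illustrative arousal assumptions as the section 2 sketch.
import random

random.seed(0)

def item(is_true):
    arousal = random.betavariate(2, 5) if is_true else random.betavariate(5, 2)
    return {"true": is_true, "arousal": arousal}

for extra_true in (0, 10_000, 20_000):
    pool = ([item(True) for _ in range(5_000 + extra_true)]
            + [item(False) for _ in range(5_000)])
    feed = sorted(pool, key=lambda i: i["arousal"], reverse=True)[:1_000]
    share = sum(i["true"] for i in feed) / len(feed)
    print(f"true items supplied = {5_000 + extra_true:6,}; "
          f"true share of feed = {share:.3f}")
```

Tripling and then quintupling the supply of accurate content raises the pool's truth share from one half to five sixths; the feed's truth share stays near zero. Supply-side interventions change the composition of the pool, not the rule that selects from it.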
VII. WHEN THE ARBITERS ARE ALSO SUBJECT TO SELECTION
The argument so far has focused on how epistemic selection pressure operates on citizen belief formation, but it does not stop there. The institutions tasked with producing authoritative knowledge are themselves operating within the same information environment and are subject to versions of the same pressures.
Scientific institutions face replication failures, publication incentives that reward novelty over accuracy, and funding structures that create at least the appearance of conflicts of interest. These are not new problems. What is new is that the mechanisms for publicizing and amplifying these failures operate at the same scale and under the same selection pressures as the broader information environment. A retracted study generates as much circulation as its correction. The impression of expert disagreement is more engaging than the reality of expert consensus, and the information environment systematically produces the former at the expense of the latter.14
Legal and regulatory institutions face related dynamics. Courts produce authoritative adjudications of contested facts, but only within their jurisdictional scope and procedural constraints. When the contested facts that matter for democratic governance circulate outside those constraints, and when the authority of judicial findings is itself contestable in the broader information environment, the epistemic function that legal institutions serve is degraded. The finding exists, but its authority does not automatically transfer to the populations whose beliefs it was meant to settle.15
The press faces the most direct version of the institutional selection problem. Journalism that hedges and qualifies competes poorly in an environment that rewards neither. The same engagement dynamics that affect platform content affect the economic viability of news organizations. The environment in which journalism operates creates systematic pressure toward epistemic degradation. The institutions with the greatest capacity to resist that pressure are also the ones facing the most severe economic stress.16
When the institutions whose function is to arbitrate factual disputes are themselves subject to selection pressure, the epistemic infrastructure loses what working paper 1 called its second tier of support. Citizens form beliefs in an environment structured against epistemic quality. When they look to institutions for authoritative adjudication, they find institutions whose authority is contested and whose outputs are filtered through the same selection mechanisms that distort citizen belief formation in the first place. The system has no self-correcting tier.
VIII. A CONDITION, NOT A DISRUPTION
The picture assembled across sections 2 through 7 is of a system. Epistemic selection pressure, always-on competition, verification asymmetry, non-convergent epistemic systems, the failure of correction, and institutional spillover are interacting components of a single structural condition that emerged from the optimization logic of contemporary information infrastructure.
A disruption has a before and an after. It is an event against a background of relative normality, and responses to it are oriented toward restoration. The condition described here is a structural transformation of the epistemic environment itself. The question of what epistemic conditions prevailed before this transformation is historically interesting but not practically relevant. The environment that democratic theory assumed is not the environment that presently obtains, and there is no evidence that the forces producing the current environment are temporary or self-limiting.
Any political system that depends on reliable belief formation among its citizens must operate within this environment. That includes democratic systems, but it is not limited to them. The constraint is on governance as such, insofar as governance requires a population capable of being informed about and responsive to the conditions it is governing. What the constraint means specifically for democratic legitimacy was the subject of working paper 1. What it means for the available options of repair and revision will be the subject of working papers 3 and 4.
The present paper has established that the epistemic deterioration identified in working paper 1 is the product of an optimization logic operating at scale, rather than the contingent work of bad actors against an otherwise stable epistemic background, and that it is uncorrectable by interventions that leave that logic intact. The question has shifted from whether epistemic conditions are degraded to what forms of governance can survive them.
REFERENCES
Benkler, Yochai, Robert Faris, and Hal Roberts, 2018, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics, Oxford: Oxford University Press.
Chesney, Robert and Danielle Keats Citron, 2019, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review, 107(6): 1753-1820.
Citron, Danielle Keats, 2014, Hate Crimes in Cyberspace, Cambridge, MA: Harvard University Press.
Cohen, Joshua, 1989, “Deliberation and Democratic Legitimacy,” in Alan Hamlin and Philip Pettit (eds.), The Good Polity, Oxford: Basil Blackwell, 17-34.
Goldstein, Josh A., Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel, and Katerina Sedova, 2023, “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations,” arXiv:2301.04246.
Goodin, Robert E. and Kai Spiekermann, 2019, An Epistemic Theory of Democracy, Oxford: Oxford University Press.
Graves, Lucas, 2016, Deciding What’s True: The Rise of Political Fact-Checking in American Journalism, New York: Columbia University Press.
Habermas, Jürgen, 1992 [1996], Between Facts and Norms, William Rehg (trans.), Cambridge, MA: MIT Press.
Kahan, Dan M., 2013, “Ideology, Motivated Reasoning, and Cognitive Reflection,” Judgment and Decision Making, 8(4): 407-424.
Lewandowsky, Stephan, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook, 2012, “Misinformation and Its Correction: Continued Influence and Successful Debiasing,” Psychological Science in the Public Interest, 13(3): 106-131.
McChesney, Robert W. and John Nichols, 2010, The Death and Life of American Journalism, Philadelphia: Nation Books.
Oreskes, Naomi, 2019, Why Trust Science?, Princeton: Princeton University Press.
Oreskes, Naomi and Erik M. Conway, 2010, Merchants of Doubt, New York: Bloomsbury Press.
Vosoughi, Soroush, Deb Roy, and Sinan Aral, 2018, “The Spread of True and False News Online,” Science, 359(6380): 1146-1151.
Wardle, Claire and Hossein Derakhshan, 2017, Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making, Strasbourg: Council of Europe.
Wu, Tim, 2016, The Attention Merchants: The Epic Scramble to Get Inside Our Heads, New York: Knopf.
Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism, New York: PublicAffairs.
NOTES
1. The relationship between platform business models and engagement optimization is developed in Zuboff (2019) and Wu (2016).
2. Vosoughi, Roy, and Aral (2018) provide the most-cited empirical study of differential spread rates between true and false content on social media, finding that false news spreads faster, further, and more broadly than true news across all categories of content examined.
3. The market competition analogy is used deliberately. Like market competition, platform selection is a designed system with identifiable optimization targets, which makes it more tractable to analysis and, in principle, more susceptible to intervention at the design level than a purely biological process would be.
4. For historical context on pre-digital misinformation and propaganda, see Benkler, Faris, and Roberts (2018), ch. 2.
5. Wardle and Derakhshan (2017) develop the concept of an information disorder in which the normal friction that limited misinformation’s reach has been systematically reduced by platform architecture.
6. For the economics of AI-generated content and its implications for information ecosystems, see Chesney and Citron (2019) and Goldstein et al. (2023) on the use of large language models for influence operations.
7. The labor intensity of verification is a consistent theme in journalism scholarship. For the specific constraints facing fact-checking organizations operating at scale, see Graves (2016).
8. The systemic overload argument is developed in Benkler, Faris, and Roberts (2018), who document asymmetric production capacity between partisan disinformation networks and mainstream verification institutions in the US context.
9. Chesney and Citron (2019), 1796-1800.
10. This is the epistemic presupposition that Cohen (1989) and Habermas (1992 [1996]) build into their accounts of deliberative democracy. For a more recent treatment, see Goodin and Spiekermann (2019), ch. 1.
11. Kahan (2013) provides evidence for the mechanism. Citizens with higher cognitive ability are better at finding and processing information that confirms their prior beliefs, which means that the divergence produced by non-convergent information ecosystems is amplified rather than corrected by individual rationality.
12. Lewandowsky et al. (2012) provide the canonical review of misinformation persistence and the limited effectiveness of correction.
13. Distrust of expert institutions as a structural feature of contemporary information environments, rather than a response to specific institutional failures, is documented in Oreskes (2019).
14. The interaction between scientific publication incentives and public perception of expert disagreement is discussed in Oreskes and Conway (2010) in the context of manufactured doubt about climate science. The mechanism generalizes beyond that case.
15. For the limits of legal adjudication as an epistemic intervention in a high-volume disinformation environment, see Citron (2014).
16. The economic pressures on quality journalism and their relationship to epistemic standards are documented in the Pew Research Center’s annual State of the News Media reports and analyzed in McChesney and Nichols (2010).

