THE LIMITS OF EPISTEMIC REPAIR
On the Feasibility of Restoring the Conditions Democratic Legitimacy Requires
THE LEGITIMACY PROJECT — Working Paper No. 3
The repair project is serious, and it deserves a serious examination of what it actually requires.
By The Legitimacy Project | April 2026 | Working paper
I. THE SERIOUS BASELINE POSITION
The previous two working papers established that democratic legitimacy presupposes epistemic conditions that the contemporary information environment is systematically degrading. A natural response to that diagnosis is repair: identify the mechanisms of degradation and intervene to correct them. Regulate platforms. Invest in fact-checking and stronger institutions. Restore, as far as possible, the shared informational baseline that democratic theory assumes.
This is a serious position, and it should be treated as one. The repair project accepts that epistemic conditions have deteriorated and proposes to restore them from within the democratic framework, preserving both the normative commitments of democratic theory and the institutional structures through which those commitments are realized. It is the position most consistent with continuity, and continuity has genuine value when the alternatives are uncertain.
The question this paper asks is whether repair is feasible at the scale democratic legitimacy requires. Repair may be possible in local or partial ways, and that possibility is worth preserving. The more pressing question is whether repair, understood as the systematic restoration of the epistemic conditions democratic legitimacy presupposes, can succeed against a set of independent structural constraints that compound in ways its proponents have not adequately confronted.
I proceed as follows. Section 2 defines repair in operational terms that make its requirements explicit. Sections 3 through 7 examine five structural constraints on those requirements. Section 8 synthesizes the constraints and introduces the distinction between local repair and repair at the scale democratic legitimacy requires. Section 9 states the question the next working paper takes up.
II. DEFINING THE PROBLEM OPERATIONALLY
Most discussion of epistemic repair operates at the level of policy proposals, such as content moderation rules, algorithmic transparency requirements, media literacy curricula, and public interest journalism funding. These proposals are worth examining on their own terms, and some may have merit in specific domains. The more fundamental question is what conditions any such proposal must satisfy to count as repair at the level of democratic legitimacy rather than merely improvement at the margins.
Repair, in the relevant sense, requires four things operating simultaneously. Platform incentives must be realigned so that engagement and accuracy are no longer systematically opposed. A shared informational baseline must be restored, meaning some convergence, across sufficiently large populations, in what counts as evidence and which sources can arbitrate disputes. Verification mechanisms must scale to process contested claims at the volume they are produced. And epistemic authorities, the institutions whose function is to adjudicate factual disputes, must recover sufficient trust to perform that function across groups with different prior commitments.
Each of these is a demanding condition on its own. The repair project requires all four simultaneously, because the conditions are interdependent. Restored verification mechanisms are useless if trust in the institutions running them is absent. Realigned platform incentives cannot produce convergent beliefs if the informational substrate remains fragmented. Repair is accordingly a coordinated transformation of the information environment across multiple layers, each of which faces its own structural resistance.
III. CONSTRAINT 1: INCENTIVE LOCK-IN
The first constraint is that platform incentives are not freely adjustable. Working paper 2 established that the divergence between engagement and epistemic quality is a predictable consequence of the platform business model, not a design choice that could be revised without disrupting the underlying economics. That point has a further dimension when examined as a constraint on repair.
The revenue model of dominant information platforms depends on advertising, and advertising revenue scales with engagement. Engagement correlates with emotional activation, and emotional activation is systematically inversely related to epistemic quality. A platform that successfully realigned its optimization target from engagement to accuracy would, under current market conditions, generate less revenue than its competitors. The pressure to revert is structural, not contingent on the preferences of platform managers.1
Regulatory intervention could in principle alter this equilibrium by imposing costs on engagement-maximizing behavior or mandating accuracy-oriented design. The history of platform regulation suggests that such interventions face significant resistance, move slowly relative to the pace of platform development, and tend to be implemented in forms that preserve the core incentive structure while modifying its surface features.2
This observation bears on the feasibility of regulation as a repair mechanism, though it does not settle whether regulation should be attempted. The lock-in is stable enough that dislodging it requires sustained political will and regulatory capacity that have not been demonstrated at the scale required.
The deeper problem is that incentive lock-in affects the repair effort itself. Organizations advocating for platform reform operate within the same information environment they are trying to change. Their communications are subject to the same selection pressures that favor emotionally activating content over careful argument. The reform effort is not insulated from the condition it is trying to correct.
IV. CONSTRAINT 2: REPAIR ASYMMETRY
The second constraint follows directly from the analysis in working paper 2. Fabrication scales cheaply. Verification scales slowly and at high cost. Any repair strategy that depends on verification keeping pace with fabrication faces a persistent and widening asymmetry.
The repair project must outrun production. Under current technological conditions, that requirement is not being met and the trajectory is adverse. The tools that reduce the cost of fabrication are improving faster than the tools that reduce the cost of verification. Fact-checking organizations, however well-resourced, are processing a finite volume of claims against an effectively unbounded production capacity. Automated verification tools reduce the cost of some forms of checking, but they do so against a background of fabrication tools that are improving at least as rapidly and that have been deliberately optimized to evade detection.3
The repair asymmetry is also a coordination problem. Verification at democratic scale requires agreement across institutions about what constitutes sufficient evidence for a claim. In a fragmented information environment, that agreement is itself contested. Different institutions apply different standards, and in conditions of trust fragmentation, the existence of multiple verification regimes provides grounds for dismissing any particular verdict. A system in which verification is decentralized and contested cannot perform the epistemic function that democratic legitimacy requires, regardless of how much resource is invested in individual verification efforts.
V. CONSTRAINT 3: TRUST FRAGMENTATION
The third constraint is that repair presupposes precisely the trust that the conditions requiring repair have already eroded. The restoration of shared epistemic baselines requires institutions capable of arbitrating between competing claims with broadly accepted authority. Those institutions do not currently exist in the relevant sense, and the mechanisms that would produce them are themselves subject to the conditions that have undermined trust.
Trust in epistemic authorities, such as scientific institutions, journalistic organizations, regulatory bodies, and courts, has declined across most democratic societies over the past several decades. The causes are multiple and contested, but among them are institutional failures that were genuine, manufactured doubt campaigns that were deliberate, and the selection dynamics described in working paper 2 that systematically amplify the impression of expert disagreement beyond its actual extent.4
Whatever the causal mix, the functional result is that there is no neutral arbiter whose authority is accepted across the populations whose beliefs democratic legitimacy requires to be at least partially convergent.
This creates a circularity the repair project has difficulty escaping. Repair requires trusted institutions. Building trusted institutions requires a period of demonstrated reliability. Demonstrating reliability requires operating in an information environment that does not systematically distort the signal. The information environment that distorts the signal is precisely what repair is supposed to fix. The precondition for repair is the outcome repair is supposed to produce.5
A further dimension of this constraint is that corrections are interpreted through identity filters. Working paper 2 noted that citizens are rational with respect to the information environments they actually inhabit. An implication of that observation is that corrections from distrusted sources are processed as evidence of the source’s untrustworthiness rather than as evidence against the corrected claim. Increased investment in correction, delivered through channels that are already distrusted, may reinforce the prior rather than revise it.6
VI. CONSTRAINT 4: COGNITIVE LIMITS
The fourth constraint operates at the level of individual cognition rather than institutional design, but it bears directly on the feasibility of repair strategies that depend on individual epistemic improvement.
A significant strand of the repair literature places weight on media literacy interventions: teaching citizens to evaluate sources, identify manipulation, and apply critical scrutiny to claims they encounter. The intuition behind this approach is that epistemic failure is partly a deficit of skill, and skills can be taught. The empirical literature on motivated reasoning gives substantial grounds for skepticism about this intuition.7
The relevant finding, documented extensively by Kahan and others, is that increased cognitive sophistication does not reliably produce better-calibrated beliefs about contested political questions. Citizens with higher analytical ability are better at finding and processing information that confirms their prior commitments, which means that improvements in individual cognitive skill can amplify instead of correct the divergence that epistemic selection pressure produces. The mechanism that media literacy interventions aim to improve is not straightforwardly improvable in the direction repair requires.8
The relationship between individual epistemic skill and the population-level epistemic conditions democratic legitimacy requires is accordingly not the straightforward one that media literacy programs tend to assume. The conditions are structural, and structural conditions are not dissolved by improving the individuals operating within them. Citizens reasoning more carefully within a non-convergent epistemic system may simply produce better-reasoned divergent conclusions.
VII. CONSTRAINT 5: THE SELF-UNDERMINING PROBLEM
The fifth constraint is the most structurally interesting, and it is the one the repair literature has been slowest to confront. Repair interventions are not epistemically neutral. They are exercises of power over the information environment, and the conditions under which they are legitimate are themselves subject to the epistemic conditions that repair is supposed to restore.
Consider the range of interventions the repair project requires. Platform regulation mandates changes to how information is amplified, which is an intervention in speech at scale. Institutional investment in trusted arbiters means deciding which institutions receive epistemic authority and on what basis. Verification standards, however technically specified, involve judgments about what counts as evidence and who is qualified to assess it. Each of these interventions requires a legitimating basis that the current epistemic environment makes difficult to establish.9
In conditions of trust fragmentation, repair interventions are available to be read as power consolidation rather than genuine epistemic improvement. A government that mandates content moderation is, for a significant portion of the population, suppressing legitimate speech. An institution designated as an authoritative arbiter is, for those who distrust it, having authority conferred by the very entities whose legitimacy is at stake. The repair intervention becomes evidence for the narrative that motivated distrust in the first place. The effort to restore epistemic conditions is itself processed through the distorted epistemic conditions it is trying to correct.10
Repair interventions in conditions of deep trust fragmentation face a legitimacy problem that is internal to the repair project rather than external to it. They can proceed, but they proceed in an environment where their own legitimacy is contested by the mechanisms that made repair necessary.
VIII. LOCAL SUCCESS AND DEMOCRATIC SUFFICIENCY
Each constraint examined in sections 3 through 7 is serious on its own. Together, they describe a repair environment in which progress in any one dimension tends to be offset by resistance in others. Incentive lock-in limits the effectiveness of platform reform. Repair asymmetry limits the effectiveness of verification investment. Trust fragmentation limits the effectiveness of institutional strengthening. The self-undermining problem means that repair efforts can actively worsen the conditions they are trying to correct. These are not parallel failures. They interact: failure in one undermines progress in others, and the repair project requires simultaneous progress across all of them.
Domain-specific improvements have been documented. Certain interventions reduce the spread of specific false claims. Some institutional reforms have improved the reliability of particular verification processes. The question the repair project faces is whether improvement can occur at the scale and simultaneity required for democratic legitimacy. Local success and democratic sufficiency are not the same standard, and the repair literature has been insufficiently clear about which standard its evidence actually supports.
The probability of sufficient simultaneous progress across independent systems such as platforms, institutions, individual cognition, and political will is not simply the product of the probability of progress in each domain. It is lower, because the domains interact in the ways described above. A repair effort that succeeds in improving platform incentives without restoring institutional trust will find its verification outputs dismissed. One that restores institutional trust without addressing cognitive limits will find that trust exploited to reinforce divergence instead of correct it. The interdependence that makes repair coherent as a project is also what makes its full realization deeply constrained.
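The compounding claim above can be made vivid with a toy calculation. The probabilities and the coupling rule below are illustrative assumptions, not estimates drawn from this paper; the sketch only shows why interacting domains push the joint probability of success below even the product of independent chances.

```python
def joint_success(p_each: float, n_domains: int, coupling: float) -> float:
    """Toy model of joint probability of sufficient progress in all domains.

    coupling = 0.0 reproduces the independent case (a simple product);
    coupling > 0 discounts each successive domain's success probability,
    a crude stand-in for the interactions described in section VIII.
    """
    prob = 1.0
    p = p_each
    for _ in range(n_domains):
        prob *= p
        p = max(0.0, p - coupling)  # each further domain is harder given the others
    return prob

# Even generous independent odds (50% per domain, four domains) compound sharply.
independent = joint_success(0.5, 4, coupling=0.0)  # 0.5**4 = 0.0625
coupled = joint_success(0.5, 4, coupling=0.1)      # lower still: 0.5*0.4*0.3*0.2

print(independent)  # 0.0625
print(coupled)      # 0.012
```

The point of the sketch is structural, not numerical: any negative coupling between domains makes the joint probability strictly smaller than the independent product, which is itself already small.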
IX. WHEN THE PROBLEM OUTRUNS THE FRAMEWORK
The repair project operates within democratic theory. It takes the normative framework as given and asks what empirical conditions must be restored for that framework to function. That is the right question if the conditions are restorable. The analysis above suggests that they may not be, at least not by the mechanisms the repair project identifies, operating under the institutional and political constraints that actually obtain.
If that assessment is correct, the gap between what democratic legitimacy requires and what the current information environment can reliably produce is one that further institutional design within the existing framework cannot close. It is a problem about the relationship between the framework and the conditions under which it was developed, conditions that no longer obtain and may not be recoverable.
This does not establish that democratic legitimacy is impossible or that the democratic framework should be abandoned. The framework may require revision to remain coherent under conditions its architects did not anticipate and that its defenders have been slow to confront. The question shifts from how to restore the conditions democratic legitimacy requires to whether political structures must be revised to align with the epistemic conditions that actually obtain. That is the subject of working paper 4.
REFERENCES
Balkin, Jack M., 2018, “Free Speech in the Algorithmic Society: Big Data, Private Governance, and the Future of Public Discourse,” UC Davis Law Review, 51(3): 1149-1210.
Estlund, David, 2008, Democratic Authority: A Philosophical Framework, Princeton: Princeton University Press.
Goldstein, Josh A., Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, and Katerina Sedova, 2023, “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations,” arXiv:2301.04246.
Kahan, Dan M., 2013, “Ideology, Motivated Reasoning, and Cognitive Reflection,” Judgment and Decision Making, 8(4): 407-424.
Kahan, Dan M., Ellen Peters, Maggie Wittlin, Paul Slovic, Lisa Larrimore Ouellette, Donald Braman, and Gregory Mandel, 2012, “The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks,” Nature Climate Change, 2(10): 732-735.
Kaye, David, 2019, Speech Police: The Global Struggle to Govern the Internet, New York: Columbia Global Reports.
McGrew, Sarah, Mark Smith, Joel Breakstone, Teresa Ortega, and Sam Wineburg, 2018, “Can Students Evaluate Online Sources? Learning from Assessments of Civic Online Reasoning,” Theory and Research in Social Education, 47(2): 165-193.
Napoli, Philip M., 2019, Social Media and the Public Interest: Media Regulation in the Disinformation Age, New York: Columbia University Press.
Nyhan, Brendan and Jason Reifler, 2010, “When Corrections Fail: The Persistence of Political Misperceptions,” Political Behavior, 32(2): 303-330.
Oreskes, Naomi, 2019, Why Trust Science?, Princeton: Princeton University Press.
Oreskes, Naomi and Erik M. Conway, 2010, Merchants of Doubt, New York: Bloomsbury Press.
Parker, Geoffrey G., Marshall W. Van Alstyne, and Sangeet Paul Choudary, 2016, Platform Revolution, New York: W. W. Norton.
Pennycook, Gordon and David G. Rand, 2019, “Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than Motivated Reasoning,” Cognition, 188: 39-50.
Peter, Fabienne, 2009, Democratic Legitimacy, New York: Routledge.
Wu, Tim, 2016, The Attention Merchants, New York: Knopf.
Zellers, Rowan, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi, 2019, “Defending Against Neural Fake News,” Advances in Neural Information Processing Systems, 32.
NOTES
1. The structural relationship between platform revenue models, engagement optimization, and epistemic quality is developed in working paper 2. For the economic analysis of platform competition, see Wu (2016) and Parker, Van Alstyne, and Choudary (2016) on platform business models.
2. For the history and limits of platform regulation efforts, see Kaye (2019) on freedom of expression and the role of private platforms, and Napoli (2019) on the regulatory treatment of social media as a category distinct from broadcast media.
3. Goldstein et al. (2023) document the use of large language models for influence operations and assess the detection problem. For the detection arms race more broadly, see Zellers et al. (2019) on the simultaneous development of generation and detection capabilities.
4. Oreskes (2019) provides the most systematic treatment of the epistemology of trust in science and the mechanisms by which it has been degraded. For the manufactured doubt literature in the specific context of climate science, see Oreskes and Conway (2010).
5. This circularity is related to what Peter (2009) calls the bootstrapping problem for democratic legitimacy, namely that the conditions which legitimate democratic procedures cannot themselves be established by those procedures without circularity. The epistemic version of the problem has the same structure.
6. Nyhan and Reifler (2010) document the backfire effect, the finding that corrections can in some conditions strengthen the prior belief they are intended to correct, though subsequent research has qualified the generality of this finding. The directional concern remains: corrections delivered through distrusted channels face systematic resistance that is not simply a function of the correction's accuracy.
7. For a review of media literacy interventions and their measured effects, see McGrew et al. (2018) and Pennycook and Rand (2019) on the relationship between analytic thinking and susceptibility to misinformation.
8. Kahan (2013); Kahan et al. (2012). The finding that numeracy and scientific literacy are positively correlated with polarization on contested empirical questions is one of the more counterintuitive results in the political psychology literature and one of the most relevant to the feasibility of cognitive-level repair strategies.
9. Balkin (2018) develops the concept of an information fiduciary as a framework for thinking about the obligations of platforms and the legitimacy conditions for platform regulation. The self-undermining problem is implicit in his analysis though not developed as a constraint on repair.
10. This dynamic is related to what Estlund (2008) calls the authority problem for epistemic proceduralism, namely that designating some procedure or institution as epistemically authoritative requires a prior standard of epistemic quality, which raises the question of who is authorized to apply that standard. In conditions of trust fragmentation, the designation of authority is itself a contested epistemic act.

