Epistemic Authority and Legitimacy in the Age of Generative AI: A Normative Framework for Evaluating AI as a Knowledge-Formatter in Scientific Inquiry
Abstract
The growing integration of generative Artificial Intelligence (AI) into scientific inquiry raises foundational questions about epistemic authority and justified epistemic dependence. Classical accounts—ranging from Goldman’s reliabilism to Zagzebski’s preemption theory—treat epistemic authority as a normative relation between agents characterized by intentionality, reason-responsiveness, and accountability. Drawing on critiques by McMyler and by Jäger and Shackel, this paper argues that such agential and relational conditions exclude AI systems from possessing epistemic authority in any classical sense. Yet AI increasingly shapes evidential practices, hypothesis formation, and interpretive processes within contemporary science. To evaluate this non-agential epistemic influence, the paper develops the Epistemic Authority Analysis Framework (EAAF), which distinguishes epistemic authority from epistemic legitimacy. The EAAF identifies three normative conditions—Epistemic Transparency, Delegative Trust, and Normative Reflexivity—under which reliance on AI can generate justified belief without conferring authority on the system itself. By framing AI as a “knowledge-formatter,” the paper shows how epistemic responsibility can be preserved while AI is integrated into scientific inquiry. The framework clarifies the epistemic status of AI and offers a principled foundation for human–machine epistemic collaboration in increasingly algorithmic epistemic environments.
References
A. I. Goldman, Epistemology and cognition. Cambridge, MA, US: Harvard University Press, 1986.
M. Fricker, Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press, 2007.
L. T. Zagzebski, Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief. Oxford University Press, 2012. doi: 10.1093/acprof:oso/9780199936472.001.0001.
R. Hauswald, “Artificial Epistemic Authorities,” Soc. Epistemol., vol. 39, no. 6, pp. 716–725, 2025, doi: 10.1080/02691728.2025.2449602.
K. Boyd, “Trusting scientific experts in an online world,” Synthese, vol. 200, no. 1, p. 14, 2022, doi: 10.1007/s11229-022-03592-3.
B. Wheeler, “Reliabilism and the testimony of robots,” Techne Res. Philos. Technol., vol. 24, no. 3, pp. 332–356, 2020, doi: 10.5840/techne202049123.
N. Spivack and K. Gillis Jonk, “Epistemology and Metacognition in Artificial Intelligence: Defining, Classifying, and Governing the Limits of AI Knowledge,” https://www.novaspivack.com, pp. 1–33, 2025.
J. Burrell, “How the machine ‘thinks’: Understanding opacity in machine learning algorithms,” Big Data Soc., vol. 3, no. 1, pp. 1–12, 2016, doi: 10.1177/2053951715622512.
E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell, “On the dangers of stochastic parrots: Can language models be too big?,” FAccT 2021 - Proc. 2021 ACM Conf. Fairness, Accountability, Transpar., pp. 610–623, 2021, doi: 10.1145/3442188.3445922.
B. D. Mittelstadt, P. Allo, M. Taddeo, S. Wachter, and L. Floridi, “The ethics of algorithms: Mapping the debate,” Big Data Soc., vol. 3, no. 2, pp. 1–21, 2016, doi: 10.1177/2053951716679679.
A. Ferrario, A. Facchini, and A. Termine, “Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems,” Minds Mach., vol. 34, no. 3, pp. 1–27, 2024, doi: 10.1007/s11023-024-09681-1.
J. Constantin and T. Grundmann, “Epistemic authority: preemption through source sensitive defeat,” Synthese, vol. 197, no. 9, pp. 4109–4130, 2020, doi: 10.1007/s11229-018-01923-x.
A. Keren, “Zagzebski on authority and pre-emption in the domain of belief,” Eur. J. Philos. Relig., vol. 6, no. 4, pp. 61–76, 2014, doi: 10.24204/ejpr.v6i4.145.
E. A. Seger and T. Hall, “A Comparative Investigation into the Formation of Epistemically Justified Belief in Expert Testimony and in the Outputs of AI-Enabled Expert Systems,” May 2022.
C. Beisbart and T. Räz, “Philosophy of science at sea: Clarifying the interpretability of machine learning,” Philos. Compass, vol. 17, no. 6, p. e12830, Jun. 2022, doi: 10.1111/phc3.12830.
B. McMyler, “Epistemic Authority, Preemption and Normative Power,” Eur. J. Philos. Relig., vol. 6, no. 4, pp. 101–119, Dec. 2014, doi: 10.24204/ejpr.v6i4.148.
C. Jäger and N. Shackel, “Testimonial Authority and Knowledge Transmission,” Soc. Epistemol., pp. 1–16, 2025, doi: 10.1080/02691728.2025.2449608.
J. Lackey, “Learning from Words: Testimony as a Source of Knowledge.” Oxford University Press, Feb. 28, 2008. doi: 10.1093/acprof:oso/9780199219162.001.0001.
R. C. Roberts and W. J. Wood, “Intellectual Virtues: An Essay in Regulative Epistemology.” Oxford University Press, Jan. 11, 2007. doi: 10.1093/acprof:oso/9780199283675.001.0001.
J. Montmarquet, “Epistemic Virtue and Doxastic Responsibility,” Am. Philos. Q., vol. 29, no. 4, pp. 331–341, Dec. 1992, doi: 10.2307/2108301.
E. Sosa, “A Virtue Epistemology: Apt Belief and Reflective Knowledge, Volume I.” Oxford University Press, Jun. 28, 2007. doi: 10.1093/acprof:oso/9780199297023.001.0001.
A. Ross, “AI and the expert; a blueprint for the ethical use of opaque AI,” AI Soc., vol. 39, no. 3, pp. 925–936, Jun. 2024, doi: 10.1007/s00146-022-01564-2.
C. Jäger, Epistemic Authority. Oxford University Press, 2025.
DOI: http://dx.doi.org/10.52155/ijpsat.v55.1.7673
Copyright (c) 2025 Jazimatul Husna, Imilia Ibrahim

This work is licensed under a Creative Commons Attribution 4.0 International License.