Generative AI as a Disinformation Tool

“The Hidden Sound of Things Approaching”

Authors

  • Marios D. Dikaiakos, University of Cyprus

DOI:

https://doi.org/10.22151/politikon.61.CON3

Keywords:

Artificial Intelligence and Democracy, Generative AI, Large Language Models, Misinformation, Digital Resilience, Foreign Information Manipulation and Interference (FIMI)

Abstract

The rapid spread of Large Language Models (LLMs) is transforming how individuals access and interpret information, increasing societal exposure to sophisticated forms of misinformation. As generative AI becomes a central information gatekeeper, it expands the attack surface for Foreign Information Manipulation and Interference (FIMI), enabling scalable data contamination, alignment manipulation, and realistic synthetic media. These dynamics weaken critical evaluation skills and amplify latent model biases, paralleling the long-term cognitive effects of traditional disinformation. Drawing on lessons from science misinformation, the paper argues for robust regulation, transparent training practices, and integrated resilience strategies to mitigate the emerging systemic risks posed by generative AI.

Published

2025-12-04

How to Cite

Dikaiakos, Marios D. 2025. “Generative AI as a Disinformation Tool: ‘The Hidden Sound of Things Approaching.’” Politikon: The IAPSS Journal of Political Science 26 (2): 76–80. https://doi.org/10.22151/politikon.61.CON3.

Issue

Vol. 26 No. 2 (2025)

Section

Conversations