International Affairs Forum

Around the World, Across the Political Spectrum

Provenance or Propaganda? AI-Generated War Imagery and the Geopolitics of Truth


By Alena Gribanova, Independent Researcher

The summer of 2025 witnessed a watershed moment in modern warfare—not the deployment of advanced weaponry or unprecedented military tactics, but the systematic weaponization of artificial intelligence to fabricate reality itself. During the June escalation between Israel and Iran, synthetic media flooded information channels with such sophistication and scale that the conflict became what analysts now term the first "AI war". This marked a fundamental shift in how nations conduct psychological operations, shape public perception, and contest the very foundations of truth during international crises (BBC News, 2025).

Transforming an Artificially Constructed Reality into a Tool of Pressure

Network structures associated with Iran launched a disinformation campaign that leveraged the technological capabilities of artificial intelligence. Posts from the campaign drew more than 100 million views on social media, a reach into the virtual environment that significantly surpassed all previously documented digital-manipulation practices in armed and political conflicts, demonstrating a new level of synthetic communicative pressure. Instead of the recycled and crudely edited images that dominated earlier propaganda, the 2025 escalation produced a stream of hyper-realistic video scenes created with advanced generative tools, including Google's Veo 3, a model optimized for synthesizing believable audiovisual scenes (Memory in the Digital Age, 2024). The Iranian operation deployed five distinct classes of synthetic content, designed to demonstrate power while simultaneously undermining Israel's authority. Videos depicting staged destruction in Israeli cities spread widely, including edited footage of Tel Aviv's Ben-Gurion Airport allegedly hit by Iranian missiles, with TikTok and X as the primary target platforms (International Institute for Counter-Terrorism, 2025). State media in Tehran broadcast algorithmically generated images of Israeli F-35s reportedly shot down over Iran, while other clips depicted Supreme Leader Ali Khamenei's symbolic dominance over both Israel and the US leadership (Humanities and Social Sciences Communications, 2025).

Platform-Dependent Memory Formation

The mass diffusion of AI-generated content amid the Israel-Iran escalation demonstrated how platform-specific ranking logics, coupled with engagement metrics, construct collective memory (Memory in the Digital Age, 2024). The platform stack acted not as a neutral archive but as an active editor of public remembrance: through personalized feed distribution and the relaying of behavioral signals, it shifted the optics of the event. By prioritizing affective power over credibility in their recommendations, social platforms became unwitting accomplices in the circulation of false narratives (Humanities and Social Sciences Communications, 2025). Ranking systems pushed posts with high emotional intensity into visible positions without any check for authenticity. The phenomenon extends beyond immediate tactical advantages in information warfare (Policy Insider AI, The liar's dividend, 2024). Visual narratives, particularly those depicting conflict and crisis, contribute to what scholars term the "sealing" of collective memory: the process by which repeated exposure to specific imagery becomes embedded in social consciousness as representative of historical events (International Institute for Counter-Terrorism, 2025). When these visual narratives are artificially generated but presented as documentary evidence, they threaten to corrupt the historical record in unprecedented ways (National Library of Medicine, Memory in the digital age, 2024).

The Liar's Dividend and International Discourse

Within this context, the concept of the "liar's dividend" carries particular weight for international relations, and for diplomatic discourse in particular. As synthetic media become increasingly sophisticated, government officials and other international actors increasingly exploit the confusion between authentic and fabricated content to evade responsibility or to substitute narratives, as practical cases show (Policy Insider AI, The liar's dividend, 2024). A similar dynamic was observed during the India-Pakistan conflict of May 2025, when both countries used content generated entirely by artificial intelligence to influence public opinion (Humanities and Social Sciences Communications, 2025). The presence of synthetic media in such contexts creates what researchers call "plausible deniability": the ability of actors to reject authentic evidence by claiming it is the result of AI manipulation (Iyer, Assessing AI and the future of armed conflict, 2024). Conversely, "plausible credibility" allows supporters to accept fabricated content that confirms their existing beliefs, further reinforcing polarized narratives (Policy Insider AI, The liar's dividend, 2024). This erosion of epistemic trust extends beyond the immediate zones of conflict and degrades broader international discourse (EDMO, 2025).

Legal Frameworks and Geneva Convention Applications

Current international humanitarian law provides only fragmentary guidance on the treatment of materials generated by artificial intelligence systems in theaters of military operations. The Geneva Conventions distinguish between permissible "ruses of war," deceptive tactics aimed at misleading the enemy, and prohibited "perfidy," acts that betray an adversary's confidence in protections guaranteed under international law. Legal experts often place deepfakes in the first category, but artificially generated content such as false surrender orders, fabricated humanitarian messages, or synthetic footage intended to lure civilians into dangerous areas falls within the prohibition on perfidy. Even so, this legal distinction does not address the threats to civilians posed by synthetic-media campaigns that blur the line separating legitimate military objectives from protected populations (EDMO, 2025).

Conclusion

The 2025 escalation between Israel and Iran marked a turning point in information warfare, demonstrating how AI-generated imagery can obscure the provenance of events and reshape perception at the speed of information itself. Synthetic media, now realistic enough to pass for documentary footage, has turned conflict coverage into a battle for reality, with narratives contested online as fiercely as territory is contested on the ground. This environment feeds the "liar's dividend," in which the widespread circulation of fakes allows genuine evidence to be dishonestly dismissed (Policy Insider AI, The liar's dividend, 2024). If authentic documents can plausibly be branded as synthetic, accountability and verification are weakened, and the evidentiary foundations on which international law and the international order rest are undermined, to the detriment of any diplomacy.

That is why technical measures must complement legal reform. Fair access to advanced detection tools for those engaged in fact-finding on the front lines is essential to prevent an asymmetry of power in which well-resourced actors can create high-quality forgeries that under-resourced investigators cannot expose. Without this balance, the right to determine reality risks becoming a monopoly rather than a public trust.

Alena Gribanova holds a Master's degree in International Relations from the University of Pavia in Italy. During her academic journey, she completed an internship at the research center Fondazione Eni Enrico Mattei in Milan, specialising in sustainable development and international affairs. She also holds a bachelor's degree in international relations from Lomonosov Moscow State University.


References

  1. BBC News. (2025, June 22). Israel–Iran conflict unleashes wave of AI disinformation.
  2. Camilli, E. (2025, August 12). The use of generative AI deepfakes in the Israel–Iran conflict. Hozint – Horizon Intelligence.
  3. European Digital Media Observatory. (2025, July 13). The first AI war: How the Iran–Israel conflict became a battlefield for generative misinformation.
  4. Evolution of mediated memory in the digital age: Tracing its development. (2023). Humanities and Social Sciences Communications.
  5. International Institute for Counter-Terrorism. (2025, June 22). Iranian TikTok campaign seeks to shape war perceptions using AI.
  6. Iyer, P. (2024, September 10). Assessing AI and the future of armed conflict. Tech Policy Press.
  7. Lowy Institute. (2025, September 22). Deepfakes and nuclear weapons: Why AI regulation can’t wait. The Interpreter.
  8. Memory in the digital age. (2024, January 11). National Library of Medicine.
  9. Policy Insider AI. (2024, September 15). The liar’s dividend: Insights from a Kroll report on the impact of AI in politics.
