Benjamin Netanyahu stands in a café, coffee cup in hand, facing the camera. It looks like just another of his social media videos addressing the Israeli public, an attempt to boost morale during wartime.
Then he begins to speak, smirking as he references a rumor spreading online that he was killed in an Iranian strike. The video is meant to put the claim to rest. He jokes about the rumor and carries on as if the absurdity of it all should be obvious.
But the video does not end the speculation. Within hours, new posts begin dissecting it, frame by frame.
Some users claim the clip is AI-generated. Others point to supposed anomalies – the movement of his coffee cup, the blur of the image, the way his teeth appear to shift from one moment to the next – as proof that something is awry.
In the information war surrounding the Iran conflict, even the act of proving you are alive can become part of the misinformation cycle.
Across social media, fabricated images and AI-generated videos circulate alongside authentic footage of the conflict – clips showing fictional missile strikes on Tel Aviv, surreal scenes of world leaders dancing together, or fabricated battlefield destruction.
The result is a growing sense of uncertainty: What is real, what is synthetic, and how can anyone tell the difference?
According to Tehilla Shwartz Altshuler, a senior fellow for media and tech policy at the Israel Democracy Institute, the phenomenon reflects a broader transformation in how information moves through digital ecosystems.
“What we see in wars is often a mirror of what we see in the general information ecosystem,” Shwartz Altshuler said in a recent interview with The Jerusalem Report.
She has described the aftermath of the October 7 Hamas attacks as the first truly “digital war” because of how quickly information moved through social media platforms.
“Atrocities were filmed in real time, posted on Telegram, and by noon they were already on X. By the evening, they were on television,” she said.
Today, just two and a half years later, the information environment has evolved even further. While earlier misinformation often relied on miscaptioned images or recycled footage from past conflicts, generative AI can create entirely new scenes.
“This is the main characteristic of AI-generated content,” she said. “It’s not only taken out of context. Sometimes it’s actually generated from scratch.”
Synthetic war imagery
The types of content circulating online range from crude fabrications to more elaborate productions.
Some clips purport to show devastating missile strikes on Tel Aviv – videos that analysts and journalists were quick to identify as AI-generated.
Others attempt to manipulate political narratives, such as posts suggesting that Netanyahu had died, based on supposed visual “proof” from a televised speech.
In one instance, users argued that Netanyahu must have been dead because a still frame from a video appeared to show him with six fingers – a common artifact of AI-generated imagery.
Shwartz Altshuler said that such claims illustrate how misinformation evolves in the AI era.
“We call it the ‘liar’s dividend,’” she said.
On the one hand, fabricated content can persuade people that events occurred when they did not. On the other hand, the existence of AI manipulation allows real events to be dismissed as fake.
“When you cannot sort authentic content from machine-generated content,” she explained, “it allows people to convince others of things that never happened, but it also allows people to claim that real things didn’t happen.”
Primitive fakes
Despite the flood of synthetic content, much of what currently circulates online remains relatively crude.
“Most of what we see on social media that was generated by AI is what we call ‘slop,’” Shwartz Altshuler said.
The videos often contain telltale flaws: distorted faces, unnatural movement, extra fingers, or disappearing objects. These imperfections enable viewers to identify the clips as fake. But that apparent detectability may itself be dangerous.
“It creates a false feeling of literacy,” she warned.
People may believe they can easily identify synthetic media because current examples are flawed. However, more advanced AI systems are rapidly improving.
“Tomorrow, if someone uses more sophisticated models,” she said, “you won’t be able to detect it.”
War propaganda
But crude or not, AI-generated imagery is no longer the tool of hostile actors alone. According to Shwartz Altshuler, political leaders and governments around the world increasingly experiment with it as part of their own messaging strategies.
“AI is being used by both sides,” she said.
In recent months, for example, US President Donald Trump has shared AI-generated images depicting himself in fantastical scenarios, such as riding animals or appearing as a superhero.
While those images were clearly satirical, they normalized the idea that leaders can shape reality through fabrication. During wartime, the same tools carry much higher stakes.
Iran and its allies have long used recycled images or footage from unrelated conflicts in influence campaigns. Generative AI now adds a new layer to that ecosystem.
Platforms and profit
Another driver of the phenomenon is economic. Many creators producing AI-generated war videos are not necessarily motivated by political agendas. Instead, they are seeking attention and advertising revenue.
“People are monetizing these slops,” Shwartz Altshuler said. “They don’t care about the outcome of the war. They just want to make money because people are consuming information about the war.”
Social media platforms are increasingly under pressure to address the problem. The platform X recently announced that accounts spreading AI-generated war content without labeling it as synthetic could be removed from monetization programs for up to 90 days. Shwartz Altshuler argues that platforms must go further.
“They need to mark this content as AI-generated or remove it if it’s not properly labeled,” she said.
Journalism in the synthetic era
For journalists, the rise of synthetic imagery poses new challenges.
“The job of journalists today is even more important than it was a decade or two decades ago,” Shwartz Altshuler said.
Verification now requires new skills and tools – from reverse image searches to specialized detection software capable of identifying AI-generated content.
“The job of a journalist is to create the provenance of reality,” she said. News organizations must also adapt their own practices, she added, by watermarking content and alerting audiences when manipulated material is detected.
Crisis of reality
The implications extend far beyond wartime propaganda.
If synthetic media becomes indistinguishable from authentic footage, the consequences could reshape fundamental institutions. “If this becomes reality, the stock exchange won’t be able to function, democracy won’t be democracy, and commerce won’t work,” Shwartz Altshuler said.
Societies, she argued, may eventually need new forms of digital regulation requiring the origin – or “provenance” – of content to be traceable. Such rules would not censor speech, she said, but simply require transparency about whether images and videos are authentic or AI-generated.
Until then, moments like the Netanyahu video may become increasingly common: leaders appearing on camera not only to address the public but to prove that they are alive at all.
In a war where AI-generated missile strikes on Tel Aviv can circulate alongside authentic footage of real attacks, the line between documentation and fabrication is becoming harder to see.
The fog of war may increasingly be manufactured. And in a world where images can be created as easily as they are recorded, seeing may no longer be believing.■