In past decades, we grew accustomed to thinking that wars are decided by weapons, intelligence, and diplomacy. Yet in today’s era, where public perception has become a strategic resource no less important than gas, oil, or water, it has become clear that the decisive front no longer lies only on the ground but also across social media.
The battle for consciousness has become a domain in which artificial intelligence and sophisticated algorithms shape global public opinion, and especially the way the world understands, interprets, and experiences the Israeli-Palestinian conflict.
Israel, long a focal point of international media attention, now finds itself at the center of a digital storm in which truth does not always prevail, but rather the narrative that looks better on the screen.
The rapid development of AI creates unlimited opportunities, but alongside them, unprecedented dangers. These technologies can generate in seconds fake images, videos, recordings, and “evidence” that appear entirely authentic. For states and organizations with anti-Israel agendas, this capability becomes a powerful weapon.
Terror organizations and hostile actors are already using these technologies. They distribute fabricated images of “civilian casualties” that never occurred, animated videos presented as facts, and edited versions of real events designed to spark emotional outrage against Israel.
The fight for attention
One of the most dangerous challenges Israel faces is the fight for attention. Algorithms on TikTok, Instagram and X/Twitter reward content that triggers strong emotion – anger, fear and shock. Extreme anti-Israel videos spread at a rapid pace, even when they are built on complete falsehoods.
By contrast, factual and reasoned Israeli public diplomacy loses the contest from the outset – not for lack of professionalism or flawed messaging, but because it cannot evoke the level of emotion required to survive in an ocean of disinformation.
This creates a situation in which the battle for consciousness is inherently unfair. The side spreading bold, dramatic lies receives algorithmic preference over the side presenting factual truth.
The implications are dramatic: thousands of teenagers around the world shape political opinions based on edited clips, short captions or AI-generated images.
Alongside the blatant anti-Israel and antisemitic falsehoods that were exposed and led to the dismissal of BBC executives, even the social media channels of respected news organizations help spread unreliable or misleading information.
The battle for consciousness has become global, decentralized, and without borders. Any online user – even a 15-year-old boy in the Philippines – can become a “journalist” telling the world what is happening in Israel, even if he cannot locate Jerusalem on a map.
This dynamic serves anti-Israel actors well. Countries like Iran, Qatar, and Turkey invest vast sums in distributing hostile content. They create thousands of fake accounts (bots), operate content farms, and use AI to craft a global narrative portraying Israel as aggressive, cruel and murderous, while the terrorism of Hamas is completely blurred.
In the Israel-Hamas War, we saw this even more clearly. Before the IDF completed any operation, fake videos were already going viral online. Narratives were being created in real time, before the facts were known – or had even occurred.
Research estimates that at least 30% of political content online about Israel is produced or amplified by nonhuman accounts. This means we are not fighting only a hostile public, but an organized, sophisticated machine with enormous reach and negligible cost.
In many cases, Western media outlets – pressed to publish quickly and appear up-to-date, or shaped by well-known ideological tendencies – adopt narratives born in the incubators of fake content. The result is an echo chamber in which a small lie posted on TikTok reaches the opening segment of major news broadcasts within minutes.
To counter these falsehoods, official spokespersons alone are not enough. Israel needs a network of students, diplomats, intellectuals, influencers, Jews, Arabs, and Christians who support Israel – each one capable of becoming a multiplier of cognitive influence. Israel must also lead a global campaign to define AI-generated fake content as a “cognitive weapon.” Just as cyber weapons are monitored and identified, so too must disinformation be clearly labeled and regulated.
The battle for consciousness is not a passing phenomenon. It will grow, expand, and become more central than any physical battle. Israel’s security needs are no longer confined to its borders; they spill into the digital world, where images can shape history.
The writer is the CEO of Radius 100FM, an honorary consul and deputy dean of the Consular Diplomatic Corps, president of the Israel Radio Communications Association, and formerly an intelligence monitor at IDF Radio and a television correspondent for NBC.