A nurse carries a smiling young woman in her arms, with other women walking behind her and looking straight into the camera. Barbed-wire fences line both sides of the black-and-white image. The photographer stands slightly elevated; the composition is symmetrical and razor-sharp. However, this scene never happened.
From Auschwitz we know photographs and film footage showing emaciated prisoners, weary and exhausted, staring into the cameras of Soviet cameramen. The Auschwitz Album, with photographs shot by the SS in 1944, contains pictures of Hungarian Jewish women arriving at the camp. A short film sequence from the days after liberation shows Soviet nurses walking with a group of children through barbed wire. These children had miraculously survived the brutal medical experiments of the notorious camp doctor Josef Mengele.
Yet, the photo of the nurse carrying a young woman continues to circulate on social media. I found it on Facebook. That it evokes Auschwitz is no coincidence: It condenses several visual icons of the camp into a single image. That alone should raise suspicion. More puzzling still is the caption of a post uploaded on January 17 to the account “Threads of Time,” which places the image in the Ravensbrück concentration camp.
It claims to show a Polish nurse, Elżbieta Kowalska, being forced onto a death march by camp guards as Soviet troops approach. But why do the women look so confidently into the camera? Why is the nurse staged so heroically by a Nazi photographer? And why does the setting bear no resemblance to the actual topography of the women’s camp north of Berlin?
Little can be found about Kowalska beyond Facebook posts that repeat the story, sometimes illustrated with additional photos. In one, her name even appears on a nurse’s uniform, yet the face looks entirely different.
What can be verified is that very few photographs from Ravensbrück exist. Ninety-two images survived in an SS album from 1940/41, including views of the camp that show a completely different environment from the one in the Facebook post. The few photos taken immediately after liberation also depict entirely different scenes.
Several factors make it highly likely that this is an AI-generated picture: the image’s hyperrealistic quality, its near-perfect composition, the impression that it synthesizes well-known iconic images from concentration and extermination camps, and the frequency with which the account uploads Holocaust-era images.
The flood of AI-generated Holocaust images
For some time now, such AI-generated Holocaust images have been flooding social media. This “fake history,” as Auschwitz Memorial & Museum spokesperson Paweł Sawicki calls it, is particularly prevalent on platforms owned by Meta. Estimates suggest that around 40% of content shared on Facebook is AI-generated, yet very little of it is labeled as such, despite existing labeling requirements.
Sawicki warns that photorealistic AI images pose “a new danger of distorting the history of Auschwitz.” He assumes many people take such images as authentic because “photography has long been understood as a documentary medium, implying there was a real photographer who was present.”
Last week in Berlin, at an event of our educational project SHOAH STORIES, Michaela Küchler, secretary-general of the International Holocaust Remembrance Alliance (IHRA), similarly noted that AI-generated content often circulates without context and that users usually cannot tell it is fake. This is particularly troubling, she argued, because “digital space has fundamentally changed how people encounter Holocaust history.”
Against this backdrop, more than 30 memorial sites, foundations, and initiatives in Germany published an open letter for International Holocaust Remembrance Day, urging platform operators to act more decisively against such historical distortion. The letter demands that AI-generated content be labeled, that it be removed when rules are violated, and that users be given better tools to report visual fake history.
Challenges posed by AI images
Posts like the one about Ravensbrück undermine the work of memorial institutions. Because they often refer to real historical persons or circumstances, they promote forms of remembrance based not on learning and reflection but on affect and artificially generated emotion. More than that, such deepfakes increasingly shape perceptions of the past, as Konstantin Schönfelder from the Centre Responsible Digitality notes.
Since many users absorb historical information on platforms like Facebook, Instagram, or TikTok only in passing, there is a risk these images will enter visual memory unquestioned and unfiltered. It is therefore all the more important to promote AI literacy in addition to digital media literacy.
AI-generated Holocaust images also pose challenges for our archives. In recent years, scholars have intensified efforts to research the provenance of the few films and photographs we have from the Holocaust. Historians Tal Bruttmann, Stefan Hördler, and Christoph Kreutzmüller have closely analyzed and contextualized the Auschwitz Album.
At the Hebrew University of Jerusalem, our German-Israeli research project on the Archaeology of Iconic Film Footage from the Nazi Era is reconstructing the historical contexts and later uses of iconic Nazi-era film footage, from pogroms in Riga and Lviv to mass shootings in Latvia and the boycott of Jewish businesses in Germany.
Last year, historian Jürgen Matthäus clarified the background of the photograph long known as “The Last Jew in Vinnitsa,” dating the execution it depicts to July 28, 1941, and identifying the site as the citadel of Berdychiv in today’s Ukraine. He also used so-called analytical AI, which identifies and evaluates patterns in data.
Unlike generative AI, which synthesizes old material into something new, analytical AI can help establish relationships within data: recognizing objects or people in historical images or, as in our joint European project Visual History of the Holocaust, analyzing camera angles, movements, and compositions.
Artificial image generation
Generative AI, by contrast, also relies on pattern recognition, but not on an empirical, analytical approach. It produces variations on statistically likely, familiar possibilities. Media scholar Roland Meyer points out that artificial image generation does not truly create images but synthesizes existing material. The process condenses its training data: visual collections that are often loosely assembled and superficially curated.
The resulting images foreground visual clichés and stereotypes, combining familiar icons into what Meyer calls a “cliché amplifier,” a statistical average of representational conventions. This is why images like the nurse between barbed wire appear so symmetrical. They lack the imperfect, the contingent, even the propagandistically staged qualities that characterize many authentic photographs from the Holocaust.
What historians call the “perpetrator perspective,” evident in many Nazi photos and films, simply cannot be generated by AI. Instead, these images reveal the prompt behind them: visualizations of generic descriptions. Hence, Meyer calls this effect “generic pastness.”
Today’s artificial image generation is based on diffusion models. During training, data are gradually overlaid with random noise until they become unrecognizable; the model then learns to reverse this process. The flood of such synthesized Holocaust images now ironically suggests that our visual archives themselves risk diffusing. This alters the logic of image archives and poses the danger of contamination: Synthetic images risk becoming new historical reference points.
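To make that training setup concrete, here is a minimal Python sketch of the forward-noising step alone, the part that renders an image unrecognizable; the linear noise schedule, parameter values, and toy image are illustrative assumptions, not the internals of any particular generator. A trained model learns to run this process in reverse, starting from pure noise and denoising step by step until a plausible image emerges.

```python
import numpy as np

def forward_diffuse(x0, t, T=1000, beta_min=1e-4, beta_max=0.02):
    """Corrupt a clean image x0 with Gaussian noise up to step t.

    Uses the closed form x_t = sqrt(alpha_bar_t) * x0
    + sqrt(1 - alpha_bar_t) * noise, with a linear beta schedule.
    (Schedule values are illustrative, not from any specific model.)
    """
    betas = np.linspace(beta_min, beta_max, T)   # per-step noise levels
    alpha_bar = np.cumprod(1.0 - betas)[t]       # cumulative share of signal kept
    noise = np.random.randn(*x0.shape)           # random Gaussian noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# A toy 64x64 "image": by the final step it is statistically pure noise,
# the state from which the generator later learns to work back.
image = np.random.rand(64, 64)
noisy = forward_diffuse(image, t=999)
```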
French filmmaker Claude Lanzmann once justified his rejection of historical photographs of Nazi crimes in his monumental film Shoah by calling them “images without imagination.” This judgment applies even more to AI-generated photorealistic images. They do not unsettle or disturb. They are affected images that have an effect only because viewers recognize what they already know and expect.
These images do not encourage engagement or challenge perception. They are artificial icons that contribute to the diffusion of historical reality and the erosion of historical truth. It is all the more important, then, that users themselves defend the memory of the victims. Beneath the deepfake of the Polish nurse in Ravensbrück, one comment now reads: “True story, phony AI crap photo.”
The writer is an associate professor at the Department of Communication & Journalism and in the European Forum of the Hebrew University of Jerusalem.