A groundbreaking new report lays bare the scale and mechanics of the phenomenon, raising urgent questions about the responsibility of social media platforms in the global surge of online hate.
Titled “Engineered Exposure: How Antisemitic Content Is Pushed and Amplified to Millions Across Instagram,” the study reveals that Instagram’s algorithm is systematically feeding users antisemitic material, often without them actively seeking it out.
During a focused 96-hour monitoring period, ARC researchers identified 100 antisemitic posts that were directly recommended to accounts by the platform. These posts generated more than 5.3 million likes and 3.8 million shares, with an estimated potential reach of up to 280 million users.
According to the report, the content follows familiar and dangerous patterns, ranging from conspiracy theories to overt demonization, mirroring propaganda tropes that have historically fueled antisemitism across generations.
Crucially, the study underscores that this is not simply a case of user-generated content existing on the platform. Rather, it points to algorithmic amplification, where Instagram's own systems actively promote such material to maximize engagement.
This distinction, experts say, is critical.
“Exposure is not accidental; it is engineered,” the report warns, highlighting how recommendation engines can transform fringe hate into mainstream visibility.
The findings come at a time of heightened global concern about antisemitism, particularly online. Over the last few years, monitoring groups have reported sharp spikes in antisemitic incitement across social media platforms, including posts celebrating violence and spreading classic anti-Jewish conspiracies.
“Simply put, this is evidence of a broad systemic failure on the part of Instagram and Meta,” said CAM CEO Sacha Roytman Dratwa. “When a platform actively recommends content that dehumanizes Jews to mass audiences, we are no longer talking about a simple oversight or a mistake in the algorithmic design. We are talking about infrastructure that normalizes hatred at scale that must be addressed immediately.”
The ARC’s research adds to a growing body of evidence suggesting that social media algorithms, designed to prioritize engagement, can inadvertently, or in some cases systematically, promote extreme and polarizing content.
The CAM report calls for urgent action from technology companies, regulators, and civil society, urging greater transparency around algorithmic systems and stronger enforcement against hate content.
It also reinforces a broader message: that the fight against antisemitism in the 21st century cannot be separated from the digital ecosystems in which ideas, and hatred, now spread at unprecedented speed and scale.
As governments and institutions grapple with how to regulate online platforms, the ARC’s findings serve as a stark warning.
In today’s information environment, the question is no longer whether antisemitism exists online, but how powerfully it is being amplified, and by whom.
The full report is available at https://combatantisemitism.org/wp-content/uploads/2026/03/Instagramantisemitismreport.pdf