Antisemitism is exploding online. And social media platforms aren’t just failing to stop it – they’re helping it spread.

I know this from two vantage points most people don’t get to combine. I studied data science at UC Berkeley, a direct pipeline into Silicon Valley. I spent years building AI systems. And today, as Miss Israel, I represent the Jewish people on a global stage.

That combination makes one thing impossible to ignore: the same blind spots that excuse antisemitism in elite institutions are now baked into the algorithms running our largest social media platforms.

Here’s what that looks like in practice.

I’ve been targeted by hundreds of thousands of hate comments. Comments saying six million Jews “wasn’t enough.” Graphic descriptions of what should be done to Jewish women. Calls for my death, for the deaths of my family, for another Holocaust. I’ve reported these comments repeatedly. They’re never removed.

Meanwhile, my own posts and comments – containing no harsh language, nothing remotely offensive, nothing even explicitly political – get deleted within hours. Mass reporting campaigns work. The algorithm responds to volume, not validity.

Melanie Shiraz after her win at Miss Universe Israel 2025. (credit: Edgar Entertainment via Melanie Shiraz)

This isn’t unique to me. Jewish journalists, activists, and public figures across industries report identical experiences: restrictions, removals, and shadowbans triggered not by actual violations but by coordinated attacks. These are targeted campaigns against prominent Jewish voices, and they’re working.

The algorithm is the amplifier

The standard defense is that platforms are neutral hosts, not publishers. They just connect people. They can’t control what users say.

From a data science perspective, this framing is deeply misleading.

Social media platforms don’t just display content. They actively rank, recommend, amplify, suppress, and remove it using systems optimized for one thing: engagement. Clicks, shares, comments, and watch time. Outrage and tribalism perform extraordinarily well under these metrics. Antisemitic content is no exception.
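
For readers who want the mechanics, here is a deliberately simplified sketch of what an engagement-optimized ranker looks like. The signals and weights are my own hypothetical placeholders, not any platform’s actual model; what matters is the shape of the objective.

```python
# Toy illustration of engagement-optimized ranking. The signals and
# weights are hypothetical placeholders, not any platform's real model.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_click: float    # predicted probability of a click
    p_share: float    # predicted probability of a share
    p_comment: float  # predicted probability of a comment

def engagement_score(post: Post) -> float:
    # Nothing in this objective asks whether the content is hateful;
    # it only asks whether people will react to it.
    return 1.0 * post.p_click + 3.0 * post.p_share + 5.0 * post.p_comment

feed = [
    Post("Benign holiday greeting", p_click=0.10, p_share=0.02, p_comment=0.01),
    Post("Outrage-bait conspiracy post", p_click=0.30, p_share=0.15, p_comment=0.25),
]

# Highest score ranks first: the inflammatory post wins.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.text}")
```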

These systems are also dangerously exploitable. Platforms like Meta have built moderation algorithms that can be gamed through coordinated reporting campaigns. State actors like Qatar and other well-funded players have figured this out and invested accordingly. But the vulnerability exists because of how these systems were designed.

The pattern is clear: volume trumps validity. Automated removals happen before human review. The most vile antisemitic content stays up indefinitely, while benign comments from Jews get deleted because enough people reported them. Whether this outcome was intended or not, it’s the reality these systems have created.
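
A toy model makes the exploit obvious. The threshold below is invented for illustration – no platform publishes its real rule – but a system that auto-removes on raw report count behaves exactly like this:

```python
# Toy model of report-volume moderation. The threshold is invented
# for illustration, but the failure mode is structural: volume is
# the only input, so validity never enters the decision.
def naive_auto_remove(report_count: int, threshold: int = 100) -> bool:
    return report_count >= threshold

# A benign post mass-reported by a coordinated campaign gets removed...
print(naive_auto_remove(report_count=5000))  # True
# ...while vile content drawing few organic reports stays up.
print(naive_auto_remove(report_count=40))    # False
```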

Engineers and researchers have been documenting algorithmic bias for years. Algorithms reflect the priorities and blind spots of the people who build them. When you reward engagement above everything else, and when you build moderation systems that respond to coordinated abuse rather than actual violations, predictable harms follow.

For Jews, this is catastrophic. We’re roughly 16 million people in a digital ecosystem of billions. Algorithmic systems reward scale and coordination. We can’t compete with mass-reporting campaigns. We can’t match the sheer volume of antisemitic content being produced and amplified. These exploitable systems make that imbalance fatal to Jewish voices online.

The result is a digital environment where antisemitism gets amplified while Jewish voices get suppressed. And history is clear about what happens when Jew-hate spreads unchecked. We’re watching it happen again.

What makes this indefensible is that platforms like Meta have positioned themselves as speech moderators while appearing to avoid the responsibilities that come with that role.

There are two coherent positions here: enforce your rules consistently and protect users from targeted hate – or stay out of it entirely. What appears to exist now is neither. Enforcement seems selective. Jewish voices get silenced through reporting systems while antisemitic content circulates freely.

Some of this has been enabled by outdated legal frameworks. Section 230 was written before algorithmic recommendation systems existed at this scale. Courts are starting to question whether companies should remain immune when their algorithms may contribute to spreading harmful content.

But platforms don’t need new laws to act responsibly. They could address the exploitable systems they’ve built. They could revise policies so that mass reporting campaigns don’t override content standards. They could invest in moderation that distinguishes between actual violations and coordinated abuse. They could be transparent about how antisemitic content is handled and why these campaigns keep succeeding.
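
None of this is technically exotic. Here is one minimal sketch of what distinguishing actual violations from coordinated abuse could mean in code – the heuristics (account age, burst detection) and the numbers are my own assumptions, not any platform’s pipeline:

```python
# Minimal sketch of report weighting that discounts coordinated
# bursts. The heuristics and thresholds are hypothetical
# placeholders, not any platform's real moderation pipeline.
from datetime import timedelta

def weighted_report_score(reports: list[dict]) -> float:
    """Each report: {'account_age_days': int, 'time': datetime}."""
    score = 0.0
    for r in reports:
        # Discount brand-new accounts, a common signature of brigading.
        weight = min(r["account_age_days"] / 365.0, 1.0)
        score += weight
    # Discount a burst: many reports inside a narrow window suggests
    # coordination rather than organic discovery.
    times = sorted(r["time"] for r in reports)
    if len(times) >= 2 and times[-1] - times[0] < timedelta(hours=1):
        score *= 0.2
    return score

def should_escalate_to_human(reports: list[dict], threshold: float = 50.0) -> bool:
    # Escalate to human review instead of auto-removing on raw volume.
    return weighted_report_score(reports) >= threshold
```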

So far, they haven’t.

Algorithms aren’t neutral. They reflect the values of the companies that deploy them. Right now, those values appear to be enabling one of humanity’s oldest hatreds to spread at a scale we’ve never seen before.

As a Jewish woman, a technologist, and someone representing my people globally, I’m raising this because the consequences aren’t abstract anymore. When platforms with Meta’s reach allow antisemitism to thrive while silencing Jewish voices, they’re not passively hosting speech. They’re actively shaping reality.

And that reality is becoming more dangerous every day.

The writer is the current Miss Israel, a data scientist, and a prominent Jewish voice on the global stage.