Facebook stymied even the “Mossad.” As soon as (what appeared to be) Israel’s intelligence agency started a Facebook page and quickly garnered 3,000 likes, Facebook took it down.
“I got the email and the first sentence is basically: Pages are for business and promotion, and we also remove things that are hateful,” said Shawn Eni, the founder of the page.
The page doesn’t really belong to the Mossad: It’s a satirical Mossad account that rapidly became a popular pro-Israel social media platform known for its unconventional Israel advocacy. Now boasting more than 50,000 Twitter followers, the “Mossad” has made headlines for its humorous trolling of the likes of Hamas, Linda Sarsour and whichever Israel-hater is making real-time headlines.
Eni, an Israeli-Canadian project manager who runs the account in his spare time, could not figure out, despite his superlative reconnaissance skills, why exactly the page was not allowed. Because it’s a satirical page? If hateful, toward whom? Hamas? “They give you absolutely no recourse,” Eni told The Jerusalem Post Magazine. “I looked on Google how to get it back, and it sends me back to Facebook ‘help pages,’ and you can make an appeal, but that link is broken. Nothing!”
Suddenly, he was locked out of his personal Facebook account. For several hours he sat in “Facebook jail,” a term used in jest – and in anger – by pro-Israel advocates who in recent months have found their pro-Israel posts flagged or taken down by Facebook and their posting privileges temporarily revoked.
Anyone who, like me, has a community of friends or colleagues involved in political pro-Israel advocacy on Facebook or other social media has likely associated with (or aided and abetted) these social media “criminals.”
Within this community, accusations (and evidence) are mounting that social media companies are engaged in coordinated targeting of pro-Israel pages, personalities and accounts, particularly those on the Right.
Aside from feeling unjustly silenced, some are also losing a source of livelihood.
When it comes to Facebook, some of these activists feel vindicated in blowing the whistle on what they believe are unfair and dishonest business practices, especially after the recent Cambridge Analytica scandal revealed how Facebook allowed user information to be harvested and exploited. Facebook CEO Mark Zuckerberg was grilled in the Congressional hot seat about the company’s practices.
In full disclosure, I too have been a social-media “criminal.” A few years ago, I sat in “Facebook jail” for creating a parody page called “Gaza Girls,” a satirical, fictitious Palestinian girls’ band that made its splash with its first hit single, “Kill All the Jews.” (YouTube took the music video down, but Vimeo kept it alive in the name of free speech, to more than 300,000 views.) A few months ago, YouTube knocked off another video three years after it was originally posted and added a strike to my account. In that video, I publicized a “ditty” I received from a Palestinian “rapper” from Beit Jala who threatened to kill and rape me. The reason given: hate speech. I appealed, arguing via an impersonal form that the video educated viewers about hate speech. YouTube upheld the decision.
Avi Abelow, CEO and cofounder of the Israel Video Network, a company that aggregates pro-Israel videos, is a full-time pro-Israel activist who said his business has been heavily affected by recent changes in social media moderation policies. Whereas once his videos highlighting Israel’s war on Islamic terrorism went viral, they now pick up very slowly, with some being flagged or demonetized. Recently, this Efrat resident and New York native had to downsize his Jerusalem office due to decreased social-media revenue.
His quibbles with Facebook go back to the summer of 2016, when a picture-post that said “It’s called ‘Israel’ not ‘Palestine’” was taken down.
When Abelow’s appeal was rejected, Dov Lipman, a former Knesset member and then an executive at the World Zionist Organization, got involved and sent a letter to Facebook via this newspaper. This time, Facebook stated that the post did not violate its “community standards,” apologized, and reinstated Abelow’s original post.
Of course, Facebook is not the only alleged “silencer.”
YouTube, Google and Twitter have been engaged in an increasingly public spat with the conservative community, which includes stridently pro-Israel, pro-Trump and anti-jihad entities. Not a week goes by, at least in my Facebook feed, without my hearing of another friend, colleague or organization involved in pro-Israel advocacy heavily critical of the Palestinian camp being suddenly sanctioned by social media.
Just a few of the Twitter “criminals” who have appeared in my Facebook feed in recent months: anti-jihad activist and cartoonist Bosch Fawstin, who was knocked off the platform until he led a public appeal demanding an explanation; and Canary Mission, an organization that seeks to expose anti-Israel personalities, which recently lost Twitter posting privileges only to get them reinstated after public outcry, with Twitter citing an “error.”
Then of course there’s YouTube. A few months ago, Aussie Dave, an immigrant to Israel from Australia and founder of Israellycool, a pro-Israel blog to which I sometimes contribute (he hides his identity to avoid harassment from antisemites), reached out to his readers, upset and worried, because he was set to lose his account and some 200 videos he had uploaded over the years. He had no idea why.
“No one on Google was reachable,” Dave said.
But the most public feud over Google censorship involves PragerU, which filed a lawsuit against the tech behemoth in late 2017. PragerU is not an actual university but an online educational site founded by conservative commentator Dennis Prager that seeks to spread conservative ideas via brief video lessons, in part to counteract the left-wing bias its founders believe overruns university campuses. PragerU decided to sue once it concluded that the restrictions were deliberate.
“About a year ago, before we launched the lawsuit, we discovered that 11 videos were being restricted,” said PragerU CEO Marissa Streit, speaking from PragerU’s office in southern California.
YouTube cited “community guidelines” which offered a number of possible reasons: from being pornographic to inciting violence to constituting hate speech.
“Obviously, this must be a mistake,” Streit said, recalling the discovery. PragerU’s videos usually feature prominent authors, educators and professors who tackle hot-button issues like gun control, the Middle East conflict or climate change with animated, clean-cut information.
PragerU’s initial inquiries to YouTube and Google (which owns YouTube) went unanswered. A petition signed by a quarter of a million people finally got YouTube’s attention.
Executives assured PragerU that they would manually watch the videos. They did – and upheld the restrictions.
“Not only that, more videos were added to the list,” Streit said. Eleven videos grew to more than 30, including videos on Israel’s moral right to exist, Islamic terrorism and Muslim antisemitism. PragerU’s main cause of action argued that YouTube is a public forum, like a town square. As such, it must allow those who use it to express themselves in accordance with the First Amendment. As proof of discrimination, it offered a list of videos on similar themes from a liberal point of view that remained online unscathed.
Last month, the court dismissed the federal claims, saying that PragerU had not sufficiently demonstrated that Google is a “state actor” that must abide by the First Amendment. PragerU was granted permission to file an amended complaint.
Another lawsuit challenging social-media platforms has been filed on behalf of outspoken anti-jihad blogger and author Pamela Geller and affiliates. Geller’s Facebook reach has been slashed by tens of thousands in recent months.
In September 2017, YouTube deleted her channel, only to reinstate it after public outcry. Geller accuses tech firms of being “sharia compliant” by enforcing Islamic blasphemy laws which forbid criticism of Islam and Muhammad.
She’s not legally challenging the platforms directly because her team believes they are protected by Section 230 of the Communications Decency Act (CDA) which allows platforms to act as “good Samaritans” by self-regulating offensive material. According to Streit, while Section 230 was intended to protect content platforms from being treated as publishers who could be sued over user-generated content, it is now being exploited by firms to censor content they simply don’t like.
“The problem that I see with the Prager lawsuit is Google and YouTube are private companies. When you sign on to use their platform, you agree to their terms of service, which PragerU did. Our lawsuit differs greatly,” Geller said.
“Section 230 provides immunity from lawsuits to Facebook, Twitter and YouTube, thereby permitting these social media giants to engage in government-sanctioned censorship and discriminatory business practices, free from legal challenge. We are suing to lift that immunity.”
But a judge dismissed the lawsuit, ruling that it targets the wrong entity.
“We are tweaking it,” Geller said.
And PragerU is tweaking its own. “Far from an unexpected setback, we look forward to arguing the merits of our case in both state and Federal Court, as well as the 9th Circuit, or even the Supreme Court if that is what it takes to ensure every American’s freedom of speech is protected online,” Streit said in a statement.
Internet legislation, particularly regarding how the web balances democratic ideals, commercial concerns and public safety, has opened new ground for the justice system and legal academia.
Jane Bambauer, a professor of law at the University of Arizona, doubts that the courts will categorize YouTube as a public forum, nor does she believe such a ruling would be desirable.
“The motivation for CDA Section 230 is a reminder of why we should be reluctant to treat private companies the same as the government, even when they are very popular,” she told the Magazine. “The goal of Section 230 was to encourage, rather than discourage, content hosts to do whatever curation they deemed appropriate so that a platform could provide as wide a range of content as it wanted while still retaining the ability to purge some of the content that the company considers inappropriate. Of course, this means that companies will have the discretion to make these purging decisions in an ideologically biased way, or in a way that is biased along some other dimension. But without a policy like Section 230, the result may be worse. Given a limited choice between a speech free-for-all that can become a cesspool, or a much narrower platform defined by heavy content restrictions, companies may choose the latter, and we could have more Balkanization on the Internet than we currently do.”
Bambauer believes PragerU’s most promising route is its complaint against deceptive advertising.
“If we define the problem with ideologically biased screening not based on the bias itself but based instead on a company’s persistent claims that it takes a neutral stance, then the statements about its neutrality and commitment to free, unfettered access to content might be deceptive and might prevent competitors – who really do offer a freer platform – from attracting users,” Bambauer said.
In her forthcoming paper “The New Governors: The People, Rules, and Processes Governing Online Speech” in the Harvard Law Review, Kate Klonick, a PhD candidate in law and resident fellow at the Information Society Project at Yale University, has also laid out a case for allowing Internet companies to curate content without government regulation.
“Interpreting online platforms as state actors, and thereby obligating them to preserve the First Amendment rights of their users, would not only explicitly conflict with the purposes of Section 230, it would likely create an Internet nobody wants,” she wrote.
But exactly how content gets called into question is unclear, and Klonick describes moderation methods as “historically opaque.” Who sets community standards? Who monitors content and decides which posts violate those standards? Are restrictions a response to flagging or based on employee discretion? Are some decided by algorithms that scan for certain undesirable words or phrases? Responses from Google and Facebook to questions about bias against pro-Israel content evasively and vaguely cited “safety,” without addressing safety from what or from whom.
“People flag millions of videos and more and more people are using this feature (the number of flags per day is up over 25% year-on-year),” wrote Paul Solomon, Google’s director of communications for European, Middle Eastern and African emerging markets.
“There has always been debate about what goes online and stays online. These are issues we’ve been looking at for years. We work to strike a thoughtful balance – supporting free expression but also recognizing that some types of content can be harmful and have no place in our services, e.g., content that incites violence or promotes terrorism.”
Solomon would not comment on pending lawsuits.
Addressing accusations of bias, Facebook wrote: “We want everyone, including in places where there is strong political disagreement or conflict, to be able to talk on Facebook about what matters to them. But we don’t allow them to do so in a way which calls for harm against others.”
As for how it flags and removes unwanted material, Facebook responded: “Our community operation teams around the world – which grew by 3,000 people last year to our existing 4,500 – work 24 hours a day and in dozens of languages to review reports and determine the context,” Facebook wrote. “In addition to our teams, we sometimes use technology and AI [artificial intelligence] but even in those cases AI can’t catch everything. Not everything is always straightforward and algorithms are not yet as good as people when it comes to understanding context.”
Through her research and interviews with company executives, Klonick describes a three-tiered moderation process at Facebook, in which the initial review is governed by a strict set of measurable standards, such as the amount or nature of nudity or violence. The final tier – which is made up of lawyers and policy-makers who are based at company headquarters – rules on particularly sensitive or escalated cases.
“After content has been flagged to a platform for review, the precise mechanics of the decision-making process become murky,” she wrote, adding that social media companies do not make their internal moderation guidelines public.
“The lack of accountability is also troubling in that it lays bare our dependence on these private platforms to exercise our public right,” Klonick wrote.
“Besides exit or leveraging of government, media or third-party lobbying groups, users are simply dependent on the whims of these corporations.”
Likening the moderators’ decision-making to judicial jurisprudence, Klonick suggests that these platforms voluntarily enact “technological due process,” in which users would have the right to notice and hearings and enjoy more transparency when it comes to rulings.
Brian Thomas, an immigrant from London who serves as a social media consultant for conservative websites and an Israellycool contributor, believes that factors involved in restricting content are based on a mixture of ideological views, market forces and societal pressure. Anti-Israel activists, he claims, are extremely organized and partly responsible for automated bans and restrictions that may be imposed on pro-Israel content.
“It always has to do with arousing the suspicions and annoyance of ‘social justice warriors’ and the BDS [boycott, divestment and sanctions] crowd,” he told the Magazine. “You post something, they see, and they come after you. I think a lot of it is targeted, coordinated, flagging.”
For example, when Israellycool exposed and mocked the public antisemitic proclamations of an Israel-hater, the post was deleted from Facebook.
Aussie Dave surmised that the antisemite in question launched a digital attack on him, but he was bothered that Facebook didn’t reinstate it.
“I lost posting privileges even as my own private person for a few days,” Dave said. “They punished me. On the other hand, we complained about the most vile antisemitic pages and we got responses saying they don’t go against community standards. That, to me, smells really bad.”
According to Thomas, YouTube is more trigger-happy than Facebook, in part because video quantity doesn’t necessarily translate into more profit.
“YouTube suffers from an oversupply of video and an undersupply of advertisers,” Thomas said. Several weeks ago, YouTube announced a new policy that would prevent users from monetization unless they achieved a certain audience threshold.
“I think Facebook is much more protective of large pages than Twitter,” he said, accusing Twitter of being overrun by “social justice warriors.”
As evidence, he cites a Project Veritas undercover investigation in which current and former Twitter employees admitted on camera that Twitter targets right-wing users out of political bias, specifically out of antipathy for Trump.
Silicon Valley executives are increasingly coming under fire for allegedly imposing a liberal political outlook. James Damore, a former Google employee, has filed a wrongful termination lawsuit accusing the company of harassing and firing him for his conservative viewpoints.
“It’s such a bubbled environment with such a strong leftist ideology that, to them, conservatives are haters,” Streit said.
Those affected by restrictions noticed that they multiplied after Trump’s election, leading Abelow of Israel Video Network to dub it “the post-Trump purge.”
“Because if you look at the quotes from the heads of Silicon Valley after Trump won, they blame themselves,” Abelow said.
Klonick also described pressure on these platforms to more heavily moderate the kind of reports that tipped the scales in Trump’s favor, a.k.a. “fake news.” The Cambridge Analytica scandal has particularly angered Trump opponents who believe the privacy breach had been used to help Trump’s election.
On January 19, Zuckerberg announced that, to ensure the news is of high quality, “I’ve asked our product teams to make sure we prioritize news that is trustworthy, informative and local. And we’re starting next week with trusted sources.” Abelow believes this is an attempt to curb the reach and influence of conservative content.
To combat both censorship and loss of revenue, Abelow has shifted his business model to rely more heavily on direct engagement with his constituency through email lists, along with content that people seek out rather than find by chance in their social media feeds.
“We always knew the day would come in which we’d be cut off, so we prepared by building up a subscription list of hundreds of thousands of subscribers,” he said.
He’s also started a “dontshutusup.com” campaign to raise awareness of social media censorship, believing public pressure could influence moderation policies.
Klonick has identified several factors that could influence moderation policies: government request, media coverage and third-party civil society groups.
“The media does not have a major role in changing platform policy per se, but when media coverage is coupled with either (1) the collective action of users, or (2) a public figure, platforms have historically been responsive,” she wrote.
This may explain why some high-profile users, such as Geller, could activate their followings to lead public complaints and succeed in overturning the decisions.
The “Mossad” has likewise proven that Facebook bans can be defeated. A few days after Eni’s parody page was deleted, he was invited to reinstate it, provided he change the name to indicate that it’s a parody account.
“I won’t doubt there was intervention,” he said.
“A lot of people messaged me: They know ‘someone.’ I know I didn’t ask for intervention.”
But he’d like to think that – as the agency looking out for the Jewish people – the intervention was divine.