AI like ChatGPT can make US antisemitism worse, most Americans say - poll

75% of Americans are deeply concerned that generative artificial intelligence like ChatGPT could be "misused or abused" to incite hate and harassment.

Artificial Intelligence words are seen in this illustration taken March 31, 2023 (photo credit: REUTERS/DADO RUVIC/ILLUSTRATION/FILE PHOTO)

Three-quarters of Americans believe generative artificial intelligence (GAI) could be used to incite more antisemitism across the United States, according to an Anti-Defamation League (ADL) survey released on Monday.

The data, which was gathered from the responses of 1,007 American adults, was presented in a US Senate hearing about GAI.

Concerns surrounding Generative Artificial Intelligence 

According to the poll, 75% of Americans are deeply concerned that GAI could be "misused or abused" to incite hate and harassment. Additionally, 75% of respondents believe that GAI could create biased content.

“If we have learned anything from other new technologies, we must protect against the potential risk for extreme harm from generative AI before it’s too late,” said Jonathan Greenblatt, ADL CEO. “We join with most Americans in being deeply concerned about the potential for these platforms to exacerbate already high levels of antisemitism and hate in our society, and the risk that they will be misused to spread misinformation and fuel extremism.”

The poll found that 70% of the respondents believe that GAI will contribute to worse antisemitism, extremism and hate in the US. In addition, 75% believed that the tools will produce content that is biased against marginalized groups.

Should we be wary of the risks that powerful Artificial Intelligence systems could pose to humanity? (illustrative) (credit: PEXELS)

“There’s no doubt that many exciting technological advancements are possible with the increased access to GAI,” said Yael Eisenstat, vice president of the ADL Center for Technology and Society. “But this technology may be abused to further accelerate hate, harassment and extremism online. As lawmakers and industry leaders prioritize innovation, they must also address these challenges to prevent their misuse. We look forward to working with policymakers, industry leaders and researchers as they establish standards for GAI.”

Nearly 90% of respondents believe that private companies should prevent their GAI tools from creating harmful content, such as extremist or antisemitic imagery.

Further, 77% of the respondents feared that GAI may be used to radicalize and recruit more extremists.

What is Generative Artificial Intelligence?

GAI refers to tools that can create new text, images, videos and audio. The technology requires very little skill to use, making it accessible to the masses.

According to The Guardian, 96% of deepfakes are pornographic, with much of that content superimposing celebrities' faces onto the bodies of porn performers.

While the majority of deepfakes are pornographic, the technology has also been used in other criminal ventures. According to The Guardian, fraudsters used a deepfake of a chief executive's voice to trick a UK energy firm into transferring £200,000 to a Hungarian account.

The technology has also been used to create fake social media accounts for foreign espionage.

The need to police the technology

OpenAI CEO Sam Altman spoke at a Senate hearing, urging lawmakers to regulate AI.

“We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models,” Altman said, according to CNN.

Senator Richard Blumenthal also addressed the hearing, though not entirely in his own voice: to demonstrate the technology's risks, he played a deepfaked recording that delivered remarks written by ChatGPT, with his voice cloned by a GAI trained on his previous speeches.

Blumenthal added that the deepfake could just as easily have produced damaging statements.