US receives thousands of reports of AI-generated child abuse content as risk grows

A hooded man holds a laptop computer as a blue screen with an exclamation mark is projected on him in this illustration picture taken on May 13, 2017. (photo credit: REUTERS/KACPER PEMPEL/ILLUSTRATION/FILE PHOTO)

The US National Center for Missing and Exploited Children (NCMEC) said it had received 4,700 reports last year about content generated by artificial intelligence that depicted child sexual exploitation.

The NCMEC told Reuters the figure reflected a nascent problem that is expected to grow as AI technology advances.

In recent months, child safety experts and researchers have raised the alarm about the risk that generative AI tech, which can create text and images in response to prompts, could exacerbate online exploitation.

A rising volume of child exploitation material

The NCMEC has not yet published the total number of child abuse content reports from all sources that it received in 2023, but in 2022 it received reports of about 88.3 million files.

"We are receiving reports from the generative AI companies themselves, (online) platforms and members of the public. It's absolutely happening," said John Shehan, senior vice president at NCMEC, which serves as the national clearinghouse to report child abuse content to law enforcement.

The chief executives of Meta Platforms, X, TikTok, Snap and Discord testified in a Senate hearing on Wednesday about online child safety, where lawmakers questioned the social media and messaging companies about their efforts to protect children from online predators.

Researchers at the Stanford Internet Observatory said in a June report that generative AI could be used by abusers to repeatedly harm real children by creating new images that match a child's likeness.

Content flagged as AI-generated is becoming "more and more photo-realistic," making it challenging to determine if the victim is a real person, said Fallon McNulty, director of NCMEC's CyberTipline, which receives reports of online child exploitation.

OpenAI, creator of the popular ChatGPT, has set up a process to send reports to NCMEC, and the organization is in conversations with other generative AI companies, McNulty said.