The end of deepfakes: Meta to label AI-generated images

Meta will tag AI-generated images, though some may still find ways to distribute them unlabeled.

An illustrative image of artificial intelligence. (photo credit: INGIMAGE)

Another step in the fight against fake news: Meta, the parent company of Facebook, Instagram, and Threads, has announced that it is developing tools to identify images generated by artificial intelligence. The company added that these images will be labeled on its social platforms in the coming months.

Nick Clegg, Meta's president of global affairs, said in a statement that the move comes in a year when many countries around the world are holding elections. "As the difference between human and synthetic content gets blurred, people want to know where the boundary lies," he added. "People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology."

Meta says it is working with industry partners to develop technology for identifying AI-generated content, and that the labels will be based on industry-standard indicators and will appear in all languages.

How AI-generated images have spread on the Internet

It is estimated that nearly 20 billion AI-generated images have been uploaded to the Internet since 2022, including fake images of public figures and private individuals shared without their consent, as well as politically motivated misinformation intended to distort the truth.

Social media giants and other online platforms have long known that they have no choice but to do something about it. In the past year, Britain passed an online safety law that makes it a crime to circulate fake images of a person without their consent.

Facebook's new rebrand logo Meta is seen on smartphone in front of displayed logo of Facebook, Messenger, Instagram, Whatsapp and Oculus in this illustration picture taken October 28, 2021 (credit: REUTERS/ DADO RUVIC)

US lawmakers have previously admitted that they have failed to protect internet users and that only legislation will compel social media platforms to act against the spread of fake news. Britain's move is expected to prompt other companies to establish standards of trust and oversight over published content.

Meta acknowledged that it still cannot identify all AI-generated content and that some will try to bypass its tagging technology. The company said it intends to keep looking for ways to detect such content, and it will also ask users to disclose when they share material created with artificial intelligence so that a label can be added to it.

In recent months, deepfake images have become increasingly sophisticated, to the point where it is sometimes difficult to tell whether they are real. Last January, for example, fake images of pop star Taylor Swift, believed to have been created using artificial intelligence, were uploaded to social media.

In Britain, a slideshow of eight images depicting Prince William and Prince Harry during the coronation of King Charles was circulated on Facebook, receiving over 78,000 likes. One of the images showed an apparent emotional hug between William and Harry following reports of a rift between the brothers. None of the eight images were real.


Another fake photo, also created using artificial intelligence, showed former US president Donald Trump after he was accused of election fraud.