Meta unveils major AI innovations, including WhatsApp integration

The tech giant introduces a series of artificial-intelligence breakthroughs rolling out across all of its platforms, including WhatsApp, featuring real-time text-to-image creation.

(photo credit: SHUTTERSTOCK)

Meta, formerly Facebook, continues its innovation streak with a slew of artificial-intelligence advancements set to roll out across the company's platforms, including WhatsApp. Among them: a personal AI assistant, Meta AI, integrated into the company's applications; Meta Llama 3, a new open-source language model with industry-leading performance that powers the Meta AI assistant; Imagine, a new AI tool that turns text into images in real time and can animate them, available in beta in the United States; and new safety measures to ensure open and responsible innovation.

The Meta AI personal assistant expands to more applications and, gradually, to more countries

Meta announces tonight that its personal artificial-intelligence assistant, Meta AI, will gradually become available worldwide through the Facebook, Instagram, WhatsApp, and Messenger applications, as well as through Meta's Ray-Ban smart glasses and Meta Quest. The assistant, launched tonight, will be available in English in more than a dozen countries outside the US (including Australia, Canada, Ghana, India, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe), and this is just the beginning.

The Meta AI assistant is built on the new Meta Llama 3 language model, already considered one of the world's leading models in terms of performance, and will gradually become available for free worldwide. The assistant will also be accessible through a web browser, at the address meta.ai. It can handle a variety of tasks, from finding restaurants and recommending vacations to helping with studies, solving math problems, and offering inspiration for interior design.

Meta AI will also be available in search on Facebook, Instagram, WhatsApp, and Messenger, letting people access real-time information from the internet without closing or switching apps. People can also reach Meta AI while scrolling their Facebook feed: if they come across an interesting post, they can ask Meta AI for more information directly from the post itself.

New Open-Source Language Model: Meta Llama 3

Meta releases the first two models of Meta Llama 3, now available for widespread use. This early release includes pretrained language models with 8 billion and 70 billion parameters, supporting a wide range of use cases. The new generation of Llama demonstrates industry-leading performance across a broad set of benchmarks compared with other open models, and is comparable to commercial models.

The initial models are launched as open source for the developer community, as part of Meta's longstanding commitment to open innovation in artificial intelligence. In doing so, the company embraces the open-source ethos of releasing early, giving the community access to the models while they are still in development. The text-based models released today are the first in the Llama 3 collection; the goal is to make Meta Llama 3 multilingual and multimodal, and to keep enhancing the model with a variety of capabilities and tasks (such as summarization or code writing).

New AI Tool for Real-Time Image Creation: Imagine

Additionally, Meta introduces tonight a new tool for creating images from text prompts in real time. The tool will launch in beta in the United States, within WhatsApp and the web version of Meta AI.

(credit: SHUTTERSTOCK)

When people type the command Imagine in a chat with Meta AI, they can compose a prompt, and the image starts appearing as they type. The generated image changes with each additional letter, letting people see in real time how artificial intelligence generates and brings their vision to life. The tech giant also showcases new capabilities for enhancing images with the AI assistant, which can turn images into animations and even GIF files.

New Safety Measures for Responsible Innovation

Meta is committed to developing artificial intelligence responsibly and to helping others do the same. The company shares tonight how it has refined the way the Meta AI assistant responds to prompts on political or social issues, adding guidelines to ensure the assistant does not present a single opinion or viewpoint but instead summarizes the various relevant perspectives on a topic. Meta also used red teams and external experts to identify unexpected ways people might misuse Meta AI or the Llama 3 model, and has embedded safeguards at both the prompt and output levels of the assistant.

(credit: SHUTTERSTOCK)

Additionally, Meta has expanded the training dataset for Llama 3 so that over 5% of the model's pretraining data consists of high-quality non-English data covering more than 30 languages. Llama 3 is trained on a variety of publicly available data, and Meta removed training data from sources known to contain large amounts of personal information. After training, Meta conducted both automated and manual evaluations, alongside red teams and external experts, to understand the model's performance across a range of risk areas.

In the coming months, Meta will release additional versions of Llama 3 with new capabilities, including multimodality, multilingual conversation, and stronger overall performance. Meta is currently training a model with 400 billion parameters; any final decision on releasing it as open source will follow safety evaluations in the coming months.