
Microsoft's new AI creates super-realistic talking-head deepfakes, and it made Mona Lisa rap

Deepfakes of human facial expressions generated by Microsoft's VASA-1 model

Microsoft Research Asia has released a paper introducing VASA, a framework for generating lifelike talking faces. The researchers presented their model, dubbed VASA-1, which can generate realistic videos from just a single static image and a speech audio clip. The full paper is available on arXiv.

The results are impressive and surpass previous generative-AI tools for producing realistic deepfakes.

What is particularly interesting about VASA-1 is its ability to emulate natural facial expressions and a wide range of emotions, and to lip-sync with very few artifacts.

The researchers admit that the model – like all comparable models – still struggles with non-rigid elements, such as hair. However, even in this area it performs above average, mitigating one of the known red flags used to identify inauthentic, deepfake videos.

The technical cornerstone, Microsoft says, is an innovative holistic facial dynamics and head movement generation model that works in an expressive and disentangled face latent space. VASA-1 also offers real-time efficiency:

“Our method generates video frames of 512 × 512 size at 45fps in the offline batch processing mode, and can support up to 40fps in the online streaming mode with a preceding latency of only 170ms, evaluated on a desktop PC with a single NVIDIA RTX 4090 GPU.”

The tool based on the new model is very easy to use and even offers the ability to control “optional signals as condition,” meaning the user can set a main eye gaze direction, head distance, and emotion offsets.

VASA-1 also handles non-realistic inputs, such as artwork, so it can essentially bring paintings to life too.

The model can also make photos sing, rap, or talk in languages other than English. Among the examples, Microsoft presented a clip of the Mona Lisa rapping.

It is important to emphasize the potential harm that such technology could cause when used to generate content imitating actual people – not just politicians and celebrities, but also regular citizens. The good news is that Microsoft’s researchers are aware of the risk:

“We have no plans to release an online demo, API, product, additional implementation details, or any related offerings until we are certain that the technology will be used responsibly and in accordance with proper regulations.”

Microsoft acknowledges the possibility of misuse. However, it also highlights the potential benefits of the technology, ranging from enhancing educational equity and improving accessibility for individuals with communication challenges to offering companionship or therapeutic support to those in need.

It is worth mentioning that Microsoft's competitor, OpenAI, faces a similar dilemma. Just recently, OpenAI presented a powerful AI model for voice cloning but opted not to make it public, stating that any wider release of the technology should go hand in hand with policies and countermeasures to prevent misuse.
