At the Augmented World Expo on Tuesday, Snap teased an early version of its real-time, on-device image diffusion model that can generate vivid AR experiences. The company also unveiled generative AI tools for AR creators.
Snap co-founder and CTO Bobby Murphy said onstage that the model is small enough to run on a smartphone and fast enough to re-render frames in real time, guided by a text prompt.
Murphy said that while the emergence of generative AI image diffusion models has been exciting, these models need to be significantly faster to be impactful for augmented reality, which is why Snap's teams have been working to accelerate its machine learning models.
Snapchat users will start to see Lenses with this generative model in the coming months, and Snap plans to bring it to creators by the end of the year.
“This and future real time on device generative ML models speak to an exciting new direction for augmented reality, and is giving us space to reconsider how we imagine rendering and creating AR experiences altogether,” Murphy said.
Murphy also announced that Lens Studio 5.0 is launching today for developers with access to new generative AI tools that will help them create AR effects much faster than currently possible, saving them weeks and even months.
AR creators can build selfie Lenses by generating highly realistic ML face effects. Plus, they can generate custom stylization effects that apply a realistic transformation over the user's face, body and surroundings in real time. Creators can also generate a 3D asset in minutes and include it in their Lenses.
In addition, AR creators can generate characters like aliens or wizards with a text or image prompt using the company's Face Mesh technology. They can also generate face masks, textures and materials within minutes.
The latest version of Lens Studio also includes an AI assistant that can answer questions that AR creators may have.