Audio2Face 2023.1 Release Highlights#
Audio2Face 2023.1 comes packed with new features, functionality, and workflow tutorials. We’ve expanded our AI models, further improving Audio2Face’s multi-language support. In addition to our existing Mark AI model, we’ve added a new Chinese female AI model called Claire, giving users more options for generating realistic facial animations.
2023.1 also includes Live Link and Avatar Streaming, providing the ability to stream blendshape animation from Audio2Face to Unreal Engine so you can drive a MetaHuman character’s facial animation. Tongue blendshapes are supported on both the Mark and Claire AI models.
The streaming audio player now engages Audio2Emotion, giving developers and users the full performance capabilities of Audio2Face. In addition, we have improved the REST API with more comprehensive controls and the ability to export “emotion keyframes” from Audio2Emotion for use in other applications.
New AI model Claire#
Meet Claire, our new multi-language Asian female AI model.
Introducing the new AI Models Panel#
New to this release is the AI Models panel, which provides more choice and a simplified workflow for getting started in the application. Choose which AI model you would like to use with which audio player. You can also quickly switch between the available trained networks for each AI model where multiple networks are present.
Live Link and Avatar Streaming#
A2F can now output a live stream of blendshape animation data to external applications that drive a character’s facial animation performance.
See the tutorial here for setting up the Streaming Workflow.
Audio2Face Avatar Stream Application Tutorial.
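For a rough sense of what such a live stream carries, here is a minimal sketch of a receiver that accepts per-frame blendshape weights over a TCP socket. The port number, the newline-delimited JSON framing, and the field names are assumptions made for illustration only and are not the actual Live Link wire protocol; follow the tutorial and the User Manual for the supported setup.

```python
# Illustrative receiver for a stream of per-frame blendshape weights.
# NOTE: the port, the newline-delimited JSON framing, and the "time"/"weights"
# field names are assumptions for this sketch, not the actual A2F Live Link
# wire protocol.
import json
import socket

HOST, PORT = "0.0.0.0", 12030  # assumed port; match it to your own configuration

def serve() -> None:
    with socket.create_server((HOST, PORT)) as server:
        conn, addr = server.accept()
        print(f"Connected: {addr}")
        buffer = b""
        with conn:
            while True:
                chunk = conn.recv(4096)
                if not chunk:
                    break
                buffer += chunk
                # Peel complete newline-terminated JSON frames off the buffer.
                while b"\n" in buffer:
                    line, buffer = buffer.split(b"\n", 1)
                    frame = json.loads(line)
                    # e.g. frame = {"time": 0.033, "weights": {"JawOpen": 0.42, ...}}
                    print(frame.get("time"), len(frame.get("weights", {})))

if __name__ == "__main__":
    serve()
```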
Livelink UE4 Plugin#
This plugin enables live streaming of blendshapes from Audio2Face to Unreal Engine.
Please see the User Manual for details on installation and setup of the plugin and workflow.
Blendshape Streaming to Unreal Engine - MetaHuman#
Audio2Face to MetaHuman Blendshape Streaming PART 1
Audio2Face to MetaHuman Blendshape Streaming PART 2
Improved Blendshapes with Added Tongue Support#
We’ve improved blendshape solver performance and added a tongue blendshape solution for both the Mark and Claire AI models. You can find these assets in the A2F Samples tab.
Export Emotion Keyframes#
The Export tab now provides the ability to export emotion keyframes in JSON format.
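To give a sense of how an exported file might be consumed downstream, here is a minimal sketch that reads emotion keyframes from JSON in another application. The file name and the "frames"/"time"/"emotions" keys are assumed for illustration; inspect a file exported from your own scene for the exact schema that Audio2Emotion writes.

```python
# Illustrative reader for an exported emotion-keyframe JSON file.
# NOTE: the file name and the "frames"/"time"/"emotions" keys are assumptions
# for this sketch; check a real export for the exact schema.
import json

with open("a2e_emotion_keys.json", "r") as f:  # hypothetical export path
    data = json.load(f)

for key in data.get("frames", []):
    time_sec = key.get("time")          # keyframe time in seconds (assumed)
    emotions = key.get("emotions", {})  # e.g. {"joy": 0.8, "anger": 0.0, ...}
    strongest = max(emotions, key=emotions.get) if emotions else None
    print(time_sec, "->", strongest)
```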
Improved REST API#
Additional routes to query and set A2F live streaming
Additional routes to export the emotion keys as a JSON file
Additional Audio2Emotion routes
Additional routes for the regular player
Updated routes for the regular player
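As a rough illustration of how routes like these might be called from a script, here is a minimal sketch using Python and the requests library against the headless REST API. The port and every route path and payload shown are placeholders for illustration only; consult the interactive API documentation served by the headless app for the actual routes and request bodies.

```python
# Illustrative calls to the Audio2Face headless REST API.
# NOTE: the port, route paths, and payload fields below are placeholders;
# use the interactive API docs served by the headless app for the real ones.
import requests

BASE = "http://localhost:8011"  # assumed default port for the headless app

# Query overall status (illustrative route).
print(requests.get(f"{BASE}/status").json())

# Set a track on the regular player and play it (illustrative routes and payloads).
requests.post(f"{BASE}/A2F/Player/SetTrack",
              json={"a2f_player": "/World/audio2face/Player", "file_name": "voice.wav"})
requests.post(f"{BASE}/A2F/Player/Play",
              json={"a2f_player": "/World/audio2face/Player"})

# Export Audio2Emotion keyframes as JSON (illustrative route and payload).
requests.post(f"{BASE}/A2F/A2E/ExportEmotionKeys",
              json={"output_path": "a2e_emotion_keys.json"})
```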
A2F 2023.1 Sample files#
All sample files are available in the Examples tab within the A2F application, or via the Content Browser under localhost at NVIDIA/Assets/Audio2Face/Samples_2023.1/.
Set up a Custom Character Transfer#
We’ve created an updated tutorial for setting up a custom character transfer:
Additional Tutorials#
ARKit Blendshape Part 1 - Ferret Model Overview
ARKit Blendshape Part 2 - Ferret Character Setup Overview