Animate Characters with Auto Lip-Sync Powered by Adobe Sensei

Adobe Sensei, our artificial intelligence and machine-learning technology, powers numerous features and services across our suite of products to streamline your workflows and expand your creative possibilities. Within the products you already know and love, these features help eliminate tedious tasks, freeing you up for truly creative pursuits and helping you deliver powerful digital experiences.

Adobe Animate, our premier tool for creating animations, is used in diverse fields such as character animation, games, ads, and e-learning content, to name a few. Character animators often need to make their characters appear to talk, either to each other or directly to the audience. Simulating this by hand means spending an inordinate amount of time mapping mouth poses to the inflections in the audio. We collaborated with the Adobe Sensei team on a solution: with Auto Lip-Sync, animators can now generate these mouth poses automatically.

You can see for yourself how easy it is to use.

Please let us know what you think about this new feature and how it can be enhanced. We are also keen to know other similar problems that could potentially be solved using machine learning or AI.

As always, you can reach out to me at _ajshukla@adobe.com_.


https://www.adobe.com/creativecloud.html