Animation Evolution: From Paper to Digital to 3D to Live Stream

Technology advances improve the creative process

Animation has changed enormously in the last 40 years. Originally, animators drew everything on paper — a time-consuming and expensive process.

“Obviously, the amount of work required to do that is extraordinary, especially when you need as many as 24 drawings per second,” says David Simons, senior principal scientist at Adobe. “You also have to be very skilled to be able to depict the motion and bring a character to life.”

The first revolution in animation was the arrival of digital options with computer-generated imagery (CGI). These new animation tools had their challenges as well. They were complicated to use, and rendering and playback times were constrained by limited computer memory. Today, things are different. Digital animation is advancing quickly, allowing today’s artists to create work just as authentic as anything you’d find on paper all those years ago, and much faster.

Combining technology with human creativity has helped the medium evolve even further.

Tools like Character Animator, released in 2015 as part of Adobe After Effects, have enhanced animators’ natural creativity and marked an important industry milestone. With Character Animator, designers bring 2D characters to life. A professional animator, or any designer, can create a layered character in Adobe Photoshop or Adobe Illustrator, bring it into a Character Animator scene, act out the character’s movement in front of a webcam, and create an animated scene in real time.

Animators can make characters mimic their facial expressions, creating animations that more closely mirror the real world.

Character Animator uses a webcam to track facial expressions — from raised eyebrows to moving lips — as well as head movements. It also records the voice using the computer’s microphone. So when the animator, actor, or whoever is in front of the webcam looks surprised, happy, or angry, the character does, too. Even subtle facial expressions show up instantly, along with recorded dialogue and other actions triggered by a few simple keystrokes.
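Character Animator’s own tracking is proprietary, but the general idea — a camera feed driving a character’s expressions — can be sketched with off-the-shelf tools. The short Python example below is only an illustration under assumed tooling (the third-party OpenCV and MediaPipe libraries, which Adobe does not necessarily use): it reads the webcam, estimates how open the mouth is and how raised a brow is, and prints values that a simple puppet rig could consume.

```python
# Illustrative sketch only: not Character Animator's implementation.
# Assumes the third-party packages opencv-python and mediapipe are installed.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        # Landmarks 13/14 are the inner upper/lower lip; 159 the upper left eyelid;
        # 105 a point on the left brow. Distances are in normalized image units.
        mouth_open = abs(lm[13].y - lm[14].y)
        brow_raise = abs(lm[105].y - lm[159].y)
        # A real rig would map these values onto puppet handles; here we just print them.
        print(f"mouth_open={mouth_open:.3f}  brow_raise={brow_raise:.3f}")
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```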

All of this makes it possible to create engaging characters whose animations are driven by real-world performance in real time.

Live Animation on TV

The next wave of animation is happening right now. Live-streaming is a groundbreaking new feature of Character Animator. It received prime-time attention in May 2016 after a team from The Simpsons approached Adobe to create a live performance of their popular show.

The Simpsons team used Adobe Character Animator to improvise a three-minute live segment for the iconic show.

“They apparently searched ‘live character animation’ online and found us,” David says.

The Simpsons animation team wanted to create a live episode that would feature Homer Simpson answering and engaging with live phone calls.

In the program’s first-ever live segment, Homer shared his take on current events and responded directly to questions from fans calling in. Dan Castellaneta, Homer’s voice actor, delivered the live performance, and Character Animator made the magic happen through real-time lip sync and keyboard-triggered animations.

“Character Animator was very easy to work with. I found it flexible and very fast,” David Silverman, consulting producer at The Simpsons, says. “Everything went flawlessly.”

Character Animator has “starred” on other TV programs, too, including Chelsea and The Late Show with Stephen Colbert. Cartoon Donald Trump and Cartoon Hillary Clinton both made appearances on the latter during the 2016 U.S. presidential campaign.

Katherine Isabelle Weber, the lead animator on staff for Chelsea, is particularly excited about the possibilities of using the tool with social media.

“We make a lot of digital content that never even goes on the show,” she says. “To have a tool like Character Animator, which lets us talk to the public directly using fun characters, is really impressive. It also helps us incorporate more animation in the show and gives it a digital animated identity separate from the live-action content.”

Character Animator is improving constantly. The April 2017 release, for example, simplifies workflows, enabling users to create dynamic full-body walking animations, make precision adjustments to lip sync, and take advantage of an improved workspace interface, among other things.

It’s expanding what’s possible for animators and other creatives.

“People don’t usually associate animation with speed and simplicity,” says Bill Roberts, senior director of product management at Adobe. “Traditional animation takes a huge amount of time to do well. It’s not easy to convey emotion and action, and if you design too fast, you risk losing all those great ‘in-between’ moments. Character Animator is a game changer.”

Machine Learning: Pushing the Boundaries

Machine learning, this year’s marketing buzzword, is also making its way into animation. Machine learning is a form of artificial intelligence that allows computers to mimic human intelligence and build predictive models. As computers ingest more data, the technology makes animation more intelligent, intuitive, and efficient.

Adobe Sensei, the machine learning and artificial intelligence technology built into many Adobe products, powers Character Animator in several ways. For example, the automatic lip sync technology listens to a voice and picks the right mouth shapes in real time to animate the character.

“That process involves machine learning,” David notes. “The product listens for and analyzes 66 different phonemes — the different sounds that distinguish one word from another — and figures out which mouth shape to replicate.”
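Adobe’s phoneme analysis isn’t public, but the final step David describes — choosing a mouth shape for each detected sound — can be illustrated with a toy sketch. The phoneme symbols, mouth-shape names, and timings below are hypothetical placeholders, not Character Animator’s actual set of 66 phonemes.

```python
# Illustrative sketch: mapping detected phonemes to mouth shapes ("visemes").
# The symbols and groupings are a simplified, hypothetical subset.
PHONEME_TO_MOUTH_SHAPE = {
    "P": "M", "B": "M", "M": "M",      # lips pressed together
    "F": "F", "V": "F",                # teeth on lower lip
    "AA": "Aa", "AE": "Aa",            # open vowels
    "OW": "Oh", "UW": "W-Oo",          # rounded vowels
    "IY": "Ee", "EH": "Ee",            # wide vowels
    "SIL": "Neutral",                  # silence
}

def mouth_shapes_for(phoneme_stream):
    """Convert a timed phoneme stream into timed mouth-shape keyframes."""
    keyframes, last_shape = [], None
    for start_time, phoneme in phoneme_stream:
        shape = PHONEME_TO_MOUTH_SHAPE.get(phoneme, "Neutral")
        if shape != last_shape:          # only key a frame when the shape changes
            keyframes.append((start_time, shape))
            last_shape = shape
    return keyframes

if __name__ == "__main__":
    # e.g. the word "mama": M-AA-M-AA, then silence (times in seconds)
    stream = [(0.00, "M"), (0.10, "AA"), (0.22, "M"), (0.32, "AA"), (0.45, "SIL")]
    print(mouth_shapes_for(stream))
    # [(0.0, 'M'), (0.1, 'Aa'), (0.22, 'M'), (0.32, 'Aa'), (0.45, 'Neutral')]
```

In a real pipeline, the phoneme stream would come from audio analysis of the microphone input rather than a hand-written list.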

Animation entertains and engages audiences with ongoing innovation from Adobe.

Soon, the capabilities could evolve even further. Project Puppetron, a technology previewed as a Sneak at Adobe MAX this year and powered by Adobe Sensei intelligent services, could let animators apply artistic styles to photos of faces, quickly producing puppets that can be used directly in Character Animator.

“Puppetron is using Adobe Sensei technology to automatically take any art style and apply it to any other face,” David says. “With Puppetron’s improved puppet animation, you can use a sample image — or even a picture of yourself — and it will adjust to the graphic style of your choice and animate in real time, based on your facial expressions.”

Stylized Facial Animation is another exciting development in digital animation. Using the same technology as Project Puppetron, this Adobe Research project can apply input styles (faces, currently) to video with natural-looking results.

As these examples show, machine learning can help make animations look and feel much more realistic, enabling creatives to reflect the real world much more easily in their art.

A Team Approach to Innovation

All these advances derive from a team approach to innovation at Adobe, according to David.

“We are a very small product development group, but we work closely with folks from Adobe Research,” he says. “The whole character animation effort started from a collaboration between Research and Advanced Product Development. It’s been a very fruitful way to innovate.”

What are some of the upcoming advances we might see from these collaborators in next year’s Sneaks at Adobe MAX?

“One obvious thing is full-body tracking,” David notes. “The algorithms are improving to the point where full-body tracking with a regular camera is becoming accurate enough that it can be useful.”

“Looking ahead, I don’t know that animation itself will look very different,” David adds. “I think the real changes will come in the creative process, making it easier and more accessible to more people. I’m most interested in enabling more people to do stuff that maybe looks like traditional animation, but is made in a completely new and modern way.”