Java Post Adds to Its Invisible Effects Toolbox with Content-Aware Fill in Adobe After Effects
Putting Content-Aware Fill for video to the test.
Image courtesy of Goldrush Entertainment and Minds Eye Entertainment.
It’s that time of year again, when Adobe raises the bar with amazing new features in its professional video and audio tools. In the months leading up to the big reveal, some very talented creative professionals road-tested features in the latest beta version of Adobe After Effects and provided valuable feedback to the product teams.
One customer, Jack Tunnicliffe of Java Post, is a veteran of the film and television industry, having worked in nearly every aspect of film and television production. After purchasing Adobe After Effects in 1995, he started focusing exclusively on post-production, namely visual effects and color correction work. Today, he’s a go-to person in the industry for fixing shots and doing invisible effects work.
Putting Content-Aware Fill for video to the test
Over the past few months, Jack has been testing Content-Aware Fill in After Effects, and he even used it on an upcoming Hollywood film, A Score to Settle, starring Nicolas Cage and Benjamin Bratt. His focus on fixing shots is one of the things that drew him to Content-Aware Fill for video, which is powered by Adobe Sensei, our artificial intelligence and machine learning technology.
“Repairs are a big part of any high-end production,” says Jack. “For me, Content-Aware Fill in After Effects is a huge tool in my toolbox.”
Almost every project Java Post works on requires some form of removal, such as logos, objects, or shadows. The studio does most repairs by tracking clean plates and painting. Now with Content-Aware Fill, Jack can accomplish many of these tasks directly in After Effects or by using it in conjunction with Adobe Photoshop to create reference frames to help handle lighting changes in shots.
“When people start learning how to do removal, they often think it’s simply a matter of painting or cloning items away using adjacent frames. But in reality, the clone work has to be so precise that it’s an impossible task, as even the smallest changes in paint or texture chatter in playback,” explains Jack.
With Content-Aware Fill, the dream of easily painting or cloning out items is now a reality. One scene in A Score to Settle uses squibs, which are miniature explosive devices, to simulate Nicolas Cage being shot. In the original scene, the squibs were visible under his jacket—especially when the sun reflected off them. To fix the shot, Jack created a reference frame in Photoshop of the jacket with the bulges removed. He then brought the frame back into After Effects and ran Content-Aware Fill, which adjusted the other frames to match.
“Content-Aware Fill in After Effects stayed true to the reference shot of what the jacket was supposed to look like, and it did it over multiple frames,” says Jack.
Images courtesy of Goldrush Entertainment and Minds Eye Entertainment.
In another shot, Jack used a reference frame to account for lighting changes in a clip. A bottle of pills fell under a bed and the cap landed directly in the light source, but in the master shot used by the film, the cap was not there, so it had to be removed. A hand passing through the light source complicated the fix further. The team refined the shot by using several reference frames to correct the lighting.
“Tracking a clean plate or freeze frame doesn’t account for lighting changes and takes another form of manipulation to accomplish,” explains Jack. “Content-Aware Fill can provide the data for the lighting changes through reference frames in the shot. When we posted the corrected clip for producer approval, it was immediately approved.”
Images courtesy of Goldrush Entertainment and Minds Eye Entertainment.
Traditional removal and repair work gets a makeover
Content-Aware Fill is also useful for fixing blemishes on skin, also known as digital makeup. In the past, Jack painstakingly tracked every blemish and then blurred and painted each shot. With Content-Aware Fill, he can track regions of a person’s face, create reference frames in Photoshop to clean up the blemishes, and then give After Effects the information to follow the lighting changes as the person’s head moves in a scene.
“Content-Aware Fill in After Effects in the right hands is much more powerful than Nuke and digital painting,” Jack says. “I could process a shot in one-tenth the time in After Effects.”
Jack also discovered that Content-Aware Fill can save the day if a movie is rejected during the QC process at the lab. This can happen when a camera hasn’t been black-shaded properly or has dead pixels. These pixels, which are visible on the big screen, can go unnoticed during post-production. By putting a mask around the region where the pixels exist and running Content-Aware Fill, Jack easily filled in the spots with information from the surrounding pixels.
Images courtesy of Java Post.
“In one shot I fixed there were almost 400 dead pixels, but they weren’t visible until we blew the shot up to 400%,” says Jack. “Once I knew where they were, I created a mask for one shot that ultimately worked for every shot because the sensor never moves. Content-Aware Fill just uses the surrounding information to fill in the pixels in each frame – it’s amazing.”
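Content-Aware Fill itself is a feature inside After Effects, so Jack’s workflow involves no code. But the idea behind the dead-pixel repair he describes (build one mask over the bad spots, then fill every frame from the surrounding pixel data) can be roughly sketched with OpenCV’s much simpler inpainting as an analogue. The file names and pixel coordinates below are hypothetical placeholders, not anything from Jack’s shot.

```python
# Rough analogue of the dead-pixel repair described above, using OpenCV's
# inpainting (a far simpler algorithm than Content-Aware Fill in After Effects).
import cv2
import numpy as np

def build_dead_pixel_mask(shape, dead_pixels, radius=2):
    """Mark each dead pixel (plus a small halo) in a single-channel mask.
    Because the sensor never moves, the same mask works for every frame."""
    mask = np.zeros(shape[:2], dtype=np.uint8)
    for x, y in dead_pixels:
        cv2.circle(mask, (x, y), radius, 255, -1)  # filled white dot
    return mask

# Hypothetical dead-pixel coordinates, found by zooming into the plate.
dead_pixels = [(812, 440), (1203, 97), (1560, 882)]

frame = cv2.imread("frame_0001.png")                 # one frame of the shot
mask = build_dead_pixel_mask(frame.shape, dead_pixels)

# Fill the masked spots from the surrounding pixels; in practice this would
# run frame by frame, reusing the one mask for the whole shot.
fixed = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("frame_0001_fixed.png", fixed)
```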
As Jack works with movie and television clients to meet their exacting requirements, he envisions the latest After Effects feature becoming an integral part of his everyday work. With just one keystroke, he can remove objects and avoid hours previously spent cleaning up shots. “In an industry where both speed and quality are paramount, Content-Aware Fill is invaluable,” he says.
Learn more about Content-Aware Fill in After Effects.