Research scientist Jingwan (Cynthia) Lu first came to Adobe Research as an intern while pursuing her doctorate in computer science at Princeton. She completed four internships with the research team, publishing four research papers in leading venues and producing a landmark technology, Real Brush, which became a digital painting feature in Adobe Photoshop Sketch. Now a full-time researcher, she’s on a quest to develop sophisticated machine-learning-powered tools for creatives.
What do you focus on at Adobe Research?
In graduate school, I figured out that I wanted to work on digital painting, but from a novel, data-driven perspective. That’s different from traditional, simulation-based digital painting that models physical interactions between canvas, pigments, and painting instruments. I asked, “We have pictures of actual brushstrokes out there, pictures of paint — how can we use that data to enrich the appearance of digital painting?” That’s how Real Brush was born.
Now, I want to bring a data-driven, intelligent support approach to other creative processes, especially image editing and synthesis. I want to know how we can use machine intelligence and the vast amount of data available today to help make the process of working with images more intuitive, and to free artists for the real creative task.
Scribbler is one of my first projects in this area. I hired an intern, Patsorn Sangkloy from Georgia Tech, to partner with me on this. Scribbler was chosen for an Adobe MAX demo in 2017. It’s an interactive system that colors and textures your images, powered by machine learning and Adobe Sensei.
Cynthia demos #ProjectScribbler during Sneaks at Adobe MAX 2017.
You are involved in cutting-edge work on GANs, generative adversarial networks. Could you tell us about this area?
On a broad level, we want to know: How can we edit images in a more intelligent way? Instead of pixel-level editing, can we allow the swipe of a finger, a simple click, or a scribble to achieve intelligent editing?
GANs are a machine learning technique, and they may hold the key. We train these networks on large amounts of data; once a network has been exposed to thousands of images, it can create new image content. We can leverage that to build powerful tools.
This kind of machine learning can be very complex. Scribbler uses a specific type of conditional GAN. It has one network, the generator, that learns to understand input images and produce modifications constrained by them, and a second network, the discriminator, that learns to judge whether the generated image looks real.
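To make the generator/discriminator split concrete, here is a minimal toy sketch of the conditional-GAN loss setup described above, in NumPy. This is an illustrative assumption, not Scribbler's actual architecture: the "condition" is reduced to a single scalar per sample (standing in for a sketch or grayscale input), and both networks are simple linear models with made-up parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generator(c, z, theta):
    # Maps a condition c (the constraint, e.g. a sketch value) and
    # random noise z to a generated sample.
    w_c, w_z, b = theta
    return w_c * c + w_z * z + b

def discriminator(x, c, phi):
    # Scores how "real" sample x looks, given the same condition c.
    w_x, w_c, b = phi
    return sigmoid(w_x * x + w_c * c + b)

def bce(p, label):
    # Binary cross-entropy between predicted probability and label.
    eps = 1e-9
    return -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps)).mean()

# Toy "dataset": real pairs (c, x) where x is correlated with c.
c = rng.uniform(size=128)
x_real = 2.0 * c + 0.1 * rng.standard_normal(128)

theta = (0.5, 0.1, 0.0)   # generator parameters (hypothetical values)
phi = (1.0, -1.0, 0.0)    # discriminator parameters (hypothetical values)

z = rng.standard_normal(128)
x_fake = generator(c, z, theta)

# Discriminator objective: label real pairs 1, generated pairs 0.
d_loss = bce(discriminator(x_real, c, phi), 1.0) + \
         bce(discriminator(x_fake, c, phi), 0.0)
# Generator objective: fool the discriminator into labeling fakes as real.
g_loss = bce(discriminator(x_fake, c, phi), 1.0)
```

In training, the two losses would be minimized in alternation, which is the adversarial part: each network improves against the other until the generated output is hard to tell from real data.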
I am a leader of an internal initiative that focuses on using GANs to help with image editing and image synthesis. We want to come up with an overall vision, share ideas, and collaborate on projects, including Scribbler and others.
I also work on makeup transfer, where you take the makeup from a face in one image and apply it to a face in another. This also relies on conditional GANs. The project was featured in a diversity talk at Adobe as a positive example of the benefits of having a diverse workforce working on these kinds of questions.