The Future of AI and Creativity: An Interview with Jon Brandt

Jon Brandt joined Adobe 15 years ago to help expand our research into artificial intelligence (AI). He immediately went to work on how to automatically detect — and correct — red-eye in a photo with Photoshop. That was an early application of AI: computing methods that perform functions the human mind can do. Today, Jon continues to be fascinated with figuring out how the mind works and applying those insights to improve artistic expression. We recently sat down with Jon to get his take on the future of AI and his vision for how we will use AI and machine learning to help creatives and other Adobe customers.

What are some of the biggest creative pain points that AI could solve?

Although our creative tools are awesome, there is a lot of tedium in pretty much everything you do when you interact with content. If you are a Photoshop user who wants to change the text on a billboard in a photo, it’s a tedious process to erase what’s there, type in new text, match the font, and draw it in perspective. The artist shouldn’t have to think about which layer to go to or which RGB color to pick. The tool should fade into the background so the artist can have a much more immediate experience with the canvas.

If you could get the results you want by simply telling the tool what you want, then you are no longer interacting with a tool, per se, but rather an intelligent assistant that takes care of the tedium, without disrupting the creative process. With the power of AI, Photoshop will become your creative assistant as opposed to just your creative tool.

So AI will promote greater efficiencies in the creative process?

It’s already doing that in many ways. Adobe Sensei — which is the AI that infuses Adobe products — uses machine learning and deep analytics to optimize creative processes. For example, it can automatically detect facial features so you can easily adjust someone’s eyes or mouth in a photo to create a stronger artistic impact.

But we’re just scratching the surface, both in the artistic possibilities and in the efficiencies we can gain from artificial intelligence. For example, consider “spin sets” — a standard, expensive, manual studio process for creating 360-degree walk-arounds of products such as shoes, handbags, and so on. With artificial intelligence, we can generate all the different views of a product from perhaps a single photo, which is a definite shortcut for the photographer. This is an example of an exciting new development from the past few years in which we use neural networks to create images that have never been seen before. Not only will this save money for creative professionals, it will also greatly speed up their process and give them much more freedom than could be achieved in the studio.

What is the biggest technical obstacle facing AI adoption in creative work?

In the short term, compute power. The models used to create these amazing effects are the output of deep learning, a subdiscipline of AI in which systems analyze huge sets of data to inductively find answers to problems rather than follow explicitly programmed rules. But the models are resource heavy: they are hard to download, and they take a lot of time and memory, especially on a mobile device.

And what are the issues in the long term?

Scaling from a cool demonstration to production quality. Our users have high expectations around the quality of results they can get from our tools. Quality is a key differentiator for us, and one we intend to keep, even if it comes at the cost of time to market.

We are here to invent creative AI and transform the world through AI and machine learning. No one else is in a position to do that the way we are, because we have this tight loop between marketing and creative tools, content creation, and content consumption. That said, the process of figuring out how to make it work takes time.

For example, we have put years into tagging technology, iteration after iteration, striving to make it robust. It’s currently available as a service, AEM Smart Tags, in the Adobe Experience Cloud, and it is being used to improve search results in Adobe Stock. In all of our comparisons with third-party alternatives, our results are best in class. Why? Because we take the time to sweat the details.

Control is a second major issue that arises when applying AI to creative workflows. Although we are seeking to automate things to relieve the tedium, our creative users expect complete control over their work. This is another way cool demos differ from production-quality features. It can take months, or even years, to iron out all the details so that a feature works in the ways our users expect.

From a technology standpoint, what does it take to make artificial intelligence truly mainstream?

In a word — language. Although we’ve made great strides in human language understanding, as evidenced by Alexa and company, we are still very far away from being able to communicate with computers in the way we communicate with each other.

The more we study this problem of communication, the more we are learning how much intelligence is needed to communicate effectively. It’s one thing to understand the words that I am saying. It is quite another to understand what I mean, taking into account conversational context, world knowledge, as well as an educated guess of what I know, what I want, and what I might be feeling.

Incorporating all of these complexities of communication into a creative assistant is a daunting task. Not only do we have the general challenges of understanding the thread of any conversation, but we also have to understand in depth all the concepts and activities that may be a part of the creative process.

For sure, we are still far from the goal of natural interaction with the computer, but we’re making progress, and the intermediate results are still pretty useful for our creatives.

There are concerns that technology will become so powerful it will replace people, even artists. What’s your take on this?

I believe just the opposite will happen: the technology will enhance people. Basically, we are trying to build technologies that expand and extend the scope of people’s creative expression. An analogy is the camera. Before the camera was invented, artists could produce only a few paintings in their lifetimes, and the reach of their work was quite limited. Cameras gave them a whole new art form, which has grown, thrived, and expanded well beyond what anybody could have imagined when the camera was invented. The ability to manipulate digital images has given artists yet another new art form.

If you look at where AI and machine learning are in the space of creativity today, we can draw parallels to where Photoshop and digital photography were about 25 years ago, and where the camera was almost 200 years ago. At each of those points, engineers were still figuring out how to use the thing and how to make it work. At some point in the evolution, many of the technical problems had been solved. Then artists started to explore what could be done with it, and many amazing new art forms arose as a result. AI technology, as it stands today, is certainly powerful and impressive, but when the artists take over it will be truly amazing. I can’t wait!