Works Of Art: AI Inspires New Ways To Create

With its ability to learn and perform repetitive tasks quickly, artificial intelligence (AI) is changing the future of work and the job market. This includes fields that rely heavily on the human imagination.

by Jackie Snow

Posted on 11-15-2019

“There is no reason that creativity should only be attributed to humans,” said Arthur Miller, a professor at University College London and author of the recent book “The Artist in the Machine: The World of AI-Powered Creativity.”

AI is already making waves in the art world. Last year, a piece of AI-generated art sold for $432,500 at Christie’s, more than 40 times the initial estimate. Artists such as Taryn Southern and YACHT have released albums that AI had a hand in crafting. So far, these examples are one-offs that take considerable effort and expertise, but researchers are working on tools that even the non-technical among us could use. Such tools could automate the mundane tasks that plague any creative process or generate new ideas when a creator is feeling stuck.

AI could even expand what we think of as art.

“The machine sees the world very differently than we see the world,” Miller told CMO by Adobe. “At the end of the day, [AI-generated art] might even be better than what we produce.”

While that remains to be seen, here are four innovative ways AI is pushing the artistic process to new, creative frontiers.

Improvise Like A Jazz Master

Learning an instrument takes years of dedication. Now there is a shortcut with Piano Genie, an AI app that lets anyone “play” piano and improvise like a pro.

Piano Genie was “trained” on almost 1,500 piano recordings from an international youth competition. The app learns a new user’s playing style as they press a small, eight-button keyboard that maps to a full 88-key piano. After the user plays each note, the program tries to predict what will come next—much like a smartphone’s predictive text function. A free version is available to try online.
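Piano Genie itself is a neural network from Google’s Magenta team; as a rough, hypothetical sketch of the core idea (constraining a learned next-note prediction to whichever slice of the keyboard a button represents), consider this toy model. The bigram scheme and all names below are our own simplification, not the real system:

```python
KEYS = 88               # keys on a full piano
BUTTONS = 8             # Piano Genie-style input buttons
BAND = KEYS // BUTTONS  # each button covers an 11-key slice

def train_bigrams(melody):
    """Count pitch-to-pitch transitions in a training melody."""
    counts = {}
    for prev, nxt in zip(melody, melody[1:]):
        counts.setdefault(prev, {}).setdefault(nxt, 0)
        counts[prev][nxt] += 1
    return counts

def play(button, prev_pitch, counts):
    """Map a button press to a key: pick the most likely learned
    continuation that falls inside the button's slice."""
    lo, hi = button * BAND, (button + 1) * BAND
    options = counts.get(prev_pitch, {})
    in_band = {p: c for p, c in options.items() if lo <= p < hi}
    if in_band:
        return max(in_band, key=in_band.get)
    return lo + BAND // 2  # no learned option: fall back to the slice's centre
```

Pressing higher-numbered buttons steers the melody toward higher keys, while the learned transitions keep the notes plausible, which is, loosely, the division of labor the real system handles with a neural network.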

Piano Genie was recently used at a concert by the rock band The Flaming Lips to add a new band member: the audience. The eight buttons were mapped to giant inflatable balloons fitted with sensors and thrown into the crowd. Each touch from an audience member triggered the system and played a note over the loudspeakers.

In the future, the technology could be extended to other instruments, and let a newbie jam with a band, said Chris Donahue, who has played piano for 20 years and is on the team behind Piano Genie. Who knows? It might even expand the world of music beyond what humans know today, he said.

“It’s possible the technology could come up with something that is impossible to play,” he said. “A nice property of these generative systems is they can surprise you.”

Cartoons Come Alive

Illustration has advanced quite a bit since Walt Disney Studios’ “Steamboat Willie,” the company’s first cartoon with synchronized sound. These days, animators can leverage AI and machine-learning technology like Adobe Sensei to expand what cartoon characters can do.

“For all of these advanced tools we are building that [involve] some level of automating, the goal is to automate parts of the process that are tedious,” said Wil Li, a principal scientist for Adobe Research, the company’s team of research scientists and engineers shaping early-stage ideas into innovative technologies. “By automating those things [with artificial intelligence], it frees artists up to do more creative things.”

For example, Adobe Sensei powers features in Adobe Character Animator, which offers creators a range of animation options. One new feature, Characterizer, helps generate digital character puppets, which previously had to be painstakingly built from scratch. Character Animator can create one in seconds from a model’s face and a piece of art used as a style reference, then animate it in real time using a webcam. On Twitch, a popular video game streaming site, users create custom avatars and turn to Character Animator for live expression tracking, so the avatars move and react as they do.

Adobe Sensei also powers the content-aware fill for video feature in Adobe After Effects, which lets creators automatically remove unwanted objects from a video scene instead of having to do it frame by frame. According to Li, Adobe is also exploring filters that can change how a voice sounds and ways to generate design ideas based on sketches.

“For any part of this animation workflow, there are opportunities for AI to inject new possibilities or to amplify our own performances,” Li said.

Food For Thought

The AI use cases for text have historically trailed behind what is possible with images, which have received the lion’s share of research attention. OpenAI, a San Francisco-based nonprofit, put that attention on text earlier this year with GPT-2, which uses AI to generate paragraphs of text that (mostly) make sense after getting a sentence or two as a start. GPT-2 learned to do that after being trained on 8 million internet articles.

When writer Samuels fed GPT-2 the start of a story she was working on and saw the results, she “gasped with pleasure.” The program had come up with ideas that helped her home in on details of the story she had been struggling with. Samuels said she has come to think that GPT-2’s sweet spot is providing bits of inspiration, and she’s looking forward to the release of a more advanced version.

“I can’t wait to meet the even more robust version of my collaborator,” she said.

Try it out for yourself on Talk to Transformer, a site that lets anyone play with a slightly stripped-down version of GPT-2. People have taken to Twitter to post their own experiments with what different inputs came back with.
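GPT-2 itself is a large neural network, but the core loop it runs, extending a prompt one word at a time using statistics learned from a corpus, can be illustrated with a toy word-bigram model. Everything below is our own simplified sketch, not OpenAI’s code:

```python
import random

def train(corpus):
    """Record, for each word, the words observed to follow it."""
    table = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, prompt, n_words=20, seed=0):
    """Continue a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        followers = table.get(out[-1])
        if not followers:   # dead end: no continuation was ever observed
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on 8 million articles instead of one sentence, and with a neural network standing in for the lookup table, this same predict-and-append loop is what produces GPT-2’s paragraphs.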

Generate New Product Ideas

With traditional software and methods, designers essentially recreate something already in their heads, drawing on imagination and real-world inputs such as previous designs. Autodesk, however, is using generative design, an AI-powered process that quickly iterates through thousands of design options while balancing parameters such as material, cost, and production-method rules.
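A minimal sketch of that generate-and-filter loop, with made-up parameters (a material budget and a comfort score) standing in for the real constraints an Autodesk tool would balance:

```python
import random

def propose_designs(n, rng):
    """Randomly propose n candidate designs as (material_kg, comfort) pairs.
    A real system would mutate geometry; random pairs keep the sketch small."""
    return [(rng.uniform(1.0, 10.0), rng.uniform(0.0, 1.0)) for _ in range(n)]

def best_design(candidates, max_material=5.0):
    """Discard designs over the material budget, then maximise comfort."""
    feasible = [d for d in candidates if d[0] <= max_material]
    return max(feasible, key=lambda d: d[1]) if feasible else None
```

The designer's job shifts from drawing one design to stating the constraints and choosing among the survivors.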

“With generative design, you’re flipping the usual process on its head,” said Autodesk senior director of design research Mark Davis. “It’s a whole new way to design.”

Autodesk, a company that makes software for industries including architecture, engineering, construction, and manufacturing, launched its generative design tool, called Dreamcatcher, in 2017. Since then, the company has collaborated with NASA to come up with an interplanetary lander concept and with GM on vehicle components.

A chair unveiled earlier this year at Milan Design Week will be the first commercial product created with Autodesk’s generative algorithms to go on sale. To develop it, the software churned out hundreds of designs, optimizing for characteristics such as comfort while using the least material possible. French designer Philippe Starck pointed out that while the details that make the chair special are subtle, its comfort and ease of production really matter at scale. And while generative design could be used to come up with ideas unlike anything a human would create, it also has the potential to improve existing manufacturing processes around the world.

“We haven’t explored [what’s possible], not even close,” Davis told CMO by Adobe.
