5 Ways AI is Changing the Fabric of Brand Storytelling
Artificial intelligence is finding its way into content ideation and actual storytelling.
In late July, JPMorgan Chase, the largest bank in the United States, did something so at odds with its reputation as a conservative financial institution that it caught many industry observers off guard.
It picked an artificial intelligence (AI) algorithm over human beings to write a portion of its ad copy.
After experiments with synthetic storytelling produced a 450% lift in ad click-through rates (compared with lifts of 50% to 200% from other methods), the bank inked a five-year deal with Persado to use its AI engine for direct-response emails and online display ads.
“Persado’s technology is incredibly promising,” JPMorgan Chase Chief Marketing Officer Kristin Lemkau said in a statement. “It rewrote copy and headlines that a marketer, using subjective judgment and their experience, likely wouldn’t have. And they worked. We think this is just the beginning.”
This isn’t the first time AI has found its way into ideation and actual storytelling, and it certainly won’t be the last. In fact, while AI arguably still has a long way to go before it can equal or exceed human imagination, it can already automate more basic or repeatable tasks to accelerate creative processes.
“There are a lot of practitioners who would rather not waste their time in drudgery doing repetitive tasks,” Adobe CTO Abhay Parasnis told Forbes last spring. “AI can take those out of their lives and inspire them to focus on the more creative, storytelling, high-level activities they would rather spend time on.”
In marketing terms, AI essentially contributes to creative storytelling by uncovering and aggregating structured and unstructured data from multiple channels and looking for patterns. These algorithms can help creatives tell richer and more immersive brand stories that capture the imagination of target audiences.
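To make "looking for patterns" concrete, here is a minimal sketch, not Persado's or JPMorgan Chase's actual pipeline, of one basic version of the idea: group ad headline variants by their wording and compare average click-through rates per group, so a copywriter can see which phrasing patterns resonate. The headlines and CTR figures below are hypothetical.

```python
# Minimal illustration of pattern-mining in copy performance data.
# The headlines and click-through rates below are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

data = pd.DataFrame({
    "headline": [
        "Access cash from the equity in your home",
        "It's true: you can unlock cash from the equity in your home",
        "Limited time: lock in a low rate today",
        "Rates this low won't last, so apply today",
        "See how much you could borrow in minutes",
        "Check your borrowing power in under five minutes",
    ],
    "ctr": [0.012, 0.031, 0.018, 0.022, 0.027, 0.029],
})

# Represent each headline as a bag-of-words vector, then group similar wording.
vectors = TfidfVectorizer(stop_words="english").fit_transform(data["headline"])
data["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Average CTR per wording cluster: the kind of "pattern" a copywriter can act on.
print(data.groupby("cluster")["ctr"].mean().sort_values(ascending=False))
```

Production systems go much further, generating and testing new language rather than just grouping what already exists, but the underlying mechanic of mining performance data for patterns is the same.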
Here are 5 ways AI is changing the fabric of brand storytelling:
1. Rapid iteration
With AI, creative teams can produce content in mere days, or even hours, that would otherwise take months. How? By eliminating the all-too-common grunt work bogging down design teams.
In April, for example, Adobe announced new Creative Cloud features that included groundbreaking Content-Aware Fill for video, powered by Adobe Sensei, the company’s AI and machine-learning technology. It also added new capabilities for titles and graphics, crafting animations, refining audio mixing, and organizing and preparing project media. In addition, it rolled out hundreds of performance improvements, including faster Mask Tracking for effects and color workflows, dual GPU optimization, and improved hardware acceleration for HEVC and H.264 formats in Premiere Pro.
All of these new features probably fly under the radar for the average user. But they automate previously tedious work so that almost anyone, experienced or novice, can get involved in motion design, animation, and special effects, fields once reserved for people who knew how to code and animate and had some grounding in visual and design techniques. In the near future, AI will enable more professionals to jump into the game and rapidly create and circulate powerful, compelling, and memorable brand stories that matter.
2. Experimentation
Before long, AI will enable computers to look, listen, and learn from what a designer is trying to do and make recommendations along the way.
As designers assemble their creative stories, they’ll have options presented in their moments of need. In a sense, the designer will become a curator and the computer will serve as a creative assistant, proposing ideas. Designers working with a drawing and painting application like Adobe Fresco will be able to experiment more quickly and effectively because they won’t have to hunt for the right brush, color, shape, or font. Machine-learning algorithms, drawing on their past activity and that of other designers, will help them zero in on the most likely options, saving time so their brains can stay in creative or strategic (rather than tactical) mode.
While that technology will invariably show up in tools like Creative Cloud, it may not be as apparent as it is today. In the future, some industry observers think AI will be invisible. It will be ambient. It will basically serve as a mostly silent assistant enabling its human counterparts to ideate, experiment, and create.
3. Multi-sensory design
Today, we can extract and use data from machine senses, such as augmented reality (AR) cameras, voice-controlled speakers, biometric data, and, eventually, haptics. Combining this “sensory” data with AI has produced a new computing platform that extends beyond our typical five senses. We call this spatial computing.
Spatial computing will become the next computing platform, just as mobile and desktop computing did before it. It will essentially operate in the spaces around us and use machine senses as input. Where we once used touchscreens and computer mice to enter information, with spatial computing the machine will watch us, pick up on our vocal patterns and gestures, and feed that information into our onscreen creations, with AI acting as the nervous system that sparks all of it.
In short, where many of us grew up thinking we had to adapt to the computer, in the future PCs will adapt to us, making our lives a whole lot easier and freeing us to imagine, create, and be more efficient.
4. Data-driven design
During the creative process today, users typically have to meet design tools on the tools’ terms. They must know the terminology (masking, liquifying, dodging) and how to apply each technique to their particular need.
But in the future, with AI and machine learning, those tools will become more contextually aware and serve up capabilities based on the task of the moment. Take retouching a slightly misshapen mouth: today, as you adjust the shape, the program can only extend the surrounding pattern of the face into the area you changed. With AI that has learned from hundreds or thousands of other mouths, the tool could predict what should go where and recommend, or even make, accurate adjustments.
Similarly, as the technology progresses, design tools might let users change the style of an object so that, say, a modern classical piece looks like a Vincent van Gogh painting. Or an artist working with maple leaves may decide they’d look better as oak leaves and ask the program to make that swap. With AI’s ability to draw on massive troves of data, almost anything will be possible.
5. Continued rise of assistants
In the future, as AI becomes more integrated with design tools and is enhanced by capabilities like natural language processing (NLP), it may be possible to simply utter a few commands and have elements of a creation change in seconds. Many of us are already accustomed to using voice commands with personal digital assistants such as Alexa and Siri. Why not creative tools?
For instance, say “brighten the sky,” and the tool could immediately draw on past experience to know what that specific designer has in mind and make the change. Ask it to “remove the dog” or “take out the people,” and it could identify the pixels associated with those objects and cleanly wipe them from the image.
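For the “remove the dog” case, here is a rough sketch of what identifying and erasing those pixels can involve under the hood. It is not how any particular Adobe tool works: it pairs an off-the-shelf segmentation model with a simple OpenCV inpainting step, the file names are placeholders, and a real assistant would use far more sophisticated content-aware fill.

```python
# Rough sketch: find the "dog" pixels with a pretrained segmentation model,
# then fill the hole from surrounding pixels. Not any specific product's method.
import numpy as np
import cv2
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")   # placeholder file name
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"][0]              # per-pixel class scores
labels = logits.argmax(0).byte().numpy()

DOG = 12                                         # "dog" in the Pascal VOC label set
mask = (labels == DOG).astype(np.uint8) * 255    # pixels to erase

# Crude stand-in for content-aware fill: paint over the masked region
# using neighboring pixels.
result = cv2.inpaint(np.array(image), mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
Image.fromarray(result).save("photo_no_dog.jpg")
```

Mapping the spoken phrase “remove the dog” to that label is the natural-language piece a production assistant would layer on top.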
These assistants will be subtly integrated into applications, only serving up suggestions that make sense in the moment and facilitating — rather than obstructing — creative processes.
The goal behind all of these trends is to remove barriers and enhance creativity. The ability to create should not be limited by your access to software classes or by the time you can invest in exploring a particular application. In the future, AI will level the playing field and make creative storytelling easier and more approachable for a wider array of designers and marketers.