Artificial Intelligence: Friend or Foe?
How humans and machines can work together to amplify human capability.
In the Academy Award-winning film “Ex Machina,” audiences are drawn into a trusting relationship between Ava, an artificial intelligence (AI)-powered robot, her creator, Nathan, and Caleb, the programmer brought in to test her. That is, until Ava murders Nathan and leaves Caleb trapped in the laboratory while she departs in a waiting helicopter to enjoy a life outside of walled confines.
A plot we might dismiss as far-flung science fiction may have already played out in miniature. In 2016, a Russian-built AI robot prototype called Promobot IR77 escaped the lab in which it was being developed. Although it was recaptured, it made further attempts to leave. These examples raise the question: Can AI backfire on humans?
It’s a question that has troubled us since the days of Greek mythology, when legends of mechanical servants and intelligent automatons first surfaced. Now, in the 21st century, AI is everywhere — in smart toys, digital assistants, medical devices, vehicles, even your refrigerator. AI is the new “big thing,” and companies across industries are leaping in feet first.
The process of adopting new technology comes with some question marks, though. The technology industry and its customers are grappling with what AI boundaries to set as they determine whether these advancements will become friend or foe.
“Companies are trying to figure out how best to implement AI and whether they should embrace it or fear it,” says Daya Nadamuni, senior manager, corporate strategy and competitive intelligence at Adobe. “First, they have to understand what it is, then they have to learn how to use it. We are in the early stages of AI. As the understanding and usage evolve, in another five years, I expect AI to be mainstream. For technology companies, AI is expressing itself in new software products, features, and services. Adobe is a great example of that.” Read our Primer on Artificial Intelligence.
Working in sync
Certainly, AI-powered digital assistants such as Amazon’s Alexa, Apple’s Siri, and the Google Assistant have become commonplace as they weave into our daily lives and work, helping to spur greater productivity and creativity.
“There are companies that are looking at AI from a general AI perspective. There are other companies very focused on data — on understanding it, and even understanding and influencing consumer behavior,” says Lars Trieloff, principal scientist at Adobe. “Only Adobe is bringing together the whole gamut of experiences, which means understanding content, understanding creativity, and enhancing creativity through AI.”
This approach to AI, which is powered by Adobe Sensei, not only facilitates creating new experiences, but it supports what Lars calls enlightening experiences. “Experiences that are not just good looking, but experiences that educate, that entertain, or that persuade depending on what you need,” he says.
Everything AI-related — self-driving cars, quantum computing, a travel concierge, medical progress, chatbots, farming, Amazon.com, etc. — boils down to data.
“How do you take large amounts of data and make sense of it so that you can better optimize what customers see and do?” asks Anil Kamath, fellow and VP, technology at Adobe. “That’s clearly something that humans are not able to do with large amounts of data.”
Anil uses the example of sending an email or loading a webpage where it’s not possible for humans to be involved at such a fast pace. “Machines help, and what I’ve been pushing for is this idea of AI being an intelligence augmentation, or intelligence unification. Human intelligence will always be number one, but with machine learning and artificial intelligence you can amplify it. You can do much better at certain tasks by combining those two things. We don’t see it as AI — we see it as IA, which is intelligence amplification,” Anil says.
Looking through the telescope
Gavin Miller, fellow and vice president of Adobe Research, offers a sneak peek into where AI and machine-learning frameworks might be headed for Adobe Sensei and photo editing.
“Imagine replacing an entire building on one side of a street scene. Sensei will assess the images and come up with options that would fit best into that shape, and then wrap it and adjust it so that it looks like it blends in,” he says. “It’s just gestural input, and feels like traditional editing tools, but under the hood it’s doing a lot of smart reasoning.”
Gavin points out how, if an algorithm recognizes mistakes and corrections, it gets smarter every time someone uses it and thereby improves over time with more data.
Adding to the AI conversation are generative adversarial networks (GANs) — architectures in which one AI algorithm competes against another without human intervention — resulting in a smarter machine.
“GANs are truly one of the biggest breakthroughs of the last few years in AI,” Anil says.
First introduced in 2014, GANs pit two neural networks — a generator and a discriminator — against (or adversarial to) each other. In other words, it’s AI that makes itself smarter.
A common example of GANs is using AI to create realistic photos from text descriptions. In this case, the generator produces photos that look real, while the discriminator attempts to distinguish the generated samples from real data. In essence, the generator is trying to trick the discriminator and increase its error rate. The expected outcome is that the generative network produces better photos, while the discriminative network gets better at weeding out the fakes.
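The adversarial loop described above can be sketched in a few lines of code. This is a deliberately tiny, hypothetical example — not Adobe Sensei’s implementation and far simpler than an image GAN — where the “data” is just numbers drawn from a normal distribution, the generator is an affine map, and the discriminator is logistic regression. It shows the two alternating updates: the discriminator learns to tell real from fake, and the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator: logistic regression on a single sample value.
d_w, d_b = 0.1, 0.0
# Generator: affine map from standard normal noise z to a sample.
g_w, g_b = 1.0, 0.0

lr = 0.05
for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # "real data" ~ N(4, 1)
    z = rng.normal(0.0, 1.0, size=32)      # generator noise
    fake = g_w * z + g_b                   # generated samples

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    # Gradients of the binary cross-entropy loss w.r.t. d_w and d_b.
    grad_w = np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake)
    grad_b = np.mean(p_real - 1.0) + np.mean(p_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    fake = g_w * z + g_b
    p_fake = sigmoid(d_w * fake + d_b)
    upstream = (p_fake - 1.0) * d_w        # chain rule through D
    g_w -= lr * np.mean(upstream * z)
    g_b -= lr * np.mean(upstream)

# After training, generated samples should drift toward the real
# distribution's mean of 4 as the generator learns to imitate it.
samples = g_w * rng.normal(0.0, 1.0, size=1000) + g_b
```

The same push-and-pull drives image GANs; the generator and discriminator are simply deep networks instead of one-parameter maps, and the samples are pixels instead of single numbers.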
In another example, Adobe Research presented how GANs can help guard against losing the natural properties of an image during editing. Realistic image manipulation is challenging — it requires modifying the appearance of an image while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to cross the line of what looks real and move into territory where the image looks manipulated. With data and a GAN, the researchers help the machine learn where that line — or natural image manifold — is. The machine then constrains the output of certain image editing operations to lie on that learned manifold at all times. The method can be used for changing one image to look like another, as well as for generating new images from scratch based on a user’s scribbles.
“Instead of it learning from data that you provided, it’s actually capable of creating things that you didn’t tell it about,” Anil says.
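The manifold-constraint idea can be illustrated with a toy stand-in. Assume (hypothetically) a trained generator g(z) whose outputs trace out the manifold of “natural” results; here we use a unit circle in 2-D in place of the real, learned manifold of images. When a user’s edit pushes a point off the manifold, we search for the latent z whose output is closest to the edit — by gradient descent on z — and snap the result back onto the manifold, which mirrors how the GAN-based editor keeps edited images looking natural.

```python
import numpy as np

def g(z):
    # Stand-in "generator": maps a latent angle z to a point on the
    # unit circle, our toy substitute for a learned image manifold.
    return np.array([np.cos(z), np.sin(z)])

def project(x_edit, z0=0.0, lr=0.1, steps=200):
    # Gradient descent on z to minimize ||g(z) - x_edit||^2,
    # i.e. find the on-manifold point closest to the user's edit.
    z = z0
    for _ in range(steps):
        diff = g(z) - x_edit
        # Derivative of the squared distance, using g'(z) = (-sin z, cos z).
        grad = 2.0 * (diff[0] * -np.sin(z) + diff[1] * np.cos(z))
        z -= lr * grad
    return g(z)

# A user drags a point off the manifold; projection snaps it back.
edited = np.array([2.0, 1.0])
constrained = project(edited)
```

In the real system the latent space is high-dimensional and the distance is measured on images, but the principle is the same: edits are expressed as movements in the generator’s latent space, so the output can never leave the learned manifold.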
Fear factor
Of course, with change comes challenges, and some technology advancements bring with them a sizeable fear factor. Will AI zap our creative juices? Will transportation, storage, manufacturing, wholesale, and retail jobs be at risk? Are we destined to walk around the earth surrounded by robots? We remain optimistic.
“AI as a technique will become even more ubiquitous than it is right now,” Lars says. “AI is a tool for humans. And just as the steam engine, or motor-powered flight, or electricity, or the internet have not taken away our humanity, I don’t believe AI will take it away, either.”
Indeed, these technological benefits often outweigh the downsides. Daya notes that over the last 200 to 300 years, technological improvements have brought disruption, but ultimately these advancements created paths to other inventions and opportunities.
“The analogy I think of is the washing machine. You don’t hear anyone complaining that they can’t wash clothes by hand anymore,” Daya says. “AI does create a more efficient and productive workflow and it might displace temporarily — it might be a mismatch of the talent or the skill — but, over the longer term, it improves the conditions generally so you can engage in more productive work.”
Interwoven in current AI advancements is the need to monitor ethics and eliminate bias. Adobe, for one, is focused on creating development requirements and oversight for engineers working on AI through a data ethics group.
“We are looking at ethics and bias, making sure we don’t introduce bias into our machine-learning models as a cornerstone of how we will develop AI applications,” Daya says.
Read more about Adobe’s take on bias, ethics, and diversity in AI.
What’s ahead
Many of us don’t think AI advancements will go the dystopian way of “Ex Machina’s” Ava. According to Strategy Analytics, 41 percent of consumers think AI will make their lives better, strengthening the notion that humans and machines can, indeed, co-exist. Recent Adobe research shows that 72 percent of U.S. office workers say they want to use AI at work, especially in the form of an intelligent personal assistant.
Clearly, AI will continue its ascent, but it can’t replace the beauty of human creativity.
“I truly think of it as a partnership between what humans can do and what machines can do in a way that makes the whole thing greater than the sum of the parts,” Anil says.
And, while Ava used her intelligence for bad, AI can use its superpowers for good.
“We believe AI is going to be incredibly powerful, and as Spider-Man’s Uncle Ben said, ‘With great power comes great responsibility,’” Lars says. “We all have to take this responsibility seriously — of trying to do the right thing, and of having a culture that encourages openness and encourages honesty.”
Read more about our future with artificial intelligence in our Human & Machine collection.