Technology and Culture
When the World Becomes the Screen
A conversation with Abhay Parasnis, Adobe chief technology officer
The future that lies beyond the screen will bring dramatic changes to the way we live, work, and play with technology. Immersive technologies like augmented reality (AR) will bring digital magic to new parts of our physical world. Intelligent assistants will become partners in work, collaboration, and creativity. The way we interact with devices and each other will create new social norms. We sat down with Abhay Parasnis, Adobe’s chief technology officer, to put it all in perspective.
Abhay, digital technology continues to transform almost every aspect of business and culture. What do you see as the key technology trends that will shape the future?
I think there are four key trends we all need to pay attention to.
The first is intelligent computing. It includes technologies like artificial intelligence (AI), machine learning, and deep learning that are enabling incredible breakthroughs in the capabilities and usefulness of the devices in our lives.
The second I call “beyond touch.” It has to do with changes to human-computer interaction as we evolve beyond the keyboard, mouse, and touchscreen. It includes voice, gesture, and contextual awareness, as well as immersive technologies like virtual reality (VR) and AR, that move our computing experience off the traditional two-dimensional screen.
The third is a cultural shift. We’re seeing huge growth in social-centered communication and collaboration, as well as changing expectations of privacy and sharing that come with it.
And finally, new technologies and businesses are increasingly born in the cloud. Cloud technology and broadband connectivity are on the verge of becoming as ubiquitous as electricity — always on, always available, omnipresent, and scalable.
How do you think innovation in intelligent computing will benefit Adobe and its customers?
Intelligent computing will have a profound impact across our entire business. Machine learning and deep learning already underpin a lot of the new capabilities being developed by our researchers and product teams.
When you go to MAX and see a sneak technology demo that makes you say, “Wow!” — whether that’s software that understands what’s in your photo and auto-tags it, or an algorithm that creates ultra-realistic brush strokes — that’s all made possible by new techniques in machine learning and deep learning.
You’ve also seen us go into this area of AI in a big way with Adobe Sensei, where we’re connecting the power of these technologies to our customers across our three clouds.
Adobe Sensei tackles complex experience challenges such as image matching across millions of images or understanding the meaning and sentiment of documents.
Sensei leverages Adobe’s massive volume of content and data assets, as well as our expertise in the creative, marketing, and document segments. We are uniquely positioned to leverage our decades of experience in these areas to blend the art of content with the science of data.
This deep, focused integration of AI is driving rapid innovation for us. In fact, we’ve rolled out dozens of new features and capabilities across our three clouds since launching Sensei a year ago.
Further into the future, it’s not hard to imagine a time when intelligent assistants become reliable creative partners, observing the way creative people work with their software, making helpful suggestions, or performing mundane, routine tasks.
Finally, intelligent computing will power new methods of human-computer interaction, including voice recognition, gesture recognition, and contextual awareness. You can expect all of these technologies to make their way into the user interfaces of the future.
That leads right into the second key trend you mentioned — “beyond touch.” Can you tell us more about this topic?
Sure. “Beyond touch” is just a simple way of saying that human-computer interaction is at an inflection point as significant as the shift from punch cards to the keyboard and mouse, or, more recently, the shift from the keyboard and mouse to touch-enabled screens.
The ubiquity of personal devices, like smartphones, along with the growth of the Internet of Things (IoT), means we will be surrounded by a variety of sensors — microphones, 3-D cameras, GPS, and more. We also have exciting new capabilities emerging in haptic touch, voice interaction, computer vision, and gesture recognition. When you add immersive displays to that mix, it’s going to unlock new uses and capabilities we can’t even begin to imagine.
It’s a huge area of emphasis for our research and product teams. We’re already hard at work on voice as an interface, and on integrating 360-degree video capture and editing into products like Premiere Pro. We’re also exploring immersive virtual spaces as a canvas for creatives, with a team dedicated exclusively to immersive design.
How do the changing expectations around social-centered communication affect the needs of Adobe’s customers, and what are you doing to address that?
First, we’ve tried to recognize that every software application must become — in some sense — a communications application. That means we need to prioritize features for collaboration and sharing. The modern workplace increasingly values collaboration over personal productivity, and social platforms are becoming the de facto medium of choice for communication. We expect software in the future to be judged as much on its ability to share and interact as on its sophisticated productivity features.
You’ve already seen Adobe create products that fill a need in this space, whether that’s Spark for creating social media content or Behance for sharing creative work and collaborative ideation.
We’re also rethinking applications across Creative Cloud to embrace a multiuser, social workflow at their core. That’s given rise to capabilities like Team Projects, a premium feature in Premiere Pro.
But the best example might be Adobe XD, which we really reimagined as a collaboration system in the cloud that allows designers, marketers, and business stakeholders to design applications together.
What has you most excited for the future?
For me, it’s really seeing how all these trends come together. I expect that to create a virtuous cycle and exponential growth in innovation. These trends supercharge each other.
Let me give you an example. Today, researchers are using new techniques in deep learning to better train algorithms so they become more capable of understanding the human voice — including all its accents, dialects, and contexts.
In turn, voice interaction is becoming a more useful and dependable way of interacting with our devices. That’s great for immersive computing because it’s much more practical to interact with voice, rather than a keyboard, when you’re immersed inside a virtual world. So, advancements in deep learning and voice will accelerate the adoption of immersive technologies like VR.
Meanwhile, cultural norms that encourage the broad sharing of personal video provide a huge, global dataset of contextual voice samples that can be used to train AI to be even better.
So, I think we’re about to see the biggest transformation in computing ever. It’s going to change our way of life more than the invention of the PC, the rise of the Internet, or the explosive growth of smartphones and tablets did. The entirety of the physical world now has the potential to blend with the digital world and become an interactive screen. I can’t wait to see what kind of creative possibilities emerge.
Abhay Parasnis, chief technology officer at Adobe, believes we are about to see the biggest transformation in computing yet.
Read more about the future of immersive experiences in our Beyond the Screen collection.