Will the Future Look More Like Star Trek or Harry Potter?


Prolific science fiction author Arthur C. Clarke wrote that “Any sufficiently advanced technology is indistinguishable from magic.” As technology advances, what once was deemed “fantasy” begins to materialize in the world around us. Movies invite us to entertain the converse: any sufficiently capable magic is indistinguishable from technology – on-screen magic makes teleportation, miraculous medicine and the creation of objects out of thin air all seem real.

In many ways we already live in a future predicted by yesterday’s science fiction, with interactive touch devices connected to a vast world of information. This is what I think of as the “Star Trek” world – a world of powerful interactive devices with glowing, touch-enabled screens.

But we’re also on the cusp of the “Harry Potter” future – in which everyday objects will spring to life, interactive books will write themselves and portraits will come alive with vivid personalities. Instead of having to search the internet or sift through documents, intelligent assistants will materialize out of thin air to help us with daily tasks.

While some of these goals will require deep breakthroughs in hard science, others may soon be experienced using a new medium that combines virtual reality (VR) and augmented reality (AR) with artificial intelligence (AI).

In a Harry Potter future, mixing the digital world and physical world together in new, mixed realities will seem completely natural. Combined, these advancements will fundamentally change the human relationship to computing and the way we interact with technology and each other. Cities will be infused with hidden experiences, helpful information and AR art. The elevator at One World Trade Center is a great example of an immersive experience embedded in the device itself: a wall of screens shows Manhattan transforming through the centuries as the elevator rises to dizzying heights.

With ever cheaper computation, everyday objects will also have embedded behavior and access to information, expressed in a way that is harmonious with the device’s design and function. Such “internet of things” devices will require power, computation and displays – as well as frequent software updates and security enhancements. For standalone devices like thermostats, computation will be embedded, but passive objects with AR features can be managed via a personal device, such as AR glasses.

Of course, none of this happens on its own: this future has to be built. Technologists will create the devices and ecosystems, but content creators and artists will be the ones to populate this new world with experiences. To enable this kind of Harry Potter universe, we need a new creative canvas — a tool set for creative people to build the kind of 3D, immersive experiences the future demands.

Across our teams at Adobe Research, we are chartered with imagining and building this future: exploring new platforms ranging from VR and AR to robots, then bringing them to life through art and science. We’re combining advances in artificial intelligence and machine learning with our expertise in content and data to push the boundaries of how we create experiences that combine the best of both the Harry Potter and the Star Trek universes. We call these deep and helpful technologies Adobe Sensei.

Here are a few of the things we are thinking about…

A new creative canvas – In the Star Trek universe, information is often displayed using crisp, graphic diagrams and displays. In contrast, part of the magic of the Harry Potter world is seeing natural materials, such as ink or smoke, moving to convey information in a more organic way. Using some of the same technologies that bring those movies to the screen, we have been inventing systems for interaction that bring the richness and serendipity of real materials into the controllable world of digital tools. The result is a new creative canvas with rich paints and brushes.

Our WetBrush technology is a great example of what is possible here: it creates realistic oil-paint brush strokes through particle and fluid simulation that once required a supercomputer but can now run on a pressure-sensitive tablet. Another step in that direction is Project Dali, an immersive drawing experience in the virtual world.
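To make the idea of particle-based paint concrete, here is a deliberately tiny sketch of the general technique: pigment particles seeded along a stroke path are advected by a velocity field that follows the stroke’s direction, with a viscosity term damping the flow. This is purely illustrative – WetBrush’s actual solver is far more sophisticated, and every name and parameter below is invented for the example.

```python
import numpy as np

def simulate_stroke(control_points, n_particles=200, viscosity=0.1, steps=30, seed=0):
    """Toy particle sketch of a paint stroke.

    Pigment particles are scattered along the stroke path, then each one
    drifts along the tangent of its nearest control point. Viscosity
    simply damps the velocity. (Real paint simulation solves coupled
    fluid equations; this is only the flavor of the approach.)
    """
    rng = np.random.default_rng(seed)
    # Seed particles with jitter around randomly chosen path points.
    base = control_points[rng.integers(0, len(control_points), n_particles)]
    positions = base + rng.normal(scale=0.05, size=(n_particles, 2))
    # Tangent direction of the stroke, via finite differences.
    tangents = np.gradient(control_points, axis=0)
    for _ in range(steps):
        # Each particle follows the tangent of its nearest control point.
        dists = np.linalg.norm(positions[:, None, :] - control_points[None, :, :], axis=2)
        nearest = dists.argmin(axis=1)
        velocity = tangents[nearest] * (1.0 - viscosity)  # viscous damping
        positions = positions + 0.1 * velocity
    return positions
```

Feeding in a sine-wave path, the particle cloud drifts along the stroke rather than sitting still, which is the basic behavior any fluid-driven brush builds on.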

Immersive displays and the rise of 3D content – Over the next decade, the way we see information will be brought to life through transparent displays, AR and VR headsets and more exotic technologies capable of turning any surface into a display. As these types of experiences become more commonplace, it is also becoming clear that many of the conventions that worked on two-dimensional surfaces don’t work well in an immersive 3D environment.

With 360° video, directors will have to contend with the challenges of motion sickness and of viewers choosing where to look in an environment that surrounds them. We have much to learn, but this demonstration shows the benefits of editing in a native 360° environment, and what some of the new UI conventions might look like.

To make truly unique 3D objects, we are also exploring a number of approaches, such as virtual clay, which lets you add and subtract material using painting ideas lifted from Photoshop.
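The add/subtract idea maps naturally onto set operations over a voxel grid: a spherical “brush” either unions material in or carves it out. The sketch below is a minimal stand-in – the internals of the virtual-clay prototype are not public, and all function names here are invented for illustration.

```python
import numpy as np

def make_grid(n=32):
    """Empty voxel grid over the unit cube: True = clay present."""
    return np.zeros((n, n, n), dtype=bool)

def sphere_mask(n, center, radius):
    """Boolean mask of voxels whose centers fall inside a sphere."""
    axes = (np.arange(n) + 0.5) / n          # voxel centers in [0, 1]
    x, y, z = np.meshgrid(axes, axes, axes, indexing="ij")
    return ((x - center[0]) ** 2 + (y - center[1]) ** 2
            + (z - center[2]) ** 2) <= radius ** 2

def add_material(grid, center, radius):
    """'Paint' clay in: set union with a spherical brush."""
    return grid | sphere_mask(grid.shape[0], center, radius)

def subtract_material(grid, center, radius):
    """'Erase' clay: set difference with a spherical brush."""
    return grid & ~sphere_mask(grid.shape[0], center, radius)
```

Adding a large sphere and then subtracting a smaller concentric one leaves a hollow shell – a two-stroke “sculpt” that already hints at how familiar paint/erase gestures translate into 3D.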

Integration with AI and other technologies – Lastly, breakthroughs in machine learning, and deep learning in particular, are unlocking new capabilities in natural language processing, computer vision and AI. By enhancing conventional applications with voice control, we are exploring the right blend of hands-on design and the Harry Potter-like naturalness of verbal expression. With this early look at an interactive agent for photo editing, we combine the emerging science of voice interaction with a deep understanding of creative workflows and aspirations.
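At its simplest, a voice-driven editing agent must map a free-form utterance to an edit operation and a direction. The keyword-matching sketch below is purely illustrative – the actual prototype’s grammar and models are not public, and the vocabulary table here is invented for the example; real systems use learned language models rather than keyword lists.

```python
import re

# Hypothetical vocabulary mapping spoken keywords to edit operations.
EDITS = {
    "brightness": "exposure",
    "brighter": "exposure",
    "darker": "exposure",
    "contrast": "contrast",
    "warmer": "temperature",
    "cooler": "temperature",
}

def parse_command(utterance):
    """Map an utterance to (operation, direction), or (None, 0) if unrecognized."""
    text = utterance.lower()
    # Words implying a decrease flip the direction of the adjustment.
    decrease = re.search(r"\b(less|lower|decrease|darker|cooler)\b", text)
    direction = -1 if decrease else 1
    for keyword, operation in EDITS.items():
        if keyword in text:
            return operation, direction
    return None, 0
```

Even this crude mapping shows the shape of the problem: the hard part is not matching words but grounding them in a deep model of creative intent, which is where the workflow expertise comes in.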

This post also appeared on Forbes.com.