Imagination in Three Dimensions

This story is part of a series of weekly posts that give you a closer look at the people and technology showcased as part of MAX Sneaks. Read our other features on Time of Day, Live Mobile Dev, Visual Speech Editor, Gap Stop, and PSD Web Editing.

Anybody who’s ever painted a bedroom or purchased a new sofa knows it’s not always easy to visualize changes to real-life, three-dimensional spaces. A color that looks great on a swatch at the store can look completely different spread across a wall. A chair that looks small and sleek in a catalog can end up dominating your living room.

The interplay of light, color, object placement, patterns of movement, and distance creates so many variables that it’s hard to imagine them all accurately. Even professionals who work and think in three dimensions constantly, such as architects and game designers, can struggle with the challenge. What’s more, the digital tools available for conceptualizing and rendering three-dimensional objects and spaces are often complex, with a steep learning curve.

But what if there were software tools that made it easier? That’s something that Kalyan Sunkavalli, a research scientist in Adobe’s Imagination Lab, has been thinking about for a while. Even as a PhD student in computer science at Harvard, he was interested in the capabilities of computer vision and computer graphics.

“Given a photograph,” he wonders, “what can I figure out based on what I see in the photograph? Can I figure out what’s in it; who the people are; what’s the shape of the objects? Can I figure out what the lighting is? And if I can figure these things out, can I build a tool that enables people to edit them in interesting ways? Can I build it in such a way that even a novice can use it?”

Kalyan collaborated with a team of researchers across Adobe and academia — Sunil Hadap, Nathan Carr, Hailin Jin and Kevin Karsch (a summer intern from UIUC) — to develop a technology that automatically generates three-dimensional computer models of a space from a single photograph.

“What we wanted to do was build a tool that automatically analyzes a photograph and creates a 3D model, so that we can capture the geometry of the space as well as the lighting of the room. Once we have that, we can add additional 3D models of objects and have them interact with the surfaces and lighting of the room in a completely natural way,” Kalyan says.

The team originally published their work in the journal ACM Transactions on Graphics and later presented it at SIGGRAPH, the well-known computer graphics conference. Most recently, they showcased the work as part of Sneaks night at Adobe MAX 2014.

During the demonstration, Kalyan showed the audience the difficulty of adding 3D models to a two-dimensional photograph in a traditional photo editor.

Kalyan then impressed the audience by showing the work his team had done to make the process of combining photographs with 3D models very simple. He quickly generated a 3D model of a room from a single photograph and demonstrated the various ways that model could be manipulated — from integrating 3D objects, to adjusting the depth of field, to manipulating light sources — allowing for rapid exploration of the space.
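
For readers curious about what happens under the hood, here is a rough sketch of that workflow in Python-style code. It is not the team’s implementation, and every function below is a hypothetical placeholder standing in for one of the steps Kalyan described.

    # Conceptual sketch only -- these are hypothetical placeholders, not Adobe code.
    # They mirror the demo's flow: one photo in, a composited, relightable scene out.

    def estimate_room_geometry(photo):
        """Hypothetical step: recover walls, floor, and major surfaces from a single photo."""
        ...

    def estimate_lighting(photo, geometry):
        """Hypothetical step: estimate where the light sources are and how bright they are."""
        ...

    def insert_object(geometry, object_model, position):
        """Hypothetical step: place a 3D object model into the reconstructed scene."""
        ...

    def render_composite(photo, scene, lighting):
        """Hypothetical step: re-render the scene with the estimated lighting and
        blend the result back into the original photograph."""
        ...

    def add_object_to_photo(photo, object_model, position):
        geometry = estimate_room_geometry(photo)            # 3D model of the space
        lighting = estimate_lighting(photo, geometry)       # light sources and brightness
        scene = insert_object(geometry, object_model, position)
        return render_composite(photo, scene, lighting)     # shadows and shading follow naturally

Because the geometry and lighting are recovered automatically in this sketch, edits like moving an object or changing a light source would amount to re-running only the last two steps, which is what makes the kind of rapid exploration shown in the demo possible.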
