Adobe MAX Sneaks show how AI is enhancing the future of creativity
Sneaks are a can’t-miss, fan-favorite moment every year at Adobe MAX. During the Sneaks session, Adobe demonstrated cutting-edge technologies that may or may not become features in our products. This year’s Sneaks spotlighted AI’s role in unlocking creative possibilities for anyone interested in developing immersive 3D environments, creating animated videos for social media, or designing projects for real-world settings.
Actor, comedian, entrepreneur, and Sneaks co-host Kevin Hart unveiled our latest innovations alongside Adobe’s engineers and research scientists, and we have all the highlights right here in a bite-sized recap. Check them out below, then let us know what you like by tweeting at our Adobe MAX Twitter channel.
Want to see the full Sneaks session? Visit here.
Project Clever Composites
Perhaps you visited the Eiffel Tower but missed out on that picture of you standing right in front of it. With Project Clever Composites’ new image compositing feature, you can make that once-in-a-lifetime shot a reality: this Sneak makes it easy to add yourself to any background.
While compositing images into backgrounds is a common task in image editing software, compositing an object from one image into a background can be a time-consuming and difficult process. This typically involves several manual tasks, including searching for the right image, carefully cutting out the desired object, and manually editing the object’s color, tone, and scale.
Project Clever Composites uses AI and automation to transform this tedious process into an essentially drag-and-drop action. Now, with just a few clicks, you can achieve a realistic composite image. AI makes it easy to quickly identify an object suitable for adding to a background image, automatically cut out the object, adjust its color and size for consistency with the background, and then generate appropriate shadows by analyzing the background lighting.
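For the curious, here is a rough sense of what such a compositing pipeline involves. This is a minimal Python sketch, not Adobe’s implementation: it assumes a subject that has already been cut out with an alpha mask, harmonizes its colors with the background using a crude mean/std blend, scales it, and fakes a soft drop shadow. The file names and parameters are illustrative placeholders.

```python
"""Minimal compositing sketch: color-harmonize, scale, shadow, and paste a cutout."""
import numpy as np
from PIL import Image, ImageFilter

def harmonize_color(subject_rgb: np.ndarray, background_rgb: np.ndarray) -> np.ndarray:
    """Nudge the subject's per-channel mean/std toward the background's
    (a crude stand-in for learned color harmonization)."""
    out = subject_rgb.astype(np.float32)
    for c in range(3):
        s_mean, s_std = out[..., c].mean(), out[..., c].std() + 1e-6
        b_mean, b_std = background_rgb[..., c].mean(), background_rgb[..., c].std()
        out[..., c] = (out[..., c] - s_mean) / s_std * b_std * 0.5 + (s_mean + b_mean) / 2
    return np.clip(out, 0, 255).astype(np.uint8)

def composite(background_path: str, subject_path: str, position: tuple[int, int],
              scale: float = 1.0, shadow_offset: tuple[int, int] = (25, 15)) -> Image.Image:
    bg = Image.open(background_path).convert("RGB")
    subj = Image.open(subject_path).convert("RGBA")  # pre-cut subject with alpha mask

    # Scale the subject to fit the scene.
    w, h = subj.size
    subj = subj.resize((int(w * scale), int(h * scale)), Image.Resampling.LANCZOS)

    # Harmonize subject colors with the background.
    rgba = np.array(subj)
    rgba[..., :3] = harmonize_color(rgba[..., :3], np.array(bg))
    subj = Image.fromarray(rgba)

    # Fake a soft drop shadow by dimming, blurring, and offsetting the alpha mask.
    alpha = subj.getchannel("A")
    shadow = Image.new("RGBA", subj.size, (0, 0, 0, 0))
    shadow.putalpha(alpha.point(lambda a: int(a * 0.5)))
    shadow = shadow.filter(ImageFilter.GaussianBlur(8))

    out = bg.convert("RGBA")
    x, y = position
    out.alpha_composite(shadow, (x + shadow_offset[0], y + shadow_offset[1]))
    out.alpha_composite(subj, (x, y))
    return out.convert("RGB")

if __name__ == "__main__":
    composite("eiffel_tower.jpg", "me_cutout.png", position=(300, 420), scale=0.8).save("composite.jpg")
```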
Presenter: Zhifei Zhang is a research engineer at Adobe focusing on image editing, compositing, and representation.
Project Instant Add
When creating a video, have you ever made it to post-production — only to realize you forgot to add a logo, or need to make last-minute edits?
Project Instant Add uses AI and machine learning to simplify the typically complex and time-consuming process of editing videos and adding VFX in post-production. This innovation enables anyone to edit video content the same way they edit images. Users simply choose the element in the video to map text or graphics to, and AI handles the rest, automatically keeping the graphic mapped to that element throughout the video.
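To make the “edit video like an image” idea concrete, here is a simplified sketch that uses a stock OpenCV tracker as a stand-in for Adobe’s mapping technology (it requires opencv-contrib-python). The file names, window name, and choice of tracker are illustrative assumptions, not part of Project Instant Add.

```python
"""Pick an element once, then keep a logo pinned to it frame by frame."""
import cv2

def pin_logo_to_element(video_path: str, logo_path: str, out_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("could not read video")

    # The user picks the element to map the graphic to, once.
    box = cv2.selectROI("pick element", frame, showCrosshair=True)
    cv2.destroyAllWindows()
    tracker = cv2.TrackerCSRT_create()
    tracker.init(frame, box)

    logo = cv2.imread(logo_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps,
                             (frame.shape[1], frame.shape[0]))

    while ok:
        ok_track, bbox = tracker.update(frame)  # follow the chosen element
        if ok_track:
            x, y, w, h = (int(v) for v in bbox)
            if (w > 0 and h > 0 and x >= 0 and y >= 0
                    and x + w <= frame.shape[1] and y + h <= frame.shape[0]):
                # Resize the graphic to the tracked element and overlay it.
                frame[y:y + h, x:x + w] = cv2.resize(logo, (w, h))
        writer.write(frame)
        ok, frame = cap.read()

    cap.release()
    writer.release()
```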
Project Magnetic Type
Graphic design projects often involve combining shapes with text. Yet merging shapes with text can present challenges during editing, especially when the shapes associated with the text no longer align. Correcting this issue requires a lot of tedious repositioning and alignment, even for the most advanced designers.
Project Magnetic Type lets anyone attach and instantly fuse any shape, even hand-drawn calligraphy, to a piece of live digital text. Adobe’s AI technology allows fluid binding of shapes to text without distorting the original text’s aesthetics or compromising its editability. The technology uses object detection models to extract swashes from any calligraphic image, which can then be fused with any live type to give it a similar look. It provides flexibility when attaching and detaching shapes or objects, and expands a designer’s creative options when developing text-based content such as logos or ads.
Presenter: Arushi Jain is a computer scientist at Adobe with a deep passion for typography-related features, especially when it comes to fonts, glyphs, and vector snapping.
Project Vector Edge
When creating graphic designs, designers commonly struggle to visualize what assets will look like in their intended real-world settings, such as billboards, t-shirts, or coffee mugs.
Project Vector Edge gives designers and their teams the ability to visualize, edit, and collaborate on 2D designs within 3D environments during the design process. Using AI and vector graphic projection technologies, this project automatically projects 2D design assets onto surfaces in 3D environments, true to scale and in high resolution, so designers and stakeholders can visualize how assets will look in the real world.
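Conceptually, projecting a flat design onto a surface in a photo can be approximated with a perspective warp. The sketch below uses a plain OpenCV homography rather than Adobe’s true-to-scale vector projection; the corner coordinates and file names are made up for illustration.

```python
"""Project flat artwork onto a quadrilateral surface (e.g., a billboard) in a photo."""
import cv2
import numpy as np

def project_onto_surface(artwork_path: str, scene_path: str,
                         surface_corners: list[tuple[int, int]]) -> np.ndarray:
    art = cv2.imread(artwork_path)
    scene = cv2.imread(scene_path)
    h, w = art.shape[:2]

    # Map the artwork's corners to the surface's four corners in the scene,
    # listed in the same order (top-left, top-right, bottom-right, bottom-left).
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(surface_corners)
    H = cv2.getPerspectiveTransform(src, dst)

    warped = cv2.warpPerspective(art, H, (scene.shape[1], scene.shape[0]))

    # Keep the scene everywhere except where the warped artwork lands.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H,
                               (scene.shape[1], scene.shape[0]))
    scene[mask > 0] = warped[mask > 0]
    return scene

# Example: preview a poster design on a billboard whose corners were picked by hand.
result = project_onto_surface("poster.png", "street_photo.jpg",
                              [(420, 120), (880, 150), (870, 430), (430, 410)])
cv2.imwrite("preview.jpg", result)
```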
Presenter: Ankit Phogat is a senior computer scientist at Adobe, where he works on technologies such as Freeform Gradients, Puppet Warp, and 3D.
Project Motion Mix
Have you ever wanted to create eye-catching videos of yourself dancing that could make you the next viral social media sensation? Project Motion Mix enables creators to easily produce high-quality looping human animations, complete with realistic human movements, all from a still image.
Traditionally, generating looping human animations for video is time-consuming and challenging, with video creators needing to capture each unique motion before they can add it to the animation. This process can be especially difficult for those looking to replicate a professional dancer’s movements. Project Motion Mix automates it, using AI-based motion generation and human rendering technologies to create high-quality, realistic 3D movements for the subject of the video. Creators can also make changes to the video’s background, including adding a friend who joins in the animation.
Presenter: Jae Shin Yoon is a research scientist at Adobe. His research interests include high-quality reconstruction and temporal generalization of digitized human avatars, modeling, rendering, and adapting avatars from a single camera using computer vision, graphics, and machine learning.
Project Blink
Have you ever wished you could quickly search a video to skip to the good parts or create a more shareable video clip?
Say hello to Project Blink, a new video editing tool that uses AI to help users quickly find and pull highlights from video content. Project Blink makes editing video as easy as editing text. To create clips, users just search for specific words, objects, sounds, or even types of activities that took place in the video, then select the portion of the video transcript they would like to use. Adobe’s AI then automatically transforms that section of the video into a new clip.
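For a feel of how transcript-driven clipping works in general (this is not Project Blink’s implementation), the sketch below searches time-stamped transcript segments for a phrase and cuts the matching span with ffmpeg. The transcript format, a list of {"start", "end", "text"} segments like Whisper-style tools produce, and the file names are assumptions.

```python
"""Find a phrase in a time-stamped transcript and cut that span from the video."""
import subprocess

def find_span(transcript: list[dict], query: str) -> tuple[float, float] | None:
    """Return the (start, end) of the first segment whose text contains the query."""
    for seg in transcript:
        if query.lower() in seg["text"].lower():
            return seg["start"], seg["end"]
    return None

def cut_clip(video: str, start: float, end: float, out: str) -> None:
    # Stream-copy the span (cuts land on the nearest keyframe; no re-encoding).
    subprocess.run(["ffmpeg", "-ss", str(start), "-i", video,
                    "-t", str(end - start), "-c", "copy", out], check=True)

transcript = [
    {"start": 0.0, "end": 12.4, "text": "Welcome everyone to the launch event."},
    {"start": 12.4, "end": 31.0, "text": "Here is the demo of the new feature."},
]
span = find_span(transcript, "demo")
if span:
    cut_clip("talk.mp4", *span, out="highlight.mp4")
```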
Project Blink is now available in beta. If you are interested in trying out this technology today, you can do so by signing up for beta access here.
Presenter: Mira Dontcheva is principal scientist and research manager at Adobe, leading research in Human Computer Interaction (HCI). Her research focuses on building new tools that make creative tasks easier, more fun, and more accessible to wider audiences.
Project Artistic Scenes
The metaverse has opened a whole new world of possibilities for those interested in developing 3D and immersive content. But creating artistic 3D content often requires significant time and special expertise.
Project Artistic Scenes uses AI to automatically transform 3D scenes using styles from 2D artwork. With Project Artistic Scenes, creators can quickly produce high-quality stylized 3D content that will be essential for the future of virtual reality and augmented reality experiences. Unlike traditional style-transfer techniques that only transform 2D images, Adobe’s approach transforms the entire 3D scene. This makes it possible to render a 3D scene in any artistic style: imagine a playground rendered as a watercolor painting, complete with realistic style details such as brushstrokes, creating a fully immersive, artistic 3D experience.
Presenter: Sai Bi is a research scientist at Adobe. His research focuses on topics in computer graphics and computer vision, such as appearance acquisition, 3D reconstruction, inverse rendering, and neural rendering.
Project All of Me
While cropping images down to smaller sizes can be easy, extending, or un-cropping, images to larger sizes can be challenging, or even seem downright impossible!
With Project All of Me, anyone — a fashion designer who needs to update their website models with new clothing designs or a student hoping to create a school event flyer — can create new content and larger images with just a few clicks. This smart portrait photo editor leverages AI to generate un-cropped components of photos and edit out unwanted distractions — it can even provide recommendations and modify the photo subject’s apparel.
Presenter: Qing Liu is a research engineer and scientist at Adobe working on deep learning-based image understanding and generation.
Project Beyond the Seen
Have you ever wanted to create immersive 360° experiences from flat 2D images? You could relive a family vacation or develop a new world for the metaverse.
Pushing the limits of immersive virtual reality, Project Beyond the Seen enables you to do just that, using Adobe’s AI to easily generate full 360° panoramas from a single image. Depth estimation methods allow users to extend a panoramic image, creating realistic and fully immersive 3D environments. Because AI can generate content behind, above, below, and to the sides of the original image, users can quickly and easily add objects, AI-generated reflections, and more.
Presenter: Yannick Hold-Geoffroy is a senior engineer and research scientist at Adobe. He has published over 40 scientific papers and patents in the fields of image analysis and 3D reconstruction, notably contributing to the Match Image feature in Adobe Stager, and the infrastructure behind Neural Filters in Adobe Photoshop, among others.
Project Made in the Shade
Editing shadows and their interactions with shapes is difficult and time-consuming in traditional image editing software. Project Made in the Shade uses AI to make 3D image editing easy and intuitive, eliminating the need for the complex knowledge and skills typically required to add shadows. Adobe’s AI technology recognizes depth, lighting, and other aspects of a scene, enabling users to move a person or object within a photo while casting realistic shadows. Now anyone, hobbyist or pro, can easily move people or objects with shadows in photographs, a particularly useful technology for compositing 3D images and making 3D motion graphics.
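As a toy illustration of moving an object together with a cast shadow (Adobe’s approach relies on learned depth and lighting estimation, which this does not attempt), the sketch below simply shears and blurs the object’s alpha mask toward an assumed light direction to fake a ground shadow.

```python
"""Paste an RGBA cutout with a sheared, blurred drop shadow behind it."""
import numpy as np
from PIL import Image, ImageFilter

def move_with_shadow(background: Image.Image, cutout: Image.Image,
                     position: tuple[int, int], shear: float = 0.6) -> Image.Image:
    w, h = cutout.size
    alpha = np.array(cutout.getchannel("A"))

    # Shear the mask so the fake shadow leans away from an assumed light direction;
    # rows near the bottom (the "feet") shift the least.
    shadow_mask = np.zeros_like(alpha)
    for row in range(h):
        offset = int(shear * (h - row))
        shifted = np.roll(alpha[row], offset)
        shifted[:offset] = 0
        shadow_mask[row] = shifted

    # Dim and blur the sheared mask into a soft black shadow layer.
    shadow = Image.new("RGBA", cutout.size, (0, 0, 0, 0))
    shadow.putalpha(Image.fromarray((shadow_mask * 0.5).astype(np.uint8)))
    shadow = shadow.filter(ImageFilter.GaussianBlur(6))

    out = background.convert("RGBA")
    out.alpha_composite(shadow, position)
    out.alpha_composite(cutout, position)
    return out.convert("RGB")
```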
Presenter: Vojtěch Krs is a research engineer at Adobe. His research and engineering areas of focus include creative tools, digital imaging, and real-time 3D graphics. He enjoys coding for hours at a time, and learning to create digital art of his own.