MAX Sneaks highlight several new generative AI capabilities across photo, video, audio, 3D and design
We are entering a new era of creativity, where generative AI is expanding access to powerful new workflows and unleashing our most imaginative ideas. Adobe’s Sneaks offer a “sneak peek” into what’s ahead, and at this year’s MAX Sneaks session — hosted by actor and comedian Adam DeVine — Adobe research scientists and engineers demonstrated for the first time several cutting-edge, experimental technologies that could someday become features in Adobe products.
This year, many of the MAX Sneaks leverage generative AI, providing creators with innovative new tools spanning multiple mediums — including photo, video, audio, 3D and design — that can take creativity to a whole new level.
Watch the full Sneaks session and read more about this year’s finalists below.
Project Fast Fill
This tool enables editors to remove objects or change background elements in videos with the same ease, quality, and fluidity as editing still images.
Project Fast Fill harnesses Generative Fill, powered by Adobe Firefly, to bring generative AI technology into video editing applications. With simple text prompts, users can perform texture replacement in videos, even on complex surfaces and under varying light conditions. An editor can modify an object on a single frame, and that edit automatically propagates to the rest of the video’s frames, saving a significant amount of texture editing time.
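To make the edit-once, propagate-everywhere idea concrete, here is a minimal sketch: inpaint one keyframe, then carry the result forward along estimated motion. Everything in it (`inpaint`, `estimate_flow`, the stub behaviors) is a hypothetical stand-in for illustration, not Adobe’s implementation.

```python
import numpy as np

def inpaint(frame, mask, prompt):
    # Stand-in for a Firefly-style generative fill model.
    return frame  # placeholder: a real model would synthesize new pixels

def estimate_flow(prev_frame, next_frame):
    # Stand-in for an optical-flow model; zeros mean "no motion".
    return np.zeros((*prev_frame.shape[:2], 2), dtype=np.float32)

def warp(image, flow):
    # Move each pixel along the flow field (nearest-neighbor, simplified).
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
    return image[src_y, src_x]

def propagate_edit(frames, mask, prompt):
    # Edit the first frame once, then carry that edit through the clip.
    edited = [inpaint(frames[0], mask, prompt)]
    masks = [mask]
    for prev, cur in zip(frames, frames[1:]):
        flow = estimate_flow(prev, cur)
        patch = warp(edited[-1], flow)       # edited pixels, moved forward
        moved_mask = warp(masks[-1], flow)   # edit region, moved forward
        edited.append(np.where(moved_mask[..., None] > 0, patch, cur))
        masks.append(moved_mask)
    return edited
```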
Presenter: Gabriel Huang is a research engineer at Adobe focusing on video. He has contributed to pioneering technologies that revolutionize video effects editing by making it easier and more accessible.
Project Draw & Delight
Ever get stalled, or need help jumpstarting the creative process, when trying to bring an idea to life?
With Project Draw & Delight, creators can use generative AI to guide them along their creation journey, helping transform initial ideas — often represented as rough doodles or scribbles — into polished and refined sketches.
This technology goes beyond text-to-image by letting users augment text-based instructions with visual hints, such as rough sketches and paint strokes. Draw & Delight then uses the power of Adobe Firefly to generate high-quality vector illustrations or animations in various color palettes, style variations, poses and backgrounds.
Presenter: Souymodip Chakraborty is a computer scientist at Adobe. His interests are in computer graphics and geometry processing.
Project Neo
Incorporating 3D elements into 2D designs (infographics, posters, logos or even websites) can be difficult to master, and often requires designers to learn new workflows or technical skills.
Project Neo enables designers to create 2D content using 3D shapes, without having to learn traditional 3D creation tools and methods. The technology leverages the best of 3D principles so designers can quickly and easily create 2D shapes with one-, two- or three-point perspective. Designers using this technology can also collaborate with stakeholders and edit mockups at the vector level, so changes to a project happen quickly.
Presenter: Inigo Quilez is a principal research engineer at Adobe focused on 2D & 3D animation. He is the creator of VR animation tools, and has had his computer graphics work featured in several animated blockbusters and VR films.
Project Scene Change
Composition is an essential part of cinematography; it allows filmmakers and video creators to develop a narrative and is vital to keeping viewers engaged with the story as it plays out onscreen.
Project Scene Change makes it easy to composite a subject and a scene from two separate videos — captured with different camera trajectories — into one scene with synchronized camera motion.
Artificial intelligence renders a 3D representation of the background scene from a prerecorded image, as if it were captured by a free-moving camera, then composites the separately filmed subject, with proper shadows, into the new scene with compatible motion. This removes the limitations imposed by the camera motion of existing video assets, and lets video editors place a subject into a new environment with realistic camera motion.
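A rough sketch of the compositing step, assuming the background has already been reconstructed as a 3D scene model and the subject footage comes with per-frame alpha mattes and camera poses; `render_background` is a hypothetical stand-in for a novel-view synthesis model, not an Adobe API.

```python
import numpy as np

def render_background(scene, pose):
    # Stand-in for novel-view synthesis from a reconstructed 3D scene;
    # a real model would render the background from this camera pose.
    return np.zeros((720, 1280, 3), dtype=np.float32)

def composite(scene, subject_frames, subject_alphas, subject_poses):
    frames_out = []
    for frame, alpha, pose in zip(subject_frames, subject_alphas, subject_poses):
        # Re-render the background along the subject's own camera path,
        # so both layers share one synchronized camera motion.
        background = render_background(scene, pose)
        a = alpha[..., None]  # (H, W, 1) matte
        frames_out.append(a * frame + (1.0 - a) * background)
    return frames_out
```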
Presenter: Zhan Xu is a research scientist at Adobe Research. He focuses on understanding videos from a 3D perspective and introducing 3D controls into video editing.
Project Primrose
Today, many designers use Adobe Illustrator to try out new designs. Wouldn’t it be great if they could quickly bring those designs to life in real objects, with the click of a button?
Project Primrose, displayed at MAX as an interactive dress, makes this possible with flexible, wearable, non-emissive textiles that allow an entire surface to display content created with Adobe Firefly, Adobe After Effects, Adobe Stock, and Adobe Illustrator. Designers can layer this technology into clothing, furniture, and other surfaces to unlock infinite style possibilities, such as the ability to download and wear the latest design from a favorite designer.
Presenter: Christine Dierk is a research scientist at Adobe, specializing in human-computer interaction and hardware research initiatives.
Project Glyph Ease
When creating flyers or posters, designers often need to create each letter by hand to maintain a consistent style. Depending on the design and the shape of each character’s elements, this can take a lot of time.
Project Glyph Ease uses generative AI to create stylized, customized letters in vector format, which can later be used and edited. All a designer needs to do is create three reference letters in a chosen style, from existing vector shapes or ones they hand draw on paper, and the technology automatically creates the remaining letters in a consistent style. Once created, designers have the flexibility to edit the new font, since the letters appear as live text that can be scaled, rotated or moved in the project.
Presenter: Difan Liu is a research scientist at Adobe where he focuses on image, vector graphic, and video editing.
Project Poseable
Designing prototypes and storyboards can take hours, requiring creators to make painfully slow edits for every detail in each scene.
Project Poseable makes it easy for anyone to quickly design 3D prototypes and storyboards in minutes with generative AI.
Instead of having to spend time editing the details of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.
Presenter: Yi Zhou is a research scientist at Adobe. Her research is focused on autonomous virtual avatars. She mainly works on representation learning for 3D models, human hair and body reconstruction, human motion synthesis, and 3D animation.
Project Res Up
You’ve probably encountered blurry, low-resolution videos before: maybe a video wasn’t upscaled to look good on a larger screen, or it was originally made for SD and is now playing on an HD display.
Project Res Up can help: it’s a video upscaling tool that uses diffusion-based artificial intelligence to convert low-resolution videos to high resolution. Users can upscale low-resolution videos directly, or zoom in, crop, and upscale a region to full resolution, with high-fidelity visual details and temporal consistency. This is great for bringing new life to older videos, or for preventing blur when scaled-up versions play on HD screens.
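As a rough illustration of the per-frame loop, here is a minimal sketch; `diffusion_upscale` is a hypothetical stand-in (a nearest-neighbor placeholder here), and conditioning each frame on the previous output is one common way such systems encourage temporal consistency.

```python
import numpy as np

def diffusion_upscale(lowres, prev_highres=None, scale=4):
    # Stand-in for a diffusion super-resolution model. Conditioning on
    # the previously upscaled frame (prev_highres) is one way to keep
    # results temporally consistent; here we just resize as a placeholder.
    return lowres.repeat(scale, axis=0).repeat(scale, axis=1)

def upscale_video(frames, scale=4):
    out, prev = [], None
    for frame in frames:
        prev = diffusion_upscale(frame, prev_highres=prev, scale=scale)
        out.append(prev)
    return out
```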
Presenter: Yang Zhou is a research scientist at Adobe, where he works on deep learning-based video generation and digital humans.
Project Dub Dub Dub
As the digital economy grows, so does the need to deliver video content on a global scale. More content creators want to reach new audiences by making their videos and podcasts available to anyone, no matter their location or language.
Project Dub Dub Dub uses generative AI to auto-dub videos and audio clips in more than 70 languages and over 140 dialects. It uses speech-to-speech translation to automatically translate the dialogue while matching the speaker’s voice, tone, cadence and the acoustics of the original recording, whether the clip is brand new or from a user’s video archives. All users have to do is press a button to auto-dub content, transforming a historically labor- and cost-intensive process into one that takes minutes.
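Conceptually, speech-to-speech dubbing chains a few stages: transcription, translation, and voice-matched synthesis. The sketch below is purely illustrative; the function names and return values are hypothetical placeholders, not Adobe’s APIs.

```python
def transcribe(audio):
    # Speech recognition: audio -> timed source-language segments.
    return [("Hello, everyone.", 0.0, 1.2)]

def translate(segments, target_lang):
    # Machine translation, preserving each segment's timing.
    return [("Bonjour à tous.", start, end) for _, start, end in segments]

def synthesize(segments, voice_profile):
    # Voice-matched text-to-speech: render the translated text in the
    # original speaker's voice, fitting each segment's time window.
    return b""  # placeholder waveform

def auto_dub(audio, target_lang, voice_profile):
    segments = transcribe(audio)
    translated = translate(segments, target_lang)
    return synthesize(translated, voice_profile)
```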
Presenter: Zeyu Jin is a senior research scientist at Adobe Research. His research is rooted in deep generative models for studio-quality speech enhancement, speech quality assessment and personalized voice generation.
Project Stardust
Have you ever taken a photo or created content with Adobe Firefly and wanted to quickly modify specific objects in the image?
Project Stardust is an object-aware editing engine that uses artificial intelligence and generative AI to revolutionize image editing. The technology automates time-consuming parts of the editing process, such as filling in backgrounds, cutting out object outlines for selection, and blending lighting and color. Its generative AI features also let you add objects and make creative transformations. Stardust makes image editing more intuitive, accessible, and time-efficient for any user, regardless of skill level.
Presenter: Aya Philémon is a product manager at Adobe who aims to empower others to make the most of their creative potential. Her research projects are inspired by both her professional and personal life experiences.
Project See Through
When taking pictures, glass reflections can be a nuisance. Reflections can obscure or distract from image subjects, and often make photos completely unusable.
Today, removing reflections manually is difficult or impossible. Project See Through simplifies cleanup by using artificial intelligence: reflections are removed automatically, and can optionally be saved as separate images for later editing. This gives users more control over when and how reflections appear in their photos.
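One common way to frame this problem in the research literature (not necessarily Adobe’s method) is as layer separation: a photo taken through glass is modeled as a transmission layer plus a reflection layer, and a network splits the two. A minimal sketch, with `predict_reflection` as a hypothetical stand-in for a trained model:

```python
import numpy as np

def predict_reflection(image):
    # Stand-in for a trained network that infers the reflection layer;
    # zeros mean "no reflection detected".
    return np.zeros_like(image)

def remove_reflection(image):
    # Model the photo as transmission + reflection, then separate them.
    reflection = predict_reflection(image)
    transmission = np.clip(image - reflection, 0.0, 1.0)
    return transmission, reflection  # reflection kept as an editable layer
```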
Presenter: Eric Kee is a research scientist at Adobe. His research interests lie in the intersection of computer vision, computational photography, machine learning and visual perception.