Reimagining our video and audio tools with Adobe Firefly


Meet Adobe Firefly. For Video.

Ahead of this week’s annual NAB conference, Adobe announced new innovations, including Text-Based Editing and Automated Color Tone-Mapping in Adobe Premiere Pro, and celebrated 30 years of Adobe After Effects. We’ve also expanded Frame.io to enable photo and PDF document reviews, giving decentralized marketing teams a unified, intuitive cloud hub for collaborating on assets.

New Text-Based Editing in Premiere Pro.

For more than three decades, Adobe’s family of video and audio apps has been the tools of choice for the world’s most talented creative pros. This year alone, our applications helped bring to life 10 Oscar-nominated films, including the winner for Best Picture and Best Editing, “Everything Everywhere All At Once.” The creators behind Adobe Substance 3D Designer earned a Scientific and Technical Award from the Academy of Motion Picture Arts and Sciences® (Adobe’s third Oscar), and nearly two-thirds of the movies shown at the 2023 Sundance Film Festival were edited in Premiere Pro.

Bringing Generative AI to Creative Cloud with Adobe Firefly

Firefly in Premiere Pro.

For more than 10 years, Adobe has delivered hundreds of AI-driven features through Adobe Sensei, our AI and machine learning framework. Features like Auto Reframe and Remix in Premiere Pro and Content-Aware Fill for Video in After Effects are already helping video and audio professionals around the world create stunning content at high velocity.

We are entering a new era where generative AI will enable a natural conversation between creator and computer — where typing in your own words and simple gestures will combine with the best of professional creative application workflows to enable new creative expression.

Last month we announced Adobe Firefly, the next major evolution of AI-driven creativity and productivity. Firefly is our family of creative generative AI models, starting with image generation and text effects. Our first model, trained on Adobe Stock images, openly licensed content, and public domain content where copyright has expired, is designed to generate content that is safe for commercial use. Firefly is now available in beta (try it out!), and we shared concepts for how Firefly could be integrated into Adobe Photoshop, Adobe Illustrator, Premiere Pro, and other Adobe applications in the near future.

At NAB, we are expanding the vision for Firefly to imagine ways we can bring generative AI into Adobe’s video, audio, animation and motion graphics design apps.

Imagining Adobe Firefly for video

We are truly in the golden age of video: short-form video is ubiquitous in news, social media, and entertainment. And insatiable demand, multiplying channels, and globally distributed teams make it especially challenging to scale the production of high-quality creative work efficiently.

This is why we’re excited to invent and innovate with the video and audio community to make it easier and faster for you to transform your vision into reality. With Firefly as a creative co-pilot, you can supercharge your discovery and ideation processes and cut post-production time from days to minutes. And with generative AI integrated directly into your workflows, these powerful new capabilities will be right at your fingertips. Imagine the power to instantly change the time of day of a video, automatically annotate and find relevant b-roll, or create limitless variations of clips, all as a starting point for your creativity.

To start, we’re exploring a range of concepts across these workflows.

Later this year, we’ll begin introducing new generative AI features for video, audio, animation and motion graphics design. In the meantime, we’d love to hear from you and learn more about what you’d like to see from Adobe Firefly and generative AI across our applications. You can sign up for the Firefly beta and join the conversation on Discord. And on Thursday, April 20, you can also join us for a community live session on YouTube.