Adobe previews cutting-edge generative AI tools for crafting and editing custom audio

Image credit: Adobe Stock/ _veiksme_

New experimental work from Adobe Research is set to change how people create and edit custom audio and music. Project Music GenAI Control, an early-stage generative AI tool for music generation and editing, allows creators to generate music from text prompts and then gives them fine-grained control to edit that audio for their precise needs.

“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” says Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies.

https://youtu.be/J6jhWyU5lBY

Adobe has a decade-long legacy of AI innovation, and Firefly, Adobe’s family of generative AI models, has in record time become the most popular family of AI image generation models designed for safe commercial use. Firefly has been used to generate over 6 billion images to date. Adobe is committed to ensuring our technology is developed in line with our AI ethics principles of accountability, responsibility, and transparency. All content generated with Firefly automatically includes Content Credentials, “nutrition labels” for digital content that remain associated with the content wherever it is used, published, or stored.

The new tools begin with a text prompt fed into a generative AI model, the same approach Adobe already uses in Firefly. A user enters a prompt, like “powerful rock,” “happy dance,” or “sad jazz,” to generate music. Once the tools generate music, fine-grained editing is integrated directly into the workflow.
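
Adobe has not published an API for Project Music GenAI Control, but the prompt-to-audio step it describes has open-source analogues. As a rough point of reference only, here is a minimal sketch using Meta’s MusicGen through the Hugging Face transformers library (not Adobe’s model); the prompt strings echo the examples above:

```python
# Text-to-music generation with an open-source analogue (Meta's MusicGen).
# This is NOT Adobe's model; Project Music GenAI Control has no public API.
from transformers import AutoProcessor, MusicgenForConditionalGeneration
from scipy.io import wavfile

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Text prompts play the same role as in the Adobe demo.
inputs = processor(
    text=["powerful rock", "sad jazz"],
    padding=True,
    return_tensors="pt",
)

# ~50 tokens per second of audio, so 512 tokens is roughly a 10-second clip.
audio = model.generate(**inputs, max_new_tokens=512)

sampling_rate = model.config.audio_encoder.sampling_rate  # 32 kHz for MusicGen
wavfile.write("powerful_rock.wav", rate=sampling_rate, data=audio[0, 0].numpy())
```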

With a simple user interface, users could transform their generated audio based on a reference melody; adjust the tempo, structure, and repeating patterns of a piece of music; choose when to increase and decrease the audio’s intensity; extend the length of a clip; re-mix a section; or generate a seamlessly repeatable loop.
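
Adobe’s tool performs these edits generatively, inside the model itself. For contrast, two of the listed operations, tempo adjustment and a seamlessly repeatable loop, can be approximated with conventional DSP. The sketch below uses librosa and NumPy and assumes the hypothetical “powerful_rock.wav” file from the previous example as input:

```python
# Conventional DSP versions of two edits the tool performs generatively:
# tempo adjustment (time-stretch) and a click-free repeatable loop.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("powerful_rock.wav", sr=None)  # assumed input clip

# Adjust tempo without changing pitch: rate > 1 speeds up, rate < 1 slows down.
faster = librosa.effects.time_stretch(y, rate=1.2)
sf.write("faster.wav", faster, sr)

# Seamless loop: crossfade the clip's tail into its head, so the end of one
# repetition blends into the start of the next. Assumes the clip is longer
# than twice the crossfade length.
fade = int(0.5 * sr)                      # 0.5-second crossfade
ramp = np.linspace(0.0, 1.0, fade)
head, tail = y[:fade], y[-fade:]
seam = tail * (1.0 - ramp) + head * ramp  # tail fades out while head fades in
loop = np.concatenate([y[fade:-fade], seam])

sf.write("loop.wav", loop, sr)            # repeating loop.wav plays without a click
```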

Instead of manually cutting existing music to make intros, outros, and background audio, Project Music GenAI Control could help users create exactly the pieces they need, solving workflow pain points end-to-end.

“One of the exciting things about these new tools is that they aren’t just about generating audio—they’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It’s a kind of pixel-level control for music,” explains Bryan.

Project Music GenAI Control is being developed in collaboration with colleagues at the University of California, San Diego, including Zachary Novack, Julian McAuley, and Taylor Berg-Kirkpatrick, and colleagues at the School of Computer Science, Carnegie Mellon University, including Shih-Lun Wu, Chris Donahue, and Shinji Watanabe.

Wondering what else is happening inside Adobe Research? Check out our latest news here.