Adobe expands bug bounty program to incentivize AI security research

A magnifying glass over a shield AI-generated content may be incorrect.

As Adobe's products rapidly introduce new AI-powered features across the creative and enterprise landscape, we recognize the importance of making sure these capabilities are thoroughly tested by the security research community. AI brings transformative potential to our customers, but it also introduces a new class of security challenges that demand specialized expertise and fresh approaches to vulnerability research.

To strengthen our coverage in this area and attract highly skilled AI-focused researchers, we are expanding the scope of our bug bounty program and introducing new incentives designed to help accelerate AI security research across Adobe's product portfolio.

Introducing the ‘AI bonus tier’

To reflect the unique complexity and impact of AI security research, we've launched a dedicated third bounty tier – the AI Bonus Tier – exclusively for AI-related findings across our products. This new tier sits alongside our existing bounty structure and offers enhanced rewards for researchers who identify vulnerabilities in scoped AI assets.

Top-tier critical vulnerabilities in scoped AI assets now earn rewards up to $15,000, recognizing the depth of skill and effort required to uncover meaningful security issues in AI-driven features.

What's new in scope and policy

We've restructured our program to meet AI researchers where they work:

  • Expanded scope covering newly released AI features across our web applications including:
    • Acrobat AI Assistant
    • Acrobat PDF Spaces
    • Acrobat Create Presentations
    • Acrobat Create Podcast
    • Express AI Assistant
    • Firefly Image Models
    • Firefly Video Model
    • Firefly Custom Models
    • Lightroom AI Edits
    • Lightroom "Edit suggestions" Tech Preview
    • Photoshop AI Assistant
    • Stock AI Studio
  • A dedicated AI scope section with curated references to public documentation to help accelerate research
  • Explicit AI vulnerability guidance, including the attack scenarios and vulnerability classes we're most interested in, from prompt injection to model abuse
  • A clear AI-specific exclusion list, so researchers know exactly where the boundaries are before they start testing

These updates are designed to remove friction and provide the clarity researchers need to dive in with confidence. Whether you're exploring prompt injection vectors, testing data leakage through AI-generated outputs, or evaluating model behavior under adversarial conditions, our updated policy provides a clear framework to guide your work.

What's coming

This is just the beginning. Throughout the year, we'll be extending AI asset coverage beyond web applications to include mobile and desktop products, broadening the attack surface available for research and making sure security keeps pace with our AI integrations across every platform.

As we continue to invest in AI-powered experiences for our customers, we are committed to expanding the program in parallel to providing researchers with new and meaningful opportunities to test, challenge, and help secure these capabilities.

Get involved: Help Adobe build more secure AI products

If you specialize in LLM security, adversarial inputs, AI pipeline abuse, or emerging AI threat models, there's never been a better time to engage with our program. As Adobe's bug bounty program continues to evolve and scale, we look forward to deepening our collaboration with the security community and providing more opportunities to empower researchers across the globe in securing the digital world.

If you are ready to make an impact and level up your AI security research skills, we invite you to submit a report today on Adobe's bug bounty program.

Subscribe to the Security@Adobe newsletter

Don’t miss out! Get the latest Adobe security news and exclusive content delivered straight to your inbox.

Sign up