News

Adobe collaborates with ethical hackers to build safer, more secure AI tools

media 13e26e7a9d858762130054dcd9

As we continue to integrate generative AI into our daily lives, it’s important to understand and mitigate the potential risks that can arise from its use. Our ongoing commitment to advancing safe, secure, and trustworthy AI includes transparency about the capabilities and limitations of large language models (LLMs).

Adobe has long focused on establishing a strong foundation of cybersecurity, built on a culture of collaboration, enabled by talented professionals, strong partnerships, leading edge capabilities, and deep engineering prowess. We prioritize research and collaborating with the broader industry on preventing risks by responsibly developing and deploying AI.

We have been actively engaged with partners, standards organizations, and security researchers for many years to collectively enhance the security of our products. We receive reports directly and through our presence on the HackerOne platform and are continually looking at ways to further engage with the community and open feedback to enhance our products and innovate responsibly.

Commitment to responsible AI innovation

Today, we’re announcing the expansion of the Adobe bug bounty program to reward security researchers for discovering and responsibly disclosing bugs specific to our implementation of Content Credentials and Adobe Firefly. By fostering an open dialogue, we aim to encourage fresh ideas and perspectives while providing transparency and building trust.

Content Credentials are built on the C2PA open standard and serve as tamper-evident metadata that can be attached to digital content to provide transparency about their creation and editing process. Content Credentials are currently integrated across popular Adobe applications such as Adobe Firefly, Photoshop, Lightroom and more. We are crowdsourcing security testing efforts for Content Credentials to reinforce the resilience of Adobe’s implementation against traditional risks and unique considerations that come with the provenance tool, such as the potential for intentional abuse of Content Credentials by incorrectly attaching them to the wrong asset.

Adobe Firefly is a family of creative generative AI models available as a standalone web application at firefly.adobe.com as well as through features powered by Firefly in Adobe flagship applications. We encourage security researchers to review the OWASP Top 10 for Large Language Models, such as prompt injection, sensitive information disclosure, or training data poisoning, to help focus their research efforts on pinpointing weaknesses in these AI-powered solutions.

Safer, more secure generative AI models

By proactively engaging with the security community, we hope to gain additional insights into our generative AI technologies which, in turn, will provide valuable feedback to our internal teams and security program. This feedback will help identify key areas of focus and opportunities to reinforce security.

In addition to our hacker-powered research, we leverage our robust security program that includes penetration testing, red-teaming, code scanning to continually enhance the security of our products and systems, including Adobe Firefly and our Content Credentials implementation.

“The skills and expertise of security researchers play a critical role in enhancing security and now can help combat the spread of misinformation,” Dana Rao, executive vice president, general counsel and chief trust officer at Adobe. “We are committed to working with the broader industry to help strengthen our Content Credentials implementation in Adobe Firefly and other flagship products to bring important issues to the forefront and encourage the development of responsible AI solutions.”

“Building safe and secure AI products starts by engaging experts who know the most about this technology’s risks. The global ethical hacker community helps organizations not only identify weaknesses in generative AI but also define what those risks are,” said Dane Sherrets, senior solutions Architect at HackerOne. “We commend Adobe for proactively engaging with the community. Responsible AI starts with responsible product owners.”

“It’s great to see the scope of products widen to encompass areas such as artificial intelligence, combatting misinformation, Internet of Things, and even cars. These additions may require additional training for ethical hackers to acquire the necessary skills to uncover critical vulnerabilities,” said Ben Sadeghipour, founder of NahamSec. “Bug Bounty Village is committed to expanding our workshops and partnering with more organizations, like Adobe, to ensure security researchers are equipped with the right tools to protect these technologies.”

These are early steps toward ensuring the safe and secure development of generative AI, and we know the work is just getting started. The future of technology is exciting, but there can be implications if these innovations are not built responsibly. Our hope is that by incentivizing more security research in these areas, we’ll spark even more collaboration with the security community and others to ultimately make AI safer for everyone.

How to participate

Ethical hackers looking to get involved in the Adobe bug bounty program can find more information on the Adobe HackerOne page. For those interested in joining the Adobe private bug bounty program, we invite you to apply for the Adobe private bug bounty program and discover more information in our blog.

We will be attending the upcoming BSidesSF event. If you’ll be there, you can join us May 4, 2024 for the BSidesSF Saturday Party or find us at the Bug Bounty Village, hosted by Nahamsec.

Registration for BSidesSF can be found here.

Source: https://blog.adobe.com/

Daniel Long

Daniel Long

About Author

Daniel Long, as a writer, delves into the realm of emerging technologies and business solutions, with a particular emphasis on optimizing efficiency and fostering growth. He educational background includes a Bachelor's degree in English from the University of California, Irvine, and he furthered his knowledge by attaining an MBA from Chapman University. This combination of expertise allows him to offer valuable insights into the ever-evolving business landscape.

You may also like

media 119cb3eb20b3cb6d64a6b5d7cf
News

Adobe advances creative ideation with the new Firefly Image 3 Model

Just one year after launching Adobe Firefly, we’re thrilled to introduce our newest image generation model, Adobe Firefly Image 3 Model, with stunning
media 1fc76138c5ad90e9db5c211194
News

Reimagining customer experience in the age of AI

If there is one thing we’ve learned about the digital economy, it is that it changes rapidly. It is hard