Google expands bug bounty program to include rewards for AI attack scenarios


In cybersecurity, threats change quickly. Add rapidly evolving generative AI tech to the mix, and security concerns evolve by the minute. Google is one of the biggest players in artificial intelligence technology, and it recognizes the need to adapt to these emerging threats.

Google is expanding its existing Vulnerability Rewards Program (VRP) to cover vulnerabilities specific to generative AI, taking into account the unique challenges the technology poses, such as bias, model manipulation, data misinterpretation, and other adversarial attacks.

Also: Cybersecurity 101: Everything on how to protect your privacy and stay safe online

The VRP is a bug bounty program that rewards external security researchers for testing and reporting software vulnerabilities in Google's products and services. That scope now includes generative AI products, among them Bard, Lens, and the AI integrations in Search, Gmail, Docs, and more.

As generative AI becomes more deeply integrated into Google's tools and services, the potential risks grow, and Google already has internal Trust and Safety teams working to anticipate them. By expanding the bug bounty program to include generative AI, Google aims to encourage outside research in AI safety and help make responsible AI the norm.

Also: Beyond passwords: 4 key security steps you’re probably forgetting

Google also published more detailed reward criteria for reporting bugs in AI products, so researchers can easily determine what is in scope and what isn't.

External security researchers hunt for these vulnerabilities in exchange for monetary rewards, which gives Google the chance to fix the flaws before bad actors can exploit them, resulting in a more secure product for users.

Beyond folding generative AI into its VRP, Google introduced the Secure AI Framework to support the creation of responsible and safe AI applications, and announced a collaboration with the Open Source Security Foundation to help secure AI supply chains.

Also: WormGPT: What to know about ChatGPT’s malicious cousin

Security researchers who want to participate in Google's bug bounty program can submit a bug or vulnerability report directly to the company. In 2022, Google issued over $12 million in rewards to security researchers through the program.
