Google defines an “AI bug” as a flaw that causes LLMs or generative AI systems to act harmfully, such as misusing a Google Home device or sending someone’s email data to another user.

Google has unveiled its new AI Bug Bounty Programme, inviting cybersecurity researchers to report vulnerabilities specific to AI systems. The initiative expands the company’s existing Vulnerability Reward Programme (VRP) to cover AI-related security flaws.
The programme aims to identify errors where an AI system might automatically perform incorrect or unwanted actions, such as unlocking devices, leaking data, or sending information to unintended accounts.
Google defines an “AI bug” as a weakness that can cause large language models (LLMs) or generative AI systems to behave in harmful ways. For instance, if an AI system is tricked into unlocking a door through Google Home, or if it leaks a user’s email information to another account, these would be classified as bugs.
However, misinformation or hate speech generated by AI is not considered a bug under the programme; such cases should instead be reported through the feedback tools within the product itself.
The rewards for identifying significant AI bugs in Google Search, the Gemini apps, Gmail, and Drive range from $20,000 to $30,000. Bugs found in smaller tools such as NotebookLM or Jules Assistant will yield slightly lower rewards.
Additionally, Google has introduced CodeMender, an AI tool that helps identify and fix bugs in open-source software. To date, CodeMender has provided 72 verified fixes.
The programme matters because Google’s products increasingly incorporate AI, which demands robust security measures. Since 2022, researchers have earned over $430,000 by uncovering AI-related bugs, and the initiative offers those interested in AI security a chance to gain recognition on a major platform like Google.
October 07, 2025, 19:03 IST