Google has confirmed that it is expanding its existing vulnerability bounty program to accept attack scenarios that feature generative AI. The newly revised bug bounty program will encourage hackers to investigate attack scenarios and discover vulnerabilities that apply to Google’s AI systems and services.
Google’s AI Red Team imitates real hacking attacks
Google announced in August that it had created an AI Red Team. Using the same types of attack techniques exploited by nation states, organized cybercrime groups, and malicious insiders. Daniel Fabian, head of Google’s AI Red Team, said: “One of the key responsibilities of Google’s AI Red Team is to gather relevant findings and apply them to real-world products and features that use AI. and learn its effects.” “We leverage attacker tactics, techniques, and procedures (TTPs) to test various system defenses.”
Google AI bug bounty Hackers need to follow the rules
Hackers outside of the AI Red Team and outside of Google itself can now look for weaknesses in Google’s AI systems. The difference is that these hackers must work within a strict framework that defines what’s in scope and what’s out of scope. They won’t take the same “anything goes” approach to attack simulation, but that doesn’t make AI bounty hunting hackers any less important.
Like other bug bounty programs, There are guidelines Learn about the types of vulnerabilities Google is disclosing, the methods you can use to do so, and the process for reporting discovered vulnerabilities and receiving rewards. So, for example, instant injections that are invisible to the victim and change the state of the victim’s account or assets are also covered. You may not use the Products to generate content that is violating, misleading, or factually incorrect, such as “illusions” or factually inaccurate responses. Similarly, situations in which an adversary could reliably misclassify a security control and exploit it for malicious purposes are also within scope, but if this is a “compelling attack scenario or It is not applicable if it does not cause a “feasible route”.
$12 million bug bounty bonanza
Google will be compensated for vulnerabilities disclosed under the framework of its Vulnerability Bounty Program, but the amount of the bounty will depend on “the severity of the attack scenario and the type of targets affected.” admitted to doing so. However, in 2022, he was paid more than $12 million in such bounties to hackers who were part of a broader program.
A Google spokesperson said: “We look forward to continuing to work with the research community to discover and fix security and abuse issues in our AI-powered features.” “Once you find a qualifying problem, Visit the Bug Hunter website. Submit a bug report and the issue is found to be valid [you will] We reward you for helping us keep our users safe. ”
Google’s confirmation of its new AI bug bounty program couldn’t be more timely.UK Prime Minister announces proposals for Global AI Safety Summit and AI Safety Institute Rishi Sunak gave a speech “Criminals could use AI for cyberattacks, disinformation, fraud, and even child sexual abuse,” he said on October 26.
“Generative AI is a double-edged sword, as the cybersecurity landscape continues to evolve and the proliferation of generative AI will only add further complexity,” said Fabian Rech, senior vice president at Trellix. I am. “As we prepare for the first AI Safety Summit next week, we look forward to hearing what this means for organizations in the future of regulation with this emerging technology and how they are leveraging and integrating AI.” It’s important to recognize that you can do it.”