Topics
Latest
AI
Amazon
Image Credits:Jakub Porzycki / NurPhoto / Getty Images
Apps
Biotech & Health
Climate
Image Credits:Jakub Porzycki / NurPhoto / Getty Images
Cloud Computing
mercantilism
Crypto
Enterprise
EVs
Fintech
Fundraising
gismo
Gaming
Government & Policy
Hardware
Layoffs
Media & Entertainment
Meta
Microsoft
Privacy
Robotics
Security
Social
Space
Startups
TikTok
expatriation
Venture
More from TechCrunch
Events
Startup Battlefield
StrictlyVC
Podcasts
television
Partner Content
TechCrunch Brand Studio
Crunchboard
Contact Us
Google has expanded its vulnerability rewards computer program ( VRP ) to admit attack scenarios specific togenerative AI .
In an promulgation shared with TechCrunch before of issue , Google said : “ We consider expanding the VRP will incentivize research around AI safety and security and land likely issues to unhorse that will ultimately make AI dependable for everyone , ”
Google ’s vulnerability rewards program ( or bug bounty ) pay ethical hackers for finding and responsibly disclose security flaws .
give that generative AI brings to light new security issues , such as the electric potential for unfair bias or model manipulation , Google said it sought to rethink how bugs it receives should be categorized and report .
The technical school giant says it ’s doing this by using findings from its freshly formedAI Red Team , a mathematical group of hackers that simulate a variety of adversaries , ranging from country - states and government - endorse groups to hacktivists and malicious insider to hunt down security department weaknesses in technology . The team recently conducted an exercise to fix the biggest threats to the applied science behind generative AI products like ChatGPT and Google Bard .
The team found that large language models ( or Master of Laws ) are vulnerable to immediate shot attacks , for example , whereby a hacker crafts adversarial prompts that can mold the behaviour of the manakin . An attacker could utilise this case of attack to generate text that is harmful or nauseating or to leak out sensitive data . They also monish of another type of attack called training - data extraction , which reserve hacker to reconstruct direct training instance to extract personally identifiable information or countersign from the datum .
Both of these type of attack are cover in the setting of Google ’s blow up VRP , along with model manipulation and model theft attacks , but Google pronounce it will not offer reward to researchers who reveal bugs come to to copyright government issue or data extraction that reconstruct non - sensitive or public entropy .
Join us at TechCrunch Sessions: AI
Exhibit at TechCrunch Sessions: AI
The monetary rewards will vary on the severity of the exposure discovered . Researchers can currently earn $ 31,337 if they find command injection attacks and deserialization bug in extremely tender software program , such as Google Search or Google Play . If the flaws affect apps that have a lower priority , the maximal advantage is $ 5,000 .
Google says that it paid out more than $ 12 million in rewards to security measure researcher in 2022 .
Google brings productive AI to cybersecurity