Image Credits: Kimberly White / Getty Images for WIRED
In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies.
The 41-page interim report released on Tuesday comes from the Joint California Policy Working Group on AI Frontier Models, an effort formed by Governor Gavin Newsom following his veto of California's controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year the need for a more extensive assessment of AI risks to inform legislators.
In the report, Li, along with co-authors Jennifer Chayes (UC Berkeley College of Computing dean) and Mariano-Florentino Cuéllar (Carnegie Endowment for International Peace president), argue in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates like Turing Award winner Yoshua Bengio and those who argued against SB 1047, such as Databricks co-founder Ion Stoica.
According to the report, the novel risks posed by AI systems may require laws that would force AI model developers to publicly report their safety tests, data-acquisition practices, and security measures. The report also advocates for increased standards around third-party evaluations of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.
Li et al. write that there's an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or bring about other "extreme" threats. They also argue, however, that AI policy should not only address current risks, but also anticipate future consequences that might occur without sufficient safeguards.
"For example, we do not need to observe a nuclear weapon [exploding] to predict reliably that it could and would cause extensive harm," the report says. "If those who speculate about the most extreme risks are right — and we are uncertain if they will be — then the stakes and costs for inaction on frontier AI at this current moment are extremely high."
The report recommends a two-pronged strategy to boost AI model development transparency: trust but verify. AI model developers and their employees should be provided avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit testing claims for third-party verification.
While the report, the final version of which is due out in June 2025, endorses no specific legislation, it's been well received by experts on both sides of the AI policymaking debate.
Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California's AI safety regulation. It's also a win for AI safety advocates, according to California state senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on "urgent conversations around AI governance we began in the legislature [in 2024]."
The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taking a broader view, it seems to be a much-needed win for AI safety folks, whose agenda has lost ground in the last year.