The problem with most attempts at regulating AI so far is that lawmakers are focusing on some mythical future AI experience, instead of truly understanding the new risks AI actually introduces.
So argued Andreessen Horowitz general partner Martin Casado to a standing-room crowd at TechCrunch Disrupt 2024 last week. Casado, who leads a16z's $1.25 billion infrastructure practice, has invested in such AI startups as World Labs, Cursor, Ideogram, and Braintrust.
"Transformative technologies and regulation has been this ongoing discourse for decades, right? So the thing with all the AI discourse is it seems to have kind of come out of nowhere," he told the crowd. "They're kind of trying to conjure net-new regulations without drawing from those lessons."
For instance, he said, "Have you really seen the definitions for AI in these policies? Like, we can't even define it."
Casado was among a sea of Silicon Valley voices who rejoiced when California Gov. Gavin Newsom vetoed the state's attempted AI governance law, SB 1047. The law wanted to put a so-called kill switch into super-large AI models, meaning something that would turn them off. Those who opposed the bill said it was so poorly worded that instead of saving us from an imaginary future AI monster, it would have simply confused and stymied California's hot AI development scene.
"I routinely hear founders balk at moving here because of what it signals about California's attitude on AI, that we prefer bad legislation based on sci-fi concerns rather than tangible risks," he posted on X a couple of weeks before the bill was vetoed.
While this particular state law is dead, the fact that it existed at all still bothers Casado. He is concerned that more bills, constructed in the same way, could materialize if politicians decide to pander to the general population's fears of AI rather than govern what the technology is actually doing.
He understands AI tech better than most. Before joining the storied VC firm, Casado founded two other companies, including a networking infrastructure company, Nicira, that he sold to VMware for $1.26 billion a bit over a decade ago. Before that, Casado was a computer security expert at Lawrence Livermore National Lab.
He says that many proposed AI regulations did not come from, nor were supported by, many of the people who understand AI tech best, including academics and the commercial sector building AI products.
"You have to have a notion of marginal risk that's different. Like, how is AI today different than someone using Google? How is AI today different than someone just using the internet? If we have a model for how it's different, you've got some notion of marginal risk, and then you can apply policies that address that marginal risk," he said.
"I think we're a little bit early before we start to glom [onto] a bunch of regulation to really understand what we're going to regulate," he argues.
The counterargument, and one several people in the audience brought up, was that the world didn't really see the types of harms that the internet or social media could do before those harms were upon us. When Google and Facebook were launched, no one knew they would dominate online advertising or collect so much data on individuals. No one understood things like cyberbullying or echo chambers when social media was young.
Advocates of AI regulation now often point to these past examples and say those technologies should have been regulated early on.
Casado's response?
"There is a robust regulatory regime that exists in place today that's been developed over 30 years," and it's well equipped to create new policies for AI and other tech. It's true: at the federal level alone, regulatory bodies include everything from the Federal Communications Commission to the House Committee on Science, Space, and Technology. When TechCrunch asked Casado on Wednesday after the election if he stands by this opinion, that AI regulation should follow the path already hammered out by existing regulatory bodies, he said he did.
But he also believes that AI shouldn't be targeted because of issues with other technologies. The technologies that caused those issues should be targeted instead.
"If we got it wrong in social media, you can't fix it by putting it on AI," he said. "The AI regulation people, they're like, 'Oh, we got it wrong in social, therefore we'll get it right in AI,' which is a nonsensical statement. Let's go fix it in social."