TechCrunch Disrupt 2024 AI governance panel, from left to right: Kyle Wiggers, Elizabeth Kelly, Jessica Newman, Scott Wiener. Image Credits: TechCrunch

Can the U.S. meaningfully regulate AI? It’s not at all clear yet. Policymakers have made progress in recent months, but they’ve also had setbacks, illustrating how challenging it is for laws to impose guardrails on the technology.

In March, Tennessee became the first state to protect voice artists from unauthorized AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI policy. And in September, California Governor Gavin Newsom signed dozens of AI-related safety bills, a few of which require companies to disclose details about their AI training.

But the U.S. still lacks a federal AI policy comparable to the EU’s AI Act. Even at the state level, regulation continues to run into major roadblocks.

After a protracted battle with special interests, Governor Newsom vetoed SB 1047, a bill that would have imposed wide-ranging safety and transparency requirements on companies developing AI. Another California bill targeting the distributors of AI deepfakes on social media was stayed this fall pending the outcome of a lawsuit.

There’s reason for optimism, however, according to Jessica Newman, co-director of the AI Policy Hub at UC Berkeley. Speaking on a panel about AI governance at TechCrunch Disrupt 2024, Newman noted that many federal bills might not have been written with AI in mind, but still apply to AI, like anti-discrimination and consumer protection legislation.

“We often hear about the U.S. being this sort of ‘Wild West’ in comparison to what happens in the EU,” Newman said, “but I think that is exaggerated, and the reality is more nuanced than that.”

To Newman’s point, the Federal Trade Commission has forced companies that surreptitiously harvested data to delete their AI models, and is investigating whether the sales of AI startups to big tech companies violate antitrust regulation. Meanwhile, the Federal Communications Commission has declared AI-voiced robocalls illegal, and has floated a rule requiring that AI-generated content in political advertising be disclosed.

President Joe Biden has also tried to get certain AI rules on the books. Roughly a year ago, Biden signed the AI Executive Order, which props up the voluntary reporting and benchmarking practices many AI companies were already choosing to implement.

One consequence of the executive order was the U.S. AI Safety Institute (AISI), a federal body that studies risks in AI systems. Housed within the National Institute of Standards and Technology, the AISI has research partnerships with major AI labs like OpenAI and Anthropic.

Yet the AISI could be wound down with a simple repeal of Biden’s executive order. In October, a coalition of more than 60 organizations called on Congress to enact legislation codifying the AISI before year’s end.

“I think that all of us, as Americans, share an interest in making sure that we mitigate the potential downsides of technology,” AISI director Elizabeth Kelly, who also took part in the panel, said.

This being the case, Scott Wiener, the California state senator behind SB 1047 and another Disrupt panelist, said he wouldn’t have drafted the bill any differently, and he’s convinced broad AI regulation will eventually prevail.

“I think it set the stage for future efforts,” he said. “Hopefully, we can do something that can bring more folks together, because the reality all of the big labs have already acknowledged is that the risks [of AI] are real and we want to test for them.”

Indeed, Anthropic last week warned of AI catastrophe if governments don’t implement regulation in the next 18 months.

Opponents have only doubled down on their rhetoric. Last Monday, Khosla Ventures founder Vinod Khosla called Wiener “totally clueless” and “not qualified” to regulate the real risks of AI. And Microsoft and Andreessen Horowitz released a statement rallying against AI regulations that might affect their financial interests.

Newman posits, though, that pressure to unify the growing state-by-state patchwork of AI rules will ultimately yield a stronger legislative solution. In lieu of consensus on a model of regulation, state policymakers have introduced close to 700 pieces of AI legislation this year alone.

“My sense is that companies don’t want an environment of a patchwork regulatory system where every state is different,” she said, “and I think there will be increased pressure to have something at the federal level that provides more clarity and reduces some of that uncertainty.”