Aleksandra Pedraszewska, AI Safety, ElevenLabs; Sarah Myers West, Executive Director, AI Now Institute; and Jingna Zhang, Founder & CEO, Cara, at TechCrunch Disrupt 2024 on Wednesday, October 30, 2024. Image Credits: Katelyn Tucker / Slava Blazer Photography

“Move cautiously and red-team things” is unfortunately not as catchy as “move fast and break things.” But three AI safety advocates made it clear to startup founders that moving too fast can lead to ethical issues down the road.

“We are at an inflection point where there are tons of resources being moved into this space,” said Sarah Myers West, co-executive director of the AI Now Institute, onstage at TechCrunch Disrupt 2024. “I’m really worried that right now there’s just such a rush to sort of push products out onto the world, without thinking about that legacy question of what is the world that we really want to live in, and in what ways is the technology that’s being produced acting in service of that world or actively harming it.”

The conversation comes at a moment when the issue of AI safety feels more pressing than ever. In October, the family of a child who died by suicide sued chatbot company Character.AI for its alleged role in the child’s death.

“This story really demonstrates the profound stakes of the very rapid rollout that we’ve seen of AI-based technologies,” Myers West said. “Some of these are longstanding, almost intractable problems of content moderation of online abuse.”

But beyond these life-or-death issues, the stakes of AI remain high, from misinformation to copyright infringement.

“We are building something that has a lot of power and the ability to really, really impact people’s lives,” said Jingna Zhang, founder of artist-forward social platform Cara. “When you talk about something like Character.AI, that emotionally really engages with somebody, it makes sense that I think there should be guardrails around how the product is built.”

Zhang’s platform Cara took off after Meta made it clear that it could use any user’s public posts to train its AI. For artists like Zhang herself, this policy is a slap in the face. Artists need to post their work online to build a following and secure potential clients, but by doing that, their work could be used to shape the very AI models that could one day put them out of work.

“Copyright is what protects us and allows us to make a living,” Zhang said. If artwork is available online, that doesn’t mean it’s free, per se; digital news publications, for example, have to license images from photographers to use them. “When generative AI started becoming much more mainstream, what we are seeing is that it does not work with what we are typically used to, that’s been established in law. And if they wanted to use our work, they should be licensing it.”

Artists could also be impacted by products like ElevenLabs, an AI voice cloning company that’s worth over a billion dollars. As head of safety at ElevenLabs, it’s up to Aleksandra Pedraszewska to make sure the company’s sophisticated technology isn’t co-opted for nonconsensual deepfakes, among other things.

“I think red-teaming models, understanding undesirable behaviors, and unintended consequences of any new launch that a generative AI company does is again becoming [a top priority],” she said. “ElevenLabs has 33 million users today. This is a massive community that gets impacted by any change that we make in our products.”

Pedraszewska said that one way people in her role can be more proactive about keeping platforms safe is to have a closer relationship with the community of users.

“We cannot just operate between two extremes, one being entirely anti-AI and anti-GenAI, and then another one, effectively trying to persuade zero regulation of the space. I think that we do need to meet in the middle when it comes to regulation,” she said.
