Character AI, a platform that lets users engage in roleplay with AI chatbots, has filed a motion to dismiss a case brought against it by the parent of a teen who committed suicide after allegedly becoming hooked on the company's technology.
In October, Megan Garcia filed a lawsuit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her son, Sewell Setzer III. According to Garcia, her 14-year-old developed an emotional attachment to a chatbot on Character AI, "Dany," which he texted constantly, to the point where he began to pull away from the real world.
In the motion to dismiss, counsel for Character AI asserts the platform is protected against liability by the First Amendment, just as computer code is. The motion may not persuade a judge, and Character AI's legal justifications may change as the case proceeds. But the motion possibly hints at early elements of Character AI's defense.
"The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide," the filing reads. "The only difference between this case and those that have come before is that some of the speech here involves AI. But the context of the expressive speech — whether a conversation with an AI chatbot or an interaction with a video game character — does not change the First Amendment analysis."
To be clear, Character AI's counsel isn't asserting the company's First Amendment rights. Rather, the motion argues that Character AI's users would have their First Amendment rights violated should the lawsuit against the platform succeed.
The motion doesn't address whether Character AI might be held harmless under Section 230 of the Communications Decency Act, the federal safe-harbor law that protects social media and other online platforms from liability for third-party content. The law's authors have implied that Section 230 doesn't protect output from AI like Character AI's chatbots, but it's far from a settled legal matter.
Counsel for Character AI also claims that Garcia's real intent is to "shut down" Character AI and prompt legislation regulating technologies like it. Should the plaintiffs be successful, it would have a "chilling effect" on both Character AI and the entire nascent generative AI industry, counsel for the platform says.
"Apart from counsel's stated intention to 'shut down' Character AI, [their complaint] seeks drastic changes that would materially limit the nature and volume of speech on the platform," the filing reads. "These changes would radically curtail the ability of Character AI's millions of users to engage in and participate in conversations with characters."
The lawsuit, which also names Character AI corporate backer Alphabet as a defendant, is but one of several suits that Character AI is facing relating to how minors interact with the AI-generated content on its platform. Other suits allege that Character AI exposed a 9-year-old to "hypersexualized content" and promoted self-harm to a 17-year-old user.
In December, Texas Attorney General Ken Paxton announced he was launching an investigation into Character AI and 14 other tech firms over alleged violations of the state's online privacy and safety laws for children. "These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm," said Paxton in a press release.
Character AI is part of a booming industry of AI companionship apps, the mental health effects of which are largely unstudied. Some experts have expressed concerns that these apps could exacerbate feelings of loneliness and anxiety.
Character AI, which was founded in 2021 by Google AI researcher Noam Shazeer, and which Google reportedly paid $2.7 billion to "reverse acquihire," has claimed that it continues to take steps to improve safety and moderation. In December, the company rolled out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.
Character AI has gone through a number of personnel changes after Shazeer and the company's other co-founder, Daniel De Freitas, left for Google. The platform hired a former YouTube exec, Erin Teague, as chief product officer, and named Dominic Perella, who was Character AI's general counsel, interim CEO.
Character AI recently began testing games on the web in an effort to boost user engagement and retention.