This week in AI, a new study shows that generative AI really isn’t all that harmful — at least not in the apocalyptic sense.
In a paper submitted to the Association for Computational Linguistics’ annual conference, researchers from the University of Bath and University of Darmstadt argue that models like those in Meta’s Llama family can’t learn independently or acquire new skills without explicit instruction.
The researchers conducted thousands of experiments to test the ability of several models to complete tasks they hadn’t come across before, like answering questions about topics outside the scope of their training data. They found that, while the models could superficially follow instructions, they couldn’t master new skills on their own.
“Our study shows that the fear that a model will go away and do something completely unexpected, advanced and potentially dangerous is not valid,” Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author on the study, said in a statement. “The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus.”
There are limitations to the study. The researchers didn’t test the newest and most capable models from vendors like OpenAI and Anthropic, and benchmarking models tends to be an imprecise science. But the research is far from the first to find that today’s generative AI tech isn’t humanity-threatening — and that assuming otherwise risks regrettable policymaking.
In an op-ed in Scientific American last year, AI ethicist Alex Hanna and linguistics professor Emily Bender made the case that corporate AI labs are misdirecting regulatory attention to imaginary, world-ending scenarios as a bureaucratic maneuver. They pointed to OpenAI CEO Sam Altman’s appearance in a May 2023 congressional hearing, during which he suggested — without evidence — that generative AI tools could go “quite wrong.”
“The broader public and regulatory agencies must not fall for this maneuver,” Hanna and Bender wrote. “Rather we should look to scholars and activists who practice peer review and have pushed back on AI hype in an effort to understand its detrimental effects here and now.”
Theirs and Madabushi’s are crucial points to keep in mind as investors continue to pour billions into generative AI and the hype cycle reaches its peak. There’s a lot at stake for the companies backing generative AI tech, and what’s good for them — and their backers — isn’t necessarily good for the rest of us.
Generative AI might not cause our extinction. But it’s already harming in other ways — see the spread of nonconsensual deepfake pornography, wrongful facial recognition arrests and the hordes of underpaid data annotators. Policymakers hopefully see this too and share this view — or come around eventually. If not, humanity may very well have something to fear.
News
Google Gemini and AI, oh my: Google’s annual Made By Google hardware event took place Tuesday, and the company announced a ton of updates to its Gemini assistant — plus new phones, earbuds and smartwatches. Check out TechCrunch’s roundup for all the latest coverage.
AI copyright suit moves forward: A class action lawsuit filed by artists who say that Stability AI, Runway AI and DeviantArt illegally trained their AIs on copyrighted works can move forward, but only in part, the presiding judge decided on Monday. In a mixed ruling, several of the plaintiffs’ claims were dismissed while others survived, meaning the suit could end up at trial.
Problems for X and Grok: X, the social media platform owned by Elon Musk, has been hit with a series of privacy complaints after it helped itself to the data of users in the European Union for training AI models without asking people’s consent. X has agreed to stop EU data processing for training Grok — for now.
YouTube tests Gemini brainstorming: YouTube is testing an integration with Gemini to help creators brainstorm video ideas, titles and thumbnails. Called Brainstorm with Gemini, the feature is currently available only to select creators as part of a small, limited experiment.
OpenAI’s GPT-4o does weird stuff: OpenAI’s GPT-4o is the company’s first model trained on voice as well as text and image data. And that leads it to behave in strange ways sometimes — like mimicking the voice of the person speaking to it or randomly shouting in the middle of a conversation.
Research paper of the week
There are tons of companies out there offering tools they claim can reliably detect text written by a generative AI model, which would be useful for, say, combating misinformation and plagiarism. But when we tested a few a while back, the tools rarely worked. And a new study suggests the situation hasn’t improved much.
Researchers at UPenn designed a dataset and leaderboard, the Robust AI Detector (RAID), of over 10 million AI-generated and human-written recipes, news articles, blog posts and more to measure the performance of AI text detectors. They found the detectors they evaluated to be “mostly useless” (in the researchers’ words), only working when applied to specific use cases and text similar to the text they were trained on.
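To make that benchmarking setup concrete, here is a minimal sketch of how a detector gets scored against a labeled corpus: run it over text whose provenance is known and count how often it calls the text correctly. The detector interface and the tiny sample corpus below are hypothetical stand-ins, not RAID’s actual API or data.

```python
# Minimal sketch of scoring an AI-text detector against labeled examples.
# The detector callable and the sample corpus are hypothetical stand-ins,
# not RAID's actual API or data.

def evaluate_detector(detector, samples):
    """Return accuracy of `detector` on (text, is_ai_generated) pairs."""
    correct = 0
    for text, is_ai in samples:
        predicted_ai = detector(text) >= 0.5  # detector returns P(AI-written)
        correct += predicted_ai == is_ai
    return correct / len(samples)

# Toy labeled corpus mixing human-written and AI-generated text.
samples = [
    ("Preheat the oven to 350F and butter a 9-inch pan.", False),
    ("As an AI language model, I can certainly help with that recipe.", True),
]

# A naive keyword heuristic: the kind of narrowly trained detector that
# only works on text resembling what it was tuned on.
naive_detector = lambda text: 1.0 if "as an ai" in text.lower() else 0.0

print(f"Accuracy: {evaluate_detector(naive_detector, samples):.2f}")
```

A heuristic like this looks fine on text that resembles its tuning examples and falls apart on anything else, which is essentially the narrow, use-case-specific behavior the UPenn team reports at scale.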
“If universities or schools were relying on a narrowly trained detector to catch students’ use of [generative AI] to write assignments, they could be incorrectly accusing students of cheating when they are not,” Chris Callison-Burch, professor in computer and information science and co-author on the study, said in a statement. “They could also miss students who were cheating by using other [generative AI] to generate their homework.”
There’s no silver bullet when it comes to AI text detection, it seems — the problem’s an intractable one.
Reportedly, OpenAI itself has developed a new text-detection tool for its AI models — an improvement over the company’s first attempt — but is declining to release it over fears it might disproportionately impact non-English users and be rendered ineffective by slight modifications to the text. (Less philanthropically, OpenAI is also said to be concerned about how a built-in AI text detector might affect perception — and usage — of its products.)
Model of the week
Generative AI is good for more than just memes, it seems. MIT researchers are applying it to flag problems in complex systems like wind turbines.
A team at MIT’s Computer Science and Artificial Intelligence Lab developed a framework, called SigLLM, that includes a component to convert time-series data — measurements taken repeatedly over time — into text-based inputs a generative AI model can process. A user can feed these prepared data to the model and ask it to start identifying anomalies. The model can also be used to forecast future time-series data points as part of an anomaly-detection pipeline.
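To give a flavor of how that kind of pipeline fits together, here is a minimal sketch of the forecasting-based approach, under the assumption that readings are serialized as plain text, a model predicts the next few values, and points that deviate sharply from the forecast get flagged. The ask_llm function is a hypothetical placeholder for whatever generative model you would actually call; none of this is SigLLM’s own code.

```python
# Sketch of LLM-assisted anomaly detection on time-series data (assumed
# workflow, not SigLLM's implementation): serialize readings as text, ask a
# model to forecast the next values, flag large deviations from the forecast.

def serialize(series, decimals=2):
    """Turn a list of floats into a compact comma-separated string."""
    return ", ".join(f"{x:.{decimals}f}" for x in series)

def ask_llm(prompt):
    """Hypothetical placeholder for a generative model call. Here it fakes
    a forecast by repeating the last observed value three times."""
    last_value = float(prompt.rsplit(",", 1)[-1])
    return [last_value] * 3

def flag_anomalies(history, observed, threshold=3.0):
    """Flag observed points that deviate sharply from the model's forecast."""
    prompt = f"Continue this sensor reading sequence: {serialize(history)}"
    forecast = ask_llm(prompt)
    return [
        (i, actual)
        for i, (actual, predicted) in enumerate(zip(observed, forecast))
        if abs(actual - predicted) > threshold
    ]

history = [10.1, 10.3, 9.9, 10.2, 10.0]   # normal sensor readings
observed = [10.1, 17.8, 10.2]             # the spike at index 1 should be flagged
print(flag_anomalies(history, observed))  # -> [(1, 17.8)]
```

The interesting design choice in a setup like this is the serialization step: the model never sees raw floats, only their text rendering, so how the numbers are rounded and delimited shapes what the model can pick up on.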
The model didn’t perform exceptionally well in the researchers’ experiments. But if its performance can be improved, SigLLM could, for example, help technicians flag potential problems in equipment like heavy machinery before they occur.
“Since this is just the first iteration, we didn’t expect to get there from the first go, but these results show that there’s an opportunity here to leverage [generative AI models] for complex anomaly detection tasks,” Sarah Alnegheimish, an electrical engineering and computer science graduate student and lead author on a paper on SigLLM, said in a statement.
Grab bag
OpenAI upgraded ChatGPT, its AI-powered chatbot platform, to a new base model this month — but released no changelog (well, barely a changelog).
there’s a new GPT-4o model out in ChatGPT since last week. hope you all are enjoying it and check it out if you haven’t! we think you’ll like it 😃
So what to make of it? What can one make of it, exactly? There’s nothing to go on but anecdotal evidence from subjective tests.
I think Ethan Mollick, a professor at Wharton studying AI, innovation and startups, had the right take. It’s hard to write release notes for generative AI models because the models “feel” different from one interaction to the next; they’re largely vibes-based. At the same time, people use — and pay for — ChatGPT. Don’t they deserve to know what they’re getting into?
It could be the improvements are incremental, and OpenAI believes it’s unwise for competitive reasons to signal this. Less likely is that the model relates somehow to OpenAI’s reported reasoning breakthroughs. Regardless, when it comes to AI, transparency should be a priority. There can’t be trust without it — and OpenAI has lost plenty of that already.