Topics

a la mode

AI

Amazon

Article image

Image Credits:Bryce Durbin/TechCrunch

Apps

Biotech & Health

Climate

Article image

Image Credits:Bryce Durbin/TechCrunch

Cloud Computing

Commerce

Crypto

Adversarial AI pop-ups

Image Credits:Zhang et al.

Enterprise

EVs

Fintech

Article image

Image Credits:Scale.ai

fundraise

appliance

Gaming

Google

Government & Policy

Hardware

Instagram

layoff

Media & Entertainment

Meta

Microsoft

privateness

Robotics

surety

Social

outer space

Startups

TikTok

transport

Venture

More from TechCrunch

event

Startup Battlefield

StrictlyVC

newssheet

Podcasts

picture

Partner Content

TechCrunch Brand Studio

Crunchboard

Contact Us

Hiya , folks , welcome to TechCrunch ’s regular AI newssheet . If you desire this in your inbox every Wednesday , sign uphere .

It was shockingly promiscuous to make a convincing Kamala Harris audio recording deepfake on Election Day . It cost me $ 5 and take less than two minute , illustrate how cheesy , ubiquitous productive AI has open up the floodgates to disinformation .

Creating a Harris deepfake was n’t my original intent . I was playing around with Cartesia’sVoice Changer , a model that transform your voice into a different part while preserving the original ’s prosody . That second vocalisation can be a “ clone ” of another person ’s — Cartesia will make a digital articulation double from any 10 - sec transcription .

So , I enquire , would Voice Changer transmute my voice into Harris ’ ? I give $ 5 to unlock Cartesia ’s voice cloning feature , created a clone of Harris ’ vocalism using late campaign speeches , and selected that clone as the output in Voice Changer .

It work like a charm :

I ’m confident that Cartesia did n’t exactly intend for its tools to be used in this way . To enable vocalism cloning , Cartesia requires that you find out a box seat indicating that you wo n’t give anything harmful or illegal and that you consent to your speech recordings being clone .

But that ’s just an honor system . wanting any real safeguards , there ’s nothing prevent a person from creating as many “ harmful or illegal ” deepfakes as they wish .

Join us at TechCrunch Sessions: AI

Exhibit at TechCrunch Sessions: AI

That ’s a problem , it go without saying . So what ’s the root ? Is there one ? Cartesia can put through voice verification , as someotherplatformshave done . But by the time it does , fortune are a fresh , unfettered vocalisation cloning putz will have emerge .

I spoke about this very issuewith experts at TC ’s Disrupt conference last week . Some were supportive of the idea of inconspicuous watermarks so that it ’s easier to say whether content has been AI - generated . Others pointed to contentedness moderation laws such as the Online Safety Act in the U.K. , which they argued might serve stem the lunar time period of disinformation .

Call me a pessimist , but I remember those ships have sailed . We ’re look at , as CEO of the Center for Countering Digital Hate Imran Ahmed put it , a “ constant bulls — simple machine . ”

Disinformation is circularize at an alarming charge per unit . Some gamey - profile examples from the past twelvemonth includea bot networkon X targeting U.S. federal election and avoicemaildeepfake of President Joe Biden discouraging New Hampshire residents from voting . But U.S. voter and technical school - savvy the great unwashed are n’t the targets of most of this content , according to True Media.org’sanalysis , so we tend to underestimate its presence elsewhere .

The mass of AI - generate deepfakes grew 900 % between 2019 and 2020,accordingto data from the World Economic Forum .

Meanwhile , there ’s relatively few deepfake - targeting laws on the Book . And deepfake detection is poise to become a never - finish weapons system subspecies . Some tool inevitably wo n’t opt to use safety equipment measures such as watermarking , or will be deployed with expressly malicious software in mind .

forgetful of a ocean change , I think the best we can do is be intensely skeptical of what ’s out there — especially viral subject . It ’s not as easy as it once was to tell truth from fiction online . But we ’re still in ascendence of what we share versus what we do n’t . And that ’s much more impactful than it might seem .

News

ChatGPT Search review : My workfellow Max took OpenAI ’s new search consolidation for ChatGPT , ChatGPT Search , for a twirl . He found it to be telling in some ways , but unreliable for short queries contain just a few word .

Amazon drones in Phoenix : A few month afterendingits drone - based legal transfer political platform , Prime Air , in California , Amazon   says   that it ’s commence get deliveries to select customers via drone in Phoenix , Arizona .

Ex - Meta AR lead joins OpenAI : The former head teacher of Meta ’s AR glasses efforts , includingOrion , announced on Monday she ’s joining OpenAI to contribute robotics and consumer hardware . The word comes after OpenAIhiredthe co - founder of X ( formerly Twitter ) challenger Pebble .

Held back by compute : In a RedditAMA , OpenAI CEO Sam Altman admit that a lack of compute capacity is one major factor preventing the company from ship product as often as it ’d like .

AI - engender recap : Amazon   has launched “ X - Ray Recaps , ” a generative Bradypus tridactylus - power feature film that create concise sum-up of entire TV seasons , individual episode , and even voice of episode .

anthropical hikes Haiku prices : Anthropic ’s New AI model has   arrived : Claude 3.5 Haiku . But it ’s pricier than the last generation , and unlike Anthropic ’s other models , it ca n’t analyze images , graph , or diagrams just yet .

Apple acquires Pixelmator : AI - power figure editor Pixelmatorannouncedon Friday that it ’s being assume by Apple . The deal come as Apple has grown more aggressive about integrating AI into its imagery apps .

An ‘ agentic ’ Alexa : Amazon CEO Andy Jassy last week suggest at an improved “ agentic ” version of the company ’s Alexa assistant — one that could take actions on a drug user ’s behalf . The revamped Alexa has reportedly front postponement and technical setbacks , and might not launch until sometime in 2025 .

Research paper of the week

Pop - ups on the web can shoot AI , too — not just grandparents .

In a newpaper , research worker from Georgia Tech , the University of Hong Kong , and Stanford show that AI “ agents ” — AI models that can complete tasks — can be commandeer by “ adversarial pop - ups ” that instruct the models to do thing like download malicious filing cabinet extension .

Some of these soda - ups are quite plain traps to the human eye — but AI is n’t as discerning . The researcher say that the image- and text - analyse models they essay failed to snub dada - ups 86 % of the time , and — as a result — were 47 % less likely to complete tasks .

Basic defense , like instructing the modeling to ignore the pop - ups , were n’t effective . “ Deploying computer - use agents still abide from significant risks , ” the carbon monoxide gas - authors of the study wrote , “ and more robust agent systems are necessitate to ensure safe agent workflow . ”

Model of the week

Meta announced yesterday that it ’s working with partners to make itsLlama“open ” AI models available for defence force software program . Today , one of those partners , Scale AI , announcedDefense Llama , a theoretical account built on top of Meta ’s Llama 3 that ’s “ customized and OK - tuned to support American home security missions . ”

Defense Llama , which is available in Scale’sDonavanchatbot platform for U.S. government customers , was optimize for planning military and intelligence operations , Scale says .   Defense Llama can answer defense - related questions , for example like how an adversary might plan an attack against a U.S. military base .

So what makes Defense Llama different from stock Llama ? Well , Scale enjoin it was very well - tuned on content that might be relevant to military operations , like military doctrine and international humanitarian law , as well as the capacity of various weapons and defense systems . It also is n’t restricted from answering questions about warfare , like a civilian chatbot might be :

It ’s not clear who might be be given use it , though .

The U.S. military has beenslow to adopt procreative AI — and skeptical of its ROI . So far , the U.S. Army is theonlybranch of the U.S. armed military group with a generative AI deployment . Military officials have express concerns about security vulnerabilities in commercial model , as well as sound challenge connect with intelligence information data share-out and manakin ’ unpredictability when faced with edge cases .

Grab bag

Spawning AI , a startup create tools to enable Creator to opt out of generative AI training , has released an image dataset for training AI models that it claims is to the full public domain .

Most generative AI models are check on public web data point , some of which may be copyright or under a restrictive licence . OpenAI and many other AI vendors indicate thatfair - usedoctrine shields them from right of first publication claim . But that has n’t arrest datum ownersfromfiling lawsuits .

Spawning AI says its training dataset of 12.4 million persona - legend pairs includes only subject matter with “ known provenance ” and “ mark with clear , unequivocal rights ” for AI grooming . Unlike some other datasets , it ’s also available for download from a dedicated master of ceremonies , eliminating the need to web - scraping .

“ Significantly , the public - domain status of the dataset is inbuilt to these larger goal , ” Spawning drop a line in a blog post . “ Datasets that admit copyrighted images will continue to rely on web - scrape because hosting the image would go against copyright . ”

Spawning ’s dataset , PD12 M , and a rendering curated for “ esthetically pleasing ” images , PD3 M , can be foundat this tie .