
Many feared that the 2024 election would be affected, and perhaps decided, by AI-generated disinformation. While there was some to be found, it was far less than anticipated. But don’t let that fool you: The disinfo threat is real — you’re just not the target.

Or at least so says Oren Etzioni, an AI researcher of long standing whose nonprofit TrueMedia has its finger on the disinformation pulse.

“There is, for lack of a better word, a diversity of deepfakes,” he told TechCrunch in a recent interview. “Each one serves its own purpose, and some we’re more aware of than others. Let me put it this way: For every one that you actually hear about, there are a hundred that are not aimed at you. Maybe a thousand. It’s really only the very tip of the iceberg that makes it to the mainstream press.”

The fact is that most people, and Americans more than most, tend to think that what they experience is the same as what others experience. That isn’t true for a lot of reasons. But in the case of disinformation campaigns, America is actually a hard target, given a comparatively well-informed public, readily available factual information, and a press that is trusted at least most of the time (despite all the noise to the contrary).

We tend to think of deepfakes as something like a video of Taylor Swift doing or saying something she wouldn’t. But the really dangerous deepfakes are not the ones of celebrities or politicians, but of situations and people that can’t be so easily identified and counteracted.

“The biggest thing people don’t get is the variety. I saw one today of Iranian planes over Israel,” he noted — something that didn’t happen but can’t easily be disproven by someone not on the ground there. “You don’t see it because you’re not on the Telegram channel, or in certain WhatsApp groups — but millions are.”

TrueMedia offers a free service (via web and API) for identifying images, video, audio, and other items as fake or real. It’s no simple task and can’t be completely automated, but they are slowly building a foundation of ground truth material that feeds back into the process.


“Our primary mission is detection. The academic benchmarks [for evaluating fake media] have long since been plowed over,” Etzioni explained. “We train on things uploaded by people all over the world; we see what the different vendors say about it, what our models say about it, and we generate a conclusion. As a follow-up, we have a forensic team doing a deeper investigation that’s more extensive and slower, not on all the items but a significant fraction, so we have a solid ground truth. We don’t assign a truth value unless we’re quite sure; we can still be wrong, but we’re substantially better than any other single solution.”
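The process Etzioni describes — collect verdicts from multiple vendors and models, aggregate them, and defer to human forensics when the signal is ambiguous — can be sketched in a few lines. This is a minimal illustrative sketch, not TrueMedia’s actual code or API; the function name, thresholds, and score format are all assumptions.

```python
# Hypothetical sketch of aggregating deepfake-detector verdicts:
# average the per-detector scores, and only assign a verdict when
# the ensemble is confident; everything else is routed to slower
# human forensic review. Names and thresholds are illustrative only.

def aggregate_verdict(scores, fake_threshold=0.8, real_threshold=0.2):
    """scores: list of per-detector probabilities that an item is fake.

    Returns 'fake', 'real', or 'uncertain' (deferred to forensics).
    """
    if not scores:
        return "uncertain"  # no detector signal at all
    avg = sum(scores) / len(scores)
    if avg >= fake_threshold:
        return "fake"
    if avg <= real_threshold:
        return "real"
    return "uncertain"  # ambiguous: send to the forensic team
```

The key design choice mirrors the quote above: the system abstains rather than guessing, so a truth value is only assigned when the ensemble is well clear of the decision boundary.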

That primary mission is in service of quantifying the problem in three key ways Etzioni outlined:

All of these are works in progress, some just beginning, he emphasized. But you have to start somewhere.

“Let me make a bold prediction: Over the next four years, we’re going to become much more adept at measuring this,” he said. “Because we have to. Right now we’re just trying to cope.”

As for some of the industry and technical attempts to make generated media more obvious, such as watermarking images and text, they’re harmless and perhaps beneficial, but don’t even begin to solve the problem, he said.

“The way I’d put it is, don’t bring a watermark to a gunfight.” These voluntary standards are helpful in collaborative ecosystems where everyone has a reason to use them, but they offer little protection against malicious actors who want to avoid detection.

It all sounds rather dire, and it is, but the most consequential election in recent history just took place without much in the way of AI trickery. That is not because generative disinfo isn’t commonplace, but because its purveyors didn’t feel it necessary to take part. Whether that scares you more or less than the alternative is quite up to you.