Fact-checking using AI. Image Credits: AndreyPopov / Getty Images

Grok was asked by a user on X to fact-check claims made by another user. Image Credits: X/Twitter (screenshot)

Grok’s response on whether it can spread misinformation (translated from Hinglish). Image Credits: X/Twitter (screenshot)


Some users on Elon Musk’s X are turning to Musk’s AI bot Grok for fact-checking, raising concerns among human fact-checkers that this could fuel misinformation.

Earlier this month, X enabled users to call out xAI’s Grok and ask questions on different things. The move was similar to Perplexity, which has been running an automated account on X to offer a similar experience.

Fact-checkers are concerned about using Grok — or any other AI assistant of this sort — in this way because the bots can frame their answers to sound convincing, even if they are not factually correct. Instances of spreading fake news and misinformation were seen with Grok in the past.

In August last year, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the U.S. election.

Other chatbots, including OpenAI’s ChatGPT and Google’s Gemini, were also seen to be generating inaccurate information on the election last year. Separately, disinformation researchers found in 2023 that AI chatbots, including ChatGPT, could easily be used to produce convincing text with misleading narratives.

“AI assistants, like Grok, they’re really good at using natural language and give an answer that sounds like a human being said it. And in that way, the AI products have this claim on naturalness and authentic-sounding responses, even when they’re potentially very wrong. That would be the danger here,” Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter, told TechCrunch.

Unlike AI assistants, human fact-checkers use multiple, credible sources to verify information. They also take full accountability for their findings, with their names and organizations attached to ensure credibility.

Pratik Sinha, co-founder of India’s non-profit fact-checking site Alt News, said that although Grok currently appears to have convincing answers, it is only as good as the data it is supplied with.

“Who’s going to decide what data it gets supplied with, and that is where government interference, etc., will come into picture,” he noted.

“There is no transparency. Anything which lacks transparency will cause harm because anything that lacks transparency can be molded in any which way.”

“Could be misused — to spread misinformation”

In one of the responses posted earlier this week, Grok’s account on X acknowledged that it “could be misused — to spread misinformation and violate privacy.”

However, the automated account does not show any disclaimers to users when they get its answers, leaving them misinformed if it has, for instance, hallucinated the answer — a potential downside of AI.

“It may make up information to provide a response,” Anushka Jain, a research associate at Goa-based multidisciplinary research collective Digital Futures Lab, told TechCrunch.

There’s also some question about how much Grok uses posts on X as training data, and what quality-control measures it uses to fact-check such posts. Last summer, it pushed out a change that appeared to allow Grok to consume X user data by default.

The other concerning area of AI assistants like Grok being accessible through social media platforms is their delivery of information in public — unlike ChatGPT or other chatbots being used privately.

Even if a user is well aware that the information it gets from the assistant could be misleading or not completely correct, others on the platform might still believe it.

This could cause serious social harms. Instances of that were seen earlier in India when misinformation circulated over WhatsApp led to mob lynchings. However, those severe incidents occurred before the arrival of generative AI, which has made synthetic content generation even easier and appear more realistic.

“If you see a lot of these Grok answers, you’re going to say, hey, well, most of them are right, and that may be so, but there are going to be some that are wrong. And how many? It’s not a small fraction. Some of the research studies have shown that AI models are subject to 20% error rates … and when it goes wrong, it can go really wrong with real-world consequences,” IFCN’s Holan told TechCrunch.

AI vs. real fact-checkers

While AI companies, including xAI, are refining their AI models to make them communicate more like humans, they still are not — and cannot — replace humans.

For the last few months, tech companies have been exploring ways to reduce reliance on human fact-checkers. Platforms, including X and Meta, started embracing the new concept of crowdsourced fact-checking through Community Notes.

Naturally, such changes also cause concerns among fact-checkers.

Sinha of Alt News optimistically believes that people will learn to differentiate between machine and human fact-checkers and will value the accuracy of the humans more.

“We’re going to see the pendulum swing back eventually toward more fact-checking,” IFCN’s Holan said.

However, she noted that in the meantime, fact-checkers will likely have more work to do with AI-generated information spreading swiftly.

“A lot of this issue depends on, do you really care about what is actually true or not? Are you just looking for the veneer of something that sounds and feels true without actually being true? Because that’s what AI assistance will get you,” she said.

X and xAI didn’t respond to our request for comment.