In a post on his personal blog, OpenAI CEO Sam Altman said that he believes OpenAI "know[s] how to build [artificial general intelligence]" as the company has traditionally understood it, and is beginning to turn its aim to "superintelligence."

"We love our current products, but we are here for the glorious future," Altman wrote in the post. "Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity."

Altman previously said that superintelligence could be "a few thousand days" away, and that its arrival would be "more intense than people think."

AGI, or artificial general intelligence, is a nebulous term, but OpenAI has its own definition: "highly autonomous systems that outperform humans at most economically valuable work." OpenAI and Microsoft, the startup's close collaborator and investor, also have a definition of AGI: AI systems that can generate at least $100 billion in profits. When OpenAI achieves this, Microsoft will lose access to its technology, per an agreement between the two companies.

So which definition might Altman be referring to? He didn't specify, but the former seems likeliest. Altman wrote that he thinks AI agents, AI systems that can perform certain tasks autonomously, may "join the workforce," in a manner of speaking, and "materially change the output of companies" this year.

"We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly distributed outcomes," he wrote.

That's possible, but it's also true that today's AI technology is significantly limited. It hallucinates, for one; it makes mistakes obvious to any human; and it can be very expensive.

Altman seems confident all this can be overcome, and quickly. Still, if there's anything we've learned about AI from the past few years, it's that timelines can shift.

"We're pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important," Altman wrote. "Given the possibilities of our work, OpenAI cannot be a normal company. How lucky and humbling it is to be able to play a role in this work."

One would hope that, as OpenAI telegraphs its shift in focus to what it considers to be superintelligence, the company devotes sufficient resources to ensuring superintelligent systems behave safely.

OpenAI has written several times about how successfully transitioning to a world with superintelligence is "far from guaranteed," and that it doesn't have all the answers. "[W]e don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," the company wrote in a blog post dated July 2023. "[H]umans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence."

Since that post, however, OpenAI has disbanded teams that were focused on AI safety, including superintelligent systems safety, and seen several influential safety-focused researchers depart. Several of these staffers cited OpenAI's increasingly commercial ambitions as the reason for their departure. The company is currently undergoing a corporate restructuring to make it more attractive to outside investors.

Asked in a recent interview about critics who say OpenAI isn't focused enough on safety, Altman responded, "I'd point to our track record."