Image Credits: FABRICE COFFRINI/AFP / Getty Images
A high-profile ex-OpenAI policy researcher, Miles Brundage, took to social media on Wednesday to criticize OpenAI for "rewriting the history" of its deployment approach to potentially risky AI systems.
Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said that it sees the development of AGI, broadly defined as AI systems that can perform any task a human can, as a "continuous path" that requires "iteratively deploying and learning" from AI technologies.
"In a discontinuous world [...] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT‑2," OpenAI wrote. "We now view the first AGI as just one point along a series of systems of increasing usefulness [...] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system."
But Brundage contends that GPT-2 did, in fact, warrant abundant caution at the time of its release, and that this was "100% consistent" with OpenAI's iterative deployment strategy today.
"OpenAI's release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI's current philosophy of iterative deployment," Brundage wrote in a post on X. "The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution."
Brundage, who joined OpenAI as a research scientist in 2018, was the company's head of policy research for several years. On OpenAI's "AGI readiness" team, he had a particular focus on the responsible deployment of language generation systems such as OpenAI's AI chatbot platform ChatGPT.
GPT-2, which OpenAI announced in 2019, was a progenitor of the AI systems powering ChatGPT. GPT-2 could answer questions about a topic, summarize articles, and generate text on a level sometimes indistinguishable from that of humans.
While GPT-2 and its outputs may look basic today, they were cutting-edge at the time. Citing the risk of malicious use, OpenAI initially declined to release GPT-2's source code, opting instead to give selected news outlets limited access to a demo.
The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 had been overstated, and that there wasn't any evidence the model could be abused in the ways OpenAI described. AI-focused publication The Gradient went so far as to publish an open letter requesting that OpenAI release the model, arguing it was too technologically important to hold back.
OpenAI eventually did release a partial version of GPT-2 six months after the model's unveiling, followed by the full system several months after that. Brundage thinks this was the right approach.
"What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous? None of it," he said in a post on X. "What's the evidence this caution was 'disproportionate' ex ante? Ex post, it prob. would have been OK, but that doesn't mean it was responsible to YOLO it [sic] given info at the time."
Brundage fears that OpenAI's aim with the document is to set up a burden of proof where "concerns are alarmist" and "you need overwhelming evidence of imminent dangers to act on them." This, he argues, is a "very dangerous" mentality for advanced AI systems.
"If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by poo-pooing caution in such a lop-sided way," Brundage added.
OpenAI has historically been accused of prioritizing "shiny products" at the expense of safety, and of rushing product releases to beat rival companies to market. Last year, OpenAI dissolved its AGI readiness team, and a string of AI safety and policy researchers departed the company for rivals.
Competitive pressures have only ramped up. Chinese AI lab DeepSeek captured the world's attention with its openly available R1 model, which matched OpenAI's o1 "reasoning" model on a number of key benchmarks. OpenAI CEO Sam Altman has admitted that DeepSeek has lessened OpenAI's technological lead, and said that OpenAI would "pull up some releases" to better compete.
There's a lot of money on the line. OpenAI loses billions annually, and the company has reportedly projected that its annual losses could triple to $14 billion by 2026. A faster product release cycle could benefit OpenAI's bottom line near-term, but perhaps at the expense of safety long-term. Experts like Brundage question whether the trade-off is worth it.