Image Credits:Akio Kon/Bloomberg / Getty Images
Nvidia raked in more than $19 billion in net income during the last quarter, the company reported on Wednesday, but that did little to assure investors that its rapid growth would continue. On its earnings call, analysts prodded CEO Jensen Huang about how Nvidia would fare if tech companies start using new methods to improve their AI models.
The method that underpins OpenAI's o1 model, or "test-time scaling," came up quite a lot. It's the idea that AI models will give better answers if you give them more time and computing power to "think" through questions. Specifically, it adds more compute to the AI inference phase, which is everything that happens after a user hits enter on their prompt.
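The idea of spending more compute at inference time can be illustrated with a minimal sketch. The snippet below shows best-of-N sampling, one simple and well-known form of test-time compute: sample several candidate answers and keep the highest-scoring one. The `toy_generate` and `toy_score` functions are hypothetical stand-ins for a real model's sampler and a verifier model; they are not drawn from o1, whose actual method OpenAI has not disclosed.

```python
import random

def toy_generate(prompt: str, rng: random.Random) -> str:
    # Stand-in for sampling one candidate answer from a language model.
    return f"answer-{rng.randint(0, 9)}"

def toy_score(prompt: str, answer: str) -> float:
    # Stand-in for a verifier/reward model scoring a candidate answer.
    return float(answer.split("-")[1])

def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
    # Spend more inference-time compute by drawing n candidates
    # and returning the one the verifier scores highest.
    rng = random.Random(seed)
    candidates = [toy_generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda a: toy_score(prompt, a))
```

In this toy setup, raising `n` means more inference work per query, and the best score found can only improve, which is the economic point for chipmakers: each user request consumes more compute.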
Nvidia's CEO was asked whether he was seeing AI model developers shift over to these new methods and how Nvidia's older chips would work for AI inference.
Huang indicated that o1, and test-time scaling more broadly, could play a bigger role in Nvidia's business moving forward, calling it "one of the most exciting developments" and "a new scaling law." Huang did his best to assure investors that Nvidia is well positioned for the change.
The Nvidia CEO's remarks align with what Microsoft CEO Satya Nadella said onstage at a Microsoft event on Tuesday: o1 represents a new way for the AI industry to improve its models.
This is a big deal for the chip industry because it places a greater emphasis on AI inference. While Nvidia's chips are the gold standard for training AI models, there's a broad set of well-funded startups building lightning-fast AI inference chips, such as Groq and Cerebras. It could be a more competitive space for Nvidia to operate in.
Despite recent reports that improvements in generative models are slowing, Huang told analysts that AI model developers are still improving their models by adding more compute and data during the pretraining phase.
Anthropic CEO Dario Amodei also said on Wednesday during an onstage interview at the Cerebral Valley summit in San Francisco that he is not seeing a slowdown in model development.
"Foundation model pretraining scaling is intact and it's continuing," said Huang on Wednesday. "As you know, this is an empirical law, not a fundamental physical law, but the evidence is that it continues to scale. What we're learning, however, is that it's not enough."
That's certainly what Nvidia investors wanted to hear, since the chipmaker's stock has soared more than 180% in 2024 by selling the AI chips that OpenAI, Google, and Meta train their models on. However, Andreessen Horowitz partners and several other AI executives have previously said that these methods are already starting to show diminishing returns.
Huang noted that most of Nvidia's computing workloads today are around the pretraining of AI models, not inference, but he attributed that more to where the AI world is today. He said that one day there will simply be more people running AI models, meaning more AI inference will occur. Huang noted that Nvidia is the largest inference platform in the world today, and the company's scale and reliability give it a huge advantage compared to startups.
"Our hopes and dreams are that someday, the world does a ton of inference, and that's when AI has really succeeded," said Huang. "Everybody knows that if they innovate on top of CUDA and Nvidia's architecture, they can innovate more quickly, and they know that everything should work."