OpenAI’s efforts to develop its next major model, GPT-5, are running behind schedule, with results that don’t yet justify the enormous costs, according to a new report in The Wall Street Journal.
This echoes an earlier report in The Information suggesting that OpenAI is looking for new strategies, as GPT-5 might not represent as big a leap forward as previous models. But the WSJ story includes additional details around the 18-month development of GPT-5, code-named Orion.
OpenAI has reportedly completed at least two large training runs, which aim to improve a model by training it on enormous quantities of data. An initial training run went slower than expected, hinting that a larger run would be both time-consuming and costly. And while GPT-5 can reportedly perform better than its predecessors, it hasn’t yet advanced enough to justify the cost of keeping the model running.
The WSJ also reports that rather than just relying on publicly available data and licensing deals, OpenAI has also hired people to create fresh data by writing code or solving math problems. It’s also using synthetic data created by another of its models, o1.
OpenAI did not immediately respond to a request for comment. The company previously said it would not be releasing a model code-named Orion this year.