Render of a green Tesla. Image Credits: TechCrunch / Bryce Durbin


Elon Musk doesn’t want Tesla to be just an automaker. He wants Tesla to be an AI company, one that’s figured out how to make cars drive themselves.

Central to that mission is Dojo, Tesla’s custom-built supercomputer designed to train its Full Self-Driving (FSD) neural networks. FSD isn’t actually fully self-driving; it can perform some automated driving tasks, but it still requires an attentive human behind the wheel. But Tesla thinks that with more data, more compute power and more training, it can cross the threshold from almost self-driving to fully self-driving.

And that’s where Dojo comes in.

Musk has been teasing Dojo for some time, but the executive ramped up discussions about the supercomputer throughout 2024. Now that we’re in 2025, another supercomputer called Cortex has entered the chat, but Dojo’s importance to Tesla might still be existential — with EV sales slumping, investors want assurance that Tesla can achieve autonomy. Below is a timeline of Dojo mentions and promises.

2019

First mentions of Dojo

April 22 – At Tesla’s Autonomy Day, the automaker had its AI team onstage to talk about Autopilot and Full Self-Driving, and the AI powering them both. The company shares information about Tesla’s custom-built chips that are designed specifically for neural networks and self-driving cars.

During the event, Musk teases Dojo, revealing that it’s a supercomputer for training AI. He also notes that all Tesla cars being produced at the time would have all the hardware necessary for full self-driving and only needed a software update.

2020

Musk begins the Dojo roadshow

Feb 2 – Musk says Tesla will soon have more than a million connected vehicles worldwide with the sensors and compute needed for full self-driving — and touts Dojo’s capabilities.


“Dojo, our training supercomputer, will be able to process vast amounts of video training data & efficiently run hyperspace arrays with a vast number of parameters, plenty of memory & ultra-high bandwidth between cores. More on this later.”

August 14 – Musk reiterates Tesla’s plan to develop a neural network training computer called Dojo “to process truly vast amounts of video data,” calling it “a beast.” He also says the first version of Dojo is “about a year away,” which would put its launch date somewhere around August 2021.

December 31 – Elon says Dojo isn’t needed, but it will make self-driving better. “It isn’t enough to be safer than human drivers, Autopilot ultimately needs to be more than 10 times safer than human drivers.”

2021

Tesla makes Dojo official

August 19 – The automaker officially announces Dojo at Tesla’s first AI Day, an event meant to attract engineers to Tesla’s AI team. Tesla also introduces its D1 chip, which the automaker says it will use — alongside Nvidia’s GPUs — to power the Dojo supercomputer. Tesla notes its AI cluster will house 3,000 D1 chips.

October 12 – Tesla releases a Dojo Technology whitepaper, “a guide to Tesla’s configurable floating point formats & arithmetic.” The whitepaper outlines a technical standard for a new type of binary floating-point arithmetic that’s used in deep learning neural networks and can be implemented “entirely in software, entirely in hardware, or in any combination of software and hardware.”
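To make the idea of a “configurable” floating-point format concrete, here is a minimal sketch in Python of decoding an 8-bit float whose exponent width and bias can be chosen by the user. This is an illustrative example only, not Tesla’s actual CFloat specification from the whitepaper, and it skips special values such as NaN and infinity.

```python
def decode_float8(byte: int, exp_bits: int = 4, bias: int = 7) -> float:
    """Decode one byte as a float: 1 sign bit, exp_bits exponent bits,
    and (7 - exp_bits) mantissa bits, with a configurable exponent bias.
    Illustrative only -- not Tesla's CFloat8 format; NaN/Inf handling omitted."""
    man_bits = 7 - exp_bits
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exponent = (byte >> man_bits) & ((1 << exp_bits) - 1)
    mantissa = byte & ((1 << man_bits) - 1)

    if exponent == 0:  # subnormal range: no implicit leading 1
        return sign * (mantissa / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1 + mantissa / (1 << man_bits)) * 2.0 ** (exponent - bias)


# Same bit pattern, different configurations -> different values:
print(decode_float8(0b00111000, exp_bits=4, bias=7))   # 1.0
print(decode_float8(0b00111000, exp_bits=4, bias=10))  # 0.125
```

The appeal of a configurable format like this for training hardware is that the exponent/mantissa split and bias can trade dynamic range against precision without growing beyond 8 bits.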

2022

Tesla reveals Dojo progress

August 12 – Musk says Tesla will “phase in Dojo. Won’t need to buy as many incremental GPUs next year.”

September 30 – At Tesla’s second AI Day, the company reveals that it has installed the first Dojo cabinet and tested it at 2.2 megawatts of load. Tesla says it was building one tile per day (each of which is made up of 25 D1 chips). Tesla demos Dojo onstage running a Stable Diffusion model to create an AI-generated image of a “Cybertruck on Mars.”

Importantly, the company sets a target date of a full Exapod cluster to be completed by Q1 2023, and says it plans to build a total of seven Exapods in Palo Alto.
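For context on what that onstage demo involved: generating an image from a text prompt with a Stable Diffusion model is a workload that can be reproduced today on a single commodity GPU with the open-source diffusers library. The sketch below is illustrative only, assumes the publicly released Stable Diffusion v1.5 weights, and is not Tesla’s Dojo-hosted setup.

```python
# Minimal text-to-image sketch with Hugging Face diffusers (not Tesla's Dojo stack).
# Assumes: pip install diffusers transformers accelerate torch, plus a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # publicly released SD 1.5 weights
    torch_dtype=torch.float16,          # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")

# The prompt Tesla showed onstage.
image = pipe("Cybertruck on Mars").images[0]
image.save("cybertruck_on_mars.png")
```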

2023

A ‘long-shot bet’

April 19 – Musk tells investors during Tesla’s first-quarter earnings call that Dojo “has the potential for an order of magnitude improvement in the cost of training,” and also “has the potential to become a sellable service that we would offer to other companies in the same way that Amazon Web Services offers web services.”

Musk also notes that he’d “look at Dojo as kind of a long-shot bet,” but a “bet worth making.”

June 21 – The Tesla AI X account posts that the company’s neural networks are already in customer vehicles. The thread includes a graph with a timeline of Tesla’s current and projected compute power, which places the start of Dojo production at July 2023, although it’s not clear if this refers to the D1 chips or the supercomputer itself. Musk says that same day that Dojo was already online and running tasks at Tesla data centers.

The company also projects that Tesla’s compute will be in the top five in the entire world by around February 2024 (there is no indication this was successful) and that Tesla would reach 100 exaflops by October 2024.

July 19 – Tesla notes in its second-quarter earnings report that it has started production of Dojo. Musk also says Tesla plans to spend more than $1 billion on Dojo through 2024.

September 6 – Musk posts on X that Tesla is limited by AI training compute, but that Nvidia and Dojo will fix that. He says managing the data from the roughly 160 billion frames of video Tesla gets from its cars per day is extremely difficult.

2024

Plans to scale

January 24 – During Tesla’s fourth-quarter and full-year earnings call, Musk acknowledges again that Dojo is a high-risk, high-reward project. He also says that Tesla was pursuing “the dual path of Nvidia and Dojo,” that “Dojo is working” and is “doing training jobs.” He notes Tesla is scaling it up and has “plans for Dojo 1.5, Dojo 2, Dojo 3 and whatnot.”

January 26 – Tesla announced plans to spend $500 million to build a Dojo supercomputer in Buffalo. Musk then downplays the investment somewhat, posting on X that while $500 million is a large sum, it’s “only equivalent to a 10k H100 system from Nvidia. Tesla will spend more than that on Nvidia hardware this year. The table stakes for being competitive in AI are at least several billion dollars per year at this point.”

April 30 – At TSMC’s North American Technology Symposium, the company says Dojo’s next-generation training tile — the D2, which puts the entire Dojo tile onto a single silicon wafer, rather than connecting 25 chips to make one tile — is already in production, according to IEEE Spectrum.

May 20 – Musk notes that the rear portion of the Giga Texas factory extension will include the construction of “a super dense, water-cooled supercomputer cluster.”

June 4 – A CNBC report reveals that Musk diverted thousands of Nvidia chips reserved for Tesla to X and xAI. After initially saying the report was false, Musk posts on X that Tesla didn’t have a place to send the Nvidia chips to turn them on, due to the continued construction on the south extension of Giga Texas, “so they would have just sat in a warehouse.” He noted the extension will “house 50k H100s for FSD training.”

He also posts:

“Of the approximately $10B in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo. For building the AI training superclusters, NVidia hardware is about 2/3 of the cost. My current best guess for Nvidia purchases by Tesla are $3B to $4B this year.”

July 1 – Musk reveals on X that current Tesla vehicles may not have the right hardware for the company’s next-gen AI model. He says that the roughly 5x increase in parameter count with the next-gen AI “is very difficult to achieve without upgrading the vehicle inference computer.”

Nvidia supply challenges

July 23 – During Tesla’s second-quarter earnings call, Musk says demand for Nvidia hardware is “so high that it’s often difficult to get the GPUs.”

“I think this therefore requires that we put a lot more effort on Dojo in order to ensure that we’ve got the training capability that we need,” Musk says. “And we do see a path to being competitive with Nvidia with Dojo.”

A graph in Tesla’s investor deck predicts that Tesla AI training capacity will ramp to nearly 90,000 H100 equivalent GPUs by the end of 2024, up from around 40,000 in June. Later that day on X, Musk posts that Dojo 1 will have “roughly 8k H100-equivalent of training online by end of year.” He also posts photos of the supercomputer, which appears to use the same refrigerator-like stainless steel exterior as Tesla’s Cybertrucks.
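As a rough sanity check on how these “H100-equivalent” counts relate to the 100-exaflop goal Tesla projected for October 2024, here is a back-of-the-envelope calculation. It assumes about 1 petaflop of dense FP16 tensor throughput per H100-equivalent, which is our approximation; actual throughput depends on precision and sparsity, and Tesla doesn’t specify how it is counting.

```python
# Back-of-the-envelope: GPU counts -> approximate training exaflops.
# Assumption (ours, not Tesla's): ~1e15 FLOPS (1 petaflop) of dense FP16
# tensor throughput per H100-equivalent.
H100_EQUIV_FLOPS = 1e15

for label, gpus in [("Dojo 1 (end of 2024)", 8_000),
                    ("Tesla total (June 2024)", 40_000),
                    ("Tesla total (end-of-2024 target)", 90_000)]:
    exaflops = gpus * H100_EQUIV_FLOPS / 1e18
    print(f"{label}: ~{exaflops:.0f} exaflops")

# Prints roughly 8, 40, and 90 exaflops -- so the 90,000-GPU target lands in
# the same ballpark as the 100-exaflop figure from Tesla's 2023 projections.
```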

From Dojo to Cortex

July 30 – AI5 is ~18 months away from high-volume production, Musk says in a reply to a post from someone claiming to start a club of “Tesla HW4/AI4 owners angry about getting left behind when AI5 comes out.”

August 3 – Musk posts on X that he did a walkthrough of “the Tesla supercompute cluster at Giga Texas (aka Cortex).” He notes that it would be made up of roughly 100,000 H100/H200 Nvidia GPUs with “massive storage for video training of FSD & Optimus.”

August 26 – Musk posts on X a video of Cortex, which he refers to as “the giant new AI training supercluster being built at Tesla HQ in Austin to solve real-world AI.”

2025

No updates on Dojo in 2025

January 29 – Tesla’s Q4 and full-year 2024 earnings call included no mention of Dojo. Cortex, Tesla’s new AI training supercluster at the Austin gigafactory, did make an appearance, however. Tesla noted in its shareholder deck that it completed the deployment of Cortex, which is made up of roughly 50,000 H100 Nvidia GPUs.

“Cortex helped enable V13 of FSD (Supervised), which boasts major improvements in safety and comfort thanks to 4.2x increase in data, higher resolution video inputs … among other enhancements,” according to the letter.

During the call, CFO Vaibhav Taneja noted that Tesla accelerated the buildout of Cortex to speed up the rollout of FSD V13. He said that accumulated AI-related capital expenditures, including infrastructure, “so far has been approximately $5 billion.” In 2025, Taneja said he expects capex to be flat as it relates to AI.

This story originally published August 10, 2024, and we will update it as new information develops.