
Intel, Google, Microsoft, Meta and other tech heavyweights are forming a new industry group, the Ultra Accelerator Link (UALink) Promoter Group, to guide the development of the components that link together AI accelerator chips in data centers.

Announced Thursday, the UALink Promoter Group, which also counts AMD (but not Arm just yet), Hewlett Packard Enterprise, Broadcom and Cisco among its members, is proposing a new industry standard to connect the AI accelerator chips found within a growing number of servers. Loosely defined, AI accelerators are chips ranging from GPUs to custom-designed solutions that speed up the training, fine-tuning and running of AI models.

“The industry needs an open standard that can be moved forward very quickly, in an open [format] that allows multiple companies to add value to the overall ecosystem,” Forrest Norrod, AMD’s GM of data center solutions, told reporters in a briefing Wednesday. “The industry needs a standard that allows innovation to proceed at a rapid clip unfettered by any single company.”

Version one of the proposed standard, UALink 1.0, will connect up to 1,024 AI accelerators (GPUs only) across a single computing “pod.” (The group defines a pod as one or several racks in a server.) UALink 1.0, based on “open standards” including AMD’s Infinity Fabric, will allow for direct loads and stores between the memory attached to AI accelerators, and generally boost speed while lowering data transfer latency compared to existing interconnect specs, according to the UALink Promoter Group.
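The UALink 1.0 spec itself hasn’t been published, so there’s no UALink API to point to. As a rough illustration of what “direct loads and stores” between accelerator-attached memory look like in an existing programming model, here’s a minimal sketch using CUDA’s peer-to-peer runtime API; the device indices and buffer size are arbitrary, and the analogy to UALink is our assumption, not anything drawn from the spec.

    // Sketch only: CUDA peer-to-peer access as a stand-in for the kind of
    // direct load/store semantics an interconnect spec like UALink targets.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int canAccess = 0;
        // Can device 0 directly address device 1's memory over the interconnect?
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);
        if (!canAccess) {
            std::printf("No peer access between devices 0 and 1\n");
            return 1;
        }

        float *buf0 = nullptr, *buf1 = nullptr;
        const size_t bytes = 1 << 20;  // 1 MiB, arbitrary

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // map device 1's memory into device 0's address space
        cudaMalloc(&buf0, bytes);          // memory attached to device 0

        cudaSetDevice(1);
        cudaMalloc(&buf1, bytes);          // memory attached to device 1

        // One direct copy across the interconnect, no staging through host RAM.
        // Once peer access is enabled, kernels on device 0 can also load from
        // and store to buf1 directly by dereferencing the pointer.
        cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
        return 0;
    }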

The group says it’ll create a consortium, the UALink Consortium, in Q3 to oversee development of the UALink spec going forward. UALink 1.0 will be made available around the same time to companies that join the consortium, with a higher-bandwidth update spec, UALink 1.1, set to arrive in Q4 2024.

The first UALink products will launch “in the next couple of years,” Norrod said.

Glaringly absent from the list of the group’s members is Nvidia, which is by far the largest manufacturer of AI accelerators, with an estimated 80% to 95% of the market. Nvidia declined to comment for this story. But it’s not tough to see why the chipmaker isn’t enthusiastically throwing its weight behind UALink.


For one, Nvidia offers its own proprietary interconnect tech for linking GPUs within a data center server. The company is likely none too keen to support a spec based on rival technologies.

Then there’s the fact that Nvidia is operating from a position of enormous strength and influence.

In Nvidia’s most recent fiscal quarter (Q1 2025), the company’s data center sales, which include sales of its AI chips, rose more than 400% from the year-ago quarter. If Nvidia continues on its current trajectory, it’s set to surpass Apple as the world’s second-most valuable firm sometime this year.

So, simply put, Nvidia doesn’t have to play ball if it doesn’t want to.

As for Amazon Web Services (AWS), the sole public cloud giant not contributing to UALink, it might be in a “wait and see” mode as it chips (no pun intended) away at its various in-house accelerator hardware efforts. It could also be that AWS, with a stranglehold on the cloud services market, doesn’t see much of a strategic point in antagonizing Nvidia, which supplies much of the GPUs it serves to customers.

AWS didn’t respond to TechCrunch’s request for comment.

Indeed, the biggest beneficiaries of UALink, besides AMD and Intel, seem to be Microsoft, Meta and Google, which combined have spent billions of dollars on Nvidia GPUs to power their clouds and train their ever-growing AI models. All are looking to wean themselves off of a vendor they see as worrisomely dominant in the AI hardware ecosystem.

In a recent report, Gartner estimates that the value of AI accelerators used in servers will total $21 billion this year, increasing to $33 billion by 2028. Revenue from AI chips will hit $33.4 billion by 2025, meanwhile, projects Gartner.

Google has custom chips for training and running AI models, TPUs and Axion. Amazon has several AI chip families under its belt. Microsoft last year jumped into the fray with Maia and Cobalt. And Meta is refining its own lineup of accelerators.

Elsewhere, Microsoft and its close collaborator, OpenAI, reportedly plan to spend at least $100 billion on a supercomputer for training AI models that’ll be outfitted with future versions of Cobalt and Maia chips. Those chips will need something to link them, and perhaps it’ll be UALink.