It's Google Cloud Next in Las Vegas this week, and that means it's time for a bunch of new instance types and accelerators to hit the Google Cloud Platform. In addition to the new custom Arm-based Axion chip, most of this year's announcements are about AI accelerators, whether built by Google or by Nvidia.
Only a few weeks ago, Nvidia announced its Blackwell platform. But don't expect Google to offer those machines anytime soon. Support for the high-performance Nvidia HGX B200 for AI and HPC workloads and the GB200 NVL72 for large language model (LLM) training will arrive in early 2025. One interesting nugget from Google's announcement: the GB200 servers will be liquid-cooled.
That may sound like a bit of a premature announcement, but Nvidia said that its Blackwell chips won't be publicly available until the last quarter of this year.
Before Blackwell
For developers who need more power to train LLMs today, Google also announced the A3 Mega instance. This instance, which the company developed together with Nvidia, features the industry-standard H100 GPUs but combines them with a new networking system that can deliver up to twice the bandwidth per GPU.
Another new A3 instance is the A3 Confidential, which Google describes as enabling customers to "better protect the confidentiality and integrity of sensitive data and AI workloads during training and inferencing." The company has long offered confidential computing services that encrypt data in use, and here, once enabled, confidential computing will encrypt data transfers between Intel's CPU and the Nvidia H100 GPU via protected PCIe. No code changes required, Google says.
As for Google's own chips, the company on Tuesday launched its Cloud TPU v5p processors, the most powerful of its homegrown AI accelerators yet, into general availability. These chips feature a 2x improvement in floating point operations per second and a 3x improvement in memory bandwidth speed.
All of those fast chips need an underlying architecture that can keep up with them. So in addition to the new chips, Google on Tuesday also announced new AI-optimized storage options. Hyperdisk ML, which is now in preview, is the company's next-gen block storage service that can improve model load times by up to 3.7x, according to Google.
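Hyperdisk ML is surfaced as a Compute Engine disk type, so provisioning it follows the usual gcloud workflow. The sketch below is illustrative only: the disk name, size, and zone are assumptions, and the `hyperdisk-ml` type may require preview access on a given project.

```shell
# Create a Hyperdisk ML volume to hold model weights.
# Disk name, size, and zone here are examples, not values from the announcement.
gcloud compute disks create llm-weights \
    --type=hyperdisk-ml \
    --size=1024GB \
    --zone=us-central1-a
```

Since Hyperdisk ML targets read-heavy model loading, a common pattern is to populate such a volume once and then attach it read-only across a serving fleet.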
Google Cloud is also launching a number of more traditional instances, powered by Intel's fourth- and fifth-generation Xeon processors. The new general-purpose C4 and N4 instances, for example, will feature the fifth-generation Emerald Rapids Xeons, with the C4 focused on performance and the N4 on price. The new C4 instances are now in private preview, and the N4 machines are generally available today.
Also new, but still in preview, are the C3 bare-metal machines, powered by older fourth-generation Intel Xeons, the X4 memory-optimized bare-metal instances (also in preview) and the Z3, Google Cloud's first storage-optimized virtual machine, which promises to offer "the highest IOPS for storage-optimized instances among leading clouds."
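For a sense of how one of these general-purpose machine types is provisioned in practice, here is a minimal gcloud sketch. The instance name, machine-type variant, and zone are illustrative assumptions rather than details from Google's announcement.

```shell
# Launch a general-purpose N4 VM on the fifth-generation Emerald Rapids Xeons.
# Instance name, size, and zone are examples chosen for illustration.
gcloud compute instances create demo-n4 \
    --machine-type=n4-standard-8 \
    --zone=us-central1-a
```

The C4 equivalent would swap in a `c4-` machine type once those instances leave private preview.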