Vera Rubin: Chip wars escalate as Nvidia unveils next frontier in AI

Nvidia CEO Jensen Huang announced that the company’s next-generation chips are in full production and will deliver five times the AI computing power of its previous chips when serving chatbots and other AI applications. The chips are arriving much earlier than analysts had expected. In a speech at the Consumer Electronics Show in Las Vegas, the leader of the world’s most valuable company revealed new details about the chips, which will arrive later this year.

Vera Rubin platform

The Vera Rubin platform, a next-generation AI computing architecture designed for agentic AI, reasoning, and massive long-context workflows, is expected to debut this year. The flagship server contains 72 of the company’s graphics processing units and 36 of its central processors.

Huang showed how they can be strung together into “pods” containing more than 1,000 Rubin chips and said they could improve the efficiency of generating “tokens” – the fundamental unit of AI systems – by 10 times.

Huang also highlighted that Rubin chips use a proprietary data format that the company hopes the wider industry will adopt.

Deliver a gigantic step up in performance

“This is how we were able to deliver such a gigantic step up in performance, even though we only have 1.6 times the number of transistors,” Huang said.

NVIDIA has been dominating the AI market, but it also faces many rivals, such as Advanced Micro Devices (AMD.O) and Alphabet (GOOGL.O), in delivering chatbot services and other technologies to millions of users.

Huang’s speech also focused on how the new chips handle prompts and tasks, including a new layer of storage technology called “context memory storage,” aimed at helping chatbots provide snappier responses in long questions and conversations.

Self-driving cars

In other announcements, Huang highlighted new software that can help self-driving cars make decisions about which path to take – and leave a paper trail for engineers to review afterward. NVIDIA showed research on software called Alpamayo late last year, and Huang said on Monday it would be released more widely, along with the data used to train it, so automakers can evaluate it.

“Not only do we open-source the models, but we also open-source the data that we use to train those models, because only in that way can you truly trust how the models came to be,” Huang said from a stage in Las Vegas.

Confident in approach

Huang expressed confidence in the services and deals the company provides, saying Nvidia remains focused on its own business so that results can keep growing and new products can join the lineup.

At the same time, Nvidia is eager to show that its latest products can outperform older chips like the H200, which U.S. President Donald Trump has allowed to flow to China. Reuters has reported that the chip, which was the predecessor to Nvidia’s current “Blackwell” chip, is in high demand in China, alarming China hawks across the U.S. political spectrum.
