
NVIDIA launches new Blackwell B200 AI superchip, claiming up to 30 times the performance of its current flagship H100

NVIDIA, a frontrunner in AI hardware, has taken the tech world by surprise by announcing an entirely new platform. Dubbed Blackwell, it is a new generation of AI chips that promises significantly better AI computing performance and energy efficiency.

Revealed by CEO Jensen Huang at the company’s annual GTC event in San Jose, the new Blackwell series of AI chips promises unparalleled speed and efficiency, marking a significant leap forward in how artificial intelligence systems are built and run.

Named in homage to mathematician David Harold Blackwell, NVIDIA’s Blackwell chips boast a remarkable performance upgrade over their predecessor, the H100. According to Huang, Blackwell chips are between seven and 30 times faster than the H100 while consuming a fraction of the power: about 25 times less, to be precise. The new Blackwell B200 GPU and the GB200 “superchip” lead the pack.

According to NVIDIA, the new B200 GPU delivers 20 petaflops of FP4 compute from its 208 billion transistors. The company further claims that pairing two of these GPUs with a Grace CPU in the GB200 configuration boosts LLM inference performance by up to 30 times, while reducing cost and energy consumption by as much as 25 times compared to the H100.
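Taken at face value, the claimed factors are simple multipliers on the H100 baseline. A minimal back-of-the-envelope sketch of that arithmetic (the baselines below are normalized placeholders, not measured H100 numbers; the 30x and 25x factors are NVIDIA’s claims as reported above):

```python
# Back-of-the-envelope comparison using NVIDIA's headline claims.
# The H100 baselines are normalized to 1.0 for illustration only;
# the speedup and efficiency factors are the vendor's stated figures.
H100_THROUGHPUT = 1.0        # normalized LLM inference throughput
H100_ENERGY_PER_INFERENCE = 1.0  # normalized energy per inference

CLAIMED_SPEEDUP = 30         # GB200 vs. H100, LLM inference (NVIDIA claim)
CLAIMED_EFFICIENCY = 25      # claimed cost/energy reduction factor

gb200_throughput = H100_THROUGHPUT * CLAIMED_SPEEDUP
gb200_energy_per_inference = H100_ENERGY_PER_INFERENCE / CLAIMED_EFFICIENCY

print(f"Relative throughput: {gb200_throughput:.0f}x")
print(f"Relative energy per inference: {gb200_energy_per_inference:.2f}x")
```

In other words, if both claims held simultaneously, each unit of inference work would complete 30 times sooner and consume 4% of the energy it does on an H100.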

This breakthrough technology is poised to redefine the landscape of AI applications, considering how resource-hungry they can get.

Huang emphasized the pivotal role of Blackwell GPUs in driving what he termed the “new Industrial Revolution,” highlighting the transformative potential of generative AI. With support from leading companies across various sectors, NVIDIA aims to unleash AI’s full capabilities, revolutionizing industries and driving innovation.

The technological prowess of the Blackwell chips lies in their raw performance. At up to 20 petaflops of FP4 compute, the B200 outstrips the H100 by a staggering margin, though part of that gap reflects the new, lower-precision FP4 number format, which the H100 does not support, rather than a like-for-like comparison.

This leap is enabled by 208 billion transistors, a substantial increase over the H100’s 80 billion. NVIDIA achieved this by joining two large chip dies with an interconnect running at up to 10 terabytes per second, fast enough for the pair to operate as a single GPU.

Endorsements from industry titans underscore the significance of NVIDIA’s contributions to the AI landscape. In a testament to the indispensability of NVIDIA hardware, CEOs, including Sam Altman of OpenAI, Satya Nadella of Microsoft, and Sundar Pichai of Alphabet, lauded Blackwell’s performance advancements. Elon Musk, CEO of Tesla, echoed this sentiment, hailing Blackwell as a catalyst for innovation in AI computing.

While NVIDIA has yet to disclose pricing details for Blackwell chips, the demand for its technology remains insatiable. Despite the hefty price tags associated with NVIDIA’s offerings, tech companies view access to these chips as a badge of honor. Startups have even raised funds solely because they have access to NVIDIA’s H100 AI GPUs. With delivery wait times stretching up to 11 months, securing NVIDIA’s AI chips has become a top priority for firms seeking to maintain a competitive edge in the AI landscape.

As NVIDIA continues to push the boundaries of AI computing, the unveiling of Blackwell heralds a new era of innovation and progress, propelling the industry toward unprecedented heights.
