Meta to deploy its in-house data centre AI chips this year; AI spending could top $30 billion annually


Meta Platforms, the parent company of Facebook, plans to deploy a new version of its custom chip, “Artemis,” into its data centers in 2024, according to an internal document seen by Reuters.


Zuckerberg also outlined the company’s strategy to compete against Alphabet and Microsoft in the high-stakes AI arms race. Meta aims to leverage its vast walled garden of data, emphasizing the hundreds of billions of publicly shared images and tens of billions of public videos available within its platforms. This is positioned as a critical advantage over competitors like Google, Microsoft, and OpenAI, which primarily train their AI models on data crawled from the public web.

This second generation of Meta’s in-house silicon aims to support the company’s AI initiatives and could potentially reduce its reliance on NVIDIA chips, which currently dominate the market.

In addition to Meta’s generative AI technology, Zuckerberg expressed ambitions for “general intelligence,” a general-purpose AI capable of outperforming humans in various tasks. The goal is to build the most popular and advanced AI products and services, providing users with a world-class AI assistant to enhance productivity.

However, achieving this vision comes with a hefty price tag. Meta’s capital expenditures could rise by as much as $9 billion this year, to between $30 billion and $37 billion, up from $28.1 billion in 2023.

The company acknowledges that ambitious long-term AI research and product development efforts will require ongoing infrastructure investments beyond the current year. Despite the significant costs involved, Meta appears committed to positioning itself as a leader in the evolving landscape of artificial intelligence.

Meta has been investing billions of dollars in specialized chips and data center reconfigurations to accommodate the growing demand for AI products in its platforms, including Facebook, Instagram, WhatsApp, and hardware devices like smart glasses.

Deploying its own chip could help Meta optimize performance and efficiency for its specific workloads, potentially saving significant annual energy and chip-purchasing costs.

The new chip is designed for inference, the process in which trained models make ranking judgments and generate responses to user prompts. Meta CEO Mark Zuckerberg mentioned plans to have around 350,000 NVIDIA “H100” processors by the end of the year, contributing to Meta’s overall computing capacity.

This move comes after Meta’s decision in 2022 to halt the first iteration of its in-house chip and purchase NVIDIA GPUs. The Artemis chip is part of Meta’s broader AI silicon project, with the company also reportedly working on a more ambitious chip capable of both training and inference processes.

Despite earlier challenges, an inference chip like Artemis could offer greater efficiency in handling Meta’s recommendation models than the power-hungry NVIDIA processors.
