Meta anticipates the release of new Nvidia chips no sooner than next year.

FILE PHOTO: The NVIDIA logo is seen next to a computer motherboard in this illustration taken January 8, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

New York: A representative for Facebook's parent company, Meta Platforms, told Reuters that the company does not anticipate receiving shipments of Nvidia's new flagship AI chip this year.

Nvidia, the leading maker of the graphics processing units (GPUs) essential to most advanced artificial intelligence applications, unveiled the new chip, called the B200 "Blackwell," on Monday at its annual developer conference.

The chipmaker said the B200 is 30 times faster at tasks such as serving up answers from chatbots, but it gave no specifics on how the chip performs when crunching the massive volumes of data used to train those chatbots, the kind of work that has driven most of Nvidia's explosive sales growth.

Nvidia Chief Financial Officer Colette Kress told financial analysts on Tuesday that "we think we're going to come to market later this year," but added that shipments of the new GPUs would not ramp up until 2025.

The social media giant Meta is one of Nvidia's largest customers, having bought hundreds of thousands of the company's previous generation of chips to support its push into generative AI and to build more powerful content recommendation systems.
CEO Mark Zuckerberg said in January that Meta plans to have around 350,000 of those earlier chips, known as H100s, in stock by the end of the year. Combined with other GPUs, he added, its total stockpile would be the equivalent of roughly 600,000 H100s by then.

Zuckerberg said in a statement on Monday that Meta planned to use Blackwell to train its Llama models. The company is currently training a third generation of the model on two GPU clusters it announced last week, each of which it says contains about 24,000 H100 GPUs.

A Meta spokesperson said the company planned to keep using those clusters to train Llama 3 and would use Blackwell for future generations of the model.