Nvidia is touting the performance of DeepSeek's open source AI models on its just-launched RTX 50 series GPUs, saying they can "run the DeepSeek family of distilled models faster than anything on the PC market." But Nvidia's announcement may somewhat miss the point.
This week, Nvidia suffered the largest single-day market capitalization loss in the history of American companies, a drop widely attributed to DeepSeek. DeepSeek says its new R1 reasoning model doesn't require powerful Nvidia hardware to achieve performance comparable to OpenAI's o1 model, which allowed the Chinese company to train it at a significantly lower cost. What DeepSeek has accomplished with R1 seems to show that Nvidia's best chips may not be strictly necessary for progress in AI, which could affect the company's fortunes going forward.
That said, DeepSeek still trained its models on Nvidia GPUs, just lower-end ones (H800s) that the US government allows Nvidia to export to China. And Nvidia's blog post today aims to show that its new RTX 50 series GPUs can be useful for R1 inference (that is, what an AI model actually generates), saying the GPUs are built on "the same NVIDIA Blackwell GPU architecture that fuels world-leading AI innovation in the data center" and that "RTX fully accelerates DeepSeek, offering maximum inference performance on PCs."
But how DeepSeek did its training is a big part of what made the news such a big deal in the first place. (And it's worth noting that China gets a less powerful version of the RTX 5090.)
Other tech companies are also trying to ride the DeepSeek wave. R1 is available on AWS as well, and Microsoft made it available on its Azure AI Foundry and GitHub platforms this week. However, Microsoft and OpenAI are reportedly investigating whether DeepSeek used OpenAI's data, Bloomberg reports.