
Intelligent architecture over raw compute: DeepSeek breaks the "bigger is better" approach to AI development



The story of AI has reached a critical inflection point. DeepSeek's breakthrough, achieving advanced performance without relying on the most advanced chips, proves what many at NeurIPS in December had already declared: the future of AI is not about throwing more compute at problems; it is about reinventing how these systems work with humans and our environment.

As a Stanford-educated computer scientist who has witnessed both the promise and the perils of AI development, I see this moment as even more transformative than the debut of ChatGPT. We are entering what some call a "reasoning renaissance." OpenAI's o1, DeepSeek's R1 and others are moving beyond brute-force scaling toward something smarter, and doing it with unprecedented efficiency.

This shift could not be more timely. During his NeurIPS talk, former OpenAI chief scientist Ilya Sutskever said that "pre-training will end" because, while compute power keeps growing, we are constrained by finite internet data. DeepSeek's breakthrough validates this perspective: the Chinese company's researchers achieved performance comparable to OpenAI's o1 at a fraction of the cost, demonstrating that innovation, not just raw compute power, is the way forward.

Advanced AI without massive pre-training

World models are stepping in to fill this gap. World Labs' recent $230 million raise to build AI systems that understand reality the way humans do parallels DeepSeek's approach, whose R1 model exhibits "aha!" moments, pausing to re-evaluate problems just as humans do. These systems, inspired by human cognitive processes, promise to transform everything from environmental modeling to human-AI interaction.

We are seeing early wins: Meta's recent update to its Ray-Ban smart glasses enables continuous, contextual conversations with AI assistants without wake words, along with real-time translation. This is not just a feature update; it is a preview of how AI can augment human capabilities without requiring massive pre-trained models.

Still, this evolution comes with nuanced challenges. Although DeepSeek has dramatically reduced costs through innovative training techniques, this efficiency breakthrough could paradoxically lead to greater overall resource consumption, a phenomenon known as the Jevons paradox, in which technological efficiency improvements often result in increased resource use.

In AI's case, cheaper training could mean more models being trained by more organizations, potentially increasing net energy consumption. But DeepSeek's innovation is different: by demonstrating that advanced performance is possible without cutting-edge hardware, it does not just make AI more efficient; it fundamentally changes how we approach model development.
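The Jevons dynamic can be made concrete with a back-of-the-envelope calculation. The figures below are purely illustrative assumptions, not DeepSeek's actual numbers; they only show how a per-run efficiency gain can still raise net consumption when adoption grows faster.

```python
# Illustrative Jevons-paradox arithmetic: efficiency gains can increase
# total consumption if adoption grows faster than per-unit cost falls.
# All numbers are hypothetical, chosen only to demonstrate the mechanism.

def total_energy(models_trained: int, energy_per_model: float) -> float:
    """Net energy used across all training runs (arbitrary units)."""
    return models_trained * energy_per_model

# Before the efficiency breakthrough: few organizations can afford training.
before = total_energy(models_trained=10, energy_per_model=100.0)

# After: each run is 10x cheaper, but 50x more organizations train models.
after = total_energy(models_trained=500, energy_per_model=10.0)

print(before)  # 1000.0
print(after)   # 5000.0: per-run cost fell 10x, yet net consumption rose 5x
```

The point is not the specific multipliers but the shape of the trade-off: whether net consumption rises depends on whether the adoption multiplier outpaces the efficiency divisor.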

This shift toward intelligent architecture over raw compute power could help us escape the Jevons paradox trap, as the focus moves from "how much compute can we afford?" to "how intelligently can we design our systems?" As Professor Guy Van den Broeck notes, "the overall cost of language model reasoning is certainly not going down." The environmental impact of these systems remains substantial, pushing the industry toward more efficient solutions, exactly the kind of innovation DeepSeek represents.

Prioritize efficient architectures

This shift demands new approaches. DeepSeek's success validates that the future is not about building ever-larger models; it is about creating smarter, more efficient models that work in harmony with human intelligence and environmental constraints.

Meta's chief AI scientist Yann LeCun envisions future systems that spend days or weeks thinking through complex problems, much as humans do. DeepSeek's R1 model, with its ability to pause and reconsider approaches, represents a step toward that vision. Although resource-intensive, this approach could yield breakthroughs in climate change solutions, healthcare innovation and beyond. But as Carnegie Mellon's Ameet Talwalkar wisely warns, we should question anyone claiming certainty about where these technologies will lead us.

For business leaders, this shift presents a clear path forward. We need to prioritize efficient architectures that can:

  • Deploy chains of specialized AI agents rather than single massive models.
  • Invest in systems that optimize for both performance and environmental impact.
  • Build infrastructure that supports iterative, human-in-the-loop development.
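As a minimal sketch of the first recommendation, here is how a chain of small, specialized agents might be composed. The agents, their names, and their roles are hypothetical stand-ins, not any real framework; in practice each step might wrap a compact fine-tuned model rather than a stub function.

```python
from typing import Callable, Dict, List

# A hypothetical "agent" is just a small, specialized transformation of a
# shared context dictionary passed along the chain.
Agent = Callable[[Dict], Dict]

def retrieve(ctx: Dict) -> Dict:
    """Specialized retrieval agent: attaches (stub) supporting documents."""
    ctx["documents"] = [f"doc about {ctx['query']}"]
    return ctx

def reason(ctx: Dict) -> Dict:
    """Specialized reasoning agent: drafts an answer from the documents."""
    ctx["draft"] = f"Answer to '{ctx['query']}' using {len(ctx['documents'])} source(s)"
    return ctx

def review(ctx: Dict) -> Dict:
    """Review step: flags the draft for human-in-the-loop approval."""
    ctx["approved"] = len(ctx["draft"]) > 0
    return ctx

def run_pipeline(agents: List[Agent], query: str) -> Dict:
    """Pass a shared context through each specialized agent in turn."""
    ctx: Dict = {"query": query}
    for agent in agents:
        ctx = agent(ctx)
    return ctx

result = run_pipeline([retrieve, reason, review], "grid-scale battery storage")
print(result["approved"])  # True
```

The design choice this illustrates: each agent stays small and auditable, and the chain (not any single model) carries the end-to-end capability, which is what makes swapping, scaling, or reviewing individual steps cheap.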

Here is what excites me: DeepSeek's breakthrough proves we are moving past the "bigger is better" era and into something much more interesting. With pre-training hitting its limits and innovative companies finding new ways to achieve more with less, an incredible space is opening up for creative solutions.

Intelligent chains of smaller, specialized agents are not just more efficient; they help us solve problems in ways we never imagined. For startups and enterprises willing to think differently, this is our moment to have fun with AI again, to build something that makes sense for people and the planet.

Kiara Nirghin is an award-winning Stanford technologist, bestselling author and co-founder of Chima.

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read more from DataDecisionMakers
