- Intel’s Crescent Island chip targets AI market with power efficiency
- Intel faces challenges from AMD and Nvidia in the AI chip market
- Nvidia invests $5 billion in Intel for future chip development
The new chip, a GPU, will be optimized for power efficiency and will support a wide range of uses, including AI inference applications, Sachin Katti, Intel's chief technology officer, said at the Open Compute Summit on Tuesday.
“This emphasizes the focus that I talked about earlier: inference, optimized for AI, optimized to deliver the best token economy, the best performance per dollar,” Katti said.
The company’s plans lag behind competitors and underscore the challenge Intel executives and engineers face in capturing a meaningful share of the market for AI chips and systems.
Intel CEO Lip-Bu Tan has vowed to revive the company’s stalled AI efforts after the company effectively shelved projects such as the Gaudi line of chips and the Falcon Shores processor.
Crescent Island will feature 160GB of a slower form of memory than the high-bandwidth memory (HBM) found on data center AI chips from AMD and Nvidia. The chip will be based on a design that Intel has used for its consumer GPUs.
Intel has not revealed which manufacturing process Crescent Island will use. The company did not immediately respond to a request for comment.
Since the boom in generative AI with the launch of OpenAI’s ChatGPT in November 2022, startups and large cloud operators have been rushing to procure GPUs that help run AI workloads on data center servers.
The explosion in demand has led to a supply crisis and exorbitant prices for chips designed or adapted for AI applications.
Katti said at the San Jose trade show that the company would release new data center AI chips each year, matching the annual cadence set by AMD, Nvidia and several cloud computing companies that make their own chips.
Nvidia has dominated the market for chips used to create large AI models such as the one behind ChatGPT. Tan said the company plans to focus its design efforts on chips for running these AI models, the behind-the-scenes inference work that powers AI software.
“Instead of trying to build for every workload, we’re going to focus more and more on inference,” Katti said.
Intel has taken an open, modular approach in which customers can mix and match chips from different vendors, Katti said.
Nvidia announced last month that it would invest $5 billion in Intel, taking a roughly 4% stake and becoming one of its largest shareholders in a partnership to co-develop future chips for PCs and data centers.
The deal is part of Intel’s effort to ensure that Intel’s central processing units (CPUs) are installed in every AI system sold, Katti said.
Reporting by Akash Sriram in Bengaluru and Max Cherney in San Francisco; Editing by Shilpi Majumdar and Stephen Coates
Our Standards: The Thomson Reuters Trust Principles.