NVIDIA brings AI assistants to life with GeForce RTX AI PCs

Project G-Assist, NVIDIA ACE NIMs for Digital Humans, and Generative AI Tools Deliver Advanced AI Experiences on RTX Laptops; Additionally, RTX-accelerated APIs for small language models are coming to Windows Copilot Runtime

COMPUTEX — NVIDIA today announced new NVIDIA RTX™ technology to power AI assistants and digital humans running on new GeForce RTX™ AI laptops.

NVIDIA has revealed Project G-Assist, a technology demonstration of an RTX-powered AI assistant that provides context-aware help for PC games and applications. The Project G-Assist tech demo debuted with ARK: Survival Ascended from Studio Wildcard. NVIDIA also introduced the first PC-based NVIDIA NIM™ inference microservices for the NVIDIA ACE digital human platform.

These technologies are made possible by the NVIDIA RTX AI Toolkit, a new suite of tools and SDKs that help developers optimize and deploy large generative AI models on Windows PCs. They join NVIDIA’s full-stack RTX AI innovations, which accelerate more than 500 PC applications and games and 200 laptop designs from manufacturers.

Additionally, the recently announced RTX AI PC laptops from ASUS and MSI feature GeForce RTX 4070 GPUs and energy-efficient SoCs with Windows 11 AI PC capabilities. These Windows 11 AI PCs will receive a free update to the Copilot+ PC experiences when available.

“NVIDIA kicked off the AI PC era in 2018 with the launch of RTX Tensor Core GPUs and NVIDIA DLSS,” said Jason Paul, vice president of consumer AI at NVIDIA. “Now, with Project G-Assist and NVIDIA ACE, we’re unlocking the next generation of AI-powered experiences for more than 100 million RTX AI PC users.”

Project G-Assist, a GeForce AI assistant
AI assistants are poised to transform in-game and in-app experiences, whether it’s providing game strategies, analyzing multiplayer replays, or assisting with complex creative workflows. Project G-Assist offers a glimpse into that future.

PC games offer vast worlds to explore and intricate mechanics to master, which is challenging and time-consuming even for the most dedicated gamers. Project G-Assist aims to put game knowledge at players’ fingertips through generative AI.

Project G-Assist takes voice or text input from the player, along with contextual information from the game screen, and runs the data through AI vision models. These models enhance the contextual awareness and app-specific understanding of a large language model (LLM) connected to a database of game knowledge, which then generates a tailored response delivered as text or speech.
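
The flow described above can be pictured as a small pipeline. The sketch below is purely illustrative and not NVIDIA’s implementation: the vision model, knowledge lookup and LLM calls are hypothetical placeholders standing in for the real components.

```python
# Illustrative pipeline sketch, not NVIDIA's implementation. All model calls
# and the knowledge lookup below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class AssistantRequest:
    player_query: str      # voice transcript or typed text from the player
    screenshot_path: str   # snapshot of the current game screen

def run_vision_model(screenshot_path: str) -> str:
    """Placeholder: a vision model would describe what is on screen."""
    return "Player is facing a level-30 boss in a swamp biome."

def lookup_game_knowledge(query: str, screen_context: str) -> str:
    """Placeholder: retrieve relevant entries from a game knowledge database."""
    return "That boss is weak to fire damage; ranged weapons are recommended."

def query_llm(query: str, screen_context: str, knowledge: str) -> str:
    """Placeholder: the LLM composes a personalized answer from all inputs."""
    return f"{knowledge} Given your current loadout, craft fire arrows first."

def answer(request: AssistantRequest, speak: bool = False) -> str:
    screen_context = run_vision_model(request.screenshot_path)
    knowledge = lookup_game_knowledge(request.player_query, screen_context)
    reply = query_llm(request.player_query, screen_context, knowledge)
    if speak:
        pass  # a text-to-speech step would synthesize the reply here
    return reply

print(answer(AssistantRequest("How do I beat this boss?", "frame_0042.png")))
```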

NVIDIA partnered with Studio Wildcard to demo the technology with ARK: Survival Ascended. Project G-Assist can help answer questions about creatures, items, lore, objectives, difficult bosses and more. Because Project G-Assist is context-aware, it personalizes its responses to the player’s gaming session.

Additionally, Project G-Assist can configure the player’s gaming system for optimal performance and efficiency. It can provide insight into performance metrics, optimize graphics settings based on the user’s hardware, apply safe overclocking, and even intelligently reduce power consumption while maintaining a performance target.

Launch of the first ACE PC NIM

NVIDIA ACE technology for powering digital humans is now coming to RTX AI PCs and workstations with NVIDIA NIM, inference microservices that let developers cut deployment times from weeks to minutes. ACE NIM microservices deliver high-quality inference, running locally on device, for natural language understanding, speech synthesis, facial animation and more.
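
Many NIM microservices expose a standard HTTP interface once they are running locally. The minimal sketch below assumes an OpenAI-compatible chat endpoint on localhost port 8000 and a hypothetical model name, so treat the route and parameters as assumptions rather than documented ACE APIs.

```python
# Minimal sketch: querying a locally hosted NIM microservice over HTTP.
# Assumptions: the service is already running on this machine, exposes an
# OpenAI-compatible /v1/chat/completions route on port 8000, and serves a
# model registered under the (hypothetical) name "local-llm".
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "local-llm",
        "messages": [
            {"role": "user", "content": "Summarize the quest objectives so far."}
        ],
        "max_tokens": 128,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```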

At COMPUTEX, the PC gaming debut of NVIDIA ACE NIM will be showcased in the Covert Protocol tech demo, developed in collaboration with Inworld AI. It now features NVIDIA Audio2Face™ and NVIDIA Riva automatic speech recognition running locally on device.

Windows Copilot Runtime to add GPU acceleration for local PC SLMs
Microsoft and NVIDIA are collaborating to help developers bring new generative AI capabilities to their Windows native and web applications. This collaboration will give application developers easy application programming interface (API) access to GPU-accelerated small language models (SLMs) with retrieval-augmented generation (RAG) capabilities that run on-device as part of Windows Copilot Runtime.

SLMs offer enormous opportunities for Windows developers, including content summarization, content generation, and task automation. RAG capabilities augment SLMs by giving AI models access to domain-specific information that is not well represented in base models. RAG APIs allow developers to leverage application-specific data sources and tailor SLM behavior and capabilities to application needs.
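
As a rough illustration of the retrieval step behind RAG, the sketch below finds the document most relevant to a query and folds it into the prompt an SLM would receive. The toy hashing embedder and prompt template are stand-ins; the Windows Copilot Runtime APIs themselves are not shown because they have not yet been released.

```python
# Toy RAG retrieval sketch: pick the app-specific passage most relevant to a
# query and prepend it to the prompt for an on-device SLM. The hashing
# embedder is a stand-in for a real embedding model.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy bag-of-words hashing embedding (illustrative only)."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "The export dialog supports PNG, JPEG and TIFF output.",
    "Keyboard shortcuts can be remapped under Settings > Input.",
    "Projects auto-save every five minutes to the local cache.",
]
query = "How do I change keyboard shortcuts?"

doc_vecs = np.stack([embed(d) for d in documents])
scores = doc_vecs @ embed(query)
best = documents[int(np.argmax(scores))]

prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # this augmented prompt would be passed to the on-device SLM
```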

These AI capabilities will be accelerated by NVIDIA RTX GPUs, as well as AI accelerators from other hardware vendors, providing end users with fast, responsive AI experiences across the entire Windows ecosystem.

The API will be released as a developer preview later this year.

4x faster and 3x smaller models with the RTX AI Toolkit
The AI ecosystem has built hundreds of thousands of open-source models for application developers to leverage, but most models are pretrained for general purposes and designed to run in a data center.

To help developers build application-specific AI models that run on PCs, NVIDIA is introducing the RTX AI Toolkit, a suite of tools and SDKs for customizing, optimizing and deploying models on RTX AI PCs. The RTX AI Toolkit will be available later this month for broader developer access.

Developers can customize a pretrained model with open-source QLoRA tools. They can then use the NVIDIA TensorRT™ Model Optimizer to quantize models so they consume up to 3x less RAM. NVIDIA TensorRT Cloud then optimizes the model for peak performance across RTX GPU lines. The result is up to 4x faster performance compared with the pretrained model.
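
A hedged sketch of the customization step is shown below using the open-source Hugging Face Transformers/PEFT QLoRA stack (one common set of open-source QLoRA tools), not NVIDIA’s own tooling; the base model name and LoRA hyperparameters are placeholders, and the TensorRT quantization and TensorRT Cloud stages are separate steps not covered here.

```python
# QLoRA customization sketch using the open-source Transformers/PEFT stack.
# The base model name and hyperparameters are placeholders, not NVIDIA's
# recommended settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# Load the pretrained model in 4-bit to keep fine-tuning memory low (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Attach small low-rank adapter layers; only these are trained on app-specific data.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a Trainer loop over app-specific data would follow
```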

The new NVIDIA AI Inference Manager SDK, now available in early access, simplifies ACE deployment on PCs. It preconfigures the PC with the necessary AI models, engines and dependencies while orchestrating AI inference seamlessly across PCs and the cloud.

Software partners such as Adobe, Blackmagic Design, and Topaz are integrating components of the RTX AI Toolkit into their popular creative applications to accelerate AI performance on RTX PCs.

“Adobe and NVIDIA continue to collaborate to deliver breakthrough customer experiences across all creative workflows, from video to imaging, design, 3D and beyond,” said Deepa Subramaniam, vice president of product marketing, Creative Cloud at Adobe. “TensorRT 10.0 on RTX PCs delivers unprecedented performance and AI-powered capabilities to creators, designers and developers, opening the door to new creative possibilities for content creation in leading creative tools like Photoshop.”

RTX AI Toolkit components, such as TensorRT-LLM, are integrated into popular frameworks and development applications for generative AI, including Automatic1111, ComfyUI, Jan.AI, LangChain, LlamaIndex, Oobabooga, and Sanctum.AI.

AI for content creation
NVIDIA is also integrating RTX AI acceleration into applications for creators, modders and video enthusiasts.

Last year, NVIDIA introduced RTX acceleration using TensorRT for one of the most popular Stable Diffusion UIs, Automatic1111. Starting this week, RTX will also accelerate the popular ComfyUI, delivering up to a 60% performance improvement over the currently shipping version and 7x faster performance compared with the MacBook Pro M3 Max.

NVIDIA RTX Remix is a modding platform for remastering classic DirectX 8 and DirectX 9 games with full ray tracing, NVIDIA DLSS 3.5, and physically accurate materials. RTX Remix includes a runtime renderer and the RTX Remix Toolkit application, which makes it easy to modify game assets and materials.

Last year, NVIDIA made RTX Remix Runtime open source, allowing modders to extend game compatibility and advanced rendering capabilities.

Since the launch of the RTX Remix Toolkit earlier this year, 20,000 modders have used it to mod classic games, resulting in over 100 RTX remasters in development on the RTX Remix Showcase Discord.

This month, NVIDIA will make the RTX Remix Toolkit open source, allowing modders to streamline how assets are replaced and scenes are relit, expand the file formats supported by the RTX Remix asset ingestor, and boost RTX Remix’s AI texture tools with new models.

Additionally, NVIDIA is making the RTX Remix Toolkit’s capabilities accessible via a REST API, allowing modders to live-link RTX Remix to digital content creation tools such as Blender, modding tools such as Hammer, and generative AI applications such as ComfyUI. NVIDIA is also providing an SDK for RTX Remix Runtime so that modders can deploy RTX Remix’s renderer into other applications and games beyond classic DirectX 8 and 9 titles.
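
Since the REST API’s routes are not documented here, the sketch below uses a hypothetical local endpoint and payload purely to show the live-link idea of pushing an asset change to the Toolkit from an external tool.

```python
# Illustrative sketch only: the RTX Remix Toolkit REST endpoints are not
# documented here, so the host, port, route and payload below are hypothetical
# stand-ins showing how a DCC tool or script might live-link an asset update.
import requests

REMIX_API = "http://localhost:8011"  # hypothetical local Toolkit REST server

payload = {
    "asset_path": "assets/crate_01.usd",  # hypothetical replacement asset
    "operation": "replace",
}
resp = requests.post(f"{REMIX_API}/assets", json=payload, timeout=10)
resp.raise_for_status()
print("Toolkit response:", resp.json())
```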

With more of the RTX Remix platform made open source, modders around the world can create even more stunning RTX remasters.

NVIDIA RTX Video, the popular AI-powered super-resolution feature supported in the Google Chrome, Microsoft Edge and Mozilla Firefox browsers, is now available as an SDK for all developers, helping them natively integrate AI for upscaling, sharpening, compression artifact reduction and high dynamic range (HDR) conversion.

Coming soon to Blackmagic Design’s DaVinci Resolve and Wondershare Filmora video editing software, RTX Video will allow video editors to upgrade lower quality video files to 4K resolution, as well as convert standard dynamic range source files to HDR. Additionally, free media player VLC will soon add RTX Video HDR to its existing super-resolution capability.

Learn more about PCs and RTX AI technology by joining NVIDIA at COMPUTEX.

News source: nvidianews.nvidia.com