Nvidia released a new family of artificial intelligence (AI) models on Tuesday at its GPU Technology Conference (GTC) 2025. Dubbed Llama Nemotron, these are the company's latest reasoning-focused large language models (LLMs), designed to serve as a foundation for agentic AI workflows. The Santa Clara-based tech giant said the models are aimed at developers and enterprises, enabling them to build advanced AI agents that can work either independently or as connected teams to perform complex tasks. The Llama Nemotron models are currently available via Nvidia's platform and Hugging Face.
Nvidia Introduces New Reasoning-Focused AI Models
In a newsroom post, the tech giant detailed the new AI models. The Llama Nemotron reasoning models are based on Meta's Llama 3 series models, with post-training enhancements added by Nvidia. The company highlighted that the family of AI models displays improved capabilities in multistep mathematics, coding, reasoning, and complex decision-making.
The company highlighted that the post-training process improved the accuracy of the models by up to 20 percent compared to the base models. Inference speed is also said to be five times higher than that of similar-sized open-source reasoning models. Nvidia claimed that "the models can handle more complex reasoning tasks, enhance decision-making capabilities, and reduce operational costs for enterprises." With these advancements, the LLMs can be used to build and power AI agents.
Llama Nemotron reasoning models are available in three parameter sizes — Nano, Super, and Ultra. The Nano model is best suited for on-device and edge-based tasks that require high accuracy. The Super variant sits in the middle, offering high accuracy and throughput on a single GPU. Finally, the Ultra model is meant to run on multi-GPU servers and offers the highest accuracy for agentic tasks.
The post-training of the reasoning models was done on the Nvidia DGX Cloud using curated synthetic data generated using the Nemotron platform as well as other open models. The tech giant is also making the tools, datasets, and post-training optimisation techniques used to develop the Llama Nemotron models available to the open-source community.
Nvidia is also working with enterprise partners to bring the models to developers and businesses. The reasoning models and NIM microservices can be accessed via Microsoft's Azure AI Foundry, and are also available as an option in the Azure AI Agent Service. SAP is using the models for its Business AI solutions and its AI copilot dubbed Joule, the company said. Other enterprises using Llama Nemotron models include ServiceNow, Accenture, and Deloitte.
The Llama Nemotron Nano and Super models and NIM microservices are available to businesses and developers as an application programming interface (API) via Nvidia's platform as well as its Hugging Face listing. They are offered under the permissive Nvidia Open Model License Agreement, which allows both research and commercial usage.
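For readers curious what calling the models through such an API might look like, here is a minimal sketch, assuming an OpenAI-compatible chat-completions interface on Nvidia's platform. The endpoint URL, the model identifier, and the "detailed thinking" system-prompt toggle used to switch reasoning mode on or off are illustrative assumptions, not details confirmed by the article; check the official model card before relying on them.

```python
# Sketch of a request to a Llama Nemotron reasoning model, assuming an
# OpenAI-compatible chat-completions API. Endpoint and model ID are
# illustrative assumptions.
import json

NIM_ENDPOINT = "https://integrate.api.nvidia.com/v1/chat/completions"  # assumed endpoint
MODEL_ID = "nvidia/llama-3.1-nemotron-nano-8b-v1"  # hypothetical model identifier


def build_request(prompt: str, detailed_thinking: bool = True) -> dict:
    """Build an OpenAI-style chat-completions payload for a Nemotron model.

    The 'detailed thinking on/off' system prompt is an assumed mechanism for
    toggling the model's reasoning mode; verify against the model card.
    """
    system = "detailed thinking on" if detailed_thinking else "detailed thinking off"
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.6,
        "max_tokens": 1024,
    }


# Build (but do not send) a sample payload; sending it would require an
# API key and an HTTP client such as `requests`.
payload = build_request("Plan a three-step data pipeline for log analysis.")
print(json.dumps(payload, indent=2))
```

In practice the payload would be POSTed to the endpoint with an `Authorization: Bearer <api-key>` header; the sketch stops at constructing the request so it runs without credentials.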