DeepSeek, the Hangzhou, China-based artificial intelligence (AI) firm, released an updated version of its Prover model on Wednesday. Dubbed DeepSeek-Prover-V2, it is a highly specialised model focused on proving formal mathematical theorems. The large language model (LLM) uses the Lean 4 programming language to verify that mathematical proofs are logically sound, checking each step independently. Like the Chinese firm's previous releases, DeepSeek-Prover-V2 is an open-source model and can be downloaded from popular repositories such as GitHub and Hugging Face.
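To illustrate the kind of output such a prover targets, here is a toy Lean 4 proof (not from DeepSeek's materials; the theorem names are standard library lemmas). The Lean checker validates every step, so replacing either proof term with an incorrect one makes the file fail to compile, which is what makes an accepted proof logically sound end to end:

```lean
-- Two small theorems of the kind a prover model might emit.
-- Lean 4 verifies each step; any flawed step is rejected at
-- compile time rather than slipping through.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

theorem succ_pos_example (n : Nat) : 0 < n + 1 :=
  Nat.succ_pos n
```

A model like Prover-V2 generates proofs in this language, and Lean acts as the impartial judge of whether the proof actually holds.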
DeepSeek’s New Mathematics-Focused AI Model Is Here
The AI firm detailed the new model on its GitHub listing page. It is essentially a reasoning-focused model with a visible chain-of-thought (CoT) that operates in the domain of formal mathematics. It is built on and distilled from the DeepSeek-V3 AI model, which was released in December 2024.
DeepSeek-Prover-V2 can be used in a variety of ways. It can solve high-school to college-level mathematical problems and find and fix errors in mathematical theorem proofs. It can also serve as a teaching aid by generating step-by-step explanations of proofs, and it can assist mathematicians and researchers in exploring new theorems and proving their validity.
It is available in two sizes — a seven-billion-parameter model and a larger 671-billion-parameter model. While the latter is trained on top of DeepSeek-V3-Base, the former is built upon DeepSeek-Prover-V1.5-Base and comes with a context length of up to 32,000 tokens.
As for the training process, the researchers implemented a cold-start procedure by prompting the base model to decompose complex problems into a series of subgoals. The proofs of resolved subgoals were then added to the CoT and combined with the base model's reasoning to create initial cold-start data for reinforcement learning.
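The recipe described above can be sketched in a few lines of Python. This is a minimal illustration of the described pipeline, not DeepSeek's implementation: the `decompose` and `try_prove` functions are hypothetical stand-ins for the base-model prompting and the Lean-verified proof attempts, and the data-record names are invented for clarity.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Subgoal:
    statement: str
    proof: Optional[str] = None  # filled in once the subgoal is resolved


@dataclass
class ColdStartRecord:
    problem: str
    chain_of_thought: str  # base-model reasoning plus resolved subgoal proofs


def decompose(problem: str) -> list[Subgoal]:
    """Stand-in for prompting the base model to split a hard theorem
    into simpler subgoals (the real split is model-generated)."""
    return [Subgoal(f"subgoal {i} for: {problem}") for i in range(2)]


def try_prove(subgoal: Subgoal) -> bool:
    """Stand-in for the prover attempting a formal proof and the
    Lean checker validating it; here every attempt succeeds."""
    subgoal.proof = f"<verified proof of {subgoal.statement}>"
    return True


def build_cold_start(problem: str, base_reasoning: str) -> Optional[ColdStartRecord]:
    """Stitch resolved subgoal proofs onto the base model's reasoning
    to form one cold-start training record for reinforcement learning."""
    subgoals = decompose(problem)
    resolved = [sg for sg in subgoals if try_prove(sg)]
    if len(resolved) < len(subgoals):
        return None  # only fully resolved chains seed the RL stage
    cot = base_reasoning + "\n" + "\n".join(sg.proof for sg in resolved)
    return ColdStartRecord(problem, cot)


record = build_cold_start("a + b = b + a", "Reduce commutativity to simpler lemmas.")
```

The key idea is that the cold-start set contains only chains whose subgoals were all formally verified, giving reinforcement learning a clean starting point.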
Notably, apart from GitHub, the AI model can also be downloaded from DeepSeek's Hugging Face listing. The Prover-V2 model highlights how iterative changes to an AI model's training process can significantly improve its specialised capabilities. As with other open-source model releases, details about the core architecture and the full training dataset have not been disclosed.