DeepSeek-V3
https://github.com/deepseek-ai/DeepSeek-V3
📊 Stats
⭐ Stars: 102,409
📝 Language: Python
📝 Description: No description
🔬 Research Notes
Topics
None
Research Summary
Key Features
Architecture
Use Cases
Assessment
README Excerpt
```
[badges: DeepSeek-V2 badge · Chat (DeepSeek V3) · WeChat (DeepSeek AI) · Twitter/X (deepseek_ai) · Code License: MIT · Model License: Model Agreement]
Table of Contents
1. [Introduction](#1-introduction)
2. [Model Summary](#2-model-summary)
3. [Model Downloads](#3-model-downloads)
4. [Evaluation Results](#4-evaluation-results)
5. [Chat Website & API Platform](#5-chat-website--api-platform)
6. [How to Run Locally](#6-how-to-run-locally)
7. [License](#7-license)
8. [Citation](#8-citation)
9. [Contact](#9-contact)
1. Introduction
We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.
Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance.
We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities.
Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models.
Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training.
In addition, its training process is remarkably stable.
Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks.

2. Model Summary
---
Architecture: Innovative Load Balancing Strategy and Training Objective
The Multi-Token Prediction (MTP) objective can also be used for speculative decoding to accelerate inference.
---
Pre-Training: Towards Ultimate Training Efficiency
Its FP8 mixed-precision training framework significantly enhances training efficiency and reduces training costs, enabling further scaling of the model size without additional overhead.
---
Post-Training: Knowledge Distillation from DeepSeek-R1
---
3. Model Downloads
| Model | #Total Params | #Activated Params | Context Length | Download |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V3-Base | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base) |
| DeepSeek-V3 | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3) |
> [!NOTE]
> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.
```
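The auxiliary-loss-free load-balancing strategy mentioned in the excerpt above can be illustrated with a small sketch: a per-expert bias is added to the routing scores when selecting the top-k experts, but the gating weights are computed from the raw scores, and the bias is nudged up or down based on each expert's recent load. The function names, the sign-based update rule, and the constants below are illustrative assumptions, not the repository's actual implementation.

```python
import math
import random

def route_token(scores, bias, k=2):
    """Pick top-k experts for one token using bias-adjusted scores.

    The bias steers *which* experts are chosen, but gating weights come
    from the raw scores, so balancing does not distort the mixture output
    (an illustrative reading of the auxiliary-loss-free idea).
    """
    order = sorted(range(len(scores)), key=lambda e: scores[e] + bias[e], reverse=True)
    chosen = order[:k]
    z = sum(math.exp(scores[e]) for e in chosen)
    gates = {e: math.exp(scores[e]) / z for e in chosen}  # softmax over chosen experts
    return chosen, gates

def update_bias(bias, counts, gamma=0.01):
    """Nudge each expert's bias down if overloaded, up if underloaded (assumed rule)."""
    target = sum(counts) / len(counts)
    return [b - gamma * ((c > target) - (c < target)) for b, c in zip(bias, counts)]

random.seed(0)
n_experts = 4
bias = [0.0] * n_experts
counts = [0] * n_experts
for _ in range(32):                                   # a toy batch of 32 tokens
    scores = [random.gauss(0, 1) for _ in range(n_experts)]
    chosen, gates = route_token(scores, bias)
    for e in chosen:
        counts[e] += 1
bias = update_bias(bias, counts)
```

Because the bias enters only the selection step, overloaded experts gradually receive fewer tokens without adding an auxiliary loss term to the training objective.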
---
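The Model Summary notes that the MTP module can be reused for speculative decoding. A hedged sketch of that draft-and-verify loop, with deterministic toy functions standing in for both the MTP draft head and the main model (all names and token rules here are invented for illustration):

```python
def draft_tokens(prefix, n):
    """Toy stand-in for an MTP-style draft head: proposes n future tokens.
    Correct for the first two positions, then guesses 0 (illustrative only)."""
    return [((prefix + i) % 7 if i <= 2 else 0) for i in range(1, n + 1)]

def main_model_next(prefix):
    """Toy stand-in for the main model's next-token prediction."""
    return (prefix + 1) % 7

def speculative_step(prefix, n_draft=3):
    """Accept drafted tokens while the main model agrees; on the first
    mismatch, keep the main model's token and stop. One verification pass
    can thus yield several tokens, which is the source of the speedup."""
    accepted = []
    cur = prefix
    for d in draft_tokens(prefix, n_draft):
        verified = main_model_next(cur)
        if d == verified:
            accepted.append(d)       # draft confirmed, keep going
            cur = d
        else:
            accepted.append(verified)  # correct the draft and stop this round
            break
    return accepted
```

For example, starting from token 5 the drafts are `[6, 0, 0]`; the first two are verified and the third is corrected, so one step emits three tokens instead of one.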
*Researched: 2026-03-28*