<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/"><img alt="Homepage"
src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true"/></a>
<a href="https://chat.deepseek.com/"><img alt="Chat"
src="https://img.shields.io/badge/๐ค%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white"/></a>
<a href="https://huggingface.co/deepseek-ai"><img alt="Hugging Face"
src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white"/></a>
<br>
<a href="https://discord.gg/Tc7c45Zzu5"><img alt="Discord"
src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da"/></a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true"><img alt="Wechat"
src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white"/></a>
<a href="https://twitter.com/deepseek_ai"><img alt="Twitter Follow"
src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white"/></a>
<br>
<a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-CODE"><img alt="Code License"
src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53"/></a>
<a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-MODEL"><img alt="Model License"
src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53"/></a>
<br>
<a href="DeepSeek_V3.pdf"><b>Paper Link</b>๐๏ธ</a>
</div>
## Table of Contents
1. [Introduction](#1-introduction)
2. [Model Summary](#2-model-summary)
3. [Model Downloads](#3-model-downloads)
4. [Evaluation Results](#4-evaluation-results)
5. [Chat Website & API Platform](#5-chat-website--api-platform)
6. [How to Run Locally](#6-how-to-run-locally)
7. [License](#7-license)
8. [Citation](#8-citation)
9. [Contact](#9-contact)
## 1. Introduction
We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.
Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance.
We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities.
Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models.
Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training.
In addition, its training process is remarkably stable.
Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks.
<p align="center">
<img width="80%" src="figures/benchmark.png">
</p>
## 2. Model Summary
---
**Architecture: Innovative Load Balancing Strategy and Training Objective**
- On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing.
- We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance.
It can also be used for speculative decoding to accelerate inference (a conceptual sketch follows below).
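As a rough illustration of how an MTP-style draft head can accelerate decoding, the sketch below shows the greedy verify-and-accept step of speculative decoding. This is a conceptual sketch only, not DeepSeek's implementation: `main_model_argmax` is a hypothetical callable standing in for a forward pass of the main model, and real systems verify all draft tokens in a single batched pass.

```python
# Conceptual sketch of greedy speculative decoding: a cheap draft (e.g. an MTP head)
# proposes k tokens; the main model keeps the longest prefix it agrees with.
from typing import Callable, List

def verify_draft(
    prefix: List[int],
    draft_tokens: List[int],
    main_model_argmax: Callable[[List[int]], int],  # hypothetical: main model's next-token argmax
) -> List[int]:
    accepted: List[int] = []
    context = list(prefix)
    for token in draft_tokens:
        target = main_model_argmax(context)
        if target != token:
            accepted.append(target)   # take the main model's token at the first mismatch and stop
            break
        accepted.append(token)        # draft token confirmed "for free"
        context.append(token)
    else:
        accepted.append(main_model_argmax(context))  # bonus token when every draft token matched
    return accepted
```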
---
**Pre-Training: Towards Ultimate Training Efficiency**
- We design an FP8 mixed precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training on an extremely large-scale model.
- Through co-design of algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, nearly achieving full computation-communication overlap.
This significantly enhances our training efficiency and reduces the training costs, enabling us to further scale up the model size without additional overhead.
- At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours.
---
**Post-Training: Knowledge Distillation from DeepSeek-R1**
- We introduce an innovative methodology to distill reasoning capabilities from the long-Chain-of-Thought (CoT) model, specifically from one of the DeepSeek R1 series models, into standard LLMs, particularly DeepSeek-V3. Our pipeline elegantly incorporates the verification and reflection patterns of R1 into DeepSeek-V3 and notably improves its reasoning performance. Meanwhile, we also maintain control over the output style and length of DeepSeek-V3.
---
## 3. Model Downloads
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V3-Base | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base) |
| DeepSeek-V3 | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3) |
</div>
> [!NOTE]
> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.
To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: [How to Run Locally](#6-how-to-run-locally).
For developers looking to dive deeper, we recommend exploring [README_WEIGHTS.md](./README_WEIGHTS.md) for details on the Main Model weights and the Multi-Token Prediction (MTP) Modules. Please note that MTP support is currently under active development within the community, and we welcome your contributions and feedback.
## 4. Evaluation Results
### Base Model
#### Standard Benchmarks
<div align="center">
| | Benchmark (Metric) | # Shots | DeepSeek-V2 | Qwen2.5 72B | LLaMA3.1 405B | DeepSeek-V3 |
|---|-------------------|----------|--------|-------------|---------------|---------|
| | Architecture | - | MoE | Dense | Dense | MoE |
| | # Activated Params | - | 21B | 72B | 405B | 37B |
| | # Total Params | - | 236B | 72B | 405B | 671B |
| English | Pile-test (BPB) | - | 0.606 | 0.638 | **0.542** | 0.548 |
| | BBH (EM) | 3-shot | 78.8 | 79.8 | 82.9 | **87.5** |
| | MMLU (Acc.) | 5-shot | 78.4 | 85.0 | 84.4 | **87.1** |
| | MMLU-Redux (Acc.) | 5-shot | 75.6 | 83.2 | 81.3 | **86.2** |
| | MMLU-Pro (Acc.) | 5-shot | 51.4 | 58.3 | 52.8 | **64.4** |
| | DROP (F1) | 3-shot | 80.4 | 80.6 | 86.0 | **89.0** |
| | ARC-Easy (Acc.) | 25-shot | 97.6 | 98.4 | 98.4 | **98.9** |
| | ARC-Challenge (Acc.) | 25-shot | 92.2 | 94.5 | **95.3** | **95.3** |
| | HellaSwag (Acc.) | 10-shot | 87.1 | 84.8 | **89.2** | 88.9 |
| | PIQA (Acc.) | 0-shot | 83.9 | 82.6 | **85.9** | 84.7 |
| | WinoGrande (Acc.) | 5-shot | **86.3** | 82.3 | 85.2 | 84.9 |
| | RACE-Middle (Acc.) | 5-shot | 73.1 | 68.1 | **74.2** | 67.1 |
| | RACE-High (Acc.) | 5-shot | 52.6 | 50.3 | **56.8** | 51.3 |
| | TriviaQA (EM) | 5-shot | 80.0 | 71.9 | 82.7 | **82.9** |
| | NaturalQuestions (EM) | 5-shot | 38.6 | 33.2 | **41.5** | 40.0 |
| | AGIEval (Acc.) | 0-shot | 57.5 | 75.8 | 60.6 | **79.6** |
| Code | HumanEval (Pass@1) | 0-shot | 43.3 | 53.0 | 54.9 | **65.2** |
| | MBPP (Pass@1) | 3-shot | 65.0 | 72.6 | 68.4 | **75.4** |
| | LiveCodeBench-Base (Pass@1) | 3-shot | 11.6 | 12.9 | 15.5 | **19.4** |
| | CRUXEval-I (Acc.) | 2-shot | 52.5 | 59.1 | 58.5 | **67.3** |
| | CRUXEval-O (Acc.) | 2-shot | 49.8 | 59.9 | 59.9 | **69.8** |
| Math | GSM8K (EM) | 8-shot | 81.6 | 88.3 | 83.5 | **89.3** |
| | MATH (EM) | 4-shot | 43.4 | 54.4 | 49.0 | **61.6** |
| | MGSM (EM) | 8-shot | 63.6 | 76.2 | 69.9 | **79.8** |
| | CMath (EM) | 3-shot | 78.7 | 84.5 | 77.3 | **90.7** |
| Chinese | CLUEWSC (EM) | 5-shot | 82.0 | 82.5 | **83.0** | 82.7 |
| | C-Eval (Acc.) | 5-shot | 81.4 | 89.2 | 72.5 | **90.1** |
| | CMMLU (Acc.) | 5-shot | 84.0 | **89.5** | 73.7 | 88.8 |
| | CMRC (EM) | 1-shot | **77.4** | 75.8 | 76.0 | 76.3 |
| | C3 (Acc.) | 0-shot | 77.4 | 76.7 | **79.7** | 78.6 |
| | CCPM (Acc.) | 0-shot | **93.0** | 88.5 | 78.6 | 92.0 |
| Multilingual | MMMLU-non-English (Acc.) | 5-shot | 64.0 | 74.8 | 73.8 | **79.4** |
</div>
> [!NOTE]
> Best results are shown in bold. Scores with a gap not exceeding 0.3 are considered to be at the same level. DeepSeek-V3 achieves the best performance on most benchmarks, especially on math and code tasks.
> For more evaluation details, please check our paper.
#### Context Window
<p align="center">
<img width="80%" src="figures/niah.png">
</p>
Evaluation results on the ``Needle In A Haystack`` (NIAH) tests. DeepSeek-V3 performs well across all context window lengths up to **128K**.
### Chat Model
#### Standard Benchmarks (Models larger than 67B)
<div align="center">
| | **Benchmark (Metric)** | **DeepSeek V2-0506** | **DeepSeek V2.5-0905** | **Qwen2.5 72B-Inst.** | **Llama3.1 405B-Inst.** | **Claude-3.5-Sonnet-1022** | **GPT-4o 0513** | **DeepSeek V3** |
|---|---------------------|---------------------|----------------------|---------------------|----------------------|---------------------------|----------------|----------------|
| | Architecture | MoE | MoE | Dense | Dense | - | - | MoE |
| | # Activated Params | 21B | 21B | 72B | 405B | - | - | 37B |
| | # Total Params | 236B | 236B | 72B | 405B | - | - | 671B |
| English | MMLU (EM) | 78.2 | 80.6 | 85.3 | **88.6** | **88.3** | 87.2 | **88.5** |
| | MMLU-Redux (EM) | 77.9 | 80.3 | 85.6 | 86.2 | **88.9** | 88.0 | **89.1** |
| | MMLU-Pro (EM) | 58.5 | 66.2 | 71.6 | 73.3 | **78.0** | 72.6 | 75.9 |
| | DROP (3-shot F1) | 83.0 | 87.8 | 76.7 | 88.7 | 88.3 | 83.7 | **91.6** |
| | IF-Eval (Prompt Strict) | 57.7 | 80.6 | 84.1 | 86.0 | **86.5** | 84.3 | 86.1 |
| | GPQA-Diamond (Pass@1) | 35.3 | 41.3 | 49.0 | 51.1 | **65.0** | 49.9 | 59.1 |
| | SimpleQA (Correct) | 9.0 | 10.2 | 9.1 | 17.1 | 28.4 | **38.2** | 24.9 |
| | FRAMES (Acc.) | 66.9 | 65.4 | 69.8 | 70.0 | 72.5 | **80.5** | 73.3 |
| | LongBench v2 (Acc.) | 31.6 | 35.4 | 39.4 | 36.1 | 41.0 | 48.1 | **48.7** |
| Code | HumanEval-Mul (Pass@1) | 69.3 | 77.4 | 77.3 | 77.2 | 81.7 | 80.5 | **82.6** |
| | LiveCodeBench (Pass@1-COT) | 18.8 | 29.2 | 31.1 | 28.4 | 36.3 | 33.4 | **40.5** |
| | LiveCodeBench (Pass@1) | 20.3 | 28.4 | 28.7 | 30.1 | 32.8 | 34.2 | **37.6** |
| | Codeforces (Percentile) | 17.5 | 35.6 | 24.8 | 25.3 | 20.3 | 23.6 | **51.6** |
| | SWE Verified (Resolved) | - | 22.6 | 23.8 | 24.5 | **50.8** | 38.8 | 42.0 |
| | Aider-Edit (Acc.) | 60.3 | 71.6 | 65.4 | 63.9 | **84.2** | 72.9 | 79.7 |
| | Aider-Polyglot (Acc.) | - | 18.2 | 7.6 | 5.8 | 45.3 | 16.0 | **49.6** |
| Math | AIME 2024 (Pass@1) | 4.6 | 16.7 | 23.3 | 23.3 | 16.0 | 9.3 | **39.2** |
| | MATH-500 (EM) | 56.3 | 74.7 | 80.0 | 73.8 | 78.3 | 74.6 | **90.2** |
| | CNMO 2024 (Pass@1) | 2.8 | 10.8 | 15.9 | 6.8 | 13.1 | 10.8 | **43.2** |
| Chinese | CLUEWSC (EM) | 89.9 | 90.4 | **91.4** | 84.7 | 85.4 | 87.9 | 90.9 |
| | C-Eval (EM) | 78.6 | 79.5 | 86.1 | 61.5 | 76.7 | 76.0 | **86.5** |
| | C-SimpleQA (Correct) | 48.5 | 54.1 | 48.4 | 50.4 | 51.3 | 59.3 | **64.8** |
</div>
> [!NOTE]
> All models are evaluated in a configuration that limits the output length to 8K. Benchmarks containing fewer than 1000 samples are tested multiple times using varying temperature settings to derive robust final results. DeepSeek-V3 stands as the best-performing open-source model, and also exhibits competitive performance against frontier closed-source models.
#### Open Ended Generation Evaluation
<div align="center">
| Model | Arena-Hard | AlpacaEval 2.0 |
|-------|------------|----------------|
| DeepSeek-V2.5-0905 | 76.2 | 50.5 |
| Qwen2.5-72B-Instruct | 81.2 | 49.1 |
| LLaMA-3.1 405B | 69.3 | 40.5 |
| GPT-4o-0513 | 80.4 | 51.1 |
| Claude-Sonnet-3.5-1022 | 85.2 | 52.0 |
| DeepSeek-V3 | **85.5** | **70.0** |
</div>
> [!NOTE]
> English open-ended conversation evaluations. For AlpacaEval 2.0, we use the length-controlled win rate as the metric.
## 5. Chat Website & API Platform
You can chat with DeepSeek-V3 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com/sign_in)
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
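For illustration, here is a minimal sketch of calling the OpenAI-compatible API with the official `openai` Python SDK. The base URL and model name below follow the platform documentation at the time of writing, but treat them as assumptions and consult [platform.deepseek.com](https://platform.deepseek.com/) for current values.

```python
# Minimal sketch: querying DeepSeek-V3 through the OpenAI-compatible endpoint.
# Assumes the `openai` package is installed and DEEPSEEK_API_KEY is set;
# base_url and model name are taken from the platform docs and may change.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",  # served by DeepSeek-V3 on the platform
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce DeepSeek-V3 in one sentence."},
    ],
    stream=False,
)
print(response.choices[0].message.content)
```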
## 6. How to Run Locally
DeepSeek-V3 can be deployed locally using the following hardware and open-source community software:
1. **DeepSeek-Infer Demo**: We provide a simple and lightweight demo for FP8 and BF16 inference.
2. **SGLang**: Fully supports the DeepSeek-V3 model in both BF16 and FP8 inference modes, with Multi-Token Prediction [coming soon](https://github.com/sgl-project/sglang/issues/2591).
3. **LMDeploy**: Enables efficient FP8 and BF16 inference for local and cloud deployment.
4. **TensorRT-LLM**: Currently supports BF16 inference and INT4/8 quantization, with FP8 support coming soon.
5. **vLLM**: Supports the DeepSeek-V3 model in FP8 and BF16 modes with tensor parallelism and pipeline parallelism.
6. **AMD GPU**: Enables running the DeepSeek-V3 model on AMD GPUs via SGLang in both BF16 and FP8 modes.
7. **Huawei Ascend NPU**: Supports running DeepSeek-V3 on Huawei Ascend devices.
Since FP8 training is natively adopted in our framework, we only provide FP8 weights. If you require BF16 weights for experimentation, you can use the provided conversion script to perform the transformation.
Here is an example of converting FP8 weights to BF16:
```shell
cd inference
python fp8_cast_bf16.py --input-fp8-hf-path /path/to/fp8_weights --output-bf16-hf-path /path/to/bf16_weights
```
> [!NOTE]
> DeepSeek-V3 is not yet directly supported in Hugging Face Transformers.
### 6.1 Inference with DeepSeek-Infer Demo (example only)
#### System Requirements
> [!NOTE]
> Linux with Python 3.10 only. Mac and Windows are not supported.
Dependencies:
```pip-requirements
torch==2.4.1
triton==3.0.0
transformers==4.46.3
safetensors==0.4.5
```
#### Model Weights & Demo Code Preparation
First, clone our DeepSeek-V3 GitHub repository:
```shell
git clone https://github.com/deepseek-ai/DeepSeek-V3.git
```
Navigate to the `inference` folder and install the dependencies listed in `requirements.txt`. The easiest way is to use a package manager such as `conda` or `uv` to create a new virtual environment and install the dependencies.
```shell
cd DeepSeek-V3/inference
pip install -r requirements.txt
```
Download the model weights from Hugging Face, and put them into the `/path/to/DeepSeek-V3` folder.
#### Model Weights Conversion
Convert Hugging Face model weights to a specific format:
```shell
python convert.py --hf-ckpt-path /path/to/DeepSeek-V3 --save-path /path/to/DeepSeek-V3-Demo --n-experts 256 --model-parallel 16
```
#### Run
Then you can chat with DeepSeek-V3:
```shell
torchrun --nnodes 2 --nproc-per-node 8 --node-rank $RANK --master-addr $ADDR generate.py --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --interactive --temperature 0.7 --max-new-tokens 200
```
Or batch inference on a given file:
```shell
torchrun --nnodes 2 --nproc-per-node 8 --node-rank $RANK --master-addr $ADDR generate.py --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --input-file $FILE
```
### 6.2 Inference with SGLang (recommended)
[SGLang](https://github.com/sgl-project/sglang) currently supports [MLA optimizations](https://lmsys.org/blog/2024-09-04-sglang-v0-3/#deepseek-multi-head-latent-attention-mla-throughput-optimizations), [DP Attention](https://lmsys.org/blog/2024-12-04-sglang-v0-4/#data-parallelism-attention-for-deepseek-models), FP8 (W8A8), FP8 KV Cache, and Torch Compile, delivering state-of-the-art latency and throughput performance among open-source frameworks.
Notably, [SGLang v0.4.1](https://github.com/sgl-project/sglang/releases/tag/v0.4.1) fully supports running DeepSeek-V3 on both **NVIDIA and AMD GPUs**, making it a highly versatile and robust solution.
SGLang also supports [multi-node tensor parallelism](https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3#example-serving-with-2-h208), enabling you to run this model on multiple network-connected machines.
Multi-Token Prediction (MTP) is in development, and progress can be tracked in the [optimization plan](https://github.com/sgl-project/sglang/issues/2591).
Here are the launch instructions from the SGLang team: https://github.com/sgl-project/sglang/tree/main/benchmark/deepseek_v3
### 6.3 Inference with LMDeploy (recommended)
[LMDeploy](https://github.com/InternLM/lmdeploy), a flexible and high-performance inference and serving framework tailored for large language models, now supports DeepSeek-V3. It offers both offline pipeline processing and online deployment capabilities, seamlessly integrating with PyTorch-based workflows.
For comprehensive step-by-step instructions on running DeepSeek-V3 with LMDeploy, please refer to here: https://github.com/InternLM/lmdeploy/issues/2960
### 6.4 Inference with TRT-LLM (recommended)
[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRTLLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: https://github.com/NVIDIA/TensorRT-LLM/tree/deepseek/examples/deepseek_v3.
### 6.5 Inference with vLLM (recommended)
[vLLM](https://github.com/vllm-project/vllm) v0.6.6 supports DeepSeek-V3 inference for FP8 and BF16 modes on both NVIDIA and AMD GPUs. Aside from standard techniques, vLLM offers _pipeline parallelism_ allowing you to run this model on multiple machines connected by networks. For detailed guidance, please refer to the [vLLM instructions](https://docs.vllm.ai/en/latest/serving/distributed_serving.html). Please feel free to follow [the enhancement plan](https://github.com/vllm-project/vllm/issues/11539) as well.
### 6.6 Recommended Inference Functionality with AMD GPUs
In collaboration with the AMD team, we have achieved Day-One support for AMD GPUs using SGLang, with full compatibility for both FP8 and BF16 precision. For detailed guidance, please refer to the [SGLang instructions](#62-inference-with-sglang-recommended).
### 6.7 Recommended Inference Functionality with Huawei Ascend NPUs
The [MindIE](https://www.hiascend.com/en/software/mindie) framework from the Huawei Ascend community has successfully adapted the BF16 version of DeepSeek-V3. For step-by-step guidance on Ascend NPUs, please follow the [instructions here](https://modelers.cn/models/MindIE/deepseekv3).
## 7. License
This code repository is licensed under [the MIT License](LICENSE-CODE). The use of DeepSeek-V3 Base/Chat models is subject to [the Model License](LICENSE-MODEL). DeepSeek-V3 series (including Base and Chat) supports commercial use.
## 8. Citation
```
@misc{deepseekai2024deepseekv3technicalreport,
title={DeepSeek-V3 Technical Report},
author={DeepSeek-AI},
year={2024},
eprint={2412.19437},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.19437},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). | {
"source": "deepseek-ai/DeepSeek-V3",
"title": "README.md",
"url": "https://github.com/deepseek-ai/DeepSeek-V3/blob/main/README.md",
"date": "2024-12-26T09:52:40",
"stars": 88783,
"description": null,
"file_size": 20114
} |
# DeepSeek-V3 Weight File Documentation
## New Fields in `config.json`
- **model_type**: Specifies the model type, which is updated to `deepseek_v3` in this release.
- **num_nextn_predict_layers**: Indicates the number of Multi-Token Prediction (MTP) Modules. The open-sourced V3 weights include **1 MTP Module**.
- **quantization_config**: Describes the configuration for FP8 quantization.
---
## Weight Structure Overview
The DeepSeek-V3 weight file consists of two main components: **Main Model Weights** and **MTP Modules**.
### 1. Main Model Weights
- **Composition**:
- Input/output embedding layers and a complete set of 61 Transformer hidden layers.
- **Parameter Count**:
- Total parameters: **671B**
- Activation parameters: **36.7B** (including 0.9B for Embedding and 0.9B for the output Head).
#### Structural Details
- **Embedding Layer**:
- `model.embed_tokens.weight`
- **Transformer Hidden Layers**:
- `model.layers.0` to `model.layers.60`, totaling `num_hidden_layers` layers.
- **Output Layer**:
- `model.norm.weight`
- `lm_head.weight`
### 2. Multi-Token Prediction (MTP) Modules
- **Composition**:
- Additional MTP Modules defined by the `num_nextn_predict_layers` field. In this model, the value is set to 1.
- **Parameter Count**:
- Parameters: **11.5B unique parameters** (excluding the shared 0.9B Embedding and 0.9B output Head).
- Activation parameters: **2.4B** (including the shared 0.9B Embedding and 0.9B output Head).
#### Structural Details
- **embed_tokens**: **Shares parameters** with the Embedding layer of the Main Model weights.
- **enorm & hnorm**: RMSNorm parameters required for speculative decoding.
- **eh_proj**: Parameters for dimensionality reduction projection on the norm results.
- **Additional Transformer Hidden Layer**:
- `model.layers.61.self_attn & mlp` (structure identical to the Main Model hidden layers).
- **shared_head**: **Shares parameters** with the output Head of the Main Model weights.
---
### Loading Rules
- **Main Model Weights**: Loaded via the `num_hidden_layers` parameter in `config.json`.
- **MTP Modules**: Loaded via the `num_nextn_predict_layers` parameter, with layer IDs appended immediately after the Main Model hidden layers. For example:
- If `num_hidden_layers = 61` and `num_nextn_predict_layers = 1`, the MTP Module's layer ID is `61`.
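As a concrete illustration, the hypothetical helper below (not part of the repository) reads both fields from `config.json` and derives the layer indices:

```python
# Hypothetical helper: derive Main Model and MTP layer indices from config.json.
import json

with open("config.json") as f:
    cfg = json.load(f)

n_main = cfg["num_hidden_layers"]               # 61 for DeepSeek-V3
n_mtp = cfg.get("num_nextn_predict_layers", 0)  # 1 for the released weights

# MTP layer IDs are appended immediately after the Main Model hidden layers.
mtp_layer_ids = list(range(n_main, n_main + n_mtp))

print(f"Main Model layers: model.layers.0 ... model.layers.{n_main - 1}")
print(f"MTP Module layers: {['model.layers.%d' % i for i in mtp_layer_ids]}")
```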
---
## FP8 Weight Documentation
DeepSeek-V3 natively supports FP8 weight format with 128x128 block scaling.
### FP8 Configuration
The FP8 weight file introduces a `quantization_config` field to describe the quantization method. Below is an example configuration:
```json
"quantization_config": {
"activation_scheme": "dynamic",
"fmt": "e4m3",
"quant_method": "fp8",
"weight_block_size": [128, 128]
}
```
- **Quantization Format**:
- Format type: `fp8` and `e4m3` (corresponding to `torch.float8_e4m3fn`).
- Weight block size: `128x128`.
- **Activation Quantization Scheme**:
- Utilizes dynamic activation quantization (`dynamic`).
### Dequantization Method
The FP8 weight file includes a `weight_scale_inv` field, which stores the dequantization scale for each weight block.
- **Storage Format**: `float32 Tensor`, stored alongside the weight data.
- **Dequantization Formula**:
- If the weight block is not aligned to 128, it is zero-padded to 128 before calculating the scale. After quantization, the padded portion is removed.
- The dequantization process is performed as: `(128x128 weight block) * weight_scale_inv`.
Through dequantization of the FP8 weights, runtime operations enable online quantization at a granularity of `per-token-per-128-channel`.
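A minimal reference sketch of this block-wise dequantization is shown below. It assumes the weight is a `torch.float8_e4m3fn` tensor of shape `[out_features, in_features]` and that `weight_scale_inv` holds one `float32` scale per 128x128 block; the repository's inference code provides an optimized kernel, so this loop is for illustration only.

```python
# Illustrative block-wise dequantization of an FP8 weight with 128x128 scaling.
# weight:           torch.float8_e4m3fn, shape [out_features, in_features]
# weight_scale_inv: torch.float32, shape [ceil(out/128), ceil(in/128)]
import torch

def dequantize_fp8(weight: torch.Tensor,
                   weight_scale_inv: torch.Tensor,
                   block: int = 128) -> torch.Tensor:
    out_f, in_f = weight.shape
    w = weight.to(torch.float32)
    for i in range(weight_scale_inv.shape[0]):
        for j in range(weight_scale_inv.shape[1]):
            rows = slice(i * block, min((i + 1) * block, out_f))
            cols = slice(j * block, min((j + 1) * block, in_f))
            w[rows, cols] *= weight_scale_inv[i, j]  # (128x128 weight block) * weight_scale_inv
    return w
```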
--- | {
"source": "deepseek-ai/DeepSeek-V3",
"title": "README_WEIGHTS.md",
"url": "https://github.com/deepseek-ai/DeepSeek-V3/blob/main/README_WEIGHTS.md",
"date": "2024-12-26T09:52:40",
"stars": 88783,
"description": null,
"file_size": 3654
} |
---
name: Bug report
about: Create a report to help us improve
title: "[BUG]"
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here. | {
"source": "deepseek-ai/DeepSeek-V3",
"title": ".github/ISSUE_TEMPLATE/bug_report.md",
"url": "https://github.com/deepseek-ai/DeepSeek-V3/blob/main/.github/ISSUE_TEMPLATE/bug_report.md",
"date": "2024-12-26T09:52:40",
"stars": 88783,
"description": null,
"file_size": 467
} |
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here. | {
"source": "deepseek-ai/DeepSeek-V3",
"title": ".github/ISSUE_TEMPLATE/feature_request.md",
"url": "https://github.com/deepseek-ai/DeepSeek-V3/blob/main/.github/ISSUE_TEMPLATE/feature_request.md",
"date": "2024-12-26T09:52:40",
"stars": 88783,
"description": null,
"file_size": 594
} |
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-R1" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank"><img alt="Homepage"
src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true"/></a>
<a href="https://chat.deepseek.com/" target="_blank"><img alt="Chat"
src="https://img.shields.io/badge/๐ค%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white"/></a>
<a href="https://huggingface.co/deepseek-ai" target="_blank"><img alt="Hugging Face"
src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white"/></a>
<br>
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank"><img alt="Discord"
src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da"/></a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank"><img alt="WeChat"
src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white"/></a>
<a href="https://twitter.com/deepseek_ai" target="_blank"><img alt="Twitter Follow"
src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white"/></a>
<br>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE"><img alt="License"
src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53"/></a>
<br>
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>๐๏ธ</a>
</div>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, will help the research community distill better small models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
| DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
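Concretely, pass@1 under this protocol is the fraction of the 64 samples that are correct, averaged over queries, while cons@64 (reported for the distilled models below) takes a majority vote over the 64 sampled answers. Below is a small sketch of both aggregations, assuming grading has already produced per-sample correctness flags and extracted answers.

```python
# Sketch of the pass@1 and cons@64 aggregations used in this section.
# Assumes correct[q][s] marks sample s of query q as right/wrong, and
# answers[q][s] holds the extracted final-answer strings.
from collections import Counter
from typing import List

def pass_at_1(correct: List[List[bool]]) -> float:
    """Average over queries of the fraction of samples that are correct."""
    return sum(sum(c) / len(c) for c in correct) / len(correct)

def cons_at_k(answers: List[List[str]], references: List[str]) -> float:
    """Majority-vote accuracy: the most frequent sampled answer vs. the reference."""
    hits = sum(
        Counter(samples).most_common(1)[0][0] == ref
        for samples, ref in zip(answers, references)
    )
    return hits / len(answers)
```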
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
**NOTE: DeepSeek-R1 is not yet directly supported in Hugging Face Transformers.**
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
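Putting these recommendations together, here is a hedged sketch of querying a local OpenAI-compatible deployment (e.g. the vLLM or SGLang servers shown above): no system prompt, temperature 0.6 with top-p 0.95, the \boxed{} directive for math, and the reply forced to start with "\<think\>\n" by pre-filling it into the prompt. The endpoint URL and served model name are placeholders, and if your tokenizer's chat template already appends "\<think\>", the manual prefill should be skipped.

```python
# Sketch: apply the usage recommendations against an OpenAI-compatible server.
# The base_url and model name are placeholders for your own deployment.
from openai import OpenAI
from transformers import AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

question = ("Please reason step by step, and put your final answer within \\boxed{}. "
            "What is the sum of the first 100 positive integers?")

# No system prompt: everything goes into the user turn. Pre-fill "<think>\n" so the
# model starts its reasoning block (skip this if the chat template already adds it).
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True,
) + "<think>\n"

completion = client.completions.create(
    model=model_id,
    prompt=prompt,
    temperature=0.6,   # recommended range 0.5-0.7
    top_p=0.95,
    max_tokens=32768,
)
print("<think>\n" + completion.choices[0].text)
```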
### Official Prompts
In the official DeepSeek web/app, we don't use system prompts but design two specific prompts for file upload and web search for better user experience. In addition, the temperature in web/app is 0.6.
For file upload, please follow the template to create prompts, where {file_name}, {file_content} and {question} are arguments.
```
file_template = \
"""[file name]: {file_name}
[file content begin]
{file_content}
[file content end]
{question}"""
```
For Web Search, {search_results}, {cur_date}, and {question} are arguments.
For Chinese queries, we use the following prompt:
```
search_answer_zh_template = \
'''# 以下内容是基于用户发送的消息的搜索结果:
{search_results}
在我给你的搜索结果中,每个结果都是[webpage X begin]...[webpage X end]格式的,X代表每篇文章的数字索引。请在适当的情况下在句子末尾引用上下文。请按照引用编号[citation:X]的格式在答案中对应部分引用上下文。如果一句话源自多个上下文,请列出所有相关的引用编号,例如[citation:3][citation:5],切记不要将引用集中在最后返回引用编号,而是在答案对应部分列出。
在回答时,请注意以下几点:
- 今天是{cur_date}。
- 并非搜索结果的所有内容都与用户的问题密切相关,你需要结合问题,对搜索结果进行甄别、筛选。
- 对于列举类的问题(如列举所有航班信息),尽量将答案控制在10个要点以内,并告诉用户可以查看搜索来源、获得完整信息。优先提供信息完整、最相关的列举项;如非必要,不要主动告诉用户搜索结果未提供的内容。
- 对于创作类的问题(如写论文),请务必在正文的段落中引用对应的参考编号,例如[citation:3][citation:5],不能只在文章末尾引用。你需要解读并概括用户的题目要求,选择合适的格式,充分利用搜索结果并抽取重要信息,生成符合用户要求、极具思想深度、富有创造力与专业性的答案。你的创作篇幅需要尽可能延长,对于每一个要点的论述要推测用户的意图,给出尽可能多角度的回答要点,且务必信息量大、论述详尽。
- 如果回答很长,请尽量结构化、分段落总结。如果需要分点作答,尽量控制在5个点以内,并合并相关的内容。
- 对于客观类的问答,如果问题的答案非常简短,可以适当补充一到两句相关信息,以丰富内容。
- 你需要根据用户要求和回答内容选择合适、美观的回答格式,确保可读性强。
- 你的回答应该综合多个相关网页来回答,不能重复引用一个网页。
- 除非用户要求,否则你回答的语言需要和用户提问的语言保持一致。
# 用户消息为:
{question}'''
```
For English queries, we use the following prompt:
```
search_answer_en_template = \
'''# The following contents are the search results related to the user's message:
{search_results}
In the search results I provide to you, each result is formatted as [webpage X begin]...[webpage X end], where X represents the numerical index of each article. Please cite the context at the end of the relevant sentence when appropriate. Use the citation format [citation:X] in the corresponding part of your answer. If a sentence is derived from multiple contexts, list all relevant citation numbers, such as [citation:3][citation:5]. Be sure not to cluster all citations at the end; instead, include them in the corresponding parts of the answer.
When responding, please keep the following points in mind:
- Today is {cur_date}.
- Not all content in the search results is closely related to the user's question. You need to evaluate and filter the search results based on the question.
- For listing-type questions (e.g., listing all flight information), try to limit the answer to 10 key points and inform the user that they can refer to the search sources for complete information. Prioritize providing the most complete and relevant items in the list. Avoid mentioning content not provided in the search results unless necessary.
- For creative tasks (e.g., writing an essay), ensure that references are cited within the body of the text, such as [citation:3][citation:5], rather than only at the end of the text. You need to interpret and summarize the user's requirements, choose an appropriate format, fully utilize the search results, extract key information, and generate an answer that is insightful, creative, and professional. Extend the length of your response as much as possible, addressing each point in detail and from multiple perspectives, ensuring the content is rich and thorough.
- If the response is lengthy, structure it well and summarize it in paragraphs. If a point-by-point format is needed, try to limit it to 5 points and merge related content.
- For objective Q&A, if the answer is very brief, you may add one or two related sentences to enrich the content.
- Choose an appropriate and visually appealing format for your response based on the user's requirements and the content of the answer, ensuring strong readability.
- Your answer should synthesize information from multiple relevant webpages and avoid repeatedly citing the same webpage.
- Unless the user requests otherwise, your response should be in the same language as the user's question.
# The user's message is:
{question}'''
```
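As a usage illustration (hypothetical helper, not part of the repository), each retrieved page is wrapped in the `[webpage X begin]...[webpage X end]` markers before being substituted into `{search_results}`, and the remaining placeholders are filled with plain strings:

```python
# Hypothetical example of filling the search template defined above; `pages`
# would come from your own retrieval system.
from datetime import date

pages = [
    "Example content of the first retrieved webpage ...",
    "Example content of the second retrieved webpage ...",
]
search_results = "\n".join(
    f"[webpage {i} begin]\n{content}\n[webpage {i} end]"
    for i, content in enumerate(pages, start=1)
)

prompt = search_answer_en_template.format(
    search_results=search_results,
    cur_date=date.today().isoformat(),
    question="What does the user want to know?",
)
```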
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). | {
"source": "deepseek-ai/DeepSeek-R1",
"title": "README.md",
"url": "https://github.com/deepseek-ai/DeepSeek-R1/blob/main/README.md",
"date": "2025-01-20T11:57:28",
"stars": 82023,
"description": null,
"file_size": 19332
} |
# Microsoft Open Source Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
Resources:
- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [[email protected]](mailto:[email protected]) with questions or concerns | {
"source": "microsoft/markitdown",
"title": "CODE_OF_CONDUCT.md",
"url": "https://github.com/microsoft/markitdown/blob/main/CODE_OF_CONDUCT.md",
"date": "2024-11-13T19:56:40",
"stars": 39007,
"description": "Python tool for converting files and office documents to Markdown.",
"file_size": 443
} |
# MarkItDown
[](https://pypi.org/project/markitdown/)

[](https://github.com/microsoft/autogen)
> [!IMPORTANT]
> MarkItDown 0.0.2 alpha 1 (0.0.2a1) introduces a plugin-based architecture. As much as was possible, command-line and Python interfaces have remained the same as 0.0.1a3 to support backward compatibility. Please report any issues you encounter. Some interface changes may yet occur as we continue to refine MarkItDown to a first non-alpha release.
MarkItDown is a utility for converting various files to Markdown (e.g., for indexing, text analysis, etc.).
It supports:
- PDF
- PowerPoint
- Word
- Excel
- Images (EXIF metadata and OCR)
- Audio (EXIF metadata and speech transcription)
- HTML
- Text-based formats (CSV, JSON, XML)
- ZIP files (iterates over contents)
- ... and more!
To install MarkItDown, use pip: `pip install markitdown`. Alternatively, you can install it from the source:
```bash
git clone [email protected]:microsoft/markitdown.git
cd markitdown
pip install -e packages/markitdown
```
## Usage
### Command-Line
```bash
markitdown path-to-file.pdf > document.md
```
Or use `-o` to specify the output file:
```bash
markitdown path-to-file.pdf -o document.md
```
You can also pipe content:
```bash
cat path-to-file.pdf | markitdown
```
### Plugins
MarkItDown also supports 3rd-party plugins. Plugins are disabled by default. To list installed plugins:
```bash
markitdown --list-plugins
```
To enable plugins use:
```bash
markitdown --use-plugins path-to-file.pdf
```
To find available plugins, search GitHub for the hashtag `#markitdown-plugin`. To develop a plugin, see `packages/markitdown-sample-plugin`.
### Azure Document Intelligence
To use Microsoft Document Intelligence for conversion:
```bash
markitdown path-to-file.pdf -o document.md -d -e "<document_intelligence_endpoint>"
```
More information about how to set up an Azure Document Intelligence Resource can be found [here](https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/how-to-guides/create-document-intelligence-resource?view=doc-intel-4.0.0)
### Python API
Basic usage in Python:
```python
from markitdown import MarkItDown
md = MarkItDown(enable_plugins=False) # Set to True to enable plugins
result = md.convert("test.xlsx")
print(result.text_content)
```
Document Intelligence conversion in Python:
```python
from markitdown import MarkItDown
md = MarkItDown(docintel_endpoint="<document_intelligence_endpoint>")
result = md.convert("test.pdf")
print(result.text_content)
```
To use Large Language Models for image descriptions, provide `llm_client` and `llm_model`:
```python
from markitdown import MarkItDown
from openai import OpenAI
client = OpenAI()
md = MarkItDown(llm_client=client, llm_model="gpt-4o")
result = md.convert("example.jpg")
print(result.text_content)
```
### Docker
```sh
docker build -t markitdown:latest .
docker run --rm -i markitdown:latest < ~/your-file.pdf > output.md
```
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
### How to Contribute
You can help by looking at issues or helping review PRs. Any issue or PR is welcome, but we have also marked some as 'open for contribution' and 'open for reviewing' to help facilitate community contributions. These are of course just suggestions, and you are welcome to contribute in any way you like.
<div align="center">
| | All | Especially Needs Help from Community |
|-----------------------|------------------------------------------|------------------------------------------------------------------------------------------|
| **Issues** | [All Issues](https://github.com/microsoft/markitdown/issues) | [Issues open for contribution](https://github.com/microsoft/markitdown/issues?q=is%3Aissue+is%3Aopen+label%3A%22open+for+contribution%22) |
| **PRs** | [All PRs](https://github.com/microsoft/markitdown/pulls) | [PRs open for reviewing](https://github.com/microsoft/markitdown/pulls?q=is%3Apr+is%3Aopen+label%3A%22open+for+reviewing%22) |
</div>
### Running Tests and Checks
- Navigate to the MarkItDown package:
```sh
cd packages/markitdown
```
- Install `hatch` in your environment and run tests:
```sh
pip install hatch # Other ways of installing hatch: https://hatch.pypa.io/dev/install/
hatch shell
hatch test
```
(Alternative) Use the Devcontainer which has all the dependencies installed:
```sh
# Reopen the project in Devcontainer and run:
hatch test
```
- Run pre-commit checks before submitting a PR: `pre-commit run --all-files`
### Contributing 3rd-party Plugins
You can also contribute by creating and sharing 3rd party plugins. See `packages/markitdown-sample-plugin` for more details.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies. | {
"source": "microsoft/markitdown",
"title": "README.md",
"url": "https://github.com/microsoft/markitdown/blob/main/README.md",
"date": "2024-11-13T19:56:40",
"stars": 39007,
"description": "Python tool for converting files and office documents to Markdown.",
"file_size": 6482
} |
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.9 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet) and [Xamarin](https://github.com/xamarin).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/security.md/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/security.md/msrc/create-report).
If you prefer to submit without logging in, send email to [[email protected]](mailto:[email protected]). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/security.md/msrc/pgp).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/security.md/msrc/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/security.md/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK --> | {
"source": "microsoft/markitdown",
"title": "SECURITY.md",
"url": "https://github.com/microsoft/markitdown/blob/main/SECURITY.md",
"date": "2024-11-13T19:56:40",
"stars": 39007,
"description": "Python tool for converting files and office documents to Markdown.",
"file_size": 2655
} |
# TODO: The maintainer of this repo has not yet edited this file
**REPO OWNER**: Do you want Customer Service & Support (CSS) support for this product/project?
- **No CSS support:** Fill out this template with information about how to file issues and get help.
- **Yes CSS support:** Fill out an intake form at [aka.ms/onboardsupport](https://aka.ms/onboardsupport). CSS will work with/help you to determine next steps.
- **Not sure?** Fill out an intake as though the answer were "Yes". CSS will help you decide.
*Then remove this first heading from this SUPPORT.MD file before publishing your repo.*
# Support
## How to file issues and get help
This project uses GitHub Issues to track bugs and feature requests. Please search the existing
issues before filing new issues to avoid duplicates. For new issues, file your bug or
feature request as a new Issue.
For help and questions about using this project, please **REPO MAINTAINER: INSERT INSTRUCTIONS HERE
FOR HOW TO ENGAGE REPO OWNERS OR COMMUNITY FOR HELP. COULD BE A STACK OVERFLOW TAG OR OTHER
CHANNEL. WHERE WILL YOU HELP PEOPLE?**.
## Microsoft Support Policy
Support for this **PROJECT or PRODUCT** is limited to the resources listed above. | {
"source": "microsoft/markitdown",
"title": "SUPPORT.md",
"url": "https://github.com/microsoft/markitdown/blob/main/SUPPORT.md",
"date": "2024-11-13T19:56:40",
"stars": 39007,
"description": "Python tool for converting files and office documents to Markdown.",
"file_size": 1242
} |
# MarkItDown Sample Plugin
[](https://pypi.org/project/markitdown/)

[](https://github.com/microsoft/autogen)
This project shows how to create a sample plugin for MarkItDown. The most important parts are as follows:
First, implement your custom `DocumentConverter`:
```python
from typing import Union
from markitdown import DocumentConverter, DocumentConverterResult
class RtfConverter(DocumentConverter):
def convert(self, local_path, **kwargs) -> Union[None, DocumentConverterResult]:
# Bail if not an RTF file
extension = kwargs.get("file_extension", "")
if extension.lower() != ".rtf":
return None
# Implement the conversion logic here ...
# Return the result
return DocumentConverterResult(
title=title,
text_content=text_content,
)
```
Next, make sure your package implements and exports the following:
```python
# The version of the plugin interface that this plugin uses.
# The only supported version is 1 for now.
__plugin_interface_version__ = 1


# The main entrypoint for the plugin. This is called each time MarkItDown instances are created.
def register_converters(markitdown: MarkItDown, **kwargs):
    """
    Called during construction of MarkItDown instances to register converters provided by plugins.
    """
    # Simply create and attach an RtfConverter instance
    markitdown.register_converter(RtfConverter())
```
Finally, create an entrypoint in the `pyproject.toml` file:
```toml
[project.entry-points."markitdown.plugin"]
sample_plugin = "markitdown_sample_plugin"
```
Here, the key (`sample_plugin`) can be any name, but should ideally be the name of the plugin. The value is the fully qualified name of the package implementing the plugin.
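For context, this entry-point mechanism is standard Python packaging machinery; the sketch below shows how packages registered under such a group can be discovered. It is illustrative only and is not MarkItDown's actual loading code.

```python
# Illustrative sketch: discovering packages registered under the
# "markitdown.plugin" entry-point group (requires Python >= 3.10).
from importlib.metadata import entry_points

for ep in entry_points(group="markitdown.plugin"):
    plugin = ep.load()  # imports the module/package named by the entry-point value
    print(f"Found plugin {ep.name!r} -> {plugin.__name__}")
```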
## Installation
To use the plugin with MarkItDown, it must be installed. To install the plugin from the current directory use:
```bash
pip install -e .
```
Once the plugin package is installed, verify that it is available to MarkItDown by running:
```bash
markitdown --list-plugins
```
To use the plugin for a conversion use the `--use-plugins` flag. For example, to convert a PDF:
```bash
markitdown --use-plugins path-to-file.pdf
```
In Python, plugins can be enabled as follows:
```python
from markitdown import MarkItDown
md = MarkItDown(enable_plugins=True)
result = md.convert("path-to-file.pdf")
print(result.text_content)
```
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies. | {
"source": "microsoft/markitdown",
"title": "packages/markitdown-sample-plugin/README.md",
"url": "https://github.com/microsoft/markitdown/blob/main/packages/markitdown-sample-plugin/README.md",
"date": "2024-11-13T19:56:40",
"stars": 39007,
"description": "Python tool for converting files and office documents to Markdown.",
"file_size": 3144
} |
# MarkItDown
> [!IMPORTANT]
> MarkItDown is a Python package and command-line utility for converting various files to Markdown (e.g., for indexing, text analysis, etc).
>
> For more information, and full documentation, see the project [README.md](https://github.com/microsoft/markitdown) on GitHub.
## Installation
From PyPI:
```bash
pip install markitdown
```
From source:
```bash
git clone [email protected]:microsoft/markitdown.git
cd markitdown
pip install -e packages/markitdown
```
## Usage
### Command-Line
```bash
markitdown path-to-file.pdf > document.md
```
### Python API
```python
from markitdown import MarkItDown
md = MarkItDown()
result = md.convert("test.xlsx")
print(result.text_content)
```
### More Information
For more information, and full documentation, see the project [README.md](https://github.com/microsoft/markitdown) on GitHub.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies. | {
"source": "microsoft/markitdown",
"title": "packages/markitdown/README.md",
"url": "https://github.com/microsoft/markitdown/blob/main/packages/markitdown/README.md",
"date": "2024-11-13T19:56:40",
"stars": 39007,
"description": "Python tool for converting files and office documents to Markdown.",
"file_size": 1391
} |
# Welcome to Ink Kit
Ink Kit is an onchain-focused SDK that delivers a delightful developer experience with ready-to-use app layout templates, themes, and magical animated components.
## Install
```bash
npm install @inkonchain/ink-kit
# or
pnpm install @inkonchain/ink-kit
```
## Resources
- **GitHub**: Visit our [GitHub](https://github.com/inkonchain/ink-kit) repository
- **Documentation**: Visit our [Storybook](https://ink-kit.inkonchain.com/)
- **Contributing**: Visit our [GitHub repository](https://github.com/inkonchain/ink-kit)
## WIP Notice
This is a work in progress: we are constantly adding new components, improving the developer experience, and fixing bugs. | {
"source": "inkonchain/ink-kit",
"title": "README.md",
"url": "https://github.com/inkonchain/ink-kit/blob/main/README.md",
"date": "2024-11-04T16:32:17",
"stars": 33333,
"description": "Onchain-focused SDK with ready-to-use templates, themes, and magical animated components โจ",
"file_size": 680
} |
<img src="../src/images/banner.webp" alt="Ink Kit Banner" style="width: 100%; border-radius: 8px; margin-bottom: 2rem;" />
# Welcome to Ink Kit
Ink Kit is an onchain-focused SDK that delivers a delightful developer experience with ready-to-use app layout templates, themes, and magical animated components.
## Install
```bash
npm install @inkonchain/ink-kit
# or
pnpm install @inkonchain/ink-kit
```
## Usage
```tsx
// Import styles first at the root of your project (required)
import "@inkonchain/ink-kit/style.css";
```
```tsx
// Import components as needed
import { Button } from "@inkonchain/ink-kit";
function App() {
  return (
    <div>
      <Button onClick={() => {}} size="md" variant="secondary">
        Ship It
      </Button>
    </div>
  );
}
```
Note: Ink Kit classes are prefixed with `ink:` and can be customized using CSS variables instead of Tailwind classes. They should be imported first so that your own custom classes take precedence.
## Key Features
- ๐จ **Customizable app layout templates**
- โจ **Magical animated components**
- ๐ญ **Vibrant themes**
- โ๏ธ **Onchain-focused development**
- ๐ **Efficient developer experience**
- ๐ฑ **Polished, engaging interfaces**
## Theming
By default, Ink Kit ships with several themes in the stylesheet:
- Light (`light-theme`)
- Dark (`dark-theme`)
- Contrast (`contrast-theme`)
- Neo (`neo-theme`)
- Morpheus (`morpheus-theme`)
To specify which theme to use, add the `ink:THEME_ID` to your document root:
```tsx
<html class="ink:dark-theme">
...
```
If you want to programmatically set this value, you can use the `useInkThemeClass`:
```tsx
const theme = getMyCurrentTheme();
useInkThemeClass(theme === "light" ? "ink:neo-theme" : "ink:dark-theme");
```
### Custom Theme
To create a custom theme, you can override CSS variables:
```css
:root {
--ink-button-primary: rgb(10, 55, 10);
...
}
```
To see examples on specific colors that you can override, check the following [theme](https://github.com/inkonchain/ink-kit/tree/main/src/styles/theme) section of the Ink Kit repository.
## Resources
- **Documentation**: Visit our [Storybook](https://ink-kit.inkonchain.com/)
- **Contributing**: Visit our [GitHub repository](https://github.com/inkonchain/ink-kit)
## WIP Notice
This is a work in progress: we are constantly adding new components, improving the developer experience, and fixing bugs. | {
"source": "inkonchain/ink-kit",
"title": ".github/README.md",
"url": "https://github.com/inkonchain/ink-kit/blob/main/.github/README.md",
"date": "2024-11-04T16:32:17",
"stars": 33333,
"description": "Onchain-focused SDK with ready-to-use templates, themes, and magical animated components โจ",
"file_size": 2405
} |
<picture>
<source media="(prefers-color-scheme: dark)" srcset="./static/browser-use-dark.png">
<source media="(prefers-color-scheme: light)" srcset="./static/browser-use.png">
<img alt="Shows a black Browser Use Logo in light color mode and a white one in dark color mode." src="./static/browser-use.png" width="full">
</picture>
<h1 align="center">Enable AI to control your browser ๐ค</h1>
[](https://github.com/gregpr07/browser-use/stargazers)
[](https://link.browser-use.com/discord)
[](https://docs.browser-use.com)
[](https://cloud.browser-use.com)
[](https://x.com/gregpr07)
[](https://x.com/mamagnus00)
[](https://app.workweave.ai/reports/repository/org_T5Pvn3UBswTHIsN1dWS3voPg/881458615)
๐ Browser-use is the easiest way to connect your AI agents with the browser.
๐ก See what others are building and share your projects in our [Discord](https://link.browser-use.com/discord) - we'd love to see what you create!
๐ฉ๏ธ Skip the setup - try our hosted version for instant browser automation! [Try it now](https://cloud.browser-use.com).
# Quick start
With pip (Python>=3.11):
```bash
pip install browser-use
```
Install Playwright:
```bash
playwright install
```
Spin up your agent:
```python
from langchain_openai import ChatOpenAI
from browser_use import Agent
import asyncio
from dotenv import load_dotenv

load_dotenv()


async def main():
    agent = Agent(
        task="Go to Reddit, search for 'browser-use', click on the first post and return the first comment.",
        llm=ChatOpenAI(model="gpt-4o"),
    )
    result = await agent.run()
    print(result)


asyncio.run(main())
```
Add your API keys for the provider you want to use to your `.env` file.
```bash
OPENAI_API_KEY=
```
For other settings, models, and more, check out the [documentation ๐](https://docs.browser-use.com).
### Test with UI
You can test browser-use with the [Web UI repository](https://github.com/browser-use/web-ui).
Or simply run the gradio example:
```bash
uv pip install gradio
```
```bash
python examples/ui/gradio_demo.py
```
# Demos
<br/><br/>
[Task](https://github.com/browser-use/browser-use/blob/main/examples/use-cases/shopping.py): Add grocery items to cart, and checkout.
[](https://www.youtube.com/watch?v=L2Ya9PYNns8)
<br/><br/>
Prompt: Add my latest LinkedIn follower to my leads in Salesforce.

<br/><br/>
[Prompt](https://github.com/browser-use/browser-use/blob/main/examples/use-cases/find_and_apply_to_jobs.py): Read my CV & find ML jobs, save them to a file, and then start applying for them in new tabs; if you need help, ask me.
https://github.com/user-attachments/assets/171fb4d6-0355-46f2-863e-edb04a828d04
<br/><br/>
[Prompt](https://github.com/browser-use/browser-use/blob/main/examples/browser/real_browser.py): Write a letter in Google Docs to my Papa, thanking him for everything, and save the document as a PDF.

<br/><br/>
[Prompt](https://github.com/browser-use/browser-use/blob/main/examples/custom-functions/save_to_file_hugging_face.py): Look up models with a license of cc-by-sa-4.0 and sort by most likes on Hugging face, save top 5 to file.
https://github.com/user-attachments/assets/de73ee39-432c-4b97-b4e8-939fd7f323b3
<br/><br/>
## More examples
For more examples see the [examples](examples) folder or join the [Discord](https://link.browser-use.com/discord) and show off your project.
# Vision
Tell your computer what to do, and it gets it done.
## Roadmap
### Agent
- [ ] Improve agent memory (summarize, compress, RAG, etc.)
- [ ] Enhance planning capabilities (load website specific context)
- [ ] Reduce token consumption (system prompt, DOM state)
### DOM Extraction
- [ ] Improve extraction for datepickers, dropdowns, special elements
- [ ] Improve state representation for UI elements
### Rerunning tasks
- [ ] LLM as fallback
- [ ] Make it easy to define workflow templates where the LLM fills in the details
- [ ] Return playwright script from the agent
### Datasets
- [ ] Create datasets for complex tasks
- [ ] Benchmark various models against each other
- [ ] Fine-tuning models for specific tasks
### User Experience
- [ ] Human-in-the-loop execution
- [ ] Improve the generated GIF quality
- [ ] Create various demos for tutorial execution, job application, QA testing, social media, etc.
## Contributing
We love contributions! Feel free to open issues for bugs or feature requests. To contribute to the docs, check out the `/docs` folder.
## Local Setup
To learn more about the library, check out the [local setup ๐](https://docs.browser-use.com/development/local-setup).
## Cooperations
We are forming a commission to define best practices for UI/UX design for browser agents.
Together, we're exploring how software redesign improves the performance of AI agents and gives these companies a competitive advantage by designing their existing software to be at the forefront of the agent age.
Email [Toby](mailto:[email protected]?subject=I%20want%20to%20join%20the%20UI/UX%20commission%20for%20AI%20agents&body=Hi%20Toby%2C%0A%0AI%20found%20you%20in%20the%20browser-use%20GitHub%20README.%0A%0A) to apply for a seat on the committee.
## Citation
If you use Browser Use in your research or project, please cite:
```bibtex
@software{browser_use2024,
author = {Mรผller, Magnus and ลฝuniฤ, Gregor},
title = {Browser Use: Enable AI to control your browser},
year = {2024},
publisher = {GitHub},
url = {https://github.com/browser-use/browser-use}
}
```
<div align="center"> <img src="https://github.com/user-attachments/assets/402b2129-b6ac-44d3-a217-01aea3277dce" width="400"/>
[](https://x.com/gregpr07)
[](https://x.com/mamagnus00)
</div>
<div align="center">
Made with โค๏ธ in Zurich and San Francisco
</div> | {
"source": "browser-use/browser-use",
"title": "README.md",
"url": "https://github.com/browser-use/browser-use/blob/main/README.md",
"date": "2024-10-31T16:00:56",
"stars": 33084,
"description": "Make websites accessible for AI agents",
"file_size": 6849
} |
## Reporting Security Issues
If you believe you have found a security vulnerability in browser-use, please report it through coordinated disclosure.
**Please do not report security vulnerabilities through the repository issues, discussions, or pull requests.**
Instead, please open a new [Github security advisory](https://github.com/browser-use/browser-use/security/advisories/new).
Please include as much of the information listed below as you can to help me better understand and resolve the issue:
* The type of issue (e.g., buffer overflow, SQL injection, or cross-site scripting)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help me triage your report more quickly. | {
"source": "browser-use/browser-use",
"title": "SECURITY.md",
"url": "https://github.com/browser-use/browser-use/blob/main/SECURITY.md",
"date": "2024-10-31T16:00:56",
"stars": 33084,
"description": "Make websites accessible for AI agents",
"file_size": 1037
} |
# Codebase Structure
> The code structure is inspired by https://github.com/Netflix/dispatch.
Another good example of how to structure a scalable codebase is [this repo](https://github.com/zhanymkanov/fastapi-best-practices).
Just a brief document about how we should structure our backend codebase.
## Code Structure
```markdown
src/
/<service name>/
models.py
services.py
prompts.py
views.py
utils.py
routers.py
/_<subservice name>/
```
### Service.py
Always a single file, except if it becomes too long (more than ~500 lines); in that case, split it into `_subservices`.
### Views.py
Always split the views into two parts
```python
# All
...
# Requests
...
# Responses
...
```
If too long โ split into multiple files
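As a purely hypothetical illustration of the two-part split above (the model names below are invented and not taken from the actual codebase), a `views.py` might look like:

```python
# Hypothetical views.py following the "Requests" / "Responses" split described above.
from pydantic import BaseModel

# All
# (shared/base schemas would go here)

# Requests
class CreateTaskRequest(BaseModel):
    task: str
    max_steps: int = 10

# Responses
class CreateTaskResponse(BaseModel):
    task_id: str
    status: str
```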
### Prompts.py
Single file; if too long โ split into multiple files (one prompt per file or so)
### Routers.py
Never split into more than one file | {
"source": "browser-use/browser-use",
"title": "browser_use/README.md",
"url": "https://github.com/browser-use/browser-use/blob/main/browser_use/README.md",
"date": "2024-10-31T16:00:56",
"stars": 33084,
"description": "Make websites accessible for AI agents",
"file_size": 872
} |
# Docs
The official documentation for Browser Use. The docs are published to [Browser Use Docs](https://docs.browser-use.com).
### Development
Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview the documentation changes locally. To install, use the following command
```
npm i -g mintlify
```
Run the following command at the root of your documentation (where mint.json is)
```
mintlify dev
``` | {
"source": "browser-use/browser-use",
"title": "docs/README.md",
"url": "https://github.com/browser-use/browser-use/blob/main/docs/README.md",
"date": "2024-10-31T16:00:56",
"stars": 33084,
"description": "Make websites accessible for AI agents",
"file_size": 427
} |
You are an AI agent designed to automate browser tasks. Your goal is to accomplish the ultimate task following the rules.
# Input Format
Task
Previous steps
Current URL
Open Tabs
Interactive Elements
[index]<type>text</type>
- index: Numeric identifier for interaction
- type: HTML element type (button, input, etc.)
- text: Element description
Example:
[33]<button>Submit Form</button>
- Only elements with numeric indexes in [] are interactive
- elements without [] provide only context
# Response Rules
1. RESPONSE FORMAT: You must ALWAYS respond with valid JSON in this exact format:
{{"current_state": {{"evaluation_previous_goal": "Success|Failed|Unknown - Analyze the current elements and the image to check if the previous goals/actions are successful like intended by the task. Mention if something unexpected happened. Shortly state why/why not",
"memory": "Description of what has been done and what you need to remember. Be very specific. Count here ALWAYS how many times you have done something and how many remain. E.g. 0 out of 10 websites analyzed. Continue with abc and xyz",
"next_goal": "What needs to be done with the next immediate action"}},
"action":[{{"one_action_name": {{// action-specific parameter}}}}, // ... more actions in sequence]}}
2. ACTIONS: You can specify multiple actions in the list to be executed in sequence. But always specify only one action name per item. Use maximum {{max_actions}} actions per sequence.
Common action sequences:
- Form filling: [{{"input_text": {{"index": 1, "text": "username"}}}}, {{"input_text": {{"index": 2, "text": "password"}}}}, {{"click_element": {{"index": 3}}}}]
- Navigation and extraction: [{{"go_to_url": {{"url": "https://example.com"}}}}, {{"extract_content": {{"goal": "extract the names"}}}}]
- Actions are executed in the given order
- If the page changes after an action, the sequence is interrupted and you get the new state.
- Only provide the action sequence until an action which changes the page state significantly.
- Try to be efficient, e.g. fill forms at once, or chain actions where nothing changes on the page
- only use multiple actions if it makes sense.
3. ELEMENT INTERACTION:
- Only use indexes of the interactive elements
- Elements marked with "[]Non-interactive text" are non-interactive
4. NAVIGATION & ERROR HANDLING:
- If no suitable elements exist, use other functions to complete the task
- If stuck, try alternative approaches - like going back to a previous page, new search, new tab etc.
- Handle popups/cookies by accepting or closing them
- Use scroll to find elements you are looking for
- If you want to research something, open a new tab instead of using the current tab
- If captcha pops up, try to solve it - else try a different approach
- If the page is not fully loaded, use wait action
5. TASK COMPLETION:
- Use the done action as the last action as soon as the ultimate task is complete
- Don't use "done" before you are done with everything the user asked you, unless you reach the last step of max_steps.
- If you reach your last step, use the done action even if the task is not fully finished. Provide all the information you have gathered so far. If the ultimate task is completely finished, set success to true. If not everything the user asked for is completed, set success in done to false!
- If you have to do something repeatedly, for example the task says for "each", or "for all", or "x times", always count inside "memory" how many times you have done it and how many remain. Don't stop until you have completed everything the task asked of you. Only call done after the last step.
- Don't hallucinate actions
- Make sure you include everything you found out for the ultimate task in the done text parameter. Do not just say you are done, but include the requested information of the task.
6. VISUAL CONTEXT:
- When an image is provided, use it to understand the page layout
- Bounding boxes with labels on their top right corner correspond to element indexes
7. Form filling:
- If you fill an input field and your action sequence is interrupted, most often something changed e.g. suggestions popped up under the field.
8. Long tasks:
- Keep track of the status and subresults in the memory.
9. Extraction:
- If your task is to find information - call extract_content on the specific pages to get and store the information.
Your responses must be always JSON with the specified format. | {
"source": "browser-use/browser-use",
"title": "browser_use/agent/system_prompt.md",
"url": "https://github.com/browser-use/browser-use/blob/main/browser_use/agent/system_prompt.md",
"date": "2024-10-31T16:00:56",
"stars": 33084,
"description": "Make websites accessible for AI agents",
"file_size": 4421
} |
# Gemini
Detailed video on how to integrate browser-use with Gemini: https://www.youtube.com/watch?v=JluZiWBV_Tc | {
"source": "browser-use/browser-use",
"title": "examples/models/README.md",
"url": "https://github.com/browser-use/browser-use/blob/main/examples/models/README.md",
"date": "2024-10-31T16:00:56",
"stars": 33084,
"description": "Make websites accessible for AI agents",
"file_size": 112
} |
# **User Interfaces of Browser-Use**
| **File Name** | **User Interface** | **Description** | **Example Usage** |
|------------------------|-------------------|-------------------------------------------|-------------------------------------------|
| `command_line.py` | **Terminal** | Parses arguments for command-line execution. | `python command_line.py` |
| `gradio_demo.py` | **Gradio** | Provides a Gradio-based interactive UI. | `python gradio_demo.py` |
| `streamlit_demo.py` | **Streamlit** | Runs a Streamlit-based web interface. | `python -m streamlit run streamlit_demo.py` | | {
"source": "browser-use/browser-use",
"title": "examples/ui/README.md",
"url": "https://github.com/browser-use/browser-use/blob/main/examples/ui/README.md",
"date": "2024-10-31T16:00:56",
"stars": 33084,
"description": "Make websites accessible for AI agents",
"file_size": 716
} |
# Use Cases of Browser-Use
| File Name | Description |
|-----------|------------|
| `captcha.py` | Automates CAPTCHA solving on a demo website. |
| `check_appointment.py` | Checks for available visa appointment slots on the Greece MFA website. |
| `find_and_apply_to_jobs.py` | Searches for job listings, evaluates relevance based on a CV, and applies automatically. |
| `online_coding_agent.py` | Implements a multi-agent system for online code editors, with separate agents for coding and execution. |
| `post-twitter.py` | Provides a template for automated posting on X (Twitter), including new tweets, tagging, and replies. |
| `scrolling_page.py` | Automates webpage scrolling with various scrolling actions and text search functionality. |
| `twitter_post_using_cookies.py` | Automates posting on X (Twitter) using stored authentication cookies. |
| `web_voyager_agent.py` | A general-purpose web navigation agent for tasks like flight booking and course searching. | | {
"source": "browser-use/browser-use",
"title": "examples/use-cases/README.md",
"url": "https://github.com/browser-use/browser-use/blob/main/examples/use-cases/README.md",
"date": "2024-10-31T16:00:56",
"stars": 33084,
"description": "Make websites accessible for AI agents",
"file_size": 974
} |
# Slack Integration
Steps to create and configure a Slack bot:
1. Create a Slack App:
* Go to the Slack API: https://api.slack.com/apps
* Click on "Create New App".
* Choose "From scratch" and give your app a name and select the workspace.
* Provide a name and description for your bot (these are required fields).
2. Configure the Bot:
* Navigate to the "OAuth & Permissions" tab on the left side of the screen.
* Under "Scopes", add the necessary bot token scopes (add these "chat:write", "channels:history", "im:history").
3. Enable Event Subscriptions:
* Navigate to the "Event Subscriptions" tab.
* Enable events and add the necessary bot events (add these "message.channels", "message.im").
* Add your request URL (you can use ngrok to expose your local server if needed). [See how to set up ngrok](#installing-and-starting-ngrok).
* **Note:** The URL provided by ngrok is ephemeral and will change each time ngrok is started. You will need to update the request URL in the bot's settings each time you restart ngrok. [See how to update the request URL](#updating-the-request-url-in-bots-settings).
4. Add the bot to your Slack workspace:
* Navigate to the "OAuth & Permissions" tab.
* Under "OAuth Tokens for Your Workspace", click on "Install App to Workspace".
* Follow the prompts to authorize the app and add it to your workspace.
5. Set up environment variables:
* Obtain the `SLACK_SIGNING_SECRET`:
* Go to the Slack API: https://api.slack.com/apps
* Select your app.
* Navigate to the "Basic Information" tab.
* Copy the "Signing Secret".
* Obtain the `SLACK_BOT_TOKEN`:
* Go to the Slack API: https://api.slack.com/apps
* Select your app.
* Navigate to the "OAuth & Permissions" tab.
* Copy the "Bot User OAuth Token".
* Create a `.env` file in the root directory of your project and add the following lines:
```env
SLACK_SIGNING_SECRET=your-signing-secret
SLACK_BOT_TOKEN=your-bot-token
```
6. Invite the bot to a channel:
* Use the `/invite @your-bot-name` command in the Slack channel where you want the bot to be active.
7. Run the code in `examples/slack_example.py` to start the bot with your bot token and signing secret.
8. Write e.g. "$bu whats the weather in Tokyo?" to start a browser-use task and get a response inside the Slack channel.
## Installing and Starting ngrok
To expose your local server to the internet, you can use ngrok. Follow these steps to install and start ngrok:
1. Download ngrok from the official website: https://ngrok.com/download
2. Create a free account and follow the official steps to install ngrok.
3. Start ngrok by running the following command in your terminal:
```sh
ngrok http 3000
```
Replace `3000` with the port number your local server is running on.
## Updating the Request URL in Bot's Settings
If you need to update the request URL (e.g., when the ngrok URL changes), follow these steps:
1. Go to the Slack API: https://api.slack.com/apps
2. Select your app.
3. Navigate to the "Event Subscriptions" tab.
4. Update the "Request URL" field with the new ngrok URL. The URL should be something like: `https://<ngrok-id>.ngrok-free.app/slack/events`
5. Save the changes.
## Installing Required Packages
To run this example, you need to install the following packages:
- `fastapi`
- `uvicorn`
- `slack_sdk`
You can install these packages using pip:
```sh
pip install fastapi uvicorn slack_sdk
```
| {
"source": "browser-use/browser-use",
"title": "examples/integrations/slack/README.md",
"url": "https://github.com/browser-use/browser-use/blob/main/examples/integrations/slack/README.md",
"date": "2024-10-31T16:00:56",
"stars": 33084,
"description": "Make websites accessible for AI agents",
"file_size": 3596
} |
# Open R1
*A fully open reproduction of DeepSeek-R1. This repo is a work in progress, let's build it together!*
**Table of Contents**
1. [Overview](#overview)
2. [Plan of attack](#plan-of-attack)
3. [Installation](#installation)
4. [Training models](#training-models)
- [SFT](#sft)
- [GRPO](#grpo)
5. [Evaluating models](#evaluating-models)
6. [Reproducing Deepseek's evaluation results](#reproducing-deepseeks-evaluation-results)
7. [Data generation](#data-generation)
- [Generate data from a smol distilled R1 model](#generate-data-from-a-smol-distilled-r1-model)
- [Generate data from DeepSeek-R1](#generate-data-from-deepseek-r1)
8. [Contributing](#contributing)
## Overview
The goal of this repo is to build the missing pieces of the R1 pipeline such that everybody can reproduce and build on top of it. The project is simple by design and mostly consists of:
- `src/open_r1`: contains the scripts to train and evaluate models as well as generate synthetic data:
- `grpo.py`: trains a model with GRPO on a given dataset.
- `sft.py`: performs a simple SFT of a model on a dataset.
- `evaluate.py`: evaluates a model on the R1 benchmarks.
- `generate.py`: generates synthetic data from a model using [Distilabel](https://github.com/argilla-io/distilabel).
- `Makefile`: contains easy-to-run commands for each step in the R1 pipeline leveraging the scripts above.
### Plan of attack
We will use the DeepSeek-R1 [tech report](https://github.com/deepseek-ai/DeepSeek-R1) as a guide, which can roughly be broken down into three main steps:
* Step 1: replicate the R1-Distill models by distilling a high-quality corpus from DeepSeek-R1.
* Step 2: replicate the pure RL pipeline that DeepSeek used to create R1-Zero. This will likely involve curating new, large-scale datasets for math, reasoning, and code.
* Step 3: show we can go from base model to RL-tuned via multi-stage training.
<center>
<img src="assets/plan-of-attack.png" width="500">
</center>
## Installation
> [!CAUTION]
> Libraries rely on CUDA 12.4. If you see errors related to segmentation faults, double check the version your system is running with `nvcc --version`.
To run the code in this project, first, create a Python virtual environment using e.g. `uv`.
To install `uv`, follow the [UV Installation Guide](https://docs.astral.sh/uv/getting-started/installation/).
```shell
uv venv openr1 --python 3.11 && source openr1/bin/activate && uv pip install --upgrade pip
```
> [!TIP]
> For Hugging Face cluster users, add `export UV_LINK_MODE=copy` to your `.bashrc` to suppress cache warnings from `uv`
Next, install vLLM and FlashAttention:
```shell
uv pip install vllm==0.7.2
uv pip install setuptools && uv pip install flash-attn --no-build-isolation
```
This will also install PyTorch `v2.5.1` and it is **very important** to use this version since the vLLM binaries are compiled for it. You can then install the remaining dependencies for your specific use case via `pip install -e .[LIST OF MODES]`. For most contributors, we recommend:
```shell
GIT_LFS_SKIP_SMUDGE=1 uv pip install -e ".[dev]"
```
Next, log into your Hugging Face and Weights and Biases accounts as follows:
```shell
huggingface-cli login
wandb login
```
Finally, check whether your system has Git LFS installed so that you can load and push models/datasets to the Hugging Face Hub:
```shell
git-lfs --version
```
If it isn't installed, run:
```shell
sudo apt-get install git-lfs
```
## Training models
We support training models with either DDP or DeepSpeed (ZeRO-2 and ZeRO-3). For example, to run SFT on a dataset distilled from DeepSeek-R1 with reasoning traces such as [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k), run:
```shell
# Train via command line
accelerate launch --config_file=recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \
--model_name_or_path Qwen/Qwen2.5-1.5B-Instruct \
--dataset_name open-r1/OpenR1-Math-220k \
--learning_rate 1.0e-5 \
--num_train_epochs 1 \
--packing \
--max_seq_length 16384 \
--per_device_train_batch_size 16 \
--gradient_checkpointing \
--bf16 \
--output_dir data/Qwen2.5-1.5B-Open-R1-Distill
# Train via YAML config
accelerate launch --config_file recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \
--config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml
```
Currently, the following tasks are supported:
* Supervised Fine-Tuning `sft`
* Group Relative Policy Optimization `grpo`
> [!TIP]
> If you scale up/down the number of GPUs, we recommend also scaling up the per-device batch size or number of gradient accumulation steps to keep the global batch size constant.
By default, these scripts will push each model to your Hugging Face Hub username, i.e. `{username}/{model_name}-{task}`. You can override the parameters in each YAML config by appending them to the command as follows:
```shell
# Change batch size, number of epochs etc
accelerate launch --config_file recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \
--config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml \
--per_device_train_batch_size=1 --num_train_epochs=5
```
If you also wish to override the Weights and Biases default settings, you can do so as follows:
```shell
accelerate launch --config_file recipes/accelerate_configs/zero3.yaml src/open_r1/sft.py \
--config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml \
--wandb_entity huggingface --wandb_project open-r1 --run_name Qwen2.5-1.5B-GRPO
```
> [!NOTE]
> The training commands below are configured for a node of 8 x H100s (80GB). For different hardware and topologies, you may need to tune the batch size and number of gradient accumulation steps.
### SFT
To run SFT on a dataset distilled from DeepSeek-R1 with reasoning traces such as [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k), run:
```shell
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/zero3.yaml \
src/open_r1/sft.py \
--config recipes/Qwen2.5-1.5B-Instruct/sft/config_demo.yaml
```
### GRPO
To train via the GRPO trainer, we use one GPU to run vLLM for faster generation and the remaining GPUs for training. For example, on a node with 8 GPUs, set `--num_processes` to override the default value in the `accelerate` configs:
```shell
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/zero2.yaml \
--num_processes=7 src/open_r1/grpo.py \
--config recipes/DeepSeek-R1-Distill-Qwen-1.5B/grpo/config_demo.yaml
```
> [!WARNING]
> The chat template used in the distilled DeepSeek models omits the contents of the reasoning block within the `<think>` and `</think>` tags. It also prefills the assistant response with `<think>` which interferes with the format reward function. To handle that, it is important to override the chat template as done in e.g. [recipes/DeepSeek-R1-Distill-Qwen-1.5B/grpo/config_demo.yaml](./recipes/DeepSeek-R1-Distill-Qwen-1.5B/grpo/config_demo.yaml).
We provide a minimal reproducible experiment using GRPO for mathematical reasoning, referencing the approach from [SimpleRL-Reason](https://hkust-nlp.notion.site/simplerl-reason) which uses a 7B model trained on 8K examples. Running this on 8 H100 (80GB) GPUs takes about 3 hours:
```shell
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/zero2.yaml \
--num_processes=7 src/open_r1/grpo.py \
--config recipes/Qwen2.5-Math-7B/grpo/config_simple_rl.yaml
```
Our final [model](https://huggingface.co/Dongwei/Qwen-2.5-7B_Base_Math_smalllr), while using different learning rates, loss functions and reward structures, achieves 69.4% accuracy on MATH-500, demonstrating a 17%+ improvement over the base model.
#### ๐จโ๐ป Training with a code interpreter
We provide a `code` reward function for executing code generated by the policy during training. Currently, this reward function targets code contests like [Codeforces](https://codeforces.com), where solutions are executed against a set of test cases and the overall success rate is returned as the final reward. To ensure safe execution, we use [E2B](https://e2b.dev) sandboxes, which are fast and cheap to run. To use this reward function, first install the necessary dependencies:
```shell
uv pip install -e '.[code]'
```
Then create a `.env` file and place an API token from E2B within it:
```
E2B_API_KEY="e2b_xxx"
```
Then make sure your dataset contains a `verification_info` column with the following schema (adopted from PrimeIntellect's excellent [datasets](https://huggingface.co/collections/PrimeIntellect/synthetic-1-67a2c399cfdd6c9f7fae0c37) of verifiable problems):
```python
{
    "language": "python",
    "test_cases": [
        {
            "input": "4\n4\n0001\n1000\n0011\n0111\n3\n010\n101\n0\n2\n00000\n00001\n4\n01\n001\n0001\n00001\n",
            "output": "1\n3 \n-1\n0\n\n2\n1 2 \n",
            "type": "stdin_stdout",
        }
    ],
}
```
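Conceptually, the reward for each completion is the fraction of these test cases that the generated program passes. Below is a minimal, sandbox-free sketch of that idea; it is illustrative only, since the actual reward function executes candidate code inside E2B sandboxes.

```python
# Illustrative only: score a candidate solution by the fraction of stdin/stdout
# test cases it passes. The real open-r1 code reward runs solutions in E2B sandboxes.
import subprocess

def code_reward(solution_code: str, test_cases: list[dict], timeout: float = 5.0) -> float:
    passed = 0
    for case in test_cases:
        try:
            result = subprocess.run(
                ["python", "-c", solution_code],
                input=case["input"],
                capture_output=True,
                text=True,
                timeout=timeout,
            )
            if result.stdout.strip() == case["output"].strip():
                passed += 1
        except subprocess.TimeoutExpired:
            pass  # a timed-out case simply counts as failed
    return passed / len(test_cases) if test_cases else 0.0
```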
For example, to train a smol model on Python problems, run:
```shell
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/zero2.yaml \
--num_processes=7 src/open_r1/grpo.py \
--config recipes/Qwen2.5-1.5B-Instruct/grpo/config_demo_code.yaml
```
#### Data decontamination
Following [s1: Simple test-time scaling](https://arxiv.org/abs/2501.19393), the data can be decontaminated using the script at [scripts/decontaminate.py](./scripts/decontaminate.py), which decontaminates a dataset using 8-grams and deduplicates the data. Sample run:
```shell
python scripts/decontaminate.py \
--dataset "open-r1/verifiable-coding-problems-python" \
--problem_column problem \
--cleanup
```
It will decontaminate against the benchmark datasets and remove the contaminated samples afterwards. If no `--new_dataset_name` argument is provided, the same dataset name is reused with a `_decontaminated` suffix appended. The check runs against the prompt, which for this dataset is the column `problem`, but a different one can be provided.
Arguments for the script:
```shell
usage: decontaminate.py [-h] --dataset DATASET [--split SPLIT] [--ngram_size NGRAM_SIZE] [--problem_column PROBLEM_COLUMN] [--cleanup] [--new_dataset_name NEW_DATASET_NAME]
options:
-h, --help show this help message and exit
--dataset DATASET Name of the dataset to check for contamination.
--split SPLIT Split to check for contamination, defaults to `train`.
--ngram_size NGRAM_SIZE
Size of n-grams to build, defaults to 8.
--problem_column PROBLEM_COLUMN
Name of the column containing the problem (prompt).
--cleanup Whether to remove the contaminated rows before pushing the dataset.
--new_dataset_name NEW_DATASET_NAME
New name for the dataset. If not provided, will reuse the name and add a `_decontaminated` to the name.
```
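For intuition, the 8-gram check flags any training sample that shares a word-level 8-gram with a benchmark prompt. Here is a simplified sketch of the idea; it is not the actual `scripts/decontaminate.py` logic.

```python
# Simplified illustration of n-gram based decontamination.
def ngrams(text: str, n: int = 8) -> set[str]:
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(sample: str, benchmark_ngrams: set[str], n: int = 8) -> bool:
    # Flag the sample if it shares at least one n-gram with any benchmark prompt.
    return not ngrams(sample, n).isdisjoint(benchmark_ngrams)

# Build the set of benchmark n-grams once, then filter the training prompts.
benchmark_prompts = ["...benchmark problem statements would be loaded here..."]
benchmark_ngrams = set().union(*(ngrams(p) for p in benchmark_prompts))

train_prompts = ["...training problem statements..."]
clean = [p for p in train_prompts if not is_contaminated(p, benchmark_ngrams)]
```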
### Launching jobs on a Slurm cluster
If you have access to a Slurm cluster, we provide a `slurm/train.slurm` script that will automatically queue training jobs for you. Here's how you can use it:
```shell
sbatch --job-name=open_r1 --nodes=1 slurm/train.slurm {model_name} {task} {config_suffix} {accelerator}
```
Here `{model_name}` and `{task}` are defined as above, while `{config_suffix}` refers to the specific config and `{accelerator}` refers to the choice of ๐ค Accelerate config in `recipes/accelerate_configs`. If you wish to override the default config parameters, you can provide them by appending a space-separated string like `'--arg1=value1 --arg2=value2'`. Here's a concrete example to run SFT on 1 node of 8 GPUs:
```shell
# Launch on Slurm and override default hyperparameters
sbatch --job-name=open_r1 --nodes=1 slurm/train.slurm Qwen2.5-1.5B-Instruct sft demo zero3 '--per_device_train_batch_size=1 --num_train_epochs=5'
```
You can scale the number of nodes by increasing the `--nodes` flag.
> [!NOTE]
> The configuration in `slurm/train.slurm` is optimised for the Hugging Face Compute Cluster and may require tweaking to be adapted to your own compute nodes.
## Evaluating models
We use `lighteval` to evaluate models, with custom tasks defined in `src/open_r1/evaluate.py`. For models which fit on a single GPU, run:
```shell
MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=data/evals/$MODEL
# AIME 2024
TASK=aime24
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
--custom-tasks src/open_r1/evaluate.py \
--use-chat-template \
--output-dir $OUTPUT_DIR
# MATH-500
TASK=math_500
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
--custom-tasks src/open_r1/evaluate.py \
--use-chat-template \
--output-dir $OUTPUT_DIR
# GPQA Diamond
TASK=gpqa:diamond
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
--custom-tasks src/open_r1/evaluate.py \
--use-chat-template \
--output-dir $OUTPUT_DIR
# LiveCodeBench
lighteval vllm $MODEL_ARGS "extended|lcb:codegeneration|0|0" \
--use-chat-template \
--output-dir $OUTPUT_DIR
```
> [!IMPORTANT]
> You must set `max_model_length=32768` in the `vllm` command to align with the `max_new_tokens` we define per eval. Without this, `lighteval` will throw an error.
To increase throughput across multiple GPUs, use _data parallel_ as follows:
```shell
NUM_GPUS=8
MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,data_parallel_size=$NUM_GPUS,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
TASK=aime24
OUTPUT_DIR=data/evals/$MODEL
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
--custom-tasks src/open_r1/evaluate.py \
--use-chat-template \
--output-dir $OUTPUT_DIR
```
For large models which require sharding across GPUs, use _tensor parallel_ and run:
```shell
NUM_GPUS=8
MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,tensor_parallel_size=$NUM_GPUS,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
TASK=aime24
OUTPUT_DIR=data/evals/$MODEL
export VLLM_WORKER_MULTIPROC_METHOD=spawn
lighteval vllm $MODEL_ARGS "custom|$TASK|0|0" \
--custom-tasks src/open_r1/evaluate.py \
--use-chat-template \
--output-dir $OUTPUT_DIR
```
You can also launch an evaluation with `make evaluate`, specifying the model, task, and optionally the parallelism technique and number of GPUs.
To evaluate on a single GPU:
```shell
make evaluate MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B TASK=aime24
```
To use Data Parallelism:
```shell
make evaluate MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B TASK=aime24 PARALLEL=data NUM_GPUS=8
```
To use Tensor Parallelism:
```shell
make evaluate MODEL=deepseek-ai/DeepSeek-R1-Distill-Qwen-32B TASK=aime24 PARALLEL=tensor NUM_GPUS=8
```
## Reproducing Deepseek's evaluation results
> [!NOTE]
> The DeepSeek-R1 paper uses sampling with 64 responses per query to estimate `pass@1`. Below, we report the results from sampling 1 response per query, which likely explains the small 1-3ฯ discrepancies between our results and theirs.
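To make the note concrete, pass@1 estimated from k samples per query is simply the mean correctness over those samples, so a single-sample estimate has much higher variance. A small illustrative simulation with a hypothetical pass rate:

```python
# Illustrative: estimating pass@1 from k samples per query.
# With k=1 the estimate is much noisier than with k=64, which can account for
# small discrepancies between reported numbers.
import random

random.seed(0)
TRUE_PASS_RATE = 0.3  # hypothetical per-query probability of answering correctly

def estimate_pass_at_1(num_queries: int, samples_per_query: int) -> float:
    per_query_means = []
    for _ in range(num_queries):
        correct = sum(random.random() < TRUE_PASS_RATE for _ in range(samples_per_query))
        per_query_means.append(correct / samples_per_query)
    return sum(per_query_means) / num_queries

print("k=1 :", estimate_pass_at_1(num_queries=30, samples_per_query=1))   # noisy
print("k=64:", estimate_pass_at_1(num_queries=30, samples_per_query=64))  # much tighter
```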
### AIME 2024
We are able to reproduce Deepseek's reported results on the AIME 2024 benchmark within ~1-3 standard deviations:
| Model | AIME 2024 (๐ค LightEval) | AIME 2024 (DeepSeek Reported) |
|:------------------------------|:-----------------------:|:----------------------------:|
| DeepSeek-R1-Distill-Qwen-1.5B | 26.7 | 28.9 |
| DeepSeek-R1-Distill-Qwen-7B | 56.6 | 55.5 |
| DeepSeek-R1-Distill-Qwen-14B | 60.0 | 69.7 |
| DeepSeek-R1-Distill-Qwen-32B | 73.2 | 72.6 |
| DeepSeek-R1-Distill-Llama-8B | 43.3 | 50.4 |
| DeepSeek-R1-Distill-Llama-70B | 73.3 | 70.0 |
To reproduce these results use the following command:
```shell
NUM_GPUS=1 # Set to 8 for 32B and 70B models
MODEL=deepseek-ai/{model_name}
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,data_parallel_size=$NUM_GPUS,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=data/evals/$MODEL
lighteval vllm $MODEL_ARGS "custom|aime24|0|0" \
--custom-tasks src/open_r1/evaluate.py \
--use-chat-template \
--output-dir $OUTPUT_DIR
```
Alternatively, you can launch Slurm jobs as follows:
```shell
python scripts/run_benchmarks.py --model-id {model_id} --benchmarks aime24
```
### MATH-500
We are able to reproduce Deepseek's reported results on the MATH-500 benchmark within ~1-3 standard deviations:
| Model | MATH-500 (๐ค LightEval) | MATH-500 (DeepSeek Reported) |
|:------------------------------|:-----------------------:|:----------------------------:|
| DeepSeek-R1-Distill-Qwen-1.5B | 84.6 | 83.9 |
| DeepSeek-R1-Distill-Qwen-7B | 93.0 | 92.8 |
| DeepSeek-R1-Distill-Qwen-14B | 95.0 | 93.9 |
| DeepSeek-R1-Distill-Qwen-32B | 96.6 | 94.3 |
| DeepSeek-R1-Distill-Llama-8B | 88.6 | 89.1 |
| DeepSeek-R1-Distill-Llama-70B | 96.4 | 94.5 |
To reproduce these results use the following command:
```shell
NUM_GPUS=1 # Set to 8 for 32B and 70B models
MODEL=deepseek-ai/{model_name}
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,data_parallel_size=$NUM_GPUS,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=data/evals/$MODEL
lighteval vllm $MODEL_ARGS "custom|math_500|0|0" \
--custom-tasks src/open_r1/evaluate.py \
--use-chat-template \
--output-dir $OUTPUT_DIR
```
Alternatively, you can launch Slurm jobs as follows:
```shell
python scripts/run_benchmarks.py --model-id {model_id} --benchmarks math_500
```
### GPQA Diamond
We are able to reproduce Deepseek's reported results on the GPQA Diamond benchmark within ~1-3 standard deviations:
| Model | GPQA Diamond (๐ค LightEval) | GPQA Diamond (DeepSeek Reported) |
|:------------------------------|:---------------------------:|:--------------------------------:|
| DeepSeek-R1-Distill-Qwen-1.5B | 34.3 | 33.8 |
| DeepSeek-R1-Distill-Qwen-7B | 50.5 | 49.1 |
| DeepSeek-R1-Distill-Qwen-14B | 59.6 | 59.1 |
| DeepSeek-R1-Distill-Qwen-32B | 63.6 | 62.1 |
| DeepSeek-R1-Distill-Llama-8B | 52.0 | 49.0 |
| DeepSeek-R1-Distill-Llama-70B | 67.2 | 65.2 |
To reproduce these results use the following command:
```shell
NUM_GPUS=1 # Set to 8 for 32B and 70B models
MODEL=deepseek-ai/{model_name}
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,data_parallel_size=$NUM_GPUS,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=data/evals/$MODEL
lighteval vllm $MODEL_ARGS "custom|gpqa:diamond|0|0" \
--custom-tasks src/open_r1/evaluate.py \
--use-chat-template \
--output-dir $OUTPUT_DIR
```
```shell
python scripts/run_benchmarks.py --model-id {model_id} --benchmarks gpqa
```
### LiveCodeBench
We are able to reproduce Deepseek's reported results on the LiveCodeBench code generation benchmark within ~1-3 standard deviations:
| Model                          | LiveCodeBench (🤗 LightEval) | LiveCodeBench (DeepSeek Reported) |
|:------------------------------|:----------------------------:|:--------------------------------:|
| DeepSeek-R1-Distill-Qwen-1.5B | 16.3 | 16.9 |
| DeepSeek-R1-Distill-Qwen-7B | 36.6 | 37.6 |
| DeepSeek-R1-Distill-Qwen-14B | 51.5 | 53.1 |
| DeepSeek-R1-Distill-Qwen-32B | 56.6 | 57.2 |
| DeepSeek-R1-Distill-Llama-8B | 37.0 | 39.6 |
| DeepSeek-R1-Distill-Llama-70B | 54.5 | 57.5 |
To reproduce these results use the following command:
```shell
NUM_GPUS=1 # Set to 8 for 32B and 70B models, or data_parallel_size=8 with the smaller models for speed
MODEL=deepseek-ai/{model_name}
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,data_parallel_size=$NUM_GPUS,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}"
OUTPUT_DIR=data/evals/$MODEL
lighteval vllm $MODEL_ARGS "extended|lcb:codegeneration|0|0" \
--use-chat-template \
--output-dir $OUTPUT_DIR
```
```shell
python scripts/run_benchmarks.py --model-id {model_id} --benchmarks lcb
```
## Data generation
### Generate data from a smol distilled R1 model
The following example can be run in 1xH100.
First install the following dependencies:
```shell
uv pip install "distilabel[vllm]>=1.5.2"
```
Now save the following snippet into a file named `pipeline.py` and run it with `python pipeline.py`. It will generate 4 outputs for each of the 10 examples (change the username for the repository to your org/user name):
```python
from datasets import load_dataset
from distilabel.models import vLLM
from distilabel.pipeline import Pipeline
from distilabel.steps.tasks import TextGeneration

prompt_template = """\
You will be given a problem. Please reason step by step, and put your final answer within \boxed{}:
{{ instruction }}"""

dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train").select(range(10))

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # Exchange with another smol distilled r1

with Pipeline(
    name="distill-qwen-7b-r1",
    description="A pipeline to generate data from a distilled r1 model",
) as pipeline:
    llm = vLLM(
        model=model_id,
        tokenizer=model_id,
        extra_kwargs={
            "tensor_parallel_size": 1,
            "max_model_len": 8192,
        },
        generation_kwargs={
            "temperature": 0.6,
            "max_new_tokens": 8192,
        },
    )
    prompt_column = "problem"
    text_generation = TextGeneration(
        llm=llm,
        template=prompt_template,
        num_generations=4,
        input_mappings={"instruction": prompt_column} if prompt_column is not None else {},
    )

if __name__ == "__main__":
    distiset = pipeline.run(dataset=dataset)
    distiset.push_to_hub(repo_id="username/numina-deepseek-r1-qwen-7b")
```
Take a look at the sample dataset at [HuggingFaceH4/numina-deepseek-r1-qwen-7b](https://huggingface.co/datasets/HuggingFaceH4/numina-deepseek-r1-qwen-7b).
### Generate data from DeepSeek-R1
To run the bigger DeepSeek-R1, we used 2 nodes, each with 8รH100 GPUs using the slurm file present in this repo at `slurm/generate.slurm`. First, install the dependencies:
(for now we need to install the vllm dev wheel that [fixes the R1 cuda graph capture](https://github.com/vllm-project/vllm/commits/221d388cc5a836fa189305785ed7e887cea8b510/csrc/moe/moe_align_sum_kernels.cu))
```shell
pip install https://wheels.vllm.ai/221d388cc5a836fa189305785ed7e887cea8b510/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu121
uv pip install "distilabel[vllm,ray,openai]>=1.5.2"
```
And then run the following command:
```shell
sbatch slurm/generate.slurm \
--hf-dataset AI-MO/NuminaMath-TIR \
--temperature 0.6 \
--prompt-column problem \
--model deepseek-ai/DeepSeek-R1 \
--hf-output-dataset username/r1-dataset
```
> [!NOTE]
> While the job is running, you can set up an SSH tunnel through the cluster login node to access the Ray dashboard from your computer by running `ssh -L 8265:ray_ip_head_node:8265 <login_node>` and then browsing `http://localhost:8265`
## Contributing
Contributions are welcome. Please refer to https://github.com/huggingface/open-r1/issues/23. | {
"source": "huggingface/open-r1",
"title": "README.md",
"url": "https://github.com/huggingface/open-r1/blob/main/README.md",
"date": "2025-01-24T15:44:11",
"stars": 21442,
"description": "Fully open reproduction of DeepSeek-R1",
"file_size": 24807
} |
**TODO:** We will add more recipes in the future, just like the alignment-handbook; this is the purpose of adding recipes to this project. | {
"source": "huggingface/open-r1",
"title": "recipes/README.md",
"url": "https://github.com/huggingface/open-r1/blob/main/recipes/README.md",
"date": "2025-01-24T15:44:11",
"stars": 21442,
"description": "Fully open reproduction of DeepSeek-R1",
"file_size": 134
} |
## Serving DeepSeek-R1 on 2x8 H100 SLURM nodes with SGLang
1. Set up the environment (adjust for your cuda version):
```bash
conda create -n sglang124 python=3.11
conda activate sglang124
pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124
pip install sgl-kernel --force-reinstall --no-deps
pip install "sglang[all]>=0.4.2.post4" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer/
```
2. Run the server and wait for the model to load:
```bash
sbatch slurm/serve_r1.slurm -m "/fsx/deepseek-r1-checkpoint" -e "sglang124"
```
3. Run the data generation script:
```bash
python scripts/generate_reasoning.py \
--dataset-name "AI-MO/NuminaMath-1.5" \
--output-file "numinamath_r1_generations.jsonl" \
--prompt-column "problem" \
--uuid-column "problem" \
--api-addr "<SGLANG_SERVER_ADDRESS>:39877" \
--num-generations 2 \
--max-tokens 16384 \
--max-concurrent 200
``` | {
"source": "huggingface/open-r1",
"title": "slurm/README.md",
"url": "https://github.com/huggingface/open-r1/blob/main/slurm/README.md",
"date": "2025-01-24T15:44:11",
"stars": 21442,
"description": "Fully open reproduction of DeepSeek-R1",
"file_size": 937
} |
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="images/logo.svg" width="60%" alt="DeepSeek LLM" />
</div>
<hr>
<div align="center">
<h1>๐ Janus-Series: Unified Multimodal Understanding and Generation Models</h1>
</div>
<div align="center">
<a href="https://www.deepseek.com/" target="_blank">
<img alt="Homepage" src="images/badge.svg" />
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" />
</a>
</div>
<div align="center">
<!-- <a href="https://discord.gg/Tc7c45Zzu5" target="_blank">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" />
</a> -->
<!-- <a href="images/qr.jpeg" target="_blank">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" />
</a> -->
<!-- <a href="https://twitter.com/deepseek_ai" target="_blank">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" />
</a> -->
</div>
<div align="center">
<a href="LICENSE-CODE">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53">
</a>
<a href="LICENSE-MODEL">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53">
</a>
</div>
<p align="center">
<a href="#2-model-download"><b>๐ฅ Model Download</b></a> |
<a href="#3-quick-start"><b>โก Quick Start</b></a> |
<a href="#4-license"><b>๐ License</b></a> |
<a href="#5-citation"><b>๐ Citation</b></a> <br>
<!-- ๐ Paper Link (<a href="https://arxiv.org/abs/2410.13848"><b>Janus</b></a>, <a href="https://arxiv.org/abs/2410.13848"><b>JanusFlow</b></a>) | -->
๐ค Online Demo (<a href="https://huggingface.co/spaces/deepseek-ai/Janus-Pro-7B"><b>Janus-Pro-7B</b></a>, <a href="https://huggingface.co/spaces/deepseek-ai/Janus-1.3B"><b>Janus</b></a>, <a href="https://huggingface.co/spaces/deepseek-ai/JanusFlow-1.3B"><b>JanusFlow</b></a>)
</p>
## News
**2025.01.27**: Janus-Pro is released, an advanced version of Janus, improving both multimodal understanding and visual generation significantly. See [paper](./janus_pro_tech_report.pdf)
**2024.11.13**: JanusFlow is released, a new unified model with rectified flow for image generation. See [paper](https://arxiv.org/abs/2411.07975), [demo](https://huggingface.co/spaces/deepseek-ai/JanusFlow-1.3B) and [usage](https://github.com/deepseek-ai/Janus?tab=readme-ov-file#janusflow).
**2024.10.23**: Evaluation code for reproducing the multimodal understanding results from the paper has been added to VLMEvalKit. Please refer to [this link]( https://github.com/open-compass/VLMEvalKit/pull/541).
**2024.10.20**: (1) Fix a bug in [tokenizer_config.json](https://huggingface.co/deepseek-ai/Janus-1.3B/blob/main/tokenizer_config.json). The previous version caused classifier-free guidance to not function properly, resulting in relatively poor visual generation quality. (2) Release Gradio demo ([online demo](https://huggingface.co/spaces/deepseek-ai/Janus-1.3B) and [local](#gradio-demo)).
## 1. Introduction
<a href="./janus_pro_tech_report.pdf"><b>Janus-Pro: Unified Multimodal Understanding and
Generation with Data and Model Scaling</b></a>
**Janus-Pro** is an advanced version of the previous work Janus. Specifically, Janus-Pro incorporates (1) an optimized training strategy, (2) expanded training data, and (3) scaling to larger model size. With these improvements, Janus-Pro achieves significant advancements in both multimodal understanding and text-to-image instruction-following capabilities, while also enhancing the stability of text-to-image generation.
<div align="center">
<img alt="image" src="images/teaser_januspro.png" style="width:90%;">
</div>
<a href="https://arxiv.org/abs/2410.13848"><b>Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation</b></a>
**Janus** is a novel autoregressive framework that unifies multimodal understanding and generation. It addresses the limitations of previous approaches by decoupling visual encoding into separate pathways, while still utilizing a single, unified transformer architecture for processing. The decoupling not only alleviates the conflict between the visual encoder’s roles in understanding and generation, but also enhances the framework’s flexibility. Janus surpasses previous unified models and matches or exceeds the performance of task-specific models. The simplicity, high flexibility, and effectiveness of Janus make it a strong candidate for next-generation unified multimodal models.
<div align="center">
<img alt="image" src="images/teaser.png" style="width:90%;">
</div>
<a href="https://arxiv.org/abs/2411.07975"><b>JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation</b></a>
**JanusFlow** introduces a minimalist architecture that integrates autoregressive language models with rectified flow, a state-of-the-art method in generative modeling. Our key finding demonstrates that rectified flow can be straightforwardly trained within the large language model framework, eliminating the need for complex architectural modifications. Extensive experiments show that JanusFlow achieves comparable or superior performance to specialized models in their respective domains, while significantly outperforming existing unified approaches across standard benchmarks. This work represents a step toward more efficient and versatile vision-language models.
<div align="center">
<img alt="image" src="images/teaser_janusflow.png" style="width:90%;">
</div>
## 2. Model Download
We release Janus to the public to support a broader and more diverse range of research within both academic and commercial communities.
Please note that the use of this model is subject to the terms outlined in the [License section](#4-license). Commercial usage is
permitted under these terms.
### Hugging Face
| Model | Sequence Length | Download |
|-----------------------|-----------------|-----------------------------------------------------------------------------|
| Janus-1.3B             | 4096            | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/Janus-1.3B)             |
| JanusFlow-1.3B         | 4096            | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/JanusFlow-1.3B)         |
| Janus-Pro-1B           | 4096            | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/Janus-Pro-1B)           |
| Janus-Pro-7B           | 4096            | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/Janus-Pro-7B)           |
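If you want to cache a checkpoint locally before running the examples below, a minimal sketch using `huggingface_hub` (assuming the package is installed; it is not listed among the Janus dependencies here) looks like this:
```python
# Hedged sketch: pre-download a checkpoint from the table above.
# Assumes `pip install huggingface_hub`; the repo id matches the Hugging Face links.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="deepseek-ai/Janus-Pro-7B")
print(f"Model files cached at: {local_dir}")
```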
## 3. Quick Start
<details>
<summary><h3>Janus-Pro</h3></summary>
### Installation
In a `Python >= 3.8` environment, install the necessary dependencies by running the following command:
```shell
pip install -e .
```
### Simple Inference Example
#### Multimodal Understanding
```python
import torch
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor
from janus.utils.io import load_pil_images
# specify the path to the model
model_path = "deepseek-ai/Janus-Pro-7B"
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer
vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
model_path, trust_remote_code=True
)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
# example inputs (placeholders; replace with your own question and image path)
question = "Describe this image."
image = "images/equation.png"

conversation = [
{
"role": "<|User|>",
"content": f"<image_placeholder>\n{question}",
"images": [image],
},
{"role": "<|Assistant|>", "content": ""},
]
# load images and prepare for inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
conversations=conversation, images=pil_images, force_batchify=True
).to(vl_gpt.device)
# run the image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
# run the model to get the response
outputs = vl_gpt.language_model.generate(
inputs_embeds=inputs_embeds,
attention_mask=prepare_inputs.attention_mask,
pad_token_id=tokenizer.eos_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
max_new_tokens=512,
do_sample=False,
use_cache=True,
)
answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(f"{prepare_inputs['sft_format'][0]}", answer)
```
#### Text-to-Image Generation
```python
import os
import PIL.Image
import torch
import numpy as np
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor
# specify the path to the model
model_path = "deepseek-ai/Janus-Pro-7B"
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer
vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
model_path, trust_remote_code=True
)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
conversation = [
{
"role": "<|User|>",
"content": "A stunning princess from kabul in red, white traditional clothing, blue eyes, brown hair",
},
{"role": "<|Assistant|>", "content": ""},
]
sft_format = vl_chat_processor.apply_sft_template_for_multi_turn_prompts(
conversations=conversation,
sft_format=vl_chat_processor.sft_format,
system_prompt="",
)
prompt = sft_format + vl_chat_processor.image_start_tag
@torch.inference_mode()
def generate(
mmgpt: MultiModalityCausalLM,
vl_chat_processor: VLChatProcessor,
prompt: str,
temperature: float = 1,
parallel_size: int = 16,
cfg_weight: float = 5,
image_token_num_per_image: int = 576,
img_size: int = 384,
patch_size: int = 16,
):
input_ids = vl_chat_processor.tokenizer.encode(prompt)
input_ids = torch.LongTensor(input_ids)
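    # Build 2*parallel_size prompt rows for classifier-free guidance: even rows keep the
    # full prompt (conditional); odd rows are padded below, keeping only the first and
    # last token (unconditional).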
tokens = torch.zeros((parallel_size*2, len(input_ids)), dtype=torch.int).cuda()
for i in range(parallel_size*2):
tokens[i, :] = input_ids
if i % 2 != 0:
tokens[i, 1:-1] = vl_chat_processor.pad_id
inputs_embeds = mmgpt.language_model.get_input_embeddings()(tokens)
generated_tokens = torch.zeros((parallel_size, image_token_num_per_image), dtype=torch.int).cuda()
for i in range(image_token_num_per_image):
outputs = mmgpt.language_model.model(inputs_embeds=inputs_embeds, use_cache=True, past_key_values=outputs.past_key_values if i != 0 else None)
hidden_states = outputs.last_hidden_state
logits = mmgpt.gen_head(hidden_states[:, -1, :])
logit_cond = logits[0::2, :]
logit_uncond = logits[1::2, :]
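        # classifier-free guidance: extrapolate from the unconditional logits toward the conditional ones by cfg_weight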
logits = logit_uncond + cfg_weight * (logit_cond-logit_uncond)
probs = torch.softmax(logits / temperature, dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
generated_tokens[:, i] = next_token.squeeze(dim=-1)
next_token = torch.cat([next_token.unsqueeze(dim=1), next_token.unsqueeze(dim=1)], dim=1).view(-1)
img_embeds = mmgpt.prepare_gen_img_embeds(next_token)
inputs_embeds = img_embeds.unsqueeze(dim=1)
dec = mmgpt.gen_vision_model.decode_code(generated_tokens.to(dtype=torch.int), shape=[parallel_size, 8, img_size//patch_size, img_size//patch_size])
dec = dec.to(torch.float32).cpu().numpy().transpose(0, 2, 3, 1)
dec = np.clip((dec + 1) / 2 * 255, 0, 255)
visual_img = np.zeros((parallel_size, img_size, img_size, 3), dtype=np.uint8)
visual_img[:, :, :] = dec
os.makedirs('generated_samples', exist_ok=True)
for i in range(parallel_size):
save_path = os.path.join('generated_samples', "img_{}.jpg".format(i))
PIL.Image.fromarray(visual_img[i]).save(save_path)
generate(
vl_gpt,
vl_chat_processor,
prompt,
)
```
### Gradio Demo
We have deployed an online demo on [Hugging Face](https://huggingface.co/spaces/deepseek-ai/Janus-Pro-7B).
For the local Gradio demo, you can run it with the following command:
```
pip install -e .[gradio]
python demo/app_januspro.py
```
Have Fun!
</details>
<details>
<summary><h3>Janus</h3></summary>
### Installation
In a `Python >= 3.8` environment, install the necessary dependencies by running the following command:
```shell
pip install -e .
```
### Simple Inference Example
#### Multimodal Understanding
```python
import torch
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor
from janus.utils.io import load_pil_images
# specify the path to the model
model_path = "deepseek-ai/Janus-1.3B"
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer
vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
model_path, trust_remote_code=True
)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
conversation = [
{
"role": "User",
"content": "<image_placeholder>\nConvert the formula into latex code.",
"images": ["images/equation.png"],
},
{"role": "Assistant", "content": ""},
]
# load images and prepare for inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
conversations=conversation, images=pil_images, force_batchify=True
).to(vl_gpt.device)
# run the image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
# run the model to get the response
outputs = vl_gpt.language_model.generate(
inputs_embeds=inputs_embeds,
attention_mask=prepare_inputs.attention_mask,
pad_token_id=tokenizer.eos_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
max_new_tokens=512,
do_sample=False,
use_cache=True,
)
answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(f"{prepare_inputs['sft_format'][0]}", answer)
```
#### Text-to-Image Generation
```python
import os
import PIL.Image
import torch
import numpy as np
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor
# specify the path to the model
model_path = "deepseek-ai/Janus-1.3B"
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer
vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
model_path, trust_remote_code=True
)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
conversation = [
{
"role": "User",
"content": "A stunning princess from kabul in red, white traditional clothing, blue eyes, brown hair",
},
{"role": "Assistant", "content": ""},
]
sft_format = vl_chat_processor.apply_sft_template_for_multi_turn_prompts(
conversations=conversation,
sft_format=vl_chat_processor.sft_format,
system_prompt="",
)
prompt = sft_format + vl_chat_processor.image_start_tag
@torch.inference_mode()
def generate(
mmgpt: MultiModalityCausalLM,
vl_chat_processor: VLChatProcessor,
prompt: str,
temperature: float = 1,
parallel_size: int = 16,
cfg_weight: float = 5,
image_token_num_per_image: int = 576,
img_size: int = 384,
patch_size: int = 16,
):
input_ids = vl_chat_processor.tokenizer.encode(prompt)
input_ids = torch.LongTensor(input_ids)
tokens = torch.zeros((parallel_size*2, len(input_ids)), dtype=torch.int).cuda()
for i in range(parallel_size*2):
tokens[i, :] = input_ids
if i % 2 != 0:
tokens[i, 1:-1] = vl_chat_processor.pad_id
inputs_embeds = mmgpt.language_model.get_input_embeddings()(tokens)
generated_tokens = torch.zeros((parallel_size, image_token_num_per_image), dtype=torch.int).cuda()
for i in range(image_token_num_per_image):
outputs = mmgpt.language_model.model(inputs_embeds=inputs_embeds, use_cache=True, past_key_values=outputs.past_key_values if i != 0 else None)
hidden_states = outputs.last_hidden_state
logits = mmgpt.gen_head(hidden_states[:, -1, :])
logit_cond = logits[0::2, :]
logit_uncond = logits[1::2, :]
logits = logit_uncond + cfg_weight * (logit_cond-logit_uncond)
probs = torch.softmax(logits / temperature, dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
generated_tokens[:, i] = next_token.squeeze(dim=-1)
next_token = torch.cat([next_token.unsqueeze(dim=1), next_token.unsqueeze(dim=1)], dim=1).view(-1)
img_embeds = mmgpt.prepare_gen_img_embeds(next_token)
inputs_embeds = img_embeds.unsqueeze(dim=1)
dec = mmgpt.gen_vision_model.decode_code(generated_tokens.to(dtype=torch.int), shape=[parallel_size, 8, img_size//patch_size, img_size//patch_size])
dec = dec.to(torch.float32).cpu().numpy().transpose(0, 2, 3, 1)
dec = np.clip((dec + 1) / 2 * 255, 0, 255)
visual_img = np.zeros((parallel_size, img_size, img_size, 3), dtype=np.uint8)
visual_img[:, :, :] = dec
os.makedirs('generated_samples', exist_ok=True)
for i in range(parallel_size):
save_path = os.path.join('generated_samples', "img_{}.jpg".format(i))
PIL.Image.fromarray(visual_img[i]).save(save_path)
generate(
vl_gpt,
vl_chat_processor,
prompt,
)
```
### Gradio Demo
We have deployed an online demo on [Hugging Face](https://huggingface.co/spaces/deepseek-ai/Janus-1.3B).
For the local Gradio demo, you can run it with the following command:
```
pip install -e .[gradio]
python demo/app.py
```
Have Fun!
### FastAPI Demo
It's easy to run a FastAPI server that hosts the same functions as the Gradio demo.
To start the FastAPI server, run the following command:
```
python demo/fastapi_app.py
```
To test the server, you can open another terminal and run:
```
python demo/fastapi_client.py
```
</details>
<details>
<summary><h3>JanusFlow</h3></summary>
### Installation
In a `Python >= 3.8` environment, install the necessary dependencies by running the following command:
```shell
pip install -e .
pip install diffusers[torch]
```
### 🤗 Hugging Face Online Demo
Check out the demo in [this link](https://huggingface.co/spaces/deepseek-ai/JanusFlow-1.3B).
### Simple Inference Example
#### Multimodal Understanding
```python
import torch
from janus.janusflow.models import MultiModalityCausalLM, VLChatProcessor
from janus.utils.io import load_pil_images
# specify the path to the model
model_path = "deepseek-ai/JanusFlow-1.3B"
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer
vl_gpt = MultiModalityCausalLM.from_pretrained(
model_path, trust_remote_code=True
)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
conversation = [
{
"role": "User",
"content": "<image_placeholder>\nConvert the formula into latex code.",
"images": ["images/equation.png"],
},
{"role": "Assistant", "content": ""},
]
# load images and prepare for inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
conversations=conversation, images=pil_images, force_batchify=True
).to(vl_gpt.device)
# run the image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
# run the model to get the response
outputs = vl_gpt.language_model.generate(
inputs_embeds=inputs_embeds,
attention_mask=prepare_inputs.attention_mask,
pad_token_id=tokenizer.eos_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
max_new_tokens=512,
do_sample=False,
use_cache=True,
)
answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(f"{prepare_inputs['sft_format'][0]}", answer)
```
#### Text-to-Image Generation
```python
import os
import PIL.Image
import torch
import numpy as np
from janus.janusflow.models import MultiModalityCausalLM, VLChatProcessor
import torchvision
# specify the path to the model
model_path = "deepseek-ai/JanusFlow-1.3B"
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer
vl_gpt = MultiModalityCausalLM.from_pretrained(
model_path, trust_remote_code=True
)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
from diffusers.models import AutoencoderKL
# remember to use bfloat16 dtype, this vae doesn't work with fp16
vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae")
vae = vae.to(torch.bfloat16).cuda().eval()
conversation = [
{
"role": "User",
"content": "A stunning princess from kabul in red, white traditional clothing, blue eyes, brown hair",
},
{"role": "Assistant", "content": ""},
]
sft_format = vl_chat_processor.apply_sft_template_for_multi_turn_prompts(
conversations=conversation,
sft_format=vl_chat_processor.sft_format,
system_prompt="",
)
prompt = sft_format + vl_chat_processor.image_gen_tag
@torch.inference_mode()
def generate(
mmgpt: MultiModalityCausalLM,
vl_chat_processor: VLChatProcessor,
prompt: str,
cfg_weight: float = 5.0,
num_inference_steps: int = 30,
batchsize: int = 5
):
input_ids = vl_chat_processor.tokenizer.encode(prompt)
input_ids = torch.LongTensor(input_ids)
tokens = torch.stack([input_ids] * 2 * batchsize).cuda()
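    # the second half of the batch is padded (except the first token) to serve as the unconditional prompt for CFG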
tokens[batchsize:, 1:] = vl_chat_processor.pad_id
inputs_embeds = vl_gpt.language_model.get_input_embeddings()(tokens)
# we remove the last <bog> token and replace it with t_emb later
inputs_embeds = inputs_embeds[:, :-1, :]
# generate with rectified flow ode
# step 1: encode with vision_gen_enc
z = torch.randn((batchsize, 4, 48, 48), dtype=torch.bfloat16).cuda()
dt = 1.0 / num_inference_steps
dt = torch.zeros_like(z).cuda().to(torch.bfloat16) + dt
# step 2: run ode
attention_mask = torch.ones((2*batchsize, inputs_embeds.shape[1]+577)).to(vl_gpt.device)
attention_mask[batchsize:, 1:inputs_embeds.shape[1]] = 0
attention_mask = attention_mask.int()
for step in range(num_inference_steps):
# prepare inputs for the llm
z_input = torch.cat([z, z], dim=0) # for cfg
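        # current flow-matching time for this step, scaled by 1000 before being passed to the time embedding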
t = step / num_inference_steps * 1000.
t = torch.tensor([t] * z_input.shape[0]).to(dt)
z_enc = vl_gpt.vision_gen_enc_model(z_input, t)
z_emb, t_emb, hs = z_enc[0], z_enc[1], z_enc[2]
z_emb = z_emb.view(z_emb.shape[0], z_emb.shape[1], -1).permute(0, 2, 1)
z_emb = vl_gpt.vision_gen_enc_aligner(z_emb)
llm_emb = torch.cat([inputs_embeds, t_emb.unsqueeze(1), z_emb], dim=1)
# input to the llm
# we apply attention mask for CFG: 1 for tokens that are not masked, 0 for tokens that are masked.
if step == 0:
outputs = vl_gpt.language_model.model(inputs_embeds=llm_emb,
use_cache=True,
attention_mask=attention_mask,
past_key_values=None)
past_key_values = []
            for kv_cache in outputs.past_key_values:
k, v = kv_cache[0], kv_cache[1]
past_key_values.append((k[:, :, :inputs_embeds.shape[1], :], v[:, :, :inputs_embeds.shape[1], :]))
past_key_values = tuple(past_key_values)
else:
outputs = vl_gpt.language_model.model(inputs_embeds=llm_emb,
use_cache=True,
attention_mask=attention_mask,
past_key_values=past_key_values)
hidden_states = outputs.last_hidden_state
# transform hidden_states back to v
hidden_states = vl_gpt.vision_gen_dec_aligner(vl_gpt.vision_gen_dec_aligner_norm(hidden_states[:, -576:, :]))
hidden_states = hidden_states.reshape(z_emb.shape[0], 24, 24, 768).permute(0, 3, 1, 2)
v = vl_gpt.vision_gen_dec_model(hidden_states, hs, t_emb)
v_cond, v_uncond = torch.chunk(v, 2)
v = cfg_weight * v_cond - (cfg_weight-1.) * v_uncond
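        # Euler step of the rectified-flow ODE: move the latent along the CFG-combined velocity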
z = z + dt * v
# step 3: decode with vision_gen_dec and sdxl vae
decoded_image = vae.decode(z / vae.config.scaling_factor).sample
os.makedirs('generated_samples', exist_ok=True)
save_path = os.path.join('generated_samples', "img.jpg")
torchvision.utils.save_image(decoded_image.clip_(-1.0, 1.0)*0.5+0.5, save_path)
generate(
vl_gpt,
vl_chat_processor,
prompt,
cfg_weight=2.0,
num_inference_steps=30,
batchsize=5
)
```
### Gradio Demo
For the local Gradio demo, you can run it with the following command:
```
pip install -e .[gradio]
python demo/app_janusflow.py
```
Have Fun!
</details>
## 4. License
This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-CODE). The use of Janus models is subject to [DeepSeek Model License](https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-MODEL).
## 5. Citation
```bibtex
@article{chen2025janus,
title={Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling},
author={Chen, Xiaokang and Wu, Zhiyu and Liu, Xingchao and Pan, Zizheng and Liu, Wen and Xie, Zhenda and Yu, Xingkai and Ruan, Chong},
journal={arXiv preprint arXiv:2501.17811},
year={2025}
}
@article{wu2024janus,
title={Janus: Decoupling visual encoding for unified multimodal understanding and generation},
author={Wu, Chengyue and Chen, Xiaokang and Wu, Zhiyu and Ma, Yiyang and Liu, Xingchao and Pan, Zizheng and Liu, Wen and Xie, Zhenda and Yu, Xingkai and Ruan, Chong and others},
journal={arXiv preprint arXiv:2410.13848},
year={2024}
}
@misc{ma2024janusflow,
title={JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation},
  author={Yiyang Ma and Xingchao Liu and Xiaokang Chen and Wen Liu and Chengyue Wu and Zhiyu Wu and Zizheng Pan and Zhenda Xie and Haowei Zhang and Xingkai Yu and Liang Zhao and Yisong Wang and Jiaying Liu and Chong Ruan},
journal={arXiv preprint arXiv:2411.07975},
year={2024}
}
```
## 6. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). | {
"source": "deepseek-ai/Janus",
"title": "README.md",
"url": "https://github.com/deepseek-ai/Janus/blob/main/README.md",
"date": "2024-10-18T03:48:16",
"stars": 16295,
"description": "Janus-Series: Unified Multimodal Understanding and Generation Models",
"file_size": 26741
} |
# Browser Extension Installation Guide
> [!WARNING]
> React Scan's Browser extension is still pending approvals from the Chrome Web Store, Firefox Add-ons, and Brave Browser. Below is a guide to installing the extension manually.
## Chrome
1. Download the [`chrome-extension-v1.0.5.zip`](https://github.com/aidenybai/react-scan/tree/main/packages/extension/build) file.
2. Unzip the file.
3. Open Chrome and navigate to `chrome://extensions/`.
4. Enable "Developer mode" if it is not already enabled.
5. Click "Load unpacked" and select the unzipped folder (or drag the folder into the page).
## Firefox
1. Download the [`firefox-extension-v1.0.5.zip`](https://github.com/aidenybai/react-scan/tree/main/packages/extension/build) file.
2. Unzip the file.
3. Open Firefox and navigate to `about:debugging#/runtime/this-firefox`.
4. Click "Load Temporary Add-on..."
5. Select `manifest.json` from the unzipped folder
## Brave
1. Download the [`brave-extension-v1.0.5.zip`](https://github.com/aidenybai/react-scan/tree/main/packages/extension/build) file.
2. Unzip the file.
3. Open Brave and navigate to `brave://extensions`.
4. Click "Load unpacked" and select the unzipped folder (or drag the folder into the page).
> [!NOTE]
> The React Scan browser extension currently uses `[email protected]` | {
"source": "aidenybai/react-scan",
"title": "BROWSER_EXTENSION_GUIDE.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/BROWSER_EXTENSION_GUIDE.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 1302
} |
# Contributing to React Scan
First off, thanks for taking the time to contribute! ❤️
## Table of Contents
- [Contributing to React Scan](#contributing-to-react-scan)
- [Table of Contents](#table-of-contents)
- [Project Structure](#project-structure)
- [Development Setup](#development-setup)
- [Contributing Guidelines](#contributing-guidelines)
- [Commits](#commits)
- [Pull Request Process](#pull-request-process)
- [Development Workflow](#development-workflow)
- [Getting Help](#getting-help)
## Project Structure
This is a monorepo containing several packages:
- `packages/scan` - Core React Scan package
- `packages/vite-plugin-react-scan` - Vite plugin for React Scan
- `packages/extension` - Browser extension
## Development Setup
1. **Clone and Install**
```bash
git clone https://github.com/aidenybai/react-scan.git
cd react-scan
pnpm install
```
2. **Build all packages**
```bash
pnpm build
```
3. **Development Mode**
```bash
# Run all packages in dev mode
pnpm dev
```
## Contributing Guidelines
### Commits
We use conventional commits to ensure consistent commit messages:
- `feat:` New features
- `fix:` Bug fixes
- `docs:` Documentation changes
- `chore:` Maintenance tasks
- `test:` Adding or updating tests
- `refactor:` Code changes that neither fix bugs nor add features
Example: `fix(scan): fix a typo`
### Pull Request Process
1. Fork the repository
2. Create your feature branch (`git checkout -b feat/amazing-feature`)
3. Commit your changes using conventional commits
4. Push to your branch
5. Open a Pull Request
6. Ask for reviews (@pivanov, @RobPruzan are your friends in this journey)
### Development Workflow
1. **TypeScript**
- All code must be written in TypeScript
- Ensure strict type checking passes
- No `any` types unless absolutely necessary
2. **Code Style**
- We use Biome for formatting and linting
- Run `pnpm format` to format code
- Run `pnpm lint` to check for issues
3. **Documentation**
- Update relevant documentation
- Add JSDoc comments for public APIs
- Update README if needed
## Getting Help
- Check existing issues
- Create a new issue
<br />
Happy coding!
"source": "aidenybai/react-scan",
"title": "CONTRIBUTING.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/CONTRIBUTING.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 2223
} |
# <img src="https://github.com/aidenybai/react-scan/blob/main/.github/assets/logo.svg" width="30" height="30" align="center" /> React Scan
React Scan automatically detects performance issues in your React app.
Previously, tools like:
- [`<Profiler />`](https://react.dev/reference/react/Profiler) required lots of manual changes
- [Why Did You Render?](https://github.com/welldone-software/why-did-you-render) lacked simple visual cues
- [React Devtools](https://legacy.reactjs.org/blog/2018/09/10/introducing-the-react-profiler.html) didn't have a simple, portable, and programmatic API
React Scan attempts to solve these problems:
- It requires no code changes – just drop it in
- It highlights exactly the components you need to optimize
- Use it via script tag, npm, CLI, you name it!
Trusted by engineering teams at:
Airbnb <a href="https://polaris.shopify.com/"><img src="https://raw.githubusercontent.com/aidenybai/react-scan/refs/heads/main/.github/assets/shopify-logo.png" height="30" align="center" /></a> <a href="https://www.faire.com/"><img src="https://raw.githubusercontent.com/aidenybai/react-scan/refs/heads/main/.github/assets/faire-logo.svg" height="20" align="center" /></a> <a href="https://perplexity.com/"><img src="https://raw.githubusercontent.com/aidenybai/react-scan/refs/heads/main/.github/assets/perplexity-logo.png" height="30" align="center" /></a>
### [**Try it out! →**](https://react-scan.million.dev)

> [!IMPORTANT]
> Want to monitor issues in production? Check out [React Scan Monitoring](https://react-scan.com/monitoring)!
## Install
### Package managers
```bash
npm i react-scan
```
```bash
pnpm add react-scan
```
```bash
bun add react-scan
```
```bash
yarn add react-scan
```
### CDN
```html
<!-- import this BEFORE any scripts -->
<script src="https://unpkg.com/react-scan/dist/auto.global.js"></script>
```
## Usage
- [NextJS App Router](https://github.com/aidenybai/react-scan/blob/main/docs/installation/next-js-app-router.md)
- [NextJS Page Router](https://github.com/aidenybai/react-scan/blob/main/docs/installation/next-js-page-router.md)
- [Create React App](https://github.com/aidenybai/react-scan/blob/main/docs/installation/create-react-app.md)
- [Vite](https://github.com/aidenybai/react-scan/blob/main/docs/installation/vite.md)
- [Parcel](https://github.com/aidenybai/react-scan/blob/main/docs/installation/parcel.md)
- [Remix](https://github.com/aidenybai/react-scan/blob/main/docs/installation/remix.md)
- [React Router](https://github.com/aidenybai/react-scan/blob/main/docs/installation/react-router.md)
- [Astro](https://github.com/aidenybai/react-scan/blob/main/docs/installation/astro.md)
- [TanStack Start](https://github.com/aidenybai/react-scan/blob/main/docs/installation/tanstack-start.md)
### CLI
If you don't have a local version of the site or you want to test a React app remotely, you can use the CLI. This will spin up an isolated browser instance which you can interact with and use React Scan on.
```bash
npx react-scan@latest http://localhost:3000
# you can technically scan ANY website on the web:
# npx react-scan@latest https://react.dev
```
You can add it to your existing dev process as well. Here's an example for Next.js:
```json
{
"scripts": {
"dev": "next dev",
"scan": "next dev & npx react-scan@latest localhost:3000"
}
}
```
### Browser Extension
If you want to install the extension, follow the guide [here](https://github.com/aidenybai/react-scan/blob/main/BROWSER_EXTENSION_GUIDE.md).
### React Native
See [discussion](https://github.com/aidenybai/react-scan/pull/23)
## API Reference
<details>
<summary><code>Options</code></summary>
<br />
```tsx
export interface Options {
/**
* Enable/disable scanning
*
* Please use the recommended way:
* enabled: process.env.NODE_ENV === 'development',
*
* @default true
*/
enabled?: boolean;
/**
* Force React Scan to run in production (not recommended)
*
* @default false
*/
dangerouslyForceRunInProduction?: boolean;
/**
* Log renders to the console
*
* WARNING: This can add significant overhead when the app re-renders frequently
*
* @default false
*/
log?: boolean;
/**
   * Show the toolbar
*
* If you set this to true, and set {@link enabled} to false, the toolbar will still show, but scanning will be disabled.
*
* @default true
*/
showToolbar?: boolean;
/**
* Animation speed
*
* @default "fast"
*/
animationSpeed?: "slow" | "fast" | "off";
/**
* Track unnecessary renders, and mark their outlines gray when detected
*
* An unnecessary render is defined as the component re-rendering with no change to the component's
* corresponding dom subtree
*
* @default false
* @warning tracking unnecessary renders can add meaningful overhead to react-scan
*/
trackUnnecessaryRenders?: boolean;
onCommitStart?: () => void;
onRender?: (fiber: Fiber, renders: Array<Render>) => void;
onCommitFinish?: () => void;
onPaintStart?: (outlines: Array<Outline>) => void;
onPaintFinish?: (outlines: Array<Outline>) => void;
}
```
</details>
- `scan(options: Options)`: Imperative API to start scanning
- `useScan(options: Options)`: Hook API to start scanning
- `getReport()`: Get a report of all the renders
- `setOptions(options: Options): void`: Set options at runtime
- `getOptions()`: Get the current options
- `onRender(Component, onRender: (fiber: Fiber, render: Render) => void)`: Hook into a specific component's renders
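For example, a minimal sketch (assuming `react-scan` is installed and you only want scanning active in development) that combines a few of the APIs listed above:
```jsx
// Sketch only: wire up react-scan programmatically with the APIs listed above.
// react-scan must be imported before React and React DOM in your app entry.
import { scan, getReport, setOptions } from "react-scan";

scan({
  enabled: process.env.NODE_ENV === "development",
  showToolbar: true,
});

// e.g. behind a debug button: dump everything react-scan has recorded so far
export function logRenderReport() {
  console.log(getReport());
}

// options can also be adjusted at runtime
export function slowDownAnimations() {
  setOptions({ animationSpeed: "slow" });
}
```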
## Why React Scan?
React can be tricky to optimize.
The issue is that component props are compared by reference, not value. This is intentional – this way rendering can be cheap to run.
However, this makes it easy to accidentally cause unnecessary renders, making the app slow. Even production apps, built by teams of hundreds of engineers, can't fully optimize their apps (see [GitHub](https://github.com/aidenybai/react-scan/blob/main/.github/assets/github.mp4), [Twitter](https://github.com/aidenybai/react-scan/blob/main/.github/assets/twitter.mp4), and [Instagram](https://github.com/aidenybai/react-scan/blob/main/.github/assets/instagram.mp4)).
This often comes down to props whose references change on every render, like inline callbacks or object values. For example, the `onClick` function and `style` object below are re-created on every render, causing `ExpensiveComponent` to slow down the app:
```jsx
<ExpensiveComponent onClick={() => alert("hi")} style={{ color: "purple" }} />
```
React Scan helps you identify these issues by automatically detecting and highlighting renders that cause performance issues. Now, instead of guessing, you can see exactly which components you need to fix.
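A common fix for the example above (a sketch of standard React memoization, not something React Scan applies for you) is to memoize the component and keep its prop references stable with `useCallback` and `useMemo`:
```jsx
// Sketch of a fix: wrap the component in React.memo and keep the callback and
// style references stable so the memoized component can skip re-renders.
import { memo, useCallback, useMemo } from "react";

const ExpensiveComponent = memo(function ExpensiveComponent({ onClick, style }) {
  // ...expensive rendering work elided...
  return <button onClick={onClick} style={style}>Click me</button>;
});

function Parent() {
  const handleClick = useCallback(() => alert("hi"), []);
  const style = useMemo(() => ({ color: "purple" }), []);
  return <ExpensiveComponent onClick={handleClick} style={style} />;
}
```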
> Want to monitor issues in production? Check out [React Scan Monitoring](https://react-scan.com/monitoring)!
### FAQ
**Q: Why this instead of React Devtools?**
React Devtools aims to be a general purpose tool for React. However, I deal with React performance issues every day, and React Devtools doesn't fix my problems well. There's a lot of noise (no obvious distinction between unnecessary and necessary renders), and there's no programmatic API. If it sounds like you have the same problems, then React Scan may be a better choice.
Also, some personal complaints about React Devtools' highlight feature:
- React Devtools "batches" paints, so if a component renders too fast, it will lag behind and only show 1 every second or so
- When you scroll/resize the boxes don't update position
- No count of how many renders there are
- I don't know what the bad/slow renders are without inspecting
- The menu is hidden away so it's annoying to turn on/off, user experience should be specifically tuned for debugging performance, instead of hidden behind a profiler/component tree
- No programmatic API
- It's stuck in a chrome extension, I want to run it anywhere on the web
- It looks subjectively ugly (lines look fuzzy, feels sluggish)
- I'm more ambitious with react-scan
## Resources & Contributing Back
Want to try it out? Check out [our demo](https://react-scan.million.dev).
Looking to contribute back? Check out the [Contributing Guide](https://github.com/aidenybai/react-scan/blob/main/CONTRIBUTING.md).
Want to talk to the community? Hop in our [Discord](https://discord.gg/X9yFbcV2rF) and share your ideas and what you've built with React Scan.
Find a bug? Head over to our [issue tracker](https://github.com/aidenybai/react-scan/issues) and we'll do our best to help. We love pull requests, too!
We expect all contributors to abide by the terms of our [Code of Conduct](https://github.com/aidenybai/react-scan/blob/main/.github/CODE_OF_CONDUCT.md).
[**→ Start contributing on GitHub**](https://github.com/aidenybai/react-scan/blob/main/CONTRIBUTING.md)
## Acknowledgments
React Scan takes inspiration from the following projects:
- [React Devtools](https://react.dev/learn/react-developer-tools) for the initial idea of [highlighting renders](https://medium.com/dev-proto/highlight-react-components-updates-1b2832f2ce48). We chose to diverge from this to provide a [better developer experience](https://x.com/aidenybai/status/1857122670929969551)
- [Million Lint](https://million.dev) for scanning and linting approaches
- [Why Did You Render?](https://github.com/welldone-software/why-did-you-render) for the concept of hijacking internals to detect unnecessary renders caused by "unstable" props
## License
React Scan is [MIT-licensed](LICENSE) open-source software by Aiden Bai, [Million Software, Inc.](https://million.dev), and [contributors](https://github.com/aidenybai/react-scan/graphs/contributors). | {
"source": "aidenybai/react-scan",
"title": "README.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/README.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 9871
} |
# Changesets
Hello and welcome! This folder has been automatically generated by `@changesets/cli`, a build tool that works
with multi-package repos, or single-package repos to help you version and publish your code. You can
find the full documentation for it [in our repository](https://github.com/changesets/changesets)
We have a quick list of common questions to get you started engaging with this project in
[our documentation](https://github.com/changesets/changesets/blob/main/docs/common-questions.md) | {
"source": "aidenybai/react-scan",
"title": ".changeset/README.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/.changeset/README.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 509
} |
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or
advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic
address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at [email protected]. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org), version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq | {
"source": "aidenybai/react-scan",
"title": ".github/CODE_OF_CONDUCT.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/.github/CODE_OF_CONDUCT.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 3333
} |
# Astro Guide
## As a script tag
Add the script tag to your root layout
```astro
<!doctype html>
<html lang="en">
<head>
<script is:inline src="https://unpkg.com/react-scan/dist/auto.global.js" />
<!-- rest of your scripts go under -->
</head>
<body>
<!-- ... -->
</body>
</html>
```
## As a module import
Add the script to your root layout
```astro
<!doctype html>
<html lang="en">
<head>
<script>
import { scan } from 'react-scan';
scan({
enabled: true,
});
</script>
<!-- rest of your scripts go under -->
</head>
<body>
<!-- ... -->
</body>
</html>
``` | {
"source": "aidenybai/react-scan",
"title": "docs/installation/astro.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/docs/installation/astro.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 634
} |
# Create React App (CRA) Guide
## As a script tag
Add the script tag to your `index.html`:
```html
<!doctype html>
<html lang="en">
<head>
<script src="https://unpkg.com/react-scan/dist/auto.global.js"></script>
<!-- rest of your scripts go under -->
</head>
<body>
<!-- ... -->
</body>
</html>
```
## As a module import
In your project entrypoint (e.g. `src/index`, `src/main`):
```jsx
// src/index.jsx
import { scan } from "react-scan"; // must be imported before React and React DOM
import React from "react";
scan({
enabled: true,
});
```
> [!CAUTION]
> React Scan must be imported before React (and other React renderers like React DOM) in your entire project, as it needs to hijack React DevTools before React gets to access it. | {
"source": "aidenybai/react-scan",
"title": "docs/installation/create-react-app.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/docs/installation/create-react-app.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 765
} |
# NextJS App Router Guide
## As a script tag
Add the script tag to your `app/layout`:
```jsx
// app/layout.jsx
export default function RootLayout({ children }) {
return (
<html lang="en">
<head>
<script src="https://unpkg.com/react-scan/dist/auto.global.js" />
{/* rest of your scripts go under */}
</head>
<body>{children}</body>
</html>
);
}
```
## As a module import
Create a `<ReactScan>` client component:
```jsx
// path/to/ReactScanComponent
"use client";
// react-scan must be imported before react
import { scan } from "react-scan";
import { JSX, useEffect } from "react";
export function ReactScan(): JSX.Element {
useEffect(() => {
scan({
enabled: true,
});
}, []);
return <></>;
}
```
Import the `<ReactScan>` component into `app/layout`:
```jsx
// app/layout
// This component must be the top-most import in this file!
import { ReactScan } from "path/to/ReactScanComponent";
// ...
export default function RootLayout({ children }) {
return (
<html lang="en">
<ReactScan />
<body>
{children}
</body>
</html>
);
}
``` | {
"source": "aidenybai/react-scan",
"title": "docs/installation/next-js-app-router.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/docs/installation/next-js-app-router.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 1144
} |
# NextJS Page Router Guide
## As a script tag
Add the script tag to your `pages/_document`:
```jsx
// pages/_document.jsx
import { Html, Head, Main, NextScript } from "next/document";
export default function Document() {
return (
<Html lang="en">
<Head>
<script src="https://unpkg.com/react-scan/dist/auto.global.js" />
{/* rest of your scripts go under */}
</Head>
<body>
<Main />
<NextScript />
</body>
</Html>
);
}
```
## As a module import
Add the following code to your `App` component in `pages/_app`:
```jsx
// react-scan must be the top-most import
import { scan } from "react-scan";
import "@/styles/globals.css";
import type { AppProps } from "next/app";
import { useEffect } from "react";
export default function App({ Component, pageProps }: AppProps) {
useEffect(() => {
// Make sure to run React Scan after hydration
scan({
enabled: true,
});
}, []);
return <Component {...pageProps} />;
}
``` | {
"source": "aidenybai/react-scan",
"title": "docs/installation/next-js-page-router.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/docs/installation/next-js-page-router.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 1008
} |
# Parcel Guide
## As a script tag
Add the script tag to your `index.html`:
```html
<!doctype html>
<html lang="en">
<head>
<script src="https://unpkg.com/react-scan/dist/auto.global.js"></script>
<!-- rest of your scripts go under -->
</head>
<body>
<!-- ... -->
</body>
</html>
``` | {
"source": "aidenybai/react-scan",
"title": "docs/installation/parcel.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/docs/installation/parcel.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 306
} |
# React Router v7 Guide
## As a script tag
Add the script tag to your `Layout` component in the `app/root`:
```jsx
// app/root.jsx
// ...
export function Layout({ children }: { children: React.ReactNode }) {
return (
<html lang="en">
<head>
<script src="https://unpkg.com/react-scan/dist/auto.global.js" />
<meta charSet="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<Meta />
<Links />
</head>
<body>
{children}
<ScrollRestoration />
<Scripts />
</body>
</html>
);
}
// ...
```
> [!CAUTION]
> This only works for React 19
## As an import
Add the following code to your `app/root`
```jsx
// app/root.jsx
import { scan } from "react-scan"; // Must be imported before React Router
import { Links, Meta, Scripts, ScrollRestoration } from "react-router";
import { useEffect } from "react";
export function Layout({ children }) {
useEffect(() => {
// Make sure to run react-scan only after hydration
scan({
enabled: true,
});
}, []);
return (
<html lang="en">
<head>
<meta charSet="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<Meta />
<Links />
</head>
<body>
{children}
<ScrollRestoration />
<Scripts />
</body>
</html>
);
}
// ...
```
> [!CAUTION]
> React Scan must be imported before React (and other React renderers like React DOM), as well as React Router, in your entire project, as it needs to hijack React DevTools before React gets to access it. | {
"source": "aidenybai/react-scan",
"title": "docs/installation/react-router.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/docs/installation/react-router.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 1646
} |
# Remix Guide
## As a script tag
Add the script tag to your `<Layout>` component in `app/root`:
```jsx
// app/root.jsx
import {
Links,
Meta,
Scripts,
ScrollRestoration,
} from "@remix-run/react";
export function Layout({ children }: { children: React.ReactNode }) {
return (
<html lang="en">
<head>
{/* Must run before any of your scripts */}
<script src="https://unpkg.com/react-scan/dist/auto.global.js" />
<meta charSet="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<Meta />
<Links />
</head>
<body>
{children}
<ScrollRestoration />
<Scripts />
</body>
</html>
);
}
// ...
```
> [!CAUTION]
> This only works for React 19
## As a module import
Add the following code to your `app/root`:
```jsx
// app/root.jsx
import { scan } from "react-scan"; // Must be imported before Remix
import {
Links,
Meta,
Outlet,
Scripts,
ScrollRestoration,
} from "@remix-run/react";
export function Layout({ children }) {
useEffect(() => {
// Make sure to run React Scan after hydration
scan({
enabled: true,
});
}, []);
return (
<html lang="en">
<head>
<meta charSet="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<Meta />
<Links />
</head>
<body>
{children}
<ScrollRestoration />
<Scripts />
</body>
</html>
);
}
export default function App() {
return <Outlet />;
}
```
> [!CAUTION]
> React Scan must be imported before React (and other React renderers like React DOM), as well as Remix, in your entire project, as it needs to hijack React DevTools before React gets to access it.
Alternatively you can also do the following code in `app/entry.client`:
```jsx
// app/entry.client.jsx
import { RemixBrowser } from "@remix-run/react";
import { StrictMode, startTransition } from "react";
import { hydrateRoot } from "react-dom/client";
import { scan } from "react-scan";
scan({
enabled: true,
});
startTransition(() => {
hydrateRoot(
document,
<StrictMode>
<RemixBrowser />
</StrictMode>
);
});
```
> [!CAUTION]
> This only works for React 19 | {
"source": "aidenybai/react-scan",
"title": "docs/installation/remix.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/docs/installation/remix.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 2279
} |
# TanStack Router Guide
## As a script tag
Add the script tag to your `<RootDocument>` component at `app/routes/__root`:
```jsx
// app/routes/__root.jsx
import { Meta, Scripts } from "@tanstack/start";
// ...
function RootDocument({ children }) {
return (
<html>
<head>
<script src="https://unpkg.com/react-scan/dist/auto.global.js" />
<Meta />
</head>
<body>
{children}
<Scripts />
</body>
</html>
);
}
// ...
```
> [!CAUTION]
> This only works for React 19
## As a module import
Add the following code to your `<RootDocument>` component at `app/routes/__root`:
```jsx
// app/routes/__root.jsx
// react-scan must be imported before React and TanStack Start
import { scan } from "react-scan";
import { Meta, Scripts } from "@tanstack/start";
import { useEffect } from "react";
// ...
function RootDocument({ children }) {
useEffect(() => {
// Make sure to run this only after hydration
scan({
enabled: true,
});
}, []);
return (
<html>
<head>
<Meta />
</head>
<body>
{children}
<Scripts />
</body>
</html>
);
}
```
> [!CAUTION]
> React Scan must be imported before React (and other React renderers like React DOM) in your entire project, as it needs to hijack React DevTools before React gets to access it.
Alternatively you can also do the following code in `app/client`:
```jsx
// app/client.jsx
import { scan } from "react-scan"; // must be imported before React and React DOM
import { hydrateRoot } from "react-dom/client";
import { StartClient } from "@tanstack/start";
import { createRouter } from "./router";
scan({
enabled: true,
});
const router = createRouter();
hydrateRoot(document, <StartClient router={router} />);
```
> [!CAUTION]
> This only works for React 19 | {
"source": "aidenybai/react-scan",
"title": "docs/installation/tanstack-start.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/docs/installation/tanstack-start.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 1846
} |
# Vite Guide
## As a script tag
Add the script tag to your `index.html`:
```html
<!doctype html>
<html lang="en">
<head>
<script src="https://unpkg.com/react-scan/dist/auto.global.js"></script>
<!-- rest of your scripts go under -->
</head>
<body>
<!-- ... -->
</body>
</html>
```
## As a module import
In your project entrypoint (e.g. `src/index`, `src/main`):
```jsx
// src/index.jsx
import { scan } from "react-scan"; // must be imported before React and React DOM
import React from "react";
scan({
enabled: true,
});
```
> [!CAUTION]
> React Scan must be imported before React (and other React renderers like React DOM) in your entire project, as it needs to hijack React DevTools before React gets to access it.
## Vite plugin
TODO
## Preserving component names
TODO | {
"source": "aidenybai/react-scan",
"title": "docs/installation/vite.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/docs/installation/vite.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 806
} |
# React Scanner Extension
Browser extension for scanning React applications and identifying performance issues.
### Environment Variables
When developing with Brave, you need to set the `BRAVE_BINARY` environment variable. Create a `.env` file (copy from `.env.example`):
```env
# For macOS
BRAVE_BINARY="/Applications/Brave Browser.app/Contents/MacOS/Brave Browser"
# For Windows
BRAVE_BINARY="C:\\Program Files\\BraveSoftware\\Brave-Browser\\Application\\brave.exe"
# For Linux
BRAVE_BINARY="/usr/bin/brave"
```
### Development Setup
#### For Chrome
1. Run development server:
```bash
pnpm dev
```
2. This will automatically open Chrome with the extension loaded.
<i>If you need to inspect the extension, open `chrome://extensions` in Chrome</i>
#### For Firefox
1. Run development server:
```bash
pnpm dev:firefox
```
2. This will automatically open Firefox with the extension loaded.
<i>If you need to inspect the extension, open `about:debugging#/runtime/this-firefox` in Firefox</i>
<br />
#### For Brave
1. Run development server:
```bash
pnpm dev:brave
```
2. This will automatically open Brave with the extension loaded.
<i>If you need to inspect the extension, open `brave://extensions` in Brave</i>
<br />
### Building for Production
To build the extension for all browsers:
```bash
pnpm pack:all
```
This will create:
- `chrome-extension-v1.0.5.zip`
- `firefox-extension-v1.0.5.zip`
- `brave-extension-v1.0.5.zip`
in the `build` directory. | {
"source": "aidenybai/react-scan",
"title": "packages/extension/README.md",
"url": "https://github.com/aidenybai/react-scan/blob/main/packages/extension/README.md",
"date": "2024-09-02T21:52:12",
"stars": 15685,
"description": "Scan for React performance issues and eliminate slow renders in your app",
"file_size": 1525
} |
# <img src="https://github.com/aidenybai/react-scan/blob/main/.github/assets/logo.svg" width="30" height="30" align="center" /> React Scan
React Scan automatically detects performance issues in your React app.
Previously, tools like:
- [`<Profiler />`](https://react.dev/reference/react/Profiler) required lots of manual changes
- [Why Did You Render?](https://github.com/welldone-software/why-did-you-render) lacked simple visual cues
- [React Devtools](https://legacy.reactjs.org/blog/2018/09/10/introducing-the-react-profiler.html) didn't have a simple, portable, and programmatic API
React Scan attempts to solve these problems:
- It requires no code changes – just drop it in
- It highlights exactly the components you need to optimize
- Use it via script tag, npm, CLI, you name it!
Trusted by engineering teams at:
Airbnb <a href="https://polaris.shopify.com/"><img src="https://raw.githubusercontent.com/aidenybai/react-scan/refs/heads/main/.github/assets/shopify-logo.png" height="30" align="center" /></a> <a href="https://www.faire.com/"><img src="https://raw.githubusercontent.com/aidenybai/react-scan/refs/heads/main/.github/assets/faire-logo.svg" height="20" align="center" /></a> <a href="https://perplexity.com/"><img src="https://raw.githubusercontent.com/aidenybai/react-scan/refs/heads/main/.github/assets/perplexity-logo.png" height="30" align="center" /></a>
### [**Try it out! →**](https://react-scan.million.dev)

> [!IMPORTANT]
> Want to monitor issues in production? Check out [React Scan Monitoring](https://react-scan.com/monitoring)!
## Install
### Package managers
```bash
npm i react-scan
```
```bash
pnpm add react-scan
```
```bash
yarn add react-scan
```
### CDN
```html
<!-- import this BEFORE any scripts -->
<script src="https://unpkg.com/react-scan/dist/auto.global.js"></script>
```
## Usage
- [NextJS App Router](https://github.com/aidenybai/react-scan/blob/main/docs/installation/next-js-app-router.md)
- [NextJS Page Router](https://github.com/aidenybai/react-scan/blob/main/docs/installation/next-js-page-router.md)
- [Create React App](https://github.com/aidenybai/react-scan/blob/main/docs/installation/create-react-app.md)
- [Vite](https://github.com/aidenybai/react-scan/blob/main/docs/installation/vite.md)
- [Parcel](https://github.com/aidenybai/react-scan/blob/main/docs/installation/parcel.md)
- [Remix](https://github.com/aidenybai/react-scan/blob/main/docs/installation/remix.md)
- [React Router](https://github.com/aidenybai/react-scan/blob/main/docs/installation/react-router.md)
- [Astro](https://github.com/aidenybai/react-scan/blob/main/docs/installation/astro.md)
- [TanStack Start](https://github.com/aidenybai/react-scan/blob/main/docs/installation/tanstack-start.md)
### CLI
If you don't have a local version of the site or you want to test a React app remotely, you can use the CLI. This will spin up an isolated browser instance which you can interact with and use React Scan on.
```bash
npx react-scan@latest http://localhost:3000
# you can technically scan ANY website on the web:
# npx react-scan@latest https://react.dev
```
You can add it to your existing dev process as well. Here's an example for Next.js:
```json
{
"scripts": {
"dev": "next dev",
"scan": "next dev & npx react-scan@latest localhost:3000"
}
}
```
### Browser Extension
If you want to install the extension, follow the guide [here](https://github.com/aidenybai/react-scan/blob/main/BROWSER_EXTENSION_GUIDE.md).
### React Native
See [discussion](https://github.com/aidenybai/react-scan/pull/23)
## API Reference
<details>
<summary><code>Options</code></summary>
<br />
```tsx
export interface Options {
/**
* Enable/disable scanning
*
* Please use the recommended way:
* enabled: process.env.NODE_ENV === 'development',
*
* @default true
*/
enabled?: boolean;
/**
* Force React Scan to run in production (not recommended)
*
* @default false
*/
dangerouslyForceRunInProduction?: boolean;
/**
* Log renders to the console
*
* WARNING: This can add significant overhead when the app re-renders frequently
*
* @default false
*/
log?: boolean;
/**
   * Show the toolbar
*
* If you set this to true, and set {@link enabled} to false, the toolbar will still show, but scanning will be disabled.
*
* @default true
*/
showToolbar?: boolean;
/**
* Animation speed
*
* @default "fast"
*/
animationSpeed?: "slow" | "fast" | "off";
/**
* Track unnecessary renders, and mark their outlines gray when detected
*
* An unnecessary render is defined as the component re-rendering with no change to the component's
* corresponding DOM subtree
*
* @default false
* @warning tracking unnecessary renders can add meaningful overhead to react-scan
*/
trackUnnecessaryRenders?: boolean;
onCommitStart?: () => void;
onRender?: (fiber: Fiber, renders: Array<Render>) => void;
onCommitFinish?: () => void;
onPaintStart?: (outlines: Array<Outline>) => void;
onPaintFinish?: (outlines: Array<Outline>) => void;
}
```
</details>
- `scan(options: Options)`: Imperative API to start scanning
- `useScan(options: Options)`: Hook API to start scanning
- `getReport()`: Get a report of all the renders
- `setOptions(options: Options): void`: Set options at runtime
- `getOptions()`: Get the current options
- `onRender(Component, onRender: (fiber: Fiber, render: Render) => void)`: Hook into a specific component's renders
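For a feel of how these fit together, here is a hedged sketch of the runtime APIs listed above. The signatures come from this list, but the import paths and return shapes are assumptions; treat everything here as illustrative rather than canonical:
```tsx
// Illustrative only: assumes scan, setOptions, getReport, and onRender are
// all importable from the react-scan package.
import { scan, setOptions, getReport, onRender } from "react-scan";

// Hypothetical component, declared here just so the snippet is self-contained
const MyWidget = () => <div>widget</div>;

// Start scanning with slower outline animations
scan({ enabled: true, animationSpeed: "slow" });

// Adjust options at runtime, e.g. from a custom debug panel
setOptions({ log: true });

// Hook into one specific component's renders
onRender(MyWidget, (fiber, render) => {
  console.log("MyWidget re-rendered", render);
});

// Dump everything recorded so far
console.log(getReport());
```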
## Why React Scan?
React can be tricky to optimize.
The issue is that component props are compared by reference, not value. This is intentional: it keeps rendering cheap to run.
However, this makes it easy to accidentally cause unnecessary renders, making the app slow. Even production apps built by teams of hundreds of engineers can't be fully optimized (see [GitHub](https://github.com/aidenybai/react-scan/blob/main/.github/assets/github.mp4), [Twitter](https://github.com/aidenybai/react-scan/blob/main/.github/assets/twitter.mp4), and [Instagram](https://github.com/aidenybai/react-scan/blob/main/.github/assets/instagram.mp4)).
This often comes down to props whose references change on every render, such as callbacks or object values. For example, the `onClick` function and `style` object below are re-created on every render, causing `ExpensiveComponent` to slow down the app:
```jsx
<ExpensiveComponent onClick={() => alert("hi")} style={{ color: "purple" }} />
```
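A common way to avoid this, sketched below rather than prescribed, is to give those props stable identities with `useCallback` and `useMemo`. The `Parent` wrapper and the import path for `ExpensiveComponent` are hypothetical:
```tsx
import { useCallback, useMemo } from "react";
import ExpensiveComponent from "./ExpensiveComponent"; // hypothetical path for the component above

function Parent() {
  // Same function identity on every render
  const handleClick = useCallback(() => alert("hi"), []);
  // Same object identity on every render
  const style = useMemo(() => ({ color: "purple" }), []);

  return <ExpensiveComponent onClick={handleClick} style={style} />;
}
```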
React Scan helps you identify these issues by automatically detecting and highlighting renders that cause performance issues. Now, instead of guessing, you can see exactly which components you need to fix.
> Want to monitor issues in production? Check out [React Scan Monitoring](https://react-scan.com/monitoring)!
### FAQ
**Q: Why this instead of React Devtools?**
React Devtools aims to be a general-purpose tool for React. However, I deal with React performance issues every day, and React Devtools doesn't solve my problems well. There's a lot of noise (no obvious distinction between unnecessary and necessary renders), and there's no programmatic API. If it sounds like you have the same problems, then React Scan may be a better choice.
Also, some personal complaints about React Devtools' highlight feature:
- React Devtools "batches" paints, so if a component renders too quickly, the overlay lags behind and only shows about one update per second
- When you scroll or resize, the boxes don't update their position
- There's no count of how many renders happened
- I can't tell which renders are bad/slow without inspecting them
- The menu is hidden away, so it's annoying to turn on/off; the user experience should be tuned specifically for debugging performance rather than buried behind a profiler/component tree
- No programmatic API
- It's stuck in a Chrome extension; I want to run it anywhere on the web
- It looks subjectively ugly (lines look fuzzy, feels sluggish)
- I'm more ambitious with react-scan
## Resources & Contributing Back
Want to try it out? Check out [our demo](https://react-scan.million.dev).
Looking to contribute back? Check out the [Contributing Guide](https://github.com/aidenybai/react-scan/blob/main/CONTRIBUTING.md).
Want to talk to the community? Hop in our [Discord](https://discord.gg/X9yFbcV2rF) and share your ideas and what you've built with React Scan.
Find a bug? Head over to our [issue tracker](https://github.com/aidenybai/react-scan/issues) and we'll do our best to help. We love pull requests, too!
We expect all contributors to abide by the terms of our [Code of Conduct](https://github.com/aidenybai/react-scan/blob/main/.github/CODE_OF_CONDUCT.md).
[**→ Start contributing on GitHub**](https://github.com/aidenybai/react-scan/blob/main/CONTRIBUTING.md)
## Acknowledgments
React Scan takes inspiration from the following projects:
- [React Devtools](https://react.dev/learn/react-developer-tools) for the initial idea of [highlighting renders](https://medium.com/dev-proto/highlight-react-components-updates-1b2832f2ce48). We chose to diverge from this to provide a [better developer experience](https://x.com/aidenybai/status/1857122670929969551)
- [Million Lint](https://million.dev) for scanning and linting approaches
- [Why Did You Render?](https://github.com/welldone-software/why-did-you-render) for the concept of hijacking internals to detect unnecessary renders caused by "unstable" props
## License
React Scan is [MIT-licensed](LICENSE) open-source software by Aiden Bai, [Million Software, Inc.](https://million.dev), and [contributors](https://github.com/aidenybai/react-scan/graphs/contributors).
# @react-scan/vite-plugin-react-scan
A Vite plugin that integrates React Scan into your Vite application, automatically detecting performance issues in your React components.
## Installation
```bash
# npm
npm install -D @react-scan/vite-plugin-react-scan react-scan
# pnpm
pnpm add -D @react-scan/vite-plugin-react-scan react-scan
# yarn
yarn add -D @react-scan/vite-plugin-react-scan react-scan
```
## Usage
Add the plugin to your `vite.config.ts`:
```ts
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import reactScan from '@react-scan/vite-plugin-react-scan';
export default defineConfig({
plugins: [
react(),
reactScan({
// options (optional)
}),
],
});
```
## Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | `boolean` | `process.env.NODE_ENV === 'development'` | Enable/disable scanning |
| `scanOptions` | `object` | `{ ... }` | Custom React Scan options |
| `autoDisplayNames` | `boolean` | `false` | Automatically add display names to React components |
| `debug` | `boolean` | `false` | Enable debug logging |
## Example Configuration
```ts
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import reactScan from '@react-scan/vite-plugin-react-scan';
export default defineConfig({
plugins: [
react(),
reactScan({
enable: true,
autoDisplayNames: true,
scanOptions: {} // React Scan specific options
}),
],
});
```
## Development vs Production
- In development: The plugin injects React Scan directly into your application for real-time analysis
- In production: The plugin is disabled by default, but it can be enabled explicitly through its options
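If you want that behavior to be explicit rather than implicit, here is a small sketch that simply spells out the plugin's documented default for the `enable` option:
```ts
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import reactScan from '@react-scan/vite-plugin-react-scan';

export default defineConfig({
  plugins: [
    react(),
    reactScan({
      // Mirrors the documented default: only inject React Scan in development
      enable: process.env.NODE_ENV === 'development',
    }),
  ],
});
```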
## Contributing
Contributions are welcome! Please read our [Contributing Guide](CONTRIBUTING.md) for details.
## License
React Scan Vite Plugin is [MIT-licensed](LICENSE) open-source software by Aiden Bai, [Million Software, Inc.](https://million.dev), and [contributors](https://github.com/aidenybai/react-scan/graphs/contributors).
This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).
## Getting Started
First, run the development server:
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.
## Learn More
To learn more about Next.js, take a look at the following resources:
- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.
You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!
## Deploy on Vercel
The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.
Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details.
# GitHub AI Project Docs Dataset
This dataset contains project documentation and README files extracted from top open-source GitHub repositories. It is designed to support research and evaluation of large language models and frontier models, especially for in-context learning using data that lies outside their original training distribution.
## Summary Statistics
- Total documents: 3,296
- Total content size: 18,283,541 characters
- Average document size: 5,547 characters
File type distribution:
- `.md`: 3,125 files
- `.rst`: 171 files
## Dataset Overview
- Source Repositories: Documentation files are collected from GitHub repositories that:
  - Use the Apache 2.0 or MIT license
  - Have at least 1,000 stars
  - Were created within the last 6 months
- Content: Includes various project documentation such as `README.md`, additional markdown files, and related documentation (e.g., recipes, configuration guides).
## Key Features
- Quality & Relevance: Sourced from popular and actively maintained projects.
- Diverse Documentation: Provides a wide range of writing styles and content formats.
- Evaluation Ready: Ideal for testing the generalization and in-context learning abilities of modern language models.
## Process Details
1. Repository Selection: Repositories are filtered based on:
   - License: Apache 2.0
   - Popularity: 1k+ stars
   - Recency: Created in the last 6 months
2. Document Extraction: Each repository is crawled to extract documentation files (e.g., `README.md`), including additional project docs.
3. Aggregation: Extracted files are combined into a unified dataset, ready for analysis and model evaluation.
## Example Repositories
Some examples of repositories included in this dataset:
- huggingface/open-r1 (Stars: 19,596, Created: 2025-01-24): A fully open reproduction of DeepSeek-R1 with extensive documentation.
- raga-ai-hub/RagaAI-Catalyst (Stars: 10,374, Created: 2024-08-26): Python SDK for Agent AI Observability, Monitoring, and Evaluation Framework.
- huggingface/smolagents (Stars: 10,361, Created: 2024-12-05): A barebones library for agents with associated project docs.
For a complete list, please refer to the dataset details on the Hugging Face Hub.
## How to Use This Dataset
You can load the dataset directly with the Hugging Face `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset("meowterspace42/github-ai-project-docs")
```
Each entry in the dataset provides both the documentation content and relevant metadata (e.g., repository name, star count, creation date).
## License
The documentation files in this dataset are sourced from GitHub repositories under the Apache 2.0 license. Please refer to the individual repository licenses for full details. This dataset is provided solely for research and evaluation purposes.
## Citation
If you use this dataset in your research, please cite it as follows:
```bibtex
@misc{meowterspace42_github_ai_project_docs,
  title={GitHub AI Project Docs Dataset},
  author={meowterspace42},
  howpublished={\url{https://huggingface.co/datasets/meowterspace42/github-ai-project-docs}},
  year={2025}
}
```
## Acknowledgements
Thank you to the maintainers of the original GitHub repositories and the Hugging Face community for making these resources available. Your work helps advance research in AI and language modeling.
## Contact
For any questions or feedback, please open an issue on the Hugging Face Hub repository.