Add pipeline tag and paper link

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +10 -7
README.md CHANGED
@@ -1,15 +1,17 @@
  ---
- library_name: transformers
- license: apache-2.0
+ base_model:
+ - Qwen/Qwen2.5-7B-Instruct
  datasets:
  - HoangHa/Pensez-v0.1
  language:
  - en
  - fr
- base_model:
- - Qwen/Qwen2.5-7B-Instruct
+ library_name: transformers
+ license: apache-2.0
+ pipeline_tag: text-generation
  ---

+ ```markdown
  <div align="center">

  # Pensez: Less Data, Better Reasoning – Rethinking French LLM
@@ -23,7 +25,7 @@ base_model:

  ## About

- Paper: [Pensez: Less Data, Better Reasoning - Rethinking French LLM](https://arxiv.org/abs/2503.13661)
+ Paper: [Pensez: Less Data, Better Reasoning - Rethinking French LLM](https://huggingface.co/papers/2503.13661)

  Pensez is a bilingual (French-English) reasoning model designed to maximize efficiency with significantly reduced training data. The model leverages a curated dataset focusing on daily reasoning tasks and scientific questions to enhance performance.

@@ -149,7 +151,7 @@ More details: [Training Config](https://huggingface.co/HoangHa/Pensez-v0.1-e5/bl
  year={2025},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
- url={},
+ url={https://arxiv.org/abs/2503.13661},
  }
  ```
@@ -165,4 +167,5 @@ More details: [Training Config](https://huggingface.co/HoangHa/Pensez-v0.1-e5/bl
  - [Deepspeed](https://github.com/deepspeedai/DeepSpeed)
  - [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
  - [Hyperbolic](https://hyperbolic.xyz/)
- - [Modal](https://modal.com/)
+ - [Modal](https://modal.com/)
+ ```
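Hugging Face model cards store this metadata as a YAML front-matter block between two `---` delimiters, which is what the first hunk reorders and extends with `pipeline_tag: text-generation`. A minimal sketch of reading the added keys back out of such a block, using a naive stdlib-only parse (a real tool would use a YAML parser; the inline `readme` string is a trimmed stand-in for the actual file):

```python
# Sketch: extract simple "key: value" metadata from a model card's
# YAML front matter. The trimmed readme below mirrors the keys this
# PR adds; list-valued keys (base_model, datasets) are skipped here.
readme = """---
base_model:
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

# Pensez
"""

# Split out the front-matter block between the two "---" delimiters.
_, front_matter, _body = readme.split("---\n", 2)

meta = {}
for line in front_matter.splitlines():
    # Keep scalar "key: value" pairs; ignore list items ("- ...") and
    # keys whose value continues on following lines.
    if ":" in line and not line.startswith("-"):
        key, _, value = line.partition(":")
        if value.strip():
            meta[key.strip()] = value.strip()

print(meta["pipeline_tag"])  # text-generation
```

The `pipeline_tag` value is what lets the Hub surface the model under the text-generation task and pick a default inference widget; `base_model` links the card back to Qwen/Qwen2.5-7B-Instruct.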