lbourdois committed
Commit 0712a0c · verified · 1 Parent(s): f2126cb

Improve language tag


Hi! Since the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that while 29 languages are announced in the README, only 13 are explicitly listed, so I was only able to add those 13.

Files changed (1)
  1. README.md +70 -58
README.md CHANGED
@@ -1,58 +1,70 @@
- ---
- license: other
- license_name: qwen-research
- license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
- language:
- - en
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-3B-Instruct
- tags:
- - chat
- - llama-cpp
- - gguf-my-repo
- library_name: transformers
- ---
-
- # Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF
- This model was converted to GGUF format from [`Qwen/Qwen2.5-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) for more details on the model.
-
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux):
-
- ```bash
- brew install llama.cpp
-
- ```
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -c 2048
- ```
+ ---
+ license: other
+ license_name: qwen-research
+ license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-3B-Instruct
+ tags:
+ - chat
+ - llama-cpp
+ - gguf-my-repo
+ library_name: transformers
+ ---
+
+ # Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF
+ This model was converted to GGUF format from [`Qwen/Qwen2.5-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) for more details on the model.
+
+ ## Use with llama.cpp
+ Install llama.cpp through brew (works on Mac and Linux):
+
+ ```bash
+ brew install llama.cpp
+
+ ```
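Before pulling any model weights, it can be worth confirming the brew build landed on your `PATH`; a minimal sanity check, assuming a llama.cpp build recent enough to ship the `--version` flag:

```bash
# Prints llama.cpp version/build info if the install succeeded
llama-cli --version
```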
+ Invoke the llama.cpp server or the CLI.
+
+ ### CLI:
+ ```bash
+ llama-cli --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
+ ```
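Since this is an instruct-tuned model, an interactive session is often more useful than a one-shot prompt; a sketch, assuming your llama.cpp build has the `-cnv` (conversation) flag that applies the model's chat template:

```bash
# Start an interactive chat loop instead of a single completion
llama-cli --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -cnv
```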
+
+ ### Server:
+ ```bash
+ llama-server --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -c 2048
+ ```
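Once the server is up it speaks an OpenAI-compatible HTTP API; a minimal sketch of a chat request, assuming the default `127.0.0.1:8080` bind:

```bash
# POST one chat turn to the local llama-server instance
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "The meaning to life and the universe is"}], "max_tokens": 128}'
```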
+
+ Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
+
+ Step 1: Clone llama.cpp from GitHub.
+ ```
+ git clone https://github.com/ggerganov/llama.cpp
+ ```
+
+ Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
+ ```
+ cd llama.cpp && LLAMA_CURL=1 make
+ ```
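The hardware-specific flags stack with `LLAMA_CURL=1`; a sketch of the Nvidia-on-Linux case named in Step 2, assuming CUDA is already installed:

```bash
# GPU build: LLAMA_CUDA=1 enables the CUDA backend; -j parallelizes compilation
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```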
+
+ Step 3: Run inference through the main binary.
+ ```
+ ./llama-cli --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
+ ```
+ or
+ ```
+ ./llama-server --hf-repo Aldaris/Qwen2.5-3B-Instruct-IQ4_NL-GGUF --hf-file qwen2.5-3b-instruct-iq4_nl-imat.gguf -c 2048
+ ```
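As with the brew-installed binaries, once `./llama-server` is listening you can check readiness before sending traffic; a small sketch, again assuming the default `127.0.0.1:8080` bind:

```bash
# Returns a JSON status once the model is loaded and ready to serve
curl -s http://127.0.0.1:8080/health
```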