pittawat committed
Commit c382bba · verified · Parent: 710a480

doc: add model card

Files changed (1): README.md (+60 -0)

README.md (added):

---
license: llama3.2
datasets:
- scb10x/typhoon-t1-3b-research-preview-data
language:
- th
- en
base_model:
- scb10x/llama3.2-typhoon2-t1-3b-research-preview
pipeline_tag: text-generation
tags:
- llama-cpp
---

This is a GGUF-format version of [`scb10x/llama3.2-typhoon2-t1-3b-research-preview`](https://huggingface.co/scb10x/llama3.2-typhoon2-t1-3b-research-preview), converted using `llama.cpp`.

Please see the [original model card](https://huggingface.co/scb10x/llama3.2-typhoon2-t1-3b-research-preview) for more details on the model.

## Use with llama.cpp

Install llama.cpp through `brew` (works on macOS and Linux):

```bash
brew install llama.cpp
```
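
To verify the install, you can print the version (a quick check; assumes `brew` has put `llama-cli` on your `PATH`):

```bash
llama-cli --version
```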

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo scb10x/llama3.2-typhoon2-t1-3b-research-preview-gguf --hf-file llama3.2-typhoon2-t1-3b-q4_k_m.gguf -p "หากแปลคำว่า \"ไต้ฝุ่น\" เป็นภาษาอังกฤษ ในคำที่ถูกแปลแล้วจะมีตัวอักษร \"o\" ทั้งหมดกี่ตัว"
```

The example prompt asks, in Thai: "If the word 'ไต้ฝุ่น' (typhoon) is translated into English, how many letter 'o's does the translated word contain?"
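
The `-p` flag runs a single prompt and exits. For an interactive chat session, you can instead use llama.cpp's conversation mode (a sketch using the same repo and file flags):

```bash
llama-cli --hf-repo scb10x/llama3.2-typhoon2-t1-3b-research-preview-gguf --hf-file llama3.2-typhoon2-t1-3b-q4_k_m.gguf -cnv
```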

### Server:
```bash
llama-server --hf-repo scb10x/llama3.2-typhoon2-t1-3b-research-preview-gguf --hf-file llama3.2-typhoon2-t1-3b-q4_k_m.gguf -c 2048
```
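
With the server running, you can send it a request over HTTP via its native `/completion` endpoint (a minimal sketch, assuming the default address `127.0.0.1:8080`):

```bash
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is a typhoon?", "n_predict": 128}'
```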

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
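
For example, to enable CUDA on a Linux machine with an NVIDIA GPU (assumes the CUDA toolkit is already installed; run from inside the llama.cpp folder):

```bash
LLAMA_CURL=1 LLAMA_CUDA=1 make
```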

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo scb10x/llama3.2-typhoon2-t1-3b-research-preview-gguf --hf-file llama3.2-typhoon2-t1-3b-q4_k_m.gguf -p "หากแปลคำว่า \"ไต้ฝุ่น\" เป็นภาษาอังกฤษ ในคำที่ถูกแปลแล้วจะมีตัวอักษร \"o\" ทั้งหมดกี่ตัว"
```

or

```bash
./llama-server --hf-repo scb10x/llama3.2-typhoon2-t1-3b-research-preview-gguf --hf-file llama3.2-typhoon2-t1-3b-q4_k_m.gguf -c 2048
```
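
The server also exposes an OpenAI-compatible chat endpoint, so OpenAI-style clients can talk to it directly (again a sketch, assuming the default `127.0.0.1:8080`; llama-server serves whichever model it loaded, so the `model` field can be omitted):

```bash
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello! Can you introduce yourself in Thai?"}
    ]
  }'
```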