bartowski committed (verified)
Commit 83d0d52 · 1 Parent(s): 074b2b6

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -18,6 +18,8 @@ base_model: google/gemma-3-27b-it
  **Original model**: [gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it)<br>
  **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b4877](https://github.com/ggerganov/llama.cpp/releases/tag/b4877)<br>

+ Requires llama.cpp runtime v1.19.0
+
  ## Technical Details

  Supports a context length of 128k tokens, with a max output of 8192.
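For reference, a minimal sketch of running this quant with the context and output limits noted above, assuming `llama-cpp-python` is installed and the GGUF file has been downloaded locally; the model path and quant filename below are placeholders, not files shipped by this commit:

```python
from llama_cpp import Llama  # assumes llama-cpp-python is installed

# Placeholder path: point this at whichever quant file you downloaded.
llm = Llama(
    model_path="./gemma-3-27b-it-Q4_K_M.gguf",
    n_ctx=131072,     # 128k-token context window, per the README
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the Gemma 3 model family."}],
    max_tokens=8192,  # max output length noted in the README
)
print(out["choices"][0]["message"]["content"])
```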