---
base_model: Krystalan/DRT-8B
language:
  - en
  - zh
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
tags:
  - machine translation
  - O1-like model
  - Chat
  - llama-cpp
  - gguf-my-repo
---

# Triangle104/DRT-8B-Q4_K_S-GGUF

This model was converted to GGUF format from Krystalan/DRT-8B using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
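If you prefer to download the quantized file ahead of time instead of letting llama.cpp fetch it on first run, here is a minimal sketch using the `huggingface-cli` tool (this assumes the `huggingface_hub` package is installed; it is not part of the original card):

```bash
# Sketch: pre-download the quantized file with huggingface-cli
# (assumes: pip install huggingface_hub)
huggingface-cli download Triangle104/DRT-8B-Q4_K_S-GGUF \
  drt-8b-q4_k_s.gguf --local-dir .
```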


This repository contains the resources for our paper "DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought"

Updates:

- 2024.12.31: We updated our paper with more details and analyses. Check it out!
- 2024.12.31: We released the testing set of our work; please refer to data/test.jsonl.
- 2024.12.30: We released a new model checkpoint using Llama-3.1-8B-Instruct as the backbone, i.e., 🤗 DRT-o1-8B.
- 2024.12.24: We released our paper. Check it out!
- 2024.12.23: We released our model checkpoints: 🤗 DRT-o1-7B and 🤗 DRT-o1-14B.

If you find this work useful, please consider citing our paper:

```bibtex
@article{wang2024drt,
  title={DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought},
  author={Wang, Jiaan and Meng, Fandong and Liang, Yunlong and Zhou, Jie},
  journal={arXiv preprint arXiv:2412.17498},
  year={2024}
}
```


## Use with llama.cpp

Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
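Before pulling the model, you can confirm the install succeeded by checking that the binaries are on your PATH (a quick sanity check, not part of the original card):

```bash
# Sanity check: both binaries should print build/version info
llama-cli --version
llama-server --version
```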

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo Triangle104/DRT-8B-Q4_K_S-GGUF --hf-file drt-8b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
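Since DRT is a translation model, a more representative invocation is an interactive chat session. The `-cnv` flag is a standard llama-cli option for conversation mode; the prompt wording below is only an illustration and not the official DRT prompt format (see the original model card for that):

```bash
# Illustrative sketch: chat mode with a translation-oriented system prompt;
# the prompt text is an assumption, not the prompt used in DRT training
llama-cli --hf-repo Triangle104/DRT-8B-Q4_K_S-GGUF --hf-file drt-8b-q4_k_s.gguf \
  -cnv -p "You are a translation assistant. Translate the user's English text into Chinese."
```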

Server:

```bash
llama-server --hf-repo Triangle104/DRT-8B-Q4_K_S-GGUF --hf-file drt-8b-q4_k_s.gguf -c 2048
```
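Once the server is up (it listens on http://localhost:8080 by default), you can query its OpenAI-compatible chat endpoint. The request shape below follows the standard /v1/chat/completions contract; the message content is just an example:

```bash
# Example request to llama-server's OpenAI-compatible endpoint;
# default host/port shown, message content is illustrative
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Translate into Chinese: The night sky held its breath."}
    ],
    "temperature": 0.7
  }'
```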

Note: You can also use this checkpoint directly by following the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
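For example, a CUDA-enabled build on Linux would add the GPU flag mentioned above. The flag names follow the Makefile build referenced here; newer llama.cpp releases have moved to CMake, so adjust accordingly if the make targets are gone:

```bash
# Makefile build with CURL support and CUDA offload (NVIDIA GPUs on Linux);
# newer llama.cpp versions build with CMake instead
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```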

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Triangle104/DRT-8B-Q4_K_S-GGUF --hf-file drt-8b-q4_k_s.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Triangle104/DRT-8B-Q4_K_S-GGUF --hf-file drt-8b-q4_k_s.gguf -c 2048
```