|
|
|
--- |
|
|
|
language:
- en
license: llama3.1
tags:
- fireplace
- fireplace-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- function-calling
- sql
- database
- data-visualization
- matplotlib
- json
- conversational
- chat
- instruct
pipeline_tag: text-generation
model_type: llama
model-index:
- name: Llama3.1-8B-Fireplace2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 54.83
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 24.07
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.82
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.15
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.38
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 15.63
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.1-8B-Fireplace2
      name: Open LLM Leaderboard
|
|
|
--- |
|
|
|
 |
|
|
|
# QuantFactory/Llama3.1-8B-Fireplace2-GGUF |
|
This is a quantized version of [ValiantLabs/Llama3.1-8B-Fireplace2](https://huggingface.co/ValiantLabs/Llama3.1-8B-Fireplace2), created using llama.cpp.
|
|
|
# Original Model Card |
|
|
|
|
|
|
|
 |
|
|
|
|
|
Fireplace 2 is a chat model, adding helpful structured outputs to Llama 3.1 8b Instruct. |
|
- An expansion pack of supplementary outputs - request them at will within your chat:
  - Inline function calls
  - SQL queries
  - JSON objects
  - Data visualization with matplotlib
- Mix normal chat and structured outputs within the same conversation.
- Fireplace 2 supplements the existing strengths of Llama 3.1, providing inline capabilities within the Llama 3 Instruct format.
|
|
|
|
|
## Version |
|
|
|
This is the **2024-07-23** release of Fireplace 2 for Llama 3.1 8b. |
|
|
|
We're excited to bring further upgrades and releases to Fireplace 2 in the future. |
|
|
|
Help us and recommend Fireplace 2 to your friends! |
|
|
|
|
|
## Prompting Guide |
|
Fireplace uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) prompt format. The example script below can be used as a starting point for general chat with Llama 3.1 and also includes the different special tokens used for Fireplace 2's added features: |
|
|
|
|
|
```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-Fireplace2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Fireplace, an expert technical assistant."},
    {"role": "user", "content": "Hi, can you explain local area networking to me?"},  # general Llama 3.1 chat
    # {"role": "user", "content": "I have the following SQL table: employees (job_id VARCHAR, salary INTEGER)\n\nCan you find all employees with a salary above $75000?<|request_sql|>"},  # for SQL query
    # {"role": "user", "content": "{\"name\": \"get_news_headlines\", \"description\": \"Get the latest news headlines\", \"parameters\": {\"type\": \"object\", \"properties\": {\"country\": {\"type\": \"string\", \"description\": \"The country for which news headlines are to be retrieved\"}}, \"required\": [\"country\"]}}\n\nHi, can you get me the latest news headlines for the United States?<|request_function_call|>"},  # for function call
    # {"role": "user", "content": "Show me an example of a histogram with a fixed bin size. Use attractive colors.<|request_matplotlib|>"},  # for data visualization
    # {"role": "user", "content": "Can you define the word 'presence' for me, thanks!<|request_json|>"},  # for JSON output
]

outputs = pipeline(
    messages,
    max_new_tokens=512,
)
print(outputs[0]["generated_text"][-1])
```
|
|
|
|
|
While Fireplace 2 is trained to minimize incorrect structured outputs, they can still occur occasionally. Production uses of Fireplace 2 should verify the structure of all model outputs and remove any unneeded components of the output. |
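One way to perform that verification is to extract the payload between the start and end special tokens and parse it before passing it downstream. This is a minimal sketch (not part of the official model card); the `extract_json_output` helper name is our own, and the delimiter tokens are the `<|start_json|>` and `<|end_json|>` special tokens documented later in this card:

```python
import json
import re

def extract_json_output(text: str):
    """Pull the JSON payload out of a model response and validate it.

    Returns the parsed object, or None if the delimiters are missing
    or the payload is not valid JSON.
    """
    match = re.search(r"<\|start_json\|>(.*?)<\|end_json\|>", text, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(1).strip())
    except json.JSONDecodeError:
        return None

# A well-formed response parses; a truncated one is rejected.
good = '<|start_json|>{"word": "presence", "definition": "the state of existing in a place"}<|end_json|>'
bad = '<|start_json|>{"word": "presence"'  # missing end token
```

The same pattern applies to the SQL, matplotlib, and function-call delimiters: reject any output where the start/end pair is incomplete rather than forwarding it to a database or interpreter.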
|
|
|
For handling of function call responses, use the [Llama 3.1 Instruct tool response style.](https://huggingface.co/blog/llama31#custom-tool-calling) |
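As a rough illustration of that round trip, the sketch below appends a function result back into the conversation so the model can summarize it. This is an assumption-laden example, not the card's official recipe: the `"tool"` role name follows the transformers chat-template convention, and `append_tool_response` plus the example payloads are hypothetical — adjust to match your serving stack's template.

```python
def append_tool_response(messages, tool_name, tool_result):
    """Append a tool/function result message so the model can use it."""
    messages.append({
        "role": "tool",          # role name per transformers chat templates; may differ per stack
        "name": tool_name,
        "content": tool_result,  # raw string result from executing the call
    })
    return messages

messages = [
    {"role": "system", "content": "You are Fireplace, an expert technical assistant."},
    {"role": "user", "content": "Hi, can you get me the latest news headlines for the United States?<|request_function_call|>"},
    # model's structured function call, wrapped in Fireplace 2's special tokens:
    {"role": "assistant", "content": '<|start_function_call|>{"name": "get_news_headlines", "arguments": {"country": "United States"}}<|end_function_call|>'},
]
append_tool_response(messages, "get_news_headlines", '{"headlines": ["Example headline"]}')
```

After appending the tool response, pass `messages` back through the pipeline to get a natural-language summary of the result.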
|
|
|
|
|
## Special Tokens |
|
|
|
Fireplace 2 uses special tokens added to the Llama 3.1 tokenizer:
|
|
|
- `<|request_json|>`
- `<|start_json|>`
- `<|end_json|>`
- `<|request_sql|>`
- `<|start_sql|>`
- `<|end_sql|>`
- `<|request_matplotlib|>`
- `<|start_matplotlib|>`
- `<|end_matplotlib|>`
- `<|request_function_call|>`
- `<|start_function_call|>`
- `<|end_function_call|>`
|
|
|
These are supplemental to the existing special tokens used by Llama 3.1, such as `<|python_tag|>` and `<|start_header_id|>`. Fireplace 2 has been trained using the Llama 3.1 Instruct chat structure, with new special tokens added within the conversation.
|
|
|
The 'request' tokens are used by the user to request a specific type of structured output. They should be appended to the end of the user's message and can be alternated with normal chat responses throughout the conversation. |
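Appending a request token can be sketched with a small helper like the one below. The `with_request` function and `REQUEST_TOKENS` mapping are our own illustrative names (not part of the model's API); only the special tokens themselves come from the card:

```python
# Map of output kinds to Fireplace 2's request tokens (from this card).
REQUEST_TOKENS = {
    "json": "<|request_json|>",
    "sql": "<|request_sql|>",
    "matplotlib": "<|request_matplotlib|>",
    "function_call": "<|request_function_call|>",
}

def with_request(content: str, kind: str) -> dict:
    """Build a user message requesting a specific structured output kind."""
    return {"role": "user", "content": content + REQUEST_TOKENS[kind]}

msg = with_request("Can you define the word 'presence' for me, thanks!", "json")
```

A plain `{"role": "user", "content": ...}` message with no request token gets a normal chat response, so the two styles can be freely interleaved.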
|
|
|
|
|
## The Model |
|
Fireplace 2 is built on top of Llama 3.1 8b Instruct. |
|
|
|
This version of Fireplace 2 uses data from the following datasets: |
|
|
|
- [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) |
|
- [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context) |
|
- [sequelbox/Cadmium](https://huggingface.co/datasets/sequelbox/Cadmium) |
|
- [sequelbox/Harlequin](https://huggingface.co/datasets/sequelbox/Harlequin) |
|
- [migtissera/Tess-v1.5](https://huggingface.co/datasets/migtissera/Tess-v1.5) |
|
- [LDJnr/Pure-Dove](https://huggingface.co/datasets/LDJnr/Pure-Dove) |
|
|
|
Additional capabilities will be added to future releases. |
|
|
|
|
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ValiantLabs__Llama3.1-8B-Fireplace2) |
|
|
|
| Metric |Value| |
|
|-------------------|----:| |
|
|Avg. |18.31| |
|
|IFEval (0-Shot) |54.83| |
|
|BBH (3-Shot) |24.07| |
|
|MATH Lvl 5 (4-Shot)| 5.82| |
|
|GPQA (0-shot) | 5.15| |
|
|MuSR (0-shot) | 4.38| |
|
|MMLU-PRO (5-shot) |15.63| |
|
|
|
|
|
 |
|
|
|
|
|
Fireplace 2 is created by [Valiant Labs.](http://valiantlabs.ca/) |
|
|
|
[Check out our HuggingFace page for Shining Valiant 2 and our other models!](https://huggingface.co/ValiantLabs) |
|
|
|
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs) |
|
|
|
We care about open source. |
|
For everyone to use. |
|
|
|
We encourage others to finetune further from our models. |
|
|
|
|
|
|