Update README.md
README.md CHANGED
@@ -9,7 +9,7 @@ metrics:
 - accuracy
 pipeline_tag: text-generation
 model-index:
-- name: Latxa-Llama-3.1-70B-Instruct
+- name: Latxa-Llama-3.1-70B-Instruct-FP8
   results:
   - task:
       type: multiple-choice
@@ -68,7 +68,7 @@ model-index:
 
 license: llama3.1
 base_model:
--
+- HiTZ/Latxa-Llama-3.1-70B-Instruct
 
 co2_eq_emissions:
   emissions: 1900800
@@ -80,12 +80,15 @@ co2_eq_emissions:
 quantized_by: HiTZ
 ---
 
-# Model Card for HiTZ/Latxa-Llama-3.1-70B-Instruct
+# Model Card for HiTZ/Latxa-Llama-3.1-70B-Instruct-FP8
 
 <p align="center">
 <img src="https://github.com/hitz-zentroa/latxa/blob/b9aa705f60ee2cc03c9ed62fda82a685abb31b07/assets/latxa_round.png?raw=true" style="height: 350px;">
 </p>
 
+> [!IMPORTANT]
+> This is a FP8 quantized version of the original Latxa 3.1 70B Instruct.
+
 We introduce Latxa 3.1 70B Instruct, an instructed version of [Latxa](https://aclanthology.org/2024.acl-long.799/). This new Latxa is based on Llama-3.1 (Instruct), which we trained on our Basque corpus (Etxaniz et al., 2024) comprising 4.3M documents and 4.2B tokens using language adaptation techniques (paper in preparation).
 > [!WARNING]
 > DISCLAIMER
@@ -123,7 +126,7 @@ Use the code below to get started with the model.
 ```python
 from transformers import pipeline
 
-pipe = pipeline('text-generation', model='HiTZ/Latxa-Llama-3.1-70B-Instruct')
+pipe = pipeline('text-generation', model='HiTZ/Latxa-Llama-3.1-70B-Instruct-FP8')
 
 messages = [
     {'role': 'user', 'content': 'Kaixo!'},
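For reference, the getting-started snippet shown in the diff is truncated after the first chat message. A minimal end-to-end sketch of the updated example could look like the following; the `max_new_tokens` value, the generation call, and the way the assistant reply is extracted are illustrative assumptions rather than part of the model card.

```python
from transformers import pipeline

# Text-generation pipeline pointed at the FP8-quantized checkpoint
# (a 70B model still needs substantial GPU memory, even in FP8).
pipe = pipeline('text-generation', model='HiTZ/Latxa-Llama-3.1-70B-Instruct-FP8')

# Chat-style input: recent transformers pipelines accept a list of messages
# and apply the model's chat template automatically.
messages = [
    {'role': 'user', 'content': 'Kaixo!'},
]

# Generate a reply; max_new_tokens is an illustrative choice, not taken from the card.
outputs = pipe(messages, max_new_tokens=128)

# The pipeline returns the conversation with the assistant's reply appended.
print(outputs[0]['generated_text'][-1]['content'])
```

Passing the message list directly (rather than a pre-formatted prompt string) relies on the chat-template support available in recent transformers releases.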