## Download the model from Hugging Face
To use the model, you can run it with TorchTune commands. I have provided the necessary Python code to automate the process. Follow these steps to get started:

1. Download the fine-tuned `meta_model_0.pt` file (see the **Files** tab on this page).
2. Save the model file in the following directory: `/home/USERNAME/Meta-Llama-3-8B/`

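The two steps above can be automated with the `huggingface_hub` library. This is a minimal sketch: `REPO_ID` is a placeholder for this model's actual Hugging Face repository id, which you should substitute before running.

```python
# Sketch: fetch the fine-tuned checkpoint with huggingface_hub.
# REPO_ID is a placeholder -- replace it with this model's real repo id.
from pathlib import Path

REPO_ID = "YOUR_USERNAME/YOUR_MODEL_REPO"  # placeholder, not the real repo id
FILENAME = "meta_model_0.pt"

def checkpoint_dir(username: str) -> Path:
    """Directory the steps above expect the checkpoint to live in."""
    return Path("/home") / username / "Meta-Llama-3-8B"

if __name__ == "__main__":
    # Imported here so the helper above stays importable without the package.
    from huggingface_hub import hf_hub_download

    dest = checkpoint_dir("USERNAME")
    dest.mkdir(parents=True, exist_ok=True)
    hf_hub_download(repo_id=REPO_ID, filename=FILENAME, local_dir=str(dest))
```

Passing `local_dir` makes `hf_hub_download` place the file directly in the target directory instead of the shared Hugging Face cache.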
## Using the model
We fine-tuned the `Meta-Llama-3-8B` model in two key steps: preparing the dataset and executing the fine-tuning process.
### 1. Prepare the Dataset
For this fine-tuning, we used the [QueryBridge dataset](https://huggingface.co/datasets/aorogat/QueryBridge), specifically the pairs of questions and their corresponding tagged questions. Before this dataset can be used, the data must be converted into instruction prompts suitable for fine-tuning the model. You can find these prompts at [this link](https://huggingface.co/datasets/aorogat/Questions_to_Tagged_Questions_Prompts). Download the prompts and save them in the directory `/home/YOUR_USERNAME/data`.
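The conversion step can be sketched as follows. Note this is an illustration, not the exact script we used: the field names (`question`, `tagged_question`) and the prompt template are assumptions about the dataset's layout.

```python
# Sketch of the question -> instruction-prompt conversion. Field names and
# the prompt template are assumptions; adjust them to the actual dataset.
import json

PROMPT_TEMPLATE = (
    "Tag the entities, relations, and literals in the following question.\n"
    "Question: {question}"
)

def to_instruct_record(pair: dict) -> dict:
    """Turn one question/tagged-question pair into an instruct-style record."""
    return {
        "instruction": PROMPT_TEMPLATE.format(question=pair["question"]),
        "output": pair["tagged_question"],
    }

def convert(pairs: list[dict]) -> str:
    """Serialize all pairs as JSON Lines, one prompt record per line."""
    return "\n".join(json.dumps(to_instruct_record(p)) for p in pairs)
```

Writing the result as JSON Lines keeps one self-contained training example per line, which most fine-tuning data loaders accept directly.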
### 2. Fine-Tune the Model
To fine-tune the `Meta-Llama-3-8B` model, we leveraged [Torchtune](https://pytorch.org/torchtune/stable/index.html). Follow these steps to complete the process:
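A rough sketch of such a run is below, using torchtune's built-in LoRA recipe and config names; the exact recipe, config, and paths we used may differ, so treat this as a starting point rather than the definitive command sequence.

```shell
# Rough sketch of a torchtune fine-tuning run (built-in recipe/config
# names; adjust paths and config overrides to your setup).
pip install torchtune

# Fetch the base model weights (requires Hugging Face access to the repo).
tune download meta-llama/Meta-Llama-3-8B \
  --output-dir /home/USERNAME/Meta-Llama-3-8B

# Run a single-device LoRA fine-tune with a built-in config.
tune run lora_finetune_single_device \
  --config llama3/8B_lora_single_device
```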