Rahman Azhar committed
Commit · 69ec99f
Parent(s): 0

Initial commit: LLaMA-based itinerary generator

Files changed:
- .gitattributes        +35 -0
- README.md             +111 -0
- config/config.json    +25 -0
- data/itineraries.json +23 -0
- requirements.txt      +11 -0
- sample_model.txt      +4 -0
- src/generate.py       +87 -0
- src/train.py          +102 -0

.gitattributes
ADDED
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

README.md
ADDED
@@ -0,0 +1,111 @@
# LLaMA Itinerary Generator

A custom fine-tuned version of LLaMA-2 for generating detailed travel itineraries.

## Prerequisites

1. Access to the LLaMA-2 model on Hugging Face (requires approval from Meta)
2. Python 3.8 or higher
3. CUDA-capable GPU with at least 16GB VRAM
4. Hugging Face account and token

## Setup

1. Create a virtual environment:
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

2. Install dependencies:
```bash
pip install -r requirements.txt
```

3. Configure your Hugging Face token:
```bash
huggingface-cli login
```

## Project Structure

```
.
├── config/
│   └── config.json       # Training configuration
├── data/
│   └── itineraries.json  # Training data
└── src/
    ├── generate.py       # Generation script
    └── train.py          # Training script
```

## Training Data

The training data in `data/itineraries.json` contains examples of travel itineraries with the following structure:
- Destination
- Duration
- Preferences
- Budget
- Detailed day-by-day itinerary
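
For reference, a single record from `data/itineraries.json` (as added in this commit, with the itinerary string abridged here) looks like:
```json
{
  "destination": "Bali, Indonesia",
  "duration": 5,
  "preferences": "Beach activities, cultural experiences, local cuisine",
  "budget": "$1000",
  "itinerary": "Day 1:\n- Arrive in Denpasar International Airport\n- Check-in at hotel in Seminyak\n..."
}
```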

## Training the Model

1. Review and adjust the configuration in `config/config.json` if needed (note that `src/train.py` currently hardcodes the same defaults; see the loader sketch after this section).

2. Start training:
```bash
python src/train.py
```

The script will:
- Load the LLaMA-2 base model
- Fine-tune it on the itinerary dataset
- Save checkpoints during training
- Export the final model
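
If you want `config/config.json` to drive training instead of the hardcoded defaults, a minimal loader along these lines could be used (an illustrative sketch, not part of this commit; it assumes the repository root is on `PYTHONPATH` so `src.train` is importable):
```python
import json

from src.train import train_itinerary_model  # assumes src/ is importable from the repo root

# Read the hyperparameters defined in config/config.json
with open("config/config.json") as f:
    cfg = json.load(f)

model_cfg = cfg["model_config"]
train_cfg = cfg["training_config"]
data_cfg = cfg["data_config"]

# Forward only the values train_itinerary_model() currently accepts
train_itinerary_model(
    model_name=model_cfg["base_model"],
    data_path=data_cfg["train_file"],
    output_dir=train_cfg["output_dir"],
    num_epochs=model_cfg["num_epochs"],
    batch_size=model_cfg["batch_size"],
    learning_rate=model_cfg["learning_rate"],
)
```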

## Model Details

This model is fine-tuned to generate travel itineraries based on:
- Destination
- Duration of stay
- Travel preferences
- Budget constraints

The model learns to:
- Structure day-by-day itineraries
- Balance activities based on preferences
- Consider budget constraints
- Include practical details like transportation and check-in/out

## Output Format

The model generates itineraries in a structured format:
```
Day 1:
- Activity 1
- Activity 2
...

Day 2:
- Activity 1
- Activity 2
...
```
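
This commit also adds `src/generate.py` for producing itineraries from a fine-tuned checkpoint. A typical invocation (the flags come from the script's argparse definitions; the model path and output file name are illustrative, and the single quotes keep the shell from expanding `$1000`) looks like:
```bash
python src/generate.py \
  --model_path output \
  --destination "Bali, Indonesia" \
  --duration 5 \
  --preferences "Beach activities, local cuisine" \
  --budget '$1000' \
  --output itinerary.json
```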

## Monitoring Training

Training progress can be monitored using TensorBoard:
```bash
tensorboard --logdir output/runs
```

## Model Deployment

After training, the model will be saved in the `output` directory. You can upload it to the Hugging Face Hub using:
```bash
huggingface-cli upload rahmanazhar/Travereel-Model-V1 output/
```
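
Alternatively, the same directory can be pushed from Python with `huggingface_hub` (installed alongside `transformers`); a sketch, assuming you are already logged in via `huggingface-cli login`:
```python
from huggingface_hub import HfApi

# Push the contents of the local output/ directory to the model repository
api = HfApi()
api.upload_folder(
    folder_path="output",
    repo_id="rahmanazhar/Travereel-Model-V1",
    repo_type="model",
)
```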

## License

This project uses LLaMA 2, which is licensed under the LLAMA 2 Community License Agreement.

config/config.json
ADDED
@@ -0,0 +1,25 @@
{
  "model_config": {
    "base_model": "meta-llama/Llama-2-7b-hf",
    "max_length": 512,
    "learning_rate": 2e-5,
    "num_epochs": 3,
    "batch_size": 4,
    "gradient_accumulation_steps": 4,
    "warmup_steps": 100
  },
  "training_config": {
    "output_dir": "output",
    "logging_steps": 10,
    "save_steps": 100,
    "evaluation_strategy": "steps",
    "eval_steps": 100,
    "save_total_limit": 3,
    "fp16": true
  },
  "data_config": {
    "train_file": "data/itineraries.json",
    "validation_split": 0.1,
    "max_train_samples": null
  }
}

data/itineraries.json
ADDED
@@ -0,0 +1,23 @@
[
  {
    "destination": "Bali, Indonesia",
    "duration": 5,
    "preferences": "Beach activities, cultural experiences, local cuisine",
    "budget": "$1000",
    "itinerary": "Day 1:\n- Arrive in Denpasar International Airport\n- Check-in at hotel in Seminyak\n- Evening walk on Seminyak Beach\n- Dinner at local warung\n\nDay 2:\n- Morning yoga session\n- Visit Tanah Lot Temple\n- Sunset dinner at Jimbaran Beach\n\nDay 3:\n- Day trip to Ubud\n- Visit Monkey Forest\n- Traditional dance performance\n- Art market shopping\n\nDay 4:\n- Nusa Penida island tour\n- Snorkeling at Crystal Bay\n- Visit Kelingking Beach\n- Local seafood dinner\n\nDay 5:\n- Morning spa treatment\n- Visit Uluwatu Temple\n- Final beach sunset\n- Departure"
  },
  {
    "destination": "Tokyo, Japan",
    "duration": 4,
    "preferences": "Modern attractions, traditional culture, food exploration",
    "budget": "$1500",
    "itinerary": "Day 1:\n- Arrive at Narita Airport\n- Check-in at hotel in Shinjuku\n- Evening exploration of Shinjuku\n- Dinner at Robot Restaurant\n\nDay 2:\n- Tsukiji Outer Market tour\n- Visit Senso-ji Temple\n- Explore Akihabara\n- Sushi-making class\n\nDay 3:\n- TeamLab Borderless\n- Harajuku shopping\n- Meiji Shrine visit\n- Shibuya Crossing & dinner\n\nDay 4:\n- Morning at Ueno Park\n- Tokyo Sky Tree\n- Final shopping\n- Departure"
  },
  {
    "destination": "Paris, France",
    "duration": 3,
    "preferences": "Art, history, romantic spots",
    "budget": "$1200",
    "itinerary": "Day 1:\n- Arrive at Charles de Gaulle Airport\n- Check-in at hotel near Le Marais\n- Visit Eiffel Tower\n- Seine River dinner cruise\n\nDay 2:\n- Louvre Museum morning\n- Notre-Dame Cathedral view\n- Montmartre walk\n- Evening at Moulin Rouge\n\nDay 3:\n- Palace of Versailles\n- Shopping at Champs-Élysées\n- Arc de Triomphe sunset\n- Departure"
  }
]

requirements.txt
ADDED
@@ -0,0 +1,11 @@
torch>=2.0.0
transformers>=4.30.0
datasets>=2.12.0
accelerate>=0.20.0
bitsandbytes>=0.39.0
trl>=0.4.7
peft>=0.4.0
evaluate>=0.4.0
tensorboard>=2.13.0
scipy>=1.10.0
scikit-learn>=1.2.2

sample_model.txt
ADDED
@@ -0,0 +1,4 @@
This is a sample model file for testing Hugging Face upload functionality.
Model Name: Travereel-Model-V1
Version: 1.0
Type: Test Model

src/generate.py
ADDED
@@ -0,0 +1,87 @@
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
import argparse
import json

class ItineraryGenerator:
    def __init__(self, model_path: str):
        self.tokenizer = LlamaTokenizer.from_pretrained(model_path)
        self.model = LlamaForCausalLM.from_pretrained(
            model_path,
            torch_dtype=torch.float16,
            device_map="auto"
        )
        self.model.eval()

    def generate_itinerary(
        self,
        destination: str,
        duration: int,
        preferences: str,
        budget: str,
        max_length: int = 1024,
        temperature: float = 0.7,
        top_p: float = 0.9,
    ) -> str:
        prompt = f"""Generate a detailed travel itinerary for {destination} for {duration} days.
Preferences: {preferences}
Budget: {budget}

Detailed Itinerary:"""

        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)

        with torch.no_grad():
            outputs = self.model.generate(
                **inputs,
                max_length=max_length,
                do_sample=True,  # sampling must be enabled for temperature/top_p to take effect
                temperature=temperature,
                top_p=top_p,
                num_return_sequences=1,
                pad_token_id=self.tokenizer.eos_token_id
            )

        generated_text = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
        # Extract only the generated itinerary part
        itinerary = generated_text[len(prompt):]
        return itinerary.strip()

def main():
    parser = argparse.ArgumentParser(description="Generate travel itineraries using fine-tuned LLaMA model")
    parser.add_argument("--model_path", type=str, required=True, help="Path to the fine-tuned model")
    parser.add_argument("--destination", type=str, required=True, help="Travel destination")
    parser.add_argument("--duration", type=int, required=True, help="Number of days")
    parser.add_argument("--preferences", type=str, required=True, help="Travel preferences")
    parser.add_argument("--budget", type=str, required=True, help="Travel budget")
    parser.add_argument("--output", type=str, help="Output file path (optional)")

    args = parser.parse_args()

    generator = ItineraryGenerator(args.model_path)

    itinerary = generator.generate_itinerary(
        destination=args.destination,
        duration=args.duration,
        preferences=args.preferences,
        budget=args.budget
    )

    output = {
        "destination": args.destination,
        "duration": args.duration,
        "preferences": args.preferences,
        "budget": args.budget,
        "generated_itinerary": itinerary
    }

    if args.output:
        with open(args.output, 'w') as f:
            json.dump(output, f, indent=2)
        print(f"Itinerary saved to {args.output}")
    else:
        print("\nGenerated Itinerary:")
        print("=" * 50)
        print(itinerary)

if __name__ == "__main__":
    main()

src/train.py
ADDED
@@ -0,0 +1,102 @@
import torch
from transformers import (
    LlamaForCausalLM,
    LlamaTokenizer,
    TrainingArguments,
    Trainer,
    DataCollatorForLanguageModeling
)
from datasets import load_dataset
import os
import json
from typing import Dict, List

class ItineraryDataset(torch.utils.data.Dataset):
    def __init__(self, data_path: str, tokenizer, max_length: int = 512):
        self.tokenizer = tokenizer
        self.max_length = max_length
        self.examples = self._load_data(data_path)

    def _load_data(self, data_path: str) -> List[Dict]:
        with open(data_path, 'r') as f:
            return json.load(f)

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        example = self.examples[idx]
        prompt = f"""Generate a detailed travel itinerary for {example['destination']} for {example['duration']} days.
Preferences: {example['preferences']}
Budget: {example['budget']}"""

        target = example['itinerary']

        # Combine prompt and target with special tokens
        combined = f"{prompt}\n{target}</s>"

        # Tokenize
        encodings = self.tokenizer(
            combined,
            truncation=True,
            max_length=self.max_length,
            padding="max_length",
            return_tensors="pt"
        )

        return {
            "input_ids": encodings["input_ids"][0],
            "attention_mask": encodings["attention_mask"][0],
            "labels": encodings["input_ids"][0].clone()
        }

def train_itinerary_model(
    model_name: str = "meta-llama/Llama-2-7b-hf",
    data_path: str = "data/itineraries.json",
    output_dir: str = "output",
    num_epochs: int = 3,
    batch_size: int = 4,
    learning_rate: float = 2e-5,
):
    # Initialize tokenizer and model
    tokenizer = LlamaTokenizer.from_pretrained(model_name)
    # LLaMA's tokenizer defines no pad token by default; reuse EOS so padding="max_length" works
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = LlamaForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="auto"
    )

    # Load dataset
    dataset = ItineraryDataset(data_path, tokenizer)

    # Training arguments
    training_args = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=num_epochs,
        per_device_train_batch_size=batch_size,
        gradient_accumulation_steps=4,
        learning_rate=learning_rate,
        warmup_steps=100,
        logging_steps=10,
        save_steps=100,
        fp16=True,
        report_to="tensorboard"
    )

    # Initialize trainer
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)
    )

    # Train the model
    trainer.train()

    # Save the model
    trainer.save_model()
    tokenizer.save_pretrained(output_dir)

if __name__ == "__main__":
    train_itinerary_model()