magistermilitum committed on
Commit 98cb8e2 · verified · 1 Parent(s): 5143237

Update README.md

Files changed (1)
  1. README.md +258 -5
README.md CHANGED
@@ -62,14 +62,16 @@ size_categories:
  ---


- This is the first dataset version of the corpora used in **TRIDIS** (*Tria Digita Scribunt*) which is a series of Handwriting Text Recognition models trained on semi-diplomatic transcriptions
- from medieval and Early Modern Manuscripts.

- Semi-diplomatic transcription paradigm involves resolving abbreviations used originally in manuscripts and normalizing Punctuation and Allographs.

- The dataset involves 4k pages of manuscripts and is suitable for work on documentary manuscripts, that is, manuscripts arising from legal, administrative, and memorial practices such as registers, feudal books, charters, proceedings, comptability more commonly from the Late Middle Ages (13th century and onwards).

- The dataset covers Western Europe areas (Spain, France and Germany mostly) spanning from the 12th to the 17th centuries.

  #### Corpora
  The original ground-truth corpora are available under CC BY licenses on online repositories:
@@ -95,4 +97,255 @@ There is a pre-print presenting this corpus:
  journal={arXiv preprint arXiv:2503.22714},
  year={2025}
  }
  ```

  ---


+ This is the first version of the dataset derived from the corpora used for **TRIDIS** (*Tria Digita Scribunt*).

+ TRIDIS encompasses a series of Handwriting Text Recognition (HTR) models trained using semi-diplomatic transcriptions of medieval and early modern manuscripts.

+ The semi-diplomatic transcription approach involves resolving abbreviations found in the original manuscripts and normalizing punctuation and allographs.
+
+ The dataset contains approximately 4,000 pages of manuscripts and is particularly suitable for working with documentary sources – manuscripts originating from legal, administrative, and memorial practices. Examples include registers, feudal books, charters, proceedings, and accounting records, primarily dating from the Late Middle Ages (13th century onwards).
+
+ The dataset covers Western European regions (mainly Spain, France, and Germany) and spans the 12th to the 17th centuries.
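
As a rough illustration of the semi-diplomatic conventions described above, the sketch below shows what resolving an abbreviation and normalizing an allograph can look like in practice. The lookup table and word forms are invented for illustration only and are not taken from the dataset.

```python
# Illustrative only: a toy normalization in the spirit of semi-diplomatic transcription.
# The abbreviation table and allograph mapping are invented examples, not dataset rules.
ABBREVIATIONS = {
    "dns": "dominus",   # hypothetical resolved abbreviation
    "epo": "episcopo",  # hypothetical resolved abbreviation
}
ALLOGRAPHS = str.maketrans({"ſ": "s"})  # e.g. long s rendered as a modern round s

def normalize(token: str) -> str:
    """Resolve a known abbreviation, then normalize allographs."""
    return ABBREVIATIONS.get(token, token).translate(ALLOGRAPHS)

print(normalize("dns"))       # -> dominus
print(normalize("ſcriptum"))  # -> scriptum
```
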


  #### Corpora
  The original ground-truth corpora are available under CC BY licenses on online repositories:
 
  journal={arXiv preprint arXiv:2503.22714},
  year={2025}
  }
+ ```
+
+ ### How to Get Started with this Dataset
+ Use the following Python code to fine-tune a TrOCR model on the TRIDIS dataset:
+
+ ```python
+ # Use transformers==4.43.0
+ # Note: data augmentation is omitted here but strongly recommended.
+
+ import torch
+ from PIL import Image
+
+ import torchvision.transforms as transforms
+ from torch.utils.data import Dataset
+ from datasets import load_dataset  # Import load_dataset
+ from transformers import (
+     AutoFeatureExtractor,
+     AutoTokenizer,
+     TrOCRProcessor,
+     VisionEncoderDecoderModel,
+     Seq2SeqTrainer,
+     Seq2SeqTrainingArguments,
+     default_data_collator,
+ )
+ from evaluate import load
+
+ # --- Dataset loading and preparation ---
+
+ # Load the dataset from Hugging Face
+ dataset = load_dataset("magistermilitum/Tridis")
+ print("Dataset loaded.")
+
+ # Initialize the processor
+ # Use the specific processor associated with the TrOCR model
+ processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")  # or the large version for better performance
+ print("Processor loaded.")
+
+ # --- Custom Dataset Modified for Deferred Loading (No Augmentation) ---
+ class CustomDataset(Dataset):
+     def __init__(self, hf_dataset, processor, max_target_length=160):
+         """
+         Args:
+             hf_dataset: The dataset loaded by Hugging Face (datasets.Dataset).
+             processor: The TrOCR processor.
+             max_target_length: Maximum length for the target labels.
+         """
+         self.hf_dataset = hf_dataset
+         self.processor = processor
+         self.max_target_length = max_target_length
+
+         # --- EFFICIENT FILTERING ---
+         # Filter here to know the actual length and avoid processing invalid samples in __getitem__
+         # Use indices to maintain the efficiency of accessing the original dataset
+         self.valid_indices = [
+             i for i, text in enumerate(self.hf_dataset["text"])
+             if isinstance(text, str) and 3 < len(text) < 257  # Filter based on text length
+         ]
+         print(f"Dataset filtered. Valid samples: {len(self.valid_indices)} / {len(self.hf_dataset)}")
+
+     def __len__(self):
+         # The length is the number of valid indices after filtering
+         return len(self.valid_indices)
+
+     def __getitem__(self, idx):
+         # Get the original index in the Hugging Face dataset
+         original_idx = self.valid_indices[idx]
+
+         # Load the specific sample from the Hugging Face dataset
+         item = self.hf_dataset[original_idx]
+         image = item["image"]
+         text = item["text"]
+
+         # Ensure the image is PIL and RGB
+         if not isinstance(image, Image.Image):
+             # If not PIL (rare with load_dataset, but for safety)
+             # Assume it can be loaded by PIL or is a numpy array
+             try:
+                 image = Image.fromarray(image).convert("RGB")
+             except Exception:
+                 # Fallback or error handling if conversion fails
+                 print(f"Error converting image at original index {original_idx}. Using placeholder.")
+                 # Returning a placeholder might be better handled by the collator or skipping.
+                 # For now, repeating the first valid sample as a placeholder (not ideal).
+                 item = self.hf_dataset[self.valid_indices[0]]
+                 image = item["image"].convert("RGB")
+                 text = item["text"]
+         else:
+             image = image.convert("RGB")
+
+         # Process the image using the TrOCR processor
+         try:
+             # The processor handles resizing and normalization
+             pixel_values = self.processor(images=image, return_tensors="pt").pixel_values
+         except Exception as e:
+             print(f"Error processing image at original index {original_idx}: {e}. Using placeholder.")
+             # Create a black placeholder tensor if processing fails
+             # Ensure the size matches the expected input size for the model
+             img_size = self.processor.feature_extractor.size
+             # Check if size is defined as int or dict/tuple
+             if isinstance(img_size, int):
+                 h = w = img_size
+             elif isinstance(img_size, dict) and 'height' in img_size and 'width' in img_size:
+                 h = img_size['height']
+                 w = img_size['width']
+             elif isinstance(img_size, (tuple, list)) and len(img_size) == 2:
+                 h, w = img_size
+             else:  # Default fallback size if uncertain
+                 h, w = 384, 384  # Common TrOCR size, adjust if needed
+             pixel_values = torch.zeros((3, h, w))
+
+         # Tokenize the text
+         labels = self.processor.tokenizer(
+             text,
+             padding="max_length",
+             max_length=self.max_target_length,
+             truncation=True  # Important to add truncation just in case
+         ).input_ids
+
+         # Replace pad tokens with -100 to ignore them in the loss function
+         labels = [label if label != self.processor.tokenizer.pad_token_id else -100
+                   for label in labels]
+
+         encoding = {
+             # .squeeze() removes dimensions of size 1, necessary as we process one image at a time
+             "pixel_values": pixel_values.squeeze(),
+             "labels": torch.tensor(labels),
+         }
+         return encoding
+
+ # --- Create Instances of the Modified Dataset ---
+ # Pass the Hugging Face dataset directly
+ train_dataset = CustomDataset(dataset["train"], processor)
+ eval_dataset = CustomDataset(dataset["validation"], processor)
+
+ print(f"\nNumber of training examples (valid and filtered): {len(train_dataset)}")
+ print(f"Number of validation examples (valid and filtered): {len(eval_dataset)}")
+
+ # --- End of dataset loading and preparation ---
+
+ # Load the pretrained model
+ print("\nLoading pre-trained model...")
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
+ model.to(device)
+ print(f"Model loaded on: {device}")
+
+ # Configure the model for fine-tuning
+ print("Configuring model...")
+ model.config.decoder.is_decoder = True  # Explicitly set decoder flag
+ model.config.decoder.add_cross_attention = True  # Ensure decoder attends to encoder outputs
+ model.config.decoder_start_token_id = processor.tokenizer.cls_token_id  # Start generation with the CLS token
+ model.config.pad_token_id = processor.tokenizer.pad_token_id  # Set pad token ID
+ model.config.vocab_size = model.config.decoder.vocab_size  # Set vocabulary size
+ model.config.eos_token_id = processor.tokenizer.sep_token_id  # Set end-of-sequence token ID
+
+ # Generation configuration (influences evaluation and inference)
+ model.config.max_length = 160  # Max generated sequence length
+ model.config.early_stopping = True  # Stop generation early if EOS is reached
+ model.config.no_repeat_ngram_size = 3  # Prevent repetitive n-grams
+ model.config.length_penalty = 2.0  # Encourage longer sequences slightly
+ model.config.num_beams = 3  # Use beam search for better quality generation
+
+ # Metrics
+ print("Loading metrics...")
+ cer_metric = load("cer")
+ wer_metric = load("wer")
+
+ def compute_metrics(pred):
+     labels_ids = pred.label_ids
+     pred_ids = pred.predictions
+
+     # Replace -100 with pad_token_id for correct decoding
+     labels_ids[labels_ids == -100] = processor.tokenizer.pad_token_id
+
+     # Decode predictions and labels
+     pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
+     label_str = processor.batch_decode(labels_ids, skip_special_tokens=True)
+
+     # Calculate CER and WER
+     cer = cer_metric.compute(predictions=pred_str, references=label_str)
+     wer = wer_metric.compute(predictions=pred_str, references=label_str)
+
+     print(f"\nEvaluation Step Metrics - CER: {cer:.4f}, WER: {wer:.4f}")  # Print metrics
+
+     return {"cer": cer, "wer": wer}  # Return metrics required by the Trainer
+
+ # Training configuration
+ batch_size_train = 32  # Adjust based on GPU memory; 32 fits roughly 48 GB of VRAM
+ batch_size_eval = 32  # Adjust based on GPU memory
+ epochs = 10  # Number of training epochs (15 recommended)
+
+ print("\nConfiguring training arguments...")
+ training_args = Seq2SeqTrainingArguments(
+     predict_with_generate=True,  # Use generate() for evaluation (needed for CER/WER)
+     per_device_train_batch_size=batch_size_train,
+     per_device_eval_batch_size=batch_size_eval,
+     fp16=True if device.type == "cuda" else False,  # Enable mixed precision training on GPU
+     output_dir="./trocr-model-tridis",  # Directory to save model checkpoints
+     logging_strategy="steps",
+     logging_steps=10,  # Log training loss every 10 steps
+     evaluation_strategy="steps",  # Evaluate every N steps
+     eval_steps=5000,  # Adjust based on dataset size
+     save_strategy="steps",  # Save a checkpoint every N steps
+     save_steps=5000,  # Match eval_steps
+     num_train_epochs=epochs,
+     save_total_limit=3,  # Keep only the last 3 checkpoints
+     learning_rate=7e-5,  # Learning rate for the optimizer
+     weight_decay=0.01,  # Weight decay for regularization
+     warmup_ratio=0.05,  # Fraction of training steps used for learning rate warmup
+     lr_scheduler_type="cosine",  # Learning rate scheduler type (better than linear)
+     dataloader_num_workers=8,  # Use multiple workers for data loading (adjust based on CPU cores)
+     # report_to="tensorboard",  # Uncomment to enable TensorBoard logging
+ )
+
+ # Initialize the Trainer
+ trainer = Seq2SeqTrainer(
+     model=model,
+     tokenizer=processor.feature_extractor,  # Pass the feature_extractor so it is saved alongside checkpoints
+     args=training_args,
+     compute_metrics=compute_metrics,
+     train_dataset=train_dataset,
+     eval_dataset=eval_dataset,
+     data_collator=default_data_collator,  # Default collator handles padding inputs/labels
+ )
+
+ # Start training
+ print("\n--- Starting Training ---")
+ try:
+     trainer.train()
+     print("\n--- Training Completed ---")
+ except Exception as e:
+     error_message = f"Error during training: {e}"
+     print(error_message)
+     # Consider saving a checkpoint on error if needed
+     # trainer.save_model("./trocr-model-magistermilitum-interrupted")
+
+ # Save the final model and processor
+ print("Saving final model and processor...")
+ # Ensure the final directory name is consistent
+ final_save_path = "./trocr-model-tridis-final"
+ trainer.save_model(final_save_path)
+ processor.save_pretrained(final_save_path)  # Save the processor alongside the model
+ print(f"Model and processor saved to {final_save_path}")
+
+ # Clean up the CUDA cache if a GPU was used
+ if device.type == "cuda":
+     torch.cuda.empty_cache()
+     print("CUDA cache cleared.")
  ```
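
Once the training script above has run, a minimal inference sketch along these lines can be used to sanity-check the fine-tuned model. It assumes the model and processor were saved to `./trocr-model-tridis-final` as in the script; `line.png` is a placeholder path for a cropped image of a single text line.

```python
import torch
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Paths below are assumptions taken from the training script above.
model_dir = "./trocr-model-tridis-final"
processor = TrOCRProcessor.from_pretrained(model_dir)
model = VisionEncoderDecoderModel.from_pretrained(model_dir)
model.eval()

# "line.png" is a placeholder for a single text-line image.
image = Image.open("line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    generated_ids = model.generate(pixel_values, max_length=160, num_beams=3)

transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(transcription)
```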