Pearl Model (11M-translate): English to Luganda Translation
This is the Pearl Model (11M-translate), a Transformer-based neural machine translation (NMT) model trained from scratch. It is designed to translate text from English to Luganda and contains approximately 11 million parameters.
Model Overview
The Pearl Model is an encoder-decoder Transformer architecture implemented entirely in PyTorch. It was developed to explore NMT capabilities for English-Luganda, a relatively low-resource language pair.
- Model Type: Sequence-to-Sequence Transformer
- Source Language: English ('english')
- Target Language: Luganda ('luganda')
- Framework: PyTorch
- Parameters: ~11 Million
- Training: From scratch
Detailed hyperparameters, architectural specifics, and tokenizer configurations can be found in the accompanying `config.json` file.
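If you want to read those values programmatically before instantiating the model, a minimal sketch is shown below; the key names (`hid_dim`, `enc_layers`) are illustrative assumptions and should be checked against the actual fields in `config.json`.

```python
import json

# Load the configuration shipped with the checkpoint.
with open("config.json", "r", encoding="utf-8") as f:
    config = json.load(f)

# Inspect which hyperparameters are available.
print(sorted(config.keys()))

# Key names below are hypothetical; substitute the ones your config.json actually uses.
hid_dim = config.get("hid_dim", 256)
enc_layers = config.get("enc_layers", 3)
```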
Intended Use
This model is intended for:
- Translating general domain text from English to Luganda.
- Research purposes in low-resource machine translation, Transformer architectures, and NLP for African languages.
- Serving as a baseline for future improvements in English-Luganda translation.
- Educational tool for understanding how to build and train NMT models from scratch.
Out-of-scope:
- Translation of highly specialized or technical jargon not present in the training data.
- High-stakes applications requiring perfect fluency or nuance without further fine-tuning and rigorous evaluation.
- Translation into English (this model is unidirectional: English to Luganda).
Training Details
Dataset
The model was trained exclusively on the `kambale/luganda-english-parallel-corpus` dataset available on the Hugging Face Hub. This dataset consists of parallel sentences in English and Luganda.
- Dataset ID: kambale/luganda-english-parallel-corpus
- Training Epochs: 50
- Tokenizers: Byte-Pair Encoding (BPE) tokenizers were trained from scratch on the respective language portions of the training dataset (a training sketch follows this list):
  - English Tokenizer: `english_tokenizer.json`
  - Luganda Tokenizer: `luganda_tokenizer.json`
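For reference, BPE tokenizers of this kind can be trained with the Hugging Face `tokenizers` library. The snippet below is only a sketch of the general recipe, not the exact training script: the input file name, vocabulary size, and special-token inventory are assumptions that should be checked against the released tokenizer files and `config.json`.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Build a BPE tokenizer with simple whitespace pre-tokenization.
tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

# Vocabulary size and special tokens are illustrative; match them to the released tokenizers.
trainer = trainers.BpeTrainer(
    vocab_size=8000,
    special_tokens=["<unk>", "<pad>", "<sos>", "<eos>"],
)

# "english_sentences.txt" is a hypothetical file with one training sentence per line.
tokenizer.train(files=["english_sentences.txt"], trainer=trainer)
tokenizer.save("english_tokenizer.json")
```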
Compute Infrastructure
- Hardware: 1x NVIDIA A100 40GB
- Training Time: Approx. 2 hours
Performance & Evaluation
The model's performance was evaluated based on validation loss and BLEU score on the test split of the `kambale/luganda-english-parallel-corpus` dataset.
- Best Validation Loss: 1.181
- Test Set BLEU Score: 27.90
Validation Set Examples
Source: Youths turned up in big numbers for the event .
Target (Reference): Abavubuka bazze mu bungi ku mukolo .
Target (Predicted): Abavubuka bazze mu bungi ku mukolo .
Source: Employers should ensure their places of work are safe for employees .
Target (Reference): Abakozesa basaanidde okukakasa nti ebifo abakozi baabwe we bakolera si bya bulabe eri abakozi baabwe .
Target (Predicted): Abakozesa balina okukakasa nti ebifo abakozi baabwe we bakolera si bya bulabe eri abakozi baabwe .
Source: We sent our cond ol ences to the family of the deceased .
Target (Reference): Twa weereza obubaka obu kuba gi za eri ab ' omu maka g ' omugenzi .
Target (Predicted): Twa sindika abe ere tu waayo mu maka gaffe omugenzi .
(Note: BLEU scores can vary based on the exact tokenization and calculation method. The score reported here uses SacreBLEU on detokenized text.)
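For reference, the snippet below sketches how a corpus-level SacreBLEU score can be computed on detokenized output with the `sacrebleu` package; the hypothesis and reference lists are placeholders, and the exact settings behind the reported score may differ.

```python
import sacrebleu

# Detokenized model outputs and references (placeholders; use the full test split in practice).
hypotheses = ["Abavubuka bazze mu bungi ku mukolo ."]
references = [["Abavubuka bazze mu bungi ku mukolo ."]]  # one inner list per reference stream

# corpus_bleu takes a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```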
Training Loss Curve
(Figure: training and validation loss over epochs.)
How to Use
This model is provided with its PyTorch state dictionary, tokenizer files, and configuration. Because this is a custom implementation, direct use with the Hugging Face `AutoModelForSeq2SeqLM` class may require adapting the model class definition to the Transformers library's expectations, or loading the components manually.
Manual Loading (Conceptual Example)
1. Define the Model Architecture: You'll need the Python code for the `Seq2SeqTransformer`, `Encoder`, `Decoder`, `MultiHeadAttentionLayer`, `PositionwiseFeedforwardLayer`, and `PositionalEncoding` classes as used during training.
2. Load Tokenizers:
```python
from tokenizers import Tokenizer

# Make sure these paths point to the tokenizer files in your local environment
# after downloading them from the Hub.
src_tokenizer_path = "english_tokenizer.json"  # Or path to downloaded file
trg_tokenizer_path = "luganda_tokenizer.json"  # Or path to downloaded file

src_tokenizer = Tokenizer.from_file(src_tokenizer_path)
trg_tokenizer = Tokenizer.from_file(trg_tokenizer_path)

# Retrieve special token IDs (ensure these match your training config)
SRC_PAD_IDX = src_tokenizer.token_to_id("<pad>")
TRG_PAD_IDX = trg_tokenizer.token_to_id("<pad>")
# ... and other special tokens if needed by your model class
```
3. Instantiate and Load Model Weights:
```python
import torch

# Assuming your model class definitions are available
# from your_model_script import Seq2SeqTransformer, Encoder, Decoder  # etc.

# Retrieve model parameters from config.json or define them.
# Example (these should match your actual model config):
INPUT_DIM = src_tokenizer.get_vocab_size()
OUTPUT_DIM = trg_tokenizer.get_vocab_size()
HID_DIM = 256        # Example, check config.json
ENC_LAYERS = 3       # Example
DEC_LAYERS = 3       # Example
ENC_HEADS = 8        # Example
DEC_HEADS = 8        # Example
ENC_PF_DIM = 512     # Example
DEC_PF_DIM = 512     # Example
ENC_DROPOUT = 0.1    # Example
DEC_DROPOUT = 0.1    # Example
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
MAX_LEN_MODEL = 128  # Example, max sequence length model was trained with

# Instantiate encoder and decoder
enc = Encoder(INPUT_DIM, HID_DIM, ENC_LAYERS, ENC_HEADS, ENC_PF_DIM,
              ENC_DROPOUT, DEVICE, MAX_LEN_MODEL)
dec = Decoder(OUTPUT_DIM, HID_DIM, DEC_LAYERS, DEC_HEADS, DEC_PF_DIM,
              DEC_DROPOUT, DEVICE, MAX_LEN_MODEL)

# Instantiate the main model
model = Seq2SeqTransformer(enc, dec, SRC_PAD_IDX, TRG_PAD_IDX, DEVICE)

# Load the state dictionary
model_weights_path = "pytorch_model.bin"  # Or path to downloaded file
model.load_state_dict(torch.load(model_weights_path, map_location=DEVICE))
model.to(DEVICE)
model.eval()
```
(The above code is illustrative. You'll need to ensure the model class and parameters correctly match those used for training, as detailed in `config.json` and your training script.)

4. Inference/Translation Function: You would then use your `translate_sentence` function (or a similar one) from your training notebook, passing the loaded model and tokenizers; a sketch of one possible implementation follows.
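The sketch below shows minimal greedy decoding. It assumes the `Seq2SeqTransformer` exposes `encoder`, `decoder`, and `make_src_mask`/`make_trg_mask` helpers, and that `<sos>`/`<eos>` tokens exist in the target tokenizer; all of these are assumptions to verify against your training code.

```python
import torch

def translate_sentence(sentence, src_tokenizer, trg_tokenizer, model, device, max_len=128):
    """Greedy decoding sketch; adapt attribute and token names to your implementation."""
    model.eval()

    # Special-token names are assumptions; check your tokenizer configuration.
    sos_idx = trg_tokenizer.token_to_id("<sos>")
    eos_idx = trg_tokenizer.token_to_id("<eos>")

    # Encode the source sentence and add a batch dimension.
    # Depending on your training setup, <sos>/<eos> may also need to be added to the source ids.
    src_ids = src_tokenizer.encode(sentence).ids
    src_tensor = torch.tensor(src_ids, dtype=torch.long).unsqueeze(0).to(device)

    with torch.no_grad():
        src_mask = model.make_src_mask(src_tensor)      # assumed helper on the model
        enc_src = model.encoder(src_tensor, src_mask)

        trg_ids = [sos_idx]
        for _ in range(max_len):
            trg_tensor = torch.tensor(trg_ids, dtype=torch.long).unsqueeze(0).to(device)
            trg_mask = model.make_trg_mask(trg_tensor)  # assumed helper on the model
            output, _ = model.decoder(trg_tensor, enc_src, trg_mask, src_mask)

            next_token = output[:, -1, :].argmax(dim=-1).item()
            trg_ids.append(next_token)
            if next_token == eos_idx:
                break

    # Strip <sos> (and <eos>, if generated) and detokenize.
    end = -1 if trg_ids[-1] == eos_idx else len(trg_ids)
    return trg_tokenizer.decode(trg_ids[1:end])
```

Usage would look like `translate_sentence("Youths turned up in big numbers for the event.", src_tokenizer, trg_tokenizer, model, DEVICE)`. A beam-search variant would generally score higher than this greedy version.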
Limitations and Bias
- Low-Resource Pair: Luganda is a low-resource language. While the `kambale/luganda-english-parallel-corpus` is a valuable asset, the overall volume of parallel data is still limited compared to high-resource language pairs. This can lead to:
  - Difficulties in handling out-of-vocabulary (OOV) words or rare phrases.
  - Potential for translations to be less fluent or accurate for complex sentences or nuanced expressions.
  - The model might reflect biases present in the training data.
- Data Source Bias: The characteristics and biases of the `kambale/luganda-english-parallel-corpus` (e.g., domain, style, demographic representation) will be reflected in the model's translations.
- Generalization: The model may not generalize well to domains significantly different from the training data.
- No Back-translation or Advanced Techniques: This model was trained directly on the parallel corpus without more advanced techniques like back-translation or pre-training on monolingual data, which could further improve performance.
- Greedy Decoding for Examples: Performance metrics (BLEU) are typically calculated using beam search. The conceptual usage examples might rely on greedy decoding, which can be suboptimal.
Ethical Considerations
- Bias Amplification: Machine translation models can inadvertently perpetuate or even amplify societal biases present in the training data. Users should be aware of this potential when using the translations.
- Misinformation: As with any generative model, there's a potential for misuse in generating misleading or incorrect information.
- Cultural Nuance: Automated translation may miss critical cultural nuances, potentially leading to misinterpretations. Human oversight is recommended for sensitive or important translations.
- Attribution: The training data is sourced from `kambale/luganda-english-parallel-corpus`. Please refer to the dataset card for its specific sourcing and licensing.
Future Work & Potential Improvements
- Fine-tuning on domain-specific data.
- Training with a larger parallel corpus if available.
- Incorporating monolingual Luganda data through techniques like back-translation.
- Experimenting with larger model architectures or pre-trained multilingual models as a base.
- Implementing more sophisticated decoding strategies (e.g., beam search with length normalization).
- Conducting a thorough human evaluation of translation quality.
Disclaimer
This model is provided "as-is" without warranty of any kind, express or implied. It was trained as part of an educational demonstration and may have limitations in accuracy, fluency, and robustness. Users should validate its suitability for their specific applications.