AKA Tagamistral-7b-v1:

  • Yet another archived test/toy model, fine-tuned on a synthetic Tagalog dataset partially produced by Mistral, based on this dataset
  • Base: SeaLLM
  • GGUF

USAGE

This is meant mainly as a chat model.

Best results are obtained by using "Human" and "Assistant" turn labels and prompting in Tagalog. Example:

"Ito ay isang chat log sa pagitan ng AI Assistant na nagta-Tagalog at isang Pilipino. Magsimula ng chat:\nHuman: Hello po?\nAssistant:"

HYPERPARAMS

  • Trained for ~1 epoch
  • LoRA rank: 32
  • LoRA alpha: 32
  • LoRA dropout: 0
  • Learning rate: 2e-4
  • Batch size: 2
  • Gradient accumulation steps: 4

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
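For reference, here is a minimal sketch of how the hyperparameters above might map onto an Unsloth + TRL run. The base checkpoint name, sequence length, target modules, and dataset contents are assumptions (and Unsloth/TRL argument names vary between versions); this is an illustration, not the original training script.

    # Rough LoRA fine-tuning sketch with Unsloth + TRL, using the listed hyperparameters.
    from datasets import Dataset
    from transformers import TrainingArguments
    from trl import SFTTrainer
    from unsloth import FastLanguageModel

    # Assumed SeaLLM base checkpoint; substitute the actual base used for this model.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="SeaLLMs/SeaLLM-7B-v2",
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # LoRA settings from the list above: rank 32, alpha 32, dropout 0.
    # Target modules are the standard attention/MLP projections (an assumption; not listed on the card).
    model = FastLanguageModel.get_peft_model(
        model,
        r=32,
        lora_alpha=32,
        lora_dropout=0,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    )

    # Toy placeholder for the synthetic Tagalog chat dataset (not reproduced here).
    dataset = Dataset.from_dict(
        {"text": ["Human: Kumusta ka?\nAssistant: Mabuti naman po, salamat!"]}
    )

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            per_device_train_batch_size=2,   # batch size: 2
            gradient_accumulation_steps=4,   # grad steps: 4
            learning_rate=2e-4,              # lr: 2e-4
            num_train_epochs=1,              # ~1 epoch
            output_dir="outputs",
        ),
    )
    trainer.train()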

WARNINGS AND DISCLAIMERS

There is still a chance that the model may switch to English or Taglish.

It is possible that the Tagalog capability still comes mostly from the base model rather than from the fine-tuning dataset.

Finally, this model is not guaranteed to produce aligned or safe outputs, nor is it meant for production use - use at your own risk!
