Improve language tag
Hi! As the model is multilingual, this is a PR to add languages other than English to the language tag, to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.
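For context, the proposed metadata change would look roughly like this. This is a sketch, not the literal diff; the codes below correspond to the 13 languages named in the Qwen2.5 blog post, assuming those are the ones intended:

```yaml
# README.md front matter — illustrative sketch of the language tag change
language:
  - zh
  - en
  - fr
  - es
  - pt
  - de
  - it
  - ru
  - ja
  - ko
  - vi
  - th
  - ar
```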
Hi! First, I’m so flattered that you took the time to look at this model!
Before I merge, I’d like to understand where you got the figure of 13 explicit languages. The README for this fine-tune only lists en as far as I can see. The adaptors were only trained on submissions to SEC EDGAR, which are in English only, so if anything I’d expect the tuning to degrade any other languages in the base model. It’s really intended as an English-only model, since the use case (if there even is a use case) would also be centered around SEC EDGAR. I wouldn’t want to mislead anyone into thinking there was fine-tuning for, say, Portuguese, French, or Hindi, or that the model was capable of generating regulation-appropriate inference in any other language.
Were you looking at the Qwen/Qwen2.5-7B-Instruct or Qwen2.5-7B cards, perhaps?
Thanks!
Indeed, my loop looks at all models based on the Qwen2.5 model family (the READMEs of the original models, and their blog https://qwenlm.github.io/blog/qwen2.5/, mention that they support 29 languages).
The fact that you have fine-tuned on a single language has probably degraded the model's capabilities in the other languages. Merge or close the PR, whichever suits you best.
Very good! Again, thanks for looking at the model and have a great week!