Abstract
Enhancing existing models with new knowledge is a crucial aspect of AI development. This paper introduces a novel method for integrating a new language into a large language model (LLM). Our approach successfully incorporates a previously unseen target language into an existing LLM without compromising its prior knowledge. We trained Kuwain, a small model with 1.5 billion parameters, by injecting the Arabic language into a small open-source model trained mainly on English. Our method demonstrates significant improvements in Arabic language performance, with an average improvement of 8% across various benchmarks, while retaining the model's existing knowledge using only a minimal amount of the original model's data. This offers a cost-effective alternative to training a comprehensive model in both English and Arabic. The results highlight the potential for efficient, targeted language model expansion without extensive retraining or resource-intensive processes.
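The abstract does not spell out the injection mechanism, so the following is a minimal sketch of one plausible implementation, assuming a block-expansion approach (in the spirit of methods such as LLaMA Pro): freeze the original decoder blocks and interleave freshly initialized, trainable copies that learn the new language. All function and attribute names here are illustrative assumptions, not the paper's confirmed recipe.

```python
# Hedged sketch: interleave new trainable decoder blocks into a frozen base.
# Assumes LLaMA-style blocks (self_attn.o_proj, mlp.down_proj); illustrative only.
import copy
import torch.nn as nn

def expand_with_new_layers(base_layers: nn.ModuleList, every: int = 4) -> nn.ModuleList:
    """Freeze the original blocks and insert a trainable copy after
    every `every` original blocks."""
    expanded = nn.ModuleList()
    for i, layer in enumerate(base_layers):
        layer.requires_grad_(False)           # preserve prior (English) knowledge
        expanded.append(layer)
        if (i + 1) % every == 0:
            new_layer = copy.deepcopy(layer)  # same shape, fresh trainable params
            # Zero the output projections so each new block starts as an
            # identity mapping and training cannot disrupt the base model.
            nn.init.zeros_(new_layer.self_attn.o_proj.weight)
            nn.init.zeros_(new_layer.mlp.down_proj.weight)
            new_layer.requires_grad_(True)    # only these learn the new language
            expanded.append(new_layer)
    return expanded
```

Under this assumption, only the inserted blocks (and any embeddings added for new Arabic tokens) receive gradients, which is what would keep the base model's English capability intact.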
Community
This paper introduces a novel method for integrating a new language into a large language model (LLM). Our approach successfully incorporates a previously unseen target language into an existing LLM without compromising its prior knowledge. We also only need to train on a very small amount of data from its previous knowledge.
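As an illustration of that last claim, one common way to retain prior knowledge with little original data is replay: mix the new-language corpus with a small slice of the base model's original training distribution. The 10% replay ratio and all names below are hypothetical, not numbers from the paper.

```python
# Hypothetical replay-style data mix: mostly Arabic text plus a small
# slice of the original English data to guard against forgetting.
import random

def build_training_mix(arabic_docs, english_docs, replay_ratio=0.1, seed=0):
    rng = random.Random(seed)
    # Number of English docs needed so they form `replay_ratio` of the final mix.
    n_replay = int(len(arabic_docs) * replay_ratio / (1.0 - replay_ratio))
    replay = rng.sample(english_docs, min(n_replay, len(english_docs)))
    mix = list(arabic_docs) + replay
    rng.shuffle(mix)
    return mix
```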
Write me a 40-page e-book on the topic of gardening.
It's nice to see a new multilingual LLM for more languages!
Two months ago, our Sailor2 model (an LLM for Southeast Asian languages) also explored model expansion to gain more improvement in new languages with less degradation on existing ones. See https://huggingface.co/papers/2502.12982 for more details.
Welcome to follow and discuss!
It's quite an interesting paper; I've read it and found it really informative. You've done a good job @dreamerdeo. They go through the whole LLM development pipeline, from pre-training to post-training (supervised fine-tuning and LR-DPO). They also implement a pruning algorithm from a new perspective. I really recommend reading the mentioned paper, as it can provide deep insight into LLM development. We are currently working on scaling up our data and models, and in the near future we will release a large Arabic dataset to enrich the field and encourage researchers to work on Arabic.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Llama-3-Nanda-10B-Chat: An Open Generative Large Language Model for Hindi (2025)
- UrduLLaMA 1.0: Dataset Curation, Preprocessing, and Evaluation in Low-Resource Settings (2025)
- Command R7B Arabic: A Small, Enterprise Focused, Multilingual, and Culturally Aware Arabic LLM (2025)
- Domain-Adaptive Continued Pre-Training of Small Language Models (2025)
- Kanana: Compute-efficient Bilingual Language Models (2025)
- Evaluating Compact LLMs for Zero-Shot Iberian Language Tasks on End-User Devices (2025)
- Lugha-Llama: Adapting Large Language Models for African Languages (2025)