BabyLM's First Words
Collection
Models trained on IPA-CHILDES and evaluated for phonological knowledge using the word segmentation task, linked to child language acquisition.
Phoneme-based GPT-2 models trained on the largest 17 sections of the IPA-CHILDES dataset for the paper BabyLM's First Words: Word Segmentation as a Phonological Probing Task.
Each model has 800k non-embedding parameters and was trained on 700k tokens of its language. The models were evaluated for phonological knowledge using the word segmentation task; see the paper for more details. Training and analysis scripts can be found here.
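One common way to turn a phoneme-level language model into a word segmenter is to treat peaks in per-phoneme surprisal as boundary cues, following the classic predictability account of segmentation. The paper's exact probing setup may differ; the sketch below only illustrates the peak-detection step, using made-up surprisal values rather than real model output:

```python
def segment_by_surprisal_peaks(surprisals):
    """Propose a word boundary before each phoneme whose surprisal
    is a local peak (strictly higher than both neighbours)."""
    boundaries = []
    for i in range(1, len(surprisals) - 1):
        if surprisals[i] > surprisals[i - 1] and surprisals[i] > surprisals[i + 1]:
            boundaries.append(i)
    return boundaries

# Made-up per-phoneme surprisal values for an 8-phoneme utterance
# (illustrative only, not output from the released models).
scores = [1.2, 0.4, 3.1, 0.9, 0.5, 2.8, 0.7, 1.0]
print(segment_by_surprisal_peaks(scores))  # → [2, 5]
```

Proposed boundaries can then be scored against gold word boundaries with standard boundary precision/recall.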
To load a model:

```python
from transformers import AutoModel

swedish_model = AutoModel.from_pretrained(
    'phonemetransformers/ipa-childes-models-small', subfolder='Swedish'
)
```