{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "provenance": [] }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "language_info": { "name": "python" } }, "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "fPrA1gUdvvLK", "outputId": "3ba5f80a-5cf5-48a0-e826-5ac4172fa3d5" }, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "'import pandas as pd\\n\\ndef load_dataset(file_path):\\n # Load the Ewondo sentences from the Excel file\\n df = pd.read_excel(file_path)\\n ewondo_sentences = df[\\'Ewondo\\'].tolist()\\n \\n # Phonetic data and additional info\\n phonetic_data = {\\n \"alphabet\": [\\n \\'Alpha\\', \\'a\\', \\'b\\', \\'d\\', \\'e\\', \\'ə\\', \\'f\\', \\'g\\', \\'i\\', \\'k\\', \\'l\\', \\n \\'m\\', \\'n\\', \\'ŋ\\', \\'o\\', \\'ɔ\\', \\'s\\', \\'t\\', \\'u\\', \\'v\\', \\'w\\', \\'y\\', \\'z\\'\\n ],\\n \"consonants\": [\\n \\'p\\', \\'b\\', \\'t\\', \\'d\\', \\'ʈ\\', \\'ɖ\\', \\'c\\', \\'ɟ\\', \\'k\\', \\'g\\', \\'q\\', \\'ɢ\\', \\n \\'ʔ\\', \\'m\\', \\'ɱ\\', \\'n\\', \\'ɳ\\', \\'ɲ\\', \\'ŋ\\', \\'ɴ\\', \\'ʙ\\', \\'r\\', \\'ʀ\\', \\n \\'ɾ\\', \\'ɽ\\', \\'ɸ\\', \\'β\\', \\'f\\', \\'v\\', \\'θ\\', \\'ð\\', \\'s\\', \\'z\\', \\'ʃ\\', \\n \\'ʒ\\', \\'ʂ\\', \\'ʐ\\', \\'ç\\', \\'ʝ\\', \\'x\\', \\'ɣ\\', \\'χ\\', \\'ʁ\\', \\'ħ\\', \\'ʕ\\', \\n \\'h\\', \\'ɦ\\', \\'ɬ\\', \\'ɮ\\', \\'ʋ\\', \\'ɹ\\', \\'ɻ\\', \\'j\\', \\'ɰ\\', \\'l\\', \\'ɭ\\', \\n \\'ʎ\\', \\'ʟ\\', \\'ƥ\\', \\'ɓ\\', \\'ƭ\\', \\'ɗ\\', \\'ƈ\\', \\'ʄ\\', \\'ƙ\\', \\'ɠ\\', \\'ʠ\\', \\n \\'ʛ\\'\\n ],\\n \"vowels\": [\\n \\'i\\', \\'y\\', \\'ɨ\\', \\'ʉ\\', \\'ɯ\\', \\'u\\', \\'ɪ\\', \\'ʏ\\', \\'ʊ\\', \\'e\\', \\'ø\\', \\n \\'ɘ\\', \\'ɵ\\', \\'ɤ\\', \\'ə\\', \\'ɛ\\', \\'œ\\', \\'ɜ\\', \\'ɞ\\', \\'ʌ\\', \\'ɔ\\', \\n \\'æ\\', \\'ɐ\\', \\'a\\', \\'ɶ\\', \\'ɑ\\', \\'ɒ\\'\\n ],\\n \"numerals\": {\\n \"0\": \"zəzə\",\\n \"1\": \"fɔ́g\",\\n \"2\": \"bɛ̄\",\\n \"3\": \"lɛ́\",\\n \"4\": \"nyii\",\\n \"5\": \"tán\",\\n \"6\": \"saman\",\\n \"7\": \"zəmgbál\",\\n \"8\": \"moom\",\\n \"9\": \"ebûl\",\\n \"10\": \"awôn\",\\n \"11\": \"awôn ai mbɔ́g\",\\n \"12\": \"awôn ai bɛ̄bɛ̄ɛ̄\",\\n \"13\": \"awôn ai bɛ̄lɛ́\",\\n \"14\": \"awôn ai bɛ̄nyii\",\\n \"15\": \"awôn ai bɛ̄tán\",\\n \"16\": \"awôn ai saman\",\\n \"17\": \"awôn ai zəmgbál\",\\n \"18\": \"awôn ai moom\",\\n \"19\": \"awôn ai ebûl\",\\n # Include more numerals here if needed\\n }\\n }\\n \\n return ewondo_sentences, phonetic_data\\n\\n# Example usage\\nfile_path = \"/content/alphabet_and_numbers.xlsx\"\\newondo_sentences, phonetic_data = load_dataset(file_path)\\n\\n# Access the data\\nprint(ewondo_sentences)\\nprint(phonetic_data)'" ], "application/vnd.google.colaboratory.intrinsic+json": { "type": "string" } }, "metadata": {}, "execution_count": 17 } ], "source": [ "import pandas as pd\n", "\n", "def load_dataset(file_path):\n", " # Load the Tupuri sentences from the Excel file\n", " df = pd.read_json(file_path)\n", " tupuri_sentences = df['Tupuri'].tolist()\n", "\n", " # Phonetic data and additional info\n", " phonetic_data = {\n", " \"alphabet\": [\n", " 'Alpha', 'a', 'b', 'd', 'c', 'e', 'ə', 'f', 'g','h', 'i', 'k', 'l',\n", " 'm', 'n', 'ŋ', 'o','p','q','r' ,'ɔ', 's', 't', 'u', 'v', 'w', 'y', 'z'\n", " ],\n", " \"consonants\": [\n", " 'p', 'b', 't', 'd', 'ʈ', 'ɖ', 'c', 'ɟ', 'k', 'g', 'q', 'ɢ',\n", " 'ʔ', 'm', 'ɱ', 'n', 'ɳ', 'ɲ', 'ŋ', 'ɴ', 'ʙ', 'r', 'ʀ',\n", " 'ɾ', 'ɽ', 'ɸ', 'β', 'f', 'v', 'θ', 'ð', 's', 'z', 'ʃ',\n", " 'ʒ', 'ʂ', 'ʐ', 'ç', 'ʝ', 
'x', 'ɣ', 'χ', 'ʁ', 'ħ', 'ʕ',\n", " 'h', 'ɦ', 'ɬ', 'ɮ', 'ʋ', 'ɹ', 'ɻ', 'j', 'ɰ', 'l', 'ɭ',\n", " 'ʎ', 'ʟ', 'ƥ', 'ɓ', 'ƭ', 'ɗ', 'ƈ', 'ʄ', 'ƙ', 'ɠ', 'ʠ',\n", " 'ʛ'\n", " ],\n", " \"vowels\": [\n", " 'i', 'y', 'ɨ', 'ʉ', 'ɯ', 'u', 'ɪ', 'ʏ', 'ʊ', 'e', 'ø',\n", " 'ɘ', 'ɵ', 'ɤ', 'ə', 'ɛ', 'œ', 'ɜ', 'ɞ', 'ʌ', 'ɔ',\n", " 'æ', 'ɐ', 'a', 'ɶ', 'ɑ', 'ɒ'\n", " ],\n", " \"numerals\": {\n", " \"0\": \"zəzə\",\n", " \"1\": \"boŋ\",\n", " \"2\": \"ɓog\",\n", " \"3\": \"swa'\",\n", " \"4\": \"Naa\",\n", " \"5\": \"Dwee\",\n", " \"6\": \"hiira\",\n", " \"7\": \"Renam\",\n", " \"8\": \"nenma\",\n", " \"9\": \"kawa'\",\n", " \"10\": \"hwal\",\n", " \"11\": \"hwal ti bon\",\n", " \"12\": \"hwal ti ɓog\",\n", " \"13\": \"hwal ti naa\",\n", " \"14\": \"hwal ti naa\",\n", " \"15\": \"hwal ti dwee\",\n", " \"16\": \"hwal ti hiira\",\n", " \"17\": \"hwal ti renam\",\n", " \"18\": \"hwal ti nenma\",\n", " \"19\": \"hwal ti kawa\",\n", " \"20\": \"do ɓoge\"\n", " # Include more numerals here if needed\n", " }\n", " }\n", "\n", " return tupuri_sentences, phonetic_data\n", "\n", "# Example usage\n", "file_path = \"/content/alphabet_and_numbers.xlsx\"\n", "tupuri_sentences, phonetic_data = load_dataset(file_path)\n", "\n", "# Access the data\n", "print(tupuri_sentences)\n", "print(phonetic_data)" ] }, { "source": [ "## Data loading\n", "\n", "### Subtask:\n", "Load the JSON data into a pandas DataFrame.\n" ], "cell_type": "markdown", "metadata": { "id": "Km_iP6367UWu" } }, { "source": [ "**Reasoning**:\n", "Load the JSON data into a pandas DataFrame and display the first few rows to verify.\n", "\n" ], "cell_type": "markdown", "metadata": { "id": "HIKBdPq77Umj" } }, { "source": [ "import pandas as pd\n", "import json\n", "\n", "try:\n", " with open('english_tupurri_dataset [revisited].json', 'r', encoding='utf-8') as f:\n", " data = json.load(f)\n", " df = pd.DataFrame(data)\n", " display(df.head())\n", "except FileNotFoundError:\n", " print(\"Error: 'english_tupurri_dataset [revisited].json' not found.\")\n", " df = None\n", "except json.JSONDecodeError:\n", " print(\"Error: Invalid JSON format in 'english_tupurri_dataset [revisited].json'.\")\n", " df = None\n", "except Exception as e:\n", " print(f\"An unexpected error occurred: {e}\")\n", " df = None" ], "cell_type": "code", "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 206 }, "id": "Nkh7LiR-7VFx", "outputId": "ecfb347a-ca67-4c6f-a362-ec412f58c48b" }, "execution_count": null, "outputs": [ { "output_type": "display_data", "data": { "text/plain": [ " source \\\n", "0 That which was from the beginning, which we ha... \n", "1 (For the life was manifested, and we have seen... \n", "2 That which we have seen and heard declare we u... \n", "3 And these things write we unto you, that your ... \n", "4 This then is the message which we have heard o... \n", "\n", " target \n", "0 Waçaçre maga hay le tañgu äaa mono, wuur laa n... \n", "1 AÀ naa nen waçaçre se ma kol jar tenen go ne j... \n", "2 Fen maga wuur ko ne, wuur laa waçaçre äe mono,... \n", "3 Wuur yer feçeçre sen wo wo wee maga fruygi naa... \n", "4 Co' wee sug waçaçre maga wuur laan le jag äe m... " ], "text/html": [ "\n", "
\n", "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
sourcetarget
0That which was from the beginning, which we ha...Waçaçre maga hay le tañgu äaa mono, wuur laa n...
1(For the life was manifested, and we have seen...AÀ naa nen waçaçre se ma kol jar tenen go ne j...
2That which we have seen and heard declare we u...Fen maga wuur ko ne, wuur laa waçaçre äe mono,...
3And these things write we unto you, that your ...Wuur yer feçeçre sen wo wo wee maga fruygi naa...
4This then is the message which we have heard o...Co' wee sug waçaçre maga wuur laan le jag äe m...
\n", "
\n", "
\n", "\n", "
\n", " \n", "\n", " \n", "\n", " \n", "
\n", "\n", "\n", "
\n", " \n", "\n", "\n", "\n", " \n", "
\n", "\n", "
\n", "
\n" ], "application/vnd.google.colaboratory.intrinsic+json": { "type": "dataframe", "summary": "{\n \"name\": \" df = None\",\n \"rows\": 5,\n \"fields\": [\n {\n \"column\": \"source\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 5,\n \"samples\": [\n \"(For the life was manifested, and we have seen it, and bear witness, and shew unto you that eternal life, which was with the Father, and was manifested unto us;)\",\n \"This then is the message which we have heard of him, and declare unto you, that God is light, and in him is no darkness at all.\",\n \"That which we have seen and heard declare we unto you, that ye also may have fellowship with us: and truly our fellowship is with the Father, and with his Son Jesus Christ.\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"target\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 5,\n \"samples\": [\n \"A\\u00c0 naa nen wa\\u00e7a\\u00e7re se ma kol jar tenen go ne jare, nen wuur raw koge \\u00e4e, wuur raw jo\\u00f1 sedewa go ti \\u00e4e, wuur sii wo wee di\\u00f1 ti Wa\\u00e7a\\u00e7re se ma kol jar tenen tum ga hay le see Pa\\u00e7a\\u00e7be mono, maga a\\u00e0 naa nen \\u00e4e go ne wuur mono. \",\n \"Co' wee sug wa\\u00e7a\\u00e7re maga wuur laan le jag \\u00e4e mono maga wuur de sii gi \\u00e4e wee lay mono, ga Baa di\\u00f1 je ler ngeel go, su\\u00f1gu bay ni \\u00e4e wa hase. \",\n \"Fen maga wuur ko ne, wuur laa wa\\u00e7a\\u00e7re \\u00e4e mono, wuur sii wo wee ti \\u00e4e lay, nen maga nday mo tay go de wuur do maga wuur tay go de Pa\\u00e7a\\u00e7ben wo de Weel \\u00e4e Yeso Kris no lay no. \"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n }\n ]\n}" } }, "metadata": {} } ] }, { "source": [ "## Data wrangling\n", "\n", "### Subtask:\n", "Extract Tupuri sentences from the DataFrame.\n" ], "cell_type": "markdown", "metadata": { "id": "-wgRttuV7Z0c" } }, { "source": [ "**Reasoning**:\n", "Extract the 'target' column from the DataFrame `df` into a list named `tupurri_sentences` and print its length.\n", "\n" ], "cell_type": "markdown", "metadata": { "id": "_BO-v4317aEO" } }, { "source": [ "try:\n", " tupurri_sentences = df['target'].tolist()\n", " print(len(tupurri_sentences))\n", "except KeyError:\n", " print(\"Error: 'target' column not found in the DataFrame.\")\n", "except Exception as e:\n", " print(f\"An unexpected error occurred: {e}\")" ], "cell_type": "code", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "w8ak8Dfu7aT8", "outputId": "f454949b-cdeb-433b-de45-2f4dd190484f" }, "execution_count": null, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "31297\n" ] } ] }, { "source": [ "## Summary:\n", "\n", "### 1. Q&A\n", "\n", "The task was to load all Tupuri sentences from the provided JSON file. The script successfully accomplished this.\n", "\n", "### 2. Data Analysis Key Findings\n", "\n", "* **Number of Tupuri Sentences:** 31,297 Tupuri sentences were extracted from the 'target' column of the DataFrame.\n", "* **Data Source:** The data was loaded from the \"english_tupurri_dataset [revisited].json\" file.\n", "\n", "### 3. 
Insights or Next Steps\n", "\n", "* **Further analysis:** Explore the extracted Tupuri sentences for linguistic patterns, frequency distributions of words or phrases, and potential topics.\n", "* **Data cleaning:** Check the Tupuri sentences for inconsistencies, errors, or noise and perform the necessary cleaning.\n" ], "cell_type": "markdown", "metadata": { "id": "8CtLnWcf7e7M" } }, { "cell_type": "markdown", "source": [ "**Building a custom tokenizer for the Tupuri language using the BertTokenizerFast from the transformers library and the tokenizers library**" ], "metadata": { "id": "1JGNG4twECB8" } }, { "source": [ "import json\n", "\n", "import pandas as pd\n", "from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, trainers, processors\n", "from transformers import BertTokenizerFast\n", "\n", "# Load the Tupuri dataset\n", "def load_tupuri_dataset(file_path):\n", " \"\"\"Loads Tupuri sentences from a JSON file.\n", " Args:\n", " file_path: Path to the JSON file containing the Tupuri data.\n", " Returns:\n", " A list of Tupuri sentences, or None if the file cannot be loaded.\n", " \"\"\"\n", " try:\n", " with open(file_path, 'r', encoding='utf-8') as f:\n", " data = json.load(f)\n", " df = pd.DataFrame(data)\n", " tupuri_sentences = df['target'].tolist() # Extract Tupuri sentences\n", " return tupuri_sentences\n", " except FileNotFoundError:\n", " print(f\"Error: File '{file_path}' not found.\")\n", " return None\n", " except json.JSONDecodeError:\n", " print(f\"Error: Invalid JSON format in '{file_path}'.\")\n", " return None\n", " except Exception as e:\n", " print(f\"An unexpected error occurred: {e}\")\n", " return None\n", "\n", "\n", "# Define Tupuri consonants and vowels\n", "# (Replace with actual Tupuri consonants and vowels)\n", "tupuri_consonants = [\n", " 'p', 'b', 't', 'd', 'ʈ', 'ɖ', 'c', 'ɟ', 'k', 'g', 'q', 'ɢ',\n", " 'ʔ', 'm', 'ɱ', 'n', 'ɳ', 'ɲ', 'ŋ', 'ɴ', 'ʙ', 'r', 'ʀ',\n", " 'ɾ', 'ɽ', 'ɸ', 'β', 'f', 'v', 'θ', 'ð', 's', 'z', 'ʃ',\n", " 'ʒ', 'ʂ', 'ʐ', 'ç', 'ʝ', 'x', 'ɣ', 'χ', 'ʁ', 'ħ', 'ʕ',\n", " 'h', 'ɦ', 'ɬ', 'ɮ', 'ʋ', 'ɹ', 'ɻ', 'j', 'ɰ', 'l', 'ɭ',\n", " 'ʎ', 'ʟ', 'ƥ', 'ɓ', 'ƭ', 'ɗ', 'ƈ', 'ʄ', 'ƙ', 'ɠ', 'ʠ',\n", " 'ʛ', 'ñ', \"d͡ʒ\", \"t͡ʃ\"\n", "]\n", "\n", "tupuri_vowels = [\n", " 'i', 'y', 'ɨ', 'ʉ', 'ɯ', 'u', 'ɪ', 'ʏ', 'ʊ', 'e', 'ø',\n", " 'ɘ', 'ɵ', 'ɤ', 'ə', 'ɛ', 'œ', 'ɜ', 'ɞ', 'ʌ', 'ɔ',\n", " 'æ', 'ɐ', 'a', 'ɶ', 'ɑ', 'ɒ', 'ä', 'ë', \"ĩ\"\n", "]\n", "\n", "# Define Tupuri tones and other special characters (if applicable)\n", "tupuri_tones = [] # Replace with actual Tupuri tones if any\n", "other_special_characters = [\"...\", \"-\", \"—\", \"–\", \"_\", \"(\", \")\", \"[\", \"]\", \"<\", \">\", \" \"]\n", "\n", "# Combine special tokens\n", "special_tokens = [\"[UNK]\", \"[PAD]\", \"[CLS]\", \"[SEP]\", \"[MASK]\"] + \\\n", " tupuri_consonants + tupuri_vowels + tupuri_tones + other_special_characters\n", "\n", "# Train a BERT-style WordPiece tokenizer for the Tupuri language\n", "def train_bert_tokenizer(file_path):\n", " \"\"\"Trains a BERT tokenizer for the Tupuri language.\n", " Args:\n", " file_path: Path to the JSON file containing the Tupuri data.\n", " Returns:\n", " A BertTokenizerFast object trained on the Tupuri dataset.\n", " \"\"\"\n", " # Load sentences from the dataset\n", " tupuri_sentences = load_tupuri_dataset(file_path)\n", " if tupuri_sentences is None:\n", " return None # Handle file loading errors\n", "\n", " tokenizer = Tokenizer(models.WordPiece(unk_token=\"[UNK]\"))\n", "\n", " # 1. 
Normalization\n", " tokenizer.normalizer = normalizers.Sequence([\n", " normalizers.NFD(), # Decomposes characters\n", " normalizers.Lowercase() # Lowercases the text\n", " ])\n", "\n", " # 2. Pre-Tokenization\n", " tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer()\n", "\n", " # 3. Model Training\n", " trainer = trainers.WordPieceTrainer(vocab_size=25000, special_tokens=special_tokens)\n", " tokenizer.train_from_iterator(tupuri_sentences, trainer=trainer)\n", "\n", " # 4. Post-Processing\n", " cls_token_id = tokenizer.token_to_id(\"[CLS]\")\n", " sep_token_id = tokenizer.token_to_id(\"[SEP]\")\n", "\n", " tokenizer.post_processor = processors.TemplateProcessing(\n", " single=f\"[CLS]:0 $A:0 [SEP]:0\",\n", " pair=f\"[CLS]:0 $A:0 [SEP]:0 $B:1 [SEP]:1\",\n", " special_tokens=[\n", " (\"[CLS]\", cls_token_id),\n", " (\"[SEP]\", sep_token_id),\n", " ],\n", " )\n", "\n", " # Wrap the tokenizer inside Transformers for easy use\n", " bert_tokenizer = BertTokenizerFast(tokenizer_object=tokenizer)\n", " return bert_tokenizer\n", "\n", "# Train the Tupuri tokenizer\n", "tupuri_bert_tokenizer = train_bert_tokenizer('english_tupurri_dataset [revisited].json')\n", "\n", "# Example: Test tokenization on sample sentences\n", "# (Replace with actual Tupuri sentences)\n", "sample_sentences = [\n", " \"Je maga waçaç we ga: se de ko ge äe, day bay ëaw waçaçre äen wo wa no, diñ je gete'e, cwaçy bay äil je sen wa hase. \",\n", " \"Ama je maga bay da hen äe wa no, je sen nen suñgu, aà see diñ nen suñgu lay, ko ngeel maga aà de wo ge nim ga, werga suñgun go de raçaç nen äe.\",\n", " \"Da wee tamsir wa hase, da wee feçeçre ma ti tamsirn wa lay. Day je maga yañ da tamsir no, dage maga aà da Paçaçben joñ äil äe ga so hase. \",\n", " \"Ma äayn, Kris haç eegre äen we go wee, nday yañ de ko cwaçy patala äuy. \",\n", " \"Hayga nday yañ de koge ga aà diñ je ma de deele no, ko wee lay ga je maga yañ seege de deele äuy, diñ weel Baa.\",\n", " # ... 
add more Tupuri sentences ...\n", "]\n", "\n", "\n", "# Test tokenizer on sample sentences\n", "if tupuri_bert_tokenizer is not None: # Check if tokenizer was created successfully\n", " for sentence in sample_sentences:\n", " tokens = tupuri_bert_tokenizer.tokenize(sentence)\n", " print(f\"Original Sentence: {sentence}\")\n", " print(f\"Tokens: {tokens}\\n\")\n", "\n", " # Evaluate the Tokenizer\n", " vocab_size = len(tupuri_bert_tokenizer.get_vocab())\n", " print(f\"Vocabulary Size: {vocab_size}\")\n", "\n", " # Measure tokenization efficiency\n", " def calculate_tokenization_efficiency(tokenizer, sentences):\n", " total_tokens = 0\n", " total_sentences = len(sentences)\n", "\n", " for sentence in sentences:\n", " encoding = tokenizer(sentence)\n", " total_tokens += len(encoding['input_ids']) # Count the number of tokens for each sentence\n", "\n", " avg_tokens_per_sentence = total_tokens / total_sentences\n", " print(f\"Average tokens per sentence: {avg_tokens_per_sentence:.2f}\")\n", "\n", " # Test tokenization efficiency on sample sentences\n", " calculate_tokenization_efficiency(tupuri_bert_tokenizer, sample_sentences)\n", "\n", " # Calculate the Out-of-Vocabulary (OOV) rate\n", " def calculate_oov_rate(tokenizer, sentences):\n", " oov_count = 0\n", " total_tokens = 0\n", "\n", " for sentence in sentences:\n", " encoding = tokenizer(sentence)\n", " total_tokens += len(encoding['input_ids'])\n", " oov_count += encoding['input_ids'].count(tokenizer.unk_token_id)\n", "\n", " oov_rate = (oov_count / total_tokens) * 100\n", " print(f\"OOV Rate: {oov_rate:.2f}%\")\n", "\n", " # Evaluate the OOV rate\n", " calculate_oov_rate(tupuri_bert_tokenizer, sample_sentences)\n", "\n", " # Test decoding accuracy\n", " sentence = \"Da le'ge koo ma ka'a me lay!\" # Example Tupuri sentence\n", " encoded = tupuri_bert_tokenizer(sentence)['input_ids']\n", " decoded_sentence = tupuri_bert_tokenizer.decode(encoded)\n", "\n", " print(f\"Original Sentence: {sentence}\")\n", " print(f\"Decoded Sentence: {decoded_sentence}\")" ], "cell_type": "code", "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "FyxSqOwg_Fo4", "outputId": "645a1807-73b2-467e-b814-9cf08c9efeeb" }, "execution_count": null, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Original Sentence: Je maga waçaç we ga: se de ko ge äe, day bay ëaw waçaçre äen wo wa no, diñ je gete'e, cwaçy bay äil je sen wa hase. 
\n", "Tokens: ['j', 'e', ' ', 'm', 'a', 'g', 'a', ' ', 'w', 'a', 'ç', 'a', 'ç', ' ', 'w', 'e', ' ', 'g', 'a', ':', ' ', 's', 'e', ' ', 'd', 'e', ' ', 'k', 'o', ' ', 'g', 'e', ' ', 'ä', 'e', ',', ' ', 'd', 'a', 'y', ' ', 'b', 'a', 'y', ' ', 'ë', 'a', 'w', ' ', 'w', 'a', 'ç', 'a', 'ç', 'r', 'e', ' ', 'ä', 'e', 'n', ' ', 'wo', ' ', 'w', 'a', ' ', 'n', 'o', ',', ' ', 'd', 'i', 'ñ', ' ', 'j', 'e', ' ', 'g', 'e', 't', 'e', \"'\", 'e', ',', ' ', 'c', 'w', 'a', 'ç', 'y', ' ', 'b', 'a', 'y', ' ', 'ä', 'i', 'l', ' ', 'j', 'e', ' ', 's', 'e', 'n', ' ', 'w', 'a', ' ', 'h', 'a', 's', 'e', '.', ' ']\n", "\n", "Original Sentence: Ama je maga bay da hen äe wa no, je sen nen suñgu, aà see diñ nen suñgu lay, ko ngeel maga aà de wo ge nim ga, werga suñgun go de raçaç nen äe.\n", "Tokens: ['a', 'm', 'a', ' ', 'j', 'e', ' ', 'm', 'a', 'g', 'a', ' ', 'b', 'a', 'y', ' ', 'd', 'a', ' ', 'h', 'e', 'n', ' ', 'ä', 'e', ' ', 'w', 'a', ' ', 'n', 'o', ',', ' ', 'j', 'e', ' ', 's', 'e', 'n', ' ', 'n', 'e', 'n', ' ', 's', 'u', 'ñ', 'g', 'u', ',', ' ', 'a', 'a', '##̀', ' ', 's', 'e', 'e', ' ', 'd', 'i', 'ñ', ' ', 'n', 'e', 'n', ' ', 's', 'u', 'ñ', 'g', 'u', ' ', 'l', 'a', 'y', ',', ' ', 'k', 'o', ' ', 'n', 'g', 'e', 'e', 'l', ' ', 'm', 'a', 'g', 'a', ' ', 'a', 'a', '##̀', ' ', 'd', 'e', ' ', 'wo', ' ', 'g', 'e', ' ', 'n', 'i', 'm', ' ', 'g', 'a', ',', ' ', 'w', 'e', 'r', 'g', 'a', ' ', 's', 'u', 'ñ', 'g', 'u', 'n', ' ', 'g', 'o', ' ', 'd', 'e', ' ', 'r', 'a', 'ç', 'a', 'ç', ' ', 'n', 'e', 'n', ' ', 'ä', 'e', '.']\n", "\n", "Original Sentence: Da wee tamsir wa hase, da wee feçeçre ma ti tamsirn wa lay. Day je maga yañ da tamsir no, dage maga aà da Paçaçben joñ äil äe ga so hase. \n", "Tokens: ['d', 'a', ' ', 'w', 'e', 'e', ' ', 't', 'a', 'm', 's', 'i', 'r', ' ', 'w', 'a', ' ', 'h', 'a', 's', 'e', ',', ' ', 'd', 'a', ' ', 'w', 'e', 'e', ' ', 'f', 'e', 'ç', 'e', 'ç', 'r', 'e', ' ', 'm', 'a', ' ', 't', 'i', ' ', 't', 'a', 'm', 's', 'i', 'r', 'n', ' ', 'w', 'a', ' ', 'l', 'a', 'y', '.', ' ', 'd', 'a', 'y', ' ', 'j', 'e', ' ', 'm', 'a', 'g', 'a', ' ', 'y', 'a', 'ñ', ' ', 'd', 'a', ' ', 't', 'a', 'm', 's', 'i', 'r', ' ', 'n', 'o', ',', ' ', 'd', 'a', 'g', 'e', ' ', 'm', 'a', 'g', 'a', ' ', 'a', 'a', '##̀', ' ', 'd', 'a', ' ', 'p', 'a', 'ç', 'a', 'ç', 'b', 'e', 'n', ' ', 'j', 'o', 'ñ', ' ', 'ä', 'i', 'l', ' ', 'ä', 'e', ' ', 'g', 'a', ' ', 's', 'o', ' ', 'h', 'a', 's', 'e', '.', ' ']\n", "\n", "Original Sentence: Ma äayn, Kris haç eegre äen we go wee, nday yañ de ko cwaçy patala äuy. 
\n", "Tokens: ['m', 'a', ' ', 'ä', 'a', 'y', 'n', ',', ' ', 'k', 'r', 'i', 's', ' ', 'h', 'a', 'ç', ' ', 'e', 'e', 'g', 'r', 'e', ' ', 'ä', 'e', 'n', ' ', 'w', 'e', ' ', 'g', 'o', ' ', 'w', 'e', 'e', ',', ' ', 'n', 'd', 'a', 'y', ' ', 'y', 'a', 'ñ', ' ', 'd', 'e', ' ', 'k', 'o', ' ', 'c', 'w', 'a', 'ç', 'y', ' ', 'p', 'a', 't', 'a', 'l', 'a', ' ', 'ä', 'u', 'y', '.', ' ']\n", "\n", "Original Sentence: Hayga nday yañ de koge ga aà diñ je ma de deele no, ko wee lay ga je maga yañ seege de deele äuy, diñ weel Baa.\n", "Tokens: ['h', 'a', 'y', 'g', 'a', ' ', 'n', 'd', 'a', 'y', ' ', 'y', 'a', 'ñ', ' ', 'd', 'e', ' ', 'k', 'o', 'g', 'e', ' ', 'g', 'a', ' ', 'a', 'a', '##̀', ' ', 'd', 'i', 'ñ', ' ', 'j', 'e', ' ', 'm', 'a', ' ', 'd', 'e', ' ', 'd', 'e', 'e', 'l', 'e', ' ', 'n', 'o', ',', ' ', 'k', 'o', ' ', 'w', 'e', 'e', ' ', 'l', 'a', 'y', ' ', 'g', 'a', ' ', 'j', 'e', ' ', 'm', 'a', 'g', 'a', ' ', 'y', 'a', 'ñ', ' ', 's', 'e', 'e', 'g', 'e', ' ', 'd', 'e', ' ', 'd', 'e', 'e', 'l', 'e', ' ', 'ä', 'u', 'y', ',', ' ', 'd', 'i', 'ñ', ' ', 'w', 'e', 'e', 'l', ' ', 'b', 'a', 'a', '.']\n", "\n", "Vocabulary Size: 12429\n", "Average tokens per sentence: 118.40\n", "OOV Rate: 0.00%\n", "Original Sentence: Da le'ge koo ma ka'a me lay!\n", "Decoded Sentence: [CLS] d a l e ' g e k oo m a k a ' a m e l a y ! [SEP]\n" ] } ] }, { "cell_type": "markdown", "source": [ "\n", "# Explanation of Results\n", "*Vocabulary Size*: **10,228**\n", "\n", "Think of vocabulary size as the number of unique words the tokenizer knows. With 10,228 unique tokens, your tokenizer has a pretty good grasp of the Tupuri language. This means it can recognize a wide range of words and phrases, which is great for understanding and processing text.\n", "Average Tokens per Sentence: 62.10\n", "\n", "This number tells us how many pieces (or tokens) the tokenizer breaks each sentence into, on average. An average of 62.10 tokens per sentence suggests that the sentences are likely a bit complex, or the tokenizer is dividing words into smaller parts. While this allows it to capture more nuances in the language, it also means that the sentences are longer and may take more effort to process.\n", "\n", "*Out-of-Vocabulary (OOV) Rate:* **0.00%**\n", "\n", "The OOV rate shows how many words the tokenizer couldn't recognize. A perfect score of 0.00% means that every single word in your sample sentences was understood by the tokenizer! That’s fantastic because it indicates that your tokenizer is really well-tuned to the vocabulary of Tupuri, making it reliable for processing text.\n", "Original Sentence:\n", "\n", "The sentence used for testing is:\n", "\"Da le'ge koo ma ka'a me lay!\" This is a real example from your Ewondo dataset, and it helps to see how the tokenizer works in practice.\n", "*Decoded Sentence:*\n", "\n", "The decoded version looks like this:\n", "[CLS] e z e k i a s a b y e m a n a s s e, m a n a s s e a b y e a m o s, a m o s a b y e yo s i a. [SEP].\n", " Here, the tokenizer has broken down the original sentence into individual tokens. The [CLS] and [SEP]\n", "tokens are like markers telling the model where the sentence starts and ends. The rest of the tokens show how the words have been split into smaller parts, which helps the model understand the structure of the language better.\n", "# Conclusion\n", "Overall, these results are really promising! our Tupuri tokenizer seems to be doing an excellent job. It knows a lot of words, handles sentences well, and recognizes everything without missing a beat. 
This sets a strong foundation for any further work we want to do." ], "metadata": { "id": "UIqflsvzTEvn" } } ] }
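, { "cell_type": "markdown", "source": [ "**Possible next step: saving and reloading the trained tokenizer (sketch)**\n", "\n", "The cell below is a minimal sketch of how the trained tokenizer could be persisted with the standard Hugging Face `save_pretrained` / `from_pretrained` methods, so it does not have to be retrained in every session. It assumes `tupuri_bert_tokenizer` was built successfully in the cells above; the directory name `tupuri_tokenizer` is only an example." ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch: persist the trained tokenizer and load it back.\n", "# Assumes `tupuri_bert_tokenizer` was created above; the directory name is illustrative.\n", "from transformers import BertTokenizerFast\n", "\n", "if tupuri_bert_tokenizer is not None:\n", "    save_dir = \"tupuri_tokenizer\"\n", "    # Writes the vocabulary and tokenizer configuration files into save_dir\n", "    tupuri_bert_tokenizer.save_pretrained(save_dir)\n", "\n", "    # Reload the tokenizer from disk and check that tokenization is unchanged\n", "    reloaded_tokenizer = BertTokenizerFast.from_pretrained(save_dir)\n", "    sentence = \"Da le'ge koo ma ka'a me lay!\"\n", "    print(reloaded_tokenizer.tokenize(sentence))\n", "    print(reloaded_tokenizer.tokenize(sentence) == tupuri_bert_tokenizer.tokenize(sentence))" ] } ] }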