---
base_model: unsloth/smollm-135m-instruct-bnb-4bit
base_model_relation: finetune
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- python
license: apache-2.0
language:
- en
datasets:
- AI-MO/NuminaMath-CoT
- TIGER-Lab/MathInstruct
- Vezora/Tested-143k-Python-Alpaca
- glaiveai/glaive-code-assistant-v2
pipeline_tag: text-generation
new_version: ShubhamSinghCodes/PyNanoLm
---

# Uploaded model

- **Developed by:** ShubhamSinghCodes
- **License:** apache-2.0
- **Finetuned from model:** unsloth/smollm-135m-instruct-bnb-4bit

This Llama-architecture model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. It is meant as a first step towards a fast, lightweight, not entirely stupid model that assists with Python programming. (WIP)
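
A minimal usage sketch with the `transformers` library is shown below. The repo ID is an assumption for illustration (the card's `new_version` field points to `ShubhamSinghCodes/PyNanoLm`); substitute the ID of the checkpoint you actually want to load, and note that 4-bit variants additionally require `bitsandbytes`.

```python
# Minimal usage sketch, assuming the newer release ShubhamSinghCodes/PyNanoLm
# as the repo ID; swap in the ID of this upload if loading it directly.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShubhamSinghCodes/PyNanoLm"  # assumed placeholder repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# SmolLM-instruct checkpoints ship a chat template, so format the prompt as a chat turn.
messages = [
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Greedy decoding; adjust max_new_tokens / sampling settings as needed.
outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```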