---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
datasets:
- unsloth/Radiology_mini
---

# Uploaded model

- **Developed by:** MMoshtaghi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
- **Finetuned on dataset:** [unsloth/Radiology_mini](https://huggingface.co/datasets/unsloth/Radiology_mini)
- **PEFT method:** [Quantized LoRA (QLoRA)](https://huggingface.co/papers/2305.14314)

## Quick start

```python
from datasets import load_dataset
from transformers import TextStreamer
from unsloth import FastVisionModel

# Load the 4-bit base model together with the LoRA adapter.
model, tokenizer = FastVisionModel.from_pretrained(
    model_name = "MMoshtaghi/Llama-3.2-11B-Vision-LoRAAdpt-Radiology",
    load_in_4bit = True,
)
FastVisionModel.for_inference(model)  # Enable inference mode!

dataset = load_dataset("unsloth/Radiology_mini", split = "train")
image = dataset[0]["image"]
instruction = "You are an expert radiographer. Describe accurately what you see in this image."

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": instruction},
    ]},
]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True)
inputs = tokenizer(
    image,
    input_text,
    add_special_tokens = False,
    return_tensors = "pt",
).to("cuda")

# Stream the generated description token by token.
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,
                   use_cache = True, temperature = 1.5, min_p = 0.1)
```

### Framework versions

- TRL: 0.13.0
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
- Unsloth: 2025.1.5

## Citations

This VLM was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
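## Loading the adapter with PEFT (optional)

If you prefer plain 🤗 Transformers over Unsloth, the adapter can also be attached with the PEFT library. This is a minimal sketch, assuming the repository contains a standard PEFT LoRA checkpoint and that the Unsloth-trained adapter loads cleanly into the stock Mllama implementation; it is not the author's verified workflow.

```python
from datasets import load_dataset
from peft import PeftModel
from transformers import AutoProcessor, MllamaForConditionalGeneration

base_id = "unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit"
adapter_id = "MMoshtaghi/Llama-3.2-11B-Vision-LoRAAdpt-Radiology"

# The base repo is pre-quantized with bitsandbytes, so it loads in 4-bit as-is.
base = MllamaForConditionalGeneration.from_pretrained(base_id, device_map = "auto")
model = PeftModel.from_pretrained(base, adapter_id)
processor = AutoProcessor.from_pretrained(base_id)

# Same prompt format as the Quick start above.
image = load_dataset("unsloth/Radiology_mini", split = "train")[0]["image"]
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "You are an expert radiographer. Describe accurately what you see in this image."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt = True)
inputs = processor(images = image, text = prompt, return_tensors = "pt").to(model.device)

output = model.generate(**inputs, max_new_tokens = 128)
print(processor.decode(output[0], skip_special_tokens = True))
```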
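## Reproducing the fine-tuning (sketch)

The citation above notes the adapter was trained with Unsloth and TRL. The snippet below is an illustrative QLoRA training sketch that mirrors Unsloth's public vision fine-tuning recipe; the actual hyperparameters for this adapter are not published, so every value here (LoRA rank, learning rate, step count) is an assumption, as is the dataset's `caption` column.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastVisionModel, is_bf16_supported
from unsloth.trainer import UnslothVisionDataCollator

model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit",
    load_in_4bit = True,
    use_gradient_checkpointing = "unsloth",
)
# Attach LoRA adapters to both the vision and language layers (QLoRA).
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers = True,
    finetune_language_layers = True,
    r = 16,            # assumed rank
    lora_alpha = 16,   # assumed scaling
    lora_dropout = 0,
)

dataset = load_dataset("unsloth/Radiology_mini", split = "train")
instruction = "You are an expert radiographer. Describe accurately what you see in this image."

def to_conversation(sample):
    # Pair each image with its report in chat format (assumes a "caption" column).
    return {"messages": [
        {"role": "user", "content": [
            {"type": "text", "text": instruction},
            {"type": "image", "image": sample["image"]},
        ]},
        {"role": "assistant", "content": [
            {"type": "text", "text": sample["caption"]},
        ]},
    ]}

converted = [to_conversation(s) for s in dataset]

FastVisionModel.for_training(model)  # Enable training mode.
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    data_collator = UnslothVisionDataCollator(model, tokenizer),
    train_dataset = converted,
    args = SFTConfig(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 30,              # illustrative; the real step count is unknown
        learning_rate = 2e-4,
        fp16 = not is_bf16_supported(),
        bf16 = is_bf16_supported(),
        optim = "adamw_8bit",
        output_dir = "outputs",
        # Required for vision fine-tuning: keep raw columns and skip text-only prep.
        remove_unused_columns = False,
        dataset_text_field = "",
        dataset_kwargs = {"skip_prepare_dataset": True},
        max_seq_length = 2048,
    ),
)
trainer.train()
```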