---
license: mit
datasets:
- sajjadhadi/disease-diagnosis-dataset
base_model:
- Qwen/Qwen2.5-3B
pipeline_tag: text-classification
tags:
- biology
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: adapter-transformers
---

# Disease Diagnosis Adapter

A fine-tuned adapter for the Qwen/Qwen2.5-3B model, specialized in disease diagnosis and classification. Trained with MLX and MPI to evaluate performance and accuracy.

## Overview

This adapter enhances the base Qwen/Qwen2.5-3B model to improve performance on medical diagnosis tasks. It was trained on the [disease-diagnosis-dataset](https://huggingface.co/datasets/sajjadhadi/disease-diagnosis-dataset). Because the dataset is over-saturated with certain diagnoses, I capped the number of diagnoses and used only a limited subset of them as training labels.

## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load model and tokenizer
model_name = "naifenn/diagnosis-adapter"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example input
text = "Patient presents with fever, cough, and fatigue for 3 days."
inputs = tokenizer(text, return_tensors="pt")

# Get prediction
outputs = model(**inputs)
prediction = outputs.logits.argmax(-1).item()
print(f"Predicted diagnosis: {model.config.id2label[prediction]}")
```
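If the repository stores the adapter weights in a PEFT-compatible format (e.g. LoRA) rather than a merged checkpoint, you can also attach them to the base Qwen/Qwen2.5-3B model and use it generatively. This is a minimal sketch under that assumption; the prompt format shown here is illustrative, not the one used during training.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach the adapter on top.
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")
model = PeftModel.from_pretrained(base_model, "naifenn/diagnosis-adapter")

# Ask the adapted model to suggest a diagnosis for a short symptom description.
prompt = "Patient presents with fever, cough, and fatigue for 3 days. Diagnosis:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```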