# Model Card for Medical Oncology Reasoning Fine-Tuned Model
This is a fine-tuned version of the DeepSeek-R1-Distill-Qwen-1.5B model, specifically adapted for medical oncology reasoning tasks using chain-of-thought prompting. The fine-tuning was performed on a curated subset of the FreedomIntelligence/medical-o1-reasoning-SFT dataset. This model is designed to provide detailed, step-by-step reasoning when answering medical questions, making it suitable for clinical decision support, medical education, and research.
## Model Details
### Model Description
This model builds on DeepSeek-R1-Distill-Qwen-1.5B and was fine-tuned to strengthen its medical reasoning. It uses chain-of-thought prompting to produce detailed, step-by-step explanations for clinical queries; the fine-tuning focused on clinical diagnosis, treatment planning, and general medical reasoning. A sketch of how the training examples may have been formatted follows the details below.
- Developed by: Arihant Tripathi
- Model type: Causal Language Model (LLM)
- Language(s): English
- Finetuned from model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
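The exact fine-tuning script is not published here, but the sketch below shows one plausible way examples from FreedomIntelligence/medical-o1-reasoning-SFT could be formatted into chain-of-thought training text. The prompt template, the `en` configuration name, and the `Question`/`Complex_CoT`/`Response` field names are assumptions drawn from the dataset card, not a record of the actual training code.

```python
# Hypothetical formatting of medical-o1-reasoning-SFT examples into
# chain-of-thought training text; template and field names are assumptions.
from datasets import load_dataset

PROMPT_TEMPLATE = (
    "Below is a medical question. Reason through it step by step, "
    "then give a final answer.\n\n"
    "### Question:\n{question}\n\n"
    "### Reasoning:\n{cot}\n\n"
    "### Answer:\n{answer}"
)

def format_example(example):
    # Assumed column names: Question, Complex_CoT, Response
    return {
        "text": PROMPT_TEMPLATE.format(
            question=example["Question"],
            cot=example["Complex_CoT"],
            answer=example["Response"],
        )
    }

dataset = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train")
formatted = dataset.map(format_example)  # each row gains a "text" field for SFT
```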
### Model Sources
- Repository: EndOfLe/OncoFineTuned
## Uses
### Direct Use
This model can be used directly to generate detailed, step-by-step medical reasoning in response to clinical queries. It is especially useful for:
- Medical diagnosis support.
- Clinical reasoning and treatment planning.
- Medical education and research.
### Downstream Use
The model can be integrated into larger clinical decision support systems or educational tools that require natural language understanding and detailed reasoning for medical queries.
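As a rough illustration, the sketch below wraps the model in a single helper that a decision-support or education tool could call. The helper name and generation settings are illustrative assumptions, not an API provided by this repository.

```python
# Illustrative integration wrapper; the function name and generation
# parameters are assumptions, not part of the released model.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EndOfLe/OncoFineTuned"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def answer_clinical_query(question: str, max_new_tokens: int = 512) -> str:
    """Return the model's step-by-step reasoning for a clinical question."""
    inputs = tokenizer(question, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Keep only the newly generated tokens, not the echoed prompt
    generated = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(generated, skip_special_tokens=True)
```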
### Out-of-Scope Use
The model is not intended for:
- Replacing expert medical advice or making final clinical decisions.
- General-purpose language generation without domain adaptation.
- High-stakes applications where errors could have severe consequences without expert oversight.
## Bias, Risks, and Limitations
The model’s output should be treated as an aid to human decision-making rather than a substitute for professional medical advice. Key considerations include:
- The model may generate outdated or incorrect medical information.
- The reasoning is based on the training data and might reflect its inherent biases.
- Use in clinical settings should always involve human review and validation.
### Recommendations
Users should verify the model’s outputs with expert medical knowledge and ensure its use complies with clinical standards and ethical guidelines.
## How to Get Started with the Model
You can get started with the model using the following code snippet:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
model_name = "EndOfLe/OncoFineTuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Ask a clinical question and generate a step-by-step answer
input_text = "What are the causes of colon cancer?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
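Because the base DeepSeek-R1-Distill models are chat models, prompting through the tokenizer's chat template may yield better-structured reasoning. The snippet below continues from the quickstart above; whether this fine-tune preserves the base model's chat template is an assumption.

```python
# Optional: prompt via the chat template inherited from the base model
# (assumes the fine-tune keeps DeepSeek-R1-Distill-Qwen-1.5B's template).
messages = [
    {"role": "user", "content": "What are the causes of colon cancer?"},
]
chat_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn marker
    return_tensors="pt",
)
chat_outputs = model.generate(chat_inputs, max_new_tokens=512)
print(tokenizer.decode(chat_outputs[0], skip_special_tokens=True))
```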