DeepSeek SOAP Summary Generator

This model is fine-tuned to generate SOAP (Subjective, Objective, Assessment, Plan) summaries from patient-doctor dialogues.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("hazem74/deepseek-soap-summary", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("hazem74/deepseek-soap-summary")

# Sample dialogue
dialogue = """
Doctor: Hello, how are you feeling today?
Patient: I've been having some chest pain for the last two days.
Doctor: Can you describe the pain?
Patient: It's a sharp pain, mostly on the left side.
"""

# Format the prompt
system_message = "You are a medical professional tasked with creating SOAP notes from patient-doctor dialogues."
user_content = f"""
# Patient-Doctor Dialogue:
{dialogue}

# Task:
Generate a SOAP summary from the above medical dialogue.
The summary should include Subjective, Objective, Assessment, and Plan sections.

# SOAP Summary:
"""

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_content}
]

# Build the prompt using the model's chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate the SOAP summary (sampling must be enabled for temperature/top_p to take effect)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9
)

soap_summary = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(soap_summary)
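
If a CUDA GPU is available, the model can be loaded in half precision to reduce memory use. The snippet below is a minimal sketch assuming a standard Transformers + Accelerate setup; the repository id and trust_remote_code flag are the same as in the example above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load the model in FP16 and place it on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    "hazem74/deepseek-soap-summary",
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("hazem74/deepseek-soap-summary")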

Limitations

This model is intended to assist healthcare professionals and should not replace clinical judgment. Always review generated summaries for accuracy before use.
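
As a simple automated sanity check before human review, you can verify that all four SOAP section headers appear in the generated text. This is an illustrative sketch, not part of the model's tooling, and it does not validate clinical accuracy.

# Illustrative check: confirm the four SOAP sections are present in the output
required_sections = ["Subjective", "Objective", "Assessment", "Plan"]
missing = [s for s in required_sections if s not in soap_summary]
if missing:
    print(f"Warning: generated summary is missing sections: {', '.join(missing)}")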

Model size: 1.78B parameters (FP16, Safetensors)