🧠 Dia-Convo-v1.2c

petrioteer/dia-convo-v1.2c is a conversational, mental-health-focused LLM designed for Gen Z, built on top of Qwen2.5-7B-Instruct and fine-tuned on the dia-therapy-dataset. It powers Dia-Therapist, an empathetic AI assistant that offers mental health support while staying context-aware, brief, and emotionally intelligent.


💬 Intended Use

This model is tuned to offer:

  • Thoughtful responses to mental health queries
  • Conversational tone suited for Gen Z
  • Non-medical, non-clinical guidance
  • Short, contextually sensitive replies

It does not replace professional therapy. The sketch below shows how these constraints are expressed in the inference prompt.
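
A minimal, illustrative prompt builder (the build_dia_prompt helper is hypothetical and not part of the model's API; it simply reuses the Alpaca-style template from the examples below):

def build_dia_prompt(user_message: str) -> str:
    # Hypothetical helper: wraps a user message in the Instruction/Input/Response
    # template used throughout this card.
    instruction = (
        "Your name is Dia, a mental health therapist Assistant Bot. "
        "Provide guidance on mental health topics only and avoid others. "
        "Don't give medical advice. Keep responses short and relevant."
    )
    return (
        "\n### Instruction:\n"
        f"{instruction}\n\n"
        "### Input:\n"
        f"{user_message}\n\n"
        "### Response:\n"
    )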


📚 Training Dataset

The model was fine-tuned on the dia-therapy-dataset, which focuses on Gen Z mental health conversation patterns.

🧪 Example Inference (🤗 Transformers)

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "petrioteer/dia-convo-v1.2c"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load in half precision and shard across the available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
)

prompt = """
### Instruction:
Your name is Dia, a mental health therapist Assistant Bot. Provide guidance on mental health topics only and avoid others. Don't give medical advice. Keep responses short and relevant.

### Input:
I'm feeling overwhelmed with my classes. I can't seem to focus.

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.3,
    top_p=0.85,
    top_k=40,
    do_sample=True,
    eos_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
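
Since model.generate returns the prompt together with the completion, you will usually want only the text after the final "### Response:" marker. A minimal sketch, assuming the Alpaca-style template above:

# The generated sequence echoes the prompt; keep only the new reply.
full_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
reply = full_text.split("### Response:")[-1].strip()

# Alternatively, decode only the tokens produced after the prompt:
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
reply = tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

print(reply)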

⚡ Fast Inference (🧬 Unsloth)

from unsloth import FastLanguageModel
from transformers import AutoTokenizer

model_name = "petrioteer/dia-convo-v1.2c"

# Load the model in 4-bit with Unsloth for lower memory use
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_name,
    max_seq_length=2048,
    load_in_4bit=True,
    device_map="auto",
)

# Switch to Unsloth's optimized inference mode
FastLanguageModel.for_inference(model)

prompt = """
### Instruction:
Your name is Dia, a mental health therapist Assistant Bot. Provide guidance on mental health topics only and avoid others. Don't give medical advice. Keep responses short and relevant.

### Input:
I just feel numb and disconnected from everyone lately.

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.3,
    top_p=0.85,
    top_k=40,
    do_sample=True,
    repetition_penalty=1.2,
    no_repeat_ngram_size=4,
    eos_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
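
For a chat-style UI you may prefer to stream tokens as they are generated instead of waiting for the full reply. A minimal sketch using 🤗 Transformers' TextStreamer (this works with the Unsloth-loaded model too, since it is a regular transformers model underneath; the sampling values simply mirror the example above):

from transformers import TextStreamer

# Prints tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **inputs,
    streamer=streamer,
    max_new_tokens=100,
    temperature=0.3,
    top_p=0.85,
    top_k=40,
    do_sample=True,
    repetition_penalty=1.2,
    no_repeat_ngram_size=4,
    eos_token_id=tokenizer.eos_token_id,
)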

πŸ“ Model Details

  • πŸ”— Base model: Qwen2.5-7B-Instruct
  • 🧠 Fine-tuned using dia-therapy-dataset on Gen Z mental health patterns
  • πŸ› οΈ Quantized with 4-bit support (for faster loading)
  • πŸ§ͺ Best used with Unsloth for optimized inference
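
If you are not using Unsloth, 4-bit loading also works with plain 🤗 Transformers plus bitsandbytes. A minimal sketch, assuming bitsandbytes is installed (this is the standard BitsAndBytesConfig path, not a configuration shipped with the model):

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# NF4 4-bit quantization to reduce memory use; requires the bitsandbytes package.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "petrioteer/dia-convo-v1.2c",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("petrioteer/dia-convo-v1.2c")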

❤️ Citation & Thanks

If you use Dia-Convo in research, demos, or builds, please consider citing or linking back to this repo and the dataset authors.


Built with ❤️ & care by Itesh (aka petrioteer) ✨
