Ansah-E1

This repository contains a fully merged, 4-bit quantized model built by integrating a customer support adapter into the base Llama-3.2-1B-Instruct model.

Model Overview

  • Base Model: Llama-3.2-1B-Instruct from Meta
  • Adapter: a chatbot adapter fine-tuned for customer support scenarios
  • Merged Model: The adapter weights have been fully merged into the base model for streamlined inference
  • Quantization: the merged model is quantized to 4-bit to reduce memory use and speed up inference, with minimal quality loss
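For reference, the merge step described above can be sketched with the PEFT library, assuming the adapter is a PEFT (e.g. LoRA) adapter. The adapter repo id and output directory below are placeholders, not the actual paths; imports are deferred into the function so the sketch reads as a recipe.

```python
BASE_MODEL = "meta-llama/Llama-3.2-1B-Instruct"
ADAPTER = "your_username/customer-support-adapter"  # hypothetical adapter repo id


def merge_adapter(base_id: str = BASE_MODEL,
                  adapter_id: str = ADAPTER,
                  out_dir: str = "Ansah-E1") -> None:
    """Merge PEFT adapter weights into the base model and save the result."""
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load the full-precision base model
    base = AutoModelForCausalLM.from_pretrained(base_id)
    # Attach the adapter, then fold its weights into the base layers
    merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
    # Save the standalone merged model alongside the base tokenizer
    merged.save_pretrained(out_dir)
    AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)
```

After this step the merged checkpoint no longer depends on PEFT at inference time; quantization can then be applied when loading, as shown in the Usage section.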

Usage

This model can be loaded and used like any other Hugging Face Transformers model. For example:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model in 4-bit and place it automatically (requires bitsandbytes and accelerate)
model = AutoModelForCausalLM.from_pretrained("your_username/Ansah-E1", load_in_4bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("your_username/Ansah-E1")

prompt = "I received a damaged product and want to return it. What's the process?"
# Move the inputs to the same device as the model before generating
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
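Since the base model is instruction-tuned, responses are usually better when the prompt is rendered with the tokenizer's chat template rather than passed as raw text. A sketch, assuming a `model` and `tokenizer` loaded as above; the system prompt wording is an assumption, not part of this repository:

```python
def build_chat(prompt: str) -> list[dict]:
    """Build a message list for tokenizer.apply_chat_template."""
    return [
        # Assumed system prompt; adjust to your support workflow
        {"role": "system", "content": "You are a helpful customer support assistant."},
        {"role": "user", "content": prompt},
    ]


def generate_reply(model, tokenizer, prompt: str, max_new_tokens: int = 200) -> str:
    """Render the chat template, generate, and decode only the new tokens."""
    inputs = tokenizer.apply_chat_template(
        build_chat(prompt), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Slice off the prompt tokens so only the assistant reply is decoded
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```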
Model size: 764M params (Safetensors; tensor types: F32, FP16, U8)

Model tree: this model is a quantized version of dheerajdasari/Customer-support-instruct-1B.
