Model Card for Zamba2-2.7B-Instruct-v2

Zamba2-2.7B-Instruct-v2 is derived from the base Zamba2-2.7B model through SFT and DPO training on instruction-following and conversational datasets.

Zamba2-2.7B-Instruct-v2 is a hybrid model composed of state-space (Mamba2) and transformer blocks.

Quick start

Prerequisites

To use Zamba2-2.7B-Instruct-v2, install transformers:

pip install transformers -U

To install the dependencies needed to run the optimized Mamba2 kernels, build mamba-ssm from source (due to compatibility issues with PyTorch) and install causal-conv1d:

  1. git clone https://github.com/state-spaces/mamba.git
  2. cd mamba && git checkout v2.1.0 && pip install .
  3. pip install causal-conv1d
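
To confirm that both packages installed correctly, you can try importing them (a quick sanity check, not part of the official instructions):

python -c "import mamba_ssm, causal_conv1d"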

You can run the model without using the optimized Mamba2 kernels, but it is not recommended as it will result in significantly higher latency and memory usage.
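
If the kernels are not installed, the model can still be loaded on the slower fallback path. The snippet below is a minimal sketch assuming the use_mamba_kernels option exposed by the transformers Zamba2 integration:

import torch
from transformers import AutoModelForCausalLM

# Load without the optimized Mamba2 kernels (slower, higher memory usage).
# use_mamba_kernels is assumed here from the transformers Zamba2 integration.
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-2.7B-Instruct-v2",
    device_map="cuda",
    torch_dtype=torch.bfloat16,
    use_mamba_kernels=False,
)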

Inference

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Instantiate model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B-Instruct-v2")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-2.7B-Instruct-v2", device_map="cuda", torch_dtype=torch.bfloat16)

# Format the input as a chat template
prompt = "What factors contributed to the fall of the Roman Empire?"
sample = [{'role': 'user', 'content': prompt}]
chat_sample = tokenizer.apply_chat_template(sample, tokenize=False)

# Tokenize input and generate output
input_ids = tokenizer(chat_sample, return_tensors='pt', add_special_tokens=False).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=150, return_dict_in_generate=False, output_scores=False, use_cache=True, num_beams=1, do_sample=False)
print(tokenizer.decode(outputs[0]))
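
The example above uses greedy decoding. Continuing from the same snippet, you can instead sample for more varied responses; the settings below are illustrative rather than tuned recommendations for this model:

# Sampling-based generation (illustrative settings, not tuned recommendations)
outputs = model.generate(
    **input_ids,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))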

Performance

Zamba2-2.7B-Instruct-v2 achieves performance comparable to that of instruction-tuned models of similar size, as shown in the table below.

| Model | Size (B) | IFEval | BBH | GPQA | MATH (Hard) | MMLU Pro | MUSR | Aggregate |
|---|---|---|---|---|---|---|---|---|
| Zamba2-2.7B-Instruct-v2 | 2.66 | 71.92 | 22.42 | 6.13 | 6.47 | 24.40 | 14.97 | 24.38 |
| Zamba2-2.7B-Instruct | 2.66 | 46.56 | 21.32 | 4.09 | 5.71 | 23.18 | 8.56 | 18.24 |
| Granite-3.2-2B-Instruct | 2.53 | 63.03 | 26.87 | 6.09 | 13.32 | 27.80 | 3.74 | 23.48 |
| Qwen-2.5-3B-Instruct | 3.09 | 65.02 | 30.98 | 2.03 | 34.73 | 32.59 | 7.28 | 28.77 |
| Llama3.2-3B-Instruct | 3.21 | 73.87 | 29.31 | 4.06 | 17.12 | 32.01 | 1.74 | 26.22 |
| Gemma-2-2b-it | 2.61 | 19.76 | 24.42 | 2.58 | 1.04 | 25.80 | 7.16 | 13.46 |

Moreover, due to its unique hybrid SSM architecture, Zamba2-2.7B-Instruct-v2 achieves extremely low inference latency and rapid generation with a significantly smaller memory footprint than comparable transformer-based models.

Figure: Zamba performance (Time to First Token and output generation speed).

Figure: Zamba inference and memory cost (memory overhead during generation).
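
A rough way to reproduce such latency and memory measurements, reusing the model and tokenizer loaded in the Inference example above (illustrative only; absolute numbers depend on hardware, batch size, and sequence length):

import time
import torch

torch.cuda.reset_peak_memory_stats()
inputs = tokenizer("Explain state-space models briefly.", return_tensors="pt").to("cuda")

# Time to first token: generate a single new token.
torch.cuda.synchronize()
start = time.perf_counter()
model.generate(**inputs, max_new_tokens=1)
torch.cuda.synchronize()
print(f"TTFT: {time.perf_counter() - start:.3f} s")

# Generation speed: tokens per second over a longer completion.
n_tokens = 256
torch.cuda.synchronize()
start = time.perf_counter()
model.generate(**inputs, max_new_tokens=n_tokens, min_new_tokens=n_tokens, do_sample=False)
torch.cuda.synchronize()
print(f"Generation: {n_tokens / (time.perf_counter() - start):.1f} tokens/s")
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")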

Model Details

Zamba2-2.7B utilizes and extends our original Zamba hybrid SSM-attention architecture. The core Zamba architecture consists of a backbone of Mamba2 layers interleaved with one or more shared attention layers. These attention layers share weights to minimize the parameter cost of the model. We find that concatenating the original model embeddings to the input of this attention block improves performance, likely because it helps maintain information across depth. The Zamba2 architecture also applies LoRA projection matrices to the shared transformer blocks, giving each block some additional expressivity and allowing each shared block to specialize slightly to its own position in the network while keeping the additional parameter overhead small.
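
The following is a conceptual PyTorch sketch of these ideas, not the actual Zamba2 implementation: one attention block whose weights are shared across several backbone positions, fed the concatenation of the current hidden state and the original token embeddings, with a small per-position LoRA adapter on its input projection. All sizes and names are illustrative.

import torch
import torch.nn as nn

class SharedAttentionBlock(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int, num_call_sites: int, lora_rank: int = 8):
        super().__init__()
        # The block sees [hidden_state ; original_embeddings], hence 2 * hidden_size.
        self.in_proj = nn.Linear(2 * hidden_size, hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.out_proj = nn.Linear(hidden_size, hidden_size)
        # One low-rank (LoRA-style) adapter per call site; only these differ by position.
        self.lora_down = nn.ModuleList(
            nn.Linear(2 * hidden_size, lora_rank, bias=False) for _ in range(num_call_sites)
        )
        self.lora_up = nn.ModuleList(
            nn.Linear(lora_rank, hidden_size, bias=False) for _ in range(num_call_sites)
        )

    def forward(self, hidden: torch.Tensor, orig_emb: torch.Tensor, site: int) -> torch.Tensor:
        x = torch.cat([hidden, orig_emb], dim=-1)
        # Shared projection plus the call-site-specific low-rank correction.
        h = self.in_proj(x) + self.lora_up[site](self.lora_down[site](x))
        attn_out, _ = self.attn(h, h, h)
        return hidden + self.out_proj(attn_out)

# Usage: the same block (same weights) is invoked at different backbone positions,
# each selecting its own LoRA adapter via `site`.
block = SharedAttentionBlock(hidden_size=256, num_heads=8, num_call_sites=4)
h = torch.randn(1, 16, 256)   # current hidden states
e = torch.randn(1, 16, 256)   # original token embeddings
out = block(h, e, site=2)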

Figure: Zamba2 architecture diagram.

A standalone PyTorch implementation of Zamba2-2.7B may be found here.

Training Recipe

Zamba2-2.7B-Instruct-v2 was trained on a mix of publicly available datasets, including instruction-following and chat data. We experimented with various training approaches and found that the best recipe was as follows (a rough code sketch of this pipeline is given after the list):

  1. SFT for one epoch on core chat, reasoning and math datasets such as HuggingFaceTB/smoltalk and nvidia/OpenMathInstruct-2
  2. DPO for 3 epochs on core alignment datasets including a subset of allenai/llama-3.1-tulu-3-70b-preference-mixture
  3. DPO on very high quality preference datasets such as jondurbin/truthy-dpo-v0.1 and jondurbin/gutenberg-dpo-v0.1
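
A minimal sketch of this SFT-then-DPO pipeline using TRL is shown below. The dataset names match those listed above, but the hyperparameters, dataset configs and splits, and exact trainer argument names are illustrative and vary across TRL versions; this is not Zyphra's actual training code.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

model_name = "Zyphra/Zamba2-2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Stage 1: one epoch of SFT on chat/reasoning/math data (config/split names may differ).
sft_data = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")
sft_trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="zamba2-sft", num_train_epochs=1),
    train_dataset=sft_data,
    processing_class=tokenizer,
)
sft_trainer.train()

# Stage 2: DPO for 3 epochs on preference data (stage 3 repeats this step with the
# higher-quality preference datasets listed above).
dpo_data = load_dataset("allenai/llama-3.1-tulu-3-70b-preference-mixture", split="train")
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,
    args=DPOConfig(output_dir="zamba2-dpo", num_train_epochs=3, beta=0.1),
    train_dataset=dpo_data,
    processing_class=tokenizer,
)
dpo_trainer.train()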