πŸ¦™ LLaMA 2 7B + TAT FinQA Adapter (Merged, Pre-CoT)

Repo: michael-sigamani/llama2-7b-tat-lora-fp16
Base: NousResearch/Llama-2-7b-hf
Adapter: next-tat/tat-llm-7b-lora
Merged: βœ… Yes
Fine-tuned: ❌ Not yet (this is the pre-CoT stage)
Format: Float16 (fp16)


πŸ“– Overview

This model merges a FinQA-tuned adapter (TAT) into LLaMA 2 7B, producing a standalone checkpoint ready for further fine-tuning or inference on financial reasoning tasks.

  • πŸ“ˆ Fine-tuned LoRA adapter (TAT) captures scalar reasoning from FinQA
  • πŸ” Merged into the full model via PeftModel.merge_and_unload()
  • 🧡 Next step: fine-tune on train_turn.jsonl with chain-of-thought (CoT) supervision

πŸ” Intended Usage

Use this model as the starting point for:

  • 🧠 Fine-tuning on CoT financial datasets (e.g. ConvFinQA turn-based reasoning)
  • πŸ§ͺ Evaluation on scalar, program, and reasoning benchmarks (see the inference sketch below)
  • πŸ¦™ Export to GGUF for Ollama / llama.cpp
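
As a quick sanity check before any benchmark run, the merged checkpoint loads directly with Hugging Face Transformers. A minimal sketch, where the prompt and generation settings are illustrative only and not part of any benchmark harness:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "michael-sigamani/llama2-7b-tat-lora-fp16"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Toy FinQA-style question; real evaluation should follow each benchmark's own prompt format.
prompt = "Revenue was $120m in 2021 and $150m in 2022. What was the year-over-year growth rate?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))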

🚧 Not a Final Model

This checkpoint has not been CoT fine-tuned yet. It is the output of:

Base:   NousResearch/Llama-2-7b-hf
LoRA:   next-tat/tat-llm-7b-lora (FinQA-style)
Merged: Yes (fp16, no adapter required)

Next step: Train on chain-of-thought examples (train_turn.jsonl) using Unsloth or PEFT + TRL.
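
A minimal PEFT + TRL sketch of that step. The "text" field name, LoRA hyperparameters, and output directory are assumptions; adapt them to the actual train_turn.jsonl schema and your TRL version:

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Assumes each record in train_turn.jsonl exposes a "text" field containing
# the prompt plus its chain-of-thought answer (SFTTrainer reads "text" by default).
dataset = load_dataset("json", data_files="train_turn.jsonl", split="train")

trainer = SFTTrainer(
    model="michael-sigamani/llama2-7b-tat-lora-fp16",  # this merged checkpoint
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
    args=SFTConfig(output_dir="llama2-7b-tat-cot", num_train_epochs=1),
)
trainer.train()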


🧠 Merge Script

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in fp16, attach the FinQA-tuned LoRA adapter, and fold
# the adapter weights into the base weights so no adapter is needed at load time.
base_model = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-hf", torch_dtype=torch.float16)
adapter = PeftModel.from_pretrained(base_model, "next-tat/tat-llm-7b-lora")
merged = adapter.merge_and_unload()

# Save the merged weights and the base tokenizer as a standalone checkpoint.
merged.save_pretrained("llama2-7b-tat-lora-fp16")
AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf").save_pretrained("llama2-7b-tat-lora-fp16")
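
Because the LoRA weights are folded into the base weights, the saved directory loads with a plain AutoModelForCausalLM.from_pretrained() call and does not need peft installed at inference time.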

πŸ§‘β€πŸ’» Maintainer

Michael Sigamani
github.com/sigamani


πŸ“œ License

  • Base: Meta LLaMA 2 license (via NousResearch)
  • Adapter: Apache 2.0
  • Merged model: Inherits original LLaMA 2 license – requires HF auth