
reporesolutefoldermain

This is a multilingual PEFT fine-tuned version of meta-llama/Llama-2-7b-hf, trained on Bhagavad Gita verses.

Model Details

  • Base Model: meta-llama/Llama-2-7b-hf
  • Fine-tuning Method: PEFT (Parameter Efficient Fine-Tuning)
  • Task: Translation
  • Language: Hindi/Sanskrit to English

Directory Structure

├── adapter_model.safetensors    # Main model weights
├── adapter_config.json          # Model configuration
├── all_results.json             # Training results
├── train_results.json           # Detailed training metrics
├── trainer_state.json           # Trainer state
└── checkpoint/                  # Checkpoint directory
    ├── training_args.bin        # Training arguments
    ├── optimizer.pt             # Optimizer state
    ├── scheduler.pt             # Scheduler state
    ├── adapter_model.safetensors
    ├── adapter_config.json
    ├── rng_state.pth            # RNG state
    └── trainer_state.json

Usage

from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
model = AutoPeftModelForCausalLM.from_pretrained("aneeshm44/reporesolutefoldermain")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
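
Once the model and tokenizer are loaded as above, translation can be run via generation. The prompt template and the `build_prompt`/`translate` helper names below are assumptions for illustration; the exact prompt format used during fine-tuning is not documented in this card, so adapt it to match your training data.

```python
def build_prompt(verse: str) -> str:
    # Hypothetical prompt template -- adjust to whatever format
    # the adapter was actually fine-tuned with.
    return (
        "Translate the following Bhagavad Gita verse to English:\n"
        f"{verse}\nTranslation:"
    )

def translate(model, tokenizer, verse: str, max_new_tokens: int = 128) -> str:
    # Tokenize the prompt and move tensors to the model's device.
    inputs = tokenizer(build_prompt(verse), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Example call: `translate(model, tokenizer, "कर्मण्येवाधिकारस्ते मा फलेषु कदाचन")`.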