reporesolutefoldermain
This is a multilingual PEFT fine-tuned version of meta-llama/Llama-2-7b-hf, trained on Bhagavad Gita verses.
Model Details
- Base Model: meta-llama/Llama-2-7b-hf
- Fine-tuning Method: PEFT (Parameter Efficient Fine-Tuning)
- Task: Translation
- Language: Hindi/Sanskrit to English
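Since the card names PEFT as the fine-tuning method, here is a minimal sketch of how such an adapter is typically configured with the `peft` library. The hyperparameter values below (`r`, `lora_alpha`, target modules) are illustrative assumptions, not the values used for this model; the actual settings are recorded in `adapter_config.json`.

```python
from peft import LoraConfig

# Illustrative LoRA hyperparameters -- NOT this model's actual values;
# those are stored in adapter_config.json in this repo.
lora_config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrapping the base model with this config trains only the small adapter
# weights, e.g.:
# model = get_peft_model(AutoModelForCausalLM.from_pretrained(base_id), lora_config)
```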
Directory Structure
```
├── adapter_model.safetensors   # Main model weights
├── adapter_config.json         # Model configuration
├── all_results.json            # Training results
├── train_results.json          # Detailed training metrics
├── trainer_state.json          # Trainer state
└── checkpoint/                 # Checkpoint directory
    ├── training_args.bin       # Training arguments
    ├── optimizer.pt            # Optimizer state
    ├── scheduler.pt            # Scheduler state
    ├── adapter_model.safetensors
    ├── adapter_config.json
    ├── rng_state.pth           # Random state
    └── trainer_state.json
```
Usage
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the PEFT adapter in one step
model = AutoPeftModelForCausalLM.from_pretrained("aneeshm44/reporesolutefoldermain")

# The tokenizer comes from the base model
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```
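Beyond loading, generating a translation requires a prompt. The template below is a hypothetical sketch (the prompt format actually used during fine-tuning is not documented in this card), and the generation calls are shown commented for context, assuming the `model` and `tokenizer` objects from the snippet above.

```python
# Hypothetical prompt template -- adjust to match the format the adapter
# was actually trained on.
def build_prompt(verse: str) -> str:
    return (
        "Translate the following Sanskrit verse into English:\n"
        f"{verse}\n"
        "Translation:"
    )

prompt = build_prompt("karmany evadhikaras te ma phalesu kadacana")

# With the loaded model and tokenizer:
# inputs = tokenizer(prompt, return_tensors="pt")
# output_ids = model.generate(**inputs, max_new_tokens=64)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```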