Anime-Gen-Llama-2-7B
Anime-Gen-Llama-2-7B is a LoRA fine-tuned version of meta-llama/Llama-2-7b-hf, trained on a custom anime/manga-style dataset to generate structured short stories and panel descriptions from prompts. The adapter was trained with the PEFT library and bitsandbytes for efficient fine-tuning.
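The exact training hyperparameters are not listed in this card, so the following is only a minimal sketch of a typical QLoRA-style setup with peft and bitsandbytes; the quantization settings, LoRA rank, alpha, target modules, and dropout below are assumptions, not the values used to train this adapter.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Assumed 4-bit quantization settings (not confirmed by this card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Assumed LoRA hyperparameters, for illustration only.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()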
Model Details
Model Description
- Developer: Vignesh Ramaswamy Balasundaram
- Fine-tuned From: meta-llama/Llama-2-7b-hf
- Language: English
- License: Meta LLaMA 2 community license
- Task Type: Causal Language Modeling
- LoRA Adapter: Yes (via peft; the adapter configuration can be inspected as shown below)
- Model Size: 7B parameters (base model)
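The adapter's actual LoRA hyperparameters are not listed above; they can be read from the published adapter configuration. A minimal sketch, assuming only that this is a standard peft LoRA adapter:

from peft import LoraConfig

adapter_id = "vignesh0007/Anime-Gen-Llama-2-7B"
lora_config = LoraConfig.from_pretrained(adapter_id)

print(lora_config.base_model_name_or_path)  # base model the adapter was trained on
print(lora_config.r, lora_config.lora_alpha, lora_config.target_modules)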
Model Sources
Uses
Direct Use
- Text-to-Anime panel generation
- Short manga-style storytelling
- Prompt-driven narrative generation
Out-of-Scope Use
- Legal, financial, or medical decision-making
- Real-time conversation agents
- General-purpose dialogue
Bias, Risks, and Limitations
Limitations
- Trained on a small, domain-specific dataset (anime-style stories)
- May hallucinate character names or plots
- Not guaranteed to follow strict story structure
Recommendations
- Validate outputs for correctness and coherence before using them in production; a minimal structure check is sketched below.
- Treat the model as a creative writing aid rather than a source of factual content.
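As one example of the first recommendation, a lightweight check can flag generations that drift from the expected panel structure. looks_like_panels is a hypothetical helper, and the "Panel N:" pattern is assumed from the prompt format shown in the next section.

import re

def looks_like_panels(text: str, min_panels: int = 1) -> bool:
    # Count "Panel <number>:" headers in the generated text (assumed output format).
    return len(re.findall(r"Panel\s*\d+\s*:", text)) >= min_panels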
How to Get Started
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig

adapter_id = "vignesh0007/Anime-Gen-Llama-2-7B"

# Resolve the base model from the adapter's config, then load it.
config = PeftConfig.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.float16,  # half precision keeps the 7B model within a single GPU
    device_map="auto",
    trust_remote_code=True,
)

# Attach the LoRA adapter and load the tokenizer shipped with it.
model = PeftModel.from_pretrained(base_model, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Prompt format: title, character list, then the first panel header.
prompt = "Title: The Final Duel\nCharacters: Yuki, Daichi\nPanel 1:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, temperature=0.8, top_p=0.95, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
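To serve the model without a runtime peft dependency, the LoRA weights can optionally be merged into the base model and saved as a standalone checkpoint. merge_and_unload() is the standard peft call for LoRA adapters; the output directory name below is illustrative.

# Fold the LoRA weights into the base model and save a standalone copy.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("anime-gen-llama-2-7b-merged")  # illustrative output path
tokenizer.save_pretrained("anime-gen-llama-2-7b-merged")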