# CogVideoX LoRA - BelGio13/cogvideoX-I2V-locobot

## Model description
These are BelGio13/cogvideoX-I2V-locobot LoRA weights for THUDM/CogVideoX-5b-I2V.
The weights were trained using the CogVideoX Diffusers trainer.
Was LoRA for the text encoder enabled? No.
## Download model
Download the *.safetensors LoRA in the Files & versions tab.
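The file can also be fetched programmatically with `huggingface_hub`. A minimal sketch; the helper name `download_lora` is ours, not a library API, and actually running it requires network access:

```python
def download_lora(repo_id: str, filename: str) -> str:
    """Fetch a file from the Hugging Face Hub and return its local cache path."""
    # Lazy import so the helper can be defined without huggingface_hub installed.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=repo_id, filename=filename)

# Example (performs a network call):
# path = download_lora("BelGio13/cogvideoX-I2V-locobot", "pytorch_lora_weights.safetensors")
```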
## Use it with the 🧨 diffusers library
```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import load_image, export_to_video

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "BelGio13/cogvideoX-I2V-locobot",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="cogvideox-i2v-lora",
)

# The LoRA adapter scale is determined by the values used during training.
# Here we assume `--lora_alpha` was 32 and `--rank` was 64.
# The scale can be set lower or higher than the training value to weaken or
# amplify the LoRA's effect, up to a tolerance beyond which the effect
# disappears or the output degrades.
pipe.set_adapters("cogvideox-i2v-lora", [32 / 64])

image = load_image("/path/to/image")
video = pipe(image=image, prompt="", guidance_scale=6, use_dynamic_cfg=True).frames[0]
export_to_video(video, "output.mp4", fps=8)
```
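The `32 / 64` weight passed to `set_adapters` above is simply `lora_alpha / rank`. A minimal sketch of that relationship; the helper name `adapter_scale` is ours, not a diffusers API:

```python
def adapter_scale(lora_alpha: float, rank: int) -> float:
    """Adapter weight to pass to pipe.set_adapters, assuming the alpha/rank convention."""
    return lora_alpha / rank

# Training used --lora_alpha 32 and --rank 64:
scale = adapter_scale(32, 64)
assert scale == 0.5
# pipe.set_adapters("cogvideox-i2v-lora", [scale])  # equivalent to [32 / 64]
```

Raising the scale above 0.5 amplifies the LoRA's effect; lowering it weakens it.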
For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.
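To build intuition for what fusing (merging) a LoRA does, here is a small numerical sketch of the standard low-rank update `W' = W + (alpha / rank) * B @ A`, with toy dimensions of our choosing rather than the model's real ones:

```python
import numpy as np

rank, alpha = 64, 32
d_out, d_in = 8, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))   # base layer weight
A = rng.standard_normal((rank, d_in))    # LoRA down-projection
B = np.zeros((d_out, rank))              # LoRA up-projection (zero-initialised)

# Fusing folds the scaled low-rank update into the base weight.
W_fused = W + (alpha / rank) * (B @ A)

# With B still at its zero initialisation, the update is a no-op.
assert np.allclose(W_fused, W)
assert W_fused.shape == W.shape
```

In diffusers, this folding is what lets a fused pipeline run at the base model's inference cost, with no separate adapter computation per step.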
## License
Please adhere to the licensing terms of the base model, THUDM/CogVideoX-5b-I2V.
## Intended uses & limitations
### How to use
See the `diffusers` snippet above for a working inference example.
### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]