make fine-tuning script public?

#5
by wdli - opened

Hi, thanks for the great work! Do you have plans to make the fine-tuning script public?

Hello, thanks for the reply. I was using the diffusers script that the link points to, but I got this error when trying to load the LoRA on Stable Diffusion:
ValueError: Module down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_k is not a LoRACompatibleConv or LoRACompatibleLinear module.
It's pretty weird. Have you ever encountered this?

I'm using stable-diffusion-2-1-base, and I stored the LoRA checkpoint in a local folder. The command I'm using is:

export MODEL_NAME="stabilityai/stable-diffusion-2-1-base"
export OUTPUT_DIR="<output_path>"
export TRAIN_DATA_DIR="<dataset_path>"

accelerate launch --multi_gpu --num_processes=2 train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$TRAIN_DATA_DIR \
  --dataloader_num_workers=8 \
  --resolution=512 \
  --train_batch_size=16 \
  --gradient_accumulation_steps=1 \
  --max_train_steps=15000 \
  --learning_rate=1e-04 \
  --max_grad_norm=1 \
  --lr_scheduler="cosine" \
  --lr_warmup_steps=0 \
  --mixed_precision="fp16" \
  --output_dir=${OUTPUT_DIR} \
  --report_to=wandb \
  --checkpointing_steps=200 \
  --seed=42 