Can't reproduce given example (no meaningful output)
1 reply · #8 opened 15 days ago by pzarzycki

Error when fine-tuning the model with FSDP auto wrap: Could not find the transformer layer class LlavaOnevisionVisionAttention in the model.
1 reply · #6 opened 4 months ago by liuzijing2014

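The auto-wrap error above usually means the class name handed to FSDP's `transformer_layer_cls_to_wrap` setting does not match any module class actually present in the loaded model (for example, the class was renamed or does not exist in the installed transformers version). A minimal sketch for listing the names that are present; `module_class_names` is a hypothetical helper, and the duck-typed `.modules()` interface mirrors `torch.nn.Module`:

```python
# Hypothetical helper: collect the class names of every submodule so the
# value passed to FSDP's transformer_layer_cls_to_wrap can be checked
# against what the model actually contains. Works with any object exposing
# a torch.nn.Module-style .modules() iterator.
def module_class_names(model):
    return sorted({type(m).__name__ for m in model.modules()})

# Usage (assuming a loaded Hugging Face model):
#   names = module_class_names(model)
#   "LlavaOnevisionVisionAttention" in names  # False would trigger the error
```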
Error when attempting to run either model: ValueError: embed_dim must be divisible by num_heads (got `embed_dim`: 1152 and `num_heads`: 14).
3 replies · #4 opened 6 months ago by jdc4429

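For context on the ValueError above: multi-head attention splits `embed_dim` evenly across heads, and 1152 divided by 14 leaves a remainder of 4, so the check fails. A minimal sketch of the constraint (`check_heads` is a hypothetical stand-in, not the transformers source):

```python
def check_heads(embed_dim: int, num_heads: int) -> int:
    # Multi-head attention gives each head embed_dim // num_heads channels,
    # so the division must be exact; otherwise raise as in the error above.
    if embed_dim % num_heads != 0:
        raise ValueError(
            f"embed_dim must be divisible by num_heads "
            f"(got `embed_dim`: {embed_dim} and `num_heads`: {num_heads})."
        )
    return embed_dim // num_heads

# 1152 % 14 == 4, so check_heads(1152, 14) raises;
# a compatible config such as 16 heads gives a per-head dim of 72.
```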
Download transformers for LlavaOnevisionForConditionalGeneration
2 replies · #1 opened 7 months ago by mjbooo
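On the thread title above: `LlavaOnevisionForConditionalGeneration` is only exposed by sufficiently recent transformers releases, so a missing-class import is usually fixed by upgrading the package (`pip install -U transformers`). A hedged sketch that probes the installed package without assuming it is present; `has_llava_onevision` is a hypothetical helper:

```python
import importlib

def has_llava_onevision() -> bool:
    # Hypothetical check: True only if transformers is installed AND the
    # installed release exposes LlavaOnevisionForConditionalGeneration.
    try:
        mod = importlib.import_module("transformers")
    except ImportError:
        return False
    return hasattr(mod, "LlavaOnevisionForConditionalGeneration")

# If this returns False, upgrading transformers is the usual remedy.
```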