Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption
Abstract
Video Detailed Captioning (VDC) is a crucial task for vision-language bridging, enabling fine-grained descriptions of complex video content. In this paper, we first comprehensively benchmark current state-of-the-art approaches and systematically identify two critical limitations: biased capability toward specific captioning aspects and misalignment with human preferences. To address these deficiencies, we propose Cockatiel, a novel three-stage training pipeline that ensembles synthetic and human-aligned training to improve VDC performance. In the first stage, we derive a scorer from a meticulously annotated dataset and use it to select synthetic captions that both perform well on certain fine-grained dimensions of video-caption alignment and match human preferences, while disregarding the rest. We then train Cockatiel-13B on this curated dataset to infuse it with the assembled model strengths and human preferences. Finally, we distill Cockatiel-8B from Cockatiel-13B for ease of use. Extensive quantitative and qualitative experiments demonstrate the effectiveness of our method: we not only set new state-of-the-art performance on VDCSCORE in a dimension-balanced way, but also surpass leading alternatives on human preference by a large margin, as shown by the human evaluation results.
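To make the first stage of the pipeline concrete, here is a minimal sketch of scorer-guided caption curation: generate candidate captions with several VDC models, score each candidate for fine-grained video-caption alignment and human preference, and keep only the best one above a threshold. All names here (`curate_caption`, `score_caption`, the captioner callables, the threshold value) are hypothetical illustrations, not the released implementation.

```python
# Hypothetical sketch of Cockatiel's stage-1 data curation, not the released API:
# ensemble several captioners, score each synthetic caption with a
# human-aligned scorer, and keep the highest-scoring caption if it passes
# an (assumed) acceptance threshold.
from typing import Callable, List, Optional

def curate_caption(
    video_path: str,
    captioners: List[Callable[[str], str]],       # candidate VDC models
    score_caption: Callable[[str, str], float],   # human-aligned quality scorer
    threshold: float = 0.7,                       # assumed cutoff, for illustration
) -> Optional[str]:
    """Return the highest-scoring synthetic caption for a video,
    or None if every candidate falls below the threshold
    (such samples would be dropped from the training set)."""
    candidates = [captioner(video_path) for captioner in captioners]
    scored = [(score_caption(video_path, c), c) for c in candidates]
    best_score, best_caption = max(scored)
    return best_caption if best_score >= threshold else None
```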
Community
🌟Project page: https://sais-fuxi.github.io/projects/cockatiel/
📖Paper: https://arxiv.org/abs/2503.09279
💥Code: https://github.com/Fr0zenCrane/Cockatiel
🤗Captioner Model: https://huggingface.co/Fr0zencr4nE/Cockatiel-13B
🤗Scorer Model: https://huggingface.co/Fr0zencr4nE/Cockatiel-Scorer
🤗Dataset: https://huggingface.co/datasets/Fr0zencr4nE/Cockatiel-4K
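The released checkpoints and dataset can be fetched with the standard `huggingface_hub` client. The snippet below only downloads the repositories; it makes no assumption about the loading code, which follows the instructions in the GitHub repo above.

```python
# Download the released artifacts from the Hugging Face Hub.
# See the GitHub repo for how to load and run the models.
from huggingface_hub import snapshot_download

captioner_dir = snapshot_download(repo_id="Fr0zencr4nE/Cockatiel-13B")
scorer_dir = snapshot_download(repo_id="Fr0zencr4nE/Cockatiel-Scorer")
dataset_dir = snapshot_download(
    repo_id="Fr0zencr4nE/Cockatiel-4K", repo_type="dataset"
)
```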
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- LongCaptioning: Unlocking the Power of Long Video Caption Generation in Large Multimodal Models (2025)
- Painting with Words: Elevating Detailed Image Captioning with Benchmark and Alignment Learning (2025)
- Fine-Grained Video Captioning through Scene Graph Consolidation (2025)
- Raccoon: Multi-stage Diffusion Training with Coarse-to-Fine Curating Videos (2025)
- Pretrained Image-Text Models are Secretly Video Captioners (2025)
- VidCapBench: A Comprehensive Benchmark of Video Captioning for Controllable Text-to-Video Generation (2025)
- MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation (2025)