---
license: cc-by-4.0
task_categories:
- object-detection
language:
- en
---
# 8-Calves Dataset
[arXiv:2503.13777](https://arxiv.org/abs/2503.13777)
A benchmark dataset for occlusion-rich object detection, identity classification, and multi-object tracking. It features 8 Holstein Friesian calves with unique coat patterns, recorded in a 1-hour video with temporal annotations.
---
## Overview
This dataset provides:
- 🕒 **1-hour video** (67,760 frames @20 fps, 600x800 resolution)
- 🎯 **537,908 verified bounding boxes** with calf identities (1-8)
- 🖼️ **900 hand-labeled static frames** for detection tasks
The dataset is designed to evaluate robustness in occlusion handling, identity preservation, and temporal consistency.
<img src="dataset_screenshot.png" alt="Dataset Example Frame" width="50%" />
*Example frame with bounding boxes (green) and calf identities. Challenges include occlusion, motion blur, and pose variation.*
---
## Key Features
- **Temporal Richness**: 1-hour continuous recording (vs. 10-minute benchmarks like 3D-POP)
- **High-Quality Labels**:
- Generated via **ByteTrack + YOLOv8m** pipeline with manual correction
- <0.56% annotation error rate
- **Unique Challenges**: Motion blur, pose variation, and frequent occlusions
- **Efficiency Testing**: Compare lightweight (e.g., YOLOv9t) vs. large models (e.g., ConvNextV2)
---
## Dataset Structure
```
hand_labelled_frames/                  # 900 manually annotated frames, YOLO-format labels (class=0 for cows)
pmfeed_4_3_16.avi                      # 1-hour video (4 March 2016)
pmfeed_4_3_16_bboxes_and_labels.pkl    # temporal annotations for the full video
```
### Annotation Details
**PKL File Columns**:
| Column | Description |
|--------|-------------|
| `class` | Always `0` (cow detection) |
| `x`, `y`, `w`, `h` | YOLO-format bounding boxes |
| `conf` | Ignore (detections manually verified) |
| `tracklet_id` | Calf identity (1-8) |
| `frame_id` | Temporal index matching video |
**Load annotations**:
```python
import pandas as pd
df = pd.read_pickle("pmfeed_4_3_16_bboxes_and_labels.pkl")
```
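A few quick sanity checks on the loaded annotations (a sketch; column names follow the table above):
```python
import pandas as pd

# Quick sanity checks on the annotations (column names as in the table above).
df = pd.read_pickle("pmfeed_4_3_16_bboxes_and_labels.pkl")
print(len(df))                           # ~537,908 rows, one per bounding box
print(df["tracklet_id"].value_counts())  # boxes per calf identity (1-8)
print(df["frame_id"].nunique())          # number of annotated video frames
```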
## Usage
### Dataset Download:
Step 1: install Git LFS:
`git lfs install`
Step 2: clone the repository:
`git clone [email protected]:datasets/tonyFang04/8-calves`
Step 3: create the conda environment and install the pip requirements:
```
conda create --name new_env --file conda_requirements.txt
pip install -r pip_requirements.txt
```
### Object Detection
- **Training/Validation**: Use the first 600 frames from `hand_labelled_frames/` (chronological split).
- **Testing**: Evaluate on the full video (`pmfeed_4_3_16.avi`) using the provided PKL annotations.
- ⚠️ **Avoid Data Leakage**: Do not use all 900 frames for training; they are temporally linked to the test video.
**Recommended Split**:
| Split | Frames | Purpose |
|------------|--------|------------------|
| Training | 500 | Model training |
| Validation | 100 | Hyperparameter tuning |
| Test | 67,760 | Final evaluation |
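A minimal sketch of the chronological 500/100 split, assuming the images in `hand_labelled_frames/` sort chronologically by filename (the extension below is an assumption; adjust it to match the files):
```python
import glob

# Chronological 500/100 train/val split over the 900 hand-labelled frames.
# Assumes filenames sort in temporal order; adjust the extension if needed.
frames = sorted(glob.glob("hand_labelled_frames/*.png"))
train_frames, val_frames = frames[:500], frames[500:600]
```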
### Benchmarking YOLO Models:
Step 1: `cd 8-calves/object_detector_benchmark`, then run
`./create_yolo_dataset.sh` and
`python create_yolo_testset.py`. This creates a YOLO dataset with the 500/100/67,760 train/val/test split recommended above.
Step 2: find the `Albumentations` class in `data/augment.py` in the ultralytics source code and replace the default transforms with:
```python
# Replacement transforms for ultralytics' Albumentations wrapper
T = [
    A.RandomRotate90(p=1.0),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.4),
    A.ElasticTransform(alpha=100.0, sigma=5.0, p=0.5),
]
```
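To sanity-check these transforms outside ultralytics, here is a standalone sketch with a dummy image and box; the `bbox_params` mirror how ultralytics composes its pipeline and are not part of this repo:
```python
import albumentations as A
import numpy as np

# Standalone version of the transforms above, composed with YOLO-format
# bbox handling the way ultralytics does; the image and box are dummies.
transform = A.Compose(
    [
        A.RandomRotate90(p=1.0),
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.4),
        A.ElasticTransform(alpha=100.0, sigma=5.0, p=0.5),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)
image = np.zeros((800, 600, 3), dtype=np.uint8)
out = transform(image=image, bboxes=[(0.5, 0.5, 0.2, 0.2)], class_labels=[0])
```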
Step 3: run the YOLO detectors with the following commands:
```
cd yolo_benchmark
Model_Name=yolov9t
yolo cfg=experiment.yaml model=$Model_Name.yaml name=$Model_Name
```
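The same run expressed through the ultralytics Python API, as a sketch (it uses the same YAML files as the CLI call above):
```python
from ultralytics import YOLO

# Build the model from its config and train with the repo's experiment.yaml;
# mirrors the CLI call above (yolov9t is one suggested model name).
model = YOLO("yolov9t.yaml")
model.train(cfg="experiment.yaml", name="yolov9t")
```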
### Benchmarking Transformer-Based Models:
Step 1: run the following commands to convert the data to YOLO format, then to COCO, then to Arrow:
```
cd 8-calves/object_detector_benchmark
./create_yolo_dataset.sh
python create_yolo_testset.py
python yolo_to_coco.py
python data_wrangling.py
```
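To verify the Arrow output loads correctly, a sketch assuming `data_wrangling.py` writes a Hugging Face `datasets` directory (the path below is a hypothetical stand-in for the script's actual output location):
```python
from datasets import load_from_disk

# Hypothetical output path; substitute whatever data_wrangling.py writes.
ds = load_from_disk("coco_arrow/train")
print(ds)            # schema and row count
print(ds[0].keys())  # inspect one record
```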
Step 2: run the following commands to train:
```
cd transformer_benchmark
python train.py --config Configs/conditional_detr.yaml
```
### Temporal Classification
- Use `tracklet_id` (1-8) from the PKL file as labels.
- **Temporal Split**: 30% train / 30% val / 40% test (chronological order); see the split sketch below.
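A minimal sketch of that chronological split, keyed on `frame_id` so no frame straddles two splits (column names from the PKL description above):
```python
import pandas as pd

# Chronological 30/30/40 split over unique frames, so that every box from
# a given frame lands in exactly one split.
df = pd.read_pickle("pmfeed_4_3_16_bboxes_and_labels.pkl").sort_values("frame_id")
frames = df["frame_id"].unique()
cut1, cut2 = int(0.3 * len(frames)), int(0.6 * len(frames))
train = df[df["frame_id"].isin(frames[:cut1])]
val = df[df["frame_id"].isin(frames[cut1:cut2])]
test = df[df["frame_id"].isin(frames[cut2:])]
```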
### Benchmarking Vision Models for Temporal Classification:
Step 1: crop the bounding boxes from `pmfeed_4_3_16.avi` using the verified labels in `pmfeed_4_3_16_bboxes_and_labels.pkl`, then convert the folder of cropped images into an LMDB dataset for fast loading:
```
cd identification_benchmark
python crop_pmfeed_4_3_16.py
python construct_lmdb.py
```
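A rough sketch of what the cropping step does (our reading, not the script itself): decode the video, convert each frame's normalised YOLO boxes to pixels, and write the crops out. The `crops/` output directory is a hypothetical stand-in:
```python
import os

import cv2
import pandas as pd

# Crop the first annotated frame's boxes; the real script loops over all
# frames. The "crops/" output directory is a hypothetical stand-in.
df = pd.read_pickle("pmfeed_4_3_16_bboxes_and_labels.pkl")
cap = cv2.VideoCapture("pmfeed_4_3_16.avi")
W = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
H = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
ok, img = cap.read()

os.makedirs("crops", exist_ok=True)
first = df[df["frame_id"] == df["frame_id"].min()]
for _, row in first.iterrows():
    # YOLO boxes are normalised centre x/y and width/height.
    x1 = int((row["x"] - row["w"] / 2) * W)
    y1 = int((row["y"] - row["h"] / 2) * H)
    crop = img[y1 : y1 + int(row["h"] * H), x1 : x1 + int(row["w"] * W)]
    cv2.imwrite(f"crops/{int(row['tracklet_id'])}_{int(row['frame_id'])}.jpg", crop)
```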
Step 2: extract embeddings from a vision model:
```
cd big_model_inference
```
Use `inference_resnet.py` to extract embeddings from ResNet backbones and `inference_transformers.py` for transformer weights available on Hugging Face:
```
python inference_resnet.py --resnet_type resnet18
python inference_transformers.py --model_name facebook/convnextv2-nano-1k-224
```
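A sketch of what the transformer inference step likely boils down to (standard Hugging Face API; the crop path below is a hypothetical example):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Embed one crop with a Hugging Face backbone; the repo scripts batch this
# over the whole LMDB. "crops/example.jpg" is a hypothetical path.
processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-nano-1k-224")
model = AutoModel.from_pretrained("facebook/convnextv2-nano-1k-224").eval()

image = Image.open("crops/example.jpg").convert("RGB")
with torch.no_grad():
    inputs = processor(images=image, return_tensors="pt")
    embedding = model(**inputs).pooler_output  # shape (1, hidden_dim)
```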
Step 3: use the embeddings and labels from step 2 to run KNN evaluation and linear classification:
```
cd ../classification
python train.py
python knn_evaluation.py
```
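For reference, KNN top-1 evaluation is straightforward with scikit-learn once the embeddings are saved (the `.npy` filenames below are hypothetical stand-ins for whatever step 2 writes out):
```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# 1-nearest-neighbour accuracy on frozen embeddings; filenames are
# hypothetical placeholders for step 2's outputs.
train_emb, train_lbl = np.load("train_embeddings.npy"), np.load("train_labels.npy")
test_emb, test_lbl = np.load("test_embeddings.npy"), np.load("test_labels.npy")

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(train_emb, train_lbl)
print("KNN top-1 accuracy:", knn.score(test_emb, test_lbl))
```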
## Key Results
### Object Detection (YOLO Models)
| Model | Parameters (M) | mAP50:95 (%) | Inference Speed (ms/sample) |
|-------------|----------------|--------------|-----------------------------|
| **YOLOv9c** | 25.6 | **68.4** | 2.8 |
| YOLOv8x | 68.2 | 68.2 | 4.4 |
| YOLOv10n | 2.8 | 64.6 | 0.7 |
---
### Identity Classification (Top Models)
| Model | Accuracy (%) | KNN Top-1 (%) | Parameters (M) |
|----------------|--------------|---------------|----------------|
| **ConvNextV2-Nano** | 73.1 | 50.8 | 15.6 |
| Swin-Tiny | 68.7 | 43.9 | 28.3 |
| ResNet50 | 63.7 | 38.3 | 25.6 |
---
**Notes**:
- **mAP50:95**: Mean Average Precision at IoU thresholds 0.5–0.95.
- **KNN Top-1**: Nearest-neighbor accuracy using embeddings.
- Full results and methodology: [arXiv paper](https://arxiv.org/abs/2503.13777).
## License
This dataset is released under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/).
*Modifications/redistribution must include attribution.*
## Citation
```bibtex
@article{fang20258calves,
title={8-Calves: A Benchmark for Object Detection and Identity Classification in Occlusion-Rich Environments},
author={Fang, Xuyang and Hannuna, Sion and Campbell, Neill},
journal={arXiv preprint arXiv:2503.13777},
  year={2025}
}
```
## Contact
**Dataset Maintainer**:
Xuyang Fang
Email: [[email protected]](mailto:[email protected])