
Dataset Card for Koa Associated Biodiversity Camera Trap Dataset

This dataset is aimed at classification of birds visiting planted Acacia koa (koa) trees in the Pu'u Maka'ala Natural Area Reserve (PUUM) on the island of Hawai'i (Big Island). The dataset contains full and cropped images collected by camera traps. These images were collected from January 24th to February 25th, 2025.

Dataset Details

This dataset is aimed at classification of birds visiting planted Acacia koa (koa) trees in the Pu'u Maka'ala Natural Area Reserve on the island of Hawai'i (Big Island). Camera traps were placed facing koa trees and took bursts of 3 images when triggered by motion. The trigger is activated by moving animals, but also by wind.

The dataset contains both the full and cropped images collected. To obtain the crops, the images were first fed through MegaDetector (1), then cropped to the bounding box for any "animal" detections. These crops were then passed to BioCLIP (2) for class-level prediction. The 2,306 crops predicted as "Aves" by BioCLIP were manually labeled as either "plant" or one of the four bird species found in PUUM: Branta sandvicensis (nēnē), Myadestes obscurus ('ōma'o), Lophura leucomelanos (Kalij pheasant), and Himatione sanguinea ('apapane).
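The cropping step above can be sketched as follows. This is a minimal illustration, not the curators' actual code, and it assumes the MegaDetector convention of normalized `[x_min, y_min, width, height]` boxes relative to image size (the exact convention used in this dataset's `bbox` field should be checked against the data):

```python
def megadetector_bbox_to_pixels(bbox, img_w, img_h):
    """Convert a MegaDetector-style normalized [x_min, y_min, width, height]
    box to integer pixel coordinates (left, top, right, bottom)."""
    x, y, w, h = bbox
    left = int(round(x * img_w))
    top = int(round(y * img_h))
    right = int(round((x + w) * img_w))
    bottom = int(round((y + h) * img_h))
    return left, top, right, bottom
```

The resulting tuple can be passed directly to, e.g., PIL's `Image.crop((left, top, right, bottom))` to reproduce a crop from the corresponding full image.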

This data was collected as part of the 2025 Imageomics AI and Ecology course and forms part of the Koa restoration project. This project aims to assess koa replanting for conservation, including the effects of replanting on animal diversity. The data in this dataset was collected from January to April 2025. The field site and reserve where the data was collected is a National Ecological Observatory Network (NEON) site.

References:

  1. Beery, S., Morris, D., and Yang, S. (2019). Efficient pipeline for camera trap image review. arXiv preprint arXiv:1907.06772.
  2. Stevens, S., Wu, J., Thompson, M. J., Campolongo, E. G., Song, C. H., Carlyn, D. E., Dong, L., Dahdul, W. M., Stewart, C., Berger-Wolf, T., et al. (2024). BioCLIP: A vision foundation model for the tree of life. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19412–19424.

Dataset Structure

PUUM-koa-restoration-camera-trap-dataset/
    cropped_images/  
      <site 1>_<SD_card_id 1>/
        <SD_card_id 1>_<from_date 1>_<to_date 1>/
          0_<crop_id 1>_<img_id 1>.jpg
          ...
          0_<crop_id n>_<img_id n>.jpg
        ...
        <SD_card_id 1>_<from_date n>_<to_date n>/
          0_<crop_id 1>_<img_id 1>.jpg
          ...
          0_<crop_id n>_<img_id n>.jpg
      ...
      <site n>_<SD_card_id n>/
        <SD_card_id 1>_<from_date 1>_<to_date 1>/
          0_<crop_id 1>_<img_id 1>.jpg
          ...
          0_<crop_id n>_<img_id n>.jpg

    full_images/  
      <site 1>_<SD_card_id 1>/
        <SD_card_id 1>_<from_date 1>_<to_date 1>/
          <img_id 1>.jpg
          ...
          <img_id n>.jpg
        ...
        <SD_card_id 1>_<from_date n>_<to_date n>/
          <img_id 1>.jpg
          ...
          <img_id n>.jpg
      ...
      <site n>_<SD_card_id n>/
        <SD_card_id 1>_<from_date 1>_<to_date 1>/
          <img_id 1>.jpg
          ...
          <img_id n>.jpg

    test.parquet  
    train.parquet  
    validation.parquet

Data Instances

full_images contains all camera trap images, cropped_images contains the crops generated by MegaDetector, and train/validation/test.parquet contain the metadata for each split.
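Given the directory layout above, a crop path can be broken into its documented levels. This is a hypothetical helper (the `crop_id`/`img_id` values in the usage line are made up for illustration); finer-grained parsing of the id fields is left to the user, since the ids themselves contain underscores:

```python
from pathlib import Path

def split_crop_path(path):
    """Break a crop path into its documented levels:
    <site>_<SD_card_id> / <SD_card_id>_<from_date>_<to_date> / 0_<crop_id>_<img_id>.jpg
    Returns (site_dir, session_dir, filename)."""
    p = Path(path)
    return p.parts[-3], p.parts[-2], p.name
```

For example, `split_crop_path("cropped_images/PL3_51/R_32_51_A_25_02_10_25_02_25/0_7_IMG_0001.jpg")` yields the site directory, session directory, and filename as three strings.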

Data Fields

train/validation/test.parquet:

  • label: The class label of the cropped image (one of the four bird species or "plant")
  • bbox: The coordinates of the bounding box of the cropped image in the full image
  • full_image: Relative path to the full image from the root of the directory
  • cropped_image: Relative path to the cropped image from the root of the directory
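A split's metadata can be read with pandas (assuming the parquet files have been downloaded locally). The schema check below is a hypothetical helper based only on the column list above:

```python
import pandas as pd

# Columns documented in the dataset card.
EXPECTED_COLUMNS = {"label", "bbox", "full_image", "cropped_image"}

def check_split(df):
    """Verify a metadata split DataFrame has the documented columns."""
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    return df

# Usage (after downloading the dataset locally):
# train = check_split(pd.read_parquet("train.parquet"))
# print(train["label"].value_counts())
```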

Data Splits

Because camera traps capture bursts of near-identical images, a random split could leak near-duplicate frames across splits. Hence, annotated crops under the same subfolder are either all used for training or all used for validation/test. Subfolders used for training are as follows:

Kalij pheasant:
  PL3_51/R_32_51_A_25_02_10_25_02_25
Nene:
  OG1_40/R_32_40_A_25_02_10_25_02_25
  OG1_40/R_32_40B_2025_01_27_2025_02_10
Apapane:
  PL5_6/R_32_6_A_25_02_10_25_02_25
Omao:
  PL3_51/R_32_51_A_25_02_10_25_02_25  
Plant:
  CC3_48/R_32_48_A_25_02_10_25_02_25
  OG2_44/R_32_44_A_2025_01_23_2025_01_27
  PL4_52/R_32_52_A_25_02_10_25_02_25
  OG1_40/R_32_40_A_25_02_10_25_02_25

Dataset Creation

This dataset was compiled as part of the field component of the Experiential Introduction to AI and Ecology Course run by the Imageomics Institute and the AI and Biodiversity Change (ABC) Global Center. This field work was done on the island of Hawai'i January 15-30, 2025.

Curation Rationale

This dataset was created with the goal of monitoring koa-associated biodiversity. Specifically, we wanted to capture visiting birds and mammals. To achieve this, we collected camera trap data and fed it through MegaDetector and BioCLIP, creating a pipeline that allows us to rapidly analyze large volumes of data.

Source Data

The full images come directly from the camera trap SD cards and are stored in .jpg format.

Data Collection and Processing

The data was collected by camera traps placed in the field. We applied MegaDetector to the full images. All detections labeled as "animal" were passed to BioCLIP to get class-level predictions. We manually annotated the crops predicted as "Aves" by BioCLIP.

Who are the source data producers?

These images were taken in the field by the camera traps, which were placed by the dataset curators. The PUUM NEON field staff then swapped out SD cards and uploaded the data every two weeks. Data curation was done by the dataset curators listed, especially Yuyan Chen.

Annotations

Annotation process

Annotators were given the crops as well as the full images. Annotators chose from the following categories: plant, nēnē, Kalij pheasant, 'ōma'o, and 'apapane, and also recorded their certainty. Only crops labeled as "certain" are included in this dataset.

Who are the annotators?

  • Yuyan Chen
  • Maximiliane Jousse

Personal and Sensitive Information

The data was collected on a reserve and scientific field site. Permission is needed for access if you are interested in reproducing this data collection at the same site.

Considerations for Using the Data

Bias, Risks, and Limitations

Camera trap image resolution is low, which may affect the accuracy of the labels and detections (i.e., crops).

These images do not represent a comprehensive survey of the fauna visiting Koa trees at PUUM. Missed detections, errors such as full SD cards, and camera placement all limit how much data is collected. Furthermore, the image bursts mean that the same event is potentially observed three times.

Since the bounding boxes were not manually annotated, but generated by MegaDetector instead, this dataset should not be considered as an object detection dataset.

Licensing Information

This dataset is licensed under CC-BY-NC-4.0 (Creative Commons Attribution-NonCommercial 4.0).

Citation

Data

@misc{PUUM-koa-restoration-camera-trap-dataset,
  author = {Chen, Yuyan and Jousse, Maximiliane and Zolotarev, Ted and Gabeff, Valentin and Long, Mike and Rubenstein, Dan},
  title = {Koa Associated Biodiversity Camera Trap Dataset},
  year = {2025},
  url = {https://huggingface.co/datasets/imageomics/PUUM-koa-restoration-camera-trap-dataset},
  publisher = {Hugging Face}
}

Paper

@article{PUUM-koa-restoration-camera-trap-dataset,
  title    = {Monitoring replanted koa paper},
  author   = {Chen, Yuyan and Jousse, Maximiliane and Zolotarev, Ted and Gabeff, Valentin and Rubenstein, Dan},
  year     =  2025
}

Acknowledgements

This work was supported by the Imageomics Institute, which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

This work was supported by the National Science Foundation OAC 2118240 Imageomics Institute and OISE 2330423 AI and Biodiversity Change Global Center awards and the Natural Sciences and Engineering Research Council of Canada 585136 award. This material draws on research supported in part by the Social Sciences and Humanities Research Council. This material is based in part upon work supported by the National Ecological Observatory Network (NEON), a program sponsored by the U.S. National Science Foundation (NSF) and operated under cooperative agreement by Battelle.

Dataset Card Authors

Yuyan Chen, Maximiliane Jousse
