
Dataset Card for Kenyan Animal Behavior Recognition (KABR) Mini-Scene Raw Videos

Dataset Summary

This dataset comprises 10+ hours of drone videos of Kenyan wildlife, capturing the behaviors of giraffes, plains zebras, and Grevy's zebras. Animals can be identified from the provided bounding box coordinates, and behavior annotations can be recovered by linking the labels back to these bounding boxes using the mini-scene annotations provided in our ML-ready, behavior-recognition-focused subset of this data: KABR.

Data collection was conducted at the Mpala Research Centre in Kenya by flying drones over the animals, providing high-quality video footage of the animals' natural behaviors.

KABR is the processed, ML-ready version of this dataset (with mini-scenes). It includes eight different classes, encompassing seven types of animal behavior and an additional category for occluded instances. In the annotation process for this dataset, a team of 10 people was involved, with an expert zoologist overseeing the process. Each behavior was labeled based on its distinctive features, using a standardized set of criteria to ensure consistency and accuracy across the annotations.

Note that these behaviors are not explicitly labeled on the data provided in this dataset, but they can be linked through the process described below in the Data Instances section.

Supported Tasks and Leaderboards

This dataset could be used for training or evaluating animal detection models or as input for behavior analysis on videos with a custom pipeline.

Languages

English

Dataset Structure

The KABR full video dataset uses the following directory structure:

/dataset
    /data
        /DD_MM_YY-DJI_0NNN
            DD_MM_YY-DJI_0NNN.mp4  (or DD_MM_YY-DJI_0NNN-trimmed.mp4)
            /actions
                MS#.xml
                ...
            /metadata
                DJI_0NNN.jpg
                DJI_0NNN_metadata.json
                DJI_0NNN_tracks.xml
                DJI_0NNN.SRT  (optional)
        /DD_MM_YY-DJI_0NNN
            DD_MM_YY-DJI_0NNN.mp4  (or DD_MM_YY-DJI_0NNN-trimmed.mp4)
            /actions
                MS#.xml
                ...
            /metadata
                DJI_0NNN.jpg
                DJI_0NNN_metadata.json
                DJI_0NNN_tracks.xml
                DJI_0NNN.SRT  (optional)
        ...

Note: Directory names use the format DD_MM_YY-DJI_0NNN where DD_MM_YY represents the collection date (e.g., 11_01_23 for January 11, 2023) and DJI_0NNN is the video identifier. Some directories may include session information (e.g., 16_01_23_session_1-DJI_0001).
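Given the layout above, a local copy of the dataset can be traversed with a short script. This is an illustrative sketch, not official dataset tooling; `DATASET_ROOT` is a hypothetical path you should point at your own download.

```python
from pathlib import Path

# Hypothetical local path to the dataset; adjust to where you downloaded it.
DATASET_ROOT = Path("dataset/data")

def list_sessions(root: Path):
    """Yield (video_dir, video_path) for each DD_MM_YY-DJI_0NNN directory.

    Per the layout above, the main video is either NAME.mp4 or
    NAME-trimmed.mp4; video_path is None if neither file is present.
    """
    for video_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        candidates = (video_dir / f"{video_dir.name}.mp4",
                      video_dir / f"{video_dir.name}-trimmed.mp4")
        video = next((p for p in candidates if p.exists()), None)
        yield video_dir, video

if DATASET_ROOT.exists():
    for video_dir, video in list_sessions(DATASET_ROOT):
        print(video_dir.name, "->", video.name if video else "(video missing)")
```

The trimmed-or-original check matters because trimmed videos replace, rather than accompany, the originals in the affected directories.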

Ecological Metadata

For Darwin Core compliant ecological details including session information, environmental conditions, and sampling event data, please see the session_events.csv file in the KABR Behavior Telemetry dataset.

Data Instances

Naming: Within the data folder, each DD_MM_YY-DJI_0NNN directory contains:

  • DD_MM_YY-DJI_0NNN.mp4 or DD_MM_YY-DJI_0NNN-trimmed.mp4: Video collected by the drone (original or trimmed to remove people/takeoff/landing).
  • actions - Folder containing:
    • MS#.xml: Contains behavior annotation information for the mini-scene indicated by number MS#.
  • metadata - Folder containing:
    • DJI_0NNN.jpg: Color-coded Gantt chart indicating the timeline for mini-scenes derived from the video.
    • DJI_0NNN_metadata.json: Contains binary data relating the main video to its derived mini-scenes.
    • DJI_0NNN_tracks.xml: Contains bounding box coordinates for each mini-scene within the main video, with references to the frame ID relative to the main video.
    • DJI_0NNN.SRT: (Optional) Subtitle file with drone telemetry data.

Examples:

  • DJI_0022_metadata.json:
{
    "original": "../data/recording_NNN/DJI_0022.mp4",
...
  • DJI_0022_tracks.xml:
<?xml version='1.0' encoding='UTF-8'?>
<annotations>
  <version>1.1</version>
  <meta>
    <task>
      <size>8720</size>
      <original_size>
        <width>3840</width>
        <height>2160</height>
      </original_size>
      <source>DJI_0022</source>
    </task>
  </meta>
  <track id="1" label="Zebra" source="manual">
    <box frame="1" outside="0" occluded="0" keyframe="1" xtl="1651.00" ytl="1114.00" xbr="1681.00" ybr="1132.00" z_order="0"/>
    <box frame="2" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1114.00" xbr="1681.00" ybr="1133.00" z_order="0"/>
    <box frame="3" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1111.00" xbr="1681.00" ybr="1133.00" z_order="0"/>
    <box frame="4" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1114.00" xbr="1681.00" ybr="1135.00" z_order="0"/>
    <box frame="5" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1114.00" xbr="1680.00" ybr="1135.00" z_order="0"/>
    <box frame="6" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1114.00" xbr="1680.00" ybr="1136.00" z_order="0"/>
    <box frame="7" outside="0" occluded="0" keyframe="1" xtl="1650.00" ytl="1115.00" xbr="1680.00" ybr="1136.00" z_order="0"/>
...
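The `track`/`box` schema in the excerpt above can be parsed with the Python standard library. A minimal sketch, assuming only the attributes shown in the excerpt:

```python
import xml.etree.ElementTree as ET

def parse_tracks(xml_text: str):
    """Parse a DJI_0NNN_tracks.xml document into a dict mapping
    track id -> list of (frame, xtl, ytl, xbr, ybr) boxes."""
    root = ET.fromstring(xml_text)
    tracks = {}
    for track in root.iter("track"):
        boxes = []
        for box in track.iter("box"):
            boxes.append((
                int(box.get("frame")),
                float(box.get("xtl")), float(box.get("ytl")),
                float(box.get("xbr")), float(box.get("ybr")),
            ))
        tracks[track.get("id")] = boxes
    return tracks
```

For example, `parse_tracks(Path("DJI_0022_tracks.xml").read_text())` would yield the zebra boxes shown above under track id `"1"`, with frame IDs relative to the main video.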

The bounding box can then be linked to the appropriate actions/MS#.xml file (the track id is the MS#, i.e., the mini-scene number), which provides the behavior annotation:

<points frame="64" keyframe="0" outside="0" occluded="0" points="161.15,145.68" z_order="0">
    <attribute name="Behavior">Walk</attribute>
</points>
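The linking step just described can also be sketched in Python. Element and attribute names are taken from the excerpt above; the frame indexing convention in the actions files is an assumption and should be verified against your copy of the data.

```python
import xml.etree.ElementTree as ET

def behavior_by_frame(actions_xml: str):
    """Map frame number -> behavior label for one mini-scene (actions/MS#.xml)."""
    root = ET.fromstring(actions_xml)
    labels = {}
    for points in root.iter("points"):
        # Each <points> element carries a Behavior attribute, as in the excerpt.
        attr = points.find("attribute[@name='Behavior']")
        if attr is not None and attr.text:
            labels[int(points.get("frame"))] = attr.text
    return labels

# Tiny inline sample mirroring the excerpt above.
sample = ('<annotations><track id="1">'
          '<points frame="64" keyframe="0" outside="0" occluded="0" '
          'points="161.15,145.68" z_order="0">'
          '<attribute name="Behavior">Walk</attribute>'
          '</points></track></annotations>')
print(behavior_by_frame(sample))  # -> {64: 'Walk'}
```

Joining this mapping with the boxes from the matching track id in `DJI_0NNN_tracks.xml` recovers per-frame behavior labels for each bounding box.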

Note: The dataset consists of a total of 1,139,893 annotated frames captured from drone videos. There are 488,638 annotated frames of Grevy's zebras, 492,507 annotated frames of plains zebras, and 158,748 annotated frames of giraffes. Occasionally, other animals and vehicles appear in the videos, but they are not identified.

Data Fields

There are 14,764 unique behavioral sequences in the dataset. These consist of eight distinct behaviors:

  • Walk
  • Trot
  • Run: animal is moving at a canter or gallop
  • Graze: animal is eating grass or other vegetation
  • Browse: animal is eating trees or bushes
  • Head Up: animal is looking around or observing its surroundings
  • Auto-Groom: animal is grooming itself (licking, scratching, or rubbing)
  • Occluded: animal is not fully visible

Dataset Creation

The KABR full video dataset was created to provide a comprehensive resource for studying animal behavior in their natural habitat using drone technology.

Source Data

Initial Data Collection and Normalization

Data was collected from 6 January 2023 through 21 January 2023 at the Mpala Research Centre in Kenya under a NACOSTI research license. We used DJI Air and Mavic 2S drones equipped with cameras to record 4K and 5.4K resolution videos from varying altitudes, at distances of 10 to 50 meters from the animals (distance was determined by circumstances and safety regulations).

Annotations

See the KABR paper, "KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition from Drone Videos," for full annotation process details.

Video Trimming and Processing Details

Some videos were trimmed to remove footage of people, vehicle appearances, drone takeoff, or landing sequences. The table below details which videos were trimmed and the reasons:

Note: January 16, 2023 data represents a single collection session split into two flights (flight_1 and flight_2).

Date Session/Flight Video ID Trim Type Trim Point Reason
11_01_23 session_1 DJI_0488 End trim 7:48 Remove people
11_01_23 session_2 DJI_0980 End trim 4:00 Remove people
12_01_23 session_1 DJI_0989 End trim 1:00 Remove people
12_01_23 session_2 DJI_0994 End trim 3:30 Remove people
12_01_23 session_3 DJI_0997 Start trim 0:15 Remove people
12_01_23 session_3 DJI_0998 End trim 2:30 Remove people
12_01_23 session_4 DJI_0003 End trim 1:00 Remove people and landing
12_01_23 session_5 DJI_0008 End trim 3:00 Remove landing and people
13_01_23 session_1 DJI_0009 End trim 4:18 Remove landing
13_01_23 session_2 DJI_0011 Start trim 0:50 Remove people and takeoff
13_01_23 session_3 DJI_0014 Start trim 0:24 Remove people
13_01_23 session_4 DJI_0017 Start trim 0:12 Remove people
13_01_23 session_5 DJI_0018 Start trim 0:22 Remove takeoff
13_01_23 session_5 DJI_0021 Start trim 0:27 Remove launch
13_01_23 session_6 DJI_0027 Start trim 0:27 Remove takeoff
13_01_23 session_6 DJI_0029 Start trim 0:30 Remove takeoff
13_01_23 session_7 DJI_0031 Start trim 0:27 Remove takeoff
13_01_23 session_8 DJI_0034 Start trim 0:27 Remove takeoff
13_01_23 session_8 DJI_0039 Start trim 0:39 Remove takeoff and people
16_01_23 flight_1 DJI_0001 Start trim 0:12 Remove people
16_01_23 flight_2 DJI_0004 End trim Last 0:10 Remove landing
17_01_23 session_1 DJI_0005 Start trim 0:40 Remove people and takeoff
17_01_23 session_2 DJI_0008 Start trim 0:28 Remove people and takeoff
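If you need to align your own copy of the raw footage with the released trimmed versions, the M:SS trim points in the table can be converted to seconds with a small helper. This is an illustrative sketch, not part of the dataset tooling; note the one row that uses a different convention.

```python
def trim_point_to_seconds(ts: str) -> int:
    """Convert an 'M:SS' trim point from the table (e.g. '7:48') to seconds.

    Note: the 16_01_23 flight_2 row uses 'Last 0:10' (an offset from the end
    of the video), which this helper does not cover.
    """
    minutes, seconds = ts.split(":")
    return int(minutes) * 60 + int(seconds)

print(trim_point_to_seconds("7:48"))  # -> 468
```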

Excluded Videos:

  • 12_01_23-DJI_0993: Deleted (no wildlife data or behavior annotations)
  • 13_01_23-DJI_0010: Deleted (only contained drone landing footage)
  • 16_01_23-DJI_0005: Deleted (no useful data)

Additional Notes:

  • 11_01_23-DJI_0979: Field vehicle appears briefly at 2:18 but not close enough for individual identification

Personal and Sensitive Information

Personally identifiable information (PII) has been removed from the dataset.

Other Known Limitations

This dataset is not ML-ready. It contains the full videos (with bounding box coordinates) that were processed to create the KABR dataset. See KABR Behavior Telemetry Dataset for ecological metadata associated with this dataset and AI-ready behavior and detection annotations.

This data exhibits a long-tailed distribution due to the natural variation in frequency of the observed behaviors.

Additional Information

Authors

  • Jenna Kline (The Ohio State University) - ORCID: 0009-0006-7301-5774
  • Maksim Kholiavchenko (Rensselaer Polytechnic Institute) - ORCID: 0000-0001-6757-1957
  • Michelle Ramirez (The Ohio State University)
  • Sam Stevens (The Ohio State University) - ORCID: 0009-0000-9493-7766
  • Alec Sheets (The Ohio State University) - ORCID: 0000-0002-3737-1484
  • Reshma Ramesh Babu (The Ohio State University) - ORCID: 0000-0002-2517-5347
  • Namrata Banerji (The Ohio State University) - ORCID: 0000-0001-6813-0010
  • Elizabeth Campolongo (The Ohio State University) - ORCID: 0000-0003-0846-2413
  • Matthew Thompson (The Ohio State University) - ORCID: 0000-0003-0583-8585
  • Nina Van Tiel (École polytechnique fédérale de Lausanne) - ORCID: 0000-0001-6393-5629
  • Jackson Miliko (Mpala Research Centre)
  • Isla Duporge (Princeton University) - ORCID: 0000-0001-8463-2459
  • Neil Rosser (University of Florida) - ORCID: 0000-0001-7796-2548
  • Eduardo Bessa (Universidade de Brasília) - ORCID: 0000-0003-0606-5860
  • Charles Stewart (Rensselaer Polytechnic Institute)
  • Tanya Berger-Wolf (The Ohio State University) - ORCID: 0000-0001-7610-1412
  • Daniel Rubenstein (Princeton University) - ORCID: 0000-0001-9049-5219

Licensing Information

This dataset is dedicated to the public domain for the benefit of scientific pursuits. We ask that you cite the dataset using the below citation if you make use of it in your research.

Citation Information

@misc{kabr-mini-scene-videos,
  author = {
    Jenna Kline and Maksim Kholiavchenko and Michelle Ramirez and Sam Stevens and Alec Sheets and Reshma Ramesh Babu and
    Namrata Banerji and Elizabeth Campolongo and Matthew Thompson and Nina Van Tiel and Jackson Miliko and Isla Duporge and Neil Rosser and
    Eduardo Bessa and Charles Stewart and Tanya Berger-Wolf and Daniel Rubenstein
  },
  title = {Kenyan Animal Behavior Recognition (KABR) Mini-Scene Raw Videos},
  year = {2025},
  url = {https://huggingface.co/datasets/imageomics/KABR-mini-scene-raw-videos},
  doi = {},
  publisher = {Hugging Face},
  }

Contributions

The Imageomics Institute is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) Institute program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning).
