V33DA++
Summary
V33DA++ is a companion release to the V33DA benchmark. It is built from the same BirdPark recordings and the same 10 birds, but includes data outside the strict single-vocalizer benchmark clips. V33DA++ is intended for tasks that require overlapping callers or longer temporal context, such as source separation, vocal activity detection, audio-visual synchronization, or active-speaker detection.
V33DA++ contains two buckets:
- overlap: reviewer-verified events that were excluded from V33DA because another bird vocalized within the same window. Each sample includes the list of overlapping callers verified from on-body accelerometer channels.
- padded: +/-2s context windows around each V33DA event, preserving the original event onset/offset inside a longer window.
This release is not the benchmark used in the V33DA paper’s main results tables.
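The geometry of the padded bucket is simple enough to express in a few lines. The sketch below illustrates how a +/-2 s context window relates to the original event boundaries; the field names follow the Data Fields section further down, and the clamping behavior at recording edges is an assumption, not something stated by this card.

```python
# Hypothetical helper illustrating the padded-window geometry described above.
# Assumption: windows are clamped to the recording boundaries; check
# metadata.json for the actual build behavior.
PAD_SEC = 2.0

def padded_window(event_onset_sec, event_offset_sec, recording_duration_sec,
                  pad_sec=PAD_SEC):
    """Return the padded window and the event's position inside it (seconds)."""
    window_start = max(0.0, event_onset_sec - pad_sec)
    window_end = min(recording_duration_sec, event_offset_sec + pad_sec)
    onset_in_window = event_onset_sec - window_start
    offset_in_window = event_offset_sec - window_start
    return window_start, window_end, onset_in_window, offset_in_window

# Example: a 0.6 s call starting 5.0 s into a 30 s recording
print(padded_window(5.0, 5.6, 30.0))  # -> (3.0, 7.6, 2.0, 2.6)
```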
Data Structure
Top-level folders:
- overlap/: V33DA++ overlap bucket
- overlap_padded/: overlap bucket with +/-2s context
- padded/: V33DA++ padded-context bucket
Each bucket contains:
- v33da_pp_*.parquet: data shards
- audio/: multi-channel cage microphone WAVs (if exported)
- accelerometer/: multi-channel on-body vibration WAVs (if exported)
- clips/: aligned MP4 clips (if exported)
- metadata.json: build metadata and filters
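If you pull the files directly rather than through a loader, the Parquet shards can be read with standard tooling. A minimal sketch, assuming the shards are plain Parquet files matching the v33da_pp_*.parquet pattern and readable with pandas/pyarrow:

```python
import glob

import pandas as pd  # requires pyarrow (or fastparquet) for read_parquet

# Read every shard in one bucket into a single DataFrame.
shards = sorted(glob.glob("padded/v33da_pp_*.parquet"))
df = pd.concat((pd.read_parquet(p) for p in shards), ignore_index=True)

print(len(df), "events")
print(df.columns.tolist())  # inspect the available fields
```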
Configs
We publish three dataset configs:
- overlap: overlap-filtered calls with the verified list of overlapping callers
- overlap_padded: overlap bucket with +/-2s context
- padded: +/-2s context windows around each V33DA event
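Each config can be pulled with the Hugging Face datasets library. A minimal sketch; the repository id below is a placeholder for this dataset's actual id on the Hub, and the split name is assumed to be the default train split:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual V33DA++ repository id.
overlap = load_dataset("ORG/v33da-pp", name="overlap", split="train")
padded = load_dataset("ORG/v33da-pp", name="padded", split="train")

print(overlap[0]["overlap_callers"], overlap[0]["overlap_count"])
```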
Data Fields (Parquet)
All V33DA fields are preserved, plus:
- overlap_callers: list of overlapping caller colors (overlap bucket)
- overlap_count: number of overlapping callers (overlap bucket)
- event_onset_sec, event_offset_sec
- event_onset_frame, event_offset_frame
- context_pad_sec
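As a usage sketch, these fields can drive simple filtering and timing computations. This assumes the configs were loaded as in the Configs example above; the reference frame of event_onset_sec/event_offset_sec (source recording vs. exported clip) should be checked against metadata.json.

```python
# Keep only overlap events where at least two other birds were calling.
multi = overlap.filter(lambda ex: ex["overlap_count"] >= 2)
print(len(multi), "events with two or more overlapping callers")

# Event duration and context padding for one padded-bucket row.
row = padded[0]
event_dur = row["event_offset_sec"] - row["event_onset_sec"]
print(f"event duration {event_dur:.2f}s, context padding {row['context_pad_sec']:.1f}s")
```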
License
CC-BY-4.0. External tools used during preprocessing (e.g., Whisper/WhisperSeg, SAM2, ByteTrack, YOLOX-Pose) retain their original licenses.
Citation
If you use V33DA or V33DA++, please cite the V33DA paper:
@inproceedings{basha2026v33da,
title={Who Called? V33DA: A Multimodal Benchmark for Spatial Vocal Attribution in Social Zebra Finches},
author={Basha, Maris and others},
booktitle={NeurIPS},
year={2026}
}