Modalities: Image, Text
Format: parquet
Size: < 1K


Terms and conditions:

The KITScenes dataset is provided to you under a Creative Commons Attribution-NonCommercial 4.0 International Public License (CC BY-NC 4.0), with the additional terms included herein. When you download or use the dataset, you are agreeing to comply with the terms of CC BY-NC 4.0 as applicable, and also agreeing to the dataset terms (listed below). Where these dataset terms conflict with the terms of CC BY-NC 4.0, these dataset terms shall prevail.

Dataset terms:

  • If you use the dataset in your research papers, cite at least one of our publications listed below. If the dataset is used in media, include a link to our website (kitscenes.com).
  • We take steps to protect the privacy of individuals by anonymizing faces and license plates using state-of-the-art anonymization software from BrighterAI. If you would like to request the removal of specific images or data frames from the dataset, please contact info@mrt.kit.edu.
  • We reserve all rights not explicitly granted to you. The dataset is provided as is, and you assume full responsibility for any risk arising from its use.

Publications:

  • Wagner et al.: LongTail Driving Scenarios with Reasoning Traces: The KITScenes LongTail Dataset. arXiv, 2026


KITScenes LongTail Dataset

In real-world domains such as self-driving, generalization to rare scenarios remains a fundamental challenge. To address this, we introduce a new dataset designed for end-to-end driving that focuses on long-tail driving events. We provide multi-view video data, trajectories, high-level instructions, and detailed reasoning traces, facilitating in-context learning and few-shot generalization. The resulting benchmark for multimodal models, such as VLMs and VLAs, goes beyond safety and comfort metrics by evaluating instruction following and semantic coherence between model outputs. The multilingual reasoning traces in English 🇺🇸, Spanish 🇪🇸, and Chinese 🇨🇳 are from domain experts with diverse cultural backgrounds. Thus, our dataset is a unique resource for studying how different forms of reasoning affect driving competence.

KITScenes-LongTail

Scenarios

We collected our data over the course of two years, beginning in late 2023. Our recordings include urban and suburban environments, as well as highways (the main locations are Karlsruhe, Heidelberg, Mannheim, and the Black Forest). We adjusted our routes to include many construction zones and intersections. In particular, we filtered for rare events such as adverse weather conditions (heavy rain, snow, fog), road closures, and accidents. Consequently, our dataset encompasses scenarios that diverge from nominal data distributions (i.e., long-tail scenarios). Overall, our dataset contains one thousand 9-second scenarios that are divided into three splits: train (500), test (400), and validation (100).

Figure: distribution of scenario types.

In addition to specifically selected challenging scenarios, adverse weather, and construction zones, we use the Pareto principle to determine further long-tail data. Specifically, we use the well-established nuScenes dataset (Caesar et al., 2020) as a reference and rank-frequency plots with an 80% cumulative-frequency threshold to define long-tail data. In nuScenes, approx. 88% of the scenarios are recorded during the day; thus, nighttime scenarios are long-tail data. For maneuver types, driving straight and regular turns account for approx. 90% of nuScenes. Therefore, overtaking and lane changing are part of the remaining long tail. As an exception, we also include nominal driving at intersections to better evaluate instruction following, since there are more viable trajectories than in most long-tail scenarios.
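The Pareto-style cutoff described above can be sketched in a few lines. This is an illustrative reconstruction, not the released tooling; the function name and toy label counts are our own:

```python
from collections import Counter

def long_tail_categories(labels, threshold=0.8):
    """Rank categories by frequency; categories beyond the cumulative-
    frequency threshold (Pareto principle) form the long tail."""
    counts = Counter(labels)
    total = sum(counts.values())
    cumulative = 0.0
    tail = []
    for category, count in counts.most_common():
        if cumulative >= threshold:
            tail.append(category)
        cumulative += count / total
    return tail

# Toy example mirroring the day/night split: "day" dominates,
# so "night" falls in the long tail.
labels = ["day"] * 88 + ["night"] * 12
print(long_tail_categories(labels))  # → ['night']
```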

Our dataset contains multi-view video data with a 360° horizontal field of view (FoV) and six viewing angles (see (a) to (f) in the video below). Furthermore, we perform frame-wise image stitching (see Fig. 3(g) in our paper). Our stitching method introduces gradual image warping to generate 360° views.
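The gradual warping used for stitching is described in the paper; as a rough stand-in, the idea of smoothly transitioning between adjacent views can be sketched with a linear alpha blend over the overlap region. The function below is a simplified illustration, not the dataset's actual stitching method:

```python
import numpy as np

def blend_adjacent(left, right, overlap):
    """Blend two horizontally adjacent views with a linear alpha ramp
    over `overlap` pixels (a simplified stand-in for gradual warping)."""
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    seam = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.concatenate(
        [left[:, :-overlap], seam, right[:, overlap:]], axis=1
    )

# Two 4x8 RGB "views" with a 2-pixel overlap → one 4x14 panorama strip.
a = np.ones((4, 8, 3))
b = np.zeros((4, 8, 3))
pano = blend_adjacent(a, b, overlap=2)
print(pano.shape)  # (4, 14, 3)
```

Chaining this blend across all six camera views would yield a seamless 360° strip, which is the effect the stitching in the dataset aims for.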

Reasoning Traces

We ask domain experts (i.e., researchers working on self-driving) with diverse cultural backgrounds to label reasoning traces about driving actions. The experts answer five questions related to a given driving scenario and an expert-driven trajectory.

Figure: example prompts for few-shot CoT kinematic inference used in our experiments. We use an XML-like syntax for all prompts.
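An XML-like few-shot prompt of this kind can be assembled programmatically. The tag names below (`example`, `scenario`, `reasoning`, `question`) are illustrative only; the actual prompt schema is defined in the paper:

```python
import xml.etree.ElementTree as ET

def build_prompt(scenario, question, examples):
    """Assemble a few-shot prompt with XML-like tags.
    Tag names are illustrative, not the dataset's actual schema."""
    root = ET.Element("prompt")
    for ex in examples:
        shot = ET.SubElement(root, "example")
        ET.SubElement(shot, "scenario").text = ex["scenario"]
        ET.SubElement(shot, "reasoning").text = ex["reasoning"]
    ET.SubElement(root, "scenario").text = scenario
    ET.SubElement(root, "question").text = question
    return ET.tostring(root, encoding="unicode")

p = build_prompt(
    "Night-time lane change on a wet highway",
    "Is the planned trajectory safe and comfortable?",
    [{"scenario": "Overtaking near a construction zone",
      "reasoning": "Slow down, keep lateral distance, then pass."}],
)
print(p)
```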

Citation

If you use KITScenes LongTail, please cite:

@misc{wagner2026longtaildrivingscenariosreasoning,
  title={LongTail Driving Scenarios with Reasoning Traces: The KITScenes LongTail Dataset},
  author={Royden Wagner and Omer Sahin Tas and Jaime Villa and Felix Hauser and Yinzhe Shen and
          Marlon Steiner and Dominik Strutz and Carlos Fernandez and Christian Kinzig and
          Guillermo S. Guitierrez-Cabello and Hendrik Königshof and Fabian Immel and Richard Schwarzkopf and
          Nils Alexander Rack and Kevin Rösch and Kaiwen Wang and Jan-Hendrik Pauls and Martin Lauer and
          Igor Gilitschenski and Holger Caesar and Christoph Stiller},
  year={2026},
  eprint={2603.23607},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2603.23607},
}

Paper: arXiv:2603.23607

Changelog

  • Mar 26, 2026: Initial release. We add reasoning traces for few-shot reasoning, raw images, and link the arXiv paper.
  • Mar 19, 2026: Preview version: we release the test split and 3 training samples for few-shot evaluations. We will release the val and train splits with reasoning traces, raw images with a higher dynamic range, and stitched images in later versions.
