
Accepted Contribution:

Excavating the interesting: surfacing scenarios in autonomous vehicle training datasets  
Sam Hind (University of Manchester)

Short abstract:

Offering a comparison between two training datasets, this paper considers the role of ‘interestingness’ as an empirical quality sought after by machine vision researchers. In such cases, the search for interestingness leads researchers to design elaborate ways to define, categorize, and quantify it.

Long abstract:

In 2012, the KITTI Vision Benchmark Suite was launched: a training dataset providing real-world benchmarks for the development of autonomous vehicles. Funded through a collaboration between the Karlsruhe Institute of Technology (KIT) in Germany and the Toyota Technological Institute at Chicago (TTI-C) in the USA (hence KIT-TI), the Vision Benchmark Suite provided the foundation for the early ‘benchmark era’ of autonomous driving in the 2010s. Seven years later, in 2019, Waymo, Google/Alphabet’s autonomous vehicle division, launched the Waymo Open Dataset, a dataset indebted to KITTI and other such open-source benchmark projects that established a new ‘incrementalist’ phase of autonomous vehicle development. Tied to annual iterations of their Open Dataset Challenges, Waymo published updates to the dataset in 2021 and 2022, adding unrivalled ‘domain diversity’ to their offering. Together, dataset and challenge constitute Waymo’s vision to ‘platformize’ autonomous driving, mobilizing open data initiatives and logics as the basis for commercial development and locking prospective users into their plug-and-play machine learning (ML) stack. Offering a comparison between these two training datasets, representative of different phases in the development of autonomous vehicles, this paper considers the role of ‘interestingness’ as an empirical quality sought after by machine vision researchers in the compilation of such training datasets. In these cases, the search for interestingness leads researchers to design and test ever more elaborate ways to define the kinds of scenes, situations, and scenarios captured in the training datasets themselves, resulting in the quantification of interestingness as an increasing degree of interaction between agents.

Combined Format Open Panel P116
Experiments with computer vision: transforming and re-envisioning visual data
Session 2, Friday 19 July 2024