Call for Participation
IROS 2020 Workshop on Benchmarking Progress in Autonomous Driving
Autonomous driving has made steady progress over the last decade, but it is unclear how close we are to deploying truly autonomous vehicles at scale. This workshop will bring together a diverse group of researchers and practitioners to make progress towards answering the question of how best to measure and verify advances in the field of robotic driving. The forum will provide an opportunity to showcase recent developments in benchmarking tools and publicly available, diverse datasets, and to identify open challenges. The program will combine invited and contributed talks with interactive discussions to provide an atmosphere for discourse on how best to evaluate progress in autonomous driving.
We welcome contributions from a broad range of areas related to the development of methods to assess progress in autonomous driving. These include datasets and environments for training and evaluation, the tools that facilitate the generation of these datasets and environments, and the different metrics used for evaluation. Identifying appropriate benchmarks for embodied agents involves a trade-off between specificity and generality. Abstract metrics and general-purpose datasets have the advantage of being applicable to the broader research community; however, this comes at the expense of providing little insight into the specific functionality of the methods being evaluated and their relative, task-specific performance. On the other hand, highly focused performance measures and datasets, such as those that involve common hardware platforms or subsystems, require careful thought to ensure that results generalize beyond the limits of the controlled evaluation.
Topics of interest include, but are not limited to:
- Simulation-based benchmarks
- Evaluating the value of simulation environments
- Real-world datasets for training and evaluation
- Shared-hardware benchmarks
- Robotics reproducibility
- Evaluation metrics
- Out-of-the-box ideas for benchmarking embodied systems
Important dates
- Abstract/paper submission deadline: TBA
- Abstract/paper notification: TBA
- Full-day workshop: TBA
We invite participants to submit extended abstracts or full papers that describe recent or ongoing research. We encourage authors to accompany their submissions with a video that describes or demonstrates their work. Authors of accepted abstracts/papers will have the opportunity to disseminate their work through an oral spotlight presentation. Papers (maximum six pages, excluding references) and abstracts (maximum two pages, excluding references) should be in PDF format and adhere to the ICRA paper format. Note that reviews will not be double-blind, and submissions should include the author names and affiliations.
Papers, abstracts, and supplementary materials can be submitted by logging into the workshop’s CMT site.
Best submission awards
There will be awards for top submissions. Details to come.
Organizers
Liam Paull: Assistant Professor, University of Montreal
Andrea Censi: Deputy Director for the Chair of Dynamic Systems and Control, ETH Zurich
Jacopo Tani: Research Scientist, ETH Zurich
Sahika Genc: Senior Applied Scientist, Amazon AI
Sunil Mallya: Principal Deep Learning Scientist, Amazon AI