...

Robotics and Embodied AI Lab (REAL)

The Robotics and Embodied AI Lab (REAL) is a research lab in DIRO at the Université de Montréal and is also affiliated with Mila. REAL is dedicated to making generalist robots and other embodied agents.

We are always looking for talented students to join us as full-time graduate students or visitors. To learn more, click the link below.

Learn more
News
December 05, 2020
Krishna won an NVIDIA fellowship for 2021-22. Congratulations!

November 30, 2020
We released gradslam - a differentiable dense SLAM framework for deep learning. Check it out!

October 30, 2020
We organized an IROS workshop on benchmarking progress in autonomous driving.

October 15, 2020
Check out our new NeurIPS 2020 oral paper La-MAML: Look-Ahead Meta-Learning for Continual Learning [Code], [Short Video].

October 10, 2020
Two papers accepted to NeurIPS 2020 (one of them an oral, top 1.1%). Congratulations, Gunshi and Ruixiang!

June 30, 2020
Gunshi Gupta successfully completes her M.Sc. and joins Wayve as a deep learning researcher!

June 05, 2020
Our paper [MapLite: Autonomous intersection navigation without detailed prior maps] was selected as the best Robotics and Automation Letters (RAL) paper of 2019! Check it out here. And here’s a short video abstract.

More news …
Projects

f-Cal - Calibrated aleatoric uncertainty estimation from neural networks for robot perception

f-Cal is a calibration method for probabilistic regression networks. Typical Bayesian neural networks have been shown to be overconfident in their predictions, and reliable, calibrated uncertainty estimates are critical if those predictions are to be used in downstream tasks. f-Cal is a straightforward loss function that can be employed to train any probabilistic neural regressor and obtain calibrated uncertainty estimates.
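
As a rough sketch of what training such a probabilistic regressor can look like, the snippet below trains a network that outputs a mean and a variance per target with a Gaussian negative log-likelihood plus a simple penalty on the normalized residuals. The exact calibration term used by f-Cal is described in the paper; the one below is only an illustrative stand-in.

    import torch
    import torch.nn as nn

    class ProbabilisticRegressor(nn.Module):
        """Small MLP that predicts a mean and a log-variance per target."""
        def __init__(self, in_dim, hidden=64):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.mean_head = nn.Linear(hidden, 1)
            self.logvar_head = nn.Linear(hidden, 1)

        def forward(self, x):
            h = self.backbone(x)
            return self.mean_head(h), self.logvar_head(h)

    def calibrated_regression_loss(mu, logvar, y, calib_weight=1.0):
        # Gaussian negative log-likelihood (up to constants).
        var = logvar.exp()
        nll = 0.5 * (logvar + (y - mu) ** 2 / var).mean()
        # Simplified calibration penalty: push the batch of normalized
        # residuals z = (y - mu) / sigma toward zero mean and unit variance.
        # The published f-Cal objective uses a distribution-matching
        # constraint instead; this term is only an illustrative stand-in.
        z = (y - mu) / var.sqrt()
        calib = z.mean() ** 2 + (z.var() - 1.0) ** 2
        return nll + calib_weight * calib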


Inverse Variance Reinforcement Learning

Improving sample efficiency in deep reinforcement learning by mitigating the impact of heteroscedastic noise in the bootstrapped target using uncertainty estimation.
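
The sketch below shows only the core weighting idea, not the full method: assuming some estimator (for example, an ensemble of target networks) provides a per-sample variance for the bootstrapped target, each TD error is down-weighted in proportion to that variance.

    import torch

    def inverse_variance_td_loss(q_pred, td_target, target_var, eps=1e-6):
        """Weight each sample's squared TD error by the inverse of the
        estimated variance of its bootstrapped target, so noisier targets
        contribute less to the update. Weights are normalized per batch."""
        weights = 1.0 / (target_var + eps)
        weights = weights / weights.sum()
        return (weights * (q_pred - td_target) ** 2).sum()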


Lifelong Topological Visual Navigation

A learning-based topological visual navigation method with graph update strategies that improves lifelong navigation performance over time.


Taskography - Evaluating robot task planning over large 3D scene graphs

Taskography is the first large-scale robotic task planning benchmark over 3D scene graphs (3DSGs). While most benchmarking efforts in this area focus on vision-based planning, we systematically study symbolic planning to decouple planning performance from visual representation learning.


gradSim

gradSim is a framework that overcomes the dependence on 3D supervision by leveraging differentiable multiphysics simulation and differentiable rendering to jointly model the evolution of scene dynamics and image formation.

Collaborators:
  • Miles Macklin
  • Vikram Voleti
  • Linda Petrini
  • Martin Weiss
  • Jerome Parent-Levesque
  • Kevin Xie
  • Kenny Erleben
  • Florian Shkurti
  • Derek Nowrouzezahrai
  • Sanja Fidler

gradslam

gradslam is an open-source framework providing differentiable building blocks for simultaneous localization and mapping (SLAM) systems. We enable the use of dense SLAM subsystems from the comfort of PyTorch.
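
A minimal usage sketch, following the quickstart pattern from the gradslam documentation (the tensor shapes and exact constructor signatures are assumptions based on that quickstart and may have changed across releases):

    import torch
    from gradslam import RGBDImages
    from gradslam.slam import PointFusion

    # One sequence of 4 RGB-D frames, channels-last, with per-frame poses.
    B, L, H, W = 1, 4, 240, 320
    colors = torch.rand(B, L, H, W, 3)
    depths = torch.rand(B, L, H, W, 1)
    intrinsics = torch.eye(4).view(1, 1, 4, 4).repeat(B, 1, 1, 1)
    poses = torch.eye(4).view(1, 1, 4, 4).repeat(B, L, 1, 1)

    rgbdimages = RGBDImages(colors, depths, intrinsics, poses)

    # PointFusion is one of the differentiable dense SLAM pipelines provided;
    # the map-building step stays differentiable end to end.
    slam = PointFusion(device="cpu")
    pointclouds, recovered_poses = slam(rgbdimages)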


La-MAML

Look-ahead meta-learning for continual learning

Collaborators:
  • Karmesh Yadav

Active Domain Randomization

Making sim-to-real transfer more efficient

Collaborators:
  • Chris Pal

Self-supervised visual odometry estimation

A self-supervised deep network for visual odometry estimation from monocular imagery.


Deep Active Localization

Learned active localization, implemented on “real” robots.

Collaborators:
  • Keehong Seo

All projects…

Department of Computer Science and Operations Research | Université de Montréal | Mila