Website / Schedule:
https://udel.edu/~ghuang/icra21-vins-workshop/
As cameras and IMUs become ubiquitous, visual-inertial navigation systems (VINS) that provide high-precision 3D motion estimation hold great potential in a wide range of applications, from augmented reality (AR) and unmanned aerial vehicles (UAVs) to autonomous driving, in part because of the complementary sensing capabilities and the decreasing cost and size of these sensors. While visual-inertial navigation, alongside SLAM, has witnessed tremendous progress in the past decade, certain critical aspects of visual-inertial system design remain poorly explored, greatly hindering the widespread deployment of these systems in practice. For example, many VINS algorithms are not yet robust to high dynamics and poor lighting conditions; they are not yet accurate enough for long-term, large-scale operation, particularly in life-critical scenarios; and they are unable to provide the semantic and cognitive understanding needed to support high-level decision making. This workshop brings together researchers in robotics, computer vision, and AI, from both academia and industry, to share their insights on the research and development of VINS. The goal of this workshop is to present the latest breakthroughs and cutting-edge research on visual-inertial navigation and beyond, to open discussion of technical challenges and future research directions for the community, and to identify new applications of this emerging technology.
00:29:25 Paul Huang (UD) – Welcome and Introduction
00:39:10 Patrick Geneva (UD) – Visual-Inertial Navigation Systems: An Introduction
01:43:08 Ping Tan (Alibaba) – Visual localization and dense mapping
02:22:45 Stefan Leutenegger (TUM) – Visual-inertial SLAM and Spatial AI for mobile robots
03:14:21 Giuseppe Loianno (NYU) – Resilient Visual Inertial Estimation for Agile Aerial Robots
03:54:46 Kejian Wu (NReal) – VINS and its Applications in Mixed Reality
04:43:58 Maurice Fallon (Oxford) – Multi-Sensor Tracking to enable exploration of visually degraded underground environments
05:29:20 Luca Carlone (MIT) – From Visual Navigation to Real-time Scene Understanding: Open Problems and Opportunities
06:17:26 Jonathan Kelly (UToronto) – A Question of Time: Revisiting Temporal Calibration for Visual-Inertial Navigation
06:57:32 Abraham Bachrach (Skydio) – Robust VIO in the Real World
07:47:00 Chao Guo (Google) – VINS on Unknown Devices
08:31:34 (1) iCalib: Inertial Aided Multi-Sensor Calibration, Yulin Yang, Woosik Lee, Philip Osteen, Patrick Geneva, Xingxing Zuo and Guoquan Huang
08:40:50 (2) RISE: Real-Time Iteration Scheme for Estimation applied to Visual-Inertial Odometry, Philipp Foehn and Davide Scaramuzza
08:49:16 (3) Redesigning SLAM for Arbitrary Multi-Camera Systems, Juichung Kuo, Manasi Muglikar, Zichao Zhang, and Davide Scaramuzza
08:57:36 (4) Periodic SLAM: Using Cyclic Constraints to Improve the Performance of Visual-Inertial SLAM on Legged Robots, Hans Kumar, J. Joe Payne, Matthew Travers, Aaron M. Johnson, and Howie Choset
09:09:15 (5) DSEC: A Stereo Event Camera Dataset for Driving Scenarios, Mathias Gehrig, Willem Aarents, Daniel Gehrig and Davide Scaramuzza
09:19:32 (6) Tightly-coupled Fusion of Global Positional Measurements in Optimization-based Visual-Inertial Odometry, Giovanni Cioffi and Davide Scaramuzza
09:26:00 (7) An Equivariant Filter for Visual Inertial Odometry, Pieter van Goor and Robert Mahony
09:33:50 Paul Huang (UD) – Concluding Remarks