When: Thursday 10th of March, 1pm AEDT
Where: This seminar will be presented online via Zoom, RSVP here.
Speaker: A/Prof. Abhinav Valada (University of Freiburg)
Title: From Scene Understanding to SLAM: Scalable Perception for Automated Driving
Abstract: The past decade has witnessed unprecedented advances in machine learning techniques for various perception tasks. However, these techniques have also increased the dependency on manually annotated labels, which are both environment- and task-specific. Moreover, as different vehicles have different hardware configurations (e.g., sensors, modalities, viewpoints), transferring these learned modules across platforms has become even more challenging. In this talk, I will present my efforts in addressing these challenges. Specifically, I will discuss three fundamental aspects of learning at scale: 1) learning multiple diverse tasks simultaneously by sharing knowledge and exploiting complementary cues, 2) learning to adapt tasks across different robots and environments, and 3) learning efficiently with minimal human supervision. These techniques have not only set a new state of the art but have also opened doors to a wide variety of new applications.
Bio: Abhinav Valada is an Assistant Professor and Director of the Robot Learning Lab at the University of Freiburg, Germany. He is a member of the Department of Computer Science, the BrainLinks-BrainTools center, and a founding faculty member of the ELLIS Unit Freiburg. Abhinav is a DFG Emmy Noether AI Fellow, a Scholar of the ELLIS Society, and Co-chair of the IEEE Robotics and Automation Society Technical Committee on Robot Learning. He received his PhD with distinction from the University of Freiburg and his MS in Robotics from The Robotics Institute at Carnegie Mellon University. He co-founded and served as the Director of Operations of Platypus LLC, a company developing autonomous robotic boats, and previously worked at the National Robotics Engineering Center and the Field Robotics Center of Carnegie Mellon University. Abhinav's research lies at the intersection of robotics, machine learning, and computer vision. It focuses on tackling fundamental robot perception, state estimation, and planning problems using learning approaches, with the goal of enabling robots to operate reliably in complex and diverse domains. His Robot Learning group has developed several innovative techniques for scene understanding, state estimation, and autonomous navigation that have defined the state of the art and ranked at the top of benchmarks. His group has also won several major competitions, and many aspects of their research have been prominently featured in wider media such as the Discovery Channel, NBC News, and Business Times.