Seminar: Detecting Actions in Videos Using Less Labelled Data, 20th October, 1pm

When: Thursday 20th of October, 1pm AEDT

Where: This seminar will be presented in person at the Rose Street Seminar area (J04) and online via Zoom; RSVP here.

Speaker: Georgia Markham

Title: Detecting Actions in Videos Using Less Labelled Data

Abstract: Video action detection is an active area of research with applications to performance assessment and monitoring, anomaly and hazard detection, and human-robot interaction, among others. Existing approaches rely on large, labelled datasets which are costly to collect and annotate at scale. This makes them unsuitable for applications where only a small amount of data is available overall, multiple examples of certain action classes are not available, or it is infeasible to manually annotate the amount of data required.

We present ongoing work towards developing methods capable of recognising and segmenting actions in continuous video streams using substantially less labelled data than existing methods require. We propose a novel method that combines concepts from few-shot learning and unsupervised segmentation of time-series data, and discuss preliminary experimental results.

Bio: Georgia is a PhD student with the Rio Tinto Centre for Mine Automation at the Australian Centre for Field Robotics. She received her Bachelor's degrees in Mechatronic Engineering, and in Applied Mathematics and Computer Science, from The University of Sydney in 2021. Her research interests include machine learning with limited or no supervision, applied to advanced computer vision systems.

Contacts

Australian Centre for Robotics
info@acfr.usyd.edu.au