Learning from Unlabeled Videos for Recognition, Prediction, and Control

Deep learning has brought tremendous progress to visual recognition, thanks to large labeled datasets and fast compute. To bring such success into our daily lives, we still need machine intelligence that recognizes hierarchical, compositional human activities and predicts how events unfold over time. These tasks are often too rich to be discretized into categorical labels, or too ambiguous to be labeled manually by humans, making standard supervised deep learning unfit for them.

In this talk, I will introduce several recent works on learning rich semantic and dynamic information from unlabeled videos. The first part of the talk focuses on recognition, where the goal is to learn temporally aware visual representations via self-supervised learning. I will discuss the principles of view construction for contrastive learning, how the vanilla contrastive learning objective loses temporal information, and how to fix it. In the second part of my talk, I will describe our work on predicting key future moments over longer time horizons, using only narrated videos as training signals. Finally, I will show how multimodal representation learning leads to agents that better navigate and interact with their environment by following human instructions.

Speaker Details

Chen Sun is an assistant professor of computer science at Brown University, studying computer vision, machine learning, and artificial intelligence. He is also a staff research scientist at Google Research. Chen received his Ph.D. from the University of Southern California in 2016, advised by Prof. Ram Nevatia, and completed his bachelor's degree in computer science at Tsinghua University in 2011. He did research internships at Google and Facebook.

Date:
Speakers:
Chen Sun
Affiliation:
Computer Science at Brown University

Series: Microsoft Vision+Language Summer Talk Series