Visual Recognition beyond Appearances, and Its Robotic Applications

The goal of computer vision, as coined by Marr, is to develop algorithms that answer "what is where at when" from visual appearance. The speaker, among others, recognizes the importance of studying the underlying entities and relations beyond visual appearance, following an active perception paradigm. This talk presents the speaker's efforts over the last decade, ranging from 1) reasoning beyond appearance for visual question answering and image/video captioning tasks, and their evaluation, through 2) temporal and self-supervised knowledge distillation with incremental knowledge transfer, to 3) their roles in a robotic visual learning framework, demonstrated via a robotic indoor object search task. The talk will also feature the Active Perception Group (APG)'s ongoing projects (NSF RI, NRI, and CPS; DARPA KAIROS; and Arizona IAM) at the ASU School of Computing, Informatics, and Decision Systems Engineering (CIDSE), which address emerging national challenges in the autonomous driving and AI security domains.

List of major papers covered in the talk:

V&L model robustness
ECCV 2020: VQA-LOL: Visual Question Answering under the Lens of Logic
ACL 2021: SMURF: SeMantic and linguistic UndeRstanding Fusion for Caption Evaluation via Typicality Analysis
EMNLP 2020: MUTANT: A Training Paradigm for Out-of-Distribution Generalization in Visual Question Answering
EMNLP 2020: Video2Commonsense: Generating Commonsense Descriptions to Enrich Video Captioning

Robotic object search
CVPR 2021: Hierarchical and Partially Observable Goal-driven Policy Learning with Goals Relational Graph
ICRA 2021/RA-L: Efficient Robotic Object Search via HIEM: Hierarchical Policy Learning with Intrinsic-Extrinsic Modeling

Other teasers:

AI security/GAN attribution
ICLR 2021: Decentralized Attribution of Generative Models
AAAI 2021: Attribute-Guided Adversarial Training for Robustness to Natural Perturbations

Speaker Details

Yezhou Yang is an Assistant Professor at the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, where he directs the ASU Active Perception Group. His primary interests lie in Cognitive Robotics, Computer Vision, and Robot Vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and high-level reasoning over those primitives for intelligent robots. Before joining ASU, Dr. Yang was a Postdoctoral Research Associate at the Computer Vision Lab and the Perception and Robotics Lab of the University of Maryland Institute for Advanced Computer Studies. He is a recipient of the 2011 Qualcomm Innovation Fellowship, the 2018 NSF CAREER Award, and the 2019 Amazon AWS Machine Learning Research Award. He received his Ph.D. from the University of Maryland, College Park, and his B.E. from Zhejiang University, China.

Date:
Speaker:
Yezhou Yang
Affiliation:
School of Computing, Informatics, and Decision Systems Engineering, Arizona State University

Series: Microsoft Vision+Language Summer Talk Series