About
Chung-Ching Lin is a Principal Researcher at Microsoft GenAI.
Before joining Microsoft, Chung-Ching was a Research Staff Member at IBM T.J. Watson Research, where he began his career after earning his Ph.D. from the Georgia Institute of Technology. His research interests span computer vision and machine learning, with particular focus on video understanding, representation learning, and vision and language. In particular, Chung-Ching explores solutions for understanding dynamic video scenes and performing related tasks such as action recognition, event description, instance segmentation, matting, and tracking.
Recent Work:
- Adaptive Human Matting for Dynamic Videos, CVPR 2023
- Neural Voting Field for Camera-Space 3D Hand Pose Estimation, CVPR 2023
- LAVENDER: Unifying Video-Language Understanding As Masked Language Modeling, CVPR 2023
- Equivariant Similarity for Vision-Language Foundation Models, ICCV 2023
- Cross-modal Representation Learning for Zero-shot Action Recognition, CVPR 2022
- SwinBERT: End-to-End Transformers with Sparse Attention for Video Captioning, CVPR 2022
- AdaFuse: Adaptive Temporal Fusion Network for Efficient Action Recognition, ICLR 2021
- VA-RED2: Video Adaptive Redundancy Reduction, ICLR 2021
- Video instance segmentation tracking with a modified VAE architecture, CVPR 2020
- AR-Net: Adaptive frame resolution for efficient action recognition, ECCV 2020
US Patents: 9,400,939; 10,204,291; 10,255,674; 10,217,225; 10,386,409; 10,553,005; 10,755,397; 10,755,404; 11,172,225