{"id":498755,"date":"2018-08-02T05:54:58","date_gmt":"2018-08-02T12:54:58","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=498755"},"modified":"2018-10-28T14:06:50","modified_gmt":"2018-10-28T21:06:50","slug":"eccv-2018","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/eccv-2018\/","title":{"rendered":"Microsoft @ ECCV 2018"},"content":{"rendered":"
Venue: <\/strong>GASTEIG Cultural Center (opens in new tab)<\/span><\/a> Website:<\/strong> ECCV 2018 (opens in new tab)<\/span><\/a><\/p>\n","protected":false},"excerpt":{"rendered":" Microsoft is proud to be a Diamond sponsor of the European Conference on Computer Vision in Munich September 8, 2018 \u2013 September 14, 2018. Come by our booth to chat with our experts, see demos of our latest research and find out about career opportunities with Microsoft.<\/p>\n","protected":false},"featured_media":498773,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"msr_startdate":"2018-09-08","msr_enddate":"2018-09-14","msr_location":"Munich, Germany","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"https:\/\/eccv2018.org\/attending\/registration\/","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"footnotes":""},"research-area":[13562],"msr-region":[239178],"msr-event-type":[197941],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-498755","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-computer-vision","msr-region-europe","msr-event-type-conferences","msr-locale-en_us"],"msr_about":"Venue: <\/strong>GASTEIG Cultural Center<\/a>\r\nRosenheimer Str. 5\r\n81667 Munich\r\nGermany\r\n\r\nWebsite:<\/strong> ECCV 2018<\/a>","tab-content":[{"id":0,"name":"About","content":"Microsoft is proud to be a Diamond sponsor of the European Conference on Computer Vision<\/a> in Munich September 8, 2018 \u2013 September 14, 2018. 
Come by our booth to chat with our experts, see demos of our latest research and find out about career opportunities<\/a> with Microsoft.\r\n Andrew Fitzgibbon<\/a>\r\nSebastian Nowozin<\/a><\/p>\r\n\r\n Alex Hagiopol\r\nAna Anastasijevic\r\nAndrew Fitzgibbon<\/a>\r\nBin Li<\/a>\r\nBin Xiao\r\nChris Aholt\r\nChunyu Wang<\/a>\r\nCuiling Lan<\/a>\r\nErroll Wood\r\nFangyun Wei\r\nJamie Shotton<\/a>\r\nJiaolong Yang<\/a>\r\nJoseph DeGol\r\nKuang-Huei Lee<\/a>\r\nMarc Pollefeys<\/a>\r\nMladen Radojevic\r\nNikola Milosavljevic\r\nNikolaos Karianakis\r\nPatrick Buehler\r\nShivkumar Swaminathan\r\nSudipta Sinha<\/a>\r\nTom Cashman<\/a>\r\nVukasin Rankovic\r\nWenjun Zeng<\/a>\r\nXudong Liu\r\nZhirong Wu\r\nZicheng Liu<\/a><\/p>"},{"id":1,"name":"Tutorials\/Workshops","content":" Marc Pollefeys<\/strong><\/a>, Johannes Sch\u00f6nberger<\/strong>, Andrew Fitzgibbon<\/strong><\/a><\/p>\r\n\r\n Invited talk: Marc Pollefeys<\/strong><\/a><\/p>\r\n\r\n Program chair: Marc Pollefeys<\/strong><\/a><\/p>\r\n\r\n Invited talk: Marc Pollefeys<\/strong><\/a><\/p>\r\n\r\n Invited talk: Andrew Fitzgibbon<\/strong><\/a><\/p>\r\n\r\n Workshop panelist: Andrew Fitzgibbon<\/strong><\/a><\/p>\r\n\r\n Invited talk: Wenjun Zeng<\/strong><\/a><\/p>\r\n\r\n Invited talk and panelist: Marc Pollefeys<\/strong><\/a><\/p>"},{"id":2,"name":"Poster Sessions","content":" Daniel Castro, Sebastian Nowozin<\/strong><\/a><\/p>\r\n\r\n Weixuan Chen, Daniel McDuff<\/strong><\/a><\/p>\r\n\r\n Carl Toft, Erik Stenborg, Lars Hammarstrand, Lucas Brynte, Marc Pollefeys<\/strong><\/a>, Torsten Sattler, Fredrik Kahl<\/p>\r\n \r\n Kuang-Huei Lee<\/strong><\/a>, Xi Chen<\/strong><\/a>, Gang Hua<\/strong><\/a>, Houdong Hu<\/strong>, Xiaodong He<\/p>\r\n\r\n Yiding Liu, Siyu Yang, Bin Li<\/strong><\/a>, Wengang Zhou, Ji-Zheng Xu<\/strong><\/a>, Houqiang Li, Yan Lu<\/strong><\/a><\/p>\r\n\r\n Jieru Mei<\/strong>, Chunyu Wang<\/strong><\/a>, Wenjun Zeng<\/strong><\/a>\r\n<\/strong><\/p>\r\n\r\n Konstantinos-Nektarios 
Lianos, Johannes Sch\u00f6nberger<\/strong>, Marc Pollefeys<\/strong><\/a>, Torsten Sattler<\/p>\r\n\r\n Joseph DeGol<\/strong>, Timothy Bretl, Derek Hoiem<\/p>\r\n \r\n Navaneeth Bodla, Gang Hua<\/strong><\/a>, Rama Chellappa<\/p>\r\n\r\n Xiao Sun<\/strong><\/a>, Bin Xiao, Fangyin Wei, Shuang Liang, Yichen Wei<\/p>\r\n\r\n Dong Li, Zhaofan Qiu, Qi Dai<\/strong><\/a>, Ting Yao, Tao Mei<\/p>\r\n\r\n Nikolaos Karianakis<\/strong>, Zicheng Liu<\/strong><\/a>, Yinpeng Chen<\/strong>, Stefano Soatto<\/p>\r\n\r\n Bin Xiao<\/strong>, Haiping Wu<\/strong>, Yichen Wei<\/strong><\/p>\r\n \r\n Dongqing Zhang<\/strong>, Jiaolong Yang<\/strong><\/a>, Dongqiangzi Ye<\/strong>, Gang Hua<\/strong><\/a><\/p>\r\n\r\n Zhirong Wu<\/strong>, Alexei Efros, Stella Yu<\/p>\r\n \r\n Tianlang Chen, Zhongping Zhang, Quanzeng You<\/strong>, Chen Fang, Zhaowen Wang, Hailin Jin, Jiebo Luo<\/p>\r\n\r\n Pengfei Zhang, Jianru Xue, Cuiling Lan<\/strong><\/a>, Wenjun Zeng<\/strong><\/a>, Zhanning Gao, Nanning Zheng<\/p>\r\n\r\n Sergey Prokudin, Sebastian Nowozin<\/strong><\/a>, Peter Gehler<\/p>\r\n\r\n Renjiao Yi, Chenyang Zhu, Ping Tan, Stephen Lin<\/strong><\/a><\/p>\r\n\r\n Yagiz Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed Elgharib, Marc Pollefeys<\/strong><\/a>, Wojciech Matusik<\/p>\r\n \r\n Yalong Bai, Jianlong Fu<\/strong><\/a>, Tao Mei<\/p>\r\n\r\n Jiayuan Gu, Han Hu<\/strong><\/a>, Liwei Wang, Yichen Wei<\/strong>, Jifeng Dai<\/strong><\/a>\r\n<\/strong><\/p>\r\n\r\n Hai Ci, Chunyu Wang<\/strong><\/a>, Yizhou Wang<\/p>\r\n\r\n Ian Cherabier, Johannes Sch\u00f6nberger<\/strong>, Martin R. 
Oswald, Marc Pollefeys<\/strong><\/a>, Andreas Geiger<\/p>\r\n \r\n Ting Yao<\/strong>, Yingwei Pan, Yehao Li, Tao Mei<\/p>\r\n\r\n Qingnan Fan, Dongdong Chen, Lu Yuan<\/strong><\/a>, Gang Hua<\/strong><\/a>, Nenghai Yu, Baoquan Chen<\/p>\r\n\r\n Johannes Sch\u00f6nberger, Sudipta Sinha<\/strong><\/a>, Marc Pollefeys<\/strong><\/a><\/p>\r\n\r\n Yumin Suh, Jingdong Wang<\/strong><\/a>, Kyoung Mu Lee<\/p>\r\n \r\n Mohammed Fathy, Quoc-Huy Tran, Zeeshan Zia<\/strong>, Paul Vernaza, Manmohan Chandraker<\/p>\r\n\r\n Benjamin Hepp, Debadeepta Dey<\/strong><\/a>, Sudipta Sinha<\/strong><\/a>, Ashish Kapoor<\/strong><\/a>, Neel Joshi, Otmar Hilliges<\/p>\r\n\r\n Zheng Shou, Hang Gao, Lei Zhang<\/strong><\/a>, Kazuyuki Miyazawa, Shih-Fu Chang<\/p>"},{"id":3,"name":"Donate For Good","content":"
\nRosenheimer Str. 5
\n81667 Munich
\nGermany<\/p>\nCommittee Chairs<\/h2>\r\n
Area Chair<\/h3>\r\n
Microsoft Attendees<\/h2>\r\n
Saturday AM | Theresianum 606\r\nHoloLens as a tool for computer vision research<\/a><\/h3>\r\n
Saturday PM | Theresianum 601\r\nVision for XR<\/a><\/h3>\r\n
Sunday AM | N1179\r\n3D Reconstruction Meets Semantics (3DRMS)<\/a><\/h3>\r\n
Sunday PM | Audimax 0980\r\n360\u00b0 Perception and Interaction<\/a><\/h3>\r\n
Sunday PM | Theresianum 606\r\nObserving and Understanding Hands in Action (HANDS2018)<\/a><\/h3>\r\n
Sunday PM | N1090ZG\r\nWomen in Computer Vision<\/a><\/h3>\r\n
Sunday PM | Theresianum 602\r\n1st Person in Context (PIC) Workshop and Challenge<\/a><\/h3>\r\n
Sunday All Day | 1200\r\nApolloScape: Vision-based Navigation for Autonomous Driving<\/a><\/h3>\r\n
Monday, September 9, 2018 | 10:00 AM | 1A<\/h2>\r\n
From Face Recognition to Models of Identity: A Bayesian Approach to Learning about Unknown Identities from Unsupervised Data<\/a><\/h3>\r\n
DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks<\/a><\/h3>\r\n
Semantic Match Consistency for Long-Term Visual Localization<\/a><\/h3>\r\n
Monday, September 9, 2018 | 4:00 PM | 1B<\/h2>\r\n
Stacked Cross Attention for Image-Text Matching<\/a><\/h3>\r\n
Affinity Derivation and Graph Merge for Instance Segmentation<\/a><\/h3>\r\n
Online Dictionary Learning for Approximate Archetypal Analysis<\/a><\/h3>\r\n
VSO: Visual Semantic Odometry<\/a><\/h3>\r\n
Improved Structure from Motion Using Fiducial Marker Matching<\/a><\/h3>\r\n
Tuesday, September 10, 2018 | 10:00 AM | 2A<\/h2>\r\n
Semi-supervised FusedGAN for Conditional Image Generation<\/a><\/h3>\r\n
Integral Human Pose Regression<\/a><\/h3>\r\n
Recurrent Tubelet Proposal and Recognition Networks for Action Detection<\/a><\/h3>\r\n
Reinforced Temporal Attention and Split-Rate Transfer for Depth-Based Person Re-identification<\/a><\/h3>\r\n
Simple Baselines for Human Pose Estimation and Tracking<\/a><\/h3>\r\n
Tuesday, September 10, 2018 | 4:00 PM | 2B<\/h2>\r\n
Optimized Quantization for Highly Accurate and Compact DNNs<\/a><\/h3>\r\n
Improving Embedding Generalization via Scalable Neighborhood Component Analysis<\/a><\/h3>\r\n
Wednesday, September 11, 2018 | 10:00 AM | 3A<\/h2>\r\n
\"Factual\" or \"Emotional\": Stylized Image Captioning with Adaptive Learning and Attention<\/a><\/h3>\r\n
Adding Attentiveness to the Neurons in Recurrent Neural Networks<\/a><\/h3>\r\n
Deep Directional Statistics: Pose Estimation with Uncertainty Quantification<\/a><\/h3>\r\n
Faces as Lighting Probes via Unsupervised Deep Highlight Extraction<\/a><\/h3>\r\n
A Dataset of Flash and Ambient Illumination Pairs from the Crowd<\/a><\/h3>\r\n
Wednesday, September 11, 2018 | 2:30 PM | 3B<\/h2>\r\n
Deep Attention Neural Tensor Network for Visual Question Answering<\/a><\/h3>\r\n
Learning Region Features for Object Detection<\/a><\/h3>\r\n
Video Object Segmentation by Learning Location-Sensitive Embeddings<\/a><\/h3>\r\n
Learning Priors for Semantic 3D Reconstruction<\/a><\/h3>\r\n
Thursday, September 12, 2018 | 10:00 AM | 4A<\/h2>\r\n
Exploring Visual Relationship for Image Captioning<\/a><\/h3>\r\n
Learning to Learn Parameterized Image Operators<\/a><\/h3>\r\n
Learning to Fuse Proposals from Multiple Scanline Optimizations in Semi-Global Matching<\/a><\/h3>\r\n
Part-Aligned Bilinear Representations for Person Re-Identification<\/a><\/h3>\r\n
Thursday, September 12, 2018 | 4:00 PM | 4B<\/h2>\r\n
Hierarchical Metric Learning and Matching for 2D and 3D Geometric Correspondences<\/a><\/h3>\r\n
Learn-to-Score: Efficient 3D Scene Exploration by Predicting View Utility<\/a><\/h3>\r\n
AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos<\/a><\/h3>\r\n