{"id":498755,"date":"2018-08-02T05:54:58","date_gmt":"2018-08-02T12:54:58","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=498755"},"modified":"2018-10-28T14:06:50","modified_gmt":"2018-10-28T21:06:50","slug":"eccv-2018","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/eccv-2018\/","title":{"rendered":"Microsoft @ ECCV 2018"},"content":{"rendered":"

Venue: GASTEIG Cultural Center
Rosenheimer Str. 5
81667 Munich
Germany

Website: ECCV 2018 (https://eccv2018.org/)
Registration: https://eccv2018.org/attending/registration/

About

Microsoft is proud to be a Diamond sponsor of the European Conference on Computer Vision (ECCV) in Munich, September 8–14, 2018. Come by our booth to chat with our experts, see demos of our latest research, and find out about career opportunities with Microsoft.

Committee Chairs

Area Chairs
Andrew Fitzgibbon
Sebastian Nowozin

Microsoft Attendees

Alex Hagiopol
Ana Anastasijevic
Andrew Fitzgibbon
Bin Li
Bin Xiao
Chris Aholt
Chunyu Wang
Cuiling Lan
Erroll Wood
Fangyun Wei
Jamie Shotton
Jiaolong Yang
Joseph DeGol
Kuang-Huei Lee
Marc Pollefeys
Mladen Radojevic
Nikola Milosavljevic
Nikolaos Karianakis
Patrick Buehler
Shivkumar Swaminathan
Sudipta Sinha
Tom Cashman
Vukasin Rankovic
Wenjun Zeng
Xudong Liu
Zhirong Wu
Zicheng Liu

Tutorials/Workshops

Saturday AM | Theresianum 606
HoloLens as a tool for computer vision research
Marc Pollefeys, Johannes Schönberger, Andrew Fitzgibbon

Saturday PM | Theresianum 601
Vision for XR
Invited talk: Marc Pollefeys

Sunday AM | N1179
3D Reconstruction Meets Semantics (3DRMS)
Program chair: Marc Pollefeys

Sunday PM | Audimax 0980
360° Perception and Interaction
Invited talk: Marc Pollefeys

Sunday PM | Theresianum 606
Observing and Understanding Hands in Action (HANDS2018)
Invited talk: Andrew Fitzgibbon

Sunday PM | N1090ZG
Women in Computer Vision
Workshop panelist: Andrew Fitzgibbon

Sunday PM | Theresianum 602
1st Person in Context (PIC) Workshop and Challenge
Invited talk: Wenjun Zeng

Sunday All Day | 1200
ApolloScape: Vision-based Navigation for Autonomous Driving
Invited talk and panelist: Marc Pollefeys

Poster Sessions

Monday, September 10, 2018 | 10:00 AM | 1A

From Face Recognition to Models of Identity: A Bayesian Approach to Learning about Unknown Identities from Unsupervised Data
Daniel Castro, Sebastian Nowozin

DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks
Weixuan Chen, Daniel McDuff

Semantic Match Consistency for Long-Term Visual Localization
Carl Toft, Erik Stenborg, Lars Hammarstrand, Lucas Brynte, Marc Pollefeys, Torsten Sattler, Fredrik Kahl

Monday, September 10, 2018 | 4:00 PM | 1B

Stacked Cross Attention for Image-Text Matching
Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, Xiaodong He

Affinity Derivation and Graph Merge for Instance Segmentation
Yiding Liu, Siyu Yang, Bin Li, Wengang Zhou, Jizheng Xu, Houqiang Li, Yan Lu

Online Dictionary Learning for Approximate Archetypal Analysis
Jieru Mei, Chunyu Wang, Wenjun Zeng

VSO: Visual Semantic Odometry
Konstantinos-Nektarios Lianos, Johannes Schönberger, Marc Pollefeys, Torsten Sattler

Improved Structure from Motion Using Fiducial Marker Matching
Joseph DeGol, Timothy Bretl, Derek Hoiem

Tuesday, September 11, 2018 | 10:00 AM | 2A

Semi-supervised FusedGAN for Conditional Image Generation
Navaneeth Bodla, Gang Hua, Rama Chellappa

Integral Human Pose Regression
Xiao Sun, Bin Xiao, Fangyin Wei, Shuang Liang, Yichen Wei

Recurrent Tubelet Proposal and Recognition Networks for Action Detection
Dong Li, Zhaofan Qiu, Qi Dai, Ting Yao, Tao Mei

Reinforced Temporal Attention and Split-Rate Transfer for Depth-Based Person Re-identification
Nikolaos Karianakis, Zicheng Liu, Yinpeng Chen, Stefano Soatto

Simple Baselines for Human Pose Estimation and Tracking
Bin Xiao, Haiping Wu, Yichen Wei

Tuesday, September 11, 2018 | 4:00 PM | 2B

Optimized Quantization for Highly Accurate and Compact DNNs
Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, Gang Hua

Improving Embedding Generalization via Scalable Neighborhood Component Analysis
Zhirong Wu, Alexei Efros, Stella Yu

Wednesday, September 12, 2018 | 10:00 AM | 3A

"Factual" or "Emotional": Stylized Image Captioning with Adaptive Learning and Attention
Tianlang Chen, Zhongping Zhang, Quanzeng You, Chen Fang, Zhaowen Wang, Hailin Jin, Jiebo Luo

Adding Attentiveness to the Neurons in Recurrent Neural Networks
Pengfei Zhang, Jianru Xue, Cuiling Lan, Wenjun Zeng, Zhanning Gao, Nanning Zheng

Deep Directional Statistics: Pose Estimation with Uncertainty Quantification
Sergey Prokudin, Sebastian Nowozin, Peter Gehler

Faces as Lighting Probes via Unsupervised Deep Highlight Extraction
Renjiao Yi, Chenyang Zhu, Ping Tan, Stephen Lin

A Dataset of Flash and Ambient Illumination Pairs from the Crowd
Yagiz Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed Elgharib, Marc Pollefeys, Wojciech Matusik

Wednesday, September 12, 2018 | 2:30 PM | 3B

Deep Attention Neural Tensor Network for Visual Question Answering
Yalong Bai, Jianlong Fu, Tao Mei

Learning Region Features for Object Detection
Jiayuan Gu, Han Hu, Liwei Wang, Yichen Wei, Jifeng Dai

Video Object Segmentation by Learning Location-Sensitive Embeddings
Hai Ci, Chunyu Wang, Yizhou Wang

Learning Priors for Semantic 3D Reconstruction
Ian Cherabier, Johannes Schönberger, Martin R. Oswald, Marc Pollefeys, Andreas Geiger

Thursday, September 13, 2018 | 10:00 AM | 4A

Exploring Visual Relationship for Image Captioning
Ting Yao, Yingwei Pan, Yehao Li, Tao Mei

Learning to Learn Parameterized Image Operators
Qingnan Fan, Dongdong Chen, Lu Yuan, Gang Hua, Nenghai Yu, Baoquan Chen

Learning to Fuse Proposals from Multiple Scanline Optimizations in Semi-Global Matching
Johannes Schönberger, Sudipta Sinha, Marc Pollefeys

Part-Aligned Bilinear Representations for Person Re-Identification
Yumin Suh, Jingdong Wang, Kyoung Mu Lee

Thursday, September 13, 2018 | 4:00 PM | 4B

Hierarchical Metric Learning and Matching for 2D and 3D Geometric Correspondences
Mohammed Fathy, Quoc-Huy Tran, Zeeshan Zia, Paul Vernaza, Manmohan Chandraker

Learn-to-Score: Efficient 3D Scene Exploration by Predicting View Utility
Benjamin Hepp, Debadeepta Dey, Sudipta Sinha, Ashish Kapoor, Neel Joshi, Otmar Hilliges

AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos
Zheng Shou, Hang Gao, Lei Zhang, Kazuyuki Miyazawa, Shih-Fu Chang

Donate For Good

Join us in our donation campaign, #MSFTResearchGives.

Help us choose which organization should receive this donation by voting on our Twitter poll @MSFTResearch.

In lieu of purchasing thousands of giveaway items, we have decided to reduce our environmental footprint and donate to one of the following organizations:

\"Code.org\u00ae<\/a>Code.org<\/h2>\r\nCode.org\u00ae is a non-profit dedicated to expanding access to computer science, and increasing participation by women and underrepresented minorities. Their vision is that every student in every school should have the opportunity to learn computer science, just like biology, chemistry or algebra. Code.org organizes the annual Hour of Code<\/a> campaign which has engaged 10% of all students in the world, and provides the leading curriculum for K-12 computer science in the largest school districts in the United States.\r\n\r\n
\r\n\r\n

\"FIRST<\/a>FIRST<\/em><\/h2>\r\nFIRST<\/i> (F<\/b>or Inspiration and R<\/b>ecognition of S<\/b>cience and T<\/b>echnology) was founded in 1989 to inspire young people's interest and participation in science and technology. Based in Manchester, NH, the 501(c)(3) not-for-profit public charity designs accessible, innovative programs that motivate young people to pursue education and career opportunities in science, technology, engineering, and math, while building self-confidence, knowledge, and life skills.\r\n\r\nFIRST<\/em> is More Than Robots<\/b>. FIRST<\/i> participation is proven to encourage students to pursue education and careers in STEM-related fields, inspire them to become leaders and innovators, and enhance their 21st century work-life skills. Read more about the Impact of FIRST<\/i>.\r\n\r\nLearn more at www.firstinspires.org<\/a>.\r\n\r\n
\r\n\r\n

\"Girls<\/a>Girls Who Code<\/h2>\r\nGirls Who Code<\/a> focuses on closing the gender gap in technology. Through the National Girls Who Code Clubs program, Girls Who Code offers a free after-school program for 6th-12th graders that provides computer science instruction along with a community of supportive peers and role models. With support from Microsoft, Girls Who Code will expand the program in cities and rural communities. The support will enable greater engagement within these communities, support of volunteer instructors, a refresh of curriculum, tools, and program evaluation as well as program enrichment opportunities, such as field trips, guest speakers, and meet-ups.\r\n\r\n
"}],"msr_startdate":"2018-09-08","msr_enddate":"2018-09-14","msr_event_time":"","msr_location":"Munich, Germany","msr_event_link":"https:\/\/eccv2018.org\/attending\/registration\/","msr_event_recording_link":"","msr_startdate_formatted":"September 8, 2018","msr_register_text":"Watch now","msr_cta_link":"https:\/\/eccv2018.org\/attending\/registration\/","msr_cta_text":"Watch now","msr_cta_bi_name":"Event Register","featured_image_thumbnail":"\"selective","event_excerpt":"Microsoft is proud to be a Diamond sponsor of the European Conference on Computer Vision in Munich September 8, 2018 \u2013 September 14, 2018. Come by our booth to chat with our experts, see demos of our latest research and find out about career opportunities with Microsoft.","msr_research_lab":[199561,199565,199560],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[633438,633447,707086,807619],"related-videos":[],"related-posts":[504137],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/498755"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":2,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/498755\/revisions"}],"predecessor-version":[{"id":504533,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/498755\/revisions\/504533"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/498773"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=498755"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=498755"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=498755"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=498755"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=498755"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=498755"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=498755"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=498755"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=498755"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}