{"id":488849,"date":"2018-06-01T12:30:00","date_gmt":"2018-06-01T19:30:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=488849"},"modified":"2018-06-22T09:22:03","modified_gmt":"2018-06-22T16:22:03","slug":"microsoft-cvpr-2018","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/microsoft-cvpr-2018\/","title":{"rendered":"Microsoft @ CVPR 2018"},"content":{"rendered":"

Venue:<\/strong> Calvin L. Rampton Salt Palace Convention Center (opens in new tab)<\/span><\/a><\/p>\n

Website:<\/strong> CVPR 2018 (opens in new tab)<\/span><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"

Venue: Calvin L. Rampton Salt Palace Convention Center Website: CVPR 2018<\/p>\n","protected":false},"featured_media":489278,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"msr_startdate":"2018-06-18","msr_enddate":"2018-06-22","msr_location":"Salt Lake City, Utah","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"http:\/\/cvpr2018.thecvf.com\/attend\/registration","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":true,"msr_private_event":false,"footnotes":""},"research-area":[13562],"msr-region":[197900],"msr-event-type":[197941],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-488849","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-computer-vision","msr-region-north-america","msr-event-type-conferences","msr-locale-en_us"],"msr_about":"Venue:<\/strong> Calvin L. Rampton Salt Palace Convention Center<\/a>\r\n\r\nWebsite:<\/strong> CVPR 2018<\/a>","tab-content":[{"id":0,"name":"About","content":"Microsoft is proud to be a diamond sponsor of the Conference on Computer Vision and Pattern Recognition (CVPR<\/a>) June 18 \u2013 22 in Salt Lake City, Utah. Please visit us at booth 537 to chat with our experts, see demos of our latest research and find out about career opportunities with Microsoft.\r\n

Program Committee members<\/h2>\r\nMarc Pollefeys \u2013 Robust Vision Challenge Organizer\r\nSing Bing Kang<\/a>, Stephen Lin, Sebastian Nowozin, and Wenjun Zeng \u2013\u00a0NTIRE 2018 Program Committee\r\nGang Hua<\/a>\u00a0\u2013 PBVS 2018 Program Committee\r\nDaniel McDuff<\/a>\u00a0\u2013 CVPM 2018 Program Co-Chair\r\nTimnit Gebru \u2013 CV-COPS 2018 Program Committee\r\nZhengyou Zhang \u2013 Sight and Sound Workshop Organizer\r\n

Tutorials<\/h2>\r\n

New from HoloLens: Research Mode<\/a>\r\nTuesday | 1:30 \u2013 2:50 | Room 151 - ABCG<\/h4>\r\n

Marc Pollefeys<\/strong>, Pawel Olszta<\/strong><\/p>\r\n

Software Engineering in Computer Vision Systems<\/a>\r\nFriday | 8:30 \u2013 12:30 | Ballroom C<\/h4>\r\n

David Doria, Tim Franklin<\/strong>, Matt Turek, Jan Ernst, Wei Xia, Stephen Miller, Ben Kadlec<\/p>\r\n

Workshops<\/h2>\r\n

The Fifth Workshop on Fine-Grained Visual Categorization<\/a>\r\nFriday | 9:00 \u2013 5:00 | Room 151 A-C<\/h4>\r\n

Why FGVC5 Folks Should be Interested in the Microsoft AI for Earth Program\r\n9:45 \u2013 10:00\r\nDan Morris<\/a><\/p>\r\n

Microsoft attendees<\/h2>\r\nAijun Bai\r\nLuca Ballan\r\nMi\u0107o Banovi\u0107\r\nFederica Bogo<\/a>\r\nBogdan Burlacu\r\nNick Burton\r\nIshani Chakraborty\r\nTemo Chalasani\r\nDong Chen<\/a>\r\nXi Chen\r\nArti Chhajta\r\nJohn Corring\r\nJifeng Dai<\/a>\r\nQi Dai\r\nMandar Dixit\r\nLiang Du\r\nNan Duan<\/a>\r\nXin Duan\r\nGoran Dubajic\r\nAndrew Fitzgibbon<\/a>\r\nDinei Florencio<\/a>\r\nJianlong Fu<\/a>\r\nSean Goldberg\r\nYandong Guo\r\nHan Hu<\/a>\r\nHoudong Hu\r\nGang Hua<\/a>\r\nQiuyuan Huang<\/a>\r\nSing Bing Kang<\/a>\r\nNikolaos Karianakis\r\nNoboru Kuno<\/a>\r\nNabil Lathiff\r\nKuang-Huei Lee\r\nXing Li\r\nOlga Liakhovich\r\nTongliang Liao\r\nStephen Lin\r\nZicheng Liu<\/a>\r\nYan Lu<\/a>\r\nChong Luo<\/a>\r\nDaniel McDuff<\/a>\r\nMeenaz Merchant\r\nLeonardo Nunes\r\nMarc Pollefeys<\/a>\r\nTao Qin<\/a>\r\nArun Sacheti\r\nPablo Sala<\/a>\r\nHarpreet Sawhney\r\nPramod Sharma\r\nYelong Shen<\/a>\r\nJamie Shotton<\/a>\r\nYale Song<\/a>\r\nBaochen Sun<\/a>\r\nXiaoyan Sun<\/a>\r\nRavi Theja Yada\r\nAli Osman Ulusoy\r\nHamidreza Vaezi Joze<\/a>\r\nAlon Vinnikov\r\nBaoyuan Wang<\/a>\r\nJianfeng Wang\r\nJingdong Wang<\/a>\r\nLijuan Wang<\/a>\r\nZhe Wang\r\nZhirong Wu\r\nJiaolong Yang<\/a>\r\nTing Yao\r\nSang Ho Yoon<\/a>\r\nQuanzeng You\r\nCha Zhang<\/a>\r\nLei Zhang<\/a>\r\nMingxue Zhang\r\nPengchuan Zhang<\/a>\r\nTing Zhang<\/a>\r\nYatao Zhong\r\nXiaoyong Zhu"},{"id":1,"name":"Presentations","content":"

Hybrid Camera Pose Estimation<\/h4>\r\nTuesday | 8:50-10:10 | Room 255\r\nFederico Camposeco, Andrea Cohen, Marc Pollefeys<\/strong>, Torsten Sattler\r\n\r\n

Relation Networks for Object Detection<\/a><\/h4>\r\nWednesday | 8:30-10:10 | Ballroom\r\nHan Hu, Jiayuan Gu<\/strong>, Zheng Zhang<\/strong>, Jifeng Dai<\/strong>, Yichen Wei<\/strong><\/a>\r\n\r\n

RayNet: Learning Volumetric 3D Reconstruction With Ray Potentials<\/a><\/h4>\r\nWednesday | 8:30-10:10 | Room 255\r\nDespoina Paschalidou, Ali Osman Ulusoy<\/strong>, Carolin Schmitt, Luc Van Gool, Andreas Geiger\r\n\r\n

Automatic 3D Indoor Scene Modeling From Single Panorama<\/a><\/h4>\r\nWednesday | 8:30-10:10 | Room 255\r\nYang Yang, Shi Jin, Ruiyang Liu, Sing Bing Kang<\/strong><\/a>, Jingyi Yu\r\n\r\n

Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering<\/a><\/h4>\r\nWednesday | 2:50-4:30 | Room 155\r\nPeter Anderson, Xiaodong He, Chris Buehler<\/strong>, Damien Teney, Mark Johnson, Stephen Gould, Lei Zhang<\/strong>\r\n\r\n

Visual Question Generation as Dual Task of Visual Question Answering<\/a><\/h4>\r\nWednesday | 2:50-4:30 | Room 155\r\nYikang Li, Nan Duan<\/strong><\/a>, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, Ming Zhou<\/strong><\/a>\r\n\r\n

Towards High Performance Video Object Detection<\/a><\/h4>\r\nThursday | 8:30-10:10 | Ballroom\r\nXizhou Zhu, Jifeng Dai<\/strong><\/a>, Lu Yuan<\/strong><\/a>, Yichen Wei<\/strong><\/a>\r\n\r\n

Consensus Maximization for Semantic Region Correspondences<\/h4>\r\nThursday | 8:30-10:10 | Room 155\r\nPablo Speciale, Danda P. Paudel, Martin R. Oswald, Hayko Riemenschneider, Luc Van Gool, Marc Pollefeys<\/strong>\r\n\r\n

InLoc: Indoor Visual Localization With Dense Matching and View Synthesis<\/h4>\r\nThursday | 8:30-10:10 | Ballroom\r\nHajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, Marc Pollefeys<\/strong>, Josef Sivic, Tomas Pajdla, Akihiko Torii\r\n\r\n

Language-Based Image Editing With Recurrent Attentive Models<\/a><\/h4>\r\nThursday | 12:50-2:30 | Room 255\r\nJianbo Chen, Yelong Shen<\/strong>, Jianfeng Gao<\/strong>, Jingjing Liu<\/strong>, Xiaodong Liu<\/strong>\r\n\r\n

Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions<\/h4>\r\nThursday | 12:50-2:30 | Room 155\r\nTorsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, Marc Pollefeys<\/strong>, Josef Sivic, Fredrik Kahl, Tomas Pajdla\r\n\r\n

Feature Space Transfer for Data Augmentation<\/h4>\r\nThursday | 2:50-4:30 | Room 255\r\nBo Liu, Xudong Wang, Mandar Dixit<\/strong>, Roland Kwitt, and Nuno Vasconcelos\r\n\r\n

Interleaved Structured Sparse Convolutional Neural Networks<\/a><\/h4>\r\nThursday | 2:50-4:30 | Ballroom\r\nGuotian Xie, Jingdong Wang<\/strong><\/a>, Ting Zhang<\/strong><\/a>, Jianhuang Lai, Richang Hong, Guo-Jun Qi\r\n\r\n

Revisiting Deep Intrinsic Image Decompositions<\/a><\/h4>\r\nThursday | 2:50-4:30 | Room 155\r\nQingnan Fan, Jiaolong Yang<\/strong><\/a>, Gang Hua<\/strong><\/a>, Baoquan Chen, David Wipf<\/strong><\/a>\r\n

Good Citizen of CVPR Panel<\/a><\/h3>\r\nFriday | 9:30-9:50 | Ballroom E\r\nRights and Obligations (Good review and bad review, constructive criticism)\r\nKatsu Ikeuchi<\/strong>\r\n\r\nFriday | 9:50-10:10 | Ballroom E\r\nHow to create an inclusive and welcoming culture at CVPR and not have a \"clique\" culture\r\nTimnit Gebru<\/strong>"},{"id":2,"name":"Posters","content":"

Posters<\/h2>\r\nTuesday | 10:10-12:30 | Halls C-E\r\nReal-Time Seamless Single Shot 6D Object Pose Prediction<\/a>\r\nBugra Tekin, Sudipta Sinha<\/strong><\/a>, Pascal Fua\r\n\r\nTuesday | 10:10-12:30 | Halls C-E\r\nMiCT: Mixed 3D\/2D Convolutional Tube for Human Action Recognition<\/a>\r\nYizhou Zhou, Xiaoyan Sun<\/strong>, Zheng-Jun Zha, Wenjun Zeng<\/strong>\r\n\r\nTuesday | 10:10-12:30 | Halls C-E\r\nHybrid Camera Pose Estimation\r\nFederico Camposeco, Andrea Cohen, Marc Pollefeys<\/strong>, Torsten Sattler\r\n\r\nTuesday | 12:30-2:50 | Halls C-E\r\nGlobal Versus Localized Generative Adversarial Nets<\/a>\r\nGuo-Jun Qi, Liheng Zhang, Hao Hu, Marzieh Edraki, Jingdong Wang<\/strong><\/a>, Xian-Sheng Hua\r\n\r\nTuesday | 12:30-2:50 | Halls C-E\r\nA High-Quality Denoising Dataset for Smartphone Cameras<\/a>\r\nAbdelrahman Abdelhamed, Stephen Lin<\/strong>, Michael S. Brown\r\n\r\nTuesday | 12:30-2:50 | Halls C-E\r\nAugmenting Crowd-Sourced 3D Reconstructions Using Semantic Detections\r\nTrue Price, Johannes L. 
Sch\u00f6nberger<\/strong>, Zhen Wei, Marc Pollefeys<\/strong>, Jan-Michael Frahm\r\n\r\nWednesday | 10:10-12:30 | Halls C-E\r\nRelation Networks for Object Detection<\/a>\r\nHan Hu, Jiayuan Gu<\/strong>, Zheng Zhang<\/strong>, Jifeng Dai<\/strong><\/a>, Yichen Wei<\/strong><\/a>\r\n\r\nWednesday | 10:10-12:30 | Halls C-E\r\nRayNet: Learning Volumetric 3D Reconstruction With Ray Potentials<\/a>\r\nDespoina Paschalidou, Ali Osman Ulusoy,<\/strong> Carolin Schmitt, Luc Van Gool, Andreas Geiger\r\n\r\nWednesday | 10:10-12:30 | Halls C-E\r\nAutomatic 3D Indoor Scene Modeling From Single Panorama<\/a>\r\nYang Yang, Shi Jin, Ruiyang Liu, Sing Bing Kang<\/strong><\/a>, Jingyi Yu\r\n\r\nWednesday | 10:10-12:30 | Halls C-E\r\nPseudo Mask Augmented Object Detection<\/a>\r\nXiangyun Zhao, Shuang Liang, Yichen Wei<\/strong><\/a>\r\n\r\nWednesday | 12:30-2:50 | Halls C-E\r\nA Twofold Siamese Network for Real-Time Object Tracking<\/a>\r\nAnfeng He, Chong Luo<\/strong><\/a>, Xinmei Tian, Wenjun Zeng<\/strong>\r\n\r\nWednesday | 12:30-2:50 | Halls C-E\r\nCleanNet: Transfer Learning for Scalable Image Classifier Training With Label Noise<\/a>\r\nKuang-Huei Lee<\/strong>, Xiaodong He, Lei Zhang<\/strong>, Linjun Yang\r\n\r\nWednesday | 12:30-2:50 | Halls C-E\r\nEnd-to-End Convolutional Semantic Embeddings<\/a>\r\nQuanzeng You<\/strong>, Zhengyou Zhang<\/strong>, Jiebo Luo\r\n\r\nWednesday | 12:30-2:50 | Halls C-E\r\nGenerative Adversarial Learning Towards Fast Weakly Supervised Detection<\/a>\r\nYunhan Shen, Rongrong Ji, Shengchuan Zhang, Wangmeng Zuo, Yan Wang<\/strong>\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\nBottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering<\/a>\r\nPeter Anderson, Xiaodong He, Chris Buehler<\/strong>, Damien Teney, Mark Johnson, Stephen Gould, Lei Zhang<\/strong>\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\nVisual Question Generation as Dual Task of Visual Question Answering<\/a>\r\nYikang Li, Nan Duan<\/strong><\/a>, Bolei 
Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, Ming Zhou<\/strong><\/a>\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\nSemantic Visual Localization\r\nJohannes L. Sch\u00f6nberger<\/strong>, Marc Pollefeys<\/strong>, Andreas Geiger, Torsten Sattler\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\nStereoscopic Neural Style Transfer<\/a>\r\nDongdong Chen, Lu Yuan<\/strong><\/a>, Jing Liao<\/strong>, Nenghai Yu, Gang Hua<\/strong><\/a>\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\nTowards Open-Set Identity Preserving Face Synthesis<\/a>\r\nJianmin Bao, Dong Chen<\/strong><\/a>, Fang Wen<\/strong>, Houqiang Li, Gang Hua<\/strong><\/a>\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\nWeakly-Supervised Semantic Segmentation Network With Deep Seeded Region Growing<\/a>\r\nZilong Huang, Xinggang Wang, Jiasi Wang, Wenyu Liu, Jingdong Wang<\/strong><\/a>\r\n\r\nThursday | 10:10-12:30 | Halls D-E\r\nTowards High Performance Video Object Detection<\/a>\r\nXizhou Zhu, Jifeng Dai<\/strong><\/a>, Lu Yuan<\/strong><\/a>, Yichen Wei<\/strong><\/a>\r\n\r\nThursday | 10:10-12:30 | Halls D-E\r\nInLoc: Indoor Visual Localization With Dense Matching and View Synthesis\r\nHajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, Marc Pollefeys<\/strong>, Josef Sivic, Tomas Pajdla, Akihiko Torii\r\n\r\nThursday | 10:10-12:30 | Halls D-E\r\nConsensus Maximization for Semantic Region Correspondences\r\nPablo Speciale, Danda P. Paudel, Martin R. 
Oswald, Hayko Riemenschneider, Luc Van Gool, Marc Pollefeys<\/strong>\r\n\r\nThursday | 10:10-12:30 | Halls D-E\r\nArbitrary Style Transfer With Deep Feature Reshuffle<\/a>\r\nShuyang Gu, Congliang Chen, Jing Liao<\/strong>, Lu Yuan<\/strong><\/a>\r\n\r\nThursday | 4:30-6:30 | Halls D-E\r\nLanguage-Based Image Editing With Recurrent Attentive Models<\/a>\r\nJianbo Chen, Yelong Shen,<\/strong> Jianfeng Gao<\/strong>, Jingjing Liu<\/strong>, Xiaodong Liu <\/strong>\r\n\r\nThursday | 4:30-6:30 | Halls D-E\r\nBenchmarking 6DOF Outdoor Visual Localization in Changing Conditions\r\nTorsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, Marc Pollefeys<\/strong>, Josef Sivic, Fredrik Kahl, Tomas Pajdla\r\n\r\nThursday | 4:30-6:30 | Halls D-E\r\nInterleaved Structured Sparse Convolutional Neural Networks<\/a>\r\nGuotian Xie, Jingdong Wang<\/strong><\/a>, Ting Zhang<\/strong><\/a>, Jianhuang Lai, Richang Hong, Guo-Jun Qi\r\n\r\nThursday | 4:30-6:30 |\u00a0Halls D-E\r\nRevisiting Deep Intrinsic Image Decompositions<\/a>\r\nQingnan Fan, Jiaolong Yang<\/strong><\/a>, Gang Hua<\/strong><\/a>, Baoquan Chen, David Wipf<\/strong><\/a>"},{"id":3,"name":"Careers","content":"[row]\r\n\r\n[card title=\"Computer Vision Scientist\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/411303\/Computer-Vision-Scientist\" ]In Mixed Reality, people\u2014not devices\u2014are at the center of everything we do. We are a growing team of talented engineers and artists putting technology on a human path across all Windows devices, including Microsoft HoloLens, the Internet of Things, phones, tablets, desktops, and Xbox, and the larger World of all devices. There will be a better way for people to work and play effectively in a human and physical world through Human Augmentation via Mixed Reality. 
Come join us in creating this future![\/card]\r\n\r\n[card title=\"Research SDE\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/399293\/Research-SDE\" ]The Computer Vision Technology Group is a vital part of the Artificial Intelligence and Research division, which mobilizes research and advanced technology projects by creating and building state-of-the-art AI technology in areas such as computer vision and machine learning. The team is growing, and we are looking for talented people who have a background in research and\/or engineering and love to develop new technology that can be deployed to millions of users worldwide.[\/card]\r\n\r\n[card title=\"Post Doc Researcher - Deep Learning\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/428460\/Post-Doc-Researcher-Deep-Learning\" ]Microsoft Research AI (MSR AI) comprises researchers, engineers, and postdocs who take a broad perspective on the next generation of intelligent systems. We seek exceptional postdoc researchers from all areas of deep learning, reinforcement learning, machine learning, artificial intelligence, and related fields with a passion and demonstrated ability for independent research, including a strong publication record at top international research venues.[\/card]\r\n\r\n[\/row][row]\r\n\r\n[card title=\"Researcher\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/412334\/Researcher\" ]The HoloLens team in Cambridge, UK, is building the future for mixed reality. We are passionate about using computer vision to make interaction with our devices and communication with other people more intuitive and personal. The team has a strong track record of shipping ground-breaking technologies in Microsoft products including Kinect and HoloLens. 
The team is growing, and we are looking for talented computer vision and machine learning researchers and software engineers: people who love to invent and build new stuff that really works and can be deployed to millions of users.[\/card]\r\n\r\n[\/row]"}],"msr_startdate":"2018-06-18","msr_enddate":"2018-06-22","msr_event_time":"","msr_location":"Salt Lake City, Utah","msr_event_link":"http:\/\/cvpr2018.thecvf.com\/attend\/registration","msr_event_recording_link":"","msr_startdate_formatted":"June 18, 2018","msr_register_text":"Watch now","msr_cta_link":"http:\/\/cvpr2018.thecvf.com\/attend\/registration","msr_cta_text":"Watch now","msr_cta_bi_name":"Event Register","featured_image_thumbnail":"\"CVPR","event_excerpt":"Microsoft is proud to be a diamond sponsor of the Conference on Computer Vision and Pattern Recognition (CVPR) June 18 \u2013 22 in Salt Lake City, Utah. Please visit us at booth 537 to chat with our experts, see demos of our latest research and find out about career opportunities with Microsoft. 
Program Committee members Marc Pollefeys \u2013 Robust Vision Challenge Organizers Sing Bing Kang, Stephen Lin, Sebastian Nowozin, and Wenjun Zeng \u2013\u00a0NTIRE 2018 Program…","msr_research_lab":[199560,199561,199565,199571],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[464061,609237,609252,609834,609843,609864,609873],"related-videos":[],"related-posts":[490556,490736,490835,491132],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/488849"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":4,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/488849\/revisions"}],"predecessor-version":[{"id":491702,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/488849\/revisions\/491702"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/489278"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=488849"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=488849"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=488849"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=488849"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=488849"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/
www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=488849"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=488849"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=488849"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=488849"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}