{"id":726349,"date":"2021-03-08T07:49:40","date_gmt":"2021-03-08T15:49:40","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=726349"},"modified":"2025-08-06T11:51:44","modified_gmt":"2025-08-06T18:51:44","slug":"joint-research-centre-workshop-2021","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/joint-research-centre-workshop-2021\/","title":{"rendered":"Joint Research Center Workshop 2021"},"content":{"rendered":"\n\n

Time zone:<\/strong> CEST (UTC+02:00)<\/p>\n

Inria Joint Research Center Projects ><\/a><\/p>\n

Swiss Joint Research Center Projects ><\/a>Opens in a new tab<\/span><\/p>\n

This virtual event brought together the PhD students and postdocs working on collaborative research engagements with Microsoft via the Swiss Joint Research Center (opens in new tab)<\/span><\/a>, Mixed Reality & AI Zurich Lab (opens in new tab)<\/span><\/a>, Mixed Reality & AI Cambridge Lab (opens in new tab)<\/span><\/a>, Inria Joint Center (opens in new tab)<\/span><\/a>, their academic and Microsoft supervisors, as well as the wider research community. The event continued in the tradition of the annual Swiss JRC Workshops (opens in new tab)<\/span><\/a>. PhD students and postdocs presented project updates and discussed their research with their supervisors and other attendees. In addition, Microsoft speakers provided updates on relevant Microsoft projects and initiatives. There were four event sessions, organized by research theme:<\/p>\n

‘Computer Vision’ on 20 and 22 April<\/h3>\n

\"computer<\/p>\n

Speakers included:<\/em><\/p>\n

<\/div>\n

\"Portrait (opens in new tab)<\/span><\/a><\/p>\n

Marc Pollefeys (opens in new tab)<\/span><\/a><\/h4>\n

Professor at ETH Zurich and Lab Director of Microsoft Mixed Reality & AI Zurich Lab
\n
\"Portrait (opens in new tab)<\/span><\/a><\/p>\n

Jamie Shotton (opens in new tab)<\/span><\/a><\/h4>\n

Lab Director of Mixed Reality & AI Cambridge Lab
\n
<\/p>\n

<\/div>\n

‘Systems’ on 19 May<\/h3>\n

\"systems<\/p>\n

Speakers included:<\/em><\/p>\n

<\/div>\n

\"Portrait (opens in new tab)<\/span><\/a><\/p>\n

C\u00e9dric Fournet (opens in new tab)<\/span><\/a><\/h4>\n

Senior Principal Research Manager, Microsoft
\n
\"Portrait (opens in new tab)<\/span><\/a><\/p>\n

Jonathan Protzenko (opens in new tab)<\/span><\/a><\/h4>\n

Principal Researcher, Microsoft
\n
<\/p>\n

<\/div>\n

‘AI’ on 20 May<\/h3>\n

\"AI
\n
Speakers included:<\/em><\/p>\n

<\/div>\n

\"Portrait (opens in new tab)<\/span><\/a><\/p>\n

Emre Kiciman (opens in new tab)<\/span><\/a><\/h4>\n

Senior Principal Researcher, Microsoft
\n
<\/p>\n

<\/div>\n
<\/div>\n

Microsoft\u2019s Event Code of Conduct<\/h3>\n

Microsoft\u2019s mission is to empower every person and every organization on the planet to achieve more. This includes virtual events Microsoft hosts and participates in, where we seek to create a respectful, friendly, and inclusive experience for all participants. As such, we do not tolerate harassing or disrespectful behavior, messages, images, or interactions by any event participant, in any form, at any aspect of the program including business and social activities, regardless of location.<\/p>\n

We do not tolerate any behavior that is degrading to any gender, race, sexual orientation or disability, or any behavior that would violate Microsoft\u2019s Anti-Harassment and Anti-Discrimination Policy, Equal Employment Opportunity Policy, or Standards of Business Conduct (opens in new tab)<\/span><\/a>. In short, the entire experience must meet our culture standards. We encourage everyone to assist in creating a welcoming and safe environment. Please report (opens in new tab)<\/span><\/a> any concerns, harassing behavior, or suspicious or disruptive activity. Microsoft reserves the right to ask attendees to leave at any time at its sole discretion.<\/p>\n

<\/div>\n
\n\t\n\t\tReport a concern\t<\/a>\n\n\t<\/div>\n

Opens in a new tab<\/span><\/p>\n

Computer Vision | Day 1<\/h2>\n

20 April | 16:30 – 19:15 CEST<\/h3>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Time (CEST)<\/strong><\/td>\nSession<\/strong><\/td>\nSpeaker<\/strong><\/td>\n<\/tr>\n
16:30\u201316:35<\/td>\nWelcome<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n
16:35\u201316:55<\/td>\nKeynote: HoloLens, Mixed Reality and Spatial Computing<\/td>\nMarc Pollefeys<\/a>, ETH Zurich \/ Microsoft<\/td>\n<\/tr>\n
16:55\u201317:40<\/td>\nDense Mapping<\/strong><\/td>\nChair: Ondrej Miksik<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nPatchmatchNet: Learned Multi-View Patchmatch Stereo<\/td>\nFangjinhua Wang (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with
Silvano Galliani<\/a>, Marc Pollefeys<\/a>, Pablo Speciale<\/a> and Christoph Vogel<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nVolumetric Mapping for Long-term Robot Interaction | Video<\/a><\/td>\nLukas Schmid (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with Cesar Cadena, Roland Siegwart, ETH Zurich and
Johannes Sch\u00f6nberger<\/a>, Juan Nieto<\/a>, Marc Pollefeys<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nMulti-view Appearance Super-resolution with CNNs | Video<\/a><\/td>\nMatthieu Armando (opens in new tab)<\/span><\/a>, INRIA
\n(collaboration with Edmond Boyer, Jean-Sebastien Franco, INRIA)<\/td>\n<\/tr>\n
17:40\u201318:15<\/td>\nPrivacy Preserving<\/strong> SfM<\/strong><\/td>\nChair: Ondrej Miksik<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nPrivacy-Preserving Image Features | Video<\/a><\/td>\nMihai Dusmanu (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with
Marc Pollefeys<\/a> and Johannes Sch\u00f6nberger<\/a>, Sudipta Sinha<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nPrivacy Preserving Structure-from-Motion | Video<\/a><\/td>\nMarcel Geppert (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with Viktor Larsson, ETH Zurich and
Pablo Speciale (opens in new tab)<\/span><\/a>, Marc Pollefeys<\/a>, Johannes Sch\u00f6nberger<\/a>, Microsoft)<\/td>\n<\/tr>\n
18:20\u201319:10<\/td>\nRobotics, Medical Imaging and Mixed Reality<\/strong><\/td>\nChair: M\u00e9lanie Bernhardt<\/a>, Jeff Delmerico (opens in new tab)<\/span><\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nImmersive Multirobot Teleoperation<\/td>\nRoi Poranne (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with Stelian Coros, ETH Zurich and
Federica Bogo<\/a>, Marc Pollefeys<\/a>, Bugra Tekin<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nProject Altair: Infrared Vision and AI-Decision Making for Longer Drone Flights | Video<\/a><\/td>\nFlorian Achermann (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with Jen Jen Chung, Nicholas Lawrance, Roland Siegwart, ETH Zurich and
Debadeepta Dey<\/a>, Andrey Kolobov<\/a>, Microsoft Research)<\/td>\n<\/tr>\n
<\/td>\nFreetures: Localization in Signed Distance Function Maps | Video<\/a><\/td>\nAlexander Millane (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with
Jeff Delmerico<\/a>, Juan Nieto<\/a>, Marc Pollefeys<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nTowards Markerless Surgical Tool and Hand Pose Estimation | Video<\/a><\/td>\nJonas Hein (opens in new tab)<\/span><\/a>, ETH Zurich, TUM and Microsoft
\n(collaboration with
Philipp F\u00fcrnstahl (opens in new tab)<\/span><\/a>, Balgrist University Hospital, Nassir Navab (opens in new tab)<\/span><\/a>, Technical University of Munich, Marc Pollefeys<\/a>, ETH Zurich (opens in new tab)<\/span><\/a> \/ Microsoft)<\/td>\n<\/tr>\n
<\/td>\nTest-time Adaptable Neural Networks for Robust Medical Image Segmentation | Video<\/a><\/td>\nNeerav Karani (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with
Ender Konukoglu (opens in new tab)<\/span><\/a>, ETH Zurich)<\/td>\n<\/tr>\n
19:10<\/td>\nConclusions<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n
<\/div>\n

Computer Vision | Day 2<\/h2>\n

22 April | 16:30 – 19:20 CEST<\/h3>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Time (CEST)<\/strong><\/td>\nSession<\/strong><\/td>\nSpeaker<\/strong><\/td>\n<\/tr>\n
16:30\u201316:35<\/td>\nWelcome<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n
16:35\u201316:55<\/td>\nKeynote: Computer Vision for Social Presence in Mixed Reality | Video<\/a><\/td>\nJamie Shotton<\/a>, Microsoft<\/td>\n<\/tr>\n
16:55\u201317:55<\/td>\nHumans, Hands & Actions I<\/strong><\/td>\nChair: Marek Kowalski (opens in new tab)<\/span><\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nSemantic-Aware Vector Quantised-Variational Autoencoder for Human Grasp Prediction<\/td>\nMengshi Qi (opens in new tab)<\/span><\/a>, EPFL
\n(collaboration with Pascal Fua, Mathieu Salzmann, EPFL and
Marc Pollefeys<\/a>, Bugra Tekin<\/a>, Sudipta Sinha<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nTreating Touchscreens as Image Sensors for Super-resolution Sensing<\/td>\nPaul Streli (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with Christian Holz, ETH Zurich and
Ken Hinckley<\/a>, Microsoft Research)<\/td>\n<\/tr>\n
<\/td>\nReconstructing 3D Human with Learning-based Method | Video<\/a><\/td>\nBoyao Zhou (opens in new tab)<\/span><\/a>, INRIA
\n(collaboration with Edmond Boyer, Jean-Sebastien Franco, INRIA and
Federica Bogo<\/a>, Martin de la Gorce, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nControllable Human Motion Generation from Trajectories | Video<\/a><\/td>\nKacper Kania (opens in new tab)<\/span><\/a>, Warsaw University of Technology
\n(collaboration with Tomasz Trzcinski, Warsaw University of Technology and
Marek Kowalski (opens in new tab)<\/span><\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nLearning Motion Priors for 4D Human Body Capture in 3D Scenes | Video<\/a><\/td>\nSiwei Zhang (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with Siyu Tang, ETH Zurich and
Federica Bogo<\/a>, Marc Pollefeys<\/a>, Jamie Shotton<\/a>, Microsoft)<\/td>\n<\/tr>\n
18:00\u201319:00<\/td>\nHumans, Hands & Actions II<\/strong><\/td>\nChair: Bugra Tekin<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nDigital Characters in Virtual Experiences | Video<\/a><\/td>\nDarren Cosker (opens in new tab)<\/span><\/a>, Microsoft \/ University of Bath<\/td>\n<\/tr>\n
<\/td>\nHow to Accelerate\u00a0NeRF\u00a0by 3 Orders of Magnitude<\/td>\nStephan Garbin (opens in new tab)<\/span><\/a>, Marek Kowalski (opens in new tab)<\/span><\/a>, Microsoft<\/td>\n<\/tr>\n
<\/td>\nTowards Unconstrained Joint Hand-object Reconstruction from RGB Videos<\/td>\nYana Hasson (opens in new tab)<\/span><\/a>, INRIA
\n(collaboration with Ivan Laptev, Josef Sivic, INRIA and
Marc Pollefeys<\/a>, Johannes Sch\u00f6nberger<\/a>, Bugra Tekin<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nH2O: Two Hands Manipulating Objects for First Person Interaction Recognition | Video<\/a><\/td>\nTaein Kwon (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with
Federica Bogo<\/a>, Marc Pollefeys<\/a>, Bugra Tekin<\/a>, Microsoft)<\/td>\n<\/tr>\n
19:00<\/td>\nConclusions<\/td>\nMarc Pollefeys<\/a>, ETH Zurich \/ Microsoft<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n

Opens in a new tab<\/span><\/p>\n

Systems | 19 May | 17:00 – 19:10 CEST<\/h2>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Time (CEST)<\/strong><\/td>\nSession<\/strong><\/td>\nSpeaker<\/strong><\/td>\n<\/tr>\n
17:00\u201317:05<\/td>\nWelcome<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n
17:05\u201317:20<\/td>\nCloud Confidential Computing | Video<\/a><\/td>\nC\u00e9dric Fournet<\/a>, Microsoft<\/td>\n<\/tr>\n
17:20\u201318:00<\/td>\nSystems Session I<\/strong><\/td>\nChair: Shruti Tople\u200b<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nNoise*: A library of Verified High-Performance Secure Channel Protocol Implementations | Video<\/a><\/td>\nSon Ho (opens in new tab)<\/span><\/a>, INRIA
\n(collaboration with Karthik Bhargavan, INRIA and
Antoine Delignat-Lavaud<\/a>, C\u00e9dric Fournet<\/a>, Florian Grould, Jonathan Protzenko<\/a>, Nikhil Swamy<\/a>, Santiago Zanella<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nReasoning about the TLA+ operator ENABLED within TLAPS | Video<\/a><\/td>\nIoannis Filippidis (opens in new tab)<\/span><\/a>, INRIA
\n(collaboration with Damien Doligez, Stephan Merz, INRIA and
Markus Kuppe<\/a>, Leslie Lamport<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nDataScope: Scaling up Data Shapley over Machine Learning Pipelines | Video<\/a><\/td>\nBojan Karla\u0161 (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with Ce Zhang, ETH Zurich and
Matteo Interlandi<\/a>, Microsoft)<\/td>\n<\/tr>\n
18:00\u201318:15<\/td>\nEverCrypt: New Features and Deployments with ElectionGuard | Video<\/a><\/td>\nJonathan Protzenko<\/a>, Microsoft<\/td>\n<\/tr>\n
18:15\u201319:10<\/td>\nSystems Session II<\/strong><\/td>\nChair: Marios Kogias\u200b (opens in new tab)<\/span><\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nHovercRaft: Achieving Scalability and Fault-tolerance for Microsecond-scale Datacenter Services | Video<\/a><\/td>\nMarios Kogias (opens in new tab)<\/span><\/a>, Microsoft
\n(collaboration with Edouard Bugnion, Konstantinos Prasopoulos, EPFL and
Dan Ports<\/a>, Irene Zhang<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nEfficient Preparation of Sparse Quantum States<\/td>\nNiels Gleinig, ETH Zurich
\n(collaboration with Torsten Hoefler, Renato Renner, ETH Zurich and Martin Roetteler<\/a>, Matthias Troyer<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nUltrafast Optical Circuit Switching for Data Centers Using Integrated Soliton Microcombs | Video<\/a><\/td>\nArslan Raja (opens in new tab)<\/span><\/a>, EPFL \u200b
\n(collaboration with Tobias Kippenberg, EPFL and Hitesh Ballani, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nSynchronous Subnanosecond Clock and Data Recovery for Optically Switched Data Centres using Clock Phase Caching | Video<\/a><\/td>\nKari Clark (opens in new tab)<\/span><\/a>, UCL \u200b
\n(collaboration with Polina Bayvel, UCL and Hitesh Ballani, Microsoft)<\/td>\n<\/tr>\n
19:10<\/td>\nConclusions<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n

Opens in a new tab<\/span><\/p>\n

Artificial Intelligence (AI) | 20 May | 17:00 – 19:00 CEST<\/h2>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Time (CEST)<\/strong><\/td>\nSession<\/strong><\/td>\nSpeaker<\/strong><\/td>\n<\/tr>\n
17:00\u201317:05<\/td>\nWelcome<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n
17:05\u201317:25<\/td>\nPerspective on AI Research: Human-centered AI and Robustness | Video<\/a><\/td>\nEmre Kiciman<\/a>, Microsoft<\/td>\n<\/tr>\n
17:25\u201318:20<\/td>\nAI Session I<\/strong><\/td>\nChair: Guy Leroy<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nMonitoring, Modelling, and Modifying Dietary Habits and Nutrition Based on Large-Scale Digital Traces | Video<\/a><\/td>\nKristina Gligoric (opens in new tab)<\/span><\/a>, EPFL
\n(collaboration with Robert West, EPFL, Arnaud Chiolero, University of Fribourg and
Eric Horvitz<\/a>, Emre Kiciman<\/a>, Ryen White<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nGrounding Spatio-temporal Language with Transformers | Video<\/a><\/td>\nLaetitia Teodorescu, INRIA
\n(collaboration with Tristan Karch, Cl\u00e9ment Moulin-Frier, Pierre-Yves Oudeyer, INRIA and Katja Hofmann<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nStatistical Preconditioning for Distributed Optimization | Video<\/a><\/td>\nHadrien Hendrikx (opens in new tab)<\/span><\/a>, INRIA
\n(collaboration with Francis Bach, Laurent Massouli\u00e9, INRIA and
S\u00e9bastien Bubeck<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nInformation Theory for Representation Learning<\/td>\nMarco Federici (opens in new tab)<\/span><\/a>, University of Amsterdam
\n(collaboration with Patrick Forr\u00e9, Max Welling, University of Amsterdam and
Ryota Tomioka<\/a>, Microsoft)<\/td>\n<\/tr>\n
18:20\u201319:00<\/td>\nAI Session II<\/strong><\/td>\nChair: Evelyn Zuniga<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nInformation Directed Reward Learning for Reinforcement Learning | Video<\/a><\/td>\nDavid Lindner (opens in new tab)<\/span><\/a>, ETH Zurich
\n(collaboration with Andreas Krause, ETH Zurich and
Katja Hofmann<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nProbabilistic DAG Search | Video<\/a><\/td>\nJulia Grosse, University of T\u00fcbingen
\n(collaboration with Philipp Hennig, University of T\u00fcbingen and Cheng Zhang<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nTeacher Algorithms for Deep Reinforcement Learning Students | Video<\/a><\/td>\nR\u00e9my Portelas (opens in new tab)<\/span><\/a>, INRIA
\n(collaboration with Pierre-Yves Oudeyer, INRIA and
Katja Hofmann<\/a>, Microsoft)<\/td>\n<\/tr>\n
19:00<\/td>\nConclusions<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n

Opens in a new tab<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"

This virtual event will bring together the PhD students and postdocs working on collaborative research engagements with Microsoft via the Swiss Joint Research Center, Mixed Reality & AI Lab, Inria Joint Center, their academic and Microsoft supervisors, as well as the wider research community. The event continues in the tradition of the annual Swiss JRC Workshops. PhD students and postdocs will present project updates and discuss their research with their supervisors and other attendees. In addition, Microsoft speakers will provide updates on relevant Microsoft projects and initiatives. There will be four event sessions, organized by research theme: Computer Vision, Systems, and AI.<\/p>\n","protected":false},"featured_media":731251,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2021-04-20","msr_enddate":"2021-05-20","msr_location":"Virtual","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[13556,13562,13547],"msr-region":[239178],"msr-event-type":[197944],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-726349","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-research-area-computer-vision","msr-research-area-systems-and-networking","msr-region-europe","msr-event-type-hosted-by-microsoft","msr-locale-en_us"],"msr_about":"\n\n

Time zone:<\/strong> CEST (UTC+02:00)<\/p>\n

Inria Joint Research Center Projects ><\/a><\/p>\n

Swiss Joint Research Center Projects ><\/a>Opens in a new tab<\/span><\/p>\n

This virtual event brought together the PhD students and postdocs working on collaborative research engagements with Microsoft via the Swiss Joint Research Center (opens in new tab)<\/span><\/a>, Mixed Reality & AI Zurich Lab (opens in new tab)<\/span><\/a>, Mixed Reality & AI Cambridge Lab (opens in new tab)<\/span><\/a>, Inria Joint Center (opens in new tab)<\/span><\/a>, their academic and Microsoft supervisors, as well as the wider research community. The event continued in the tradition of the annual Swiss JRC Workshops (opens in new tab)<\/span><\/a>. PhD students and postdocs presented project updates and discussed their research with their supervisors and other attendees. In addition, Microsoft speakers provided updates on relevant Microsoft projects and initiatives. There were four event sessions, organized by research theme:<\/p>\n

‘Computer Vision’ on 20 and 22 April<\/h3>\n

\"computer<\/p>\n

Speakers included:<\/em><\/p>\n

<\/div>\n

\"Portrait (opens in new tab)<\/span><\/a><\/p>\n

Marc Pollefeys (opens in new tab)<\/span><\/a><\/h4>\n

Professor at ETH Zurich and Lab Director of Microsoft Mixed Reality & AI Zurich Lab
\n
\"Portrait (opens in new tab)<\/span><\/a><\/p>\n

Jamie Shotton (opens in new tab)<\/span><\/a><\/h4>\n

Lab Director of Mixed Reality & AI Cambridge Lab
\n
<\/p>\n

<\/div>\n

‘Systems’ on 19 May<\/h3>\n

\"systems<\/p>\n

Speakers included:<\/em><\/p>\n

<\/div>\n

\"Portrait (opens in new tab)<\/span><\/a><\/p>\n

C\u00e9dric Fournet (opens in new tab)<\/span><\/a><\/h4>\n

Senior Principal Research Manager, Microsoft
\n
\"Portrait (opens in new tab)<\/span><\/a><\/p>\n

Jonathan Protzenko (opens in new tab)<\/span><\/a><\/h4>\n

Principal Researcher, Microsoft
\n
<\/p>\n

<\/div>\n

‘AI’ on 20 May<\/h3>\n

\"AI
\n
Speakers included:<\/em><\/p>\n

<\/div>\n

\"Portrait (opens in new tab)<\/span><\/a><\/p>\n

Emre Kiciman (opens in new tab)<\/span><\/a><\/h4>\n

Senior Principal Researcher, Microsoft
\n
<\/p>\n

<\/div>\n
<\/div>\n

Microsoft\u2019s Event Code of Conduct<\/h3>\n

Microsoft\u2019s mission is to empower every person and every organization on the planet to achieve more. This includes virtual events Microsoft hosts and participates in, where we seek to create a respectful, friendly, and inclusive experience for all participants. As such, we do not tolerate harassing or disrespectful behavior, messages, images, or interactions by any event participant, in any form, at any aspect of the program including business and social activities, regardless of location.<\/p>\n

We do not tolerate any behavior that is degrading to any gender, race, sexual orientation or disability, or any behavior that would violate Microsoft\u2019s Anti-Harassment and Anti-Discrimination Policy, Equal Employment Opportunity Policy, or Standards of Business Conduct (opens in new tab)<\/span><\/a>. In short, the entire experience must meet our culture standards. We encourage everyone to assist in creating a welcoming and safe environment. Please report (opens in new tab)<\/span><\/a> any concerns, harassing behavior, or suspicious or disruptive activity. Microsoft reserves the right to ask attendees to leave at any time at its sole discretion.<\/p>\n

<\/div>\n
\n\t\n\t\tReport a concern\t<\/a>\n\n\t<\/div>\n

Opens in a new tab<\/span><\/p>\n

Computer Vision | Day 1<\/h2>\n

20 April | 16:30 – 19:15 CEST<\/h3>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Time (CEST)<\/strong><\/td>\nSession<\/strong><\/td>\nSpeaker<\/strong><\/td>\n<\/tr>\n
16:30\u201316:35<\/td>\nWelcome<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n
16:35\u201316:55<\/td>\nKeynote: HoloLens, Mixed Reality and Spatial Computing<\/td>\nMarc Pollefeys<\/a>, ETH Zurich \/ Microsoft<\/td>\n<\/tr>\n
16:55\u201317:40<\/td>\nDense Mapping<\/strong><\/td>\nChair: Ondrej Miksik<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nPatchmatchNet: Learned Multi-View Patchmatch Stereo<\/td>\nFangjinhua Wang<\/a>, ETH Zurich
\n(collaboration with
Silvano Galliani<\/a>, Marc Pollefeys<\/a>, Pablo Speciale<\/a> and Christoph Vogel<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nVolumetric Mapping for Long-term Robot Interaction | Video<\/a><\/td>\nLukas Schmid<\/a>, ETH Zurich
\n(collaboration with Cesar Cadena, Roland Siegwart, ETH Zurich and
Johannes Sch\u00f6nberger<\/a>, Juan Nieto<\/a>, Marc Pollefeys<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nMulti-view Appearance Super-resolution with CNNs | Video<\/a><\/td>\nMatthieu Armando<\/a>, INRIA
\n(collaboration with Edmond Boyer, Jean-Sebastien Franco, INRIA)<\/td>\n<\/tr>\n
17:40\u201318:15<\/td>\nPrivacy Preserving<\/strong> SfM<\/strong><\/td>\nChair: Ondrej Miksik<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nPrivacy-Preserving Image Features | Video<\/a><\/td>\nMihai Dusmanu<\/a>, ETH Zurich
\n(collaboration with
Marc Pollefeys<\/a> and Johannes Sch\u00f6nberger<\/a>, Sudipta Sinha<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nPrivacy Preserving Structure-from-Motion | Video<\/a><\/td>\nMarcel Geppert<\/a>, ETH Zurich
\n(collaboration with Viktor Larsson, ETH Zurich and
Pablo Speciale<\/a>, Marc Pollefeys<\/a>, Johannes Sch\u00f6nberger<\/a>, Microsoft)<\/td>\n<\/tr>\n
18:20\u201319:10<\/td>\nRobotics, Medical Imaging and Mixed Reality<\/strong><\/td>\nChair: M\u00e9lanie Bernhardt<\/a>, Jeff Delmerico<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nImmersive Multirobot Teleoperation<\/td>\nRoi Poranne<\/a>, ETH Zurich
\n(collaboration with Stelian Coros, ETH Zurich and
Federica Bogo<\/a>, Marc Pollefeys<\/a>, Bugra Tekin<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nProject Altair: Infrared Vision and AI-Decision Making for Longer Drone Flights | Video<\/a><\/td>\nFlorian Achermann<\/a>, ETH Zurich
\n(collaboration with Jen Jen Chung, Nicholas Lawrance, Roland Siegwart, ETH Zurich and
Debadeepta Dey<\/a>, Andrey Kolobov<\/a>, Microsoft Research)<\/td>\n<\/tr>\n
<\/td>\nFreetures: Localization in Signed Distance Function Maps | Video<\/a><\/td>\nAlexander Millane<\/a>, ETH Zurich
\n(collaboration with
Jeff Delmerico<\/a>, Juan Nieto<\/a>, Marc Pollefeys<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nTowards Markerless Surgical Tool and Hand Pose Estimation | Video<\/a><\/td>\nJonas Hein<\/a>, ETH Zurich, TUM and Microsoft
\n(collaboration with
Philipp F\u00fcrnstahl<\/a>, Balgrist University Hospital, Nassir Navab<\/a>, Technical University of Munich, Marc Pollefeys<\/a>, ETH Zurich<\/a> \/ Microsoft)<\/td>\n<\/tr>\n
<\/td>\nTest-time Adaptable Neural Networks for Robust Medical Image Segmentation | Video<\/a><\/td>\nNeerav Karani<\/a>, ETH Zurich
\n(collaboration with
Ender Konukoglu<\/a>, ETH Zurich)<\/td>\n<\/tr>\n
19:10<\/td>\nConclusions<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n
<\/div>\n

Computer Vision | Day 2<\/h2>\n

22 April | 16:30 – 19:20 CEST<\/h3>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Time (CEST)<\/strong><\/td>\nSession<\/strong><\/td>\nSpeaker<\/strong><\/td>\n<\/tr>\n
16:30\u201316:35<\/td>\nWelcome<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n
16:35\u201316:55<\/td>\nKeynote: Computer Vision for Social Presence in Mixed Reality | Video<\/a><\/td>\nJamie Shotton<\/a>, Microsoft<\/td>\n<\/tr>\n
16:55\u201317:55<\/td>\nHumans, Hands & Actions I<\/strong><\/td>\nChair: Marek Kowalski<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nSemantic-Aware Vector Quantised-Variational Autoencoder for Human Grasp Prediction<\/td>\nMengshi Qi<\/a>, EPFL
\n(collaboration with Pascal Fua, Mathieu Salzmann, EPFL and
Marc Pollefeys<\/a>, Bugra Tekin<\/a>, Sudipta Sinha<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nTreating Touchscreens as Image Sensors for Super-resolution Sensing<\/td>\nPaul Streli<\/a>, ETH Zurich
\n(collaboration with Christian Holz, ETH Zurich and
Ken Hinckley<\/a>, Microsoft Research)<\/td>\n<\/tr>\n
<\/td>\nReconstructing 3D Human with Learning-based Method | Video<\/a><\/td>\nBoyao Zhou<\/a>, INRIA
\n(collaboration with Edmond Boyer, Jean-Sebastien Franco, INRIA and
Federica Bogo<\/a>, Martin de la Gorce, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nControllable Human Motion Generation from Trajectories | Video<\/a><\/td>\nKacper Kania<\/a>, Warsaw University of Technology
\n(collaboration with Tomasz Trzcinski, Warsaw University of Technology and
Marek Kowalski<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nLearning Motion Priors for 4D Human Body Capture in 3D Scenes | Video<\/a><\/td>\nSiwei Zhang<\/a>, ETH Zurich
\n(collaboration with Siyu Tang, ETH Zurich and
Federica Bogo<\/a>, Marc Pollefeys<\/a>, Jamie Shotton<\/a>, Microsoft)<\/td>\n<\/tr>\n
18:00\u201319:00<\/td>\nHumans, Hands & Actions II<\/strong><\/td>\nChair: Bugra Tekin<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nDigital Characters in Virtual Experiences | Video<\/a><\/td>\nDarren Cosker<\/a>, Microsoft \/ University of Bath<\/td>\n<\/tr>\n
<\/td>\nHow to Accelerate\u00a0NeRF\u00a0by 3 Orders of Magnitude<\/td>\nStephan Garbin<\/a>, Marek Kowalski<\/a>, Microsoft<\/td>\n<\/tr>\n
<\/td>\nTowards Unconstrained Joint Hand-object Reconstruction from RGB Videos<\/td>\nYana Hasson<\/a>, INRIA
\n(collaboration with Ivan Laptev, Josef Sivic, INRIA and
Marc Pollefeys<\/a>, Johannes Sch\u00f6nberger<\/a>, Bugra Tekin<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nH2O: Two Hands Manipulating Objects for First Person Interaction Recognition | Video<\/a><\/td>\nTaein Kwon<\/a>, ETH Zurich
\n(collaboration with
Federica Bogo<\/a>, Marc Pollefeys<\/a>, Bugra Tekin<\/a>, Microsoft)<\/td>\n<\/tr>\n
19:00<\/td>\nConclusions<\/td>\nMarc Pollefeys<\/a>, ETH Zurich \/ Microsoft<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n

Opens in a new tab<\/span><\/p>\n

Systems | 19 May | 17:00 – 19:10 CEST<\/h2>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Time (CEST)<\/strong><\/td>\nSession<\/strong><\/td>\nSpeaker<\/strong><\/td>\n<\/tr>\n
17:00\u201317:05<\/td>\nWelcome<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n
17:05\u201317:20<\/td>\nCloud Confidential Computing | Video<\/a><\/td>\nC\u00e9dric Fournet<\/a>, Microsoft<\/td>\n<\/tr>\n
17:20\u201318:00<\/td>\nSystems Session I<\/strong><\/td>\nChair: Shruti Tople\u200b<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nNoise*: A library of Verified High-Performance Secure Channel Protocol Implementations | Video<\/a><\/td>\nSon Ho<\/a>, INRIA
\n(collaboration with Karthik Bhargavan, INRIA and
Antoine Delignat-Lavaud<\/a>, C\u00e9dric Fournet<\/a>, Florian Grould, Jonathan Protzenko<\/a>, Nikhil Swamy<\/a>, Santiago Zanella<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nReasoning about the TLA+ operator ENABLED within TLAPS | Video<\/a><\/td>\nIoannis Filippidis<\/a>, INRIA
\n(collaboration with Damien Doligez, Stephan Merz, INRIA and
Markus Kuppe<\/a>, Leslie Lamport<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nDataScope: Scaling up Data Shapley over Machine Learning Pipelines | Video<\/a><\/td>\nBojan Karla\u0161<\/a>, ETH Zurich
\n(collaboration with Ce Zhang, ETH Zurich and
Matteo Interlandi<\/a>, Microsoft)<\/td>\n<\/tr>\n
18:00\u201318:15<\/td>\nEverCrypt: New Features and Deployments with ElectionGuard | Video<\/a><\/td>\nJonathan Protzenko<\/a>, Microsoft<\/td>\n<\/tr>\n
18:15\u201319:10<\/td>\nSystems Session II<\/strong><\/td>\nChair: Marios Kogias\u200b<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nHovercRaft: Achieving Scalability and Fault-tolerance for Microsecond-scale Datacenter Services | Video<\/a><\/td>\nMarios Kogias<\/a>, Microsoft
\n(collaboration with Edouard Bugnion, Konstantinos Prasopoulos, EPFL and
Dan Ports<\/a>, Irene Zhang<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nEfficient Preparation of Sparse Quantum States<\/td>\nNiels Gleinig, ETH Zurich
\n(collaboration with Torsten Hoefler, Renato Renner, ETH Zurich and Martin Roetteler<\/a>, Matthias Troyer<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nUltrafast Optical Circuit Switching for Data Centers Using Integrated Soliton Microcombs | Video<\/a><\/td>\nArslan Raja<\/a>, EPFL \u200b
\n(collaboration with Tobias Kippenberg, EPFL and Hitesh Ballani, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nSynchronous Subnanosecond Clock and Data Recovery for Optically Switched Data Centres using Clock Phase Caching | Video<\/a><\/td>\nKari Clark<\/a>, UCL \u200b
\n(collaboration with Polina Bayvel, UCL and Hitesh Ballani, Microsoft)<\/td>\n<\/tr>\n
19:10<\/td>\nConclusions<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n

Opens in a new tab<\/span><\/p>\n

Artificial Intelligence (AI) | 20 May | 17:00 – 19:00 CEST<\/h2>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Time (CEST)<\/strong><\/td>\nSession<\/strong><\/td>\nSpeaker<\/strong><\/td>\n<\/tr>\n
17:00\u201317:05<\/td>\nWelcome<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n
17:05\u201317:25<\/td>\nPerspective on AI Research: Human-centered AI and Robustness | Video<\/a><\/td>\nEmre Kiciman<\/a>, Microsoft<\/td>\n<\/tr>\n
17:25\u201318:20<\/td>\nAI Session I<\/strong><\/td>\nChair: Guy Leroy<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nMonitoring, Modelling, and Modifying Dietary Habits and Nutrition Based on Large-Scale Digital Traces | Video<\/a><\/td>\nKristina Gligoric<\/a>, EPFL
\n(collaboration with Robert West, EPFL, Arnaud Chiolero, University of Fribourg and
Eric Horvitz<\/a>, Emre Kiciman<\/a>, Ryen White<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nGrounding Spatio-temporal Language with Transformers | Video<\/a><\/td>\nLaetitia Teodorescu, INRIA
\n(collaboration with Tristan Karch, Cl\u00e9ment Moulin-Frier, Pierre-Yves Oudeyer, INRIA and Katja Hofmann<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nStatistical Preconditioning for Distributed Optimization | Video<\/a><\/td>\nHadrien Hendrikx<\/a>, INRIA
\n(collaboration with Francis Bach, Laurent Massouli\u00e9, INRIA and
S\u00e9bastien Bubeck<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nInformation Theory for Representation Learning<\/td>\nMarco Federici<\/a>, University of Amsterdam
\n(collaboration with Patrick Forr\u00e9, Max Welling, University of Amsterdam and
Ryota Tomioka<\/a>, Microsoft)<\/td>\n<\/tr>\n
18:20\u201319:00<\/td>\nAI Session II<\/strong><\/td>\nChair: Evelyn Zuniga<\/a>, Microsoft<\/em><\/td>\n<\/tr>\n
<\/td>\nInformation Directed Reward Learning for Reinforcement Learning | Video<\/a><\/td>\nDavid Lindner<\/a>, ETH Zurich
\n(collaboration with Andreas Krause, ETH Zurich and
Katja Hofmann<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nProbabilistic DAG Search | Video<\/a><\/td>\nJulia Grosse, University of T\u00fcbingen
\n(collaboration with Philipp Hennig, University of T\u00fcbingen and Cheng Zhang<\/a>, Microsoft)<\/td>\n<\/tr>\n
<\/td>\nTeacher Algorithms for Deep Reinforcement Learning Students | Video<\/a><\/td>\nR\u00e9my Portelas<\/a>, INRIA
\n(collaboration with Pierre-Yves Oudeyer, INRIA and
Katja Hofmann<\/a>, Microsoft)<\/td>\n<\/tr>\n
19:00<\/td>\nConclusions<\/td>\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n

Opens in a new tab<\/span><\/p>\n","tab-content":[{"id":0,"name":"About","content":"This virtual event brought together the PhD students and postdocs working on collaborative research engagements with Microsoft via the Swiss Joint Research Center<\/a>, Mixed Reality & AI Zurich Lab<\/a>, Mixed Reality & AI Cambridge Lab<\/a>, Inria Joint Center<\/a>, their academic and Microsoft supervisors, as well as the wider research community. The event continued in the tradition of the annual Swiss JRC Workshops<\/a>. PhD students and postdocs presented project updates and discussed their research with their supervisors and other attendees. In addition, Microsoft speakers provided updates on relevant Microsoft projects and initiatives. There were four event sessions, organized by research theme:\r\n

'Computer Vision' on 20 and 22 April<\/h3>\r\n\"computer\r\n\r\nSpeakers included:<\/em>\r\n
<\/div>\r\n\"Portrait<\/a>\r\n

Marc Pollefeys<\/a><\/h4>\r\nProfessor at ETH Zurich and Lab Director of Microsoft Mixed Reality & AI Zurich Lab\r\n
\"Portrait<\/a>\r\n

Jamie Shotton<\/a><\/h4>\r\nLab Director of Mixed Reality & AI Cambridge Lab\r\n
\r\n
<\/div>\r\n

'Systems' on 19 May<\/h3>\r\n\"systems\r\n\r\nSpeakers included:<\/em>\r\n
<\/div>\r\n\"Portrait<\/a>\r\n

C\u00e9dric Fournet<\/a><\/h4>\r\nSenior Principal Research Manager, Microsoft\r\n
\"Portrait<\/a>\r\n

Jonathan Protzenko<\/a><\/h4>\r\nPrincipal Researcher, Microsoft\r\n
\r\n
<\/div>\r\n

'AI' on 20 May<\/h3>\r\n\"AI\r\n
Speakers included:<\/em>\r\n
<\/div>\r\n\"Portrait<\/a>\r\n

Emre Kiciman<\/a><\/h4>\r\nSenior Principal Researcher, Microsoft\r\n
\r\n
<\/div>\r\n
<\/div>\r\n

Microsoft\u2019s Event Code of Conduct<\/h3>\r\nMicrosoft\u2019s mission is to empower every person and every organization on the planet to achieve more. This includes virtual events Microsoft hosts and participates in, where we seek to create a respectful, friendly, and inclusive experience for all participants. As such, we do not tolerate harassing or disrespectful behavior, messages, images, or interactions by any event participant, in any form, at any aspect of the program including business and social activities, regardless of location.\r\n\r\nWe do not tolerate any behavior that is degrading to any gender, race, sexual orientation or disability, or any behavior that would violate Microsoft\u2019s Anti-Harassment and Anti-Discrimination Policy, Equal Employment Opportunity Policy, or Standards of Business Conduct<\/a>. In short, the entire experience must meet our culture standards. We encourage everyone to assist in creating a welcoming and safe environment. Please report<\/a> any concerns, harassing behavior, or suspicious or disruptive activity. Microsoft reserves the right to ask attendees to leave at any time at its sole discretion.\r\n
<\/div>\r\n
[msr-button text=\"Report a concern\" url=\"https:\/\/app.convercent.com\/en-us\/Anonymous\/IssueIntake\/LandingPage\/65d3b907-0933-e611-8105-000d3ab03673\" new-window=\"true\" ]<\/div>"},{"id":1,"name":"Computer Vision","content":"

Computer Vision | Day 1<\/h2>\r\n

20 April | 16:30 - 19:15 CEST<\/h3>\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
Time (CEST)<\/strong><\/td>\r\nSession<\/strong><\/td>\r\nSpeaker<\/strong><\/td>\r\n<\/tr>\r\n
16:30\u201316:35<\/td>\r\nWelcome<\/td>\r\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n
16:35\u201316:55<\/td>\r\nKeynote: HoloLens, Mixed Reality and Spatial Computing<\/td>\r\nMarc Pollefeys<\/a>, ETH Zurich \/ Microsoft<\/td>\r\n<\/tr>\r\n
16:55\u201317:40<\/td>\r\nDense Mapping<\/strong><\/td>\r\nChair: Ondrej Miksik<\/a>, Microsoft<\/em><\/td>\r\n<\/tr>\r\n
<\/td>\r\nPatchmatchNet: Learned Multi-View Patchmatch Stereo<\/td>\r\nFangjinhua Wang<\/a>, ETH Zurich\r\n(collaboration with Silvano Galliani<\/a>, Marc Pollefeys<\/a>, Pablo Speciale<\/a> and Christoph Vogel<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nVolumetric Mapping for Long-term Robot Interaction | Video<\/a><\/td>\r\nLukas Schmid<\/a>, ETH Zurich\r\n(collaboration with Cesar Cadena, Roland Siegwart, ETH Zurich and Johannes Sch\u00f6nberger<\/a>, Juan Nieto<\/a>, Marc Pollefeys<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nMulti-view Appearance Super-resolution with CNNs | Video<\/a><\/td>\r\nMatthieu Armando<\/a>, INRIA\r\n(collaboration with Edmond Boyer, Jean-Sebastien Franco, INRIA)<\/td>\r\n<\/tr>\r\n
17:40\u201318:15<\/td>\r\nPrivacy Preserving<\/strong> SfM<\/strong><\/td>\r\nChair: Ondrej Miksik<\/a>, Microsoft<\/em><\/td>\r\n<\/tr>\r\n
<\/td>\r\nPrivacy-Preserving Image Features | Video<\/a><\/td>\r\nMihai Dusmanu<\/a>, ETH Zurich\r\n(collaboration with Marc Pollefeys<\/a> and Johannes Sch\u00f6nberger<\/a>, Sudipta Sinha<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nPrivacy Preserving Structure-from-Motion | Video<\/a><\/td>\r\nMarcel Geppert<\/a>, ETH Zurich\r\n(collaboration with Viktor Larsson, ETH Zurich and Pablo Speciale<\/a>, Marc Pollefeys<\/a>, Johannes Sch\u00f6nberger<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
18:20\u201319:10<\/td>\r\nRobotics, Medical Imaging and Mixed Reality<\/strong><\/td>\r\nChair: M\u00e9lanie Bernhardt<\/a>, Jeff Delmerico<\/a>, Microsoft<\/em><\/td>\r\n<\/tr>\r\n
<\/td>\r\nImmersive Multirobot Teleoperation<\/td>\r\nRoi Poranne<\/a>, ETH Zurich\r\n(collaboration with Stelian Coros, ETH Zurich and Federica Bogo<\/a>, Marc Pollefeys<\/a>, Bugra Tekin<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nProject Altair: Infrared Vision and AI-Decision Making for Longer Drone Flights | Video<\/a><\/td>\r\nFlorian Achermann<\/a>, ETH Zurich\r\n(collaboration with Jen Jen Chung, Nicholas Lawrance, Roland Siegwart, ETH Zurich and Debadeepta Dey<\/a>, Andrey Kolobov<\/a>, Microsoft Research)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nFreetures: Localization in Signed Distance Function Maps | Video<\/a><\/td>\r\nAlexander Millane<\/a>, ETH Zurich\r\n(collaboration with Jeff Delmerico<\/a>, Juan Nieto<\/a>, Marc Pollefeys<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nTowards Markerless Surgical Tool and Hand Pose Estimation | Video<\/a><\/td>\r\nJonas Hein<\/a>, ETH Zurich, TUM and Microsoft\r\n(collaboration with Philipp F\u00fcrnstahl<\/a>, Balgrist University Hospital, Nassir Navab<\/a>, Technical University of Munich, Marc Pollefeys<\/a>, ETH Zurich<\/a> \/ Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nTest-time Adaptable Neural Networks for Robust Medical Image Segmentation | Video<\/a><\/td>\r\nNeerav Karani<\/a>, ETH Zurich\r\n(collaboration with Ender Konukoglu<\/a>, ETH Zurich)<\/td>\r\n<\/tr>\r\n
19:10<\/td>\r\nConclusions<\/td>\r\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\r\n<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n
<\/div>\r\n

Computer Vision | Day 2<\/h2>\r\n

22 April | 16:30 - 19:20 CEST<\/h3>\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
Time (CEST)<\/strong><\/td>\r\nSession<\/strong><\/td>\r\nSpeaker<\/strong><\/td>\r\n<\/tr>\r\n
16:30\u201316:35<\/td>\r\nWelcome<\/td>\r\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n
16:35\u201316:55<\/td>\r\nKeynote: Computer Vision for Social Presence in Mixed Reality | Video<\/a><\/td>\r\nJamie Shotton<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n
16:55\u201317:55<\/td>\r\nHumans, Hands & Actions I<\/strong><\/td>\r\nChair: Marek Kowalski<\/a>, Microsoft<\/em><\/td>\r\n<\/tr>\r\n
<\/td>\r\nSemantic-Aware Vector Quantised-Variational Autoencoder for Human Grasp Prediction<\/td>\r\nMengshi Qi<\/a>, EPFL\r\n(collaboration with Pascal Fua, Mathieu Salzmann, EPFL and Marc Pollefeys<\/a>, Bugra Tekin<\/a>, Sudipta Sinha<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nTreating Touchscreens as Image Sensors for Super-resolution Sensing<\/td>\r\nPaul Streli<\/a>, ETH Zurich\r\n(collaboration with Christian Holz, ETH Zurich and Ken Hinckley<\/a>, Microsoft Research)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nReconstructing 3D Human with Learning-based Method | Video<\/a><\/td>\r\nBoyao Zhou<\/a>, INRIA\r\n(collaboration with Edmond Boyer, Jean-Sebastien Franco, INRIA and Federica Bogo<\/a>, Martin de la Gorce, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nControllable Human Motion Generation from Trajectories | Video<\/a><\/td>\r\nKacper Kania<\/a>, Warsaw University of Technology\r\n(collaboration with Tomasz Trzcinski, Warsaw University of Technology and Marek Kowalski<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nLearning Motion Priors for 4D Human Body Capture in 3D Scenes | Video<\/a><\/td>\r\nSiwei Zhang<\/a>, ETH Zurich\r\n(collaboration with Siyu Tang, ETH Zurich and Federica Bogo<\/a>, Marc Pollefeys<\/a>, Jamie Shotton<\/a>, Microsoft)<\/a><\/td>\r\n<\/tr>\r\n
18:00\u201319:00<\/td>\r\nHumans, Hands & Actions II<\/strong><\/td>\r\nChair: Bugra Tekin<\/a>, Microsoft<\/em><\/td>\r\n<\/tr>\r\n
<\/td>\r\nDigital Characters in Virtual Experiences | Video<\/a><\/td>\r\nDarren Cosker<\/a>, Microsoft \/ University of Bath<\/td>\r\n<\/tr>\r\n
<\/td>\r\nHow to Accelerate\u00a0NeRF\u00a0by 3 Orders of Magnitude<\/td>\r\nStephan Garbin<\/a>, Marek Kowalski<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n
<\/td>\r\nTowards Unconstrained Joint Hand-object Reconstruction from RGB Videos<\/td>\r\nYana Hasson<\/a>, INRIA\r\n(collaboration with Ivan Laptev, Josef Sivic, INRIA and Marc Pollefeys<\/a>, Johannes Sch\u00f6nberger<\/a>, Bugra Tekin<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nH2O: Two Hands Manipulating Objects for First Person Interaction Recognition | Video<\/a><\/td>\r\nTaein Kwon<\/a>, ETH Zurich\r\n(collaboration with Federica Bogo<\/a>, Marc Pollefeys<\/a>, Bugra Tekin<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
19:00<\/td>\r\nConclusions<\/td>\r\nMarc Pollefeys<\/a>, ETH Zurich \/ Microsoft<\/td>\r\n<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>"},{"id":2,"name":"Systems","content":"

Systems | 19 May | 17:00 - 19:10 CEST<\/h2>\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
Time (CEST)<\/strong><\/td>\r\nSession<\/strong><\/td>\r\nSpeaker<\/strong><\/td>\r\n<\/tr>\r\n
17:00\u201317:05<\/td>\r\nWelcome<\/td>\r\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n
17:05\u201317:20<\/td>\r\nCloud Confidential Computing | Video<\/a><\/td>\r\nC\u00e9dric Fournet<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n
17:20\u201318:00<\/td>\r\nSystems Session I<\/strong><\/td>\r\nChair: Shruti Tople\u200b<\/a>, Microsoft<\/em><\/td>\r\n<\/tr>\r\n
<\/td>\r\nNoise*: A library of Verified High-Performance Secure Channel Protocol Implementations | Video<\/a><\/td>\r\nSon Ho<\/a>, INRIA\r\n(collaboration with Karthik Bhargavan, INRIA and Antoine Delignat-Lavaud<\/a>, C\u00e9dric Fournet<\/a>, Florian Grould, Jonathan Protzenko<\/a>, Nikhil Swamy<\/a>, Santiago Zanella<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nReasoning about the TLA+ operator ENABLED within TLAPS | Video<\/a><\/td>\r\nIoannis Filippidis<\/a>, INRIA\r\n(collaboration with Damien Doligez, Stephan Merz, INRIA and Markus Kuppe<\/a>, Leslie Lamport<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nDataScope: Scaling up Data Shapley over Machine Learning Pipelines | Video<\/a><\/td>\r\nBojan Karla\u0161<\/a>, ETH Zurich\r\n(collaboration with Ce Zhang, ETH Zurich and Matteo Interlandi<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
18:00\u201318:15<\/td>\r\nEverCrypt: New Features and Deployments with ElectionGuard | Video<\/a><\/td>\r\nJonathan Protzenko<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n
18:15\u201319:10<\/td>\r\nSystems Session II<\/strong><\/td>\r\nChair: Marios Kogias\u200b<\/a>, Microsoft<\/em><\/td>\r\n<\/tr>\r\n
<\/td>\r\nHovercRaft: Achieving Scalability and Fault-tolerance for Microsecond-scale Datacenter Services | Video<\/a><\/td>\r\nMarios Kogias<\/a>, Microsoft\r\n(collaboration with Edouard Bugnion, Konstantinos Prasopoulos, EPFL and Dan Ports<\/a>, Irene Zhang<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nEfficient Preparation of Sparse Quantum States<\/td>\r\nNiels Gleinig, ETH Zurich\r\n(collaboration with Torsten Hoefler, Renato Renner, ETH Zurich and Martin Roetteler<\/a>, Matthias Troyer<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nUltrafast Optical Circuit Switching for Data Centers Using Integrated Soliton Microcombs | Video<\/a><\/td>\r\nArslan Raja<\/a>, EPFL \u200b\r\n(collaboration with Tobias Kippenberg, EPFL and Hitesh Ballani, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nSynchronous Subnanosecond Clock and Data Recovery for Optically Switched Data Centres using Clock Phase Caching | Video<\/a><\/td>\r\nKari Clark<\/a>, UCL \u200b\r\n(collaboration with Polina Bayvel, UCL and Hitesh Ballani, Microsoft)<\/td>\r\n<\/tr>\r\n
19:10<\/td>\r\nConclusions<\/td>\r\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>"},{"id":3,"name":"AI","content":"

Artificial Intelligence (AI) | 20 May | 17:00 - 19:00 CEST<\/h2>\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
Time (CEST)<\/strong><\/td>\r\nSession<\/strong><\/td>\r\nSpeaker<\/strong><\/td>\r\n<\/tr>\r\n
17:00\u201317:05<\/td>\r\nWelcome<\/td>\r\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n
17:05\u201317:25<\/td>\r\nPerspective on AI Research: Human-centered AI and Robustness | Video<\/a><\/td>\r\nEmre Kiciman<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n
17:25\u201318:20<\/td>\r\nAI Session I<\/strong><\/td>\r\nChair: Guy Leroy<\/a>, Microsoft<\/em><\/td>\r\n<\/tr>\r\n
<\/td>\r\nMonitoring, Modelling, and Modifying Dietary Habits and Nutrition Based on Large-Scale Digital Traces | Video<\/a><\/td>\r\nKristina Gligoric<\/a>, EPFL\r\n(collaboration with Robert West, EPFL, Arnaud Chiolero, University of Fribourg and Eric Horvitz<\/a>, Emre Kiciman<\/a>, Ryen White<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nGrounding Spatio-temporal Language with Transformers | Video<\/a><\/td>\r\nLaetitia Teodorescu, INRIA\r\n(collaboration with Tristan Karch, Cl\u00e9ment Moulin-Frier, Pierre-Yves Oudeyer, INRIA and Katja Hofmann<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nStatistical Preconditioning for Distributed Optimization | Video<\/a><\/td>\r\nHadrien Hendrikx<\/a>, INRIA\r\n(collaboration with Francis Bach, Laurent Massouli\u00e9, INRIA and S\u00e9bastien Bubeck<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nInformation Theory for Representation Learning<\/td>\r\nMarco Federici<\/a>, University of Amsterdam\r\n(collaboration with Patrick Forr\u00e9, Max Welling, University of Amsterdam and Ryota Tomioka<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
18:20\u201319:00<\/td>\r\nAI Session II<\/strong><\/td>\r\nChair: Evelyn Zuniga<\/a>, Microsoft<\/em><\/td>\r\n<\/tr>\r\n
<\/td>\r\nInformation Directed Reward Learning for Reinforcement Learning | Video<\/a><\/td>\r\nDavid Lindner<\/a>, ETH Zurich\r\n(collaboration with Andreas Krause, ETH Zurich and Katja Hofmann<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nProbabilistic DAG Search | Video<\/a><\/td>\r\nJulia Grosse, University of T\u00fcbingen\r\n(collaboration with Philipp Hennig, University of T\u00fcbingen and Cheng Zhang<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
<\/td>\r\nTeacher Algorithms for Deep Reinforcement Learning Students | Video<\/a><\/td>\r\nR\u00e9my Portelas<\/a>, INRIA\r\n(collaboration with Pierre-Yves Oudeyer, INRIA and Katja Hofmann<\/a>, Microsoft)<\/td>\r\n<\/tr>\r\n
19:00<\/td>\r\nConclusions<\/td>\r\nScarlet Schwiderski-Grosche\u200b<\/a>, Microsoft<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>"}],"msr_startdate":"2021-04-20","msr_enddate":"2021-05-20","msr_event_time":"","msr_location":"Virtual","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"April 20, 2021","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":"\"Joint","event_excerpt":"This virtual event will bring together the PhD students and postdocs working on collaborative research engagements with Microsoft via the Swiss Joint Research Center, Mixed Reality & AI Lab, Inria Joint Center, their academic and Microsoft supervisors as well as the wider research community. The event continues in the tradition of the annual Swiss JRC Workshops. Phd students and postdocs will present project updates and discuss their research with their supervisors and other attendants. In…","msr_research_lab":[199561,602418],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[611553,663087],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[753553,756076,755167,754126,754108,753655,753640,753628,753619,753607,753598,753589,753580,753571,753562,750730,753487,753478,753469,753460,753451,753439,753430,753421,753412,753397,753388,753379,753364],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/726349","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":24,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/726349\/revisions"}],"predecessor-version":[{"id":1146887,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-js
on\/wp\/v2\/msr-event\/726349\/revisions\/1146887"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/731251"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=726349"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=726349"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=726349"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=726349"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=726349"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=726349"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=726349"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=726349"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=726349"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}