{"id":560223,"date":"2019-01-23T02:55:54","date_gmt":"2019-01-23T10:55:54","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=560223"},"modified":"2021-03-30T13:39:35","modified_gmt":"2021-03-30T20:39:35","slug":"swiss-jrc-workshop-2019","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/swiss-jrc-workshop-2019\/","title":{"rendered":"Swiss Joint Research Center Workshop 2019"},"content":{"rendered":"

Venue:<\/strong>\u00a0ETH Zurich Hauptgeb\u00e4ude<\/a><\/p>\n

This event is by invitation only.<\/strong><\/p>\n

\"\"<\/a>
\nSwiss JRC Workshop 2017, read more on this
blog<\/a>.<\/p>\n

\"4621.SwissJRC_blog\"<\/a>
\nSwiss JRC Workshop 2014, read more on\u00a0this
blog<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"

The 6th annual workshop of the Swiss Joint Research Center was held on January 31 – February 1, 2019, at ETH in Zurich. Project Principal Investigators (“PIs”) from ETH Zurich and EPFL introduced nine new research collaborations, selected in the recent Call for Proposals, or provided an update on existing research collaborations.<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2019-01-31","msr_enddate":"2019-02-01","msr_location":"Z\u00fcrich, Switzerland","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"footnotes":""},"research-area":[13556,243138,13547],"msr-region":[239178],"msr-event-type":[197947],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-560223","msr-event","type-msr-event","status-publish","hentry","msr-research-area-artificial-intelligence","msr-research-area-quantum","msr-research-area-systems-and-networking","msr-region-europe","msr-event-type-universities","msr-locale-en_us"],"msr_about":"Venue:<\/strong>\u00a0ETH Zurich Hauptgeb\u00e4ude<\/a>\r\n\r\nThis event is by invitation only.<\/strong>\r\n\r\n\"\"<\/a>\r\nSwiss JRC Workshop 2017, read more on this blog<\/a>.\r\n\r\n\"4621.SwissJRC_blog\"<\/a>\r\nSwiss JRC Workshop 2014, read more on\u00a0this blog<\/a>.","tab-content":[{"id":0,"name":"About","content":"\"2017\r\n\r\nThe Swiss Joint Research Center<\/a> is a collaborative engagement between Microsoft Research and the two universities that make up the Swiss Federal Institutes of Technology: ETH Zurich<\/a> (Eidgen\u00f6ssische Technische Hochschule Z\u00fcrich<\/em>, which serves German-speaking students) and EPFL<\/a> (\u00c9cole Polytechnique 
F\u00e9d\u00e9rale de Lausanne<\/em>, which serves French-speaking students).\r\n\r\nThe 6th annual workshop of the Swiss Joint Research Center was held on January 31 - February 1, 2019, at ETH in Zurich. Project Principal Investigators (\"PIs\") from ETH Zurich and EPFL introduced nine new research collaborations, selected in the recent Call for Proposals, or provided an update on existing research collaborations.\r\n\r\nMore details can be found on the project overviews tab<\/a>. The full agenda is available on the agenda tab<\/a>.\r\n\r\n "},{"id":1,"name":"Agenda","content":"

Day 1 - Thursday, January 31<\/h2>\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
Time<\/strong><\/td>\r\nSession<\/strong><\/td>\r\nSpeaker<\/strong><\/td>\r\nLocation<\/strong><\/td>\r\n<\/tr>\r\n
12:00 \u2013 13:30<\/td>\r\nSwiss JRC Workshop\r\nRegistration and Lunch<\/td>\r\n<\/td>\r\nETH Zurich Hauptgeb\u00e4ude<\/a>\r\nFoyer HG D Nord<\/td>\r\n<\/tr>\r\n
13:30 \u2013 13:45<\/td>\r\nWelcome<\/td>\r\nScarlet Schwiderski-Grosche, Microsoft<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
13:45 \u2013 14:15<\/td>\r\nApplied Machine Learning: The Dawn of a New Era<\/td>\r\nChris Bishop, Microsoft<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
14:15 \u2013 14:45<\/td>\r\nTTL-MSR Taming Tail-Latency for Microsecond-scale RPCs<\/td>\r\nMarios Kogias, EPFL<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
14:45 \u2013 15:15<\/td>\r\nMonitoring, Modelling, and Modifying Dietary Habits and Nutrition Based on Large-Scale Digital Traces<\/td>\r\nBob West, EPFL<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
15:15 \u2013 15:45<\/td>\r\nAfternoon Break and Group Photo<\/td>\r\n<\/td>\r\nFoyer HG D Nord<\/td>\r\n<\/tr>\r\n
15:45 \u2013 16:15<\/td>\r\nScalable Active Reward Learning for Reinforcement Learning<\/td>\r\nAndreas Krause, ETH Zurich<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
16:15 \u2013 16:45<\/td>\r\nPhotonic Integrated Multi-Wavelength Sources for Data Centers<\/td>\r\nTobias Kippenberg, EPFL<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
16:45 \u2013 17:15<\/td>\r\nUnderstanding and Reducing Data Movement Bottlenecks in Modern Workloads<\/td>\r\nJuan G\u00f3mez Luna, Mohammed Alser, ETH Zurich<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
19:00 Ap\u00e9ro\r\n19:45 Dinner<\/td>\r\nSwiss JRC Workshop Dinner (academia and industry)<\/td>\r\n<\/td>\r\nZum Gr\u00fcnen Glas<\/a>\r\nVia Banquet Entrance\r\nObere Z\u00e4une 16<\/strong>,\r\n8001 Z\u00fcrich<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n
<\/div>\r\n

Day 2 - Friday, February 1<\/h2>\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
Time<\/strong><\/td>\r\nSession<\/strong><\/td>\r\nSpeaker<\/strong><\/td>\r\nLocation<\/strong><\/td>\r\n<\/tr>\r\n
9:00 \u2013 9:30<\/td>\r\nComputer Vision R&D Zurich Introduction<\/td>\r\nMarc Pollefeys, ETH Zurich\/Microsoft<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
9:30 \u2013 10:00<\/td>\r\nHands in Contact for Augmented Reality<\/td>\r\nPascal Fua, EPFL<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
10:00 \u2013 10:30<\/td>\r\nProject Altair: Infrared Vision and AI Decision-Making for Longer Drone Flights<\/td>\r\nNick Lawrance, ETH Zurich<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
10:30 \u2013 11:00<\/td>\r\nMorning Break<\/td>\r\n<\/td>\r\nFoyer HG D Nord<\/td>\r\n<\/tr>\r\n
11:00 \u2013 11:30<\/td>\r\nSkilled assistive-care robots through immersive mixed-reality telemanipulation<\/td>\r\nStelian Coros, ETH Zurich<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
11:30 \u2013 12:00<\/td>\r\nA Modular Approach for Lifelong Mapping from End-User Data<\/td>\r\nCesar Cadena and Juan Nieto, ETH Zurich<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
12:00 \u2013 12:30<\/td>\r\nQIRO - A Quantum Intermediate Representation for Program Optimization<\/td>\r\nTorsten Hoefler, ETH Zurich<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
12:30 \u2013 12:45<\/td>\r\nClosing Remarks<\/td>\r\n<\/td>\r\nAuditorium HG D 3.2<\/td>\r\n<\/tr>\r\n
12:45 \u2013 13:15<\/td>\r\nLunch and Event Close<\/td>\r\n<\/td>\r\nFoyer HG D Nord<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n
<\/div>\r\nBack to Swiss Joint Research Center ><\/a>"},{"id":2,"name":"Project Overviews","content":"[accordion]\r\n\r\n[panel header=\"Photonic Integrated Multi-Wavelength Sources for Data Centers\"]\r\n\r\nEPFL PI: Tobias J. Kippenberg; Microsoft PI: Hitesh Ballani<\/strong>\r\n\r\nThe substantial increase in optical data transmission and cloud computing has fueled research into new technologies that can increase communication capacity. Optical communication through fiber, traditionally used for long-haul communication, is now also employed for short-haul communication, even within data centers. In a similar vein, the increasing capacity crunch in optical fibers, driven in particular by video streaming, can only be met by two degrees of freedom: spatial and wavelength division multiplexing. Spatial multiplexing refers to the use of optical fibers with multiple cores, allowing the same carrier wavelength to be transmitted in each core in parallel. Wavelength division multiplexing (WDM, or dense WDM) refers to the use of multiple optical carriers on the same fiber. A key advantage of WDM is the ability to increase line rates on existing legacy networks without changing the installed SMF28 single-mode fibers. WDM is also expected to be employed in data centers. Yet to date, WDM implementation within data centers faces a key challenge: a CMOS-compatible, power-efficient source of multiple wavelengths. Existing solutions, such as multi-laser chips based on InP (as developed by Infinera), cannot readily be scaled to a larger number of carriers. As a result, the prevalent solution today is to use a bank of multiple individual laser modules, an approach that is not viable for data centers due to space and power constraints. Over the past years, a new technology developed at EPFL has rapidly matured that satisfies these requirements: microresonator frequency combs, or microcombs. 
The potential of this new technology in telecommunications has recently been demonstrated with the use of microcombs for massively coherent parallel communication on the receiver and transmitter side. Yet to date the use of such micro-combs in data-centers has not been addressed.\r\n
    \r\n \t
  1. Kippenberg, T. J., Gaeta, A. L., Lipson, M. & Gorodetsky, M. L. Dissipative Kerr solitons in optical microresonators. Science 361, eaan8083 (2018).<\/li>\r\n \t
  2. Brasch, V. et al. Photonic chip\u2013based optical frequency comb using soliton Cherenkov radiation. Science aad4811 (2015). doi:10.1126\/science.aad4811<\/li>\r\n \t
  3. Marin-Palomo, P. et al. Microresonator-based solitons for massively parallel coherent optical communications. Nature 546, 274\u2013279 (2017).<\/li>\r\n \t
4. Trocha, P. et al. Ultrafast optical ranging using microresonator soliton frequency combs. Science 359, 887\u2013891 (2018).<\/li>\r\n<\/ol>\r\n[\/panel]\r\n\r\n[panel header=\"Understanding and Reducing Data Movement Bottlenecks in Modern Workloads\"]\r\n\r\nETH Zurich PI: Juan G\u00f3mez Luna<\/strong>\r\n\r\nData movement between storage\/memory and compute units forms an increasingly critical bottleneck to overall system performance, scalability, and energy efficiency. Near-Data Processing (including Processing-in-Memory and Near-Storage Processing) is a promising paradigm to significantly alleviate the data movement bottleneck by placing computation closer to where the data resides. The Near-Data Processing paradigm is becoming a reality with a variety of new substrates, such as 3D-stacked DRAM, in-DRAM analog computation, or Open-Channel SSDs. However, what characteristics make an application a good fit for Near-Data Processing remains an open question.\r\n\r\nIn this talk, we will present the first large-scale characterization of over 345 applications to identify program characteristics that determine suitability for processing near data. Understanding these characteristics and the computing substrates also allows us to design new algorithmic expressions for data-intensive workloads that leverage the Near-Data Processing paradigm. For instance, one of the most fundamental computational steps in bioinformatics is the detection of the differences\/similarities between two genomic sequences. We can express the similarity measurement as bulk logic and arithmetic operations, which are a perfect fit for analog computation within DRAM. 
Finally, we will outline some of our next research directions to enable computation in the context of tiering mechanisms for memory and storage (e.g., specialized DRAM, Open-Channel SSDs, Storage-Class Memories, tiered NVM devices).\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Monitoring, Modelling, and Modifying Dietary Habits and Nutrition Based on Large-Scale Digital Traces\"]\r\n\r\nEPFL PIs: Robert West,\u00a0Arnaud Chiolero, Magali Rios-Leyvraz; Microsoft PIs: Ryen White, Eric Horvitz, Emre Kiciman\u00a0<\/strong>\r\n\r\nThe overall goal of this project is to develop methods for monitoring, modeling, and modifying dietary habits and nutrition based on large-scale digital traces. We will leverage data from both EPFL and Microsoft, to shed light on dietary habits from different angles and at different scales: Our team has access to logs of food purchases made on the EPFL campus with the badges carried by all EPFL members. Via the Microsoft collaborators involved, we have access to Web usage logs from IE\/Edge and Bing, and via MSR\u2019s subscription to the Twitter firehose, we gain full access to a major social media platform. Our agenda broadly decomposes into three sets of research questions: (1) Monitoring and modeling: How to mine digital traces for spatiotemporal variation of dietary habits? What nutritional patterns emerge? And how do they relate to, and expand, the current state of research in nutrition? (2) Quantifying and correcting biases: The log data does not directly capture food consumption, but provides indirect proxies; these are likely to be affected by data biases, and correcting for those biases will be an integral part of this project. (3) Modifying dietary habits: Our lab is co-organizing an annual EPFL-wide event called the Act4Change challenge, whose goal is to foster healthy and sustainable habits on the EPFL campus. 
Our close involvement with Act4Change will allow us to validate our methods and findings on the ground via surveys and A\/B tests. Applications of our work will include new methods for conducting population nutrition monitoring, recommending better-personalized eating practices, optimizing food offerings, and minimizing food waste.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"TTL-MSR Taming Tail-Latency for Microsecond-scale RPCs\"]\r\n\r\nEPFL PIs: Marios Kogias, Edouard Bugnion; Microsoft PIs: Irene Zhang, Dan Ports<\/strong>\r\n\r\nThe deployment of a web-scale application within a datacenter can comprise hundreds of software components, deployed on thousands of servers organized in multiple tiers and interconnected by commodity Ethernet switches. These versatile components communicate with each other via Remote Procedure Calls (RPCs), with the cost of an individual RPC service typically measured in microseconds. The end-user performance, availability and overall efficiency of the entire system are largely dependent on the efficient delivery and scheduling of these RPCs. Yet, these RPCs are ubiquitously deployed today on top of general-purpose transport protocols such as TCP.\r\n\r\nWe propose to make RPCs first-class citizens of datacenter deployment. This requires revisiting the overall architecture, application API, and network protocols. Our research direction is based on a novel RPC-oriented protocol, R2P2, which separates control flow from data flow and provides in-network scheduling opportunities to tame tail latency. 
We are also building the tools that are necessary to scientifically evaluate microsecond-scale services.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Hands in Contact for Augmented Reality\"]\r\n\r\nEPFL PIs: Pascal Fua, Mathieu Salzmann, Helge Rhodin; Microsoft PIs: Bugra Tekin, Sudipta Sinha, Federica Bogo, Marc Pollefeys<\/strong>\r\n\r\nIn recent years, there has been tremendous progress in camera-based 6D object pose, hand pose and human 3D pose estimation. These tasks can now be performed in real time, but not yet to the level of accuracy required to properly capture how people interact with each other and with objects, which is a crucial component of modeling the world in which we live. For example, when someone grasps an object, types on a keyboard, or shakes someone else\u2019s hand, the position of their fingers with respect to what they are interacting with must be precisely recovered for the resulting models to be used by AR devices, such as HoloLens or consumer-level video see-through AR devices. This remains a challenge, especially given the fact that hands are often severely occluded in the egocentric views that are the norm in AR.\r\n\r\nWe will, therefore, work on accurately capturing the interaction between hands and the objects they touch and manipulate. At the heart of it will be the precise modeling of contact points and the resulting physical forces between interacting hands and objects. This is essential for two reasons. First, objects in contact exert forces on each other; their pose and motion can only be accurately captured and understood if reaction forces at contact points and areas are modeled jointly. Second, touch and touch-force devices, such as keyboards and touch-screens, are the most common human-computer interfaces, and by sensing contact and contact forces purely visually, everyday objects could be turned into tangible interfaces that react as if they were equipped with touch-sensitive electronics. 
For instance, a soft cushion could become a non-intrusive input device that, unlike virtual mid-air menus, provides natural force feedback.\r\n\r\nIn this talk, I will present some of our preliminary results and discuss our research agenda for the year to come.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Scalable Active Reward Learning for Reinforcement Learning\"]\r\n\r\nETH Zurich PI: Andreas Krause; Microsoft PI: Sebastian Tschiatschek<\/strong>\r\n\r\nReinforcement learning (RL) is a promising paradigm in machine learning and has gained considerable attention in recent years, partly because of its successful application to previously unsolved challenging games like Go and Atari. While these are impressive results, applying reinforcement learning in most other domains, e.g. virtual personal assistants, self-driving cars or robotics, remains challenging. One key reason for this is the difficulty of specifying the reward function a reinforcement learning agent is intended to optimize. For instance, in a virtual personal assistant, the reward function might correspond to the user\u2019s satisfaction with the assistant\u2019s behavior and is difficult to specify as a function of observations (e.g. sensory information) available to the system. In such applications, an alternative to specifying the reward function is to actually query the user for the reward. This, however, is only feasible if the number of queries to the user is limited and the user\u2019s response can be provided in a natural way, so that the system\u2019s queries are not irritating. Similar problems arise in other application domains such as robotics in which, for instance, the true reward can only be obtained by actually deploying the robot, but an approximation to the reward can be computed by a simulator. In this case, it is important to optimize the agent\u2019s behavior while simultaneously minimizing the number of costly deployments. 
This project\u2019s aim is to develop algorithms for these types of problems via scalable active reward learning for reinforcement learning. The project\u2019s focus is on scalability in terms of computational complexity (to scale to large real-world problems) and sample complexity (to minimize the number of costly queries).\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Project Altair: Infrared Vision and AI Decision-Making for Longer Drone Flights\"]\r\n\r\nETH Zurich PIs: Roland Siegwart, Nicholas Lawrance, Jen Jen Chung; Microsoft PIs: Andrey Kolobov, Debadeepta Dey<\/strong>\r\n\r\nA major factor restricting the utility of UAVs is the amount of energy aboard, which limits the duration of their flights. Birds face largely the same problem, but they are adept at using their vision to aid in spotting -- and exploiting -- opportunities for extracting extra energy from the air around them. Project Altair aims at developing infrared (IR) sensing techniques for detecting, mapping and exploiting naturally occurring atmospheric phenomena called thermals for extending the flight endurance of fixed-wing UAVs. In this presentation, we will introduce our vision and goals for this project.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"QIRO - A Quantum Intermediate Representation for Program Optimization\"]\r\n\r\nETH Zurich PIs: Torsten Hoefler, Renato Renner; Microsoft PIs: Matthias Troyer, Martin Roetteler<\/strong>\r\n\r\nQIRO will establish a new intermediate representation for compilation systems on quantum computers. Since quantum computation is still emerging, I will provide an introduction to the general concepts of quantum computation and a brief discussion of its strengths and weaknesses from a high-performance computing perspective. This talk is tailored for a computer science audience with basic (popular-science) or no background in quantum mechanics and will focus on the computational aspects. 
I will also discuss systems aspects of quantum computers and how to map quantum algorithms to their high-level architecture. I will close with the principles of practical implementation of quantum computers and outline the project.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Skilled Assistive-Care Robots through Immersive Mixed-Reality Telemanipulation\"]\r\n\r\nETH Zurich PIs: Stelian Coros, Roi Poranne; Microsoft PIs: Federica Bogo, Bugra Tekin, Marc Pollefeys<\/strong>\r\n\r\nWith this project, we aim to accelerate the development of intelligent robots that can assist those in need with a variety of everyday tasks. People suffering from physical impairments, for example, often need help dressing or brushing their own hair. Skilled robotic assistants would allow these persons to live an independent lifestyle. Even such seemingly simple tasks, however, require complex manipulation of physical objects, advanced motion planning capabilities, as well as close interactions with human subjects. We believe the key to robots being able to undertake such societally important functions is learning from demonstration. The fundamental research question is, therefore, how can we enable human operators to seamlessly teach a robot how to perform complex tasks? The answer, we argue, lies in immersive telemanipulation. More specifically, we are inspired by the vision of James Cameron\u2019s Avatar, where humans are endowed with alternative embodiments. In such a setting, the human\u2019s intent must be seamlessly mapped to the motions of a robot as the human operator becomes completely immersed in the environment the robot operates in. 
To achieve this ambitious vision, many technologies must come together: mixed reality as the medium for robot-human communication, perception and action recognition to detect the intent of both the human operator and the human patient, motion retargeting techniques to map the actions of the human to the robot\u2019s motions, and physics-based models to enable the robot to predict and understand the implications of its actions.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"A Modular Approach for Lifelong Mapping from End-User Data\"]\r\n\r\nETH Zurich PIs: Roland Siegwart, Cesar Cadena, Juan Nieto; Microsoft PIs: Johannes Sch\u00f6nberger, Marc Pollefeys<\/strong>\r\n\r\nAR\/VR allow new and innovative ways of visualizing information and provide a very intuitive interface for interaction. At their core, they rely only on a camera and inertial measurement unit (IMU) setup or a stereo-vision setup to provide the necessary data, either of which is readily available on most commercial mobile devices. This technology has already been adopted in real estate, sports, gaming, retail, tourism, transportation and many other fields. The current technologies in visual-aided motion estimation and mapping on mobile devices have three main requirements to produce highly accurate 3D metric reconstructions: (1) an accurate spatial and temporal calibration of the sensor suite, a procedure typically carried out with the help of external infrastructure, such as calibration markers, and by following a set of predefined movements; (2) well-lit, textured environments and feature-rich, smooth trajectories; and (3) the continuous and reliable operation of all sensors involved.\r\n\r\nThis project aims at relaxing these requirements, to enable continuous and robust lifelong mapping on end-user mobile devices. Thus, the specific objectives of this work are: 1. 
Formalize a modular and adaptable multi-modal sensor fusion framework for online map generation; 2. Improve the robustness of mapping and motion estimation by exploiting high-level semantic features; 3. Develop techniques for automatic detection and execution of sensor calibration in the wild. A modular SLAM (simultaneous localization and mapping) pipeline which is able to exploit all available sensing modalities can overcome the individual limitations of each sensor and increase the overall robustness of the estimation. Such an information-rich map representation allows us to leverage recent advances in semantic scene understanding, providing an abstraction from low-level geometric features - which are fragile to noise, sensing conditions and small changes in the environment - to higher-level semantic features that are robust against these effects. Using this complete map representation, we will explore new ways to detect miscalibrations and sensor failures, so that the SLAM process can be adapted online without the need for explicit user intervention.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Automatic Recipe Generation for ML.NET Pipelines\"]\r\n\r\nETH Zurich PI: Ce Zhang; Microsoft PI: Matteo Interlandi<\/strong>\r\n\r\n[\/panel]\r\n\r\n[panel header=\"Tiered NVM Designs, Software-NVM Interfaces, and Isolation Support\"]\r\n\r\nETH Zurich PI: Onur Mutlu; Microsoft PIs: Michael Cornwell, Kushagra Vaid<\/strong>\r\n\r\n[\/panel]\r\n\r\n[\/accordion]\r\n\r\nBack to Swiss Joint Research Center ><\/a>"}],"msr_startdate":"2019-01-31","msr_enddate":"2019-02-01","msr_event_time":"","msr_location":"Z\u00fcrich, Switzerland","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"January 31, 2019","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":null,"event_excerpt":"The 6th annual workshop of the Swiss Joint Research Center was held on January 31 - February 1, 2019, at ETH in Zurich. 
Project Principal Investigators (\"PIs\") from ETH Zurich and EPFL introduced nine new research collaborations, selected in the recent Call for Proposals, or provide an update on existing research collaborations.","msr_research_lab":[199561],"related-researchers":[{"type":"user_nicename","display_name":"Clare Morgan","user_id":37625,"people_section":"Section name 1","alias":"clmorgan"},{"type":"guest","display_name":"Clare Morgan","user_id":496838,"people_section":"Section name 1","alias":""},{"type":"guest","display_name":"Clare Morgan","user_id":496844,"people_section":"Section name 1","alias":""}],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[611553],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/560223"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":11,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/560223\/revisions"}],"predecessor-version":[{"id":737065,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/560223\/revisions\/737065"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=560223"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=560223"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=560223"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=560223"},{"taxonomy":"msr-video-type","embeddable":true,"h
ref":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=560223"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=560223"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=560223"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=560223"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=560223"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}