{"id":548040,"date":"2018-11-14T12:46:29","date_gmt":"2018-11-14T20:46:29","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=548040"},"modified":"2020-05-06T13:18:19","modified_gmt":"2020-05-06T20:18:19","slug":"physics-ml-workshop","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/physics-ml-workshop\/","title":{"rendered":"Physics \u2229 ML"},"content":{"rendered":"

Venue:
Microsoft, Building 99, Redmond, Washington, USA

Watch on demand:
Day 1: 9:00 AM–10:30 AM
Day 1: 11:00 AM–12:30 PM
Day 1: 2:00 PM–4:05 PM
Day 2: 9:00 AM–9:45 AM
Day 2: 10:15 AM–12:30 PM

About

Update May 2020: Following up on this workshop, we have begun a biweekly virtual seminar series continuing our conversations on the interface between physics and ML. Please visit http://physicsmeetsml.org/ for more information.

The goal of Physics ∩ ML (read "Physics Meets ML") is to bring together researchers from machine learning and physics to learn from each other and push research forward together. In this inaugural edition, we especially highlight recent progress made in string theory with machine learning and in the understanding of deep learning from a physical angle. Nevertheless, we have invited participants with wide-ranging expertise in order to spark new ideas. Plenary sessions from experts in each field and shorter specialized talks will introduce existing research. We will also hold moderated discussions and breakout groups in which participants can identify problems and, we hope, begin new collaborations in both directions: physical insights can motivate advanced algorithms in machine learning, and machine-learning analysis of geometric and topological datasets can yield critical new insights in fundamental physics.

Organizers

Greg Yang, Microsoft Research
Jim Halverson, Northeastern University
Sven Krippendorf, LMU Munich
Fabian Ruehle, CERN and Oxford University
Rak-Kyeong Seong, Samsung SDS
Gary Shiu, University of Wisconsin

Microsoft Advisers

Chris Bishop, Microsoft Research
Jennifer Chayes, Microsoft Research
Michael Freedman, Microsoft Research
Paul Smolensky, Microsoft Research

Agenda

Day 1 | Thursday, April 25

| Time (PDT) | Session | Speaker |
| --- | --- | --- |
|  | Session 1: Plenary talks |  |
| 8:00 AM–9:00 AM | Breakfast |  |
| 9:00 AM–9:45 AM | Gauge equivariant convolutional networks | Taco Cohen |
| 9:45 AM–10:30 AM | Understanding overparameterized neural networks | Jascha Sohl-Dickstein |
| 10:30 AM–11:00 AM | Break |  |
| 11:00 AM–11:45 AM | Mathematical landscapes and string theory | Mike Douglas |
| 11:45 AM–12:30 PM | Holography, matter and deep learning | Koji Hashimoto |
| 12:30 PM–2:00 PM | Lunch |  |
| 2:00 PM–4:05 PM | Session 2: Applying physical insights to ML |  |
| 2:00 PM–2:45 PM | Plenary: A picture of the energy landscape of deep neural networks | Pratik Chaudhari |
| 2:45 PM–4:05 PM | Short talks |  |
|  | Neural tangent kernel and the dynamics of large neural nets | Clement Hongler |
|  | On the global convergence of gradient descent for over-parameterized models using optimal transport | Lénaïc Chizat |
|  | Pathological spectrum of the Fisher information matrix in deep neural networks | Ryo Karakida |
|  | Q&A |  |
|  | Fluctuation-dissipation relation for stochastic gradient descent | Sho Yaida |
|  | From optimization algorithms to continuous dynamical systems and back | Rene Vidal |
|  | The effect of network width on stochastic gradient descent and generalization | Daniel Park |
|  | Q&A |  |
|  | Short certificates for symmetric graph density inequalities | Rekha Thomas |
|  | Geometric representation learning in hyperbolic space | Maximilian Nickel |
|  | The fundamental equations of MNIST | Cedric Beny |
|  | Q&A |  |
|  | Quantum states and Lyapunov functions reshape universal grammar | Paul Smolensky |
|  | Multi-scale deep generative networks for Bayesian inverse problems | Pengchuan Zhang |
|  | Variational quantum classifiers in the context of quantum machine learning | Alex Bocharov |
|  | Q&A |  |
| 4:05 PM–4:30 PM | Break |  |
| 4:30 PM–5:30 PM | The intersect ∩ |  |

Day 2 | Friday, April 26

| Time (PDT) | Session | Speaker |
| --- | --- | --- |
|  | Session 3: Applying ML to physics |  |
| 8:00 AM–9:00 AM | Breakfast |  |
| 9:00 AM–9:45 AM | Plenary: Combinatorial Cosmology | Liam McAllister |
| 9:45 AM–10:15 AM | Break |  |
| 10:15 AM–11:35 AM | Short talks |  |
|  | Bypassing expensive steps in computational geometry | Yang-Hui He |
|  | Learning string theory at Large N | Cody Long |
|  | Training machines to extrapolate reliably over astronomical scales | Brent Nelson |
|  | Q&A |  |
|  | Breaking the tunnel vision with ML | Sergei Gukov |
|  | Can machine learning give us new theoretical insights in physics and math? | Washington Taylor |
|  | Brief overview of machine learning holography | Yi-Zhuang You |
|  | Q&A |  |
|  | Applications of persistent homology to physics | Alex Cole |
|  | Seeking a connection between the string landscape and particle physics | Patrick Vaudrevange |
|  | PB s^-1 to science: novel approaches on real-time processing from LHCb at CERN | Themis Bowcock |
|  | Q&A |  |
|  | From non-parametric to parametric: manifold coordinates with physical meaning | Marina Meila |
|  | Machine learning in quantum many-body physics: A blitz | Yichen Huang |
|  | Knot Machine Learning | Vishnu Jejjala |
| 11:35 AM–12:30 PM | Panel discussion with Michael Freedman, Clement Hongler, Gary Shiu, Paul Smolensky, Washington Taylor |  |
| 12:30 PM–1:30 PM | Lunch |  |
|  | Session 4: Breakout groups |  |
| 1:30 PM–3:00 PM | Physics breakout groups |  |
|  | Symmetries and their realisations in string theory | Sergei Gukov, Yang-Hui He |
|  | String landscape | Michael Douglas, Liam McAllister |
|  | Connections of holography and ML | Koji Hashimoto, Yi-Zhuang You |
| 3:00 PM–4:30 PM | ML breakout groups |  |
|  | Geometric representations in deep learning | Maximilian Nickel |
|  | Understanding deep learning | Yasaman Bahri, Boris Hanin, Jaehoon Lee |
|  | Physics and optimization | Rene Vidal |
<\/div>"},{"id":2,"name":"Abstracts","content":"[accordion]\r\n[panel header=\"Gauge equivariant convolutional networks\"]\r\n\r\nSpeaker:<\/strong> Taco Cohen\r\n\r\nThe principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. Here we show how this principle can be extended beyond global symmetries to local gauge transformations. This enables the development of a very general class of convolutional neural networks on manifolds that depend only on the intrinsic geometry. This class includes and generalizes existing methods from equivariant- and geometric deep learning, and thus unifies these areas in a common gauge-theoretic framework.\r\n\r\nWe implement gauge equivariant CNNs for signals defined on the surface of the icosahedron, which provides a reasonable approximation of the sphere. By choosing to work with this very regular manifold, we are able to implement the gauge equivariant convolution using a single conv2d call, making it a highly scalable and practical alternative to Spherical CNNs. Using this method, we demonstrate substantial improvements over previous methods on the task of segmenting omnidirectional images and global climate patterns.\r\n[\/panel]\r\n[panel header=\"Understanding overparameterized neural networks\"]\r\n\r\nSpeaker:<\/strong> Jascha Sohl-Dickstein\r\n\r\nAs neural networks become highly overparameterized, their accuracy improves, and their behavior becomes easier to analyze theoretically. I will give an introduction to a rapidly growing body of work which examines the learning dynamics and prior over functions induced by infinitely wide, randomly initialized, neural networks. Core results that I will discuss include: that the distribution over functions computed by a wide neural network often corresponds to a Gaussian process with a particular compositional kernel, both before and after training; that the predictions of wide neural networks are linear in their parameters throughout training; and that this perspective enables analytic predictions for how trainability depends on hyperparameters and architecture. These results provide for surprising capabilities\u2014for instance, the evaluation of test set predictions which would come from an infinitely wide trained neural network without ever instantiating a neural network, or the rapid training of 10,000+ layer convolutional networks. I will argue that this growing understanding of neural networks in the limit of infinite width is foundational for future theoretical and practical understanding of deep learning.\r\n\r\n[\/panel]\r\n[panel header=\"Mathematical landscapes and string theory\"]\r\n\r\nSpeaker:<\/strong> Mike Douglas\r\n\r\nA fundamental theory of physics must explain how to derive the laws of physics ab initio, from purely formal constructions. In string\/M theory there are general laws, such as general relativity and Yang-Mills theory, which are fixed by the theory. There are also specific details, such as the spectrum of elementary particles and the strengths of interactions between them. These are not determined uniquely, but are derived from the geometry of extra dimensions of space. This geometry must take one of a few special forms called special holonomy manifolds, for example a Calabi-Yau (CY) manifold, or a G2 manifold. 

Understanding overparameterized neural networks

Speaker: Jascha Sohl-Dickstein

As neural networks become highly overparameterized, their accuracy improves, and their behavior becomes easier to analyze theoretically. I will give an introduction to a rapidly growing body of work which examines the learning dynamics and prior over functions induced by infinitely wide, randomly initialized neural networks. Core results that I will discuss include: that the distribution over functions computed by a wide neural network often corresponds to a Gaussian process with a particular compositional kernel, both before and after training; that the predictions of wide neural networks are linear in their parameters throughout training; and that this perspective enables analytic predictions for how trainability depends on hyperparameters and architecture. These results enable surprising capabilities, for instance evaluating the test set predictions of an infinitely wide trained neural network without ever instantiating a neural network, or rapidly training convolutional networks with 10,000+ layers. I will argue that this growing understanding of neural networks in the limit of infinite width is foundational for future theoretical and practical understanding of deep learning.
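
The claim that the predictions of wide networks are linear in their parameters throughout training can be checked directly in a few lines. The numpy sketch below, a toy illustration rather than anything from the talk, trains a one-hidden-layer ReLU network in the NTK parameterization with full-batch gradient descent alongside its first-order Taylor expansion around the initialization, and compares their test predictions; the two stay close at the width used here and agree increasingly well as the width grows. All sizes and hyperparameters are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096                       # hidden width; agreement sharpens as n grows
d = 3                          # input dimension (toy)
X = rng.normal(size=(8, d))    # tiny training set
y = rng.normal(size=8)
x_test = rng.normal(size=d)

# NTK parameterization: f(x) = a . relu(W x) / sqrt(n)
W0 = rng.normal(size=(n, d))
a0 = rng.normal(size=n)

def f(W, a, x):
    return a @ np.maximum(W @ x, 0.0) / np.sqrt(n)

def grads(W, a, x):
    """Analytic gradients of f(x) with respect to (W, a)."""
    h = W @ x
    da = np.maximum(h, 0.0) / np.sqrt(n)
    dW = np.outer(a * (h > 0.0), x) / np.sqrt(n)
    return dW, da

# Frozen quantities defining the linearized model.
f0_train = np.array([f(W0, a0, xi) for xi in X])
f0_test = f(W0, a0, x_test)
J_train = [grads(W0, a0, xi) for xi in X]
J_test = grads(W0, a0, x_test)

def f_lin(Wp, ap, JW, Ja, f0):
    """First-order Taylor expansion of f around (W0, a0)."""
    return f0 + np.sum(JW * (Wp - W0)) + Ja @ (ap - a0)

lr = 0.02
W, a = W0.copy(), a0.copy()      # exact network parameters
Wl, al = W0.copy(), a0.copy()    # linearized-model parameters
for step in range(300):
    dW, da = np.zeros_like(W), np.zeros_like(a)
    dWl, dal = np.zeros_like(W), np.zeros_like(a)
    for xi, yi, (JW, Ja), f0i in zip(X, y, J_train, f0_train):
        r = f(W, a, xi) - yi                   # exact-network residual
        gW, ga = grads(W, a, xi)               # gradients at current params
        dW += r * gW; da += r * ga
        rl = f_lin(Wl, al, JW, Ja, f0i) - yi   # linear-model residual
        dWl += rl * JW; dal += rl * Ja         # Jacobian frozen at init
    W -= lr * dW; a -= lr * da
    Wl -= lr * dWl; al -= lr * dal

print("exact network test prediction:     ", f(W, a, x_test))
print("linearized network test prediction:", f_lin(Wl, al, *J_test, f0_test))
```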

Mathematical landscapes and string theory

Speaker: Mike Douglas

A fundamental theory of physics must explain how to derive the laws of physics ab initio, from purely formal constructions. In string/M theory there are general laws, such as general relativity and Yang-Mills theory, which are fixed by the theory. There are also specific details, such as the spectrum of elementary particles and the strengths of interactions between them. These are not determined uniquely, but are derived from the geometry of extra dimensions of space. This geometry must take one of a few special forms called special holonomy manifolds, for example a Calabi-Yau (CY) manifold or a G2 manifold. These manifolds are of significant mathematical interest independent of their relevance to string theory, and mathematicians and physicists have been working together to classify them and work out the relations between them.

Such data (a set of objects and relations between them, defined by simple axioms) can be called a mathematical landscape. There are many important landscapes besides special holonomy manifolds: finite groups, bundles on manifolds, other classes of manifolds, and so on. Many landscapes, such as that of six-dimensional CY manifolds, turn out to be combinatorially large. It is a further challenge to extract simple and useful pictures from the vast wealth of their data. Machine learning will be an essential tool to meet this challenge.

As an example, in the mid-90s the physicists Kreuzer and Skarke surveyed the six-dimensional toric hypersurface CYs to study mirror symmetry. The plot demonstrating mirror symmetry revealed a 'shield' pattern which nobody had anticipated or even thought to ask about, whose explanation was attempted in several later works, and which may be the key to understanding the finiteness of this set of spaces. Future studies of mathematical landscapes will no doubt reveal new unexpected patterns, especially if we have ML to help look for them.

We will give a high-level survey of work in this direction, trying to minimize mathematical and physical technicalities and to raise new questions.

Holography, matter and deep learning

Speaker: Koji Hashimoto

Revealing the quantum nature of matter is indispensable for finding new features of known exotic matter, and even new materials, and drives progress in theoretical physics. One of the most important problems is to explicitly construct ground-state wave functions for given Hamiltonians. On the other hand, the holographic principle, discovered in the context of string theory, claims an equivalence between quantum matter and classical higher-dimensional gravity. It has provided a completely new viewpoint on the understanding of quantum matter, such as quarks in QCD and strongly correlated electrons.

Both problems require solving inverse problems, for which machine learning should be intrinsically effective, and they share a further important similarity: matter wave functions are obtained by tensor network optimization, and the holographic principle defines quantum gravity in which networks are regarded as discretized spacetime. Network optimization will therefore play a central role in both sciences.

In this talk, I give a brief review of several important topics related to these networks, and provide a concrete example of applying a deep neural network to the holographic principle. The higher-dimensional curved spacetime is discretized into a deep neural network, and the input data (quark correlators) optimize the network. The neural network weights are regarded as the emergent spacetime metric, and other physical observables predicted from the trained network geometry compare well with supercomputer simulations of QCD.
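
Hashimoto's abstract above maps a discretized bulk spacetime onto the layers of a network whose weights encode the metric. Below is a heavily simplified numpy toy in that spirit, not the setup from the talk: each layer propagates boundary data (phi, pi) one radial step using a trainable coefficient h[k] standing in for the metric function, a made-up "true" profile generates the training data, and finite-difference gradient descent fits h to reproduce the boundary response. The trained profile is then read off as an emergent geometry consistent with the data; this toy makes no claim that the fit is unique. The step rule, h_true, and all hyperparameters are hypothetical.

```python
import numpy as np

L_steps, deta, m2 = 10, 0.1, 1.0   # radial layers, step size, toy mass term

def propagate(h, phi0, pi0):
    """Propagate boundary data (phi, pi) through L_steps radial slices.
    h[k] is the trainable 'metric' coefficient of slice k."""
    phi, pi = phi0, pi0
    for k in range(L_steps):
        phi, pi = phi + deta * pi, pi + deta * (h[k] * pi + m2 * phi)
    return phi

rng = np.random.default_rng(2)
h_true = 1.0 / (1.0 + deta * np.arange(L_steps))  # made-up target profile
inputs = rng.normal(size=(32, 2))                 # boundary (phi, pi) samples
targets = np.array([propagate(h_true, p, q) for p, q in inputs])

def loss(h):
    preds = np.array([propagate(h, p, q) for p, q in inputs])
    return np.mean((preds - targets) ** 2)

h = np.ones(L_steps)               # initial guess for the metric profile
eps, lr = 1e-5, 0.1
print("initial loss:", loss(h))
for step in range(500):
    base = loss(h)
    # Finite-difference gradient, cheap at this toy scale.
    g = np.array([(loss(h + eps * np.eye(L_steps)[k]) - base) / eps
                  for k in range(L_steps)])
    h -= lr * g
print("final loss:  ", loss(h))
print("trained 'metric' profile:", np.round(h, 3))
```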

A picture of the energy landscape of deep neural networks

Speaker: Pratik Chaudhari

Deep networks are mysterious. These over-parametrized machine learning models, trained with rudimentary optimization algorithms on non-convex landscapes in millions of dimensions, have defied attempts to put a sound theoretical footing beneath their impressive performance.

This talk will shed light upon some of these mysteries. I will employ diverse ideas, from thermodynamics and optimal transportation to partial differential equations, control theory and Bayesian inference, and paint a picture of the training process of deep networks. Along the way, I will develop state-of-the-art algorithms for non-convex optimization.

The goal of machine perception is not just to classify objects in images but to enable intelligent agents that can seamlessly interact with our physical world. I will conclude with a vision of how advances in machine learning and robotics may come together to help build such an Embodied Intelligence.

Combinatorial Cosmology

Speaker: Liam McAllister

A foundational problem in string theory is to derive statistical predictions for observable phenomena in cosmology and in particle physics. I will give an accessible overview of this subject, assuming no prior knowledge of string theory.

I will explain that the set of possible natural laws is contained in the set of solutions of string theory that have no fields with zero mass and zero spin. Each solution is determined by a finite number of integers that specify the topology of the six-manifold on which the six extra dimensions of string theory are compactified. The total number of solutions is plausibly finite, albeit large. This set of solutions, the 'landscape of string theory', is the main object of study when relating string theory to observations.

I will discuss how the work of deriving predictions from the string landscape can be formulated as a computational problem. The integers specifying the six-manifold topology are the fundamental parameters in nature, and the task is to find out which values they can take, and what observables result for each choice. I will illustrate this problem in the case of six-manifolds that are defined by triangulations of certain four-dimensional lattice polytopes. Such manifolds are finite in number, though the number may exceed 10^900, and the computational tasks are almost entirely combinatorial. In this realm, one can aim to use machine learning to find patterns, or to pick out solutions with desirable properties. I will suggest some problems of this sort.
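
McAllister's suggestion of using machine learning to pick out solutions with desirable properties from combinatorially large sets of integer data can be cartooned in a few lines. The sketch below (plain numpy, entirely synthetic data) trains a logistic-regression classifier on random integer vectors standing in for the topological data of a compactification, with a hidden linear rule playing the role of the "desirable property". In a real application the features and the label would come from actual landscape data, and the model would typically be far richer.

```python
import numpy as np

rng = np.random.default_rng(3)
N, D = 2000, 12
# Random small-integer vectors standing in for topological data.
feats = rng.integers(-5, 6, size=(N, D)).astype(float)
# Made-up "desirable property": a hidden sparse integer rule plus noise.
w_hidden = np.zeros(D)
w_hidden[[1, 4, 7]] = [2.0, -1.0, 3.0]
labels = (feats @ w_hidden + rng.normal(scale=2.0, size=N) > 0).astype(float)

# Logistic regression trained by full-batch gradient descent.
w, b = np.zeros(D), 0.0
lr = 0.1
for step in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= lr * feats.T @ (p - labels) / N
    b -= lr * np.mean(p - labels)

p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
print("training accuracy on the synthetic property:",
      np.mean((p > 0.5) == labels))
```

The point of the cartoon is the workflow, not the model: enumerate combinatorial data, define a property of interest, and let a learned classifier prioritize which candidates deserve expensive exact computation.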