{"id":803776,"date":"2022-01-06T08:45:24","date_gmt":"2022-01-06T16:45:24","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=803776"},"modified":"2024-01-17T14:49:44","modified_gmt":"2024-01-17T22:49:44","slug":"msriisc","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/msriisc\/","title":{"rendered":"Microsoft Research-IISc AI Seminar Series"},"content":{"rendered":"\n\n\n\n\n

The Microsoft Research\u2013Indian Institute of Science AI Seminar Series<\/strong> aims to organize widely accessible talks on cutting-edge AI research. The seminar series will feature speakers who are leaders in their areas. It will cover a wide variety of topics at the research frontier of AI: from applications and societal impact to theoretical foundations, from deep learning to cognitive science, from computer vision to NLP. We welcome everyone, from students to academic and industrial researchers. We will hold an extended Q&A session so that participants can interact with the speaker via moderators. We expect to hold talks once every month or two. By regularly bringing the community together and through open dialogue, we hope that the seminar series will further inspire creativity and foster collaboration in the Indian AI research ecosystem and beyond.<\/span><\/p>\n\n\n\n

Organizing Committee: Chiranjib Bhattacharyya (opens in new tab)<\/span><\/a>, Navin Goyal (opens in new tab)<\/span><\/a>, Ravi Kannan (opens in new tab)<\/span><\/a>, Sriram Rajamani (opens in new tab)<\/span><\/a>, Manik Varma (opens in new tab)<\/span><\/a><\/p>\n\n\n\n

One-time registration<\/strong> provides access to all seminars in the series, making it easy to attend the sessions that work with your schedule. You\u2019ll receive reminders with event details and a link to join in advance of each event.<\/p>\n\n\n\n

\n
Register<\/a><\/div>\n<\/div>\n\n\n\n
<\/div>\n\n\n\n\n\n

The event will be held over Microsoft Teams Live (opens in new tab)<\/span><\/a> as well as in person for select talks. A lightweight registration is required. For logistical reasons, attendees will only be able to ask questions in Teams chat. Moderators will select questions to pose to the speaker and, if yours is chosen, will mention your name and institution (if you include them).<\/p>\n\n\n\n

One-time registration<\/strong> provides access to all seminars in the series, making it easy to attend the sessions that work with your schedule. You\u2019ll receive reminders with event details and a link to join in advance of each event.<\/p>\n\n\n\n

<\/div>\n\n\n\n

The Familiarity Hypothesis: Explaining the Behavior of Deep Open Set Methods<\/h3>\n\n\n\n
\"headshot<\/figure>
\n

Tom Dietterich<\/h4>\n\n\n\n

CoRIS Institute, Oregon State University<\/p>\n\n\n\n

Date:<\/strong> February 2, 2023\u200e | 2:00 PM IST (UTC+5:30)
Virtual:<\/strong> Microsoft Teams<\/p>\n\n\n\n

<\/div>\n<\/div><\/div>\n\n\n\n\n\n

In many applications, computer vision object recognition systems encounter objects belonging to categories unseen during training. Hence, the set of possible categories is an open set. Detecting such \u201cnovel category\u201d objects is usually formulated as an anomaly detection problem. Anomaly detection algorithms for feature-vector data identify anomalies as outliers, but outlier detection has not worked well in deep learning.  Instead, methods based on the computed logits of object recognition networks give state-of-the-art performance. This talk proposes the Familiarity Hypothesis that these methods succeed because they are detecting the absence of familiar learned features.  This talk will review evidence from the literature and from our own experiments that support this hypothesis. It then experimentally tests a set of predicted consequences of this hypothesis that provide additional support. The talk will conclude with a discussion of whether familiarity detection is an inevitable consequence of representation learning. The results reveal a second fundamental assumption of statistical learning beyond the usual stationary\/iid assumption\u2014namely, that the features available to the classifier can capture variation exhibited by data points belonging to novel categories, and, more generally, by data points that lie outside the training distribution.<\/p>\n\n\n\n\n\n

Tom Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; Ph.D. Stanford University 1984) is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. He is one of the pioneers of the field of machine learning and has authored more than 225 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability. He is the primary moderator of the cs.LG category on arXiv and was awarded the AAAI Distinguished Service Award in 2022 and the ACML Distinguished Contribution Award in 2020.<\/p>\n\n\n\n\n\n


\n\n\n\n

On Learning-Aware Mechanism Design<\/h3>\n\n\n\n
\"headshot<\/figure>
\n

Michael I. Jordan<\/h4>\n\n\n\n

University of California, Berkeley<\/p>\n\n\n\n

Date:<\/strong> January 5, 2023\u200e | 10:00\u201311:30 AM IST (UTC+5:30)
Virtual:<\/strong> Microsoft Teams<\/p>\n\n\n\n

\n
Watch now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

Statistical decisions are often given meaning in the context of other decisions, particularly when there are scarce resources to be shared. Managing such sharing is one of the classical goals of microeconomics, and it is given new relevance in the modern setting of large, human-focused datasets, and in data-analytic contexts such as classifiers and recommendation systems. I’ll discuss several recent projects that aim to explore the interface between machine learning and microeconomics, including leader\/follower dynamics in strategic classification, a Lyapunov theory for matching markets with transfers, and the use of contract theory as a way to design mechanisms that perform statistical inference.<\/p>\n\n\n\n\n\n

Michael I. Jordan (opens in new tab)<\/span><\/a> is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. He received his Master's in Mathematics from Arizona State University, and earned his PhD in Cognitive Science in 1985 from the University of California, San Diego. He was a professor at MIT from 1988 to 1998. His research interests bridge the computational, statistical, cognitive, biological and social sciences. Prof. Jordan is a member of the National Academy of Sciences, a member of the National Academy of Engineering, a member of the American Academy of Arts and Sciences, and a Foreign Member of the Royal Society. He is a Fellow of the American Association for the Advancement of Science. He was a Plenary Lecturer at the International Congress of Mathematicians in 2018. He received the Ulf Grenander Prize from the American Mathematical Society in 2021, the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015, and the ACM\/AAAI Allen Newell Award in 2009. He gave the Inaugural IMS Grace Wahba Lecture in 2022, the IMS Neyman Lecture in 2011, and an IMS Medallion Lecture in 2004. He is a Fellow of the AAAI, ACM, ASA, CSS, IEEE, IMS, ISBA and SIAM.<\/p>\n\n\n\n

In 2016, Prof. Jordan was named the “most influential computer scientist” worldwide in an article in Science, based on rankings from the Semantic Scholar search engine.<\/p>\n\n\n\n\n\n


\n\n\n\n

Designing AI Systems with Steerable Long-Term Dynamics<\/h3>\n\n\n\n
\"headshot<\/figure>
\n

Prof. Thorsten Joachims<\/h4>\n\n\n\n

Cornell University<\/p>\n\n\n\n

Date:<\/strong> November 2, 2022\u200e | 3:00 PM IST (UTC+5:30)
Virtual:<\/strong> Microsoft Teams<\/p>\n\n\n\n

\n
Watch now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

The feedback that users provide through their choices (e.g., clicks, purchases) is one of the most common types of data readily available for training autonomous systems, and it is widely used in online platforms. However, naively training systems based on choice data may improve short-term engagement but not the long-term sustainability of the platform. In this talk, I will discuss some of the pitfalls of engagement-maximization, and explore methods that allow us to supplement engagement with additional criteria that are not limited to individual action-response metrics. The goal is to give platform operators a new set of macroscopic interventions for steering the dynamics of the platform, providing a new level of abstraction that goes beyond engagement with individual recommendations or rankings.<\/p>\n\n\n\n\n\n

Thorsten Joachims is a Professor in the Department of Computer Science and in the Department of Information Science at Cornell University, and he is an Amazon Scholar. His research interests center on a synthesis of theory and system building in machine learning, with applications in information access, language technology, and recommendation. His past research focused on counterfactual and causal inference, learning to rank, structured output prediction, support vector machines, text classification, learning with preferences, and learning from implicit feedback. He is an ACM Fellow, AAAI Fellow, KDD Innovations Award recipient, and member of the ACM SIGIR Academy.<\/p>\n\n\n\n\n\n

Prof. Soumen Chakrabarti<\/a>, IIT Bombay & Manish Gupta<\/a>, Microsoft Research<\/p>\n\n\n\n\n\n


\n\n\n\n

A journey from ML and NNs to NLP and Beyond: Just more of the same isn’t enough?<\/h3>\n\n\n\n
\"headshot<\/figure>
\n

Prof. Jason Weston<\/h4>\n\n\n\n

Meta AI & NYU<\/p>\n\n\n\n

Date:<\/strong> October 17, 2022\u200e | 6:30 PM IST (UTC+5:30)
Virtual:<\/strong> Microsoft Teams<\/p>\n<\/div><\/div>\n\n\n\n\n\n

The first half of the talk will look back on the last two decades of machine learning, neural network and natural language processing research for dialogue, through my personal lens, to discuss advances that have been made and the circumstances in which they happened, to try to give clues about what we should be working on for the future. The second half will dive deeper into some current first steps in those future directions, in particular trying to fix the problems of neural generative models to enable deeper reasoning with short- and long-term coherence, and to ground such dialogue agents in an environment where they can act and learn. We will argue that just scaling up current techniques, while a worthy investigation, will not be enough to solve these problems.<\/p>\n\n\n\n\n\n

Jason Weston is a research scientist at Meta AI, USA, and a Visiting Research Professor at NYU. He earned his PhD in machine learning at Royal Holloway, University of London and at AT&T Research in Red Bank, NJ (advisors: Alex Gammerman, Volodya Vovk and Vladimir Vapnik) in 2000. From 2000 to 2001, he was a researcher at Biowulf Technologies. From 2002 to 2003, he was a research scientist at the Max Planck Institute for Biological Cybernetics, Tuebingen, Germany. From 2003 to 2009, he was a research staff member at NEC Labs America, Princeton. From 2009 to 2014, he was a research scientist at Google, NY. His interests lie in statistical machine learning, with a focus on reasoning, memory, perception, interaction and communication. Jason has published over 100 papers, including best paper awards at ICML and ECML, and a Test of Time Award for his work “A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning”, ICML 2008 (with Ronan Collobert). He was part of the YouTube team that won a National Academy of Television Arts & Sciences Emmy Award for Technology and Engineering for Personalized Recommendation Engines for Video Discovery. He was listed as the 16th most influential machine learning scholar by AMiner and as one of the top 50 authors in Computer Science by Science.<\/p>\n\n\n\n\n\n

Prathosh A P<\/a>, IISc Bangalore & Sunayana Sitaram<\/a>, Microsoft Research<\/p>\n\n\n\n\n\n


\n\n\n\n

Deep Learning for Video Understanding<\/h3>\n\n\n\n
\"portrait<\/figure>
\n

Prof. Andrew Zisserman<\/h4>\n\n\n\n

University of Oxford<\/p>\n\n\n\n

Date:<\/strong> August 17, 2022 | 2:30 PM IST \u200e(UTC+5:30)\u200e
In-person:<\/strong> Faculty Hall, Indian Institute of Science, Bangalore
Virtual:<\/strong> Microsoft Teams<\/p>\n\n\n\n

\n
See slides<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

One of the long-term aims of computer vision is video understanding: to be able to recognize the visual content of the video and describe what is happening. Deep learning has led to tremendous progress on many of the required visual tasks such as object recognition, human face recognition and human pose prediction.<\/p>\n\n\n\n

This talk will be in three parts, and will cover progress on the tasks of human action recognition and object discovery in video. Deep networks for visual representations are very data-hungry in training, and one of the key challenges is obtaining sufficient data for supervised learning.<\/p>\n\n\n\n

In the first part of the talk, we describe self-supervision for deep learning where the supervision uses prediction from within the video stream, or multi-modal prediction from the audio and visual streams, in order to learn the visual representation. In the second part, we describe how self-supervision with particular network models can be used to discover objects, such as animals moving in the video, and their effects. In the final part of the talk we move on to weak supervision from text using discriminative or generative training. Once the networks are trained, a language model can be used to search for videos given a text description, or generate a text description given a video.<\/p>\n\n\n\n\n\n

Andrew Zisserman is a Royal Society Research Professor at the Department of Engineering Science, University of Oxford, where he heads the Visual Geometry Group (VGG). His research has investigated and made contributions to many areas of computer vision, including multiple view geometry, visual recognition, and large-scale retrieval in images and video. He has authored over 500 peer-reviewed papers in computer vision, and co-edited and written several books in this area. His papers have won best paper awards at international conferences, and multiple ‘test of time’ awards. His recent research focusses on audio and visual recognition. He is a fellow of the Royal Society (FRS) and the Indian National Academy of Engineering (INAE).<\/p>\n\n\n\n\n\n

Soma Biswas<\/a>, Indian Institute of Science & Akshay Nambi<\/a>, Microsoft Research<\/p>\n\n\n\n\n\n


\n\n\n\n

GFlowNets and System 2 Deep Learning<\/h3>\n\n\n\n
\"Portrait<\/figure>
\n

Prof. Yoshua Bengio<\/h4>\n\n\n\n

Universit\u00e9 de Montr\u00e9al and Mila \u2013 Quebec AI Institute<\/p>\n\n\n\n

Date:<\/strong> June 14, 2022 | 6:30\u20138:00 PM IST \u200e(UTC+5:30)\u200e<\/p>\n\n\n\n

\n
Watch now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

GFlowNets are instances of a larger family of approaches at the intersection of generative modeling and RL that can be used to train probabilistic inference functions in a way that is related to variational inference and opens a lot of new doors, especially for brain-inspired AI. Instead of maximizing some objective (like expected return), these approaches seek to sample latent random variables from a distribution defined by an energy function, for example a posterior distribution (given past data, current observations, etc.). Recent work showed how GFlowNets can be used to sample a diversity of solutions in an active learning context. We will also discuss ongoing work to explore how to train such inference machinery for learning energy-based models, to approximately marginalize over infinitely many variables, perform efficient posterior Bayesian inference and incorporate inductive biases associated with conscious processing and reasoning in humans. These inductive biases include modular knowledge representation favoring systematic generalization, the causal nature of human thoughts, concepts, explanations and plans, and the sparsity of dependencies captured by reusable relational or causal knowledge. Many open questions remain to develop these ideas, which will require many collaborating minds!<\/p>\n\n\n\n\n\n

Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is most known for his pioneering work in deep learning, earning him the 2018 A.M. Turing Award, \u201cthe Nobel Prize of Computing,\u201d with Geoffrey Hinton and Yann LeCun. He is a Full Professor at Universit\u00e9 de Montr\u00e9al, and the Founder and Scientific Director of Mila \u2013 Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO.<\/p>\n\n\n\n

In 2019, he was awarded the prestigious Killam Prize and in 2021, became the second most cited computer scientist in the world. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, a Knight of the Legion of Honor of France, and an Officer of the Order of Canada. Concerned about the social impact of AI and the objective that AI benefits all, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.<\/p>\n\n\n\n\n\n

Amit Sharma (opens in new tab)<\/span><\/a>, Rajiv Soundararajan (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n


\n\n\n\n

Where on Earth is AI Headed?<\/h3>\n\n\n\n
\"Headshot<\/figure>
\n

Prof. Tom M. Mitchell<\/h4>\n\n\n\n

Carnegie Mellon University<\/p>\n\n\n\n

Date:<\/strong> May 10, 2022 | 4:00 PM IST \u200e(UTC+5:30)\u200e<\/p>\n\n\n\n

\n
Watch now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

Ten years ago, computers could not understand spoken words well, but today we routinely speak to our mobile phones, and computer vision algorithms have now reached human- or super-human-level performance on many types of images. Self-driving cars are already appearing on our streets, and a deep network called GPT-3 writes surprisingly human-like paragraphs of text. What is coming next? This talk will look ahead to suggest what technical progress we might work toward, and even expect, over the coming years, and to consider its possible impacts on our lifestyles, products, business models, society and international politics.<\/p>\n\n\n\n\n\n

Tom M. Mitchell is the Founders University Professor in the School of Computer Science at Carnegie Mellon University, where he founded the world’s first academic Machine Learning Department. Mitchell\u2019s research explores machine learning theory, algorithms and applications, as well as the impact of AI on society. He has testified to the U.S. Congress several times on AI impacts on society, and he co-chaired the 2017 U.S. National Academy study on \u201cInformation Technology, Automation, and the U.S. Workforce.\u201d Mitchell advises a variety of companies internationally, both young and established, on their AI product and business strategies, and his research has been featured in the popular press, from the New York Times to CCTV (China’s national television network) to CBS’s 60 Minutes. He is an elected member of the U.S. National Academy of Engineering, the American Academy of Arts and Sciences, and a Fellow and Past President of the Association for the Advancement of Artificial Intelligence (AAAI).<\/p>\n\n\n\n\n\n

Amit Sharma<\/a>, Partha Pratim Talukdar<\/a>, Bill Thies<\/a><\/p>\n\n\n\n\n\n


\n\n\n\n

Learning to Walk<\/h3>\n\n\n\n
\"Jitendra<\/figure>
\n

Prof. Jitendra Malik<\/h4>\n\n\n\n

University of California, Berkeley<\/p>\n\n\n\n

Date:<\/strong> March 04, 2022 | 09:00 AM IST (UTC+5:30)\u200e<\/p>\n\n\n\n

\n
Watch now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

Legged locomotion is commonly studied and programmed as a discrete set of structured gait patterns, like walk, trot, and gallop. However, studies of children learning to walk (Adolph et al.) show that real-world locomotion is often quite unstructured and more like \u201cbouts of intermittent steps\u201d. We have developed a general approach to walking which is built on learning on varied terrains in simulation and then fast online adaptation (fractions of a second) in the real world. This is made possible by our Rapid Motor Adaptation (RMA) algorithm. RMA consists of two components: a base policy and an adaptation module, both of which can be trained in simulation. We thus learn walking policies that are much more flexible and adaptable. In our set-up, gaits emerge as a consequence of minimizing energy consumption at different target speeds, consistent with various animal motor studies. We then incrementally add a navigation layer to the robot from onboard cameras and tightly couple it with locomotion via proprioception without retraining the walking policy. This is enabled by the use of additional safety monitors which are trained in simulation to predict the safe walking speed for the robot under varying conditions and also detect collisions which might get missed by the onboard cameras. The planner then uses these to plan a path for the robot in a locomotion-aware way. You can see our robot walking at https:\/\/www.youtube.com\/watch?v=nBy1piJrq1A (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n\n\n

Jitendra Malik is Arthur J. Chick Professor of EECS at UC Berkeley. He obtained his B.Tech degree in EE from IIT Kanpur in 1980 and a PhD in Computer Science from Stanford University in 1985. His research has spanned computer vision, machine learning, modeling of human vision, computer graphics, and most recently robotics. He has advised more than 70 PhD students and postdocs, many of whom are now prominent researchers. His honors include numerous best paper prizes, the 2013 Distinguished Researcher award in computer vision, the 2016 ACM\/AAAI Allen Newell Award, the 2018 IJCAI Award for Research Excellence in AI, and the 2019 IEEE Computer Society\u2019s Computer Pioneer Award for \u201cleading role in developing Computer Vision into a thriving discipline through pioneering research, leadership, and mentorship\u201d. He is a member of the US National Academy of Sciences, the US National Academy of Engineering, and the American Academy of Arts and Sciences.<\/p>\n\n\n\n\n\n

Prof. Shishir N. Y. Kolathaya (opens in new tab)<\/span><\/a>, Akshay Nambi (opens in new tab)<\/span><\/a>, Prof. Rohan Paul (opens in new tab)<\/span><\/a><\/p>\n\n\n\n\n\n

<\/div>\n\n\n","protected":false},"excerpt":{"rendered":"

The Microsoft Research\u2013Indian Institute of Science AI Seminar Series aims to organize widely accessible talks on cutting-edge AI research. The seminar series will feature speakers who are leaders in their areas. It will cover a wide variety of topics at the research frontier of AI: from applications and societal impact to theoretical foundations, from deep […]<\/p>\n","protected":false},"featured_media":999666,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2023-02-02","msr_enddate":"2023-02-02","msr_location":"Virtual & Bangalore","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"https:\/\/info.microsoft.com\/Microsoft-Research-IISc-AI-Seminar-Series-Registration.html","msr_event_link_redirect":false,"msr_event_time":"India Standard Time \u200e(UTC+5:30)\u200e","msr_hide_region":false,"msr_private_event":false,"footnotes":""},"research-area":[13556],"msr-region":[256048],"msr-event-type":[197944],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-803776","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-region-global","msr-event-type-hosted-by-microsoft","msr-locale-en_us"],"msr_about":"\n\n\n\n\n

The Microsoft Research\u2013Indian Institute of Science AI Seminar Series<\/strong> aims to organize widely accessible talks on cutting-edge AI research. The seminar series will feature speakers who are leaders in their areas. It will cover a wide variety of topics at the research frontier of AI: from applications and societal impact to theoretical foundations, from deep learning to cognitive science, from computer vision to NLP. We welcome everyone, from students to academic and industrial researchers. We will hold an extended Q&A session so that participants can interact with the speaker via moderators. We expect to hold talks once every month or two. By regularly bringing the community together and through open dialogue, we hope that the seminar series will further inspire creativity and foster collaboration in the Indian AI research ecosystem and beyond.<\/span><\/p>\n\n\n\n

Organizing Committee: Chiranjib Bhattacharyya<\/a>, Navin Goyal<\/a>, Ravi Kannan<\/a>, Sriram Rajamani<\/a>, Manik Varma<\/a><\/p>\n\n\n\n

One-time registration<\/strong> provides access to all seminars in the series, making it easy to attend the sessions that work with your schedule. You\u2019ll receive reminders with event details and a link to join in advance of each event.<\/p>\n\n\n\n

\n
Register<\/a><\/div>\n<\/div>\n\n\n\n
<\/div>\n\n\n\n\n\n

The event will be held over Microsoft Teams Live<\/a> as well as in person for select talks. A lightweight registration is required. For logistical reasons, attendees will only be able to ask questions in Teams chat. Moderators will select questions to pose to the speaker and, if yours is chosen, will mention your name and institution (if you include them).<\/p>\n\n\n\n

One-time registration<\/strong> provides access to all seminars in the series, making it easy to attend the sessions that work with your schedule. You\u2019ll receive reminders with event details and a link to join in advance of each event.<\/p>\n\n\n\n

<\/div>\n\n\n\n

The Familiarity Hypothesis: Explaining the Behavior of Deep Open Set Methods<\/h3>\n\n\n\n
\"headshot<\/figure>
\n

Tom Dietterich<\/h4>\n\n\n\n

CoRIS Institute, Oregon State University<\/p>\n\n\n\n

Date:<\/strong> February 2, 2023\u200e | 2:00 PM IST (UTC+5:30)
Virtual:<\/strong> Microsoft Teams<\/p>\n\n\n\n

<\/div>\n<\/div><\/div>\n\n\n\n\n\n

In many applications, computer vision object recognition systems encounter objects belonging to categories unseen during training. Hence, the set of possible categories is an open set. Detecting such \u201cnovel category\u201d objects is usually formulated as an anomaly detection problem. Anomaly detection algorithms for feature-vector data identify anomalies as outliers, but outlier detection has not worked well in deep learning.  Instead, methods based on the computed logits of object recognition networks give state-of-the-art performance. This talk proposes the Familiarity Hypothesis that these methods succeed because they are detecting the absence of familiar learned features.  This talk will review evidence from the literature and from our own experiments that support this hypothesis. It then experimentally tests a set of predicted consequences of this hypothesis that provide additional support. The talk will conclude with a discussion of whether familiarity detection is an inevitable consequence of representation learning. The results reveal a second fundamental assumption of statistical learning beyond the usual stationary\/iid assumption\u2014namely, that the features available to the classifier can capture variation exhibited by data points belonging to novel categories, and, more generally, by data points that lie outside the training distribution.<\/p>\n\n\n\n\n\n

Tom Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; Ph.D. Stanford University 1984) is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. He is one of the pioneers of the field of machine learning and has authored more than 225 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability. He is the primary moderator of the cs.LG category on arXiv and was awarded the AAAI Distinguished Service Award in 2022 and the ACML Distinguished Contribution Award in 2020.<\/p>\n\n\n\n\n\n


\n\n\n\n

On Learning-Aware Mechanism Design<\/h3>\n\n\n\n
\"headshot<\/figure>
\n

Michael I. Jordan<\/h4>\n\n\n\n

University of California, Berkeley<\/p>\n\n\n\n

Date:<\/strong> January 5, 2023\u200e | 10:00\u201311:30 AM IST (UTC+5:30)
Virtual:<\/strong> Microsoft Teams<\/p>\n\n\n\n

\n
Watch now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

Statistical decisions are often given meaning in the context of other decisions, particularly when there are scarce resources to be shared. Managing such sharing is one of the classical goals of microeconomics, and it is given new relevance in the modern setting of large, human-focused datasets, and in data-analytic contexts such as classifiers and recommendation systems. I'll discuss several recent projects that aim to explore the interface between machine learning and microeconomics, including leader\/follower dynamics in strategic classification, a Lyapunov theory for matching markets with transfers, and the use of contract theory as a way to design mechanisms that perform statistical inference.<\/p>\n\n\n\n\n\n

Michael I. Jordan<\/a> is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. He received his Master's in Mathematics from Arizona State University, and earned his PhD in Cognitive Science in 1985 from the University of California, San Diego. He was a professor at MIT from 1988 to 1998. His research interests bridge the computational, statistical, cognitive, biological and social sciences. Prof. Jordan is a member of the National Academy of Sciences, a member of the National Academy of Engineering, a member of the American Academy of Arts and Sciences, and a Foreign Member of the Royal Society. He is a Fellow of the American Association for the Advancement of Science. He was a Plenary Lecturer at the International Congress of Mathematicians in 2018. He received the Ulf Grenander Prize from the American Mathematical Society in 2021, the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015, and the ACM\/AAAI Allen Newell Award in 2009. He gave the Inaugural IMS Grace Wahba Lecture in 2022, the IMS Neyman Lecture in 2011, and an IMS Medallion Lecture in 2004. He is a Fellow of the AAAI, ACM, ASA, CSS, IEEE, IMS, ISBA and SIAM.<\/p>\n\n\n\n

In 2016, Prof. Jordan was named the \"most influential computer scientist\" worldwide in an article in Science, based on rankings from the Semantic Scholar search engine.<\/p>\n\n\n\n\n\n


\n\n\n\n

Designing AI Systems with Steerable Long-Term Dynamics<\/h3>\n\n\n\n
\"headshot<\/figure>
\n

Prof. Thorsten Joachims<\/h4>\n\n\n\n

Cornell University<\/p>\n\n\n\n

Date:<\/strong> November 2, 2022\u200e | 3:00 PM IST (UTC+5:30)
Virtual:<\/strong> Microsoft Teams<\/p>\n\n\n\n

\n
Watch now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

The feedback that users provide through their choices (e.g., clicks, purchases) is one of the most common types of data readily available for training autonomous systems, and it is widely used in online platforms. However, naively training systems based on choice data may improve short-term engagement but not the long-term sustainability of the platform. In this talk, I will discuss some of the pitfalls of engagement-maximization, and explore methods that allow us to supplement engagement with additional criteria that are not limited to individual action-response metrics. The goal is to give platform operators a new set of macroscopic interventions for steering the dynamics of the platform, providing a new level of abstraction that goes beyond engagement with individual recommendations or rankings.<\/p>\n\n\n\n\n\n

Thorsten Joachims is a Professor in the Department of Computer Science and in the Department of Information Science at Cornell University, and he is an Amazon Scholar. His research interests center on a synthesis of theory and system building in machine learning, with applications in information access, language technology, and recommendation. His past research focused on counterfactual and causal inference, learning to rank, structured output prediction, support vector machines, text classification, learning with preferences, and learning from implicit feedback. He is an ACM Fellow, AAAI Fellow, KDD Innovations Award recipient, and member of the ACM SIGIR Academy.<\/p>\n\n\n\n\n\n

Prof. Soumen Chakrabarti<\/a>, IIT Bombay & Manish Gupta<\/a>, Microsoft Research<\/p>\n\n\n\n\n\n


\n\n\n\n

A journey from ML and NNs to NLP and Beyond: Just more of the same isn't enough?<\/h3>\n\n\n\n
\"headshot<\/figure>
\n

Prof. Jason Weston<\/h4>\n\n\n\n

Meta AI & NYU<\/p>\n\n\n\n

Date:<\/strong> October 17, 2022\u200e | 6:30 PM IST (UTC+5:30)
Virtual:<\/strong> Microsoft Teams<\/p>\n<\/div><\/div>\n\n\n\n\n\n

The first half of the talk will look back on the last two decades of machine learning, neural network and natural language processing research for dialogue, through my personal lens, to discuss advances that have been made and the circumstances in which they happened, to try to give clues about what we should be working on for the future. The second half will dive deeper into some current first steps in those future directions, in particular trying to fix the problems of neural generative models to enable deeper reasoning with short- and long-term coherence, and to ground such dialogue agents in an environment where they can act and learn. We will argue that just scaling up current techniques, while a worthy investigation, will not be enough to solve these problems.<\/p>\n\n\n\n\n\n

Jason Weston is a research scientist at Meta AI, USA, and a Visiting Research Professor at NYU. He earned his PhD in machine learning at Royal Holloway, University of London and at AT&T Research in Red Bank, NJ (advisors: Alex Gammerman, Volodya Vovk and Vladimir Vapnik) in 2000. From 2000 to 2001, he was a researcher at Biowulf Technologies. From 2002 to 2003, he was a research scientist at the Max Planck Institute for Biological Cybernetics, Tuebingen, Germany. From 2003 to 2009, he was a research staff member at NEC Labs America, Princeton. From 2009 to 2014, he was a research scientist at Google, NY. His interests lie in statistical machine learning, with a focus on reasoning, memory, perception, interaction and communication. Jason has published over 100 papers, including best paper awards at ICML and ECML, and a Test of Time Award for his work \"A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning\", ICML 2008 (with Ronan Collobert). He was part of the YouTube team that won a National Academy of Television Arts & Sciences Emmy Award for Technology and Engineering for Personalized Recommendation Engines for Video Discovery. He was listed as the 16th most influential machine learning scholar by AMiner and as one of the top 50 authors in Computer Science by Science.<\/p>\n\n\n\n\n\n

Prathosh A P<\/a>, IISc Bangalore & Sunayana Sitaram<\/a>, Microsoft Research<\/p>\n\n\n\n\n\n


\n\n\n\n

Deep Learning for Video Understanding<\/h3>\n\n\n\n
\"portrait<\/figure>
\n

Prof. Andrew Zisserman<\/h4>\n\n\n\n

University of Oxford<\/p>\n\n\n\n

Date:<\/strong> August 17, 2022 | 2:30 PM IST \u200e(UTC+5:30)\u200e
In-person:<\/strong> Faculty Hall, Indian Institute of Science, Bangalore
Virtual:<\/strong> Microsoft Teams<\/p>\n\n\n\n

\n
See slides<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

One of the long-term aims of computer vision is video understanding: to be able to recognize the visual content of the video and describe what is happening. Deep learning has led to tremendous progress on many of the required visual tasks such as object recognition, human face recognition and human pose prediction.<\/p>\n\n\n\n

This talk will be in three parts, and will cover progress on the tasks of human action recognition and object discovery in video. Deep networks for visual representations are very data-hungry in training, and one of the key challenges is obtaining sufficient data for supervised learning.<\/p>\n\n\n\n

In the first part of the talk, we describe self-supervision for deep learning where the supervision uses prediction from within the video stream, or multi-modal prediction from the audio and visual streams, in order to learn the visual representation. In the second part, we describe how self-supervision with particular network models can be used to discover objects, such as animals moving in the video, and their effects. In the final part of the talk we move on to weak supervision from text using discriminative or generative training. Once the networks are trained, a language model can be used to search for videos given a text description, or generate a text description given a video.<\/p>\n\n\n\n\n\n

Andrew Zisserman is a Royal Society Research Professor at the Department of Engineering Science, University of Oxford, where he heads the Visual Geometry Group (VGG). His research has investigated and made contributions to many areas of computer vision, including multiple view geometry, visual recognition, and large-scale retrieval in images and video. He has authored over 500 peer-reviewed papers in computer vision, and co-edited and written several books in this area. His papers have won best paper awards at international conferences, and multiple 'test of time' awards. His recent research focusses on audio and visual recognition. He is a fellow of the Royal Society (FRS) and the Indian National Academy of Engineering (INAE).<\/p>\n\n\n\n\n\n

Soma Biswas<\/a>, Indian Institute of Science & Akshay Nambi<\/a>, Microsoft Research<\/p>\n\n\n\n\n\n


\n\n\n\n

GFlowNets and System 2 Deep Learning<\/h3>\n\n\n\n
\"Portrait<\/figure>
\n

Prof. Yoshua Bengio<\/h4>\n\n\n\n

Universit\u00e9 de Montr\u00e9al and Mila \u2013 Quebec AI Institute<\/p>\n\n\n\n

Date:<\/strong> June 14, 2022 | 6:30\u20138:00 PM IST \u200e(UTC+5:30)\u200e<\/p>\n\n\n\n

\n
Watch now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

GFlowNets are instances of a larger family of approaches at the intersection of generative modeling and RL that can be used to train probabilistic inference functions in a way that is related to variational inference and opens a lot of new doors, especially for brain-inspired AI. Instead of maximizing some objective (like expected return), these approaches seek to sample latent random variables from a distribution defined by an energy function, for example a posterior distribution (given past data, current observations, etc.). Recent work showed how GFlowNets can be used to sample a diversity of solutions in an active learning context. We will also discuss ongoing work to explore how to train such inference machinery for learning energy-based models, to approximately marginalize over infinitely many variables, perform efficient posterior Bayesian inference and incorporate inductive biases associated with conscious processing and reasoning in humans. These inductive biases include modular knowledge representation favoring systematic generalization, the causal nature of human thoughts, concepts, explanations and plans, and the sparsity of dependencies captured by reusable relational or causal knowledge. Many open questions remain to develop these ideas, which will require many collaborating minds!<\/p>\n\n\n\n\n\n

Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is most known for his pioneering work in deep learning, earning him the 2018 A.M. Turing Award, \u201cthe Nobel Prize of Computing,\u201d with Geoffrey Hinton and Yann LeCun. He is a Full Professor at Universit\u00e9 de Montr\u00e9al, and the Founder and Scientific Director of Mila \u2013 Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO.<\/p>\n\n\n\n

In 2019, he was awarded the prestigious Killam Prize and in 2021, became the second most cited computer scientist in the world. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, a Knight of the Legion of Honor of France, and an Officer of the Order of Canada. Concerned about the social impact of AI and the objective that AI benefits all, he actively contributed to the Montreal Declaration for the Responsible Development of Artificial Intelligence.<\/p>\n\n\n\n\n\n

Amit Sharma<\/a>, Rajiv Soundararajan<\/a><\/p>\n\n\n\n\n\n


\n\n\n\n

Where on Earth is AI Headed?<\/h3>\n\n\n\n
\"Headshot<\/figure>
\n

Prof. Tom M. Mitchell<\/h4>\n\n\n\n

Carnegie Mellon University<\/p>\n\n\n\n

Date:<\/strong> May 10, 2022 | 4:00 PM IST \u200e(UTC+5:30)\u200e<\/p>\n\n\n\n

\n
Watch now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

Ten years ago, computers could not understand spoken words well, but today we routinely speak to our mobile phones, and computer vision algorithms have now reached human- or super-human-level performance on many types of images. Self-driving cars are already appearing on our streets, and a deep network called GPT-3 writes surprisingly human-like paragraphs of text. What is coming next? This talk will look ahead to suggest what technical progress we might work toward, and even expect, over the coming years, and to consider its possible impacts on our lifestyles, products, business models, society and international politics.<\/p>\n\n\n\n\n\n

Tom M. Mitchell is the Founders University Professor in the School of Computer Science at Carnegie Mellon University, where he founded the world's first academic Machine Learning Department. Mitchell\u2019s research explores machine learning theory, algorithms and applications, as well as the impact of AI on society. He has testified to the U.S. Congress several times on AI impacts on society, and he co-chaired the 2017 U.S. National Academy study on \u201cInformation Technology, Automation, and the U.S. Workforce.\u201d Mitchell advises a variety of companies internationally, both young and established, on their AI product and business strategies, and his research has been featured in the popular press, from the New York Times to CCTV (China's national television network) to CBS's 60 Minutes. He is an elected member of the U.S. National Academy of Engineering, the American Academy of Arts and Sciences, and a Fellow and Past President of the Association for the Advancement of Artificial Intelligence (AAAI).<\/p>\n\n\n\n\n\n

Amit Sharma<\/a>, Partha Pratim Talukdar<\/a>, Bill Thies<\/a><\/p>\n\n\n\n\n\n


\n\n\n\n

Learning to Walk<\/h3>\n\n\n\n
\"Jitendra<\/figure>
\n

Prof. Jitendra Malik<\/h4>\n\n\n\n

University of California, Berkeley<\/p>\n\n\n\n

Date:<\/strong> March 04, 2022 | 09:00 AM IST (UTC+5:30)\u200e<\/p>\n\n\n\n

\n
Watch now<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n\n\n

Legged locomotion is commonly studied and programmed as a discrete set of structured gait patterns, like walk, trot, and gallop. However, studies of children learning to walk (Adolph et al.) show that real-world locomotion is often quite unstructured and more like \u201cbouts of intermittent steps\u201d. We have developed a general approach to walking which is built on learning on varied terrains in simulation and then fast online adaptation (fractions of a second) in the real world. This is made possible by our Rapid Motor Adaptation (RMA) algorithm. RMA consists of two components: a base policy and an adaptation module, both of which can be trained in simulation. We thus learn walking policies that are much more flexible and adaptable. In our set-up, gaits emerge as a consequence of minimizing energy consumption at different target speeds, consistent with various animal motor studies. We then incrementally add a navigation layer to the robot from onboard cameras and tightly couple it with locomotion via proprioception without retraining the walking policy. This is enabled by the use of additional safety monitors which are trained in simulation to predict the safe walking speed for the robot under varying conditions and also detect collisions which might get missed by the onboard cameras. The planner then uses these to plan a path for the robot in a locomotion-aware way. You can see our robot walking at https:\/\/www.youtube.com\/watch?v=nBy1piJrq1A<\/a>.<\/p>\n\n\n\n\n\n

Jitendra Malik is Arthur J. Chick Professor of EECS at UC Berkeley. He obtained his B.Tech degree in EE from IIT Kanpur in 1980 and a PhD in Computer Science from Stanford University in 1985. His research has spanned computer vision, machine learning, modeling of human vision, computer graphics, and most recently robotics. He has advised more than 70 PhD students and postdocs, many of whom are now prominent researchers. His honors include numerous best paper prizes, the 2013 Distinguished Researcher award in computer vision, the 2016 ACM\/AAAI Allen Newell Award, the 2018 IJCAI Award for Research Excellence in AI, and the 2019 IEEE Computer Society\u2019s Computer Pioneer Award for \u201cleading role in developing Computer Vision into a thriving discipline through pioneering research, leadership, and mentorship\u201d. He is a member of the US National Academy of Sciences, the US National Academy of Engineering, and the American Academy of Arts and Sciences.<\/p>\n\n\n\n\n\n

Prof. Shishir N. Y. Kolathaya<\/a>, Akshay Nambi<\/a>, Prof. Rohan Paul<\/a><\/p>\n\n\n\n\n\n

<\/div>\n\n\n","tab-content":[{"id":0,"name":"About","content":"

The Microsoft Research--Indian Institute of Science AI Seminar Series aims to organize widely accessible talks on cutting-edge AI research. The seminar series will feature speakers who are leaders in their areas. It will cover a wide variety of topics at the research frontier of AI: from applications and societal impact to theoretical foundations, from deep learning to cognitive science, from computer vision to NLP. We welcome everyone, from students to academic and industrial researchers. We will hold an extended Q&A session so that participants can interact with the speaker via moderators. We expect to hold talks once every month or two. By regularly bringing the community together and through open dialogue, we hope that the seminar series will further inspire creativity and foster collaboration in the Indian AI research ecosystem and beyond.<\/span><\/p>\r\n \r\n

Organizing Committee: Chiranjib Bhattacharyya<\/a>, Navin Goyal<\/a>, Ravi Kannan<\/a>, Sriram Rajamani<\/a>, Manik Varma<\/a><\/p>\r\n \r\n\r\n "},{"id":1,"name":"Upcoming talks","content":"The event will be held over Microsoft Teams Live<\/a>. A lightweight registration is required. For logistical reasons, attendees will only be able to ask questions in Teams chat. Moderators will select questions to pose to the speaker and, if yours is chosen, will mention your name and institution (if you include them).\r\n\r\nOur inaugural speaker is Prof. Jitendra Malik from the University of California, Berkeley.\r\n

Learning to Walk<\/h3>\r\n\"Jitendra\r\n

Prof. Jitendra Malik<\/strong><\/p>\r\nUniversity of California, Berkeley\r\n\r\nDate:<\/strong> March 04, 2022 | 09:00 AM IST\r\n

[msr-button text=\"Register\" url=\"https:\/\/www.microsoftevents.com\/profile\/12771231\" ]<\/div>\r\nTalk abstract<\/strong>\r\n\r\nLegged locomotion is commonly studied and programmed as a discrete set of structured gait patterns, like walk, trot, gallop.\u00a0\u00a0However, studies of children learning to walk (Adolph et al) show that real-world locomotion is often quite unstructured and more like \"bouts of intermittent steps\". We have developed a general approach to walking which is built on learning on varied\u00a0terrains\u00a0in simulation and then fast online adaptation (fractions of a second) in the real world. This is made possible by our\u00a0\u00a0Rapid Motor Adaptation (RMA) algorithm. RMA consists of two components: a base policy and an adaptation module, both of which can be trained in simulation.\u00a0\u00a0We thus learn walking\u00a0policies that are much more\u00a0flexible and adaptable. In our\u00a0set-up\u00a0gaits emerge as a consequence\u00a0of minimizing energy consumption at different target speeds, consistent with various animal motor studies.\u00a0We then incrementally add a navigation layer to the robot from onboard cameras and tightly couple\u00a0it with locomotion via proprioception without retraining the walking policy. This is enabled by the use of additional safety monitors which are trained in simulation to predict the safe walking speed for the robot under varying conditions and also detect collisions which might get missed by the onboard cameras. The planner then uses these to plan a path for the robot in a locomotion aware way.\u00a0You can see our robot walking at\u00a0https:\/\/www.youtube.com\/watch?v=nBy1piJrq1A<\/a>.\r\n\r\nBio<\/strong>\r\n

Jitendra Malik is Arthur J. Chick Professor of EECS at UC Berkeley. He obtained his B.Tech degree in EE from IIT Kanpur in 1980 and a PhD in Computer Science from Stanford University in 1985. His research has spanned computer vision, machine learning, modeling of human vision, computer graphics, and most recently robotics. He has advised more than 70 PhD students and postdocs, many of whom are now prominent researchers. His honors include numerous best paper prizes, the 2013 Distinguished Researcher award in computer vision, the 2016 ACM\/AAAI Allen Newell Award, the 2018 IJCAI Award for Research Excellence in AI, and the 2019 IEEE Computer Society\u2019s Computer Pioneer Award for \u201cleading role in developing Computer Vision into a thriving discipline through pioneering research, leadership, and mentorship\u201d. He is a member of the US National Academy of Sciences, the US National Academy of Engineering, and the American Academy of Arts and Sciences.<\/p>\r\nModerators: Prof. Shishir N. Y. Kolathaya<\/a>, Akshay Nambi<\/a>, Prof. Rohan Paul<\/a>
We…","msr_research_lab":[199562],"related-researchers":[{"type":"user_nicename","display_name":"Sridhar Vedantham","user_id":33713,"people_section":"Section name 0","alias":"sriv"}],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[144940],"related-projects":[],"related-opportunities":[],"related-publications":[],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/803776"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":67,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/803776\/revisions"}],"predecessor-version":[{"id":999669,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/803776\/revisions\/999669"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/999666"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=803776"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=803776"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=803776"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=803776"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=803776"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=803776"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=803776"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=803776"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=803776"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}