Transcript
Arul Menezes: The thing about research is, you never know when those breakthroughs are going to come through, you know? So, when we started this project last year, we thought it would take a couple of years, but, you know, we made faster progress than we expected, and then sometime last month, we were like, it looks like we're there! We should just publish! And that's what we did!
Host: You're listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I'm your host, Gretchen Huizinga.

Humans are wired to communicate, but we don't always understand each other, especially when we don't speak the same language. But Arul Menezes, the Partner Research Manager who heads MSR's Machine Translation team, is working to remove language barriers to help people communicate better. And with the help of some innovative machine learning techniques, and the combined brainpower of machine translation, natural language, and machine learning teams in Redmond and Beijing, it's happening sooner than anyone expected.

Today, Menezes talks about how the advent of deep learning has enabled exciting advances in machine translation, including applications for people with disabilities, and gives us an inside look at the recent "human parity" milestone at Microsoft Research, where machines translated a news dataset from Chinese to English with the same accuracy and quality as a person. That and much more on this episode of the Microsoft Research Podcast.

Host: Arul Menezes, welcome to the podcast today.

Arul Menezes: Thank you. I'm delighted to be here.
Host: So, you're a Partner Research Manager at Microsoft Research, and you head the machine translation team, which, if I'm not wrong, falls under the umbrella of Human Language Technologies?

Arul Menezes: Yes.
Host: What gets you up in the morning? What's the big goal of your team?

Arul Menezes: Well, translation is just a fascinating problem, right? I've been working on it for almost two decades now, and it never gets old, because there's always something interesting or unusual or unique about getting the translations right. The nice thing is that we've been getting steadily better over the last few years. So, it's not a solved problem, but we're making great progress. So, it's sort of like a perfect problem to work on.
Host: So, it's enough to get you out of bed and keep you going, because you're making…

Arul Menezes: Yeah, it's not so hard that you give up, and it's not solved yet, so it's perfect.
Host: You don't want to go back to bed. So, your team has just made a major breakthrough in machine translation, and we'll get into the technical weeds about how you did it in a bit. But for now, tell us what you achieved and why it's noteworthy.

Arul Menezes: So, the result we showed was that our latest research system is essentially at parity with professional human translators. And the way we showed that is that we got a public test set of Chinese-English news that's generally used in the research community. We had it translated by professional translators, and we also had it translated by our latest research systems. And then we gave it to some evaluators who are bilingual speakers. And of course, it's a blind test, so they couldn't tell which was which. And at the end of the evaluation, our system and the humans scored essentially the same. So, you know, the result is that for the first time, really, we have a credible result that says that humans and machines are at parity for machine translation. Now, of course, keep in mind, this is a very specific domain. This is news, and it's one language pair. So, you know, we don't want to oversell it. But it is exciting.
Host: What about the timing of it? You had plans to do this, but did it come when you expected?

Arul Menezes: The thing about research is you never know when those breakthroughs are going to come through, you know? So, when we started this project last year, we thought it would take a couple of years, but, you know, we made faster progress than we expected, and then sometime last month, we were like, it looks like we're there! We should just publish! And that's what we did!
Host: Is this sort of like a Turing Test for machine translation? "Which one did it, a computer or a human?"

Arul Menezes: In a limited sense. We didn't ask people to actually detect which was the human and which was the machine, because there may be little tells – like, you know, maybe there's a certain capitalization pattern or whatever. What we did was we had people just score the translation on a scale – just a slider, really – and tell us how good the translation was. So, it was a very simple set of instructions that the evaluators got. And the reason we do that is so that we can get very consistent results and people can understand the instructions. And so, you score the translations, sentence by sentence, and then you take the averages across the different humans and the machine. It turned out they basically were the same score.
Host: Why did you choose Chinese-English translation first?

Arul Menezes: So, we wanted to pick a publicly used test set, because, you know, we're trying to engage with the research community here, and we wanted a test set that other people have worked on, so that we could release all of our findings and results and evaluations. There's an annual workshop on machine translation that's been going on for the last ten or more years, called WMT. And so, we used the same methodology that they use for evaluation, and we also wanted to use the same test set. And they recently added Chinese. They used to be focused more on European languages, but they added Chinese. And so, we thought that would be a good one to tackle, especially because it's an important language pair, and, you know, it's hard, but not too hard, obviously. At least as it turned out.
Host: You've had another very impressive announcement recently, just this month even, that impacts what I can do with machine translation on my phone. And I'm all ears. What is it, and why is it different from other machine translation apps?

Arul Menezes: Yeah, so we're super excited about that, because, you know, we've had a translator app for Android and Apple phones for a while. And one of the common use cases is, of course, when people are traveling. And the number one request we get from users is, "Can I do translation on my phone even though I'm not connected? Because when I'm traveling, I don't always have a data plan. I'm not always connected to wi-fi at the point when I'm trying to communicate with someone like a taxi driver or a waiter or reception at a hotel." And so, we've had for a while what we call an offline pack. You can download this pack before you travel, and then once you have that, you can do translations on your phone without being connected to the cloud. But the thing about these packs is that they haven't been using the latest neural net technology, because neural nets are very expensive. They take a lot of computation. And no one's been able to really run neural machine translation on a phone before. So last year, we started working with a major phone manufacturer. They had a phone with a special neural chip, and we thought it would be super exciting to run neural translation offline, on the phone, using this chip. Since then, we have been working to improve the efficiency and do a lot of careful engineering, and we managed to get it working on any phone, without relying on the special hardware. So, what we released this month is that anyone who has an Android phone or iPhone can download these packs, and then they'll have neural translation on their phone. That means even if they're not connected to the cloud, they're going to get really fluent translations.
Host: So, it's the latest cutting-edge translation technology?

Arul Menezes: Right, yeah.

Host: On a regular phone.

Arul Menezes: Running right on your phone, yeah. Super exciting.

Host: I wish I had that last summer.

Arul Menezes: Me too, actually, yeah. You know, it's a very useful app when you travel.

Host: Is it unique to Microsoft Research and Microsoft in general, or…?

Arul Menezes: Yeah, as far as I know, nobody else has neural translation running on the phone. Now, this is only text translation. We don't yet have the speech recognition.

Host: Are you working on that?

Arul Menezes: We are. We don't really have a date for that yet, but it's something that we're interested in.
Host: I'll postpone my next trip until you've got it done.

(music plays)

Host: Let's get specific about the technology behind MSR's research in machine translation. You told me that neural network architectures are the foundation for the AI training systems.

Arul Menezes: Right.
Host: But your team used some additional training methods to help boost your efforts to achieve human parity in Chinese-English news translation. So, let me ask you about each one in turn. And let's start with a "round-trip" translation technique called Dual Learning. What is it? How did it help?

Arul Menezes: Right. So, one of the techniques we used to improve the quality of the research system that reached human parity was what we call Dual Learning. The way you train a regular machine translation system is, typically, with parallel data. These are previously translated documents in, say, Chinese and English, that are aligned at the sentence level, and then the neural net model essentially learns to translate each sentence from Chinese into English; that's the signal we use to train the models. Now, you can do the same thing in the opposite direction, English to Chinese. So, what we do with Dual Learning is couple those two systems and train them jointly. You use the signal from the English-to-Chinese translation to improve the Chinese-to-English, and vice versa. It's very much like what a human would do with a round-trip translation: you translate from English to Chinese, but you're not sure if it's good, so you translate back into English and see how it went. If it comes back consistent, you have some faith that the translation may be good. So, it's basically a joint loss function for the two systems. And then there's another thing you can do once you have this dual learning working: in addition to the parallel data, you can also use monolingual data. Let's say you have Chinese text. You can send it through the Chinese-to-English system and then the English-to-Chinese system, and then compare the results. And that's a signal you can use to train both systems.
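The round-trip signal Menezes describes can be caricatured in a few lines of Python. This is a toy sketch, not the actual neural system: the two "models" here are hypothetical word-for-word dictionaries, and the "loss" is simply the fraction of tokens that fail to survive the Chinese → English → Chinese round trip on monolingual text.

```python
# Toy "models": word-for-word dictionaries standing in for neural translators.
zh_to_en = {"你好": "hello", "世界": "world", "猫": "cat"}
en_to_zh = {"hello": "你好", "world": "世界", "cat": "狗"}  # "cat" maps back wrong

def translate(sentence, model):
    """Translate a tokenized sentence with a toy dictionary model."""
    return [model.get(tok, tok) for tok in sentence]

def round_trip_loss(zh_sentence):
    """Fraction of tokens that fail to survive Chinese -> English -> Chinese."""
    back = translate(translate(zh_sentence, zh_to_en), en_to_zh)
    mismatches = sum(a != b for a, b in zip(zh_sentence, back))
    return mismatches / len(zh_sentence)

print(round_trip_loss(["你好", "世界"]))  # 0.0: a consistent round trip
print(round_trip_loss(["猫"]))           # 1.0: inconsistency flags a likely error
```

In the real system this disagreement is a differentiable term in a joint loss, so both directions improve together; here it just surfaces which sentences the coupled pair gets wrong.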
Host: So, another technique you used is called Deliberation Networks. What is that, and how does that add to the translation accuracy?

Arul Menezes: Right. And I should say that both the Dual Learning and the Deliberation Network work was actually done by our partners in Microsoft Research Asia. The effort was a joint effort of my team here in Redmond, which is the machine translation team, and two teams in Microsoft Research Beijing, the natural language group and the machine learning team. Both Dual Learning and Deliberation Networks came out of the machine learning team in MSR Beijing.
Host: Cool.

Arul Menezes: The way Deliberation Networks work is essentially as a two-pass translation process. You can think of it as creating a rough translation and then refining it. A human might do the same thing: you create a first draft and then you edit it. So, the architecture of a Deliberation Network is that you have a first-pass neural network encoder-decoder that produces the first translation. Then you have a second pass, which takes both the original input in Chinese and the first-pass output as inputs in parallel, and produces a translation by looking over both of them. It's essentially learning which parts of the first-pass translation to copy over and which parts need to be changed – and for the parts it changes, it looks back at the original. The output of the second pass is our final translation. In theory, you could keep doing this, but we just do two passes, and that seems to be enough.
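The draft-then-revise flow can be sketched with two hypothetical dictionary "passes" (a toy illustration of the control flow only; the real second pass is an attention-based decoder over both the source and the draft):

```python
rough = {"我": "me", "爱": "love", "音乐": "music"}  # fast but sloppy first pass
refine = {"我": "I"}                                 # second-pass corrections

def first_pass(src):
    """Pass 1: produce a rough draft translation of the source."""
    return [rough.get(tok, "<unk>") for tok in src]

def second_pass(src, draft):
    """Pass 2: look at BOTH the source and the draft; copy draft tokens,
    but replace the ones where consulting the source suggests a fix."""
    return [refine.get(s, d) for s, d in zip(src, draft)]

src = ["我", "爱", "音乐"]
draft = first_pass(src)          # ['me', 'love', 'music']
final = second_pass(src, draft)  # ['I', 'love', 'music']
print(draft, final)
```

The key structural point survives even in the toy: the second pass receives the source and the draft together, so it can decide per position whether to copy or revise.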
Host: Yeah. I was actually going to ask that. It's like, how many passes is enough before you kind of land on…

Arul Menezes: I would imagine that after, like, two passes, you're likely to converge.
Host: So, the third tool that we talked about is called Joint Training, or left-to-right, right-to-left consistency. Explain that and how it augments the system.

Arul Menezes: Yeah, so again, this is work from the natural language group in MSR Beijing. They noticed that if you produce a translation one word at a time from left to right, or you train a different system that produces the translation, again one word at a time, but from right to left, you actually get different translations. And the idea was, if you could make these two translations consistent, you might get a better translation. The reason is that in many languages, when you produce a sentence, later parts of the sentence need to be consistent – say, grammatically, or in terms of gender or number or pronouns – with something earlier in the sentence. But sometimes, you need something earlier in the sentence to be consistent with something later in the sentence, and you haven't produced that yet, so you don't know what to produce. Whereas if you did it right to left, you'd be able to get that right. So, by forcing the left-to-right system and the right-to-left system to be consistent with each other, we could improve the quality of the translation. And again, this is a very similar iterative process to what we were talking about with Dual Learning, except that instead of the consistency being between Chinese-to-English and English-to-Chinese, it's between left-to-right and right-to-left.
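The agreement signal between the two decoding directions can be sketched as a simple disagreement count (toy scores and hypothetical outputs; in training this would be a differentiable regularization term added to each system's usual loss):

```python
def agreement_penalty(l2r_tokens, r2l_tokens):
    """Count positions where the left-to-right and right-to-left decoders
    disagree; joint training pushes this toward zero."""
    return sum(a != b for a, b in zip(l2r_tokens, r2l_tokens))

l2r = ["the", "cat", "sat", "on", "a", "mat"]
r2l = ["the", "cat", "sat", "on", "the", "mat"]  # decoded right-to-left, then reversed
print(agreement_penalty(l2r, r2l))  # 1: the decoders disagree on one article
```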
Host: So, what was the fourth technique that you added to the mix to get this human parity in the Chinese-to-English translation?

Arul Menezes: Yeah, so we also did what's called System Combination. We trained a number of different systems with different techniques, with variations on those techniques, with different initializations. And then we took our best six systems and did a combination. In our case, it was what's called a sentence-level combination. So, it really is just picking, of the six, which one is the best. Essentially, each of the six systems produces an n-best list – say, the ten best candidates for the translation – so now you've got sixty translations, and you rescore them and pick the best. People have done system combination at the word level before, where you take part of a translation from one system and part of a translation from another. But that doesn't work very well with neural translation, because you can really destroy the fluency of the sentence by just cutting and pasting pieces from here and there.
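Sentence-level combination as described reduces to pooling the n-best lists and rescoring. A minimal sketch, with hypothetical candidates and a stand-in scoring function in place of the real rescoring model:

```python
def rescore(candidate):
    # Stand-in for the real rescoring model: here, just the stored score.
    return candidate[1]

# Each inner list is one system's n-best list of (translation, score) pairs.
nbest_per_system = [
    [("translation A1", 0.71), ("translation A2", 0.64)],
    [("translation B1", 0.83), ("translation B2", 0.59)],
    [("translation C1", 0.77), ("translation C2", 0.75)],
]

# Pool every candidate from every system, rescore, and keep the single
# best WHOLE sentence -- no word-level cutting and pasting.
pooled = [cand for nbest in nbest_per_system for cand in nbest]
best = max(pooled, key=rescore)
print(best[0])  # "translation B1"
```

Keeping whole sentences is the design choice Menezes highlights: splicing fragments from different neural systems tends to wreck fluency, while picking one intact candidate cannot.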
Host: Right. Yeah, we've seen that done without machines. It gets butchered in translation.

(music plays)

Host: Most of us have fairly general machine translation needs, but your researchers addressed some of the needs in a very domain-specific arena, in the form of Presentation Translator. Can you tell us more about that?

Arul Menezes: Right, so Presentation Translator is a unique add-in that we have developed for PowerPoint where, when you are giving a presentation, you just click a button and you get transcripts of your lecture displayed on screen, so that people in the audience can follow along. In addition, the transcripts are made available to audience members on their own phones. They use our app and just enter a code, and then they can connect to the same transcription feed. And they can get it either in the language of the speaker or in their own language. So essentially, with this one add-in, we're addressing two real needs. One is for people who are deaf or hard of hearing, where the transcript can help them follow along with what's going on in the classroom or in a lecture. The other is language learners – foreign students – who can follow along in their own language if they are not that familiar with the language of the speaker. And so, we've had a lot of excitement about this in the educational field, with both school districts and colleges. In particular, the Rochester Institute of Technology – one of the colleges in the university is called the National Institute for the Deaf – has a very large student body of deaf students, and they have been providing sign language interpretation. This gave them an opportunity to expand the coverage by providing this transcription in the classroom.
Host: So is it from text to text on the PowerPoint presentation to…

Arul Menezes: So, the user speaks…

Host: It is?

Arul Menezes: Yeah, so the professor is lecturing, and everything that they say is transcribed, both on screen and on people's phones.

Host: Oh my gosh.

Arul Menezes: And then because it's on their phone, they can also save the transcript, and that becomes class notes. And the other thing that's really cool about Presentation Translator is that it uses the content of your PowerPoint – this is why it's connected to PowerPoint – it uses the content of your slides to customize the speech recognition system, so that you can actually use the specialized terminology of the class and it will be recognized. So, you know, if someone's teaching a biology class, it'll recognize things like "mitochondria" or "ribosome," which in other contexts would not be recognized.
Host: So, you told me about how you can use this with domain-specific – or business-specific – needs as well. So, tell us about that.

Arul Menezes: Right. One of the things we're super excited about is that we have the ability to customize our machine translation system for the domain and the terminology of specific companies. We have a lot of customers who use translation to translate their documentation, their internal communications, product listings… and the way to get really high-quality translation for all of these scenarios is to customize the translation to the terminology that's being used by that business.
Host: Part of the challenge of machine translation is that human language can't be reduced to ones and zeros. It's got nuance, it's got richness and fluidity. And so, there are detractors who criticize how "unsophisticated" machine translation is. But you said that they're missing the point, sort of, of what the goal is?

Arul Menezes: Yeah.
Host: Talk about that a little bit. How should we manage our expectations around machine translation?

Arul Menezes: Yeah, so, I mean, the kinds of scenarios that we're focused on with machine translation today have to do with everyday needs that people have – whether you're a traveler, or you want to read a website, or a news article, or a newspaper. Or you're a company communicating with customers who speak a different language, or communicating between branches of the enterprise that speak different languages. Most of the language being translated today is pretty prosaic. I mean, it's not that hard… well, it is hard, but we've got it to the point where we can do a pretty good job of translating that kind of text. Of course, if you start getting into fiction and poetry, it is very hard, and we're nowhere close, obviously, with that kind of text. But that's not our goal at this point.
Host: So, how would you define your goal?

Arul Menezes: I think the goal for translation today is to make the language barrier disappear for people in everyday contexts, you know, at work, when they're traveling, so that they can communicate without a language barrier.
Host: Right. So that kind of leads into the idea that every language has its own formal grammar and semantics, and it also has local patois, as it were, which often leads to humorous mistranslations. So how are machine learning researchers tackling that "lost-in-translation" problem, so machines don't end up making that classic video game mistranslation, "All your base are belong to us"?

Arul Menezes: There are two things. With better techniques, we have gotten a lot better at producing fluent translations, so we would not produce something like that today. But it is still the case that we're very dependent on the data we have available. In the languages where we have sufficient data, we can do a really good job. When you get to languages where there's not that much data, or to dialects or variations of a language where there's not that much data, it becomes a lot tougher. And I think this is something machine translation shares with all AI and machine learning fields: we're very dependent on the data. There are ways to get iteratively better by continually learning based on how people use your product, right?
Host: How much are you dealing, interdisciplinarily, with other fields? You're computer scientists, right? And your data is language, which is human and expressive and all diff… all over the world. Who do you bring in to help you?

Arul Menezes: So, we have linguists on our team who, you know, make sure that we're translating things correctly. For example, one of the linguists on our team looks for things that our automatic metrics don't catch. Every time we produce a new version of our translation system, we have various scoring functions. The one that we use, which is a very popular metric, is called BLEU. It gives you a single number that says how well your system is doing. So, in principle, if this month's version of your system has a slightly better BLEU score than last month's, you're like, great, it's better! Ship it! But then what Lee, who's the linguist on my team, does is look at it and try to spot things that may not be caught by that score. For example, how are we doing with names? How are we doing with capitalization? How are we doing with dates and times and numbers? There are a lot of phenomena that are very noticeable to humans but not necessarily picked up by the automatic metric.
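To see why a single score can miss things like names or dates, here is a deliberately simplified flavor of the BLEU idea – unigram precision only, whereas real BLEU uses n-grams up to length 4, clipping across multiple references, and a brevity penalty:

```python
from collections import Counter

def unigram_precision(hypothesis, reference):
    """Clipped unigram precision: what fraction of hypothesis tokens
    appear in the reference (each reference token usable once)."""
    hyp, ref = Counter(hypothesis), Counter(reference)
    matched = sum(min(count, ref[tok]) for tok, count in hyp.items())
    return matched / max(len(hypothesis), 1)

hyp = ["the", "cat", "sat", "on", "the", "mat"]
ref = ["the", "cat", "is", "on", "the", "mat"]
print(unigram_precision(hyp, ref))  # 5 of 6 tokens match the reference
```

Note the metric's blind spot: swapping "is" for "sat" costs the same as garbling a person's name or a date, which is exactly the kind of error a human reviewer catches and the score does not.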
(music plays)

Host: Let's talk about you for a second. How did you end up doing machine translation research at Microsoft Research?

Arul Menezes: Yeah, so I was in a PhD program in, sort of, the systems area in the computer science department at Stanford. And I spent a couple of summers up here, and at the end of my second summer, I decided I wanted to stay. And so, I did. I just never went back. And I worked on a number of products at Microsoft. But at some point, I wanted to get back into research. And so, I moved to Microsoft Research, and I started the translation project in about the year 2000. So, basically, the year my daughter was born, and now she's going off to college, so…
Host: And you've watched her language grow over the years.

Arul Menezes: Yeah. Actually, when you're studying language, listening to how kids learn language is fascinating. It's just wonderful.
Host: There's a spectrum here at Microsoft, you know, from pure research to applied research – stuff that ends up in products. You seem to straddle it: your work is in products, but also in the research phase.

Arul Menezes: Yeah, one of the things that's super exciting about our team – and it makes us somewhat unique, I think – is that we have everything from the basic research in translation, to the web service that serves up the APIs, you know, the cloud service that people call, to the apps that we have on the phone. So, we have everything from the things that users are directly using down to the basic research, and it's all in one team. So, you know, when somebody comes up with something cool, we can get it out to users very quickly. And that's very exciting.
Host: I always ask my podcast guests my version of the "what could possibly go wrong" question, which is: is there anything about your work in machine translation that keeps you up at night?

Arul Menezes: Well, we always have this challenge that we are learning from the data, and the data is sometimes misleading. So, we have things that we do to try and clean up the data. We do have a mechanism, for example, to be able to respond to those kinds of issues quickly. And it has happened. We've had situations where somebody discovered a translation that we produced that was offensive and posted it on Twitter. And, you know, it kind of went viral, and some people were upset about it, and so we had to respond quickly and fix it. And so, we have people who are on call 24 hours a day to fix any issue that arises like that.
Host: So, it's a thing that literally does keep somebody up at night?

Arul Menezes: Definitely, yeah.
Host: At least doing the night shift version of it! As we wrap up, Arul, what advice would you give to aspiring researchers who might be interested in working in human language technologies, and why would someone want to come to Microsoft Research to work on those problems?

Arul Menezes: So, I think we live in an absolutely fascinating time, right? People have been working on AI – or machine translation, for that matter – for fifty, sixty years. And for decades, it was a real struggle. I would say that just in the last ten years, with the advent of deep learning, we're making amazing progress on these really, really hard tasks – tasks where people, at some point, had almost given up hope that we would ever be successful at recognizing speech or translating anywhere close to the level that a human can. But here we are. It's a super exciting time. What's even more exciting is that not only have we made tremendous progress on the research side, but now all of those techniques are being put into products, and they're impacting people on a daily basis. And I think Microsoft is an amazing place to be doing this, because we have such breadth, you know? We have a range of products that go all the way from individual users in their homes to multinational companies. And so, there are just so many places our technology can be used. The range of opportunity here at Microsoft, I think, is incredible.
(music plays)

Host: Arul Menezes, thank you for taking time to come out and talk to us today. It's been really interesting.

Arul Menezes: Thank you. Thank you.
To learn more about Arul Menezes and the exciting advances in machine translation, visit Microsoft.com/research.