{"id":644958,"date":"2020-03-25T03:00:18","date_gmt":"2020-03-25T10:00:18","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=644958"},"modified":"2020-06-18T07:32:26","modified_gmt":"2020-06-18T14:32:26","slug":"microsofts-ai-transformation-project-turing-and-smarter-search-with-rangan-majumder","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/podcast\/microsofts-ai-transformation-project-turing-and-smarter-search-with-rangan-majumder\/","title":{"rendered":"Microsoft\u2019s AI Transformation, Project Turing and smarter search with Rangan Majumder"},"content":{"rendered":"
Rangan Majumder is the Partner Group Program Manager of Microsoft\u2019s Search and AI, and he has a simple goal: to make the world smarter and more productive. But nobody said simple was easy, so he and his team are working on better \u2013 and faster \u2013 ways to help you find the information you\u2019re looking for, anywhere you\u2019re looking for it.<\/p>\n Today, Rangan talks about how three big trends have changed the way Microsoft is building \u2013 and sharing \u2013 AI stacks across product groups. He also tells us about Project Turing, an internal deep learning moonshot that aims to harness the resources of the web and bring the power of deep learning to a search box near you.<\/p>\n Rangan\u00a0Majumder:\u00a0At the time, deep learning was really impressive in terms of these perception tasks like vision,\u00a0you know,\u00a0speech\u2026\u00a0so we were thinking,\u00a0like,\u00a0could it be really good at these other higher level tasks like language? So, that\u2019s when we started Project Turing,\u00a0and the idea was, what if we could do,\u00a0like,\u00a0end-to-end deep learning across the entire web to be able to answer these questions?<\/p>\n Host:\u00a0<\/b>You\u2019re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting-edge of technology research and the scientists behind it. I\u2019m your host, Gretchen Huizinga.<\/b><\/p>\n Host:\u00a0<\/b>Rangan<\/b>\u00a0Majumder is the Partner Group Program Manager of Microsoft\u2019s Search and AI, and he has a simple goal: to make the world smarter and more productive. 
But nobody said simple was easy, so he and his team are working on better \u2013 and faster \u2013 ways to help you find the information you\u2019re looking for, anywhere you\u2019re looking for it.<\/b><\/p>\n Today,\u00a0<\/b>Rangan<\/b>\u00a0talks about how three big trends have changed the way Microsoft is building \u2013 and sharing \u2013 AI stacks across product groups. He also tells us about Project Turing, an internal deep learning moonshot that aims to harness the resources of the web and bring the power of deep learning to a search box near you. That and much more on this episode of the Microsoft Research Podcast.<\/b><\/p>\n Host:\u00a0<\/b>Rangan<\/b>\u00a0Majumder, welcome to the podcast.<\/b><\/p>\n Rangan\u00a0Majumder: Thank you. It\u2019s great to be here.<\/p>\n Host: So you\u2019re a different kind of guest here in the booth. You\u2019re a Partner Group Program Manager over in Search and AI at Microsoft. Let\u2019s start by situating your group and its work since you\u2019re not in Microsoft Research per se, but you do a lot of work with the folks here. How and where do you \u201croll up\u201d as they say?<\/b><\/p>\n Rangan\u00a0Majumder: Yeah, great question. So, as you know, the broader organization is called Microsoft AI and Research, and Microsoft Research is one of the sub-groups there. So another sister team of Microsoft Research is the Bing Search team,\u00a0and my group is actually Search and AI, which is inside of Bing. So we\u2019re a sister team to Microsoft Research. 
And it\u2019s really great to be on this team because what we get to do is work closely with Microsoft researchers and then productionize some of their great research efforts\u2026<\/p>\n Host: Yeah.<\/b><\/p>\n Rangan\u00a0Majumder: \u2026put it into production, and then, once we get it to work at scale in Bing, we can actually go take that technology and place it elsewhere, like in Office and Dynamics and other parts of Microsoft.<\/p>\n Host: Right. So I\u2019m getting the visual of those nesting dolls with each part going inside the other. So top big doll is Microsoft AI and Research.<\/b><\/p>\n Rangan\u00a0Majumder: Correct.<\/p>\n Host: And then Microsoft Research is part of that.<\/b><\/p>\n Rangan\u00a0Majumder: Right.<\/p>\n Host: And Bing is part of that.<\/b><\/p>\n Rangan\u00a0Majumder: That\u2019s right.<\/p>\n Host: And then your group, Search and AI, is nested within the Bing group.<\/b><\/p>\n Rangan\u00a0Majumder: That\u2019s correct.<\/p>\n Host: Okay, and who do you all roll up to?<\/b><\/p>\n Rangan\u00a0Majumder: Kevin Scott, who is our CTO\u2026<\/p>\n Host: Okay.<\/b><\/p>\n Rangan\u00a0Majumder: \u2026and also EVP.<\/p>\n Host: Got it. All right, well let\u2019s talk about what you all do in Search and AI now that we\u2019ve situated you. You have a delightfully short, but incredibly ambitious mission statement. Tell us what it is and, if you can, what\u2019s your high-level strategy for making it real?<\/b><\/p>\n Rangan\u00a0Majumder: Yeah, so\u00a0our mission statement is to make the world smarter and more productive. And you\u2019ll notice that our mission statement doesn\u2019t just talk about search because search is obviously the biggest thing that we do, but it\u2019s important to understand what is the underlying user need for why people are searching, and it\u2019s really to learn about something or to get something done, right? 
So people want to learn a lot about what\u2019s happening with the coronavirus today. So that\u2019s an example of how our technology helps make people smarter so they can know what\u2019s going on in the world. An example of where we\u2019re helping make people productive is something like when, you know, I got my sink clogged, right? So that\u2019s something I just want to learn really quickly like, how do I unclog my sink? So that\u2019s an example of\u00a0productivity. So, the reason you need to understand the underlying user need versus like how they do it today is, the solutions actually change over time.<\/p>\n Host: Okay.<\/b><\/p>\n Rangan\u00a0Majumder: So, we want to be really close to, what is the user need that people have?\u00a0And then our technology helps provide that and satisfy that need. As I said, the mission is about making the world smarter and more productive. If we just focused on the users on Bing, we can still have a lot of impact.\u00a0But if you look at the entire pie of customers from Microsoft, there\u2019s a lot more we can do. So that\u2019s where we\u2019ve been working a lot with Office, taking our AI technology and not just bringing it to Bing, but bringing it to Office. So\u00a0that\u2019s an example where we increase the number of people we can impact by, like, a billion.<\/p>\n Host: Right.<\/b><\/p>\n Rangan\u00a0Majumder: Because there\u2019s a lot more users using Word. And then, if you think about, like, Azure becoming, you know, the world\u2019s computer. So there\u2019s a lot more impact we could have by bringing our technology into Azure as well.<\/p>\n Host: Well let\u2019s talk about what gets you up in the morning. 
In your role as a Partner Group Program Manager for Search and AI, do you have a personal mission, or a personal passion, is maybe a better way to put it?<\/b><\/p>\n Rangan\u00a0Majumder: Yeah, well as any program manager, your goal is really to maximize product\/market fit, but my personal mission is basically the same as my team\u2019s mission, which is really around making the world smarter and more productive, and if you just look at what\u2019s happening today, which is, people are finding new ways to look for information, right? Like ten years ago was all about search. Like, people just kind of typed in words. But now people want to find stuff more naturally like smart assistants,\u00a0people just want to ask a question, you know, in an ambient room and get the answer.<\/p>\n Host: Mm-hmm.<\/b><\/p>\n Rangan\u00a0Majumder: People want to be able to take a picture of a flower and say, hey, what is this flower? How do I take care of this?\u00a0Then the amount of information is changing too, so people aren\u2019t just writing web pages like they were ten years ago. People are now taking pictures, uploading photos, uploading videos. So going back to my example of, you know, how do I unclog a sink? You don\u2019t just want a web page walking through the steps. Sometimes you want a video\u2026<\/p>\n Host: Right.<\/b><\/p>\n Rangan\u00a0Majumder: \u2026that just shows you, hey, here\u2019s how I unclog a sink. So I think there\u2019s just a lot to do in that mission, and something that I feel like we\u2019ll be doing for like easily a decade or more.<\/p>\n Host: You know, as you brought up the flower and taking a picture of it, I\u2019m thinking of this music app that I use, Shazam,\u00a0<\/b>where you find out what a song<\/b>\u00a0is.\u00a0<\/b>I\u2019ve often said I want Shazam for all these different categories. What\u2019s that tree? I don\u2019t even know what it is, but if I take a picture of it could you tell me? 
Are you guys working on stuff like that too?<\/b><\/p>\n Rangan\u00a0Majumder:\u00a0Uh, we\u2019ve actually shipped it already! So if you go install the Bing app you can actually go take a\u2026 I\u2019ve done this when I moved into my new house, like there were these flowers and I\u2019m like what are these flowers? They look really interesting. I could take a picture of it and it tells you what it is, and then you can find out more information. So plants, dogs, those kinds of things, the Bing Visual Search app does really well so, go install it today and try it out!<\/p>\n Host: Well, much of what we call AI is still work in progress, and there are some fundamentally different ways of doing AI within a company, especially one as large as Microsoft, so give us a snapshot of how product groups have traditionally approached building AI stacks and then tell us some of the big trends that you\u2019ve noted in the science of AI that have enabled a disruption in that approach.<\/b><\/p>\n Rangan\u00a0Majumder: I think this is probably the most exciting thing happening at Microsoft today.\u00a0So the way we\u2019re doing AI is definitely transforming.\u00a0If you think about how we used to do AI, maybe five years ago,\u00a0we\u00a0would have multiple different product groups doing AI kind of independently, and for the most part didn\u2019t share anything. But there\u2019s three trends that have really been changing that. 
The first trend is really around\u00a0transfer learning, which is this concept that, as you train a model on one type of data and one set of tasks, you can actually reuse that model for other tasks and it does, sometimes, even better than it would if you just trained it on that task specifically.<\/p>\n Host: Huh.<\/b><\/p>\n Rangan\u00a0Majumder: The second one that\u2019s happening is this trend with\u00a0large pre-trained models.\u00a0I\u00a0think there\u2019s a couple of these out there, right, like you probably heard about Open AI\u2019s GPT, Google has BERT, Microsoft has\u00a0its\u00a0MT-DNN. So you can take these models and just train them on a bunch of data in a self-supervised way, it\u2019s called, make it very large, and then you can actually apply it on lots of other tasks and it just does\u00a0phenomenal. Just to give you an example, like, let\u2019s say the Search team was about a hundred people and they\u2019re working on various parts of\u00a0search all the time so what we did is take about ten folks and said, okay, I want you guys to look at these large transformer networks and see what kind of impact could you have. So in just, like, a few months they were able to ship an improvement so large that it was larger than all the other, like, ninety folks, all the work they did, combined. So we were just, like, shocked how important and how impactful this kind of work was.<\/p>\n Host: Right.<\/b><\/p>\n Rangan\u00a0Majumder: So much so that, at first, we thought, well, does that mean we don\u2019t need these other ninety folks? We can just work on these ten folks?\u00a0But instead we really embraced it and we said well, let\u2019s get all hundred folks working on these large transformer networks. 
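The transfer-learning pattern Rangan describes here — train a representation once on a task with plentiful data, then reuse it for a different task where labels are scarce — can be sketched in miniature. Everything below is invented for illustration: the "pretrained encoder" is just a linear projection fit on a synthetic task, standing in for a large self-supervised transformer like BERT or MT-DNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Pretraining": plentiful data on a related task A. ---
# Task A here is simple regression on a latent signal; a real system
# would instead self-supervise a large transformer on web-scale text.
X_a = rng.normal(size=(5000, 100))
t_a = X_a[:, 0] + X_a[:, 1] + 0.1 * rng.normal(size=5000)
V, *_ = np.linalg.lstsq(X_a, t_a, rcond=None)  # learned encoder weights

def encode(x):
    """Reusable representation: project through the pretrained weights."""
    return x @ V

# --- Task B: a *different* task (classification) with scarce labels. ---
X_b = rng.normal(size=(200, 100))
y_b = X_b[:, 0] + X_b[:, 1] > 0
X_test = rng.normal(size=(1000, 100))
y_test = X_test[:, 0] + X_test[:, 1] > 0

# Transfer: freeze the encoder and use a trivial head (a threshold).
transfer_acc = ((encode(X_test) > 0) == y_test).mean()

# Baseline: fit task B from scratch on its 200 examples alone.
W, *_ = np.linalg.lstsq(X_b, y_b * 2.0 - 1.0, rcond=None)
scratch_acc = ((X_test @ W > 0) == y_test).mean()

print(f"transfer: {transfer_acc:.2f}  from scratch: {scratch_acc:.2f}")
```

The toy mirrors the anecdote: the representation learned once on abundant data transfers to the small task and beats training it from scratch.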
And then, in the end, like we just had a wave of improvements over the last six months of just like improvement after improvement equally as impactful as the one we had before,\u00a0so this is a really big trend right now in these large pre-trained models.<\/p>\n Host: Okay.<\/b><\/p>\n Rangan\u00a0Majumder: The third trend is really around the culture of Microsoft and how it\u2019s changing. And this really started with Satya when he became CEO. He really\u00a0has been focused on changing the culture and making it a lot more\u00a0collaborative. In fact, he\u2019s changed incentive structure in the team, so when you\u2019re actually going through a performance review, it\u2019s not just about, you know, what did you do? But it\u2019s about, how did you use someone else\u2019s work or how did you contribute to someone else\u2019s work? The other, like,\u00a0person\u00a0who\u2019s really changed a lot is Kevin Scott, our CTO. So he did a bunch of AI reviews and realized like there\u2019s a lot of teams doing similar stuff, but some teams are a little bit better than others. So why don\u2019t we do this work in a coordinated way? So when you take those three trends together, what we\u2019re doing is, we\u2019re starting to build this coordinated AI stack across Microsoft where we have certain teams saying, look, we are going to build these really large NLP models for the company, not just ourselves, because the problem is, if each team tried to do that, it would be just way too costly, and then, through transfer learning, I can now reuse this model in other parts. So the stack is kind of looking like this: at the very top you have applications like Bing, the different Office apps, you know, Dynamics, Azure Cognitive Services. The layer underneath is a bunch of these pre-trained models. 
Like we have one called the Turing Neural Language Representation, we\u2019ve got Language Generation, we\u2019ve got these vision models\u2026 The layer underneath is these software systems, which can actually run these models really, really fast because the models are very big and they\u2019re very expensive.<\/p>\n Host: Yeah.<\/b><\/p>\n Rangan Majumder: So if you, if you run them in a na\u00efve way, it would just, like, take too long and you\u2019d hurt the customer experience so you need to actually do a lot of software optimizations. And then the final layer is around the hardware, so that\u2019s around like CPU, GPUs and we even have our own little effort on chips with FPGAs.<\/p>\n (music plays)<\/i><\/b><\/p>\n Host: I want to talk a little bit about four big areas you\u2019ve identified as important to progress and innovation in Search and AI and you\u2019ve sort of labeled them web search,\u00a0<\/b>question answering, multi-media, and platform. So why are each of these areas important, especially as it relates to the customer experience<\/b>,<\/b>\u00a0and what innovations are you exploring as you seek to improve that experience?<\/b><\/p>\n Rangan\u00a0Majumder: Yeah, so I would say, about five years ago these things seemed pretty different. Like web search, question answering, multi-media and then the platform team would sort of support all those teams. I\u2019ve noticed, and now you\u2019ll see it more and more, that these experiences are very integrated. So if you go to Bing today and you search for, what do alligators eat?\u00a0You\u2019ll see, at the very top, an answer that says, you know, alligators eat things like fish, turtles, birds\u2026 but then you\u2019ll also see an image there, sort of fused in with that answer, because an image actually helps you really get the emotional part. 
So just reading it is one thing, but humans also need that emotional experience, so by showing that image right next to the answer, it just makes the answer come to life.<\/p>\n Host: Right.<\/b><\/p>\n Rangan\u00a0Majumder: So that\u2019s one way where these things are kind of related. Like the experience, putting them all together, makes it much better for the customer\u2026<\/p>\n Host: Right.<\/b><\/p>\n Rangan\u00a0Majumder: \u2026but also the technology stacks are becoming very similar too, especially with deep learning. So with deep learning, it\u2019s mostly operating on vectors. So the first step in all of these systems, whether it\u2019s question answering, web search and multi-media, is really taking this corpus of information and converting it to vectors using an encoder. So that part is pretty different for each one, but then, once you have this corpus of vectors, the rest of the stack is very similar. Like the first thing you do when a query comes in is, you do a vector search to say, all right, what are the most similar vectors here? And then you run a cascade of different deep learning models, and each one gets heavier and a little bit more costly, and that\u2019s what\u2019s been super interesting, where, before, each team had its own very different stack, but with deep learning and everything just betting on this\u00a0vectors,\u00a0there\u2019s just a few services I need to build really, really well. One is this inference service which is, you know, given some content, vectorize it really quick. The other one is this vector search service which is, given a set of vectors how do I search them extremely fast?<\/p>\n Host: Your team has been involved in achieving several impressive milestones over the past five years. 
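The shared retrieval stack Rangan just described — vectorize the corpus offline, then at query time run a cheap vector search followed by a cascade of heavier rankers — can be sketched like this. The encoder and reranker below are toy stand-ins (hashed bag-of-words and word overlap); production systems use learned transformer encoders, approximate nearest-neighbor indexes, and deep-model rerankers.

```python
import numpy as np

docs = [
    "alligators eat fish turtles and birds",
    "how to unclog a kitchen sink",
    "fishing license age requirements in washington",
    "planting and caring for tulip flowers",
]

# Toy encoder: hashed bag-of-words into a dense, L2-normalized vector.
# A real system would use a learned transformer encoder here.
DIM = 64
def encode(text: str) -> np.ndarray:
    v = np.zeros(DIM)
    for word in text.lower().split():
        v[hash(word) % DIM] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

# Offline step: vectorize the whole corpus once.
index = np.stack([encode(d) for d in docs])

def search(query: str, k: int = 2):
    q = encode(query)
    # Stage 1: cheap vector search — cosine similarity over everything.
    sims = index @ q
    candidates = np.argsort(-sims)[:k]
    # Stage 2: a heavier reranker over the few survivors. Here it is
    # word overlap; in production it would be a large transformer.
    def rerank_score(i):
        return len(set(docs[i].split()) & set(query.lower().split()))
    return sorted(candidates, key=rerank_score, reverse=True)

best = search("what do alligators eat")[0]
print(docs[best])
```

Each stage of the cascade is more expensive per document than the last, which is why the cheap vector search runs first to shrink the candidate set.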
So take us on a little guided tour of that timeline and tell us about the challenges you face, along with the rewards that you reap when you try to bring research milestones into production.<\/b><\/p>\n Rangan\u00a0Majumder: So first, I think a lot of the milestones,\u00a0I have to give most of the credit to Microsoft Research because they\u2019re the ones really leading the way on pushing the state-of-the-art on those benchmarks. Like our team doesn\u2019t really focus too much on the academic\u00a0benchmarks. So, ever since we went on this mission of, let\u2019s really push deep learning for NLP, the first academic data set that came out that was\u00a0really aligned with that mission was\u00a0by Stanford called the\u00a0Stanford Question Answering Dataset,\u00a0SQuAD.\u00a0So it came out around 2016 and Microsoft Research Asia was actually at the top of the leader board, like, throughout its existence. So for, like 2016, 2017, they kept building better and better models until around 2018 they actually achieved human parity, which is just a big milestone in general when you have these academic benchmarks. I think that was like one of the most exciting milestones around the natural language space, that we were able to achieve human parity on this\u00a0SQuAD\u00a0data set\u2026<\/p>\n Host: Right.<\/b><\/p>\n Rangan\u00a0Majumder: \u2026within two years. And then, I think around 2018, another dataset came out, which is\u00a0Conversational Question Answering.\u00a0A year later, 2019, once again Microsoft Research Asia, along with some other folks in, I think, XD\u2019s Speech Team was able to\u2026<\/p>\n Host: Yeah.<\/b><\/p>\n Rangan\u00a0Majumder: \u2026achieve human parity on that. Around that same time there was this GLUE benchmark, which was also very interesting.<\/p>\n Host: And GLUE stands for?<\/b><\/p>\n Rangan\u00a0Majumder: General Language Understanding Evaluation. 
So they had, I think, ten very different natural language tasks. So they thought, well this\u00a0one\u2019s going to be very hard. If we can build one model that can do well on all ten of these, that\u2019s going to be pretty impressive and once again, in a year, Microsoft Research was able to do that.<\/p>\n Host: Unbelievable.<\/b><\/p>\n Rangan\u00a0Majumder: So that\u2019s where they came up with this MT-DNN model.<\/p>\n Host: Which stands for?<\/b><\/p>\n Rangan\u00a0Majumder: Multi-Task Deep Neural Network.<\/p>\n Host:\u00a0<\/b>Right<\/b>.<\/b><\/p>\n Rangan\u00a0Majumder: Yeah. So basically, like, in language, Microsoft Research has been doing a really awesome job.<\/p>\n Host: Yeah.<\/b><\/p>\n Rangan\u00a0Majumder: And while they\u2019re doing that, our team is just taking those models and productionizing them. And what\u2019s interesting is, just because you do well on academic tasks doesn\u2019t necessarily mean it\u2019s really ready to be shipped into production. And the first big learning was with the\u00a0SQuAD\u00a0dataset\u2026<\/p>\n Host: Yeah.<\/b><\/p>\n Rangan\u00a0Majumder: \u2026which I talked about back in 2016,\u00a02018. So the model they used there was called Reading Net or R-Net.<\/p>\n Host: Mm-hmm.<\/b><\/p>\n Rangan\u00a0Majumder: And we realized that data set they had was a little bit biased because every \u2013 like the way this data set works is,\u00a0you have a question and you have, like, a passage and you\u2019re basically trying to answer the question, but their entire dataset was guaranteed that every question has an answer.<\/p>\n Host: Hmm.<\/b><\/p>\n Rangan\u00a0Majumder: But in a production context, when people are asking questions to the search engine, not every question has an answer. 
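This production gap — real queries, unlike the original SQuAD, often have no answer in the passage — is typically handled by letting the model score a "no answer" option alongside every candidate span and abstaining unless the best span clearly wins. A sketch, with all scores and the margin invented for illustration:

```python
# Sketch of no-answer handling in extractive QA (SQuAD 2.0-style):
# the model scores every candidate span *and* a special "no answer"
# option; we answer only when the best span beats the null score by
# a tuned margin. All numbers below are invented for illustration.

NO_ANSWER_MARGIN = 1.0  # in practice, tuned on held-out data

def pick_answer(span_scores: dict, null_score: float):
    """Return the best span, or None if abstaining is the safer call."""
    best_span = max(span_scores, key=span_scores.get)
    if span_scores[best_span] - null_score > NO_ANSWER_MARGIN:
        return best_span
    return None  # abstain: the passage likely does not answer the question

# A passage that does contain the answer: strong span, weak null score.
print(pick_answer({"fish, turtles and birds": 8.2, "in swamps": 3.1},
                  null_score=2.0))

# A passage that does not: no span clearly beats "no answer".
print(pick_answer({"in swamps": 3.0}, null_score=2.5))
```

Raising the margin trades answer coverage for precision, which matters in a search engine where a confidently wrong answer is worse than showing ordinary results.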
And in fact, some questions shouldn\u2019t be answered at all, right?\u00a0So we need to actually also add unanswerable questions.<\/p>\n Host: Well, I want to talk about a project that you\u2019ve been involved in called Project Turing, named after the eponymous Alan Turing, and you call it your internal deep learning moonshot, which I love! What was the motivation and inspiration behind that project and what are some of the cool products, or product features, that have come out of that work?<\/b><\/p>\n Rangan\u00a0Majumder: Yeah, so Project Turing was started about 2016. The motivation for it was, we were doing a bunch of analysis on, basically, the types of queries we were getting. And there was one segment that really stood out because it was the fastest growing segment of queries. It was question queries. So people were no longer just typing in key words, they were asking questions to a search engine. So, like, instead of\u00a0people\u00a0typing in, you know, fishing license, they would say like,\u00a0fishing age in Washington, right?\u00a0What is the fishing age in Washington when I could go fish? So we looked at that and we thought well, people just don\u2019t want to click on a web page, they just want you to find the answer for them.<\/p>\n Host: Right.<\/b><\/p>\n Rangan\u00a0Majumder: And then, many times, the words that were in the question and the words that were actually in the answer were very different.<\/p>\n Host: Right.<\/b><\/p>\n Rangan\u00a0Majumder:\u00a0So the previous approach, which\u00a0was,\u00a0like,\u00a0let\u2019s just do key word matching,\u00a0was not going to work.\u00a0We had to match at a different level, at the semantic level. So,\u00a0at the time, deep learning was really impressive in terms of these perception tasks like vision,\u00a0you know,\u00a0speech\u2026\u00a0so we were thinking like could it be really good at these other higher level tasks like language? 
So, that\u2019s when we started Project Turing,\u00a0and the idea was, what if we could do,\u00a0like,\u00a0end-to-end deep learning across the entire web to be able to answer these questions?\u00a0And it basically completely changed our search\u00a0architecture\u00a0to be able to do this kind of thing. And that\u2019s why it was a moonshot. So today, every time you issue a query,\u00a0we\u2019re running deep learning across,\u00a0basically,\u00a0the entire web to get that answer. And if we didn\u2019t use deep learning, we wouldn\u2019t be able to answer a lot of these questions\u2026<\/p>\n Host: Right.<\/b><\/p>\n Rangan\u00a0Majumder: \u2026because key word matching just wouldn\u2019t work.<\/p>\n Host: So that actually is happening now?<\/b><\/p>\n Rangan\u00a0Majumder: Yes, that\u2019s correct. That is happening and as we did it, there are all sorts of new innovations that came out of it that we realized are reusable for other parts of the company. And as we kept pushing the system, we noticed users kept asking harder and harder questions so then we just had to build better and better models. So, there are a lot of interesting things that came out of Project Turing. So first was, we\u2019ve got this deep learning search stack, deep learning question answering system, but then we started to build these Turing Neural Language Representation. And then, just recently we announced the Turing NLG, or Natural Language Generation. So we realized, many times, the passage itself can be kind of long, that comes from a web page, so sometimes we need to rewrite it and shorten it for people, so that\u2019s why we started to look into this generation task. We were able to train one of the largest deep learning\u00a0language models and that\u2019s called Turing NLG and we announced that I think last month.<\/p>\n Host: Right, so it\u2019s very new.<\/b><\/p>\n Rangan\u00a0Majumder: Yes, very new. 
It\u2019s seventeen billion parameters, it was, like, impressive\u2026<\/p>\n Host: Wait, wait, wait. Seventeen billion?<\/b><\/p>\n Rangan\u00a0Majumder: Yes, seventeen billion parameters.<\/p>\n Host: Oh, my gosh.<\/b><\/p>\n Rangan\u00a0Majumder: Yeah, and just like three years ago, our biggest model was probably ten million parameters. So\u00a0it\u00a0just shows you how quickly the space is growing.<\/p>\n Host: Okay, so with that kind of context, where\u2019s the next number of parameters? Are we going to hit a trillion? I mean, is this scalable to that level?<\/b><\/p>\n Rangan\u00a0Majumder: Yeah, that\u2019s a good question. So definitely we\u2019re going to keep pushing it because every time we get an order of magnitude, we notice it could just do better, so we\u2019re not seeing it slowing down. So as long as you get improvements that could ship to customers, we\u2019re going to keep pushing the boundaries. But at the same time, we need to be more and more efficient with our computation and also\u00a0just not chase something for vanity\u2019s sake, right?<\/p>\n Host: Right.<\/b><\/p>\n Rangan\u00a0Majumder: Like just because we can get to a hundred billion parameters, which we want to be able to do, we also need to make sure we\u2019re really maximizing\u00a0the value that the model is actually getting with all those parameters too.<\/p>\n Host: I guess I should have said a hundred billion before jumping to a trillion\u2026 It\u2019s like a \u201ctriple dog dare\u201d right after the \u201cdare you.\u201d<\/b><\/p>\n (music plays)<\/i><\/b><\/p>\n Host: So drilling in a little bit on these different manifestations of your technology, I know that there\u2019s one called Brainwave that is part of the search experience now, and you had talked a little bit about the fact that Project Turing and Brainwave were co-developed, or concurrently developed, because they each had gaps that they needed to fill. 
Tell our listeners how Turing and Brainwave came about together and how it speaks to the importance of collaboration, which you\u2019ve already referred to earlier on, across research and product boundaries.<\/b><\/p>\n