…Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I'm your host, Gretchen Huizinga.

Teaching computers to read, think and communicate like humans is a daunting task, but it's one that Dr. Geoff Gordon embraces with enthusiasm and optimism. Moving from an academic role at Carnegie Mellon University to a new role as research director of the Microsoft Research lab in Montreal, Dr. Gordon embodies the current trend toward partnership between academia and industry, as we enter what many believe will be a new era of progress in machine learning and artificial intelligence.

Today, Dr. Gordon gives us a brief history of AI, including his assessment of why we might see a break in the weather pattern of AI winters, talks about how collaboration is essential to innovation in machine learning, shares his vision of the mindset it takes to tackle the biggest questions in AI, and reveals his life-long quest to make computers less… well, less computer-like. That and much more on this episode of the Microsoft Research Podcast.

Host: Geoff Gordon, thanks for coming all the way from Montreal to join us in the studio today. Welcome to the podcast.

Geoff Gordon: Thank you, I'm glad to be here.

Host: Montreal has become a global center for AI research, and you've just moved from Carnegie Mellon University to take on the role of research director at the Microsoft Research Montreal lab. We'll get to your work a bit later, but right now, what's the deal with Montreal? How has it come to be one of the most exciting places in AI research?

Geoff Gordon: Yeah, it's pretty amazing how Montreal has been sort of taking over AI. It started with the deep learning revolution, because a lot of the people who were early in on that revolution were from Canada and from this area. Yoshua Bengio is a key figure that everybody credits with bringing so much AI to Montreal. But at this point, it's taken on a life of its own. It's growing by leaps and bounds.

Host: Yeah, it's like you get one person there who starts something, and then it becomes a magnet.

Geoff Gordon: And there's a lot of credit to the Canadian government as well. They've been trying really hard to make it an attractive place. They sort of understand what research needs to prosper, and they've actively been trying to recruit talent to the area.

Host: So that it's more than just industry.

Geoff Gordon: Right, it's government. It's the universities in the area and industry all together, making a great place to do AI work.

Host: Let's talk about you for just a quick second here. You told me what you say to people when they ask you what you do for a living. Give our listeners your elevator pitch, and then maybe unpack it a bit more, assuming we're interested enough to ask the next question.

Geoff Gordon: Sure, sure. So, what I always say is, you know how computers are annoying and inflexible? My job is to try and change that. AI is sort of all about making computers less computer-like.

Host: That's a very concise statement, packed with technical work, to make it…

Geoff Gordon: Turns out it's easier said than done.

Host: I like the simplicity of it. I've already interviewed a couple of – I'll call them rock stars from your band there. Harm Van Seijen, who's doing great stuff with reinforcement learning, and Adam Trischler, who's doing machine comprehension. I have to say, it's just fascinating what's going on at the Montreal lab. What drew you there to be the front man, so to speak?

Geoff Gordon: So, the Montreal area is one thing. The other thing is just respect for the work that MSR has been doing. Microsoft Research has developed a reputation for putting together the best of academic and industrial-style research: the sort of focus on the long-term big goals, which I like from academia, and very few industrial research labs really try for that. And then there's the ability to build a focused team that's doing something big. In academia, your team is you and however many grad students you can recruit.

Host: And how much the grant will pay for.

Geoff Gordon: Right. Well yes, there is that.

Host: So, the goal of the MSR Montreal lab is very specific – to teach machines to read, think, and communicate like humans.

Geoff Gordon: Mm-hmm, that's right.

Host: And this kind of learning has traditionally been the domain of humans, so why do you think this is possible for a machine, and where are we on the path to achieving that?

Geoff Gordon: Right. Well, we're a long way from fully achieving it, but we're making lots of progress on the way. The first wave of AI was what I would call very logical. Somebody would put in a bunch of logical facts and syllogisms and that sort of thing, and they would hope that you would wind up with enough facts and enough rules so that you would know how to think. And that didn't work out so well. You were able to do some pretty impressive things with that, but you were completely unable to handle uncertainty. And humans are great at handling uncertainty. They may not like it a lot of the time, but you're handling uncertainty every second of every day.
Host: Even right now, we're handling uncertainty.

Geoff Gordon: Mm-hmm, yeah.

Host: I don't know what you're going to say next.

Geoff Gordon: No, me either.

Host: Isn't that cool, though? Because we can actually formulate what we're going to say next.

Geoff Gordon: Right, right. We make it up as we go along. That's what humans are good at.

Host: How do I know what I think until I see what I say?

Geoff Gordon: Yeah. Hmm. That's profound.
Host: I've used it before. So, keep going. AI history.

Geoff Gordon: Right, the next sort of big thing was, well, okay, let's try to handle uncertainty, and there were these things like pattern recognition. And that was very short-term thinking, right? It was, you see the picture, and you tell it it's a face, and then you're done, right? And then the next picture comes up. So that's the sort of thing that's in your cell phone camera now. People made progress on it starting many years ago, but now it's in everybody's cell phone camera, right? It draws a box around everybody's face and focuses there.

Host: And it still, to some degree, shocks me.

Geoff Gordon: Oh, it amazes me. You look at kids and what they learn. I spend all of my working hours trying to get computers to learn things, and then I go home and I talk with my kids, and I'm like, man, I have a long way to go.

Host: So where are we now?

Geoff Gordon: I think now we're getting to the point where we're trying to combine these two capabilities: the sort of reasoning-ahead capability with the capability to recognize and react. And I think, you know, language is a good example where you combine the structure. Language is ambiguous. We never realize it, right? But, "I saw the man with the telescope." Were you using the telescope to see the man, or was the man carrying a telescope?

Host: Right.

Geoff Gordon: That's a source of our uncertainty. But there's a lot of structure to it, right? You know what the phrases are within a sentence. You know what the relationships are among the words.

Host: Well, okay, so we're making progress. We're a long way off.

Geoff Gordon: Mm-hmm.

Host: And yet, this is a big commitment of the lab in Montreal, to…

Geoff Gordon: Yeah, it's a big bet.

Host: …see the next thing.

Geoff Gordon: Yeah. I mean, the nice thing is that this is the sort of bet that you always win. Research is, you know, you go in thinking you're going to accomplish one thing, and, you know, a large fraction of the time, you don't accomplish that, but you accomplish something cool anyway. There's a famous quote: "Research is what I'm doing when I don't know what I'm doing." Right?

Host: I love that. It kind of feels like a metaphor for life in some ways.

Geoff Gordon: Mm-hmm, yeah. I must be doing research a lot of the time.
Host: We all are, right? It's like trial and error. So, there are a lot of approaches to machine learning that researchers are exploring, and MSR Montreal is really known for its work in at least two of them: deep learning and reinforcement learning.

Geoff Gordon: Mm-hmm, that's right.

Host: So, give us an overview of the technical side of these approaches and how they've advanced the science of machine learning.

Geoff Gordon: So, people have known a little bit about how brains work for a long time, right? There are these simple processing elements, but there are billions of them, and they work together to achieve something that no single one of them could do on their own. And so people have tried to make computers learn that way. When people first saw that you could do this, they were like, oh my god, this is going to be great. This is going to solve AI. And they sort of overhyped it. And maybe 10 years later, the government stopped sending money into grants for that, the companies stopped investing in it, and it was what was called the AI winter. And then people started realizing that it was actually a pretty cool idea, and people had sort of the next wave of neural networks, maybe 20, 30 years later, where they were able to train simple neural networks to do interesting things. And again, everybody's like, oh my god, this is going to change the world, and overhyped it, and after a little while, we had the second AI winter. Now I think – at the risk of making a prediction – we're back where people are really excited about neural networks. And I think the change this time is that people have figured out how to work with much larger and more complicated networks and have shown success in training them. And they can do qualitatively more than their earlier cousins. Now a student of history would say, well, we're headed for another AI winter. But I'm a little skeptical of that, because now there are so many real shipping products that actually have this technology baked into them, right? It's not going to go away. Things might change in the future, but I don't think it's going to play out the same way that it did last time.

Host: Right. There are tools in place, compute power, algorithms…

Geoff Gordon: Mm-hmm, yes.

Host: …and massive data sets that weren't there for the neural networks 50 years ago.

Geoff Gordon: Right, absolutely. I mean, the neural networks 50 years ago – I've seen a photo of them. It's like a bank of hardware bigger than this room, and it was there to train a neural network with one unit. And it had potentiometers – variable resistors – and motors, and it would train by physically turning the knobs with the motors.

Host: It's like the Alan Turing enigma machine.

Geoff Gordon: Mm-hmm, right, yeah.

Host: It's all the knobs and dials and – wow. Going off script a little bit – I just had a thing on my Twitter feed that said there are eleven seasons in Washington State: Winter, fool's Spring, second Winter, Spring of deception, third Winter, mud season, actual Spring, Summer, false Fall, second Summer, actual Fall. There's a lot of winters in there, though.

Geoff Gordon: Yeah.

Host: And, you know, you never know…

Geoff Gordon: Right, you never know.

Host: …if it's a psych, or…

Geoff Gordon: Yeah.

Host: Much like AI.

Geoff Gordon: The one thing – we were talking about uncertainty, right? The one thing that you know is that you don't know what's going to happen.

Host: Right, and yet you're working on the discovery, and that's the exciting thing about being a researcher.

Geoff Gordon: Yeah, it is.

Host: Talk a little bit more about reinforcement learning, because that's a big bet of the lab. Harm gave me a good explanation a while ago, but I'd like to hear it now for the audience, in case they haven't heard Harm's podcast. And they should go back and listen, because he's really a rock star. I love him.

Geoff Gordon: So, reinforcement learning is the problem where you have your AI – the thing that you're trying to train – and it interacts with its environment. It's usually called the agent and the world: the agent is the AI, and the world is the thing it's interacting with. And what happens? You look at the world, so you get observations of the world. Those keep coming in over time. And when each observation comes in, you have to choose actions to affect the world. And this cycle of action and observation, with some thought in between, hopefully, is the definition of the reinforcement learning problem. And there's one more component, which is that based on your entire history of actions and observations, the world can from time to time give you a reward or a penalty. So, you eat a dot in Ms. Pac-Man, right, it gives you 10 points or something like that. Or you run into a ghost and you lose a life, and that's bad. And so, your goal in reinforcement learning is to have the AI figure out which actions tend to lead to reward, and which ones tend to lead to penalty. And it's difficult because you never know: you might have done something several seconds ago that leads to you being trapped by the ghost. It's not what you did right before you got eaten by the ghost, right? It's the thing that you did that got you trapped, and you have to go back and figure out what that was.
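The agent-and-world loop Gordon describes, where the agent observes, acts, occasionally receives a reward, and must assign credit to actions taken well before the reward arrived, can be sketched with tabular Q-learning. Everything here is invented for illustration: the "world" is a hypothetical five-state corridor with a reward only at the far end, and the learning constants are arbitrary but typical.

```python
import random

# The "world": a five-state corridor. The agent starts at state 0 and
# only receives a reward upon reaching state 4, so credit for success
# must flow backward to the early steps that led there.
N_STATES = 5
ACTIONS = [-1, +1]          # step left or step right
GOAL = N_STATES - 1

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), GOAL)
    if nxt == GOAL:
        return nxt, 1.0, True   # the delayed reward, only at the goal
    return nxt, 0.0, False

# Tabular Q-learning: Q[s][a] estimates the long-term reward of taking
# action a in state s.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimates,
        # occasionally explore a random action.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, ACTIONS[a])
        # The update propagates reward backward through time, solving
        # the credit-assignment problem one step at a time.
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[state][a] += alpha * (target - Q[state][a])
        state = nxt

# After training, in every non-goal state the agent values stepping
# right (toward the reward) more than stepping left.
```

The interesting part is that states far from the goal never see a reward directly; their values are learned entirely from the bootstrapped `gamma * max(Q[nxt])` term, which is how the credit travels back.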
Host: Let's talk about language, because that's the epicenter of your work. When I was talking to Adam about how a computer, how a machine, can comprehend and work within the domain of language, he was saying you'll give it data sets of a statement, like "machine learning is hard," or "machine learning is difficult."

Geoff Gordon: Right.

Host: It's like – and then the computer is only recognizing what it's been fed. Why are we kind of poised to see big advances in the domain of language as far as computers are concerned?

Geoff Gordon: Right. There's fairly amazing language comprehension baked into products that we use every day, right? My phone will do it. I think part of the reason that it's only recently made its way into products is actually not a flaw in the technology; it's that it's taken this long for that much computing power to become cheap enough to put into consumer products. It keeps getting cheaper, and we passed a threshold at some point where now companies can ship a $100 consumer electronics product that can recognize your voice. One of the other things that has been great for language research is the fact that people put up so much language on the web, so you can get a billion-word corpus of language. You have to be a major company in order to have the resources to process that.

Host: Well, you're working for one.

Geoff Gordon: Well, yeah, as it turns out. But you know, you can go and you can say, okay, "hard" is used in a lot of the same contexts as "difficult," and so those are similar words. And actually, "easy" is a similar word to "hard," because easy and hard are used in a lot of the same contexts. Whereas, you know, "avocado" – there are very few sentences where you could substitute "avocado" for the word "difficult," right?

Host: And yet, you could also use "hard" for, like, you know, a rock.

Geoff Gordon: Right, absolutely. And so effectively – I mean, you can cluster the different meanings, right? These are called vector-space embeddings of words: you can make a description of a word in terms of a list of numbers, such that the list of numbers is similar for words that occur in similar contexts. And when you do that, words that have similar vectors – similar lists of numbers – often wind up being quite close in meaning.
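The idea of describing a word by a list of numbers built from its contexts can be illustrated with raw co-occurrence counts. Real embeddings (word2vec, GloVe, and similar methods) are learned from billion-word corpora, but the intuition is the same; the six-sentence corpus below is invented for the example.

```python
from collections import Counter
import math

# A tiny invented corpus. "hard" and "difficult" appear in the same
# contexts; "avocado" does not.
corpus = [
    "the exam was hard",
    "the exam was difficult",
    "this problem is hard",
    "this problem is difficult",
    "she ate an avocado",
    "he bought an avocado",
]

# Describe each word by the counts of its immediate neighbors
# (a one-word context window on each side).
contexts = {}
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        ctx = contexts.setdefault(w, Counter())
        for j in (i - 1, i + 1):
            if 0 <= j < len(words):
                ctx[words[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# "hard" and "difficult" share the contexts "was" and "is", so their
# vectors are similar; "avocado" shares no contexts with "hard".
sim_hard_difficult = cosine(contexts["hard"], contexts["difficult"])
sim_hard_avocado = cosine(contexts["hard"], contexts["avocado"])
```

On this toy corpus the hard/difficult similarity comes out at the maximum (they occur in identical contexts) while hard/avocado comes out at zero, which is the clustering effect Gordon describes, just at miniature scale.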
Host: So, I imagine the algorithms behind all this are pretty complicated.

Geoff Gordon: Oh, but that's the fun part.

Host: For you. So, let's talk about learning for inference. That's an interesting thread of your research. What goes into developing a machine's skillset for long-term thinking, and so on?

Geoff Gordon: Well, the key problem that you have to solve is that when you learn some concept in isolation, you wind up not necessarily being able to use that concept. And so, we're essentially trying to design algorithms that make that distinction, that give a computer experience at using a learned concept, and train it so that it's better at doing that, rather than just trying to learn the concepts in isolation. You can start by learning them in isolation, but then you have to try putting them together, actually using them to make a chain of reasoning.

Host: Okay, so what does the math look like behind that?

Geoff Gordon: I mean, there's a lot of it. The simplest bit is gradient descent, which is ubiquitous in machine learning. What that means is that you have your hypothesis – your AI – described by a whole bunch of numbers, a whole bunch of knobs, where each one of the knobs has a small effect on its behavior. And you go and you have an example. You look at the example, and you see, well, did it get it right? And if it didn't, would it have gotten closer to getting it right if you tweaked the knobs a little bit? And if so, you tweak them a little bit in that direction. And then you keep going, see another example, and tweak the knobs again. And if you see a million such examples, then all of those little tweaks add up to a fairly well-tuned artificial intelligence hypothesis. But there's a lot more, right? I mean, there's linear algebra. There's functional analysis. I wind up having to learn a lot of tools in order to be able to put them together to design a new algorithm.
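The knob-tweaking Gordon describes can be sketched as stochastic gradient descent on a model with a single knob. The setup is invented for the example: the model predicts y = w * x, the examples come from a world where the true relationship is y = 3x, and each example nudges the knob a little in the direction that reduces the error.

```python
import random

random.seed(0)
w = 0.0      # the knob, initially untuned
lr = 0.05    # how far to tweak per example

for _ in range(1000):
    x = random.uniform(-1, 1)   # see an example
    y = 3.0 * x                 # the right answer for that example
    pred = w * x                # what the current knob setting predicts
    error = pred - y
    # The gradient of the squared error (pred - y)^2 with respect to w
    # is 2 * error * x; tweak the knob a little against the gradient.
    w -= lr * 2 * error * x

# After a thousand small tweaks, w has settled close to the true
# value 3.0, even though no single tweak moved it very far.
```

A real system has millions of knobs rather than one, and the gradient with respect to all of them is computed at once (by backpropagation in a neural network), but every knob is updated by exactly this kind of small per-example nudge.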
Host: How, as a researcher, do you sort through and say, I want to follow that path for a while?

Geoff Gordon: Right. I mean, there's a lot of trial and error. You learn to recognize what promising paths are. And then there's a lot of collaboration. One of the key things about research is that you can't do it in a vacuum. You really need to get a good team of people together to make progress. And I don't just mean a good team of people at one lab, although obviously that's really important. There's a whole research community, right? And they all build on each other's work. And so it's really important to have that community.

Host: I think we called that open-source research.

Geoff Gordon: Yes.

Host: As opposed to open-source code. But I mean, there's that necessary collaboration.

Geoff Gordon: Right. I mean, you can't do it on your own. The problems are just too hard. One person will have an idea to advance the state of the art just a little bit in some direction, and then somebody else will say, oh, well now that I know that, this thing I was trying to do becomes easier, and now I know how to do this other thing. And if you have 1,000 people each learning about the other people's contributions, you make more than 1,000 times as much progress.

Host: So let's circle back to Montreal for a second. You have big plans for the lab there. What are you looking for? Who are you looking for? Where are you looking for them?

Geoff Gordon: I mean, we are looking for people who are creative about how they decide to attack problems. We're looking for people with great skill. I mentioned that there are all of these mathematical tools that you need to put together to design the algorithms. If you don't know the tools, then you can't be creative about how to use them. And, you know, there's a sort of desire for exploration, right? That's the key thing in a researcher. You have to be driven by wanting to know what makes the world tick, because otherwise, you would just never be able to devote so much time and energy to solving a problem.

Host: For people like you, researchers with PhDs who have that joy of discovery and the requisite skillset, there have traditionally been two places you could go: either academia or industry. How is Microsoft Research similar or different?

Geoff Gordon: Microsoft has decided that Microsoft Research will attack the big questions – which implies, right, that you cannot know ahead of time exactly what's going to come out, because if you knew, it wouldn't be research. And it implies that you don't expect your payoffs to be measured in months, or even necessarily a couple of years. It could be that the things you're doing now pay off 10 years later. And so Microsoft has decided that MSR is in it for the long term. And that changes the type of research that you can do, right? You can afford to make big bets when you don't have to deliver the result of your work into a product in two months.

Host: So very much pure research as opposed to applied research.

Geoff Gordon: Yes.

Host: However, you're seeing more and more of the work coming out of MSR.

Geoff Gordon: That's right. So you wind up doing this work, and you don't necessarily know what you're going to get when you start, but once you do it, you look and you see that it's going to wind up being incredibly useful for some software or hardware product that Microsoft is making or is considering making, right? I've seen researchers ask a question that I would have said was completely abstract – no immediate connection to a product – and then a couple of years later, it ends up being sold to businesses around the world as a new piece of software.

Host: So that to me suggests that there is a lifting of the burden of immediate success…

Geoff Gordon: Right.

Host: …that allows you to ask a bunch of different kinds of questions.

Geoff Gordon: That's right. That's right. The freedom from having to know that you will get results in the short term allows you to ask harder questions, where you'll get results, but maybe not the ones you were looking for, in the longer term. Because you never achieve exactly what you set out to achieve, right? You achieve something good, but never exactly what you thought you would. And so it makes it tough to plan. But, you know, it makes it fun, right? Because you're always seeing something new, something you didn't expect.
Host: Yeah, difficult to plan and also difficult to measure if you have a performance review. It's like, well, I really did accomplish quite a bit. You just can't see it yet.

Geoff Gordon: Yeah, I mean, people have developed a whole bunch of imperfect metrics. There's the bad metric of just counting how many papers you publish, right? But researchers, as they make progress, will write down pieces of that progress. And you can look at them, and you can see, oh, you know, that was clever, to be able to make that chunk of progress, even if I don't know what it's going to be good for just yet. But I'll bet that something good will come out of that. And that's the sort of thing that you try to learn through experience and training: how to recognize what types of research outputs are likely to lead to further progress down the road. And again, it's hard. But that's sort of your short-term measure of progress. You look at, you know, what did you figure out today?

Host: I love that, though.

Geoff Gordon: Yeah.

Host: I mean, how rewarding is that?

Geoff Gordon: Oh, it's very rewarding. For people who really want to take apart the world and see what makes it tick, it's really cool to discover something new that you didn't know about how the world works, to add to the sum of human knowledge.

Host: Geoff, we're experiencing a kind of AI gold rush. You and I talked about the fact that companies of all stripes are trying to snap up the best talent.

Geoff Gordon: Yes.

Host: And the forest that they're clearcutting is often the universities.

Geoff Gordon: Yes.

Host: What if we clear-cut all the talent for training the next generation? As Yoshua Bengio said, it takes a long time to train a PhD.

Geoff Gordon: It does. I've experienced that. So, I think that we have to try and set it up so that it's not just a stark decision between pulling people into industry and training the next generation. One of the things that's great about MSR Montreal is we have an explicit goal of working with the local universities, of contributing to training. A lot of the faculty at local universities are interested in collaborating with the researchers at Microsoft Research, and a lot of the researchers at Microsoft Research are interested in, for example, seeking out adjunct positions at local universities – being able to work with the students there and sometimes teach classes, sometimes teach in the sense of apprenticeship – so that it doesn't have to be either-or, right? You can be part of a great lab like MSR and still contribute to training the next generation.

Host: And there are different motivators for different people.

Geoff Gordon: Right. Well, it's the same thing that I said before. In order to be good at research, you have to enjoy the thrill of discovery, right? You have to really want to know how the world works. And I think if you set up a research lab well, it makes a big difference for how fun it is to work there, how interesting, how rewarding it is to work there. I think MSR has really done a great job of setting that balance.

Host: I ask this of all the researchers I interview, usually after a long conversation that essentially covers what gets them up in the morning. So, given the work you do, even though we don't know all the future implications – let's say we do succeed. Succeed in teaching machines to interact with humans the way humans interact with humans. Is there anything about that scenario that keeps you up at night?

Geoff Gordon: I think if you're going to worry about something with AI, you should worry about people misusing AI. That could be intentional misuse, where you design an AI to accomplish some evil task, right, as you sit here and stroke your fluffy white cat. But more likely it's going to be accidental. There are all sorts of things where, if you don't spend a little bit of thought about how your AI is going to learn, then you can treat people very poorly, very unfairly. And so there's this whole area of AI research called FATE: fairness, accountability, transparency, and ethics. And that's actually one of the areas that MSR is strong in – very strong. And so, you know, we're looking at, for example, how do you train AI algorithms so that they are not biased when they make their loan decisions? The problem is, you train these things on past decisions, and so you learn to copy them, right? You told the algorithm, copy these human decisions.

Host: That's your data set.

Geoff Gordon: Right. And so if the data set is biased, then the algorithms are going to learn to copy that bias very efficiently and extend it to more people. And so you have to be careful, right? If you do things naively, without thinking, you will freeze whatever bias was in the training set, and then ship it out to a much larger number of people. There have been a lot of examples where people have accidentally done that. But there are also lots of researchers working on how you train your AI, despite biased training data, to be fair and unbiased.
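The bias-copying effect Gordon warns about can be shown with a toy sketch. The "historical" loan decisions below are entirely invented, and the "model" is just per-group frequency estimation, standing in for any model that fits the historical labels faithfully.

```python
from collections import defaultdict

# Hypothetical past decisions: (group, qualified, approved). Both groups
# are equally qualified, but the historical decisions approved group A
# far more often than group B.
history = (
    [("A", True, True)] * 90 + [("A", True, False)] * 10 +  # A: 90% approved
    [("B", True, True)] * 50 + [("B", True, False)] * 50    # B: 50% approved
)

# "Training": estimate the approval rate for each (group, qualified)
# combination, i.e., learn to copy the historical decisions.
counts = defaultdict(lambda: [0, 0])   # key -> [approvals, total]
for group, qualified, approved in history:
    key = (group, qualified)
    counts[key][0] += int(approved)
    counts[key][1] += 1

def predict_rate(group, qualified=True):
    """The learned model's approval rate for a new applicant."""
    approved, total = counts[(group, qualified)]
    return approved / total

# The model reproduces the historical gap exactly and will apply it to
# every new applicant it scores: the bias is frozen in and extended.
gap = predict_rate("A") - predict_rate("B")
```

The point is that nothing in the training procedure is malicious; imitating the labels is exactly what the objective asks for, which is why fairness has to be built in as a separate, deliberate constraint.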
Host: As we finish here, Geoff, what advice would you give to researchers who are in that space now where they're going to make a career decision – what do I do with my life? Where do I go? I've got lots of options now that I didn't have.

Geoff Gordon: Right. I mean, when you have lots of options, you should always think about what you can accomplish with them. Do you just want to, you know, go with the gold rush, stick your pan in the water, and hopefully find a few nuggets? Or do you want to solve the world's problems? Or do you want to learn what makes the world work? When you have the options, you have the luxury of being able to make choices that actually achieve the goals that you have for yourself. And so, my advice is to think carefully about what you want. Think about whether you want to learn how the world works, or whether you just want to make some money, right? There's a big difference in how you treat your options in those two cases.

Host: Well, Geoff Gordon, you are so fascinating to me. Thanks for coming in.

Geoff Gordon: Thank you.

Host: To learn more about Dr. Geoff Gordon, and the latest innovations in machine learning, visit Microsoft.com/research.
Episode 21, April 25, 2018