{"id":469611,"date":"2018-02-28T06:21:16","date_gmt":"2018-02-28T14:21:16","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=469611"},"modified":"2018-05-23T14:50:52","modified_gmt":"2018-05-23T21:50:52","slug":"keeping-an-eye-on-ai-with-dr-kate-crawford","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/podcast\/keeping-an-eye-on-ai-with-dr-kate-crawford\/","title":{"rendered":"Keeping an Eye on AI with Dr. Kate Crawford"},"content":{"rendered":"
Dr. Kate Crawford – Principal Researcher
Episode 14, February 28, 2018

Artificial intelligence has captured our imagination and made many things we would have thought impossible only a few years ago seem commonplace today. But AI has also raised some challenging issues for society writ large. Enter Dr. Kate Crawford, a principal researcher at the New York City lab of Microsoft Research. Dr. Crawford, along with an illustrious group of colleagues in computer science, engineering, social science, business and law, has dedicated her research to addressing the social implications of AI, including big topics like bias, labor and automation, rights and liberties, and ethics and governance.

Today, Dr. Crawford talks about both the promises and the problems of AI; why, when it comes to data, bigger isn't necessarily better; and how, even in an era of increasingly complex technological advances, we can adopt AI design principles that empower people to shape their technical tools in the ways they'd like to use them most.

Transcript

Kate Crawford: There is no quick technical fix to bias. It's really tempting to want to think that there's going to be some type of silver-bullet solution, that we can just tweak our algorithms or, you know, use different sorts of training data sets, or try to boost signal in particular ways. The problem with this is that it really doesn't look to the deep social and historical issues that human data is made from.

(music)

Host: You're listening to the Microsoft Research podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I'm your host, Gretchen Huizinga.

Artificial intelligence has captured our imagination and made many things we would have thought impossible only a few years ago seem commonplace today. But AI has also raised some challenging issues for society writ large. Enter Dr. Kate Crawford, a principal researcher at the New York City lab of Microsoft Research. Dr. Crawford, along with an illustrious group of colleagues in computer science, engineering, social science, business and law, has dedicated her research to addressing the social implications of AI, including big topics like bias, labor and automation, rights and liberties and ethics and governance.

Today, Dr. Crawford talks about both the promises and the problems of AI; why, when it comes to data, bigger isn't necessarily better; and how, even in an era of increasingly complex technological advances, we can adopt AI design principles that empower people to shape their technical tools in the ways they would like to use them most.

That and much more on this episode of the Microsoft Research podcast.

(music)

Host: Welcome, Kate Crawford, to the podcast. Great to have you here with us from New York City.

Kate Crawford: Thank you so much, Gretchen. It's a pleasure to be here.

Host: So, you're in the, as we've said, New York City lab of Microsoft Research. What goes on in the Big Apple?

Kate Crawford: Ahhh! So much. It's a city that doesn't sleep where research is concerned. Look, there's so much going on here. We actually have an abundance of exciting research initiatives. Obviously here, we have the Microsoft Research New York office.
Here, we have really sort of four groups, writ large. We have a really fantastic machine learning group. We have a computational social science group. We have an algorithmic economics group. And we have the FATE group, which is a group that I co-founded with my colleagues Hannah Wallach and Fernando Diaz, and it stands for fairness, accountability, transparency and ethics. And that group is coming into its third year now. We really, early on, saw that there were going to be some real concerns around ensuring that large-scale decision-making systems were producing fair results and fair predictions, and we also needed to start thinking much more about accountability for decision-making, particularly in relation to machine learning and artificial intelligence. And, of course, ethics, which is a very broad term that can mean all sorts of things. For us, it means really looking at how people are affected by a whole range of technologies that are now touching our lives, be that in criminal justice or education or in healthcare. So that was the reason we formed the FATE group here in New York. In addition to what's happening here at MSR NYC, there are also other groups. There's Data and Society, which is headed by danah boyd, just around the corner from this building. And then at NYU, we have a brand-new research institute that I co-founded with Meredith Whittaker, called the AI Now Institute. And that's the first university institute to focus on the social implications of artificial intelligence. So, there's a lot going on in New York right now.

Host: I'll say. You know, I want to ask you some specific questions about a couple of those things that you mentioned, but drill in a little bit on what computational social science means.

Kate Crawford: Yeah. That group is headed by Duncan Watts and, realistically, they are looking at large-scale data to try and make sense of particular types of patterns. So, what can you learn about how a city is working when you look at traffic flows, for example? What are the ways in which you could contextualize people's search information in order to give them different types of data that could help? I mean, there are lots of things that the CSS group does here in New York.

Host: Well, let's go back to FATE, because that's kind of a big interest of yours right now. Fairness, accountability, transparency and ethics.

Kate Crawford: Yeah.

Host: And you sort of gave us an overview of why you started the group, and it's fairly nascent. Is it having an impact already? Is it having any impact that you hoped for in the high-tech industry?

Kate Crawford: Absolutely. We've been really thrilled. I mean, even though it's only three years old, in some ways, that's actually quite established for thinking about these issues. I've been, as a researcher, focusing on questions around fairness and due process in sort of large-scale data systems and machine learning for over ten years. But it's really only been in the last 18 months or so that we've seen this huge uptick in interest across both academia and industry. So, we're starting to see algorithmic accountability groups emerge in kind of the key technology companies here in the US. We also have conferences like the FAT/ML conference, which stands for fairness, accountability and transparency, you guessed it, and which has now become a blockbuster hit.
It's actually taking place here in New York City in just two weeks, to a full house and an extensive waiting list. So, it's really actually taking off. But we are also seeing it really start to have impact within Microsoft itself. I mean, here, Hannah and myself, for example, work with various product groups who are coming up with questions: where they think a system might produce, say, a discriminatory output, what should they do? Or they might have concerns with the data that a system has been trained on. What sorts of questions might you want to ask? Including all the way through to, what are the big policy questions that we need to ask here? And we're doing things like speaking to the European Commission, to the UN, etc. So, for a small group, it's just four of us, I think it's already having a pretty outsized impact.

Host: Interesting that you say you've been doing this for about ten years, which I think is surprising to a lot of people, with machine learning coming to the forefront now. Why, do you think, only in the last 18 months have we seen an uptick?

Kate Crawford: I think it has a lot to do with scale. I mean, what's particularly interesting to me is that just in the last, actually, about three months, we've seen leaders from major technology companies, including Satya Nadella, including one of the co-founders of DeepMind, Mustafa Suleyman, and the head of Google AI, all say that fairness and bias were the key challenges for the next five years. So, it's gone from being something that was a relatively bespoke concern, shall we say, to becoming front-of-mind for all of the sort of leaders of the major technology companies. And I think the reason why, and certainly the reason that Mustafa gave when he was sort of making his statement about this, is because if you have a system that's producing a discriminatory result, say, for example, in a search, or in ads, that is affecting a billion to two billion users a day, easily. So, at that scale, that can be extremely costly and extremely dangerous for the people who are subject to those decisions. So, I think, in that sense, it's really a question of how many people can be affected by these sorts of systems, given their truly vast reach.

(music)

Host: You recently gave a talk at the NIPS conference (Neural Information Processing Systems), and the topic was called "The Trouble with Bias." What is the trouble with bias, Kate?

Kate Crawford: Well, NIPS is an amazing conference, I should say, out of the gate. It was a real honor to speak there. It is, in fact, the largest machine learning conference in the world. I was speaking in this room as the sort of opening keynote to around 8,000 people, so it kind of felt like being at a Van Halen concert or something extremely stadium rock. It was extraordinary to see. But I was really interested to talk about what's been happening with this concept of bias. And I looked at, particularly, some of the cases that we've seen emerge that have been interesting to researchers. So, for example, everything from, if you do an image search in Bing or in Google for "CEO," you'll find a lot of images of white men in suits. And, depending on which way the algorithm is blowing that day, the first woman that you'll see is, quite often, CEO Barbie. So, it raises very troubling questions about how we represent gender.
And then of course, there's a whole series of studies coming out now, looking at how we represent race. And here we can look at anything from the way that a training data set like Labeled Faces in the Wild is around 79% male and 84% white, so those are the sorts of populations that systems trained on Labeled Faces in the Wild will perform best for. So, that's in the space of facial recognition. But what I was talking about at NIPS was really sharing brand-new research that I've been working on with Solon Barocas at Cornell and Hannah Wallach here at Microsoft Research and Aaron Shapiro at U Penn, where we're looking at the way that bias in computer science has traditionally been looked at. And basically, the way that it's been studied so far, we did a big analysis of all of the papers, is that it really looks at the types of harms that cause a type of economic, or what we call allocative, harm. So, it means that a system is determined to be biased if you don't get a job, or if it decides that you don't get bail, or if you can't get access to credit. But there's a whole range of other harms, which we call representational harms, which don't necessarily mean that you don't get a job, but might mean the denigration of a particular category or community.

Host: Let's talk for a minute about the data sets that actually train our AI models. As far as the data goes, we're often told that bigger is better. But what you just said suggests this might not be the case. Can you explain how bigger data isn't necessarily better data?

Kate Crawford: Yeah, I mean, there has been, I think, this perception, for some time now, that the more data we have, the more representative it is. But that's simply not the case. If you are over-sampling from a particular population, you get a skewed picture. We could think here about, say, Twitter data, for example. There was a period, about five years ago, when people really thought Twitter data was going to be the best way to understand what was happening during a natural disaster or humanitarian crisis. But it was very, very clear from early on, and certainly some work that I was doing many years ago showed, just how skewed the demographics were of the people who were using Twitter at that time, let alone of people who have access to smartphones. So, depending on where you are in the world, that means it's not a particularly reliable signal. So, even if you have hundreds of thousands of data points, if all of those data points are being drawn from affluent populations who live in urban centers, then you're only seeing one part of the picture and it's going to be extremely difficult for you to extrapolate from that. And there are very, sort of, related problems happening with training data right now. Training data, you know, often comes from sets that have hundreds of thousands, if not millions, of particular items within them. But if they are very particularly sampled from an already skewed pool, then that's still going to produce skewed results. I mean, there have been some really, you know, interesting examples, I think, that we can look at here. And they come from all sorts of interesting places. In the case of, you know, criminal justice, there's been a lot of controversy around the use of the COMPAS risk assessment system, which essentially tries to predict a risk score for whether or not somebody will re-offend as a violent criminal.
But of course, it's, you know, trained on data that is historical crime data. And again, many criminologists and social scientists point to the fact that there is a long history of racist policing in the US. So, if you're already coming from a baseline where people of color and low-income communities are far more likely to be stopped by the police, to be arrested by the police, and to be generally surveilled, they will be over-represented in those samples. And then if you're training a system on that data, how do you actually accommodate that? These are really hard questions, and I think what I've certainly learned from looking at these questions as a researcher is that there is no quick technical fix to bias. It's really tempting to want to think that there's going to be some type of silver-bullet solution, that we can just tweak our algorithms or use different sorts of training data sets or try to boost signal in particular ways. The problem with this is that it really doesn't look to the deep social and historical issues that human data is made from. That essentially, data reflects our human history. And our human history itself has many, many instances of bias. So, I think what's so interesting, and when I talk about the trouble with bias, is that I think, particularly in computer science, we tend to scope the problem too narrowly and to think of it purely as something we can address technically. I think it's incredibly important that we understand it as a socio-technical problem. That means addressing these sorts of issues very much in an interdisciplinary context. So, if you're designing a system that has anything to do with the criminal justice system, you should be working side-by-side with people who have been doing the most important work in those fields. This pertains to every one of the domains of healthcare, education, criminal justice, policing, you name it. We have area experts who, I think, have just been, to this point, kind of left out of those development cycles, and we really need them in the room.

Host: Well, I think there's been a misconception, first of all, that data doesn't lie. And it can, if you're only representing specific populations. But also, this idea of the sort of "separation of church and state" between technology and the humanities, or technology and social science. And so, what I'm hearing, not just from you but over and over, is we have to start talking, cross-pollinating, silo-busting, whatever you want to call it, to solve some of these bigger problems, yeah?

Kate Crawford: Absolutely. I couldn't agree more. I mean, this was really one of the really big motivations behind establishing the AI Now Institute at NYU: we realized we really needed to create a university center that was inviting people from all disciplines to come and collaborate on these sorts of questions. And particularly in terms of issues around, you know, bias and fairness. But even more broadly, in terms of things like labor and automation and the future of work, right through to what happens when we start applying machine learning to critical infrastructure, like the power grid or hospitals. In order to answer any of those questions, you kind of need a really deep bench of people from very different disciplines, and we ended up trying to address that by working with six different faculties to establish AI Now.
So, it's a co-production, if you will, between computer science, engineering, social science, business and law, as well as the Center for Data Science, really because I think you need all those people to make it work.

(music)

Host: Let's switch to another topic for a bit. And that is this concept of autonomous experimentation. With the proliferation of sensors and massive amounts of data gathering, people may not be aware, much of the time, that they are in fact the data, gathered not necessarily with their knowledge or consent. Can you speak to that?

Kate Crawford: Oh, absolutely. I mean, I should say the autonomous experimentation research that we've been doing here is very much a collaborative project, work I've been doing alongside people like Hannah Wallach and Fernando Diaz and Sarah Bird. And it's been really fascinating to essentially look at how a series of new systems, which are being deployed far more widely than people realize, are actually doing forms of automated experimentation in real time. And we were looking at a range of systems including, say, what happens when you use a traffic app. You know, just using a mapping app that's saying, where is the traffic bad? Where is it good? The way these systems are working is that they're essentially constantly running experiments, large-scale experiments, often on hundreds of thousands of people simultaneously. And this can be good. In many cases, it's for things like, you know, load-balancing where people go. If we all got the same directions to go to, you know, say, downtown Manhattan from uptown Manhattan, then the roads would be unusable. They would be completely congested. So, you kind of have to load-balance between different parts. But what that also means is that some people will always be allocated to the less ideal condition set of that experiment and some will be allocated to the ideal condition set. Which means that, you know, you might be getting the fastest way to get to work that day, and somebody else will be getting a slightly slower way. And now, this sounds absolutely fine when you're thinking about, you know, just going to work; a few minutes either side isn't going to ruin your day. But what if you're going to hospital? What if you have a sick kid in the back of your car and it's really urgent that you get to a hospital? How can you say, "No, I really don't want to be allocated into the less ideal group"? Or this could happen as well in health apps. You know, how can you indicate, in an experiment that's being used to try and, say, make you jog more or do more exercise, that you're somebody who's recovering from an injury or somebody who has a heart condition? These are the sorts of issues that really indicated to us that it's important that we start doing more work on feedback mechanisms, on what sorts of consent mechanisms we can think about when people are being allocated into experiments, often without their knowledge. This is very common now. We're kind of getting used to the fact that A/B experiments at scale really began with search. But now they're really moving into, and submerging within, much more intimate systems that guide our everyday lives, from anything that you're using on your phone about your health or your engagements with friends, right through to, you know, how you spend your time.
So, how do we start to think about the consent mechanisms around experimentation? That was the real motivation behind that series of ongoing studies.

Host: Well, and it does speak to this dichotomy between self-governance and government regulation. And because we are in kind of a Wild West phase of AI, a lot of things haven't caught up yet. However, the European Union has the GDPR, which does attempt to address some of these issues. What is your thinking on whether we go with our own oversight, a "who is watching the watchdog" kind of thing, or invite regulation? What's going on in your mind about that?

Kate Crawford: It's such a good question. It's an incredibly complex question and unfortunately there are no easy answers here. I mean, certainly GDPR comes into effect in May this year, and I think it's going to be extraordinarily interesting to see what happens as it begins to be rolled out. Certainly, a lot of the technology companies have been doing a lot of work to think about how their products are going to change in terms of what they're offering. But it will also be interesting to see if it has flow-on effects to other parts of the world, like the US and a whole range of other countries that aren't covered by GDPR.

Host: Well, the interesting thing for me is that, say, the European Union has it, but it has far-reaching tentacles, right? It's not just for Europe. It's for anyone who does business with Europe. And it does, as you say, represent a very complex question.

Kate Crawford: It does. I mean, I think about this a lot in terms of artificial intelligence writ large, and that term can mean many things. So, I'm using it here to really refer to not just sort of machine-learning-based systems, but a whole range of other technologies that fit under the AI banner. And this is something that is going to have enormous impacts over the next ten years. And there's a lot of attention being paid to: what are the types of regulatory infrastructures, what are the types of state and corporate pressures on these sectors, and how is it going to change the way that people are judged by these systems? As we know, China has a social credit score. Some people find this quite a disturbing system, but there are many things in the US that are not dissimilar. So, we're already moving quite rapidly into a state where these systems are being used to have direct impacts on people's lives. And I think there are going to be a lot of questions that we have to ask about how to ensure that that is both ethical and, I think, equitable.

Host: Right. And that is where, interestingly I think, some good research could be happening, both on the technical side and the social science side, as we address these, with all of the sorts of expertise and disciplines that you talked about that are working in the FATE group.

(music)

Host: So, let's talk for a second about… there are so many questions I want to ask you, Kate, and this is just not really enough time. So, I'm coming to New York. I'm going to get into the bad traffic there and come see you.

Kate Crawford: Great.

Host: Listen, the overarching question for me right now is about how do we take these big, thorny, prickly questions… issues, and start to address them practically?
What are your thoughts on how we can maybe re-assert or recapture our "agency" in what I'd call a "get-an-app" world?

Kate Crawford: Yeah, I think that's a really fascinating question. I mean, what's interesting, of course, is how many systems touch our lives these days that we don't even have a screen interface for. So in many, many major cities, you're walking down the street and, you know, your face is being recorded. Your emotions are being predicted based on your expressions that day. You know, your geolocation is being tracked through your phone. These are all things that don't involve any type of permission structure, a screen, or even you being aware that, you know, your data and your movements are being ingested by a particular system. So, I think my concern is that while more granular permission-based structures are possible, urban computing has shifted away from the devices directly in front of us to being embedded throughout architectures, throughout streets, throughout so many systems that are in many ways invisible to us. And they're already having a real impact on how people are being assessed, and the sorts of impacts that they might experience just walking around the city in a day. So, I think we are coming up with things that would've been great to have, and would still be useful in some contexts, but they don't resolve, I think, these much bigger questions around what agency is when you're really just a point amongst many, many other data points, being used by a whole range of systems that are in many ways completely opaque to you. I think we need to do a lot more work. And certainly, I would agree with you. I mean, this is an urgent area for research. We just desperately need more people working in these areas, both technically and, I think, from these more social science perspectives.

Host: You know, it's funny, as you're speaking, Kate, you actually just identified basically a generation skip, as it were. You know, like if a country doesn't have landline phone infrastructure and goes straight to cell phones, right? And so just when we're thinking we ought to be more cognizant about giving consent to apps and technologies, you're bringing up the excellent point that there's so much going on where we don't even get asked for consent.

Kate Crawford: Absolutely. And also, I mean, there's another thing here. I think that we've really moved beyond the discussion of, you know, the idea of a single person looking at an app and deciding whether or not to allow it to have access to all of your contacts. We're in a very different state, where you could be using an app or system and you're thinking that it's for one thing, but it's actually doing something else. I mean, the classic case here is, say, you know, sort of, the Pokémon Go craze, where you are out to sort of catch little Pokémon in a sort of augmented reality environment in cities. But that became… or was being used to harvest a massive training data set to really generate sort of new maps and geolocative data. So, people, in some ways, think that they're doing one thing, but their data is being used to train a completely different type of system that, you know, they may have no idea about.
So, I think, again, this idea that we're even aware of what we're participating in, I think, has really moved on.

Host: Yeah, again with the idea that we're on a podcast and no one can see me shaking my head. This does bring up questions about the practical potential solutions that we have. Is there, from your end, any recommendation, not just about what the problems are, but about who should be tackling them and how, I mean…

Kate Crawford: Yeah, absolutely. I mean, while our conversation does cover some really thorny and, I think, you know, quite confronting questions that we're going to need to contend with as researchers and as an industry, I think that there's also a lot of hope and there's a lot that we can do. One of the things that I work on at the AI Now Institute is that we release an annual State of AI report. And in that report, we make a series of recommendations every year. And in the last report, which just came out a couple of months ago, we really sort of made some very clear and direct recommendations about how to address some of these core problems. One of them is, we just think that before releasing an AI system, particularly in a high-stakes domain, so all of those things we've chatted about like, you know, healthcare, criminal justice, education, companies should be running rigorous pre-release trials to ensure that those systems aren't going to amplify, you know, errors and biases. And I see that as a fairly basic request. I mean, it's certainly something we expect of pharmaceutical drugs before they go on the market. That they've been tested, that they won't cause harm. The same is true of, you know, consumer devices. Ideally, they don't blow up, you know, with some exceptions. You really want to make sure that there's some pretty extensive testing and trials. And then trials that can be sort of publicly shared, so that we can say that we have assessed that this system, you know, doesn't produce disparate impact on different types of communities. Another recommendation, another thing we can do, is that after releasing these sorts of complex, often, you know, algorithmically driven systems, we just continue to monitor their use across different contexts and communities. Often, a type of system is released, and it's assumed that it's just going to work for everybody for an indefinite period of time. It's just not the case. I was looking at a study recently that suggested that medical health data has a four-month half-life. That's four months before it becomes out of date, before the data in that training data set is actually going to be contradicted by things that have happened after the fact. So, we need to keep thinking about: how long is a system relevant? How might it be performing differently for different communities? So, this is another recommendation that we feel very strongly about. But there are many more, and if people are interested in particular types of research questions or concrete steps that we can take moving forward, the AI Now 2017 report has a lot of those in there for more reading.

Host: Yeah. I would actually encourage our listeners to get it, and read it, because it is fascinating, and it addresses things from, sort of, an upstream point of view, which is, I think, where we need to go. We're a bit downstream.
Because we've released the Kraken, in many ways, of AI. It's like suddenly they're… in fact, back to the original… you know, the last 18 months, I think part of the reason, maybe, that we're seeing an increased interest is because people are realizing, "Hey, this is happening, and it's happening to me, and now I care!" Prior to that, I just kind of went along and didn't pay attention. So, attention is a good thing. Let me ask you this. You mentioned a concept from architecture called "desire lines" in a talk that I heard, and I loved it in the sense that it's like shortcuts in public spaces, where, you know, you find a path on the grass just because people don't want to go around on the concrete. And I would say things like Napster and BitTorrent are sort of examples of technical desire lines, where it's like, I'm going to go around. Is there anything in that space that's sort of happening now from a grassroots perspective, a sort of "take back the night" kind of thing, in the AI world?

Kate Crawford: Yes, I love this. You know, I think there is. There are some really nice examples of very small grassroots efforts that have been extremely successful in doing things like, essentially, creating spaces of anonymity where you're much more protected from your data being harvested for reasons that you may or may not have agreed to. Even things like the private messaging service Signal, which again was created really by, you know, one guy and a few of his friends. Moxie Marlinspike has been, I think, very much a champion of the idea that individuals can, you know, create these types of systems that can create more freedom and more agency for people. And there are others, too. I think it's going to be interesting to think about how that will happen in an AI-driven world. And even, I think, in the major technology companies, I think it's really important to create ways that people can start to shape these tools in the way that they'd most like to use them, and to give some space for sort of desire lines, where you can say, well, I don't actually want my AI system to work this way, or I don't want it to have access to this information. But how can I train it to do what I want it to do, to make it an advocate for me? So, rather than serving the interests of, you know, just a particular company, it's really there as my agent, as someone who's going to look out for things that I care about. These are things that we can think of as design principles. And certainly, it's something that people do talk about a lot at Microsoft Research. And it's something that, I think, is really exciting and inspiring for more work to happen now.

Host: I couldn't agree more. It feels like there's a "call" both to the technical designers and makers, and to the end users, who need to say, hey, I need to pay attention. I just can't be lazy. I need to take agency.

Kate Crawford: I think that's absolutely right.

Host: Kate, before we go, what thoughts or advice would you leave with our listeners, many of whom are aspiring researchers who might have an interest in the social impact of AI? I would say go broad on this, for both computer scientists, social scientists, any kind of, you know… the interdisciplinary crowd.

Kate Crawford: Wow. Well, first of all, I'd say, welcome!
You're actually asking some of the most important questions that can be asked right now in the research world. And it would be amazing to see people really start to dig into specific domain questions, specific tools, and really start to say, you know, what kind of world do we want to be living in, and how can our tools best serve us there? In terms of resources that you can go to now, there really are some great conferences. Even the big, you know, machine-learning conferences like NIPS have workshops focused on things like fairness and accountability. We have the FAT/ML conference, which is annual. But there are also, you know, the AI Now conferences, which happen every year. And there's a lot of, I think, discussion that's been happening in a series of groups and reading groups in various cities that people can connect with. And I just think there's like a thriving research community now that I'm certainly starting to see grow very rapidly because these questions are so pressing. So, in essence, I'd say regardless of your field, AI is going to be changing the way we think and work. And that means that, if you're a researcher, this is likely something that you want to start caring about. And please, if there are ways that I can help or if people want to get in touch, they are welcome to do so.