Collaborators: Prompt engineering with Siddharth Suri and David Holtz http://approjects.co.za/?big=en-us/research/podcast/collaborators-prompt-engineering-with-siddharth-suri-and-david-holtz/ Mon, 11 Nov 2024 14:30:00 +0000 http://approjects.co.za/?big=en-us/research/?p=1101900 Researcher Siddharth Suri and professor David Holtz give a brief history of prompt engineering, discuss the debate behind their recent collaboration, and share what they found from studying how people’s approaches to prompting change as models advance.

Illustrated images of Siddharth Suri and David Holtz. “Collaborators: A Microsoft Research Podcast” runs along the bottom.

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with.

How significant will prompt engineering be as generative AI models continue to advance? After previous successful collaborations, Siddharth Suri, a Microsoft senior principal researcher, and David Holtz, an assistant professor at the University of California, Berkeley and a former intern of Suri’s, reunited to address the debate with data. In this episode, they discuss their study of how prompting approaches change as models advance. They share how the work required finding a variety of additional perspectives in what they describe as an Ocean’s Eleven-style recruitment effort; why mastering chain-of-thought prompting and other specialized methods might not be a prerequisite for getting what you want from a model; and, for aspiring researchers, what some butterflies can tell you about the types of challenges you’re pursuing. Suri and Holtz’s work is part of the Microsoft Research initiative AI, Cognition, and the Economy, or AICE, and is supported by the Microsoft Research initiative Accelerate Foundation Models Research, or AFMR.

Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

SIDDHARTH SURI: So, it’s, like, just before Thanksgiving 2020. My manager came to me, and she was like, Sid, we need somebody to understand, what are the effects of AI on society? And I was like, “Oh, yeah, small question! Yeah, I can do that by myself! Yeah. I’ll get you an answer by Tuesday,” OK? I felt like I was dropped in outer space, and I had to find Earth. And I didn’t even … I couldn’t even see the sun. Like, I … there was this entirely new system out there. No one knew how to use it. What are the right questions to ask? We were using the system to study how people use the system? Like, what the heck is going on?

DAVID HOLTZ: And I remember thinking, this seems like the most important thing that a person could be working on and studying right now. Like, anything else that I’m working on seems unimportant in comparison to the impact that this technology is poised to have on so many different facets of, you know, life and the economy and things like that.

[TEASER ENDS]

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC FADES]

Today I’m talking to Dr. Siddharth Suri, also known as Sid, who’s a computational social scientist and a senior principal researcher at Microsoft Research. With him is Dr. David Holtz, an assistant professor in the Haas School of Business at the University of California, Berkeley. Sid and David are co-leading a team of researchers who are exploring the fascinating world of prompt engineering as part of the AI, Cognition, and the Economy, or AICE, initiative at Microsoft Research. I can’t wait to get into the meat of this research, but before we do, let’s meet our researchers. Sid, you first!

SIDDHARTH SURI: Hey, Gretchen, thanks for having me.

HUIZINGA: Tell us about yourself. At what intersection do your research interests lie, and what path led you to what you’re doing at Microsoft Research today?

SURI: So I got to where I am now through a very long and circuitous route, and I’ll give you the sort of CliffsNotes version of it, if you will. If you start back in grad school, my dream was to become a theoretical computer scientist. And what that basically means is writing algorithms. And what that basically means is pushing Greek symbols around a page. [LAUGHTER] And it turns out I’m good at that, but I’m not great at that. And towards the end of grad school, I was working with another professor, and he was doing these experiments that involved humans, and what we would do is we bring undergraduates into a lab. They were sitting in front of a computer using our software. We’d arrange them in different networks, so you’re trying to solve a problem with the people who are next to you in this network. And then we would change the structure of that network and have them solve the problem again. And we would try to understand, how does the structure of this network affect their ability to solve this problem? And I remember analyzing this data. I just was swimming around in this data and having a grand old time. I … nights, weekends … I remember riding the bus to school in Philadelphia, and I was trying to think about new analyses I could do. And it was just so … it was fun. I couldn’t get enough. And I remember my adviser talking to me one day, and he’s like, Sid, you’re really good at this. And I responded with, really good at what? I’m just doing the obvious thing that anybody would do. And he was like, bro, this is not obvious. Like, you know, you got a knack for this. And then that, sort of, set me on this path, and then, just to make a little long story short, I don’t have tons of self-awareness. So it took me like 10 full years to go from, like, deciding to hang up being a theoretical computer scientist and understanding humans, human behavior, and using technology to understand human behavior. And that’s, kind of, where I ended up as a computational social scientist. I’ve sort of gone all in in that space, as a computational social scientist. And that’s how David and I met. He’s a rising star in that space, as well. He became my intern. And that’s how we met. I’ll let him share his origin story with you.

HUIZINGA: Well, let’s do, David. I noticed you have a strong science background, but now you’re an assistant professor in a business school. So you got to do a little dot-connecting here. How did a guy with a degree in physics and astronomy—and should I also mention theater and dance? I’m so intrigued—um, how did that guy wind up working with MBAs and economists?

DAVID HOLTZ: Yeah, thanks for having me, Gretchen. Similar to Sid, my path to where I am today is also long and circuitous, and I will try to give you the CliffsNotes version. When I was young, I was always super interested in physics, and I think what drew me to physics was the way that it combined math, which I was very good at when I was younger, and the ability to answer big existential questions. Where does the universe come from? What’s the universe made out of? Is it growing? Is it shrinking? Things like that. And so when I went to college, I didn’t think too deeply about what I was going to study. I just, sort of, you know, always wanted to do physics. I’m going to do physics. And so I majored in physics. And then … I did my undergrad at Princeton, and there’s something about the physics department at Princeton where it’s almost just assumed everyone’s going to go get their PhD. And so there was a lot of “ambient pressure” to apply to graduate school. And so I actually started my physics PhD at Johns Hopkins. And as a PhD student, I was working on these large telescopes that look at remnant light from right after the Big Bang and try to characterize, you know, tiny fluctuations in this field of light that fills the night sky in a wavelength-like range that is not visible to the human eye. And by, sort of, characterizing those fluctuations in the light field, you can learn things about what the universe is made out of and how it’s evolving and all these types of things. It all sounds very cool. But the teams that conduct this research at this point are really big. It’s like you’re in a company, essentially. So there’s a hundred people working on building this telescope, analyzing these telescopes, so on and so forth. And so the actual day to day of my life as a physics PhD student was really far removed from the big existential questions that I was actually really interested in. My PhD dissertation probably would have been developing a system that moved a mirror in exactly this way so that light polarization appears, you know, in the experimental apparatus. You’re basically doing an engineering degree. And on top of all that, like Sid, I was good at physics, but I think I realized I was not great at physics. And I saw a lot of people around me in my classes and in my labs that were great at physics and moreover were having a really hard time finding a job as a physics professor after they graduated despite being great at physics. And so I started having these realizations during graduate school and had never done anything really except physics and so took a leave of absence and actually came out to the Bay Area and started working out here in advertising, which is not something that I’m necessarily super excited about—and as a product manager, which is not what I do. But it was kind of the hop that I needed to try something different. And after some amount of time, moved from doing product management to doing data science. This was right when the data science boom was starting. I think the year that I came to the Bay Area, DJ Patil, who used to be the chief data scientist for the US, had written this very famous HBR article about, you know, how data science was the sexiest job of the 21st century …

HUIZINGA: Right!

HOLTZ: … so I, kind of, took my physics credentials and became a data scientist and eventually also moved out of advertising and went and worked at Airbnb, which at the time was growing really quickly and, you know, was sort of a young company where a lot of exciting things were happening. You know, I loved working at Airbnb. I learned a lot. I met a lot of interesting people. I learned a lot working in ad tech, as well, and eventually just found myself feeling pulled back to academia. Like, I really liked the questions that I was working on, the types of work that I was doing. Similar to Sid, I found that I was really good at analyzing data. I didn’t feel like I was doing anything particularly crazy, but people around me were saying, no man, you’re really good at this! And so I started looking for PhD programs where I could do the type of work that I was doing as a data scientist at Airbnb but in a more academic environment. And that, sort of, naturally led me to PhD programs in business schools. I didn’t know what a PhD in a business school entailed, but there were professors in those departments that were doing the research that I wanted to do. And so that’s how I ended up there. And so my research when I started out as a PhD student was, I think, relative to a lot of people, I didn’t start from, like, first principles. I don’t know that I necessarily had this one little thing that I was super interested in. I was really interested in solving applied problems and, in particular, I think some of the applied problems that I had seen out in the world working in tech. And over time, I think I found that I’m just really interested in new technologies and how those technologies affect, you know, the flow of information, how people collaborate, what happens to the economy, so on and so forth. And so I sort of started by just trying to answer a few problems that were in front of me and discovered this was kind of, you know, sort of the unifying theory of the things that I was interested in studying. And I think … you know, in hindsight, I think one thing that is true that has kind of guided, you know, my path—and this connects back to the theater and dance, you know, minor that you had alluded to earlier—is I’ve always been a really social person. I’ve always been really interested in humans and how they interact. I think that type of storytelling is really at the crux of, you know, theater and music and things like that. And when I was younger, for sure, I spent a lot of time writing music, playing music, doing improv comedy, performing on stage. And as a physicist, that itch wasn’t necessarily getting scratched, both because I was just studying, you know, extremely small particles and was doing it in a pretty lonely lab. And a nice thing about being a computational social scientist is that I’m studying humans, which is really interesting. I think it plugs into something that I’m really passionate about. And a cool thing about getting to do that in particular in a business-school setting, I think, is that, you know, I’m talking often to people at companies and, you know, lecturing to MBA students, who are really outgoing, gregarious people. And so it presents a really nice opportunity to, kind of, fuse, you know, my interest in science and information and technology with that other interest in humans and connection and, you know, the opportunity to, sort of, interact with people.

HUIZINGA: Yeah, yeah. Well, escaping from middle management in physics is probably a good thing … Well, before we get into the details of your collaboration on prompt engineering, let’s make sure everyone knows what we’re talking about. Sid, when we talked before, I told you, to be honest, when I first heard the phrase “prompt engineer” a couple years ago, I laughed because I thought it was a joke, like sanitation engineer. Then when I heard it was a real job, I laughed a little bit less. And then when I heard it was not only a real job but one that, if you were good at it, could pay six figures, I stopped laughing altogether and started paying attention. So I’d like you, Sid, to give us a brief history of prompt engineering. What is it, when and how did it become a thing, and why is it different from anything I’d do in garden-variety internet search?

SURI: So generative AI wants to do just that. It wants to generate something for you. But how do you express what you want? What do you want the system to give you? And the answer is a prompt. So I’ll give you an example. Whenever there’s a new model out there, especially one that generates images, a prompt I use—you might laugh at this—is, “Show me a picture of Bruno Mars on the surface of Mars eating a Mars bar.” [LAUGHTER] And the reason why I use that prompt is because Mars bars aren’t in the training data. There’s not a lot of pictures of Mars in the training data. And everybody knows who Bruno Mars is. So that’s me describing to the model what I want. That is a prompt. Show me a picture with these elements in it, OK? But this is where the hard part starts. It sends you something. Oh. I didn’t want Mars to be that color of red. Could you change it to a deeper red or more of an orange? OK. Now, could you put a little dust in the atmosphere? OK. Well, I want a moon in the background. I didn’t know I wanted a moon in the background, but now I do. Where’s the sun in this image? I don’t know. And then the whole thing, kind of, becomes much more rich and a much bigger exploration compared to, say, putting keywords into a search engine. It’s a really much more rich space to explore. Now you asked me … a part of your question was, why is prompt engineering difficult? It’s difficult for a number of reasons. Number one, you don’t always know what you want.

HUIZINGA: Yeah …

SURI: And so it’s that conversation with the system to figure that out. Number two, you might not be expressing what you want as clearly as you think you are.

HUIZINGA: Right …

SURI: Number three, the problem could be on the receiver end. These models are new. You might be expressing it clearly, but they might not be understanding what you’re saying as clearly as you would hope. And then the fourth reason is the one I just said, which is, like, what you’re asking for is not just like, “Give me a document relevant to these keywords,” or “Give me some information relevant to these keywords,” as you would do in traditional search. You’re asking for something much more rich. And to get that richness that you were hoping for requires this prompt. And that requires an exploration of the idea in your head and an expression of that idea in the real world. So that’s what prompt engineering is, and that’s why it’s hard.

HUIZINGA: OK, and when would you say it became a thing? I mean, prompt engineer is an actual job, but it was a thing first, right? It didn’t start out to be a job; it started out to be something you did, so …

SURI: So when these models came out, you know, what was it, late, around 2020, late 2020, I think, when they first started becoming popular. So prompting had been around in academia a few years prior to that, but it first hit the mainstream when these models, sort of, first came out around 2020, and why … why this job? Why this six-figure salary? Why all … what’s all the hoopla about it? And like I said before, these systems are new. No one knew how to use them. No one knew how to express what they want, A. B, there’s a lot of arcane ways to prompt that aren’t obvious at the beginning. Like, I’ll give you a few examples. One way to prompt is to give the system examples of what you’re looking for. Say you want something to classify an email as spam or not spam. You might give it a few emails that are spam and a few emails that are not spam and say, hey, if it’s more like this, call it spam; if it looks more like that, call it not spam. And so that’s one example. Another example would be like, OK, I’m a small-business owner. I need some advice. This is the problem I’m facing. Give me some advice to solve this problem as if you were Bill Gates.

HUIZINGA: Oh …

SURI: That’s, like, adopting a persona. That’s another example. A third example would be, like, OK, you have a math problem. You’re trying to solve this math problem, and to get it done correctly, some of these systems need what’s known as chain-of-thought prompting, which is tell me all the steps you’re going through to solve this problem. Don’t just give me the answer 17. Give me all the steps you needed to get to 17. And that helps the system guide it, more likely, towards a correct answer. And so these are all arcane, esoteric methodologies to getting one of these models to give you the right answer, the answer you want. And being a prompt engineer means you’re an expert in these things and you’re more likely to get these correct answers than maybe someone off the street who isn’t familiar with these techniques.
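
To make the three prompting patterns Suri describes concrete, here is a minimal sketch of what such prompts can look like. The wording, the spam examples, and the math problem below are illustrative assumptions rather than prompts from the study, and they are shown as plain Python strings rather than as calls to any particular model API.

```python
# Illustrative prompt templates for the three techniques described above.
# All of the example text is hypothetical and not drawn from the study.

# 1. Few-shot prompting: show the model labeled examples before the new case.
few_shot_prompt = """Classify each email as SPAM or NOT SPAM.

Email: "Congratulations! You've won a free cruise. Click here to claim."
Label: SPAM

Email: "Hi team, the project review has moved to 3pm on Thursday."
Label: NOT SPAM

Email: "Limited offer!!! Act now to double your income from home."
Label:"""

# 2. Persona prompting: ask the model to answer in a particular role.
persona_prompt = (
    "You are Bill Gates advising a small-business owner. "
    "My bakery's foot traffic dropped 30% this year. "
    "What three steps would you take first?"
)

# 3. Chain-of-thought prompting: ask for the intermediate reasoning steps,
#    not just the final answer.
chain_of_thought_prompt = (
    "A train travels 120 miles in 3 hours and then 80 miles in 2 hours. "
    "What is its average speed for the whole trip? "
    "Show each step of your reasoning before giving the final answer."
)

if __name__ == "__main__":
    for name, prompt in [("few-shot", few_shot_prompt),
                         ("persona", persona_prompt),
                         ("chain-of-thought", chain_of_thought_prompt)]:
        print(f"--- {name} ---\n{prompt}\n")
```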

HUIZINGA: Right, right, right. Well, we’re going to talk a lot more about technique and the research that you did. And you’ve alluded to, at the beginning here, a visual, like describing … I heard graphic designers hearing the client when you were talking about it: “I didn’t want that red. Maybe put the moon in …” [LAUGHS]

SURI: Yeah, exactly!

HUIZINGA: Can you just tell me what you want to begin with? No, apparently not. But you’re also talking about verbal prompts and writing and so on. So we’ll get into that in a bit. But I want to go over and talk a little bit more about this research and why it’s where it is. This episode is the latest in our “series within a series” on AI, Cognition, and the Economy at Microsoft Research. And so far, we’ve talked about the impacts of AI on both cognition with Abi Sellen and the economy with Mert [Demirer] and Brendan [Lucier]. You can look up those episodes, fantastic episodes. This topic is a little less obvious, at least to me. So, David, maybe you could shed some light on how research for prompt engineering became part of AICE and why it’s an important line of research right now.

HOLTZ: So I think this project relates to both cognition and the economy. And let me lay out for you the argument for both. So first, you know, I’m not a cognitive scientist, but I think there are some interesting questions around how people, and in particular common people who are not computer scientists, conceive of and interact with these models, right. So how do they learn how to prompt? Do they think about different generative models as all being the same, or are they sort of developing different prompting strategies for different models? What are the types of tricks that they discover or use when they’re prompting models? And at the time that we started working on this project, there wasn’t a lot of research on this and there wasn’t a lot of data on this. You know, the data that existed typically is on the servers of big companies like Microsoft. It’s not really available to the public or to many researchers. And then the research is all, you know, sort of disproportionately focused on these esoteric prompting strategies that Sid mentioned, like chain-of-thought prompting, which are useful but are not things that, you know, my family members that are not scientists are going to be using when they’re trying to interact with, you know, the latest large language model that has been launched. So that was one draw of the project. The other thing that I think is interesting—and the reason that this project was well-suited to the AICE program—is that around the time that we were starting to work on this project, a bunch of research was coming out, and I’ve contributed to some of this research on a different project, on the impacts that generative AI can have on different economic outcomes that we care about. So things like productivity and job performance. And one interesting pattern that has emerged across numerous different studies trying to answer those types of questions is that the benefits of generative AI are often not uniform. Usually, generative AI really helps some workers, and there are other workers that it doesn’t help as much. And so there’s some interesting questions around why is it that some people are able to unlock big productivity gains using generative AI and others can’t. And one potential reason for this is the ways that people prompt the models, right. So I think understanding how people are actually interacting with these models when they’re trying to do work is a big part of understanding the potential impact that these models can have on the economy.

HUIZINGA: OK, it’s “how I met your mother” time. Let’s talk for a minute about how you two came to be working, along with what you’ve referred to as a “crack team” of researchers, on this study. So, Sid, why don’t you tell us, as you remember it, who called who, how the conversation went down, and who’s all involved. And then David can confirm, deny, or add color from his perspective.

SURI: OK, I need you to mentally rewind back to, like, November 2020. So it’s, like, just before Thanksgiving 2020. My manager came to me, and she was like, Sid, we need somebody to understand, what are the effects of AI on society? And I was like, “Oh, yeah, small question! Yeah, I can do that by myself! Yeah. I’ll get you an answer by Tuesday,” OK? Like, what the heck, man? That was like one of the biggest questions of all time. The first thing I did was assemble a team. We write an agenda; we start going forward from there. You know, Scott Counts is a colleague of mine; he was on that team. Not long after that … as I had mentioned before, David was my intern, and he and I started brainstorming. I don’t remember who called who. Maybe David does. I don’t remember that. But what I do remember is having several fun, productive brainstorming conversations with him. I remember vividly, I was, like, sort of walking around my house, you know, upstairs, kind of, trying to bounce ideas off of him and get the creative juices flowing. And one of the things we were talking about was, I just felt like, again, this is early on, but prompting is the thing. Like, everybody’s talking about it; nobody knows how to do it; people are arguing. So David and I were brainstorming, and then we came up with this idea of studying prompting and how prompting changes as the models get better and better, which they are, at a torrid rate. And so that was our, sort of, key question. And then David actually was primarily involved in assembling the crack team, and he’s going to talk more about that. But as a side note, it’s really cool for me to see David, kind of, grow from being, you know, just a great, sort of, individual scientist to, like, the leader of this team, so that was, kind of, a cool thing for me to see.

HUIZINGA: Hmm. You know, you tell that story … Peter Lee, who’s the president of Microsoft Research, tells a similar story where a certain CEO from a certain company came and dropped him in the middle of the AI and healthcare ocean and said find land. So did it have that same sort of “overwhelmed-ness” to it when you got asked to do this?

SURI: Overwhelmed would be an understatement! [LAUGHTER] It was overwhelming to the point where I was borderline afraid.

HUIZINGA: Oh, dear!

SURI: Like, you know, Peter has this analogy you mentioned, you know, “dropped in the ocean, find land.” I felt like I was dropped in outer space and I had to find Earth. And I didn’t even … I couldn’t even see the sun. Like, I … there was this entirely new system out there. No one knew how to use it. What are the right questions to ask? We were using the system to study how people use the system? Like, what the heck is going on? This was, like, stress levels were on 12. It was a sort of wild, white-knuckle, anxiety-inducing, fun, intense ride. All of those emotions wrapped up together. And I’m happy it’s over [LAUGHS] because, you know, I don’t think it was sustainable, but it was an intensely productive, intensely … again, just in case there’s any budding scientists out there, whenever you’re like swimming around in a problem and your gut is a little scared, like, I don’t know how to do this. I don’t know if I’m doing this right. You’re probably working on the right problem. Because if you know how to do it and you know how to do it right, it’s probably too easy.

HUIZINGA: Yeah!

SURI: And in this moment, boy, my gut was telling me that nobody knows how to do this and we got to figure this out.

HUIZINGA: Right. David, from your theater background, did you have some of these same emotions?

HOLTZ: Yeah, I think so. I think Sid and I, it’s interesting, we have different perspectives on this kind of interesting generative AI moment. And to use the theater analogy, I think being, you know, like, a researcher at Microsoft, Sid has kind of been able, the whole time, to see behind the curtain and see everything that’s going on. And then as someone that is, you know, a researcher in academia, I’ve sort of been in the audience to some extent. Like, I can see what’s coming out onto the stage but haven’t seen all the craziness that was happening behind the curtain. And so I think for me, the way that I would tell the story of how this project came together is, after I had finished my internship and Sid and I—and a number of coauthors—had this very successful remote work paper, we just kept in touch, and every few weeks we’d say, hey, you know, want to chat, see what we’re both working on, swap research ideas?

HUIZINGA: Yeah …

HOLTZ: And for me, I was always looking for a way to work together with Sid. And if you look around at, you know, the history of science, there’s these Kahneman and Tversky, like, Watson and Crick. Like, there are these teams that stay together over long periods of time and they’re able to produce really amazing research, and so I realized that one thing that I should prioritize is trying to find people that I really like working together, that I really click with, and just trying to keep on working with those people. Because that’s one of the keys to having a really successful career. At the same time, all this generative AI stuff was happening, and I went to a few talks. One of them was on the Berkeley campus, and it was a talk by someone at Microsoft Research, and it was about, sort of, early signs of how amazing, you know, GPT-4 was. And I remember thinking, this seems like the most important thing that a person could be working on and studying right now. Like, anything else that I’m working on seems unimportant in comparison to the impact that this technology …

HUIZINGA: Wow …

HOLTZ: … is poised to have on so many different facets of, you know, life and the economy and things like that. And so I think things kind of came together nicely in that there was this opportunity for Sid and I to work together again and to work together again on something that we both agreed was just so incredibly important. And I think we realized this is really important. We really want to work on this problem. But we’re also both super busy people, and we don’t necessarily have all the skills that we need to do this project. And given how important this question is and how quickly things are moving, we can’t afford to have this be a project where it’s like, every now and then … we come back to it … maybe we’ll have a paper in, like, three years. You know, like, things needed to happen really quickly. And so that’s where we got to thinking, OK, we need to put together a team. And that’s kind of where this, like, almost, like, Ocean’s Eleven, sort of, scene emerged [LAUGHTER] where we’re like, we’re putting together a team. We need a set of people that all have very particular skills, you know, and I’m very lucky that I did my PhD at MIT in this sort of community that is, I would say, one of the highest concentrations of really skilled computational social scientists in the world, basically.

HUIZINGA: Wow.

HOLTZ: And so I, sort of, went to, you know, to that community and looked for people. I reached out to people that I had met during the PhD admissions program that were really promising, you know, young PhD students that might want to work on the project and, sort of, put the team together. And so this project is not just Sid and I. It’s six other people: Eaman Jahani, Ben Manning, Hong-Yi TuYe, Joe Zhang, Mohammed Alsobay, and Christos Nicolaides. And everyone has brought something unique and important to the project. And it’s really kind of crazy when you think about it because on the one hand, you know, sometimes, when we’re talking, it’s like, wow, eight people. It’s really a lot of people to have on a paper. But at the same time, you, kind of, look at the contributions that every single person made to the project and you, kind of, realize, oh, this project actually could not have happened if any one of these people were not involved. So it’s been a really interesting and fun project in that way.

SURI: One thing I just wanted to add, Gretchen, is, I’m a little bit older than David, and when I look back at my career and my favorite projects, they all have that property that David was alluding to. If you knocked one of the coauthors off that project, it wouldn’t have been as good. To this day, I can’t figure out why that is so important, but it is. It’s just this notion that everyone contributed something and that something was unique that no one else would have figured out.

HUIZINGA: Well, and the allusion to Ocean’s Eleven is exactly that. Like, they have to get someone who can crack a safe, and they have to get someone who’s a contortionist and can fit into a box that no one can see, and blah, blah, blah. And I don’t know if you’ve argued about which one of you is George Clooney and which one of you is Brad Pitt, but we’ll leave that for a separate podcast.

SURI: Well, actually … [LAUGHTER] it’s not even a question because Eaman Jahani is by far the most handsome one of us, so he’s Brad Pitt. It’s not even close. [LAUGHS]

HUIZINGA: David’s giggling!

HOLTZ: Yeah, I think Sid … I’d agree with that. I think Sid is probably George Clooney.

SURI: I’ll take it. I’ll take it!

HUIZINGA: Anytime! Well, we’ll talk about some more movies in a minute, but let’s get into the details of this research. And, Sid, I was looking at some of the research that you’re building on from your literature, and I found some interesting papers that suggest there’s some debate on the topic. You’ve just alluded to that. But let’s talk about the titles: AI’s hottest job: Prompt engineer, and, like, Tech’s hottest new job: AI whisperer. No coding required. But then there’s this Harvard Business Review article titled AI prompt engineering isn’t the future. And that left me wondering who’s right. So I suspect this was part of the “prompting” for this research. Tell us exactly what you did and how you did it.

SURI: Sure, so where we came to this question was, we came at it from a couple directions. One is what you just said. There’s this conversation going on in the public sphere, which is on the one hand, there’s these jobs; there’s this notion that prompting, prompt engineering, is a super important thing; it’s paying six figures. On the other hand, there’s also this notion that these models are getting better and better. They’re more able to figure out what you needed and guess what you needed and so maybe we’re not going to need prompting going forward.

HUIZINGA: Right.

SURI: And David and I were like, this is perfect. One of my mentors, Duncan Watts, I always joke with him that every introduction of our paper is the same. It’s “There’s this group of people that say x, and there’s this group of people that say the opposite of x. So we did an experiment to figure it out.” And the reason why every introduction of one of my papers is the same is because you can never say at the end it was obvious. If it was so obvious, then how come there’s two groups of people disagreeing on what the outcome’s going to be? So what we did in the experiment—it’s very simple to explain—is we gave people a target image, and then they randomly either got DALL-E 2 or DALL-E 3. And we said, “OK, write a prompt to generate this target image that we’ve given you,” and we give them 10 tries. “And you can iterate; you can improve; you can experiment. Do whatever you want.” And the notion was, as models progress, what is the relationship between people’s ability to prompt them to get to the target?
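
For readers who want a picture of the setup, here is a rough sketch of the experimental loop Suri describes: a target image, random assignment to one of two models, and up to 10 prompt attempts scored for how close they get. The `generate_image` and `similarity` functions below are hypothetical stand-ins; they are not the actual DALL-E calls or the similarity measure used in the study.

```python
"""Rough sketch of the experiment described above: random model assignment,
up to 10 prompting attempts, and a similarity score per attempt.

`generate_image` and `similarity` are hypothetical placeholders, not the
real DALL-E calls or the image-similarity metric the study used.
"""
import random

MODELS = ["DALL-E 2", "DALL-E 3"]
MAX_ATTEMPTS = 10


def generate_image(model: str, prompt: str) -> str:
    # Placeholder: in the real experiment this would call the assigned model.
    return f"<image from {model} for prompt: {prompt!r}>"


def similarity(target_image: str, generated_image: str) -> float:
    # Placeholder: the real metric compares the generated image to the target.
    return random.random()


def run_participant(target_image: str, prompts: list) -> list:
    model = random.choice(MODELS)  # participants are not told which model they got
    scores = []
    for prompt in prompts[:MAX_ATTEMPTS]:
        image = generate_image(model, prompt)
        scores.append(similarity(target_image, image))  # how close did this attempt get?
    return scores


if __name__ == "__main__":
    example_prompts = [f"attempt {i}: describe the target scene" for i in range(10)]
    print(run_participant("<target image>", example_prompts))
```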

HUIZINGA: That’s the end of it. [LAUGHS]

SURI: Yeah. [LAUGHS]

HUIZINGA: That’s the most succinct explanation of a research study that I’ve ever heard. Congratulations, Sid Suri! So I have a question, and this is like … you’ve talked a bit already about how you iterate to get to the target image. My experience is that it can’t remember what I told it last time. [LAUGHTER] So if I put something in and then I say, well, I want you to change that, it starts over, and it doesn’t remember what color red it put in the first image. Is that part of the process, or are these models better than what I’ve done before?

SURI: The models are changing, and that is … and, sort of, the history, the context, the personalization is what you’re referring to. That is coming online in these models already and in the near future. Maybe at the time we did the study, it wasn’t so common. And so they were suffering the same issue that you just alluded to. But going forward, I do expect that to, sort of, fade away a little.

HUIZINGA: OK. Well, David, Sid’s just given us the most beautifully succinct description of people trying to get the model to give them the target image and how many tries they got. What did you find? What were the big takeaways of this research?

HOLTZ: So let me start out with the most obvious finding that, you know, like, Sid was saying, ideally, you know, you’re, kind of, answering a question where it makes sense that people are on both sides of this argument. One thing that we looked at that you’d be surprised if there was someone on the other side of the argument is, OK, do people do a better job when we give them the better model? If we give them DALL-E 3 instead of DALL-E 2, do they do a better job of re-creating the target image? And the answer is unsurprisingly, yes. People do a better job when we give them the better model. The next thing that we looked at—and this is where I think the results start to get interesting—is why do they do better with the better model? And there’s a couple of different reasons why this can be the case. The first could be that they’re writing the exact same prompts. They interact with the model exactly the same, whether it’s DALL-E 2 or DALL-E 3, and it’s just the case that DALL-E 3 is way better at taking that input and translating it into an image that is the image that you had in mind with that prompt. So, you know, sort of, imagine there’s two different artists. One is like a boardwalk caricature artist; the other one is Vincent van Gogh. Like, one of them is probably going to be better at taking your input and producing a really high-quality image that’s what you had in mind. The other possibility is that people, sort of, pick up on the fact that one of these models is different than the other. Maybe it’s more expressive. Maybe it responds to different types of input differently. And as you start to figure that out, you’re going to actually prompt the model, kind of, differently. And so I think the analogy I would draw here is, you know, imagine that you’re driving a couple of different cars maybe, like, one has really nice power steering and four-wheel drive and things like that. The other one doesn’t have all these cool features. You know, you’re probably going to actually handle that car a little bit differently when you take it out on the road relative to a really simple car. And what we find when we actually analyze the data is that both of these factors contributes to people doing better with the higher-quality model. And they actually both contribute equally, right. So insofar as people do better with DALL-E 3, half of that is because DALL-E 3 is just a better model at, like, taking the same input and giving you, like, an image that’s closer to what you had in mind. But the other half is due to the fact that people, sort of, figure out on their own, oh, this model is different. This model is better. It can maybe respond to my inputs a little bit more expressively. And they start prompting differently. And one thing that’s really neat and interesting about the study is we didn’t tell people whether they were given DALL-E 2 or DALL-E 3. So it’s not even like they said, oh, you gave me the good model. OK, let me start prompting differently. They kind of just figure this out by interacting with the tool and kind of, you know, realizing what it can do and what it can’t do. And specifically when we look at what people are doing differently, they’re, kind of, writing longer prompts; they’re writing more descriptive prompts. They have way more nouns and verbs. They’re kind of doing less feeling around in the dark and kind of finding, like, a way of interacting with the model that seems to work well. And they’re kind of doubling down on that way of interacting with the model. 
And so that’s what we saw. And so when it connects back to your question of, you know, OK, prompt engineering, like, is it here to stay, …

HUIZINGA: Yeah.

HOLTZ: … or is prompt engineering going away? I think one way that we think about interpreting these results is that the prompts do matter, right. Like, if you didn’t think about how to prompt different models and you just wrote the same prompts and left that prompt “as is” for, you know, months or years, you’d be missing out on tons of the gains that we stand to experience from these new, more powerful models because you need to update the prompts so that they take advantage of the new model capabilities. But on the flip side, it’s not like these people needed to, you know, go read the literature on all these complicated, esoteric prompting strategies. They kind of figured it out on their own. And so it seems like prompting is important, but is it necessarily prompt engineering, where it’s this really, you know, heavy-duty, like, thing that you need to do or you maybe need to go take, like, a class or get a master’s degree? Maybe not. Maybe it’s just a matter of people interacting with the models and, kind of, learning how to engage with them.
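
The differences Holtz mentions (longer, more descriptive prompts with more nouns and verbs) are the kind of thing that can be measured with off-the-shelf part-of-speech tagging. The sketch below assumes spaCy and its small English model are installed; it is a generic illustration, not the analysis pipeline used in the paper.

```python
# Sketch of a simple prompt-feature comparison: length plus noun and verb counts.
# Assumes spaCy and its small English model are installed
# (pip install spacy; python -m spacy download en_core_web_sm).
# This is illustrative only, not the paper's actual analysis code.
import spacy

nlp = spacy.load("en_core_web_sm")


def prompt_features(prompt: str) -> dict:
    doc = nlp(prompt)
    return {
        "n_tokens": len(doc),
        "n_nouns": sum(token.pos_ in ("NOUN", "PROPN") for token in doc),
        "n_verbs": sum(token.pos_ == "VERB" for token in doc),
    }


if __name__ == "__main__":
    short_prompt = "a red planet"
    long_prompt = ("Bruno Mars standing on the dusty red surface of Mars, "
                   "eating a Mars bar, with a small moon rising in an orange sky")
    print(prompt_features(short_prompt))
    print(prompt_features(long_prompt))
```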

HUIZINGA: Well, David, I want to ask you another question on that same line, because AI is moving so fast on so many levels. And it’s still a relatively new field. But now that you’ve had some time to reflect on the work you just did, is there anything that’s already changed in the conversation around prompt engineering? And if so, what are you thinking about now?

HOLTZ: Yeah. Thanks for the question. Definitely things are changing. I mean, as Sid mentioned, you know, more and more the way that people interact with these models, the models have some notion of history. They have some notion of context. You know, I think that informs how people are going to write prompts. And also, the types of things that people are trying to do with these models is constantly changing, right. And so I think as a result, the way that we think about prompting and, sort of, how to construct prompts is also evolving. So I think the way that we think about this study is that it’s by no means, you know, the definitive study on prompt engineering and how people learn to prompt. I think everyone on our team would agree there’s so much more to do. But I think the thing that struck us was that this debate that we mentioned earlier, you know, is prompting important? Will prompt engineering stay? Maybe it doesn’t matter? It was really a debate that was pretty light on evidence. And so I think the thing that we were excited to do is to sort of, you know, start to chip away at this big question with data and with, you know, an experiment and just try to start developing some understanding of how prompting works. And I think there’s tons more to do.

HUIZINGA: Right, right, right.

SURI: Just to add to that …

HUIZINGA: Yeah, please.

SURI: Again, if there’s any sort of young scientists out there, one of the things I hate doing with other scientists is arguing about what’s the answer to this question. So what I always do when there’s an argument is I just shift it: instead of arguing about whether the answer is going to be yes or no, what’s the data we need to answer the question? And that’s where David and I, sort of, came in. There was this argument going on. Instead of just arguing between the two of us about what we think it’s going to be, we just shifted the conversation to, OK dude, what data do we need to gather to figure out the answer to this question? And then boom, this project was off and running.

HUIZINGA: You know, that could solve so many arguments, you know, in real life, just like, you don’t know and I don’t know, why are we arguing? Let’s go find out.

SURI: Yeah, so instead of arguing about who knows what, let’s argue about what’s the data we need so that we’ll be convinced!

HUIZINGA: Well, on that line, Sid, another paper in the literature that you looked at was called The prompt report: A systematic survey of prompting techniques. And we’ve talked a little bit about what those techniques involve. But what has your research added to the conversation? Specifically, I’m interested to know, I mean, we did talk about tricks, but is there coaching involved or is this just sort of feel-your-way-in-the-dark kind of thing? And how fine is the line between what you referred to as alchemy and chemistry in this field?

SURI: The alchemy and chemistry analogy was David’s brilliant analogy, and what he was saying was, way back when, there was alchemy, and then out of that grew chemistry. And at the moment, there’s these, sort of, niche, esoteric ways of prompting—chain-of-thought, embody a persona, this kind of thing. And how are those going to get propagated out into the mainstream? That’s how we go from alchemy to, sort of, chemistry. That was his brilliant analogy. And there’s several punchlines of our work, but one of the punchlines is, people can figure out how to take advantage of the new capabilities of these models on their own, even when they don’t know the model changed. So that’s a great democratization argument.

HUIZINGA: Hmm …

SURI: That, OK, you don’t need to be the six-figure Silicon Valley hotshot to figure this out. That maybe, maybe everyone in the world who has access—who has internet access, electricity, and access to one of these models—they can sort of pick themselves up by their own bootstraps, learn how to use these things on their own. And I want to go back to an analogy you said a while ago, which was the analogy to traditional internet search, …

HUIZINGA: Yeah.

SURI: OK? People forgot this, but we’ve learned how to search over the course of about 30 years. I’m 45 years old, so I remember the early search engines like AltaVista, Lycos, things like that. And basically, getting anything useful out of them was pretty much impossible. I really wanted to swear right there, but I didn’t. [LAUGHTER] And what people forgot, people forgot that they didn’t know how to ride a bike, OK? And they forgot that we didn’t actually know … these systems didn’t work that well; we didn’t know how to query them that well; we didn’t know how to get anything useful out of them. And then 30 years later, no one thinks about searching the internet as a thing we do. It’s like turning on the faucet. You just do it. It’s taken for granted. It’s part of our workflows. It’s part of our daily life. We do it without thinking about it. Right now, we’re back in those AltaVista/Lycos days, like, where, you know, it’s still esoteric. It’s still niche. We’re still not getting what we need out of these models. The models are going to change. People are going to get better at it. And part of what we’re arguing in our paper is that people can get better at it on their own. All they need is access and a few tries and they figure it out.

HUIZINGA: Right. You know what’s really funny is, I was trying to find some information about a paper on Sparks. That’s the Sparks paper. And I was doing some internet search, and I wasn’t getting what I wanted. And then I moved over to ChatGPT and put basically the same question, but it was a little more question-oriented instead of keywords, and it gave me everything I was looking for. And I thought, wow, that’s a huge leap from even … that I could use ChatGPT like a search engine only better. So … well, listen, anyone who’s ever listened to my podcast knows I’m borderline obsessed with thinking about unintended consequences of technical innovation, so I always ask what could possibly go wrong if you got everything right. But as I’ve said on this series before, one of the main mandates of AICE research is to identify unintended consequences and try to get ahead of them. So, David, rather than talking about the potential pitfalls of prompt engineering, instead talk about what we need to do to keep up with or keep ahead of the speeding train of generative AI. And by we, I mean you.

HOLTZ: Yeah, I mean, I think the thing to keep in mind—and I think this has come up a couple of times in this conversation already—is at least right now, and presumably for the foreseeable future, you know, generative AI is moving so fast and is also not a monolith, right. Like, I think we tend to talk about generative AI, but there’s different types of models, even within a particular class of models. There’s so many different models that are floating around out there. And so I think it’s important to just keep on sort of revisiting things that we think we already know, seeing if those things remain true. You know, I think from a research perspective, like, kind of, answering the same questions over and over with different models over time and seeing if the results stay the same. And I think that’s one of the big takeaways from, like, sort of, a policy or applications perspective from our research, as well, is that just generative AI is moving really quickly. These models are evolving, and the way that we interact with them, the way that we prompt them, needs to change. So if you think about it, you know, there are many tech companies, many startups, that are building products or building entire, you know, companies on, basically, on top of API calls to OpenAI or to Anthropic or something like that. And behind the scenes, those models are changing all the time, whether it’s, you know, sort of a publicly announced shift from GPT-3.5 to GPT-4 or whether it’s the fact that maybe, you know, GPT-4 is kind of being tweaked and adjusted, you know, every couple of weeks based on things that are happening internally at the company. And one of the takeaways from our research is that, you know, all those tweaks are actually pretty meaningful. The prompts that you wrote two weeks ago might not be as effective, you know, today if they aren’t as well suited to the newest, latest, greatest model. And so I think just being really cognizant of that moving target, of the fact that we are living through, sort of, like, very exciting, unprecedented, crazy times and kind of just staying alert and staying on our toes is I think probably the most important thing.

HUIZINGA: Yeah. You know, when I was thinking about that question, I, my mind went to the Wallace & Gromit … I don’t know if you’re familiar with those animations, but there’s a scene where they’re on a toy train track chasing a criminal penguin, and they run out of track and then Gromit miraculously finds spare track. He starts laying it as the train is going. And it sort of feels like there’s a little bit of that in your research! [LAUGHS] I usually ask my guests on Collaborators where their research is on the spectrum from lab to life. But you’ve actually completed this particular study, and it leans more toward policy than product. And again, we’ve talked about a lot of this. Sometimes there seems to be a Venn diagram overlap with my questions. But, Sid, I want to know from your perspective, what would be a good outcome for this particular study, in your mind?

SURI: So AI systems are more and more being embedded in the workflows of companies and institutions. It used to just be all software, but now it’s specifically custom-built software, AI systems, and their prompts. I see it all the time here at Microsoft. It’s part of our workflows. It’s part of our products. It’s part of our day-to-day life. And as the models are getting better and better and these prompts are sort of embedded in our systems, someone’s got to pay attention to those prompts to make sure they’re still behaving the way we thought they were because they were written for an older version, the model changed, and now is that new model interpreting that prompt in the same way? That’s one question. The second question is, well, the new model has new capabilities, so now can you boost these prompts to take advantage of those new capabilities, to get the full economic gain, the full productivity gain of these new models? So you want to get your value for your money, so you need to adjust your prompts in response to those new models to get the full value. And part of the point of this paper is that that’s actually not that big a deal. That, as the models get better and better, even when people don’t know about it, they can still take advantage of the new affordances, the new capabilities, even when they aren’t made aware that, hey, it does a different thing right now.

HUIZINGA: Interesting.

SURI: But the point we’re making with this paper is, you have to pay attention to that.
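
One concrete way to pay attention to prompts that are embedded in products is a simple regression check: whenever the underlying model changes, re-run the prompts you depend on and flag outputs that drift from saved reference outputs. The sketch below is a generic illustration with a hypothetical `call_model` function and a deliberately crude text-difference measure; it is not a description of how Microsoft or any particular product does this.

```python
# Minimal sketch of a prompt regression check: re-run stored prompts against a
# new model version and flag outputs that drift from saved reference outputs.
# `call_model` is a hypothetical stand-in for whatever model API a product uses,
# and the drift measure here is deliberately crude.
from difflib import SequenceMatcher


def call_model(model_version: str, prompt: str) -> str:
    # Placeholder for a real model call.
    return f"[{model_version}] response to: {prompt}"


def drift(reference: str, new_output: str) -> float:
    # 0.0 means identical text, 1.0 means completely different.
    return 1.0 - SequenceMatcher(None, reference, new_output).ratio()


def check_prompts(prompts_with_references: dict, new_model_version: str,
                  threshold: float = 0.3) -> list:
    flagged = []
    for prompt, reference in prompts_with_references.items():
        new_output = call_model(new_model_version, prompt)
        if drift(reference, new_output) > threshold:
            flagged.append(prompt)  # needs human review or a prompt update
    return flagged


if __name__ == "__main__":
    saved = {
        "Summarize this support ticket in one sentence.":
            "[model-v1] response to: Summarize this support ticket in one sentence.",
    }
    print(check_prompts(saved, "model-v2"))
```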

HUIZINGA: OK, it’s last word time and I want to go a little off script with you two for this show. NVIDIA’s co-founder and CEO Jensen Huang recently said, and I paraphrase Willie Nelson here, “Mamas don’t let your babies grow up to be coders.” In essence, he’s predicting that AI is going to do that for us in the future and people would be better served pursuing different educational priorities. So that’s a bold claim. Do you guys want to make a bold claim? Here’s your chance to make a pithy prediction from your perch in research. What’s something you think will be true some years out? You don’t have to say how many years, but that you might have been reluctant to say out loud for fear that it wouldn’t age well. Remember, this is a podcast, not a paper, so no one’s going to hold you to your word, but you might end up being prophetic. Who knows? David, you go first, and then Sid can close the show. Tell us what’s going to happen in the future.

HOLTZ: I’m not sure how bold of a prediction this is, but I think there’s a lot of concern right now about the impact that AI will have in various creative domains, right. As generative AI gets better and AI can produce images and music and videos, you know, what will happen to all of the people that have been making a living creating this type of content? And my belief is that, if anything, as we just get flooded with more and more AI-generated content, people are going to place a really heavy premium on content that is produced by humans. Like, I think so much of what people value about art and creative output is the sort of human connection and the idea that something sort of emerged from someone’s lived experiences and hardships. I mean, this is why people really like reading, you know, the curator’s notes when they go to a museum, so that they can kind of understand what’s behind, you know, behind the image. And so I think generative AI is going to be really amazing in a lot of ways, and I think it will have really big impacts that we’ll need to deal with as a society in terms of how it affects work and things like that. But I don’t think that we’re moving towards a future where, you know, we’re all just consuming AI-generated, you know, art all the time and we don’t care at all about things being made by people.

HUIZINGA: You know, there’s a podcast called Acquired, and they talked about the brand Hermès, which is the French luxury leather company, and said that to get a particular kind of bag that’s completely handmade—it’s an artifact from a human—that’s why you pay tens of thousands of dollars for those instead of a bag that comes off a factory line. So I like that. Sid, what do you think?

SURI: So I’m going to make two points. David made the argument about AI affecting the creative space. I want to zoom in on the knowledge workspace.

HUIZINGA: Hmm …

SURI: And one of the big issues in knowledge work today is it’s incredibly difficult still to get insights out of data. To give you an example, in the remote work study that David and I did, it took a handful of PhDs, tons of data, two years, sophisticated statistical techniques to make sense of what is the effect of remote work on information workers, OK? And I feel, where I see knowledge work going is there’s going to be this great democratization on how to get insights out of data. These models are very good at classifying things, summarizing things, categorizing things. Massive amounts of data. In the old days, you had to like basically be an advanced statistician, be an advanced machine learning person, train one of these models. They’re very esoteric. They’re very arcane. They’re very hard to use. And then unleash it on your data. Now if you just know how to prompt a little bit, you can get these same insights as a professional statistician would a few years ago in a much, much shorter time, you know, one-tenth of the time. So I feel like there’s going to be this great democratization of getting insights out of data in the knowledge workspace. That’s prediction number one. And then the second point I wanted to make, and I want to give a little credit to some of the academics who’ve inspired this notion, which is Erik Brynjolfsson and David Autor, and that is this: I think a lot of people are looking for the impact of AI in kind of the wrong way. Rewind in your mind back to the time when, like, the internal combustion engine was invented. OK, so we used to get around with horses; now we have cars. OK, horses went 20 miles an hour; cars go 40 miles an hour. OK, big deal. What no one foresaw was there’s going to be an entire aviation industry that’s going to make it possible to do things we couldn’t do before, speed up the economy, speed up everything, add trillions of dollars of value to the world. And I feel like right now everyone’s focusing on AI to do things we already know how to do. And I don’t think that’s the most interesting use case. Let’s instead turn our attention to, what could we not do before that we can do now?

HUIZINGA: Right.

SURI: And that’s where the really exciting stuff is. So those are the two points I’d like to leave you with.

HUIZINGA: I love it. I hope you’re not saying that I could rewind my mind to when the internal combustion engine was developed …

SURI: No, no. Present company excluded! [LAUGHTER]

HUIZINGA: Oh my gosh. Sid Suri, David Holtz, this has been fantastic. I can’t get the phrase “AI whisperer” out of my head now, [LAUGHTER] and I think that’s what I want to be when I grow up. So thanks for coming on the show to share your insights on the topic and help to illuminate the path. This is awesome.

SURI: Thank you.

HOLTZ: Well, thank you.

SURI: That was fun.

[MUSIC FADES]

The post Collaborators: Prompt engineering with Siddharth Suri and David Holtz appeared first on Microsoft Research.

Collaborators: Silica in space with Richard Black and Dexter Greene http://approjects.co.za/?big=en-us/research/podcast/collaborators-silica-in-space-with-richard-black-and-dexter-greene/ Thu, 05 Sep 2024 16:00:00 +0000 http://approjects.co.za/?big=en-us/research/podcast/collaborators-silica-in-space-with-richard-black-and-dexter-greene/ College freshman Dexter Greene and Microsoft research manager Richard Black discuss how technology that stores data in glass is supporting students as they expand earlier efforts to communicate what it means to be human to extraterrestrials.

The post Collaborators: Silica in space with Richard Black and Dexter Greene appeared first on Microsoft Research.

Headshots of Richard Black and Dexter Greene for the Microsoft Research Podcast

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with. 

Nearly 50 years ago, Voyager 1 and 2 took off for space, each with a record comprising a sampling of earthly sounds and sights. The records’ purpose? To give extraterrestrials a sense of humanity. Thanks to students at Avenues: The World School, the universe might be receiving an update. In this episode, college freshman and Avenues alum Dexter Greene and Microsoft research manager Richard Black talk about how Project Silica, a technology that uses tiny laser pulses to store data in small glass “platters,” is supporting the Avenues Golden Record 2.0 project; what it means for data storage more broadly; and why the students’ efforts are valuable even if the information never gets to its intended recipients.

Transcript

[TEASER] 

[MUSIC PLAYS UNDER DIALOGUE] 

DEXTER GREENE: So the original Golden Record is … I like to think of it as, sort of, a time capsule of humanity that was designed to represent us—who we are as a species, what we love, why we love it, what we do, and, sort of, our diversity, why we’re all different, why we do different things—to possible extraterrestrials. And so the Golden Record was produced in 1977 by a relatively small team led by Carl Sagan. What we’re doing, my team, is we’re working on creating an updated Golden Record. And I began researching different storage methods, and I began to realize that we hadn’t made that much headway in storage since then. Of course, we’ve made progress but nothing really spectacular until I found 5D storage. And I noticed that there were only two real places that I could find information about this. One was the University of Southampton, and one was Project Silica at Microsoft. I reached out to the University of Southampton and Dr. Black, and somehow, kind of, to my surprise, Dr. Black actually responded!

RICHARD BLACK: I was particularly intrigued by the Avenues Golden Record application because I could see it was an application not just where Silica was a better media than what people use today but really where Silica was the only media that would work because none of the standard media really work over the kind of time scales that are involved in space travel, and none of them really work in the harsh environments that are involved in space and outer space and space travel. So in some ways for me, it was an easy way to communicate just what a transformative digital media technology Silica is, and that’s why as an application, it really grabbed my interest.

[TEASER ENDS] 

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC FADES] 

Today I’m talking to Dr. Richard Black, a senior principal research manager and the research director of Project Silica at Microsoft Research. And with him is Dexter Greene, a rising freshman at the University of Michigan and a recent graduate of Avenues: The World School in New York City. Richard and Dexter are involved in a unique multidisciplinary, multi-institutional, and multigenerational collaboration called Avenues Golden Record, a current effort to communicate with extraterrestrial intelligence. We’ll get into that in a lot more detail shortly, but first, let’s meet our collaborators.

Richard, let’s start with you. As I’ve just noted, you’re a research manager at the Cambridge UK lab of Microsoft Research and the research director of a really cool technology called Silica. In a second, I want you to talk about that more specifically, but right now, tell us about yourself. What’s your background? What are your research interests writ large? And what excites you about the broad remit of your work at Cambridge?

RICHARD BLACK: So my background is a computer scientist. I’ve been at Microsoft Research for 24 years, and before that, I had a faculty position at a university here in the UK. So I also have an interest in education, and it’s been a delight to interact with Dexter and the other students at Avenues. My research interests really cover all aspects of computer systems, which means operating systems, networking, and computer architecture. And the exciting thing for me about being at Microsoft Research is that this is really a period of rapid change with the cloud, digital transformation of society. It gives really a huge motivation to research better underlying technologies for everything that we do. And for me in the last few years, that’s been in archival storage with Project Silica.

HUIZINGA: Hmm. Richard, I’m interested to know a little bit more about your background. Where did you go to school, what led you to this kind of research, and what university were you teaching at?

BLACK: Yeah, I went to university and did my PhD here in Cambridge. I was teaching at the University of Glasgow, which is in Scotland in the UK, and teaching again computer systems, so those operating systems, computer architecture, and computer networking.

HUIZINGA: Well, Dexter, you’re the first student collaborator we’ve featured on this show, which is super fun. Tell us about yourself and about Avenues: The World School, where this particular collaboration was born.

DEXTER GREENE: Thanks for having me. I’m super excited to be here. And like you said, it’s very cool to be the first student collaborator that you featured on the show. So I’m 18. I just graduated high school a few months ago, and I will be attending the University of Michigan’s College of Engineering in the fall. If you know me personally, you know that I love robotics. I competed in the FIRST Tech Challenge all throughout high school. The FIRST Tech Challenge is a student robotics competition. There is the FIRST Tech Challenge, FIRST Robotics Competition, and FIRST LEGO League. So it’s, like, three different levels of robotics competition, which is run all around the world. And every year, there’s, like, a championship at the end to declare a winner. And I plan to major in either robotics or mechanical engineering. So more about Avenues. Avenues is a K-through-12 international immersion school, which is very interesting. So younger students might do a day in Spanish and a day in English or a day in Mandarin and then a day in English, going through all their classes in that language. So I actually attended Avenues since second grade, so when I was younger, I would do a full day in Spanish and then I would switch to a full day in English, doing my courses like math, history, English, all in my language, Spanish for me. And Avenues is a very interesting school and very different in many ways. They like to, sort of, think outside the box. There’s a lot of very unique classes, unique programs. A great example is what they call J-Term, or June and January Term, which is where students will have one course every day for the entire month where they can really dive deep into that subject. And I was actually lucky enough to do the Golden Record for a full month in 11th grade, which I’ll talk about this more, but that’s actually when I first made contact with Dr. Black and found this amazing technology, which is, I guess why we’re all here today.

HUIZINGA: Right.

GREENE: So, yeah, there’s many really cool parts about Avenues. There’s travel programs that you can do where you can go all around the world. You can go between different campuses. There’s online classes that you can take. The list goes on …

HUIZINGA: Well, it’s funny that you say “when I first made contact with Dr. Black” because it sounds like something that you’re working on! So let’s talk about that for a second. So the project we’re talking about today is Avenues Golden Record, but it’s not the first Golden Record to exist. So for those of our listeners who don’t know what Golden Record even is, Dexter, give us a little history lesson and chronicle the story from the original Golden Record way back in 1977 all the way to what you’re doing today with the project.

GREENE: Yeah. So I guess let me start with, what is the Golden Record? So the original Golden Record is … I like to think of it as, sort of, a time capsule of humanity that was designed to represent us—who we are as a species, what we love, why we love it, what we do, and, sort of, our diversity, why we’re all different, why we do different things—to possible extraterrestrials. And so the Golden Record was produced in 1977 by a relatively small team led by Carl Sagan[1], an American astronomer who was a professor at, I believe, Cornell. And so it’s basically a series of meticulously curated content. So that could be images, audios, sounds of nature, music, the list goes on. Really anything you can think of. That’s, sort of, the beauty of it. Anything can go on it. So it’s just a compilation of what we are, who we are, and why we are—what’s important to us. A great example, one of my favorite parts of the Golden Record, is one of the first audios on it is a greeting in 55 languages. It’s, sort of, meant to be, like, a welcome … I guess less of a welcome, but more like a hello because we’re not welcoming anyone to Earth, [LAUGHTER] but it’s, like, a hello, nice to meet you, in 55 languages to show that we’re very diverse, very different. And, yeah, you can actually … if you’re interested and if you’d like to learn more, you can actually go see all the content that’s on the Golden Records. NASA has a webpage for that. I definitely recommend if you have a chance to check it out.

HUIZINGA: Yeah.

GREENE: And I guess moving on to future attempts … so what we’re doing, my team, is we’re working on creating an updated Golden Record. So it’s been 47 years now since the original Golden Record—kind of a long time. And of course a lot’s changed. Some for the better, some for the worse. And we think that it’s about time we update that. Update who we are, what we are, and what we care about, what we love.

HUIZINGA: Right.

GREENE: So our team has begun working on that. One project that I’m familiar with, other than our own, that’s, sort of, a similar attempt is known as Humanity’s Message to the Stars, which is led by Dr. Jonathan Jiang, who is a researcher at NASA’s Jet Propulsion Laboratory.[2] Very cool. That’s the only project that’s similar that I’m aware of, but I’m sure there have been other attempts in the past.

HUIZINGA: Yeah … just to make a note right now, we’re using the term “record,” and the original medium was actually a record, like an LP. But excitingly, we’ll get to why Dr. Black is on the show today [LAUGHS] and talk about the new media. Before we do that, as I was preparing this episode, it began to feel like a story of contrasting couplets, like earthlings and aliens, content and media, veteran researcher and high school student. … So let’s talk about the last pairing for a second, the two of you, and how you got together on this project. It’s a fun story. I like to call this question “how I met your mother.” So how did a high school kid from New York come to be a research collaborator with a seasoned scientist from Cambridge? Dexter, tell your side of the story. It’s cool. And then Richard can fill in the blanks from across the pond!

GREENE: Yeah, so let me actually rewind a little bit further than that, about how I got into the project myself, …

HUIZINGA: Good!

GREENE: … which, I think, is a pretty fun story. So one of my teachers—my design and engineering teacher at the time, Mr. Cavalier—gave a presentation at one of our gradewide assemblies. And the first slide was something along the lines of “the most challenging project in human history,” which immediately caught my eye. I was like, I have to do this! There’s no way I’m not doing this project! [LAUGHTER] And the slides to come of course made me want to partake in the project even more. But that first slide … really, I was sold. It was a done deal! So I applied to the project. I got in. And then we began working and researching, and I’ll talk about this more later, as well, but we, sort of, split up into two teams at the beginning: content and media. Media being the form, or medium, that we send it on. And so that was the team that I was on. And I began researching different storage methods and, sort of, advancements in storage methods since the original Golden Record in 1977. And I began to realize that we hadn’t made that much headway in storage since then. Of course we’ve made progress but nothing really spectacular until I found 5D storage. And I was immediately, just, amazed by the longevity, durability, capacity—so many things. I mean, there’s just so many reasons to be amazed. But … so I began researching and I noticed that there were only two real places that I could find information about this. One was the University of Southampton, I believe, and one was Project Silica at Microsoft. And so I actually reached out to both. I reached out to the University of Southampton and Dr. Black, and somehow, [LAUGHS] kind of, to my surprise, Dr. Black actually responded! And I was, kind of, stunned when he responded because I was like, there’s no way this researcher at Microsoft is going to respond to this high school student that he’s never met in the middle of nowhere. So when Dr. Black did respond, I was just amazed and so excited. And, yeah, it went from there. We began communicating back and forth. And then, I believe, we met once over the following summer, and now we’re here!

HUIZINGA: OK, there’s so many parallels right now between this communication contact and what you’re doing with potential extraterrestrial intelligence. It’s like, I contacted him, he contacted me back, and then we started having a conversation. … Yeah, so, Richard, you were the guy who received the cold email from this high school student. What was your reaction, and how did you get interested in pursuing a relationship in terms of the science of this?

BLACK: Yeah, so let me say I was really intrigued by the Avenues Golden Record application. I do get quite a lot of cold emails, [LAUGHTER] and I try to reply to most of them. I do have a few canned answers because I don’t have time to interact with everybody who reaches out to me. But I was particularly intrigued by the Avenues Golden Record application because I could see it was an application not just where Silica was a better media than what people use today but really where Silica was the only media that would work because none of the standard media really work over the kind of time scales that are involved in space travel, and none of them really work in the harsh environments that are involved in space and outer space and space travel. So in some ways for me, it was an easy way to communicate just what a transformative digital media technology Silica is, and that’s why as an application it really grabbed my interest.

HUIZINGA: So did you have any idea when the initial exchange happened that this would turn into a full-blown project?

BLACK: I didn’t know how much time Dexter and his fellow students would have to invest in it. So for me, at the beginning, I was just quite happy to answer a few questions that they have, to point them in the right direction, to fill in a few blanks, and things like that. And it was only much later, I think, after perhaps we’d had our first meeting, that I realized that Dexter and his team were actually serious, [LAUGHTER] and they had some time, and they were going to actually invest in this and think it through. And so I was happy to work with them and to continue to answer questions that they had and to work towards actually, you know, writing a couple of Silica platters with the output that they were creating and providing it for them.

HUIZINGA: Well, let’s dig in there. Richard, let’s talk about digital data and the storage mediums that love it. I want to break this into two parts because I’m interested in it from two angles. And the first one is purely technical. I’ll take a second to note that we did an episode on Project Silica way back in 2019. I say way back, like … but in technical years right now, [LAUGHS] that seems like a long time! And on that episode, your colleague Ant Rowstron talked with me and Mark Russinovich, the CTO of Microsoft’s Azure. So we’ll put a link in the show notes for that super-fun, interesting show. But right now, Richard, would you give our listeners an overview of the current science of data on glass? What is Silica? How is it different from other storage media? And what’s changed in the five years since I talked to Ant and Mark?

BLACK: Sure. So Silica is an archival storage technology that stores data inside fused silica glass. And it does that using ultrashort laser pulses that make a permanent, detectable, and yet transparent modification to the glass crystal, so the data ends up as durable as the piece of glass itself.

HUIZINGA: Wow.

BLACK: And being transparent means that we can get hundreds of layers of data inside a block of glass that’s only two millimeters thin, making for really incredibly high densities. And since this new physics was discovered at the University of Southampton in the UK, we’ve been working to tame that, and we’ve improved density, energy over a hundred-fold in the time period that we’ve been working on it, and the speed over ten thousand-fold. And we continue to, in our research, to make Silica better and faster. And, yes, you’re right, five years might seem like quite a long time. A comparison that you might think of here is the history of the hard drive. In the history of the hard drive, there was a point in history at which humans discovered the physical effect of magnetism. And it took us actually quite a long time as a species to go from magnetism to hard drives. In this case, this new physical effect that was discovered at Southampton, this new physical effect, you can think of it a bit like discovering magnetism, and taking it all the way from there to actually a real operating storage system actually takes quite a lot of research and effort and development, and that’s the path that we’ve been on doing that, taming and improving densities and speeds and energies and so on during the years of the project.

HUIZINGA: Well, talk a little bit more about the reading and writing of this medium. What’s involved technically on how you get the data on and how you retrieve it?

BLACK: Yeah, and so interestingly the writing of the data and the reading of the data are actually completely different. So writing the data is done with an ultrashort laser pulse. It’s actually a femtosecond-length pulse, and a femtosecond is one-thousandth of one-millionth of one-millionth of a second. And if you take even quite a small amount of energy and you compress it in time into a pulse that short and then you use a lens to focus it in space into just a tiny point, then the intensity of the light at that point during that pulse is just so mind-bogglingly high that you actually get something called a plasma-induced nano-explosion. [LAUGHTER] And I’m not an appropriate physicist of the right sort by background, but I can tell you that what that does is it really transforms the glass crystal at that point but in a way in which it’s, just, it’s so short—the time pulse is so short—it doesn’t really get to damage the crystal around that point. And that’s what enables the data to be incredibly durable because you’ve made this permanent, detectable, and yet transparent change to the glass crystal.

HUIZINGA: So that’s writing. What about reading?

BLACK: Reading you do with a microscope!

HUIZINGA: Oh, my gosh.

BLACK: So it’s a much more straightforward process. A reader is basically a computer-controlled, high-speed, high-quality microscope. And you focus the microscope at an appropriate depth inside the glass, and then you just photograph it. And you get to, if it’s an appropriate sort of microscope, you get to see the changes that you’ve made to the glass crystal. And then we process those images, in fact, using machine learning neural networks to turn it back into the data that we’d originally put into the glass platter. So reading and writing quite different. And on the reading, we’re just using regular light, so the reading process can’t possibly damage the data that’s been stored inside the glass.

HUIZINGA: I imagine you wouldn’t want to get your eye in the path of a femtosecond laser …

BLACK: Yes, femtosecond lasers are not for use at home! That’s quite true. In fact, your joke comment about the eye is … eye surgery is also actually done with femtosecond lasers. That’s one of the other applications.

HUIZINGA: Oh, OK! So maybe you would!

BLACK: But, yes, no, this is definitely something that, for many reasons, Silica is something that’s related to cloud technology, the writing process. And I think we’ll get back to that perhaps later in our discussion.

HUIZINGA: Yeah, yeah.

BLACK: But, yeah, definitely not something for the home.

HUIZINGA: How powerful is the microscope that you have to use to read this incredibly small written data?

BLACK: It’s fairly straightforward from a power point of view, but it has been engineered to be high-speed, high-quality, and under complete computer control that enables us to move rapidly around the piece of glass to wherever the data is of interest and then image at high speed to get the data back out.

HUIZINGA: Yeah. Well, so as you describe it, these amazingly tiny laser pulses store zettabytes of data. Talk for one second, still technically, about how you find and extract the data. You know, I’ve used this analogy before, but at the end of the movie Indiana Jones, the Ark of the Covenant is stored in an army warehouse. And the camera pulls back and there’s just box after box after crate after crate. … It’s like, you’ll never find it. Once you’ve written and stored the data, how do you go about finding it?

BLACK: So like all storage media, whether it be hard drive, tape, flash that might be in your phone in your pocket, there are standard indexing methods. You know, there’s an addressing system, you know, blocks and sectors and tracks. And, you know, we use all of these, kind of, standard terminology in terms of the way we lay the data out on the glass, and then each piece of glass is uniquely identified, and the glass is stored in the library. And actually, we’ve done some quite interesting work and novel work on the robotics that we use for handling and moving the pieces of glass in Silica. It’s interesting Dexter is talking about being interested in robotics. We’ve done a whole bunch of new interesting robotics in Silica because we wanted the shelving or the library system that we keep the glass on to last as long as the glass. And so we wanted it to be completely passive. And we wanted all of the, kind of, the active components to be in the robotics. So we have these new robots that we call shuttles that can, kind of, climb around the library and retrieve the bits of glass that are needed and take them to a reader whenever reading is needed, and that enables us really to scale out a library to enormous scale over many decades or centuries and to just keep growing a passive, completely passive, library.

HUIZINGA: Yeah, I saw a video of the retrieval and it reminded me of those old-fashioned ladders in libraries where you scoot along and you’re on the wall of books and this is, sort of, like the wall of glass. … So, Richard, part two. Let’s talk about Silica from a practical point of view because apparently not all data is equal, and Silica isn’t for everyone’s data all the time. So who are you making this for generally speaking and why? And did you have aliens on your bingo card when you first started?!

BLACK: So, no, I didn’t have aliens [LAUGHTER] on the bingo card when I first started, definitely not. But as I mentioned, yeah, Project Silica is really about archival data. So that’s data that needs to be kept for many years—or longer—where it’s going to be accessed infrequently, and when you do need to access it, you don’t need it back instantaneously. And there’s actually a huge and increasing amount of data that fits those criteria and growing really very rapidly. Of course it’s not the kind of data that you keep in your pocket, but there is a huge amount of it. A lot of archival records that in the past might have been generated and kept on paper, they’re now, in the modern world, they’re all born digital. And we want to look for a low-cost- and low-environment-footprint way of really keeping it in that digital format for the length of time that it needs to be kept. And so Silica is really for data that’s kept in the cloud, not the pocket or the home or the business. Today most organizations already use the cloud for their digital data to get advantages of cost, sustainability, efficiency, reliability, availability, geographic redundancy, and so on. And Silica is definitely designed for that use case. So archival data in the cloud, data that needs to be kept for a long time period, and there’s huge quantities of it and it’s pouring in every day.

HUIZINGA: So concrete example. Financial data, medical data, I mean, what kinds of verticals or sectors would find this most useful?

BLACK: Yeah, so the financial industry, there’s a lot of regulatory requirements to keep data. Obviously in the healthcare situation, there’s a lot of general record keeping, any archives, museums, and so on that exist today. We see a lot of growth in things like the extractive industries, any kind of mining. You want to keep really good records of what it was that you did to, you know, did underground or did to the earth. The media and entertainment industry is one where they create a lot of content that needs to be kept for long time periods. We see scientific research studies where they measure and accumulate a large quantity of data that they want to keep for future analysis, possibly, you know, use it later in training ML models or just for future analysis. Sometimes that data can’t be reproduced. You know, it represents a measurement of the earth at some point and then, you know, things have changed and it wouldn’t be possible to go back and recapture that data.

HUIZINGA: Right.

BLACK: We see stuff in government and local government. One example is we see some local governments who want, essentially, to create a digital twin of their city. And so when new buildings are being built, they want to keep the blueprints, the photographs of the construction site, all of the data about what was built from floor plans and everything else that would help not only emergency services but just help the city in general to understand what’s in its environment, and they want all of that to be kept while that building exists in their city. So there’s lots and lots and lots of growing data that needs to be kept—sometimes for legal reasons, sometimes for practical reasons—lots of it a really fast-growing tier within the data universe.

HUIZINGA: Yeah. Dexter, let’s go back to you. On the Avenues website, it says the purpose of the Golden Record is to, as you mentioned before, “represent humanity and Earth to potential extraterrestrial beings, encapsulating our existence through a collection of visuals and sounds.” That’s pretty similar to the first Golden Record’s mission. But yours is also different in many ways. So talk about what’s new with this version, not just the medium but how you’re going about putting things together, both conceptually and technically.

GREENE: Yeah. So that’s a great question. I can take it in a million different directions. I’ll start by just saying of course the new technology that Dr. Black is working on is, like, the biggest change, at least in my view, because I like this kind of stuff. [LAUGHTER] But that’s like really the huge thing—durability, longevity, and capacity, capacity being one of the main aspects. We could just fit so much more content than was possible 50 years ago. But there’s a lot more. So on the original Golden Record, they only had weeks to work on the project before it had to be ready to go, to put on the Voyager 1 and 2 spacecrafts. So they had a huge time constraint, which of course we don’t have now. We’ve got as much time as we need. And then … I’ll talk about how we’ve been working on the project. So we split up into two main teams, content and form. Form being media, which I, like I said earlier, is the team that I work on. And our content team has been going through loads of websites and online databases, which is another huge difference. When they created the original Golden Record 50 years ago, they actually had to look through books and, like, photocopy each image they wanted. Of course now we don’t have to do that. We just find them online and drag and drop them into a folder. So there’s that aspect, which makes it so much easier to compile so much content and good-quality content that is ethically sourced. So we can find big databases that are OK with giving us their data. Diversity is another big aspect that we’ve been thinking about. The original Golden Record team didn’t have a lot of time to really focus on diversity and capturing everything, the whole image of what we are, which is something that we’ve really been working on. We’re trying to get a lot of different perspectives and cover really everything there is to cover, which is why we actually have an online submission platform on our website where any random person can take an image of their cat that they like [LAUGHTER] or an image of their house or whatever it may be and they can submit that and it will make its way into the content and actually be part of the Golden Record that we hopefully send to space.

HUIZINGA: Right. So, you know, originally, like you say, there’s a sense of curation that has to happen. I know that originally, they chose not to include war or conflict or anything that might potentially scare or frighten any intelligence that found it, saying, hey, we’re not those people. But I know you’ve had a little bit different thinking about that. Tell us about it.

GREENE: Yeah, so that’s something that we’ve talked about a lot, whether or not we should include good and bad. It’s funny. I actually wrote some of my college essays about that, so I have a lot to say about it. I’ll just give you my point of view, and I think most of my team shares the same point of view. We should really capture who we are with the fullest picture that we can without leaving anything out. One of the main reasons that I feel that way is what might be good to us could be bad to extraterrestrials. So I just don’t think it’s worth it to exclude something if we don’t even know how it’s perceived to someone else.

HUIZINGA: Mm-hmm. So back to the space limitations, are you having to make choices for limiting your data, or are you just sort of saying, let’s put everything on?

GREENE: So on the original Golden Record, of course they really meticulously curated everything that went on the record because there wasn’t that much space.

HUIZINGA: Yeah …

GREENE: So they had to be very careful with what they thought was worth it or not. Now that we have so much space, it seems worth it just to include everything that we can include because maybe they see something that we don’t see from an image.

HUIZINGA: Right.

GREENE: The one thing that we … at the very beginning, during my J-Term in 11th grade, we were actually lucky enough to have Jon Lomberg[3], one of the members of the original team, come in to talk to us a bit. And he gave us a, sort of, a lesson about how to choose images, and he was actually the one that chose a lot of the images for the original record. So it was really insightful. One thing we talked a lot about was, like, shadows. A shadow could be very confusing and, sort of, mess up how they perceive the image, but it also might just be worth including because, why not? We can include it, and maybe they get something … they learn about shadows from it even though it’s confusing. So that’s, sort of, how we have thought about it.

HUIZINGA: Well, that’s an interesting segue, because, Richard, at this point, I usually ask what could possibly go wrong if you got everything right. And there are some things that you think, OK, we don’t know. Even on Earth, we have different opinions about different things. And who knows what any other intelligence might think or see or interpret? But, I want to steer away from that question because when we talked earlier, Richard, I was intrigued by something you said, and I want you to talk about it here. I’ll, kind of, paraphrase, but you basically said, even if there’s no intelligent life outside our planet, this is a worthwhile exercise for us as humans. Why’d you say that?

BLACK: Well, I had two answers to that, one, kind of, one selfish and one altruistic! [LAUGHTER] I talk to a lot of archival data users, and those who are serious about keeping their data for many hundreds of years, they think about the problem in, kind of, three buckets. So one is the keeping of the bits themselves. And of course that’s what we are working on in Project Silica and what Silica is really excellent at. One is the metadata, or index, that records what is stored, where it’s stored, and so on. And that’s really the provenance or the remit of the archivist as curator. And then the third is really ensuring that there’s an understanding of how to read the media that persists to those future generations who’ll want to read it. And this is sometimes called the Rosetta Stone problem, and that isn’t the core expertise of me or my team. But the Golden Record, kind of, proves that it can be solved. You know, obviously, humanity isn’t going to give up on microscopes, but if we can explain to extraterrestrials how they would go about reading a Silica platter, then it should be pretty obvious that we can explain to our human descendants how to do so.

HUIZINGA: Hmmm.

BLACK: The altruistic reason is that I think encouraging humanity to reflect on itself—where we are, the challenges ahead for us as a species here on planet Earth—you know, this is a good time to think those thoughts. And any time capsule—and the Golden Record, you can, kind of, view it a bit like a time capsule—it’s a good time to step back and think those philosophical thoughts.

HUIZINGA: Dexter, do you have any thoughts? I know that Dr. Black has, kind of, taken the lead on that, but I wonder if you’ve given any thought to that yourself.

GREENE: Yeah, we’ve given a lot of thought to that: even if the record doesn’t reach extraterrestrials, is it worth it? Why are we doing this? And we feel the exact same as Dr. Black. It’s so worth it just for us to reflect on where we are and how we can improve what we’ve done in the past and what we can do in the future. It’s a … like Dr. Black said, it’s a great exercise for us to do. And it’s exciting. One of the beautiful parts about this project is that there’s no, like, right or wrong answer. Everyone has a different perspective on it.

HUIZINGA: Yeah …

GREENE: And I think this is a great way to think about that.

HUIZINGA: Yeah. So, Dexter, I always ask my collaborators where their project is on the spectrum from lab to life. But this research is a bit different from some of the other projects we featured. What is the, sort of, remit of your timeline? Is there one for completing the record in any way? Who, if anyone, are you accountable to? And what are your options for getting it up into space once it’s ready to go? Because there is no Voyager just imminently leaving right now, as I understand it. So talk a little bit about the scope from lab to life on this.

GREENE: Yeah. So, like you said, we don’t really have an exact timeline. This is, sort of, one of those projects where we could compile content forever. [LAUGHTER] There’s always more content to get. There’s always more perspectives to include. So I could do this forever. But I think the goal is to try and get all the content and get everything ready within the next couple years. As for who we’re accountable to, we’re, sort of, just accountable to ourselves. The way we’ve been working on this is not really like a club, I wouldn’t say, more just like a passion project that a few students and a few teachers have taken a liking to, I guess. So we’re just accountable to ourselves. We of course, like, we have meetings every week, and my teacher was the one that, like, organized the meetings. So I was, sort of, accountable to my teacher but really just doing it for ourselves.

HUIZINGA: Mm-hmm.

GREENE: As for getting it up into space, we have been talking a bit with the team led by Dr. Jiang. So ideally, in the future, we would collaborate more with them and [LAUGHS] go find our ticket to space on a NASA spaceship! But there are of course other options that we’ve been looking at. There’s a bunch of space agencies all around the world. So we’re not just looking at the United States.

HUIZINGA: Well, there’s also private space exploration companies …

GREENE: Yeah, and there are also private space companies like SpaceX and so on. So we’ve thought about all of that, and we’ve been reaching out to other space agencies.

HUIZINGA: I love that “ticket to outer space” metaphor but true because there are constraints on what people can put on, although glass of this size would be pretty light.

GREENE: I feel the same way. You do have to get, like, approved. Like, for the original Golden Record, they had to get everything approved to make it to space. But I would think that it would be pretty reasonable—given the technology is just a piece of glass, essentially, and it’s quite small, the smallest it could be, really—I would think that there wouldn’t be too much trouble with that.

HUIZINGA: So, so … but that does lead to a question, kind of, about then extracting, and you’ve addressed this before by kind of saying, if the intelligence that it gets to is sophisticated enough, they’ll probably have a microscope, but I’m assuming you won’t include a microscope? You just send the glass?

GREENE: Yeah. So on the original record, they actually included a … I’m not sure what it’s called, but the device that you need to …

HUIZINGA: A phonograph?

GREENE: … play a rec … yeah, a phonograph, yes. [LAUGHTER] So they include—sorry! [LAUGHS]—they included a phonograph [cartridge and stylus] on the original Voyagers. And we’ve thought about that. It would probably be too difficult to include an actual microscope, but something that I’ve been working on is instructions on not exactly how to make the microscope that you would need but just to explain, “You’re going to need a microscope, and you’re going to need to play around with it.” One of the assumptions that we’ve made is that they will be curious and advanced. I mean, to actually retrieve the data, they would need to catch a spaceship out of the sky as it flies past them …

HUIZINGA: Right!

GREENE: … which we can’t do at the moment. So we’re assuming that they’re more advanced than us, curious, and would put a lot of time into it. Time and effort.

HUIZINGA: I always find it interesting that we always assume they’re smarter than us or more advanced than us. Maybe they’re not. Maybe it’s The Gods Must Be Crazy, and they find a computer and they start banging it on a rock. Who knows? Richard, setting aside any assumptions that this Golden Record on glass makes it into space and assuming that they could catch it and figure it out, Silica’s main mission is much more terrestrial in nature. And part of that, as I understand it, is informing the next generation of cloud infrastructure. So if you could, talk for a minute about the vision for the future of digital storage, particularly in terms of sustainability, and what role Silica may play in helping huge datacenters on this planet be more efficient and maybe even environmentally friendly.

BLACK: Yes, absolutely. So Microsoft is passionate about improving the sustainability of our operations, including data storage. So today archival data uses tape or hard drives, but those have a lifetime of only a few years, and they need to be continually replaced over the lifetime of the data. And that contributes to the costs both in manufacturing and it contributes to e-waste. And of course, those media also can consume electricity during their lifetime, either keeping them spinning or in the careful air-conditioning that’s required to preserve tape. So the transformative advantage of Silica is really in the durability of the data permanently stored in the glass. And this allows us to move from costs—whatever way you think about cost, either money or energy or a sustainability cost—move from costs that are based on the lifetime of the data to costs that are based on the operations that are done to the data. Because the glass doesn’t really need any cost while it’s just sitting there, while it’s doing nothing. And that’s a standout change in the way we can think about keeping archival data because it moves from, you know, a continual, as it were, monthly cost associated with keeping the thing over and over and over to, yeah, you have to pay to write. If you need to read the data, you have to pay the cost to read the data. But in the meantime, there’s no cost to just keeping it around in case you need it. And that’s a big change. And so actually, analysis suggests that Silica should be about a factor of 10 better for sustainability over archival time periods for archival data.

HUIZINGA: And I would imagine “space” is a good proof of concept for how durable and how long you expect it to be able to last and be retrieved. Well …

BLACK: Absolutely. You know, Dexter mentioned the original Golden Record had to get a, kind of, approval to be considered space-worthy. In fact, the windows on spacecraft that we use today are made of fused silica glass. So the fused silica glass is already considered space-worthy! You know, that’s a problem that’s already solved. And, you know, it is known to be very robust and to survive the rigors of outer space.

HUIZINGA: Yeah, and the large datacenter! Well, Dexter, you’re embarking on the next journey in your life, heading off to university this fall. What are you going to be studying, and how are you going to keep going with Avenues’ Golden Record once you’re at college because you don’t have any teachers or groups or whatever?

GREENE: Yeah, that’s a great question. So, like I said, I plan to major in robotics engineering. That’s still, I guess, like, TBD. I might do mechanical engineering, but I’m definitely leaning more towards robotics. And as for the project, I definitely want to continue work on the project. That’s something I’ve made very clear to my team. Like you said, like, I won’t have a teacher there with me, but one of the teachers that works on the project was my physics teacher last year, and I’ve developed a very good relationship with him. I can say for sure that I’ll continue to stay in touch with him, the rest of the team, and this project, which I’m super excited to be working on. And I think we’re really … we, sort of, got past the big first hump, which was like the, I guess, the hardest part, and I feel like it will be smooth sailing from here!

HUIZINGA: Do you think any self-imposed deadlines will help you close off the process? Because I mean, I could see this going … well, I should ask another question. Are there other students at Avenues, or any place else, that are involved in this that haven’t graduated yet?

GREENE: Yes, there are a few of us. Last year when we were working on the project, there were only a handful of us. So it was me and my best friend, Arthur Wilson, who also graduated. There were three other students. One was a ninth grader, and two were 10th graders. So they’re all still working on the project. And there’s one student from another campus that’s still working very closely on the project. And we’ve actually been working on expanding our team within our community. So at the end of last year, we were working on finding other students that we thought would be a great fit for the project and trying to rope them into it! [LAUGHTER] So we definitely want to continue to work on the project. And to answer your question from before about the deadlines, we like to set, sort of, smaller internal deadlines. That’s something that we’ve gotten very used to. As for a long-term deadline, we haven’t set one yet. It could be helpful to set a long-term deadline because if we don’t, we could just do the project forever.

HUIZINGA: [LAUGHS] Right …

GREENE: We might never end because there’s always more to add. But yeah, we do set smaller internal deadlines, so like get x amount of content done by this time, reach out to x number of space agencies, reach out to x number of whatever.

HUIZINGA: Mm-hmm. Yeah, it feels like there should be some kind of, you know, “enough is enough” for this round.

GREENE: Yeah.

HUIZINGA: Otherwise, you’re the artist who never puts enough paint on the canvas and …

GREENE: I also really like what you said just now with, like, “this round” and “next round.” That’s a very good way to look at it. Like Dr. Black said, he produced two platters for us already towards the end of my last school year. And I think that was a very good, like, first round and a good way to continue doing the project where we work on the project and we get a lot of content done and then we can say, let’s let this be a great first draft or a great second draft for now, and we have that draft ready to go, but we can continue to work on it if we want to.

HUIZINGA: Well, you know the famous computer science tagline “Shipping is a feature.” [LAUGHS] So there’s some element of “let’s get it out there” and then we can do the next iteration of upgrades and launch then.

GREENE: Exactly.

HUIZINGA: Well, Richard, while most people don’t put scientists and rock stars in the same bucket, Dexter isn’t the first young person to admit being a little intimidated—and even starstruck—by an accomplished and well-known researcher, but some students aren’t bold enough to cold email someone like you and ask for words of wisdom. So now that we’ve got you on the show, as we close, perhaps you could voluntarily share some encouraging words or direction to the next generation of students who are interested in making the next generation of technologies. So I’ll let you have the last word.

BLACK: Oh, I have a couple of small things to say. First of all, researchers are just people, too. [LAUGHTER] And, you know, they like others to talk to them occasionally. And usually, they like opportunities to be passionate about their research and to communicate the exciting things that they’re doing. So don’t be put off; it’s quite reasonable to talk. You know, I’m really excited by, you know, the, kind of, the passion and imagination that I see in some of the young people around today, and Dexter and his colleagues are an example of that. You know, advice to them would be, you know, work on a technology that excites you and in particular something that, if you were successful, it would have a big impact on our world and, you know, that should give you a kind of motivation and a path to having impact.

HUIZINGA: Hmm. What you just said reminded me of a Saturday Night Live skit with Christopher Walken—it’s the “More Cowbell” skit—but he says, we’re just like other people; we put our pants on one leg at a time, but once our pants are on, we make gold records! I think that’s funny right there!

[MUSIC]

Richard and Dexter, thank you so much for coming on and sharing this project with us today on Collaborators. Really had fun!

GREENE: Yeah, thank you so much for having us.

BLACK: Thank you.

[MUSIC FADES]


[1] It was later noted that the original Golden Record team was also led by astrophysicist Frank Drake, whose efforts to search for extraterrestrial intelligence (SETI) inspired continued work in the area.

[2] While Dr. Jiang leads the Humanity’s Message to the Stars project, it is independent of NASA at this stage.

[3] In his capacity as Design Director for the original Golden Record, Lomberg chose and arranged the images included.

The post Collaborators: Silica in space with Richard Black and Dexter Greene appeared first on Microsoft Research.

Collaborators: AI and the economy with Brendan Lucier and Mert Demirer http://approjects.co.za/?big=en-us/research/podcast/collaborators-ai-and-the-economy-with-brendan-lucier-and-mert-demirer/ Thu, 08 Aug 2024 13:00:00 +0000 http://approjects.co.za/?big=en-us/research/?p=1065351 Researcher Brendan Lucier and professor Mert Demirer are applying their micro- and macroeconomic expertise, respectively, to forecasting the economic impact of AI. They share how they’re using a task-level breakdown of occupations to help predict the future.

The post Collaborators: AI and the economy with Brendan Lucier and Mert Demirer appeared first on Microsoft Research.

Headshots of Brendan Lucier and Mert Demirer for the Microsoft Research Podcast

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with. 

What can the breakdown of jobs into their specific tasks tell us about the long-term impact of AI on the economy? Microsoft Senior Principal Researcher Brendan Lucier and MIT Assistant Professor Mert Demirer are combining their expertise in micro- and macroeconomics, respectively, to build a framework for answering the question and ultimately helping the world prepare for and responsibly steer the course of disruption accompanying the technology. In this episode, they share how their work fits into the Microsoft research initiative AI, Cognition, and the Economy, or AICE; how the evolution of the internet may indicate the best is yet to come for AI; and their advice for budding AI researchers.

Transcript 

[TEASER] 

[MUSIC PLAYS UNDER DIALOGUE] 

BRENDAN LUCIER: What we’re doing here is a prediction problem. And when we were trying to look into the future this way, one way we do that is we try to get as much information as we can about where we are right now. And so we were lucky to have, like, a ton of information about the current state of the economy and the labor market and some short-term indicators on how generative AI seems to be, sort of, affecting things right now, in this moment. And then the idea is to layer some theory models on top of that to try to extrapolate forward, right, in terms of what might be happening, sort of get a glimpse of this future point. 

MERT DEMIRER: So this is a prediction problem where we cannot use machine learning or AI. Otherwise, it would have been a very easy problem to solve. So what you need instead is a model or, like, framework that will take, for example, the productivity gains or, like, the microfoundation as an input and then generate predictions for the entire economy.

[TEASER ENDS] 

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC FADES] 

On today’s episode, I’m talking to Dr. Brendan Lucier, a senior principal researcher in the economics and computation group at Microsoft Research, and Dr. Mert Demirer, an assistant professor of applied economics at the MIT Sloan School of Management. Brendan and Mert are exploring the economic impact of job automation and generative AI as part of Microsoft’s AI, Cognition, and the Economy, or AICE, research initiative. And since they’re part of the AICE Accelerator Pilot collaborations, let’s get to know our collaborators. Brendan, let’s start with you and your “business address,” if you will. Your research lives at the intersection of microeconomic theory and theoretical computer science. So tell us what people—shall we call them theorists?—who live there do and why they do it! 

BRENDAN LUCIER: Thank you so much for having me. Yeah, so this is a very interdisciplinary area of research that really gets at, sort of, this intersection of computation and economics. And what it does is it combines the ideas from algorithm design and computational complexity that we think of when we’re building algorithmic systems with, sort of, the microeconomic theory of how humans will use those systems and how individuals make decisions, right. How their goals inform their actions and how they interact with each other. And where this really comes into play is in the digital economy and platforms that we, sort of, see online that we work with on an everyday basis, right. So we’re increasingly interacting with algorithms as part of our day-to-day life. So we use them to search for information; we use them to find rides and find jobs and have recommendations on what products we purchase. And as we do these things online, you know, some of the algorithms that go into this, like, help them grow into these huge-scale, you know, internet-sized global platforms. But fundamentally, these are still markets, right. So even though there’s a lot of algorithms and a lot of computational ideas that go into these, really what they’re doing is connecting human users to the goods and the services and to each other over the course of what they need to do in their day-to-day life, right. And so this is where this microeconomic view really comes into play. So what we know is that when people are interacting with these platforms to get at what they want, they’re going to be strategic about this, right. So people are always going to use tools in the ways that, sort of, work best for them, right, even if that’s not what the designer has in mind. And so when we’re designing algorithms, in a big way, we’re not necessarily designing solutions; we’re designing the rules of a game that people are going to end up playing with the platform or with each other.

HUIZINGA: Wow. 

LUCIER: And so a big part of, sort of, what we do in this area is that if we’re trying to understand the impact of, like, a technology change or a new platform that we’re going to design, we need to understand what it is that the users want and how they’re going to respond to that change when they interact with it. 

HUIZINGA: Right.

LUCIER: When we think about, sort of, microeconomic theory, a lot of this is, you know, ideas from game theory, ideas about how it is that humans make decisions, either on their own or in interaction with each other, right.

HUIZINGA: Yeah.

LUCIER: So when I’m in the marketplace, maybe I’m thinking not only about what’s best for me, but, sort of, I’m anticipating maybe what other people are going to be doing, as well. And I really need to be thinking about how the algorithms that make up the fundamentals of those marketplaces are going to influence the way people are thinking about not only what they’re doing but what other people are doing. 

HUIZINGA: Yeah, this is so fascinating because even as you started to list the things that we use algorithms for—and we don’t even think about it—we look for a ride, a job, a date. All of these things that are part of our lives have become algorithmic! 

LUCIER: Absolutely. And it’s fascinating that, you know, when we think about, you know, someone might launch a new algorithm, a new advance to these platforms, that looks on paper like it’s going to be a great improvement, assuming that people keep behaving the way they were behaving before. But of course, people will naturally respond, and so there’s always this moving target of trying to anticipate what it is that people actually are really trying to do and how they will adapt. 

HUIZINGA: We’re going to get into that so deep in a few minutes. But first, Mert, you are an assistant professor of economics at MIT’s famous Sloan School of Management, and your homepage tells us your research interests include industrial organization and econometrics. So unpack those interests for our listeners and tell us what you spend most of your time doing at the Sloan School. 

MERT DEMIRER: Thank you so much for having me. My name is Mert Demirer. I am an assistant professor at MIT Sloan, and I spend most of my time doing research and teaching MBAs. And in my research, I’m an economist, so I do research in a field called industrial organization. And the overarching theme of my research is firms and firm productivity. So in my research, I ask questions like, what makes firms more productive? What are the determinants of firm growth, or how do industries evolve over time? So what I do is I typically collect data from firms, and I use some econometric model or sometimes a model of industrial or the firm model and then I answer questions like these. And more recently, my research focused on new emerging technologies and how firms use these emerging technologies and what are the productivity effect of these new technologies. And I, more specifically, I did research on cloud computing, which is a really important technology … 

HUIZINGA: Yeah … 

DEMIRER: … transforming firms and industries. And more recently, my research focuses on AI, both, like, the adoption of AI and the productivity impact of AI. 

HUIZINGA: Right, right. You know, even as you say it, I’m thinking, what’s available data? What’s good data? And how much data do you need to make informed analysis or decisions? 

DEMIRER: So finding good data is a challenge in this research. In general, there are, like, official data sources like census or, like, census of manufacturers, which have been commonly used in productivity research. That data is very comprehensive and very useful. But of course, if you want to get into the details of, like, new technologies and, like, granular firm analysis, that’s not enough. So what I have been trying to do more recently is to find industry partners which have lots of good data on other firms. 

HUIZINGA: Gotcha. 

DEMIRER: So these are typically the main data sources I use. 

HUIZINGA: You know, this episode is part of a little series within a series we’re doing on AI, Cognition, and the Economy, and we started out with Abi Sellen from the Cambridge, UK, lab, who gave us an overview of the big ideas behind the initiative. And you’re going to give us some discussion today on AI and, specifically, the economy. But before we get into your current collaboration, let’s take a minute to “geolocate” ourselves in the world of economics and how your work fits into the larger AICE research framework. So, Brendan, why don’t you situate us with the “micro” view and its importance to this initiative, and then Mert can zoom out and talk about the “macro” view and why we need him, too. 

LUCIER: Yeah, sure. Yeah, I just, I just love this AICE program and the way that it puts all this emphasis on how human users are interacting with AI systems and tools, and this is really, like, a focal point of a lot of this, sort of, micro view, also. So, like, from this econ starting point of microeconomics, one place I think of is imagining how users would want to integrate AI tools into their day-to-day, right—into both their workflow as part of their jobs; in terms of, sort of, what they’re doing in their personal lives. And when we think about how new tools like AI tech, sort of, comes into those workflows, an even earlier question is, how is it that users are organizing what they do into individual tasks and, like, why are they doing them that way in the first place, right? So when we want to think about, you know, how it is that AI might come in and help them with pain points that they’re dealing with, we, sort of, need to understand, like, what it is they’re trying to accomplish and what are the goals that they have in mind. And this is super important when we’re trying to build effective tools because we need to understand how they’ll change their behavior or adjust to incorporate this new technology and trying to zoom into that view. 

HUIZINGA: Yeah. Mert, tell us a little bit more about the macro view and why that’s important in this initiative, as well. 

DEMIRER: Macro view is very complementary to micro view, and it takes a more holistic approach and analyzes the economy with its components rather than focusing on individual components. So instead of focusing on one component and analyzing the effect of AI on a particular, like, occupation or sector, you analyze the whole economy and you model the interactions between these components. And this holistic view is really essential if you want to understand AI because this is going to allow you to make, like, long-term projections and it’s going to help you understand how AI is going to affect, like, the entire economy. And to make things, like, more concrete—and going back to what Brendan said—suppose you analyze a particular task or you figured out how AI solved the pain point and it increased the productivity by, like, x amount. That impact won’t be limited to that occupation or, let’s say, that industry, right? The wage is going to change in this industry, but it’s going to affect other industries, potentially, like, labor moving from one industry which is affected significantly by AI to other industries, and, like, maybe new firms are going to emerge, some firms are going to exit, and so on. So this holistic view, it essentially models all of these components in just one system and also tries to understand the interactions between those. And as I said, this is really helpful because, first of all, this helps you to make long-term projections about AI, how AI is going to impact the economy. And second, this is going to let you go beyond the first-order impact. Because you can essentially look at what’s going on and analyze or measure the first-order impact, but if you want to get the second- or third-order impact, then you need a framework or you need a bigger model. And those, like, second- or third-order effects are typically the unintended effects or the hidden effects. 

HUIZINGA: Right. 

DEMIRER: And that’s why this, like, more holistic approach is useful, particularly for AI. 

HUIZINGA: Yeah, I got to just say right now, I feel like I wanted to sit down with you guys for, like, a couple hours—not with a microphone—but just talking because this is so fascinating. And Abi Sellen mentioned this term “line of sight” into future projections, which was sort of an AICE overview goal. Interestingly, Mert, when you mentioned the term productivity, is that the metric? Is productivity the metric that we’re looking to in terms of this economic impact? It seems to be a buzzword, that we need to be “more productive.” Is that, kind of, a framework for your thinking? 

DEMIRER: I think it is an important component. It’s an important component how we should analyze and think about AI because again, like, when you zoom into, like, the micro view of, like, how AI is going to affect my day-to-day work, that is, like, very natural to think that in terms of, like, productivity—oh, I saved, like, half an hour yesterday by using, like, AI. And, OK, that’s the productivity, right. That’s very visible. Like, that’s something you can see, something you can easily measure. But that’s only one component. So you need to understand how that productivity effect is going to change other things. 

HUIZINGA: Right! 

LUCIER: Like how I am going to spend the additional time, whether I’m going to spend that time for leisure or I’m going to do something else. 

HUIZINGA: Right. 

DEMIRER: In that sense, I think productivity is an important component, and maybe it is, like, the initial point to analyze these technologies. But we will definitely go beyond the productivity effect and understand how these, like, potential productivity effects are going to affect, like, the other parts of the economy and how the agents—like firms, people—are going to react to that potential productivity increase. 

HUIZINGA: Yeah, yeah, in a couple questions I’ll ask Brendan specifically about that. But in the meantime, let’s talk about how you two got together on this project. I’m always interested in that story. This question is also known as “how I met your mother.” And the meetup stories are often quite fun and sometimes surprising. In fact, last week, one person told his side of the story, and the other guy said, hey, I didn’t even know that! [LAUGHS] So, Brendan, tell us your side of who called who and how it went down, and then Mert can add his perspective. 

LUCIER: Great. So, yeah, so I’ve known Mert for quite some time! Mert joined our lab as a—the Microsoft Research New England lab—as an intern some years ago and then as a postdoc in, sort of, 2020, 2021. And so over that time, we got to know each other quite well, and I knew a lot about the macroeconomic work that Mert was doing. And so then, fast-forward to more recently, you know, this particular project initially started as discussions between myself and my colleague Nicole Immorlica at Microsoft Research and John Horton, who’s an economist at MIT who was visiting us as a visiting researcher, and we were discussing how the structure of different jobs and how those jobs break down into tasks might have an impact on how they might be affected by AI. And then very early on in that conversation, we, sort of, realized that, you know, this was really a … not just, like, a microeconomic question; it’s not just a market design question. The, sort of, the macroeconomic forces were super important. And then immediately, we knew, OK, like, Mert’s top of our list; we need, [LAUGHTER] we need, you know, to get Mert in here and talking to us about it. And so we reached out to him. 

HUIZINGA: Mert, how did you come to be involved in this from your perspective? 

DEMIRER: As Brendan mentioned, I spent quite a bit of time at Microsoft Research, both as an intern and as a postdoc, and Microsoft Research is a very, like, fun place to be as an economist and a really productive place to be as an economist because it’s very, like, interdisciplinary. It is a lot different from a typical academic department and especially an economics academic department. So my time at Microsoft Research has already led to a bunch of, like, papers and collaborations. And then when Brendan, like, emailed me with the research question, I thought it’s, like, a no-brainer. It’s an interesting research question, and it’s, like, with Microsoft Research. So I said, yeah, let’s do it! 

HUIZINGA: Brendan, let’s get into this current project on the economic impact of automation and generative AI. Such a timely and fascinating line of inquiry. Part of your research involves looking at a lot of current occupational data. So from the vantage point of microeconomic theory and your work, tell us what you’re looking at, how you’re looking at it, and what it can tell us about the AI future. 

LUCIER: Fantastic. Yeah, so in some sense, the idea of this project and the thing that we’re hoping to do is, sort of, get our hands on the long-term economic impact of generative AI. But it’s fundamentally, like, a super-hard problem, right? For a lot of reasons. And one of those reasons is that, you know, some of the effects could be quite far in the future, right. So this is things where the effects themselves but especially, like, the data we might look at to measure them could be years or decades away. And so, fundamentally, what we’re doing here is a prediction problem. And when we were trying to, sort of, look into the future this way, one way we do that is we try to get as much information as we can about where we are right now, right. And so we were lucky to have, like, a ton of information about the current state of the economy and the labor market and some short-term indicators on how generative AI seems to be, sort of, affecting things right now in this moment. And then the idea is to, sort of, layer some theory models on top of that to try to extrapolate forward, right, in terms of what might be happening, sort of get a glimpse of this future point. So in terms of the data we’re looking at right now, there’s this absolutely fantastic dataset that comes from the Department of Labor. It’s the O*NET database. This is the, you know, Occupational Information Network—publicly available, available online—and what it does is basically it breaks down all occupations across the United States, gives a ton of information about them, including—and, sort of, importantly for us—a very detailed breakdown of the individual tasks that make up the day-to-day in terms of those occupations, right. So, for example, if you’re curious to know what, like, a wind energy engineer does day-to-day, you could just go online and look it up, and so it basically gives you the entire breakdown. Which is fantastic. I mean, it’s, you know, I love, sort of, browsing it. It’s an interesting thing to do with an afternoon. [LAUGHTER] But from our perspective, the fact that we have these tasks—and it actually gives really detailed information about what they are—lets us do a lot of analysis on things like how AI tools and generative AI might help with different tasks. There’s a lot of analysis that we and, like, a lot of other papers coming out the last year have done in looking at which tasks do we think generative AI can have a big influence on and which ones less so in the present moment, right. And there’s been work by, you know, OpenAI and LinkedIn and other groups, sort of, really leaning into that. We can actually take that one step further and actually look also at the structure between tasks, right. So we can see not only, like, what fraction of the time I spend are things that can be influenced by generative AI but also how they relate to, like, my actual, sort of, daily goals. Like, when I look at the tasks I have to do, do I have flexibility in when and where I do them, or are things in, sort of, a very rigid structure? Are there groups of interrelated tasks that all happen to be really exposed to generative AI? And, you know, what does that say about how workers might reorganize their work as they integrate AI tools in and how that might change the nature of what it is they’re actually trying to do on a day-to-day basis? 
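
To make the task-level measurement Lucier describes concrete, here is a minimal sketch, not the project’s actual pipeline, of scoring occupations by the share of their tasks that look amenable to generative AI assistance. The occupations, task names, exposure labels, and column names below are invented for illustration; pandas is assumed to be available.

```python
# Toy illustration only: score each occupation by the share of its tasks
# that look amenable to generative AI assistance. Occupations, tasks, and
# the 0/1 exposure labels are invented, not real O*NET data.
import pandas as pd

tasks = pd.DataFrame({
    "occupation": ["Radiologist"] * 4 + ["Wind energy engineer"] * 3,
    "task": [
        "Analyze medical history",
        "Summarize imaging reports",
        "Recommend plan of tests",
        "Consult with patients",
        "Draft site assessment report",
        "Design turbine layout",
        "Inspect installations",
    ],
    "genai_exposed": [1, 1, 0, 0, 1, 0, 0],  # 1 = plausibly AI-assisted
})

# Fraction of each occupation's tasks that are exposed to generative AI
exposure_share = tasks.groupby("occupation")["genai_exposed"].mean()
print(exposure_share.sort_values(ascending=False))
```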

HUIZINGA: Right. 

LUCIER: So just to give an example, so, like, one of the earliest examples we looked at as we started digging into the data and testing this out was radiology. And so radiology is—you know, this is medical doctors that specialized in using medical imaging technology—and it happens to be an interesting example for this type of work because you know there are lots of tasks that make that up and they have a lot of structure to them. And it turns out when you look at those tasks, there’s interestingly, like, a big group of tasks that all, sort of, are prerequisites for an important, sort of, core part of the job, … 

HUIZINGA: Right … 

LUCIER: … which is, sort of, recommending a plan of which tests to, sort of, perform, right. So these are things like analyzing medical history and analyzing procedure requests, summarizing information, forming reports. And these are all things that we, sort of, expect that generative AI can be quite effective at, sort of, assisting with, right. And so the fact that these are all, sort of, grouped together and feed into something that’s a core part of the job really is suggestive that there’s an opportunity here to delegate some of those, sort of, prerequisite tasks out to, sort of, AI tools so that the radiologist can then focus on the important part, which is the actual recommendations that they can make. 
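
The radiology example hinges on the structure between tasks, not just their count. A hedged sketch of that idea, with task names and edges invented for illustration and networkx assumed to be available, is to model the tasks as a small dependency graph and ask whether every prerequisite of a core task is AI-exposed.

```python
# Hedged sketch of the "task structure" idea from the radiology example:
# model tasks as a dependency graph and ask whether every prerequisite of a
# core task is AI-exposed. Task names and edges are illustrative only.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("Analyze medical history", "Recommend plan of tests"),
    ("Analyze procedure requests", "Recommend plan of tests"),
    ("Summarize information", "Recommend plan of tests"),
    ("Form reports", "Recommend plan of tests"),
])

exposed = {
    "Analyze medical history",
    "Analyze procedure requests",
    "Summarize information",
    "Form reports",
}
core_task = "Recommend plan of tests"

# If all prerequisites of the core task are exposed, the upstream block is a
# candidate for delegation to AI tools, leaving the core recommendation to
# the radiologist.
prerequisites = set(g.predecessors(core_task))
print(prerequisites <= exposed)  # True in this toy example
```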

HUIZINGA: Right. 

LUCIER: And so the takeaway here is that it matters, like, how these tasks are related to each other, right. Sort of, the structure of, you know, what it is that I’m doing and when I’m doing them, right. So this situation would perhaps be very different if, as I was doing these tasks where AI is very helpful, I was going back and forth doing consulting with patients or something like this, where in that, sort of, scenario, I might imagine that, yeah, like an AI tool can help me, like, on a task-by-task basis but maybe I’m less likely to try to, like, organize all those together and automate them away. 

HUIZINGA: Right. Yeah, let me focus a little bit more on this idea of you in the lab with all this data, kind of, parsing out and teasing out the tasks and seeing which ones are targets for AI, which ones are threatened by AI, which ones would be wonderful with AI. Do you have buy-in from these exemplar-type occupations that they say, yes, we would like you to do this to help us? I mean, is there any of that collaboration going on with these kinds of occupations at the task level? 

LUCIER: So the answer is not yet. [LAUGHTER] But this is definitely an important part of the workflow. So I would say that, you know, ultimately, the goal here is that, you know, as we’re looking for these patterns across, like, individual exemplar occupations, that, sort of, what we’re looking for is relationships between tasks that extrapolate out, right. Across lots of different industries, right. So, you know, it’s one thing to be able to say, you know, a lot of very deep things about how AI might influence a particular job or a particular industry. But in some sense, the goal here is to see patterns of tasks that are repeated across lots of different occupations, across lots of different sectors that say, sort of, these are the types of patterns that are really amenable to, sort of, AI being integrated well into the workforce, whereas these are scenarios where it’s much more of an augmenting story as opposed to an automating story. But I think one of the things that’s really interesting about generative AI as a technology here, as opposed to other types of automated technology, is that while there are lots of aspects of a person’s job that can be affected by generative AI, there’s this relationship between the types of work that I might use an AI for versus the types of things that are, sort of, like the core feature of what I’m doing on a day-to-day. 

HUIZINGA: Right. Gotcha … 

LUCIER: And so, maybe it’s, like, at least in the short term, it actually looks quite helpful to say that, you know, there are certain aspects of my work, like going out and summarizing a bunch of heavy data reports, that I’m very happy to have an AI, sort of, do that part of my work. So then I can go and use those things forward in, sort of, the other half of my day. 

HUIZINGA: Yeah. And that’s to Mert’s point: look how much time I just saved! Or I got a half hour back! We’ll get to that in a second. But I really now am eager, Mert, to have you explain your side of this. Brendan just gave us a wonderful task-centric view of AI’s impact on specific jobs. I want you to zoom out and talk about the holistic, as you mentioned before, or macroeconomic view in this collaboration. How are you looking at the impact of AI beyond job tasks, and what role does your work play in helping us understand how these advances in AI might affect job markets and the economy writ large? 

DEMIRER: One thing Brendan mentioned a few minutes ago is this is a prediction task. Like, we need to predict what will be the effect of AI, how AI is going to affect the economy, especially in the long run. So this is a prediction problem where we cannot use machine learning or AI. Otherwise, it would have been a very easy problem to solve. 

HUIZINGA: Right … [LAUGHS] 

DEMIRER: So what you need instead is a model or, like, framework that will take inputs like the productivity gains Brendan talked about as a microfoundation and then generate predictions for the entire economy. To do that, what I do in my research is I develop and use models of industries and firms. So these models essentially incorporate a bunch of economic agents. Like, this could be labor; this could be firms; this could be [a] policymaker who is trying to regulate the industry. And then you write down the incentives of these, like, different agents in the economy, and then you write down this model, you solve this model with the available data, and then this model gives you predictions. So once you have a model like this, you can ask what would be the effect of a change in the economic environment on, like, wages, on productivity, on industry concentration, let’s say. So this is what I do in my research. So, like, I briefly mentioned my research on cloud computing. I think this is a very good example. When you think about cloud computing, everyone always, like, thinks about how it helps you, like, scale very rapidly, which is true, and, like, which is the actual, like, firm-level effect of cloud computing. But then the question is, like, how that is going to affect the entire industry, whether the industry is going to be more concentrated or less concentrated, whether it’s going to grow, like, faster, or which industry is going to grow faster, and so on. So essentially, in my research, I develop models like this to answer questions—these, like, high-level questions. And when it comes to AI, we have these, like, very detailed micro-level studies, like these exposure measures Brendan already mentioned, and the framework, the micro framework, we developed is a task view of AI. What you do is, essentially, you take the output of that micro model and then you feed it into a bigger economy-level model, and you develop a higher-level prediction. So, for example, you can apply this, like, task-based model on many different occupations. You can get a number for every occupation, like for occupation A, the productivity gain will be 5 percent; for occupation B, it’s going to be, like, 10 percent; and so on. You can aggregate them at the industry level—you can get some industry-level numbers—you feed those numbers into a more, like, general equilibrium model and then you solve the model and then you answer questions like, what will be the effect of AI on wages on average? Or, like, what will be the effect of AI on, like, total output in the economy? So my research is, like, more on answering, like, bigger industry-level or economy-level questions. 
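
As a toy numerical illustration of the aggregation step Demirer describes, not the actual general equilibrium model, one can roll hypothetical occupation-level productivity gains up to industry and economy level with labor-share and output weights. All numbers and weights below are made up.

```python
# Toy aggregation only: roll hypothetical occupation-level productivity gains
# up to industry level using labor-share weights, then to one economy-wide
# number. A real general equilibrium model would also let wages, prices, and
# labor allocation respond; this is just the first-order bookkeeping step.
occupation_gain = {"A": 0.05, "B": 0.10, "C": 0.00}  # e.g., 5%, 10%, 0%

industry_labor_shares = {
    "Industry 1": {"A": 0.6, "B": 0.4},
    "Industry 2": {"B": 0.2, "C": 0.8},
}

industry_gain = {
    industry: sum(share * occupation_gain[occ] for occ, share in occs.items())
    for industry, occs in industry_labor_shares.items()
}
print(industry_gain)  # {'Industry 1': 0.07, 'Industry 2': 0.02}

# Economy-wide gain, weighting each industry by its share of total output
output_shares = {"Industry 1": 0.3, "Industry 2": 0.7}
economy_gain = sum(output_shares[i] * g for i, g in industry_gain.items())
print(round(economy_gain, 3))  # 0.035
```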

HUIZINGA: Well, Brendan, one of our biggest fears about AI is that it’s going to “steal our jobs.” I just made air quotes on a podcast again. But this isn’t our first disruptive technology rodeo, to use a phrase. So that said, it’s the first of its kind. What sets AI apart from disruptive technologies of the past, and how can looking at the history of technological revolutions help us manage our expectations, both good and bad? 

LUCIER: Fantastic. Such an important question. Yeah, like there’s been, you know, just so much discussion and “negativity versus optimism” debates in the world in the public sphere … 

HUIZINGA: Hope versus hype … 

LUCIER: … and in the academic sphere … yeah, exactly. Hope versus hype. But as you say, yeah, it’s not our first rodeo. And we have a lot of historical examples of these, you know, disruptive, like, so-called general-purpose technologies that have swept through the economy and made a lot of changes and enabled things like electricity and the computer and robotics. Going back further, steam engine and the industrial revolution. You know, these things are revolutions in the sense that, you know, they sort of rearrange work, right. They’re not just changing how we do things. They change what it is that we even do, like just the nature of work that’s being done. And going back to this point of automation versus augmentation, you know, what that looks like can vary quite a bit from revolution to revolution, right. So sometimes this looks like fully automating away certain types of work. But in other cases, it’s just a matter of, sort of, augmenting workers that are still doing, in some terms, what they were doing before but with a new technology that, like, substantially helps them and either takes part of their job and makes it redundant so they can focus on something that’s, you know, more core or just makes them do what they were doing before much, much faster. 

HUIZINGA: Right. 

LUCIER: And either way, you know, this can have a huge impact on the economy and especially, sort of, the labor market. But that impact can be ambiguous, right. So, you know, if I make, you know, a huge segment of workers twice as productive, then companies have a choice. They can keep all the workers and have twice the output, or they can get the same output with half as many workers or something in between, and, you know, which one of those things happens depends not even so much on the technology but on, sort of, the broader economic forces, right. The, you know, the supply and demand and how things are going to come together in equilibrium, which is why this macroeconomic viewpoint is so important to actually give the predictions on, you know, how companies might respond to these changes that are coming through the new technology. Now, you know, where GenAI is, sort of, interesting as an example is the way that, you know, what types of work it impacts, right. So generative AI is particularly notable in that it impacts, you know, high-skill, you know, knowledge-, information-based work directly, right[1]. And it cuts across so many different industries. We think of all the different types of occupations that involve, you know, summarizing data or writing a report or writing emails. There’s so many different types of occupations where this might not be the majority of what they do, but it’s a substantial fraction of what they do. And so in many cases, you know, this technology—as we were saying before—can, sort of, come in and has the potential to automate out or at least really help heavily assist with parts of the job but, in some cases, sort of, leave some other part of the job, which is a core function. And so these are the places where we really expect this human-AI collaboration view to be especially impactful and important, right. Where we’re going to have lots of different workers in lots of different occupations who are going to be making choices on which parts of their work they might delegate to, sort of, AI agents and which parts of the work, you know, they really want to keep their own hands on. 
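
Lucier’s “twice as productive” point, that firms may keep all workers and double output or keep output and cut workers, comes down to how demand responds. Here is a back-of-the-envelope sketch using a textbook constant-elasticity demand curve; it is a standard simplification for illustration, not the project’s model, and the function and numbers are invented.

```python
# Back-of-the-envelope sketch: if a technology multiplies output per worker,
# does a firm need more or fewer workers? With isoelastic demand and price
# falling in proportion to unit cost, output demanded scales with
# productivity ** elasticity, while each worker produces `productivity`
# times as much. Textbook simplification, not the project's actual model.
def relative_employment(productivity: float, demand_elasticity: float) -> float:
    output_demanded = productivity ** demand_elasticity
    return output_demanded / productivity  # workers needed vs. before

print(relative_employment(2.0, 1.5))  # elastic demand: ~1.41, employment rises
print(relative_employment(2.0, 0.5))  # inelastic demand: ~0.71, employment falls
```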

HUIZINGA: Right, right. Brendan, talk a little more in detail about this idea of low-skill work and high-skill work, maybe physical labor and robotics kind of replacements versus knowledge worker and mental work replacements, and maybe shade it a little bit with the idea of inequalities and how that’s going to play out. I mean, I imagine this project, this collaboration, is looking at some of those issues, as well? 

LUCIER: Absolutely. So, yeah, when we think about, you know, what types of work get affected by some new technology—and especially, sort of, automation technology—a lot of the time in the past, the sorts of work that have been automated out are what we’d call low-skill or, like, at least, sort of, more physical types of labor being replaced or automated by, you know, robotics. We think about the automation of manufacturing and how that displaces, like, large groups of workers who are, sort of, working in the factory manually. And so there’s a sense that when this, sort of, happens and a new technology comes through and really disrupts work, there’s this transition period where, for certain people, you know, even if at the end of the day the economy will eventually reach, sort of, a new equilibrium which is generally more productive or good overall, there’s a big question of who’s winning and who’s losing both in the long term but especially in that short term, … 

HUIZINGA: Yeah! 

LUCIER: … sort of intermediate, you know, potentially very chaotic and disruptive period. And so very often in these stories of automation historically, it’s largely marginalized low-skill workers who are really getting affected by that transition period. AI—and generative AI in particular—is, sort of, interesting in the potential to be really hitting different types of workers, right. 

HUIZINGA: Right. 

LUCIER: Really this sort of, you know, middle sort of white-collar, information-work class. And so, you know, really a big part of this project and trying to, sort of, get this glimpse into the future is getting, sort of, this—again, as you said—line of sight on which industries we expect to be, sort of, most impacted by this, and is it as we might expect, sort of, those types of work that are most directly affected, or are there second- or third-order effects that might do things that are unanticipated? 

HUIZINGA: Right, and we’ll talk about that in a second. So, Mert, along those same lines, it’s interesting to note how new technologies often start out simply by imitating old technologies. Early movies were stage plays on film. Email was a regular letter sent over a computer. [LAUGHS] Video killed the radio star … But eventually, we realized that these new technologies can do more than we thought. And so when we talked before, you said something really interesting. You said, “If a technology only saves time, it’s boring technology.” What do you mean by that? And if you mean what I think you mean, how does the evolution—not revolution but evolution—of previous technologies serve as a lens for the affordances that we may yet get from AI? 

DEMIRER: Let me say first, technology that saves time is still very useful technology! [LAUGHTER] Who wouldn’t want a technology that will save time? 

HUIZINGA: Sure … 

DEMIRER: But it is less interesting for us, like, to study and maybe it’s, like, less interesting in terms of, like, the broader implications. And so why is that? Because if a technology saves time, then, OK, so I am going to have maybe more time, and the question is, like, how I’m going to spend that time. Maybe I’m going to have more leisure or maybe I’m going to have to produce more. It’s, like, relatively straightforward to analyze and quantify. However, like, the really impactful technologies could allow us to accomplish new tasks that were previously impossible, and they should open up new opportunities for creativity. And I think here, this knowledge-worker impact of AI is particularly important because I think as a technology, the more it affects knowledge workers, the more likely it’s going to allow us to achieve new things; it’s going to allow us to create more things. So I think in that sense, I think generative AI has a huge potential in terms of making us accomplish new things. And to give you an example from my personal experience, so I’m a knowledge worker, so I do research, I teach, and generative AI is going to help my work, as well. So it’s already affecting … so it’s already saving me time. It’s making me more productive. So suppose that generative AI just, like, makes me 50 percent more productive, let’s say, like five years from now, and that’s it. That’s the only effect. So what’s going to happen to my job? Either I’m going to maybe, like, take more time off or maybe I’m going to write more of the same kind of papers I am writing in economics. But … so imagine, like, generative AI helping me write a different kind of paper. How is that possible? So I have a PhD in econ, and if I try really hard, maybe I can do another PhD. But that’s it. Like, I can specialize in only one or, like, two topics. But imagine generative AI as, like, an agent or collaborator having a PhD in, like, hundreds of different fields, and then you can, like, collaborate and, like, communicate and get information through generative AI on really different fields. That will allow me to do different kinds of research, like more interdisciplinary kinds of research. In that sense, I think the really … the most important part of generative AI is going to be the new things it will allow us to achieve, like what creative new things we are going to do. And I can give you a simple example. Like, we were talking about previous technologies. Let’s think of the internet. So what was the first application of the internet? It’s sending an email. It saves you time. Instead of writing things on a paper and, like, mailing it, you just, like, send it immediately, and it’s a clear time-saving technology. But what are the major implications of the internet, like, today? It’s not email. It is like e-commerce, or it is like social media. It allows us to access an infinite number of products beyond a few stores in our neighborhood, or it allows us to communicate or connect with people all around the world … 

HUIZINGA: Yeah … 

DEMIRER: … instead of, again, like limiting ourselves to our, like, social circle. So in that sense, I think we are currently in the “email phase” of AI, … 

HUIZINGA: Right … 

DEMIRER: … and we are going to … like, I think AI is going to unlock so many other new capabilities and opportunities, and that is the most exciting part. 

HUIZINGA: Clearly, one of the drivers behind the whole AICE research initiative is the question of what could possibly go wrong if we got everything right, and I want to anchor this question on the common premise that if we get AI right, it will free us from drudgery—we’ve kind of alluded to that—and free us to spend our time on more meaningful or “human”—more air quotes there—pursuits. So, Brendan, have you and your team given any thought to this idea of unintended consequences and what such a society might actually look like? What will we do when AI purportedly gives us back our time? And will we really apply ourselves to making the world better? Or will we end up like those floating people in the movie WALL-E

LUCIER: [LAUGHS] I love that framing, and I love that movie, so this is great. Yeah. And I think this is one of these questions about, sort of, the possible futures that I think is super important to be tackling. In the past, people, sort of, haven’t stopped working; they’ve shifted to doing different types of work. And as you’re saying, there’s this ideal future in which what’s happening is that people are shifting to doing more meaningful work, right, and the AI is, sort of, taking over parts of the, sort of, the drudgery, you know. These, sort of, annoying tasks that, sort of, I need to do as just, sort of, side effects of my job. I would say that where the economic theory comes in and predicts something that’s slightly different is that I would say that the economic theory predicts that people will do more valuable work in the sense that people will tend to be shifted in equilibrium towards doing things that complement what it is that the AI can do or doing things that the AI systems can’t do as well. And, you know, this is really important in the sense that, like, we’re building these partnerships with these AI systems, right. There’s this human-AI collaboration where human people are doing the things that they’re best at and the AI systems are doing the things that they’re best at. And while we’d love to imagine that, like, that more valuable work will ultimately be more meaningful work in that it’s, sort of, fundamentally more human work, that doesn’t necessarily have to be the case. You know, we can imagine scenarios in which I personally enjoy … there are certain, you know, types of routine work that I happen to personally enjoy and find meaningful. But even in that world, if we get this right and, sort of, the, you know, the economy comes at equilibrium to a place where people are being more productive, they’re doing more valuable work, and we can effectively distribute those gains to everybody, there’s a world in which, you know, this has the potential to be the rising tide that lifts all boats. 

HUIZINGA: Right. 

LUCIER: And so that what we end up with is, you know, we get this extra time, but through this different sort of indirect path of the increased standard of living that comes with an improved economy, right. And so that’s the sort of situation where that source of free time I think really has the potential to be somewhere where we can use it for meaningful pursuits, right. But there are a lot of steps to take to, sort of, get there, and this is why it’s, I think, super important to get this line of sight on what could possibly be happening in terms of these disruptions. 

HUIZINGA: Right. Brendan, something you said reminded me that I’ve been watching a show called Dark Matter, and the premise is that there are many possible lives we could live, all determined by the choices we make. And you two are looking at possible futures in labor markets and the economy and trying to make models for them. So how do existing hypotheses inform where AI is currently headed, and how might your research help steer things in a more optimal direction? 

LUCIER: Yeah, that’s a really big question. Again, you know, as we’ve said a few times already, there’s this goal here of getting this heads-up on which segments of the economy can be most impacted. And we can envision these better futures as the economy stabilizes, and maybe we can even envision pathways towards getting there by trying to address, sort of, the potential effects of inequality and the distribution of those gains across people. But even in a world where we get all those things right, that transition is necessarily going to be disruptive, right. 

HUIZINGA: Right. 

LUCIER: And so even if we think that things are going to work out well in the long term, in the short term, there’s certainly going to be things that we would hope to invest in to, sort of, improve for everyone. And so even in a world where we believe, sort of, the technology is out there and we really think that people are going to be using it in the ways that make most sense to them, as we get hints about where these impacts can be largest, I think that an important value there is that it lets us anticipate opportunities for responsible stewardship, right. So if we can see where there’s going to be impact, I think we can get a hint as to where we should be focusing our efforts, and that might look like getting ahead of demand for certain use cases or anticipating extra need for, you know, responsible AI guardrails, or even just, like, understanding, you know, [how] labor market impacts can help us inform policy interventions, right. And I think that this is one of the things that gets me really excited about doing this work at Microsoft specifically. Because of how much Microsoft has been investing in responsible AI, and, sort of, the fundamentals that underlie those guardrails and those possible actions means that we, sort of, in this company, we have the ability to actually act on those opportunities, right. And so I think it’s important to really, sort of, try to shine as much light as possible on where we think those will be most effective. 

HUIZINGA: Yeah. Mert, I usually ask my guests on Collaborators where their research is on the spectrum from “lab to life,” but this isn’t that kind of research. We might think of it more in terms of “lab for life” research, where your findings could actually help shape the direction of the product research in this field. So that said, where are you on the timeline of this project, and do you have any learnings yet that you could share with us? 

DEMIRER: I think the first thing I learned about this project is it is difficult to study AI! [LAUGHTER] So we are still in, like, the early stages of the project. So we developed this framework we talked about earlier in the podcast, and now what we are doing is we are applying that framework to a few particular occupations. And the challenge we had is that these occupations, when you just describe them, seem, like, very simple, but when you go to this, like, task view, they’re actually very complex: the number of tasks (sometimes we see in the data, like, 20 or 30 tasks they do) and the relationships between those tasks. So it turned out to be more difficult than I expected. So what we are currently doing is we are applying the framework to a few specific cases, which helps us understand how the model works and whether the model needs any adjustment. And then the goal is once we understand the model on a few specific cases, we’ll scale that up. And then we are going to develop these big predictions on the economy. So we are currently not there yet, but we are hoping to get there pretty soon. 

HUIZINGA: And just to, kind of, follow up on that, what would you say your successful outcome of this research would be? What’s your artifact that you would deliver from this project as collaboration? 

DEMIRER: So ultimately, our goal is to develop predictions that will inform the trajectory AI is taking, that’s going to inform, like, policy. That’s our goal, and if we generate that output, and especially if it informs policy on how firms or different agents of the economy adopt AI, I think that will be the ideal output for this project. 

HUIZINGA: Yeah. And what you’ve just differentiated is that there are different end users of your research. Some of them might be governmental. Some of them might be corporate. Some of them might even be individuals or even just layers of management that try to understand how this is working and how they’re working. So wow. Well, I usually close each episode with some future casting. But that basically is what we’ve been talking about this whole episode. So I want to end instead by asking each of you to give some advice to researchers who might be just getting started in AI research, whether that’s the fields that develop the technology itself or the fields that help define its uses and the guardrails we put around it. So what is it important for us to pay attention to right now, and what words of wisdom could you offer to aspiring researchers? I’ll give you each the last word. Mert, why don’t you go first? 

DEMIRER: My first advice would be to use AI yourself as much as possible. Because the great thing about AI is that everyone can access this technology even though it’s at a very early stage, so there’s a huge opportunity. So I think if you want to study AI, like, you should use it as much as possible. That personally allows me to understand the technology better and also develop research questions. And the second advice would be to stay up to date with what’s happening. This is a very rapidly evolving technology. There is a new product, new use case, new model every day, and it’s hard to keep up. And it is actually important to distinguish between questions that won’t be relevant two months from now versus questions that are going to be important five years from now. And that requires understanding how the technology is evolving. So I personally find it useful to stay up to date with what’s going on. 

HUIZINGA: Brendan, what would you add to that? 

LUCIER: So definitely fully agree with all of that. And so I guess I would just add something extra for people who are more on the design side, which is that when we build, you know, these systems, these AI tools and guardrails, we oftentimes will have some anticipated, you know, usage or ideas in our head of how this is going to land, and then there’ll always be this moment where it, sort of, meets the real users, you know, the humans who are going to use those things in, you know, possibly unanticipated ways. And, you know, this can be oftentimes a very frustrating moment, but this can be a feature, not a bug, very often, right. So the combined insight and effort of all the users of a product can be this, like, amazing strong force. And so, you know, this is something where we can try to fight against it or we can really try to, sort of, harness it and work with it, and this is why it’s really critical when we’re building especially, sort of, user-facing AI systems, that we design them from the ground up to be, sort of, collaborating, you know, with our users and guiding towards, sort of, good outcomes in the long term, you know, as people jointly, sort of, decide how best to use these products and guide towards, sort of, good usage patterns. 

[MUSIC] 

HUIZINGA: Hmmm. Well, Brendan and Mert, as I said before, this is timely and important research. It’s a wonderful contribution to the AICE research initiative, and I’m thrilled that you came on the podcast today to talk about it. Thanks for joining us. 

LUCIER: Thank you so much. 

DEMIRER: Thank you so much. 

[MUSIC FADES] 


[1] For more information, Lucier notes two resources about the economic impact of GenAI: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models and Preparing the Workforce for Generative AI.

The post Collaborators: AI and the economy with Brendan Lucier and Mert Demirer appeared first on Microsoft Research.

]]>
Collaborators: Sustainable electronics with Jake Smith and Aniruddh Vashisth http://approjects.co.za/?big=en-us/research/podcast/collaborators-sustainable-electronics-with-jake-smith-and-aniruddh-vashisth/ Thu, 11 Jul 2024 13:00:00 +0000 http://approjects.co.za/?big=en-us/research/?p=1054395 Printed circuit boards are abundant—in the stuff we use and in landfills. Researcher Jake Smith and professor Aniruddh Vashisth discuss the development of vitrimer-based PCBs that perform comparably to traditional PCBs but have less environmental impact.

The post Collaborators: Sustainable electronics with Jake Smith and Aniruddh Vashisth appeared first on Microsoft Research.

]]>
photos of Jake Smith and Aniruddh Vashisth for the Microsoft Research Collaborators podcast

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with.

Printed circuit boards (PCBs) are abundant—in the items we use daily and then in landfills when they’ve reached end of life. In this episode, Senior Researcher Jake Smith and Aniruddh Vashisth, assistant professor of mechanical engineering at the University of Washington, join host Gretchen Huizinga to talk about the development of vitrimer-based PCBs, or vPCBs, that perform comparably to traditional circuit boards but have less environmental impact. Smith and Vashisth explore machine learning’s role in accelerating the discovery of more sustainable materials and what the more healable vitrimer polymer could mean not only for e-waste but more broadly for aerospace, the automotive industry, and beyond.

Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

ANIRUDDH VASHISTH: From the computation point of view, we always thought that if somebody gave us, like, a hundred different chemistries, we can do a bunch of simulations; tell you, like, 10 of these actually work. What we’ve been able to do specifically for vitrimers is that we’re able to look at the problem from the other side, and we are able to say that if you tell me a particular application, this particular chemistry would work best for you. In essence, what we were thinking of is that if aliens abducted all the chemists from the world, can we actually come up with a framework? [LAUGHTER]

JAKE SMITH: If all of this work is successful, in 10 years, maybe our materials design process looks completely different, where we’ve gone from this kind of brute-force screening to an approach where you start with the properties that you care about—they’re defined by the application that you have in mind—and we use this, like, “need space” to define the material that we would like, and we can use machine learning, artificial intelligence, in order to get us to the structure that we need to make in order to actually achieve this design space.

[TEASER ENDS]

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC FADES]

I’m thrilled to be in the booth today, IRL, with Dr. Jake Smith, a senior researcher at Microsoft Research and part of the Microsoft Climate Research Initiative, or MCRI. And with him is Dr. Aniruddh Vashisth. He’s an assistant professor of mechanical engineering at the University of Washington and director of the Vashisth Research Lab. Jake and Aniruddh are working on a project that uses machine learning to help scientists design sustainable polymers with a particularly exciting application in the field of the ubiquitous printed circuit board, or PCB. But before we get all sustainable, let’s meet our collaborators!

Jake, I’ll start with you. You’re a self-described “chemist with relatively broad interests across applications” and you’ve done some pretty cool things in your career. Tell us about those interests and where they’ve led you and how they’ve contributed to the work you’re doing now in MCRI, or the Microsoft Climate Research Initiative.

JAKE SMITH: Yes. Thank you very much for having me. So I started, like most chemists, poking things around in the lab and learning really fundamentally about how atoms interact with one another and how this affects what we do or what we see at our microscopic level. And so after I left grad school doing this super-basic research, I wanted to do something more applied, and so I did a couple of postdocs, first, looking at how we can more effectively modify proteins after we’ve synthesized them so they might have a property that we care about and then later doing similar work on small molecules in a more traditional drug-design sense. But after I finished that, I wound up here at Microsoft. We were very interested in one molecule in particular, one family of molecules, which is DNA, and we wanted to know, how do we make DNA at just gigantic scale so that we can take that DNA and we could store digital data in it? And because DNA has this nice property that it kind of lasts forever, …

HUIZINGA: Yeah.

SMITH: … at least on our, you know, human scale, it makes a very, you know, nice archival storage medium. So we worked on this project for a while, and at some point, we determined we can, kind of, watch it blossom and find the next challenge to go work on.

HUIZINGA: Interesting …

SMITH: And the challenge that we, you know, wound up at I’ll describe as the Microsoft Climate Research Initiative, the MCRI. We were a group of applied scientists from, like, natural scientist backgrounds within Microsoft, and we said, how can we make a difference for Microsoft? And the difference that we thought was Microsoft has climate goals.

HUIZINGA: Oh, yeah!

SMITH: Microsoft wants to be carbon negative, it wants to be water positive, and it wants to be zero waste. And in order to make this happen, we need novel materials, which really are a macroscopic view of, once again, atomic behavior. And we said, hey, we understand atomic behavior. We’re interested in this.

HUIZINGA: [LAUGHS] We can help! We’re from the government …

SMITH: Yeah, maybe this is something we could help on. Yeah. And so here we are. We wound up with Aniruddh, and we’ll go into that later, I’m sure.

HUIZINGA: Yeah, yeah. So just quickly back to the DNA thing. Was that another collaboration? I had Karin Strauss on the podcast a while ago, and she talked about that.

SMITH: Oh, absolutely. Yeah, this was with Karin, and we had great collaborators, also at the University of Washington in the Molecular Information Systems Lab, or MISL, who did a lot of work with us on the practicalities of working with DNA once it’s synthesized and how would you do things like retrieve information from a big pool of DNA.

HUIZINGA: Right. Right. They could … people could go back to that podcast because she does unpack that quite a bit. Well, Aniruddh, you describe yourself as a “trained mechanician who hangs out with chemists,” hence your friendship with Jake here, but for your day job, you’re a professor and you have your own lab that conducts interdisciplinary research at the intersection, as you say, of mechanics and material science. So what made you want to move to that neighborhood, and what goes on there?

ANIRUDDH VASHISTH: Yeah. Well, again, thank you so much for having me here. I’m super excited about this. Yeah, just a little bit of background about me. So I started off with my undergrad in civil and mechanics from IIT BHU, did a PhD in mechanics at Penn State, and moved to Texas …

HUIZINGA: Go back … go back to, what’s the first one?

VASHISTH: It’s Indian Institute of Technology, in India, so that’s …

HUIZINGA: IIT …

VASHISTH: … IIT. I did my undergrad there and then straight away came to the US to do my PhD in mechanics at Penn State and then ended up going to Texas, to Texas A&M University, and postdoc-ed in a chemical engineering lab, and that’s how I became, like, super familiar with and fond of chemical engineers and chemists! [LAUGHTER] And we moved to Seattle with my wife and my daughter when I got the job at the University of Washington in 2021. And what we do in our lab is we make and break things now! [LAUGHS] We try to see, like, you know, when we are making and breaking these things, we try to see them from an experimental and a simulation point of view and try to gain some understanding of the mechanics of these different types of materials. Especially, we are very interested in polymers. I always joke with my students and my class: go about one day without touching a polymer, and I’m always surprised by the smiles or the smirks that I get! But in general, like, we have been super, super excited about and interested in sustainable polymers, making sustainable composites. Particularly, we are very interested in vitrimer polymers. So let me just take, like, a step back. I’ll probably wear my professor hat straight away here.

HUIZINGA: Yeah. Let’s do! Let’s go. [LAUGHTER]

VASHISTH: And I’ll tell you, just, like, taking a step back, what are the different types of polymers. So in general, you can think of polymers as thermosets or thermoplastics. So to Jake’s point, let’s just go to the molecular scale there, and you can think of polymers as a bunch of these pasta noodles which can slide over each other, right. Or a bunch of pasta noodles which are packed together. So thermoset, as the name suggests, it’s a set network. The pasta noodles are kind of, like, set in their place. Thermoplastics is when these pasta noodles can slide over each other. So you’ve probably put too much sauce in there! [LAUGHTER] Yeah, so a good analogy there would be a lot of the adhesives that we use are thermosets because they set after a while. Thermoplastic … we use plastics for 3D printing a lot, so those are thermoplastics. So they’re solid. You can heat them up, you can make them flow, print something, and they solidify. Vitrimers are very exciting because, just like thermoplastics, they have this flowability associated with them but more at a molecular scale. Like, if you think of a single pasta noodle, it can unclick and re-click back again. So it’s like, you know, it’s made up of these small LEGO blocks that can unclick and re-click back again …

HUIZINGA: LEGO pasta …

VASHISTH: LEGO pasta …

HUIZINGA: I like that! [LAUGHS]

VASHISTH: Exactly. So this unclicking and re-clicking can make them re-processable, reusable, recyclable. Gives them, like, much longer life because you can heal them. And then vitrimers basically become the vampires of the polymer universe!

HUIZINGA: Meaning they don’t die?

VASHISTH: Well …

HUIZINGA: Or …

VASHISTH: They have like much longer life! [LAUGHTER]

SMITH: They sleep every now and then to regenerate! Yes … [LAUGHS]

HUIZINGA: Aniruddh, sticking with you for a minute, before we get into the collaboration, let’s do a quick level set on what we might call “The Secret Life of Circuit Boards.” For this, I’d like you to channel David Attenborough and narrate this PCB documentary. Where do we find printed circuit boards in their natural habitat? How many species are there? What do they do during the day? How long do they live? And what happens when they die?

VASHISTH: OK, so do I have to speak like David … ?

HUIZINGA: Yes, I’d appreciate it if you’d try. [LAUGHTER] … No. Just be your voice.

VASHISTH: Yeah. Yeah. So PCBs are, if you think about it, they are everywhere. PCBs are in these laptops that we have in front of us. Probably there are PCBs in these mics. Automobiles. Medical devices. So PCBs are, they’re just, like, everywhere. And depending upon, like, what is their end applications, they have a composite part of it, where you have, like, some sort of a stiff inclusion in a polymeric matrix, which is holding this part together and has bunch of electronics on top of it. And depending on the end application, it might come in different flavors: something that can sustain much higher temperatures; something which is flexible. Things of that sort. And they live as long as we use the material for, like, you know, as long as we are using these laptops or as long as we end up using our cars. And unfortunately, there is a lot of e-waste which is created at the end.

HUIZINGA: Right …

VASHISTH: There’s been a lot of effort in recycling and reusing these materials, but I’m confident we can do more.

HUIZINGA: Right.

VASHISTH: I think there’s like close to 50 million metric tons of …

HUIZINGA: Wow!

VASHISTH: … of e-waste which is generated—more than that actually—every year, so …

HUIZINGA: OK.

VASHISTH: … a lot of scope for us to work there.

HUIZINGA: Um, so right now, are they sort of uniform? The printed circuit board? I know we’re going to talk about vitrimer-based ones, but I mean, other than that, are there already multiple materials used for these PCBs? Jake, you can even address that.

SMITH: Yeah. Of course. So there are, like, kind of, graded ranks of circuit board materials …

HUIZINGA: OK.

SMITH: … that as Aniruddh said, you know, might be for specialty applications where you need higher-temperature tolerance than normal or you need lower noise out of your circuit board.

HUIZINGA: Gotcha.

SMITH: But, kind of, the bog-standard circuit board, the green one that you think about if you’ve ever seen a circuit board, this is like anti-flammability coating on a material called FR-4. So FR-4—which is an industrial name for a class of polymers that are flame-retardant, thus FR, and 4 gives you the general class—this is the circuit board material …

HUIZINGA: OK …

SMITH: … that, you know, we really targeted with this effort.

HUIZINGA: Interesting. So, Jake, let’s zoom out for a minute and talk about the big picture and why this is interesting to Microsoft Research. I keep hearing two phrases: sustainable electronics and a circular economy. So talk about how the one feeds into the other and what an ultimate success story would look like here.

SMITH: Yeah, absolutely. So I’ll start with the latter. When we set out to start the Microsoft Climate Research Initiative, we started with this vision of a circular economy that would do things that avoid what we, you know, can avoid using. But there are many cases where you can’t avoid using something that is nonrenewable. And there, what we really want to do is we want to recapture what we can’t avoid. And this project, you know, falls in the latter. There’s a lot of things that fall in the latter case. So, you know, we were looking at this at a very carbon dioxide-centric viewpoint where CO2 is ultimately the thing that we’re thinking about in the circle, although you can draw a circular economy diagram with a lot of things in the circle. But from the CO2 viewpoint, you know, what led us to this project with Aniruddh is we thought, we need to capture CO2, but once you capture CO2, you know, what do you do with it? [LAUGHTER] You can pump some of it back into the ground, but this is, you know, an economically non-productive activity. And so it’s something we have to do. It’s not something we want to do.

HUIZINGA: Right.

SMITH: And so what could we want to do with the CO2 that we’ve captured? And the thought was we do something economically viable with it. We, you know, upcycle the CO2 into something interesting, and what we really want, and what we still really want, is to be able to take that CO2, convert it down into a useful chemical feedstock—and there are great laboratories …

HUIZINGA: Oh, interesting …

SMITH: … doing work on this—and then we could, you know, look at our plastic design problem and say, hey, we have all this FR-4 in the world. How could we replace the FR-4—the, you know, explicit atoms that are in the FR-4—with atoms that have come from CO2 that we pulled out of the air? And so this is, you know, the circular economy portion. We come down to, you know, the specific problem here. Aniruddh talked a lot about e-waste.

HUIZINGA: Yeah.

SMITH: And I have great colleagues who also collaborated with us on this project—Bichlien Nguyen, Kali Frost—who have been doing work with our product teams here at Microsoft on, you know, what can we do to reduce the amount of e-waste that they put out towards Microsoft’s climate goals?

HUIZINGA: Right.

SMITH: And Microsoft, as a producer of consumer electronics and a consumer of, you know, industrial electronics, has a big e-waste problem itself that we need to, you know, actually take research steps in order to ultimately address, and so what we thought was, you know, we have this end-of-life electronic. We can do things like desolder the components. We can recapture those ICs, which have a lot of embedded carbon in them in the silicon that’s actually there. We can take and we can etch out the copper that has been put over this to form the traces, and we can precipitate out that electrochemically to recapture the copper, but at the end of the day, we’re left with this big chunk of plastic, and it’s got some glass inside of it, too, for completeness sake, and the thought was, you know, how do we do this? You can’t recapture this with FR-4. FR-4, to go back to the spaghetti thing, …

HUIZINGA: Right … [LAUGHS]

SMITH: … spaghetti is glued to itself. It doesn’t come apart. It rips apart if you try and take it apart. And so we wanted to say, you know, what could we do and, you know, what could we do with Aniruddh and his lab in order to get at this problem and to get us at a FR-4 replacement that we could actually reach this complete circularity with.

HUIZINGA: Interesting! Well, Jake, that is an absolutely perfect segue into “how I met your mother,” which is, you know, how you all started working together. Who thought of who first, and so on. I’m always interested to hear both sides of the meet-up. So, Aniruddh, why don’t you take the baton from Jake right there and talk about, from your perspective, how you saw this coming together, who approached who, what happened—and then Jake can confirm or deny the story! [LAUGHTER]

VASHISTH: Yeah, yeah. So it actually started off, I have a fantastic colleague and a very good friend in CS department, Professor Vikram Iyer, and he actually introduced me to Bichlien Nguyen from Microsoft, and we got a coffee together and we were talking about vitrimers, like the work that we do in our lab, and I had this one schematic—I forget if it was on my phone or I was carrying around one paper in my pocket—and I showed them. I was like, you know, if we can actually do a bunch of simulations, guide an ML model, we can create, for lack of a better word, like a ChatGPT-type of model where instead of telling like, “This is the chemistry; tell me what the properties are,” we can go from the other side. You can ask the model, “Hey, I want a vitrimer chemistry which is recyclable, re-processable, that I can make airplanes out of or I can make glasses out of. Tell me what that chemistry would look like.” And I think, you know, Bichlien was excited about this idea, and she connected me with Jake, and I think I’ve been enjoying this collaboration for the last couple of years, …

HUIZINGA: Right …

VASHISTH: … working on that.

HUIZINGA: Was there a paper that started the talk, or was it just this napkin drawing? [LAUGHS]

VASHISTH: I think, to give myself a little bit of credit there, I think there was a paper with a nice drawing on it.

HUIZINGA: Right?

VASHISTH: Yeah. There was a white paper. Yeah.

HUIZINGA: That’s good. Well, Jake, what’s your side of this story?

SMITH: Ah, this is awesome! We got the first half that I didn’t know, so …

HUIZINGA: Oh—filling in gaps!

SMITH: This was the Bichlien-mediated half! [LAUGHTER] I was sharing an office with Bichlien, who apparently came up from this meeting, and, you know, I saw the mythical paper! She put this on my desk. And I’ll plug another MCRI project that we were working on there where—or at the time—where we were attempting to do reverse design, or inverse design, of metal organic frameworks, which are these really interesting molecules that have the possibility to actually serve as carbon capture absorbents, …

HUIZINGA: Oh, wow.

SMITH: … but the approach there was to use machine learning to help us, you know, sample this giant space of metal organic frameworks and find ones that had the property that we cared about. I mean, you draw this diagram that’s much like Aniruddh just described, where you’ve got this model that you train and out the other side comes what you want, and so this paper came down on my desk, and I looked at it and I said, “Hey, that’s what we’re doing!” [LAUGHTER] And it, kind of, you know, went from there. We had a chat. We determined, hey, we’re both interested in, you know, this general approach to getting to novel materials.

HUIZINGA: Right.

SMITH: And then, you know, we’ve already talked about the synergy between our interests and Microsoft’s interests and the, you know, great work or the great particular applications that are possible with the type of polymer work that Aniruddh does.

HUIZINGA: Yeah. So the University of Washington and Microsoft meet again. [LAUGHTER] Well, Jake, let’s do another zoom out question because I know there’s more than just the Microsoft Climate Research Initiative. This project is a perfect example of another broader initiative within Microsoft which has the potential to quote “accelerate and enhance current research,” and that’s AI for Science. So talk about the vision behind AI for Science, and then if you have any success stories—maybe including this one—tell us how it’s working out.

SMITH: Yeah, absolutely. We are—and by we, I mean myself and my immediate colleagues—are certainly not the only ones interested in applying AI to scientific discovery at Microsoft. And it turned out, a year or two after we started this collaboration, a bigger organization named AI for Science arose, and we became part of it. And it’s, you know, generally a group of people who—along with our kind of sister organization in research called Health Futures, who work more on the biology side—are interested in how AI can help us do science in (a) a faster way, but (b) maybe a smarter, better-use-of-resources way, or the ultimate goal, or the ultimate dream, is (c) a way that we just can’t think of doing right now. A way that, you know, it just is fundamentally incompatible with the way that research has historically been done in, you know, small groups of grad students directed by a professor who are themselves, you know, the actual engine behind the work that happens. And so the AI for Science vision, you know, it’s got a couple of parts that really map very well onto this project. The first part is we want to be able to simulate bigger systems. We want to be able to run simulations for longer, and we want to be able to do simulations at higher accuracy. When we get into the details of, you know, the particulars of the vitrimer project, you’ll see that one of the fundamental blocks here is the ability to run simulations, and Aniruddh’s excellent grad student Yiwen, you know, spent a ton of time trying to identify the appropriate simulation parameters in order to capture the behavior that we care about here. And so, the first AI for Science vision says we don’t need Yiwen to do that, you know, we’re going to have a drop-in solution or we’re going to have, you know, a set of drop-in solutions that can, you know, take this work away from you and make it much easier for you to go straight to running the simulations that you care about.

HUIZINGA: Yeah. A couple questions. Not on the list here, but you prompted them. No pun intended. Are these specialized models with the kinds of information … I mean, if I go to ChatGPT and ask it to do what you guys are doing, I’m not going to get the same return am I?

SMITH: Absolutely.

HUIZINGA: Am I?

SMITH: Oh, no, no, no, no! [LAUGHTER] I was saying you were absolutely correct. [LAUGHS] You can ask ChatGPT, and it will tell you all sorts of things that are very interesting. It can tell you, probably, a vitrimer. It could give you Aniruddh’s spiel about the spaghetti, I’m sure, if you prompted it in the correct way. But what it can’t tell you is, you know, “Hey, I have this particular vitrimer composition, and I would like to know at what temperature it’s going to melt when I heat it up.”

HUIZINGA: Right. OK, so I have one more question. You talk about the simulations. Those take a lot of compute. Am I right? Am I right?

SMITH: You’re absolutely right.

VASHISTH: Yeah.

HUIZINGA: So is that something that Microsoft brings to the party in terms of … I mean, does the University of Washington have the same access to that compute, or what’s the deal?

VASHISTH: I think especially on the scale, we were super happy and excited that we were collaborating with Microsoft. I think one of these simulations took, like, close to a couple of weeks, and we ended up doing, I would say, like, close to more than 30,000 simulations. So that’s a lot of compute time if you think about it.
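For a rough sense of that scale, here is a back-of-the-envelope illustration using only the figures quoted in the conversation (roughly 30,000 simulations at about two weeks each); these are approximations, not numbers reported by the project.

    # Back-of-the-envelope estimate of serial compute time for the simulation campaign.
    # Both inputs are approximations taken from the conversation above.
    num_simulations = 30_000
    weeks_per_simulation = 2
    serial_weeks = num_simulations * weeks_per_simulation   # 60,000 simulation-weeks
    serial_years = serial_weeks / 52                         # about 1,150 years
    print(f"Run one after another on a single machine: ~{serial_years:,.0f} years")

In other words, run back to back the campaign would take on the order of a thousand years of machine time, which is why large-scale, highly parallel compute matters here.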

HUIZINGA: To put that in perspective, how long would it take a human to do those simulations? [LAUGHS]

SMITH: [LAUGHS] Oh, man, to try and actually, like, go do all this in the lab …

HUIZINGA: Right!

SMITH: First, you got to make these 30,000, like, starting materials. This in itself … let’s say you could buy those. Then to actually run the experiments, how long does it take to do one …

HUIZINGA: And how much money?

VASHISTH: That’s … that’s like you’re talking about like one PhD student there.

HUIZINGA: Right?

VASHISTH: That’s like, you know, it takes like a couple of years just to synthesize something properly and then characterize it, and it’s …

HUIZINGA: Yeah …

VASHISTH: Yeah, no, I think the virtual world does have some pluses to it.

HUIZINGA: So this is a really good argument for AI for Science, meaning the things that it can do, artificial intelligence can do, on a scale of time and cost that's much smaller than what it would take a human to do.

SMITH: Yeah, absolutely. And I’ll plug the other big benefit now, which is, hey, we can run simulations. This is fantastic. But the other thing that I think all of us really hope AI can do is it can help us determine which simulations to run …

HUIZINGA: Ooh …

SMITH: … so we need less compute overall, we need less experiments if we have to go do the experiments, and this is …

HUIZINGA: So it’s the winnowing process.

SMITH: Exactly.

HUIZINGA: OK. That’s actually really interesting.

SMITH: And this is, like, the second, or maybe even the largest, vector for acceleration that we could see.

HUIZINGA: Cool. Well, every show I ask, what could possibly go wrong if you got everything right? And, Aniruddh, I want to call this the “Defense Against the Dark Arts” question for you. You’re using generative AI to propose what you call novel chemistries, which can sound really cool or really scary, depending on how you look at it. But you can’t just take advice from a chatbot and apply it directly to aerospace. You have to kind of go through some processes before. So what role do people, particularly experts in other disciplines, play here, and what other things do you need to be mindful of to ensure the outputs you get from this research are valid?

VASHISTH: Yeah, yeah. That’s a fantastic question. And I’ll actually piggyback on what Jake just said here, about Yiwen Zheng, who’s like a fantastic graduate student that we have in our lab. He figured out how to run these simulations at the first point. It was like six months of … like, really long ordeal. How to make sure that in the virtual world, we are synthesizing these polymers correctly and we are testing them correctly. So that human touch is essential, I feel like, at every step of this research, not just like doing virtual characterization or virtual synthesis of these materials, training the models, but eventually, when you train the models also and the model tells you that, well, these are, like, the 10 best polymers that would work out, there you need people like Jake who are like chemists, you know. They come in [LAUGHTER] and they’re like, hey, you know what? Like, out of these 10 chemistries, this one you can actually synthesize. It’s a one-step reaction or things of that sort. So we have a chemist in our lab also, Dr. Agni Biswal, who’s a postdoc. So we actually show him all these chemistries, apart from Jake and Bichlien. We show the chemistries to all the chemists and say, like, OK, what do you think about this? How do these look like? Are they totally insane, or can we actually make them? [LAUGHTER]

SMITH: Yeah, we still need that, like, human evaluation step at the end, at this point.

HUIZINGA: Yeah …

VASHISTH: Exactly.
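To make the workflow the guests have just described a little more concrete, here is a minimal sketch of a screen-rank-review loop. Everything in it is an assumption made for illustration: the 16-dimensional descriptors, the ridge-regression surrogate, and the property values are stand-ins for the molecular dynamics data and machine learning models used in the actual vitrimer work, and, as Jake notes, the shortlist still ends with human chemists.

    import numpy as np

    rng = np.random.default_rng(0)
    n_descriptors = 16

    # Stand-in for simulation-derived training data: descriptor vectors for known
    # chemistries paired with one simulated property (say, a transition temperature).
    X_train = rng.normal(size=(500, n_descriptors))
    y_train = X_train @ rng.normal(size=n_descriptors) + 150.0

    # A deliberately simple surrogate model: ridge regression via the normal equations.
    lam = 1.0
    w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_descriptors),
                        X_train.T @ y_train)

    # Score a large pool of candidate chemistries and keep the ten closest to the
    # target property; those ten are what the chemists would then sanity-check.
    candidates = rng.normal(size=(10_000, n_descriptors))
    predicted = candidates @ w
    target = 180.0
    shortlist = np.argsort(np.abs(predicted - target))[:10]
    print("Candidate indices for chemist review:", shortlist.tolist())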

HUIZINGA: Ask a chemist! Well, and I would imagine it would be further than just, “This would be the best one,” or something like, “You better not do that one.” Are there ever like crazy responses or replies from the model?

SMITH: [LAUGHS] It’s fascinating. Models are very good—and particularly we’ll talk about models that generate small organic structures—at generating things that look reasonable. They follow all the rules. But there’s this next step beyond that. And you see this when you talk to people who’ve worked in med chem for, you know, 30 years of their life. Well, they’ll look at a structure and they’ll, like, get this gut feeling like, you know, a storm is coming in and their knee hurts, and they really don’t like that molecule. [LAUGHTER] And if you push them a little bit, you know, sometimes they can figure out why. They’ll be like, oh, I worked on, you know, a molecule that looked like that 20 years ago, and it, you know, turned out to have this toxicity, and so I don’t want to touch that again. But oftentimes, people can’t even tell you. They’ve just got this instinct …

HUIZINGA: Really?

SMITH: … that they’ve built up, and trying to, you know, capture that intuition is a really interesting next frontier for this sort of research.

HUIZINGA: Wow. You know, you guys are just making my brain fry because it’s like so many other questions I want to ask, but we’re actually getting there to some of them, and I’m hoping we’ll address those questions with the other things I have. So, Jake, I want to come … Well, first of all, Aniruddh, have you finished your defense against the dark arts? [LAUGHS]

VASHISTH: I think I can point out one more thing very quickly there, and as Jake said, like, we are learning a lot, particularly about these materials, like, the vitrimer materials. These are new chemistries, and we are still learning about, like, the mechanical, thermorheological properties; how to handle these materials. So I think there’s a lot that we don’t know right now. So it’s like a bunch of, like, unknown unknowns that are there. So …

HUIZINGA: Well, and that’s research, right? The unknown unknowns. Jake, I want to come back to the vision of the climate research initiative for a minute. One goal is to develop technologies that reduce the raw tonnage of e-waste, obviously. But if we’re honest, advances in technology have almost encouraged us to throw stuff away. It’s like before it even wears out. And I think we talked earlier about, you know, this will last as long as my car lasts or whatever, but I don’t like my car in five years. I want a different one, right? So I wonder if you’ve given any thought to what things, in addition to the work on reusable and recyclable components, we might do to reverse engineer the larger throwaway culture?

SMITH: This was interesting. I feel like this gets into real questions about social psychology and our own behaviors …

HUIZINGA: Yeah …

SMITH: … with individual things. Why do I have this can of carbonated water here when I could have a glass of carbonated water? But I want to, kind of, completely sidestep that because …

HUIZINGA: Yeah … Well, we know why! Because it’s convenient, and you can take it in your car and not spill.

SMITH: Agreed. Yes. All right. [LAUGHTER] I also have this cup, and it could not spill, as well.

HUIZINGA: True! Recyclable—reusable.

SMITH: Ahhh … no, no … this is like a—it’s an ingrained consumer behavior that I’ve developed that might … I’ll slip into “Jake’s Personal Perspectives” here, which is that it should not be on the individual consumer behavior changes to ultimately drive a shift towards reusable and recyclable things. And so one of the fundamental, like, hypotheses that we had with the, you know, design of the projects we put together with the MCRI was that if we put appropriate economic incentives in place, then we can naturally guide behavior at a much bigger scale than the individual consumer. And maybe we’ll see that trickle down to the consumer. Or maybe this means that the actual actors, the large-scale actors, then have the economic incentive to follow it themselves.

HUIZINGA: Right.

SMITH: And so with the e-waste question in particular, we talked a lot about FR-4 and, you know, it’s the part of the circuit board that you’re left over with at the end that there’s just nothing to do with …

HUIZINGA: Right.

SMITH: … and so you toss into landfill, you burn it, you do something like this. But, you know, with a project like this, where our goal was to take that material and now make it reusable, we can add this actual economic value to the waste there.

HUIZINGA: Yeah. I realized even as I asked that question, that I had the answer embedded in the question because, in part, how we design technologies drives how people use things.

SMITH: Yeah, absolutely.

VASHISTH: Yeah.

HUIZINGA: And usually, the drivers are convenience and economics. So if upstream of consumer … consumption? [LAUGHTER] Upstream of that, the design drives environmental health and so on, that’s actually … that’s up to you guys! So let’s get out of this booth and get back to work! [LAUGHTER] Well, Jake, to that point, talk about the economics. We talk about a circular economy. And I know that recycling is expensive. Can you talk a little bit about how that could be impacted by work that you guys do?

SMITH: Recycling absolutely is expensive relative to landfilling or a similar alternative.

HUIZINGA: Right …

SMITH: One of the things that makes us target e-waste is that there are things of value in e-waste that are, like, innately valuable. When you go recollect that copper or the gold that you’ve put into this, when you recollect the integrated circuits, you know, they had value, and so a lot of the economic drive is already there to get you to the point where you have these circuit boards. And then, you know, the question was, how do we get that next bit of economic value so that you’ve taken steps this far, you have this pile of circuit boards, so you’ve already been incentivized to get to here and it will be easy to make this—even if it’s not a completely economically productive material—versus synthesizing a circuit board from virgin plastic, but it’s offset enough. We’ve taken enough of that penalty for reuse out that it can be justifiable to go do.

HUIZINGA: Right. OK. So talk—again, off script a little bit—but talk a little bit about how vitrimers help take it to the last mile.

VASHISTH: Yeah, I think the inherent property of the polymer to kind of unclick and re-click back again, the heal-ability of the polymer, that’s something that, kind of, drives this reusability and re-processability of the material. I’ll just, like, point out, like, you know, particularly to the PCB case, where we recently published a collaborative paper where we showed that we can actually make PCB boards using vitrimers. We can unassemble everything. We can take out the electronics, and even the composite, the glass fiber and the polymer composite, we can actually separate that, as well, which is, in my mind, like, a pretty big success.

HUIZINGA: Yeah.

VASHISTH: And then we can actually put everything back together and remake a PCB board, and, you know, keep on doing that. So …

HUIZINGA: OK, so you had talked to me before about “Ring Around the Rosie” and the hands and the feet. Can you … ?

SMITH: [LAUGHS] His favorite analogy!

HUIZINGA: Do that one just for our audience because it’s good.

VASHISTH: OK. So I’ll talk a little bit about thermoset/thermoplastic again, and then I’ll just give you a much broader perspective there.

HUIZINGA: Yeah.

VASHISTH: So the FR-4 PCBs that are made, they are usually made with thermosetting polymers. So if you think about thermosetting polymers, just think of kids playing “Ring of Roses,” right? Like their hands are fixed and their feet are fixed. Once the network is formed, there’s no way you can actually destroy that network. The nice thing about vitrimers is that when you provide an external stimulus, like, just think about these kids playing “Ring of Roses” again. Their feet can move and their handshakes can change, but the number of handshakes remain the same. So the polymer is kind of, like, unclicking and re-clicking back again.

HUIZINGA: OK.

VASHISTH: And if you can cleverly use this mechanism, you can actually recycle, reprocess the polymer itself. But what we showed, particularly for the PCB paper, was that you can actually separate all the other constituents that are associated with this composite, yeah.

HUIZINGA: OK. That’s … I love that. Well, sticking with you for a second, Aniruddh, talking about mechanical reality—not just chemical reality, but mechanical reality—even the best composites wear out, from wear and tear. Talk about the goal of this work on novel polymers from an engineering perspective. How do you think about designing for reality in this way?

VASHISTH: Yeah, yeah. That’s a fantastic question. So we were really motivated by what type of mechanical or thermal loadings materials see in day-to-day life. You know, I sit in my car, I drive it, it drives over the road, there is some fatigue loadings, there’s dynamic loading, and that dynamic loading actually leads to some mechanical flaws in the material, which damages it. And the thought was always that, can we restrict that flaw, or can we go a step further? Can we actually reverse that damage in these composites? And that’s where, you know, that unclicking/re-clicking behavior of vitrimer becomes, like, really powerful. So actually, the first work that we did on these type of materials was that we took a vitrimer composite and we applied fatigue loading on it, cyclic loading on it, mechanical loading. And then we saw that when there was enough damage accumulated in the system, we healed the system. And then we did this again. And we were able to do it again and again until I was like, I’ve spent too much money on this test frame! [LAUGHS] But it was really exciting because for a particular loading case that we were looking at, traditional composites were able to sustain that for 10,000 cycles, but for vitrimers, if we did periodic healing in the material, we were able to go up to a million cycles. So I think that’s really powerful.

HUIZINGA: Orders of magnitude.

VASHISTH: Yeah, exactly.

HUIZINGA: Wow. Jake, I want to broaden the conversation right now, beyond just you and Aniruddh, and talk about the larger teams you need to assemble to ensure success of projects like this. Do you have any stories you could share about how you go about building a team? You kind of alluded to it at the beginning. There’s sort of a pickup basketball metaphor there. Hey, he’s doing that. We’re doing this. But you have some intentionality about people you bring in. So what strengths do each institution bring, and how do you build a team?

SMITH: Yeah, absolutely. We’ve tried a bunch of these collaborations, and we’ve definitely got some learnings about which ones work better than others. This has been a super productive one. I think it’s because it has that right mix of skills and the right mix of things that each side are bringing. So what we want from a Microsoft side for a successful collaboration is we want a collaborator who is really a domain expert in, you know, something that we don’t necessarily understand but who can tell us, in great detail, these are the actual design criteria; these are, you know, where I run into trouble with my traditional research; this is the area that, you know, I’d like to do faster, but I don’t necessarily know how. And this was the critical part, I think, you know, from the get-go. They need to, themselves, be an extremely, you know, capable subject matter expert. Otherwise, we’re just kind of chatting. We don’t have anyone that really knows what the problem truly is and you make no progress or you … worse, you spend a whole lot of resources to make “progress”—I’m doing air quotes …

HUIZINGA: Yeah. I love air quotes on a podcast!

SMITH: [LAUGHS]—that is actually just completely tangential to what the field needs or what the actual device needs. So this was, you know, the fundamental ingredient. And then on top of that, we need to find a problem that’s of joint interest where, in particular, …

HUIZINGA: Right …

SMITH: … computation can help. You talked about the amount of computation that we have at our disposal as researchers at Microsoft, which is a tremendous strength. And so we want to be able to leverage that. And so for a collaboration like this, where running a large number of simulations was a fundamental ingredient to doing it, this was, you know, a really good fit, that we could come in and we could enable them to have more data to train the models that we build together.

HUIZINGA: Mm-hm. Well, as researchers, are you each kind of always scanning the horizon for who else is doing things in your field that—or tangential to your field but necessary? How does that work for recruiting, I would say?

VASHISTH: Yeah, that’s a good question. I think … I mean, that’s kind of like the job, right. For the machine learning work we did, we saw a lot of inspiration from biology, where people have been designing biomolecules. The challenges are different for us. Like, we are designing much larger chains. But we saw some inspiration from there. So always, like, looking out for, like, who is doing what is super helpful, and it leads to, like, really nice collaborations, as well. We’ve had, like, really fruitful collaborations with the professor Sid Kumar at TU Delft, and we always get his wisdom on some of these things, as well. But yeah, recruiting students also becomes, like, very interesting and how, like, people who can help us achieve our idea …

HUIZINGA: Yeah. Jake, what’s your take on it from the other seat? I mean, do you look actively at universities around the world—and even in your backyard—to … like U Dub … ? [LAUGHTER]

SMITH: My perspective on, like, how collaborations come to be is they're really serendipitous. You know, we talked about how this one came to be, and it was because we all happen to know Vikram, and Vikram happened to connect Bichlien with Aniruddh, and it kind of rolled from there. But you can have serendipitous, you know, meetings at a conference, where you happen to, you know, sit next to someone at a talk and you both share the same perspective on, you know, how a research problem should be tackled, and something could come out of that. Or in some cases, you go actually shopping for a collaborator.

HUIZINGA: Right. [LAUGHTER]

SMITH: You know, you need to talk to 10 people to find the one that has that same research perspective as you. I’ll second Aniruddh’s, you know, observation that you get a very different perspective if you go find someone who, they may have the same, like, perspective on how research should be tackled, but they have a different perspective on what the ultimate output of that research would be. But, you know, they can often point you in areas where your research could be helpful that you can’t necessarily see because you lack the domain knowledge or you lack that particular angle on it.

HUIZINGA: Which is another interesting thing in my mind is, you know, the role that papers, published papers, play—that’s a lot of p’s in a sentence [LAUGHTER] … alliteration—that you would be reading or hearing about either in a lightning talk or a presentation at a conference. Does that broaden your perspective, as well? And how do you … like, do you call people up? “I read your paper … ”?

SMITH: [LAUGHS] I have cold-emailed people. You know, this works sometimes! Sometimes this is just the introduction that you need. But the interesting thing in my mind is how much the computer science conferences and things like ChemRxiv and arXiv have really replaced, for me, the traditional chemistry literature or the traditional publishing literature where you can have a conversation with this person while they’re still actively doing the work because they put their initial draft up there and it still needs revision, and there’s opportunities even earlier on in the research process than we’ve had in the past.

HUIZINGA: Huh. And to your earlier point, I’m envisioning an Amazon shopping cart for research collaborators. [LAUGHTER] “Oh, he looks good. Into my cart.” Aniruddh, I always like to know where a project is on the spectrum from what I call lab to life, and I know there are different development stages when it comes to technology finding its way into production and then into broader use. So to use another analogy I like, pretend this is a relay race and research is the first leg. Who else has to run, and who brings it across the line?

VASHISTH: Yeah, yeah. So I think the initial work that we have done, I think it’s been super fruitful, and to Jake’s point, like, converging to, like, a nice output. It took a bunch of chemists, mechanical engineers, simulation folks, machine learning scientists to get where we are. And, as Jake mentioned, we’ve actually put some of our publications on arXiv, and it’s getting traction now. So we’ve had some excitement from startups and companies which make polymers asking us, “Oh, can you actually … can we get a slice of this framework that you’re developing for designing vitrimers?” Which is very promising. So we have done very fundamental work, but now, like, what’s called “the valley of death” in research, [LAUGHTER] like taking it from lab to like production scale, …

HUIZINGA: Yeah.

VASHISTH: … it’s usually a very tightly knit collaboration between industry, labs, and sometimes national labs, too. So we’re excited that, actually, a couple of national labs have been interested in the work that we have been doing, so super optimistic about it.

HUIZINGA: So would you say that the vitrimer-based printed circuit board is a proof of concept right now? Or have you made prototypes? Where is that now?

SMITH: Yeah, absolutely. We’ve mentioned our other collaborator, Vikram Iyer, a couple of times. And in collaboration with his lab, we did actually make a prototype circuit board. We showed that it works as you expect. We showed that it can be disassembled. It can be put back together, and it still works as expected …

HUIZINGA: The “break stuff/make stuff back” thing …

VASHISTH: Yeah, exactly.

SMITH: But, you know, I think to the spirit of the question, it’s still individual kind of one-off experiments being run in a lab, and Aniruddh is right. There’s a long way to go from, like, Technology Readiness Level 3, where we’re doing it ourselves on bench scale, up to, you know, the 7, 8, 9, where it’s actually commercially viable and someone has been able to reproduce this at scale.

HUIZINGA: Right. … So that’s when you bring investors in or labs that can make stuff in and scale.

VASHISTH: Yeah. Yeah, I think once you’re, like, close to 7, I think that’s where you’re pretty much ready for the big show.

HUIZINGA: So where are you now? 2? 3?

VASHISTH: I would say, like, 2 or 3 …

SMITH: 2, 3, somewhere in that range.

VASHISTH: Yeah.

HUIZINGA: OK.

SMITH: The scales, kind of, differ depending on which agencies you see put it out.

HUIZINGA: So, Jake, before we close, I want to talk briefly about other applications of recyclable vitrimer-based polymers, in light of their importance to the climate research initiative and AI for Science. So what other industries have polymer components that have nowhere to go after they die but the landfill, and will this research transfer across to those industries?

SMITH: An excellent question. So my personal view on this is that there’s a couple of classes of polymers. There’s these very high-value application uses of polymers where we’re talking about the printed circuit boards; we’re talking about aerospace composite; we’re talking about the panels on your car; we’re talking about things like wind turbines …

HUIZINGA: Oh, yeah.

SMITH: … where there’s a long life cycle. You have this device that’s going to be in use for five years, 50 years, and at the end of that, the polymer itself is still probably pretty good. You could still use it and regenerate it. And so Aniruddh’s lab has done great work showing that you can take things like the side panel of a plane and actually disassemble this thing, heal it, keep it in use longer, and use it at the end of its lifetime. There’s this other class of polymers, which I think are the ones that most people think about—your Coke bottle—and vitrimers seem like a much harder sell there. I think this is more the domain of, you know, biodegradable polymers in the long run to really tackle the issues there. But I’m very excited in this, you know, high-value polymer, this long-lifetime polymer, this, like, permanent install polymer, however you want to think about it, for work like this to have an impact.

HUIZINGA: Yeah. From your lab’s perspective, Aniruddh, where do you see other applications with great promise?

VASHISTH: Yeah. So as Jake said, places where we need high-performance polymers is where we can go. So PCBs is one, aerospace and automotive industry is one, and maybe medical industry is, …

HUIZINGA: Oh, interesting…

VASHISTH: … yeah, is another one where we can actually … if you can make prosthetics out of vitrimers … prosthetics actually lose a little bit of their stiffness, you know, as you use them, and that’s because of localized damage. It’s the fatigue cycle, right. So what if you can actually heal your prosthetics and reuse them? So, yeah, I feel like, you know, there’s so many different applications, so many different routes that we can go down.

HUIZINGA: Yeah. Well, I like to end our Collaborators shows with a little vision casting, and I feel like this whole podcast is that. I should also say, you know, back in the ’50s, there was the big push to make plastics! Your word is vitrimers! So let’s do a little vision casting for vitrimer-based polymers. Assuming your research is wildly successful and becomes a truly game-changing technology, what does the future look like—I mean, specified future, not general future—and how has your work disrupted this field and made the world a better place? I’ll let you each have the last word. Who’d like to go first?

VASHISTH: Sure, I can go first. I’ll try to make sure that I break it up into computation and experiments …

HUIZINGA: Good.

VASHISTH: … so that once I go back, like, my lab does not, like, pounce on me. [LAUGHS] Yeah, so I think from the computation point of view, we always thought that if somebody gave us, like, a hundred different chemistries, we can actually bottle it down to, like, we can do a bunch of simulations; tell you, like, 10 of these actually work. What we’ve been able to do specifically for vitrimers is that we’re able to look at the problem from the other side, and we are able to say that if you tell me a particular application, this particular chemistry would work best for you. In essence, what we were thinking of is that if aliens abducted all the chemists from the world, can we actually come up with a framework? [LAUGHS] So I think it’ll be difficult to get there because as I said earlier that, you know, you need that human touch. But I think we are happy that that we are getting there. And I think what remains to be seen now is, like, you know, now that we have this type of a framework, like what are the next challenges? Like, we are going from the lab to the large scale; like, what challenges are associated there? And I think similarly for the experimental side of things also, we know a lot—we have developed frameworks—but there’s a lot of work that still needs to be done in understanding and translating these technologies to real-life applications.

HUIZINGA: I like that you’re kind of hedging your bets there, saying, I’m not going to paint a picture of the perfect world because my lab is going to be responsible for delivering it. [LAUGHTER] Jake, assuming you haven’t been abducted by aliens, what’s your take on this?

SMITH: I view, kind of, the goal of this work and the ideal impact of this work as an acceleration of getting us to these polymers being deployed in all these other applications that we’ve talked about, and we can go broader than this.

HUIZINGA: Yeah …

SMITH: I think that there’s a lot of work, both within the MCRI, within Microsoft, and outside of Microsoft in the bigger field, focused on acceleration towards a specific goal. And if all of this work is successful, in 10 years, maybe our materials design process looks completely different, where we’ve gone from this kind of brute-force screening that Aniruddh has talked about to an approach where you start with the properties that you care about; they’re defined by the application that you have in mind. You want to make your vitrimer PCB, it needs to have, you know, a specific temperature where it becomes gummy; it needs to have a specific resistance to burning; it needs to be able to effectively serve as the dielectric for your bigger circuits. And we use this, like, “need space” to define the material that we would like, and we can use machine learning, artificial intelligence, in order to get us to the structure that we need to make in order to actually achieve this design space. And so, this was, you know, our big bet within AI for Science. This is the big bet of this project. And with this project, you know, we take one step towards showing that you can do this in one case. And the future casting would be we can do this in every materials design case that you can think about.

HUIZINGA: Hmmm. You know, I’m thinking of lanes—track analogy again—but, you know, you’ve got mechanical engineering, you’ve got chemistry, and you’ve got artificial intelligence, and each of those sciences is advancing, and they’re using each other to, sort of, help advance in various ways, so this is an exciting, exciting project and collaboration.

[MUSIC]

Jake, Aniruddh, thanks for joining us today on Collaborators. This has been really fun for me. [LAUGHTER] So thanks for coming in and sharing your stories today.

VASHISTH: Thank you so much.

SMITH: Yeah. Of course. Thank you.

[MUSIC FADES]

The post Collaborators: Sustainable electronics with Jake Smith and Aniruddh Vashisth appeared first on Microsoft Research.

Collaborators: Teachable AI with Cecily Morrison and Karolina Pakėnaitė http://approjects.co.za/?big=en-us/research/podcast/collaborators-teachable-ai-with-cecily-morrison-and-karolina-pakenaite/ Tue, 05 Dec 2023 16:40:57 +0000 http://approjects.co.za/?big=en-us/research/?p=987054 Cecily Morrison and Karolina Pakėnaitė are collaborators on a research prototype designed to help members of the blind community find their personal items. Learn how the work is advancing an approach to empower people to shape their own AI experiences.

The post Collaborators: Teachable AI with Cecily Morrison and Karolina Pakėnaitė appeared first on Microsoft Research.


Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with.

In this episode, Gretchen Huizinga speaks with Cecily Morrison, MBE, a Senior Principal Research Manager at Microsoft Research, and Karolina Pakėnaitė, who also goes by Caroline, a PhD student and member of the citizen design team working with Morrison on the research project Find My Things. An AI phone application designed to help people who are blind or have low vision locate their personal items, Find My Things is an example of a broader research approach known as Teachable AI. Morrison and Pakėnaitė explore the Teachable AI goal of empowering people to make an AI experience work for them. They also discuss how “designing for one” when it comes to inclusive design leads to innovative solutions and what they learned about optimizing these types of systems for real-world use (spoiler: it’s not necessarily more or higher-quality data).

Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

CECILY MORRISON: One of the things about Teachable AI is that it’s not about the AI system. It’s about the relationship between the user and the AI system. And the key to that relationship is the mental model of the user. They need to make good judgments about how to give good teaching examples if we want that whole cycle between user and AI system to go well.

[TEASER ENDS]

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC FADES]

Today I’m talking to Dr. Cecily Morrison, MBE, a Senior Principal Research Manager at Microsoft Research, and Karolina Pakėnaitė, a PhD student and a participant on the citizen design team for the Teachable AI research project Find My Things. Cecily and Karolina are part of a growing movement to bring accessible technologies to people with different abilities by closely collaborating with those communities during research and development. Cecily, Karolina, welcome to Collaborators!

CECILY MORRISON: Thank you, Gretchen.

KAROLINA PAKĖNAITĖ: Yeah, thank you.

HUIZINGA: Before we hear more about Find My Things, let’s get to know the both of you. And, Cecily, I’ll start with you. Give us a brief overview of your background, including your training and expertise, and what you’re up to in general right now. We’ll get specific shortly, but I just want to have sort of the umbrella of your raison d’être, or your reason for research being, as it were.

MORRISON: Sure, I’m a researcher in human-computer interaction with a very specific focus on AI and inclusion. Now this for me brings together an undergraduate degree in anthropology—understanding people—a PhD in computer science—understanding computers and technology—as well as a life role as a parent of a disabled child. And I’m currently leading a team that’s really trying to push the boundaries of what’s possible in human-AI interaction and motivated by creating technologies that lead us to a more inclusive world.

HUIZINGA: As a quick follow-up, Cecily, for our non-UK listeners, tell us what MBE stands for and why you were awarded this honor.

MORRISON: Yes, MBE. I also had to look it up when I first received the, uh, the award. [LAUGHTER] It stands for Member of the British Empire, and it’s part of the UK honor system. My MBE was awarded in 2020 for services to inclusive design. Now much of my career at Microsoft Research has been dedicated to innovating inclusive technology and then ensuring that it gets into the hands for those whom we made it for.

HUIZINGA: Right. Was there a big ceremony?

MORRISON: Things were a little bit different during the, the COVID times, but I did have the honor of going to Buckingham Palace to receive the award. And it was a wonderful time bringing my mother and my manager, uh, the important women around me, who’ve made it possible for me to do this work.

HUIZINGA: That’s wonderful. Well, Karolina, let’s talk to you for a minute here. You’re one of the most unique guests we’ve ever had on this podcast. Tell us a bit about yourself. Obviously, we’d like to know where you’re studying and what you’re studying, but this would be a great opportunity to share a little bit about your life story, including the rare condition that brought you to this collaboration.

PAKĖNAITĖ: Thank you so much again for having me. What an amazing opportunity to be here on the podcast. So I’m a PhD student at the University of Bath looking into making visual photographs accessible through text. Maybe you can tell from my speech that I am deaf-blind. So I got diagnosed with Usher syndrome type 2A at the age of 19, which means that I was born hard of hearing but then started to lose sight just around my early 20s. It has been a journey accepting this condition, but it’s also brought me some opportunities like becoming part of this collaboration for Microsoft Research project.

HUIZINGA: Karolina, a quick follow-up for you. Because of the nature of your condition, you’ve encountered some unique challenges, um, one of which made the news a couple of years ago. Can you talk a little bit about how perceptions about people with varying degrees of disability can cause skepticism, both from others and in fact, as you’ve pointed out, yourself? What can we learn about this here?

PAKĖNAITĖ: Yeah, so I have experienced many misunderstandings, and I know I’m not alone. So I have tunnel vision, a progressive condition at the stage where my specialists have registered me as blind instead of partially sighted. My central sight is still excellent, so that means I can still make eye contact, read books, do photography. Some people even tell me that I don’t look blind, but what does that even mean? [LAUGHTER] So since my early 20s, I became very, very clumsy. I stepped over children, walked into elderly, stepped on cat tails, experienced too many near-miss car accidents. So my brain no longer processes the world in the same way as before. But, yeah, for the longest time in my sight-loss journey, I felt like I had imposter syndrome, being completely skeptical about my own diagnosis despite the clumsy experiences, extensive eye tests, and genetic confirmation. I think the major reason is because of a lack of representation of the blind community in the media. Blindness is not black and white. Statistically, most of us have some remaining vision. Disability is not about having a certain look. This also applies to people with some form of visual impairment. I love it, how I can … how there’s so many more new Instagrammers and YouTubers who are just like me, but I still think there is a long way to go before having disability representation becoming a norm for greater understanding and inclusivity.

HUIZINGA: You know, I have to say, this is a great reminder that there is a kind of a spectrum of ability, and that we should be gracious to people as opposed to critical of them. So, um, thank you so much for that understanding that you bring to this, Karolina. Before we get into specifics of this collaboration—and that’s what we’re here for on this podcast—I think the idea of Teachable AI warrants some explication. So, Cecily, what is Teachable AI, and why is it an important line of research, including its applications in things like Find My Things?

MORRISON: Gretchen, that’s a great question. Teachable AI enables users to provide examples or higher-level constraints to an AI model in order to personalize that AI system to meet their own needs. Now most people are familiar with personalization. Our favorite shopping site or entertainment service offers us personalized suggestions. But we don’t always have a way to shape those suggestions. So you can imagine it’s pretty annoying, for example, if you keep being offered nappies by your favorite shopping service because you’ve been buying them for a friend, but actually, you don’t have or even plan to have a baby. So now Teachable AI gives us—the user—agency in personalizing that AI system to make a choice about what are the things you want to be reflected in yourself, your identity, when you work or interact with that AI system? Now this is really important for AI systems that enable inclusion. So if we consider disability to be a mismatch between a person’s capabilities and their environment, then AI has a really significant role to play in reducing that mismatch. However, as we were working on this, we soon discovered that the number of potential mismatches between a person and their environment is incredibly large. I mean, it’s like the number of stars, right.

HUIZINGA: Right, right.

MORRISON: Because disability is a really heterogeneous group. But then we say, oh, well, let’s just consider people who are blind. Well, as Karolina has just shown us, um, even people who are blind are very, very diverse. So there are people with different amounts of vision or different types of vision. People who have different … experience the world with vision or without. People can lose their vision later in life. They can be born blind. People have different personalities. Some people are happy to go with whatever. Some people not so much.

HUIZINGA: Right.

MORRISON: People are from different cultures. Maybe they, they are used to being in an interdependent context. Other people might have intersecting disabilities like deaf-blindness and have, again, its own set of needs. So as we got into building AI for accessibility and AI for inclusion more generally, we realized that we needed to figure out how can we make AI systems work for individuals, not quote-unquote “people with disabilities”? So we focused on Teachable AI so that each user could shape the AI system to work for their own needs as an individual in a way that they choose, not somebody else. So Find My Things is a simple but working example of a Teachable AI system. And in this example, people can personalize an object finder or object detector for the personal items that matter to them. And they can do this by taking four videos of that personal item that’s important to them and then training, on their phone, a little model that will then recognize those items and guide them to those items. So you might say, well, recognizing objects with a phone, we can do that now for a couple of years. And that’s very true. But much of what’s been recognizable wasn’t necessarily very helpful for people who are blind and low vision. Now it’s great if you can recognize doors, chairs, but carnivores and sombrero hats? [LAUGHTER] You know, perhaps this is less handy on a day-to-day basis. But your own keys, your friend’s front door, your guide cane, maybe even the TV remote that somebody’s always putting somewhere else. I mean these are the things that people want to keep track of. And each person has their own set of things that they want. So the Find My Things research prototype allows people to choose what they want to train or to teach to their phone and then be able to teach it and to find those things.

HUIZINGA: OK, so just to clarify, I have my phone. I’ve trained it to find certain objects that I want to find. What’s the mechanism that I use to say, what, you know … do you just say, “Find my keys,” and your phone leads you there through beeps or, you know, Marco Polo? Closer? Warmer?

MORRISON: Sure, how, how does it work?

HUIZINGA: Yeah!

MORRISON: Well, that’s a great question. So you then have a list of things that you can find. So for most people, there’s five or 10 things that are pretty important to them. And then you would find that … then you would scan your phone around the room. And you need to be within sort of 4 to 6 meters of something that you want to find. So if, if it’s in your back studio in the garden, it’s not going to find it. It’s not telepathic in that regard. It’s a computer vision system using vision. If it’s underneath your sofa, you probably won’t find it either. But we found that with all things human-AI interaction, we, we rely on the interaction between the person and the AI to make things work. So most people know where things might be. So if you’re looking for a TV remote, it’s probably not in the bathtub, right? It’s probably going to be somewhere in the living room, but, you know, your, your daughter or your brother or your housemate might have dropped it on the floor; they might have accidentally taken it into the kitchen. But you probably have some good ideas of where that thing might be. So this is then going to help you find it a little bit faster so you don’t need to get on your hands and knees and feel around to where it is.
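
A rough sketch of the kind of guidance loop described here, where the phone scans the scene and nudges the user toward the target item. The detector interface, confidence threshold, and spoken messages are assumptions for illustration, not the shipping app.

```python
# Illustrative guidance loop with assumed interfaces; not the shipping app.
def guide_to_item(detector, camera_frames, announce, target="keys"):
    """Scan live frames and nudge the user toward the target item."""
    for frame in camera_frames:               # stream of camera frames
        detections = detector(frame)          # assumed: {label: (x, y, w, h, score)}
        hit = detections.get(target)
        if hit is None or hit[-1] < 0.5:      # nothing confident in view yet
            continue                          # keep scanning quietly
        x, y, w, h, _ = hit
        center = x + w / 2
        if center < frame.width / 3:
            announce(f"{target} to your left")
        elif center > 2 * frame.width / 3:
            announce(f"{target} to your right")
        else:
            announce(f"{target} straight ahead")   # a vibration cue could go here
```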

HUIZINGA: Gotcha. The only downside of this is “find my phone,” which would help me find my things! [LAUGHTER] Anyway, that’s all …

MORRISON: Well, well, I think Apple has solved that one.

HUIZINGA: They do! They have, they have an app. Find My phone. I don’t know how that works. Well, listen, let’s talk about the collaboration a bit and, and talk about the meetup, as I say, on how you started working together. I like to call this bit “how I met your mother” because I’m always interested to hear each side of the collaboration story. So, Karolina, why don’t you take the lead here and then Cecily can fill in the blanks from her side on how you got together.

PAKĖNAITĖ: Um, yeah, so I found this opportunity to join this collaboration for a Microsoft Research project as a citizen designer through an email newsletter from a charity, VICTA. From the newsletter, it looked like it was organized in a way where you were way more than just a participant for another research project. It looked like an amazing opportunity to actually get some experience and skills. So gaining just as much as giving. So, yeah, I thought that I shouldn’t miss out.

HUIZINGA: So you responded to the email, “Yeah, I’m in.”

PAKĖNAITĖ: Yeah.

HUIZINGA: Cecily, what, what was going on from your side? How did you put this out there with this charity and bring this thing together?

MORRISON: So VICTA is a fantastic charity in the UK that works with, uh, blind and low vision young people up to the age of 30. And they’re constantly trying to bring educational and meaningful experiences to the people that they serve. And we thought this would be a great moment of collaboration where we could bring an educational experience about learning how to do design and they could help us reach out to the people who might want to learn about design and might want to be part of this collaboration.

HUIZINGA: So Karolina was one of many? How many other citizen designers on this project did you end up with?

MORRISON: Oh, that’s a great question. We had a lot of interest, I do have to say, and from there, we selected eight citizen designers from around the UK who were willing to make the journey to Cambridge and work with us over a period of almost six months. People came up to us about monthly, although we did some virtual ones, as well.

HUIZINGA: Well, Cecily, let’s talk about this idea of citizen designers. I, I like that term very much. Inclusive design isn’t new in computer-human interaction circles—or human-computer interaction circles—and you already operate on the principle of “nothing about us without us,” so tell us how the concept of citizen designer is different and why you think citizen designers take user input to another level.

MORRISON: Sure, I think citizen designer is a really interesting concept and one that we, we need more of. But let me first start with inclusive design and how that brings us to think about citizen designers. So inclusive design has been a really productive innovation tool because it brings us unusual constraints to the design problem. Within the Microsoft Inclusive Design toolkit, we refer to this as “designing for one.” And once you’ve got this very novel design that emerges, we then optimize it to work for everyone, or we extend it to many. So this approach really jogs the mind to radical solutions. So let me give you just one example. In years past, we developed a physical coding language to support blind and sighted children to learn to code together. So we thought, ah, OK, sighted children have blocks on a screen, so we’re going to make blocks on a table. Well, our young design team lined up the blocks on the table, put their hands in their lap, and I looked at them and I thought, we failed! [LAUGHTER] So we started again, and we said, OK, show us. And we worked with them to show us what excites the hands. You know, here are kids who live through their hands. You know, what are the shapes? What are the interactions? What are the kinds of things they want to do with their hands? And through this, we developed a completely different base idea and design, and we found that it didn’t just excite the hands of children who are blind or low vision, but it excited the hands of all children. They had brought us their expertise in thinking about the world in a different way. And so now we have this product Code Jumper, which kids just can’t put down.

HUIZINGA: Right.

MORRISON: So that’s great. So we, we know that inclusive design is going to generate great ideas. We also know that diverse teams generate the best ideas because diverse life experience can prompt us to think out of the box. But how do we get diverse teams when it can be hard for people with disabilities to enter the field of design and technology? So design often assumes good visual skills; it assumes the ability to draw. And that can knock out a lot of people who might be great at designing technology experiences without those skills. So with our citizen design team, we wanted to open up the opportunity to young people who are blind and low vision to really set the stage for them to think about what would a career in technology design be like? Could I be part of this? Can I be that generation who’s going to design the next cohort of accessible, inclusive technologies? So we did this through teaching key design skills like the design process itself, prototyping, as well as having, uh, this team act as full members of our own R&D team, so in an apprenticeship style. So our citizen designers weren’t just giving feedback as, as participants might, but they were creating prototypes, running A/B tests, and it was our hope, and I think we succeeded, in making it a give-give situation. We were giving them a set of skills, and they were giving us their design knowledge that was really valuable to our innovation process.

HUIZINGA: That is so awesome. I’m, you know, just thinking of, of the sense of belonging that you might get instead of being, as Karolina kind of referred to, it’s not just another user-research study where you’ll go and be part of a project that someone else is doing. You’re actually integrally connected to the project. And on that note, Karolina, talk a little bit about what it’s like to be a citizen designer. What were some of your aha moments on the project, maybe the items that you wanted to be able to find and what surprises you encountered in the process of developing a technique to teach a personal item?

PAKĖNAITĖ: Yeah, so it was, uh, incredibly fascinating to play the role of a citizen designer, testing a Teachable AI for use and providing further comments. It took me a bit of time to really understand how this tool is different from existing ones, but then I realized it’s literally in the name, a Teachable AI. [LAUGHTER] So it’s a tool designed for you to teach it about your very own personal items. Yeah, your items may, may not look like a typical standard item; maybe you personalized them with engravings or stickers, or maybe it’s a unique gadget or maybe, say, a medical device. So it’s not about teaching every single item that you own, but rather a tool, a tool that lets us identify what matters most to you. So, yeah, I have about five to 10 small personal items that I always carry with me, and most of them are like very, very, very important to me. Like losing a bus pass means I can’t get anywhere. Losing a key means I can’t get home. Because these items are small and I use them daily, that means they are also, uh, the ones that get lost most often. So now I have a tool that is able to locate my personal items if they happen to be lost.

HUIZINGA: Right. And as you said earlier, you do have some sight. It’s, it’s tunnel vision at this point, so the peripheral part, um, is more challenging for you. But having this tool helps you to focus in a broader spectrum of, of visual sight. Cecily, this would be a great time to get a bit more specific about your Teachable AI discovery process. Tell us some research stories. How did you go about optimizing this AI system, and what things did you learn from both your successes and your failures?

MORRISON: Ah, yes, lots of research stories with this system, I’m afraid, but I think the very first thing we did was, OK, a user wants to teach this system, so we need to tell the user what makes a good teaching example. Well, we don’t know. Actually, we assumed we did know because in machine learning, the idea is more data, better quote-unquote “quality data,” and the system will work better. So the first thing that really surprised us when we actually ran some experimental analysis was that more data was not better and higher-quality data, or data that has less blur or is perfectly framed, was also not better. So what we realized is that it wasn’t our aim to kind of squeeze as much data as we could from the users but really to get the data that was the right kind of data. So we did need the object in the image. It’s, it’s really hard to train a system to recognize an object that’s not there at all. But what we needed was data that looked exactly like what the user was going to use when they were finding the objects. So if the user moves the camera really fast and the image becomes blurry, then we need those teaching examples to have blur, too.
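
The lesson here, that teaching examples should look like the frames captured while finding, can be approximated with standard data augmentation. Below is a minimal sketch using common torchvision transforms, assuming blur, imperfect framing, and varied lighting are the conditions to match; it is not the project’s actual pipeline.

```python
# Sketch: make teaching frames resemble real "finding" conditions (motion
# blur, imperfect framing, varied lighting) instead of maximizing quality.
# Uses standard torchvision transforms; not the project's actual pipeline.
import torchvision.transforms as T

finding_like = T.Compose([
    T.RandomResizedCrop(224, scale=(0.5, 1.0)),        # object not perfectly framed
    T.GaussianBlur(kernel_size=9, sigma=(0.1, 4.0)),   # camera-motion-style blur
    T.ColorJitter(brightness=0.4, contrast=0.3),       # varied lighting
    T.ToTensor(),
])
# Applied to each teaching frame before training, so the model learns from
# examples that look like the frames it will see when the user is searching.
```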

HUIZINGA: Right.

MORRISON: So it was in understanding this relationship between the teaching examples and the user that really helped us craft a process that was going to help the user get the best result from the system. One of the things about Teachable AI is that it’s not about the AI system. It’s about the relationship between the user and the AI system. And the key to that relationship is the mental model of the user. They need to make good judgments about how to give good teaching examples if we want that whole cycle between user and AI system to go well. So I remember watching Karolina taking her teaching frames, and she was moving very far away. And I was thinking, hmm, I don’t think that data is going to work very well because there’s just not going to be enough pixels of the object to make a good representation for the system. So I asked Karolina about her strategy, and she said, well, if I want it to work from far away, then I should take teaching examples from far away. And I thought, ah, that’s a very logical mental model.

HUIZINGA: Right.

MORRISON: But unfortunately, we’ve broken the user’s mental model because that’s not actually how the system works because we were cropping frames and taking pixels out and doing all kinds of fancy image manipulation to, actually, to improve the performance under the hood. So I think this was an experience where we thought, ah, we want the user to develop a good mental model, but to do that, we need to actually structure this teaching process so they don’t need to think so hard and we’re guiding them into the, the kinds of things that make the system work well as opposed to not, and then they don’t need to guess. So the other thing that we found was that teaching should be fast and easy. Otherwise, it’s just too much work. No matter how personalized something is, if you have to work too hard, it’s a no-go. So we thought, ah, we want this to be really fast. We want it to take as few frames as possible. And we want the users to be really confident that they’ve got the object in the frame because that’s the one thing we really need. So we’re going to tell them all the time if the object’s in the frame: it’s in frame; it’s in frame; it’s in frame; it’s in frame; it’s in frame; it’s in frame. Well, there’s … citizen designers [LAUGHTER], including Karolina, came back to us and said, you know, this is really stressful. You know, I’m constantly worrying, “Is it in frame? Is it in frame? Is it in frame?” And actually, the cognitive load of that, even though we were trying to make the process really, really easy, um, was, was really overwhelming. And one of them said to us, well, why don’t I just assume that I’m doing a good job unless you tell me otherwise? [LAUGHTER] And that really helped shift our mindset to say, well, OK, we can help the user by giving them a gentle nudge back on track, but we don’t need to grab all their cognitive attention to make the perfect video!
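
The “assume I’m doing a good job unless you tell me otherwise” idea amounts to debounced feedback: stay quiet while things look fine and only speak up after the object has been out of frame for a while. A small sketch, with an assumed in-frame signal and a made-up patience threshold:

```python
# Sketch of the "gentle nudge" pattern the citizen designers asked for.
# The threshold and message are illustrative, not the real app's values.
def framing_feedback(in_frame_stream, announce, patience=15):
    missed = 0
    for in_frame in in_frame_stream:   # True/False per captured frame
        if in_frame:
            missed = 0                 # silently assume the user is doing fine
        else:
            missed += 1
            if missed == patience:     # one nudge, only after sustained loss
                announce("Move the camera back toward the object")
```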

HUIZINGA: [LAUGHS] That’s, that’s so hilarious. Well, Cecily, I want to stay with you for a minute and discuss the broader benefits of what you call “designing outside the mean.” And despite the challenges of developing technologies, we’ve seen specialized research deliver the so-called curb-cut effect over and over. Now you’ve already alluded to this a bit earlier. But clearly people with blindness and low vision aren’t the only ones who can’t find their things. So might this research help other people? Could it, could it be something I could incorporate into my phone?

MORRISON: That’s a great question. And I think an important question when we do any research is how do we broaden this out to meet the, the widest need possible? So I’m going to think about rather than Find My Things specifically, I’m going to think about Teachable AI. And Teachable AI should benefit everybody who needs something specific to themselves. And who of us doesn’t think that we need things to be specific to ourselves in this day and age?

HUIZINGA: Right … [LAUGHS]

MORRISON: But it’s going to be particularly useful for people on the margins of technology design for many reasons. So it doesn’t matter—it could be where your home is different or the way you go about your daily lives or perhaps the intersection of your identities. By having Teachable AI, we make systems that are going to work for individuals. Regardless of the labels that you might have or the life experience you might have, we want an AI system that works for you. And this is an approach that’s moving us in that direction.

HUIZINGA: You know, I love … I, I remembered what you said earlier, and it was for individuals, not people with disabilities. And I just love that framing anyway because we’re all individuals, and everyone has some kind of a disability, whether you call it that or not. So I just love this work so much. Karolina, back to you for a minute. You have said you’re a very tactile person. What role does haptics, which is the touch/feel part of computer science, play for you in this research, and how do physical cues work for you in this technology?

PAKĖNAITĖ: Yeah, so because I’m deaf-blind, I think my brain naturally craves information through senses which I have full access to. For me, it’s touch. So I find it very stimulating when the tools are tactile, whether that’s vibrations or textures. Tactile feedback not only enhances the experience, but I think it’s also a good accessibility cue. For example, one big instance that happened as a citizen designer was when I was pointing my camera at an object and, being hard of hearing, I couldn’t hear what it was saying, so I had to bring it close to my, my ear, and that meant that the object was lost in the camera view. [LAUGHS]

HUIZINGA: Right … [LAUGHS]

PAKĖNAITĖ: So … yeah, yeah, I think having tactile cues could be very beneficial for people like me who are deaf-blind but also others. Like, for example, you don’t always want your phone to be on sound all the time. Maybe in a quiet train, in a quiet tube, you don’t want your phone to start talking; you might be feeling self-conscious. So, yeah, I think …

HUIZINGA: Right …

PAKĖNAITĖ: … always adding those tactile cues will benefit me and everyone else.

HUIZINGA: Yeah, so to clarify, is haptics or touch involved in any of this particular Teachable AI technology, Cecily? I know that Karolina has that as a, you know, a “want to have” kind of thing. Where does it stand here?

MORRISON: Yeah, no, I, I think Karolina’s participation, um, was actually fairly critical in us adding, um, vibration cues to the experience.

HUIZINGA: Yeah, so it does use the, the haptic …

MORRISON: Yeah, we use auditory, visual, and, and vibration as a means of interaction. And I think in general, we should be designing all of our experiences with technology to be multisensory because, as Karolina pointed out, in certain circumstances, you don’t really want your computer talking at you. In other circumstances, you need something else. And in our different individual needs, we might need something else. So this allows people to be as flexible as possible for their context and for their own needs to make an experience work for them.

HUIZINGA: Right. Yeah, and I feel like this is already kind of part of our lives when our phones buzz or, or, you know, vibrate or when you wear the watch that gives you a little tap on your wrist that you’ve got a notification or you need to turn left or [LAUGHTER] whatever you’re using it for. Cecily, I always like to know where a project is on the spectrum from lab to life, as we say on this show. What’s the status of Teachable AI in general and Find My Things in particular, and how close is it to being able to be used in real life by a broader audience than your citizen designers and your team?

MORRISON: So it’s really important for us that the technologies we research become available to the communities to whom they are valuable. And in the past, we’ve had a whole set of partners, including Seeing AI, American Printing House for the Blind, to help us take ideas, research prototypes, and make them into products that people can have. Now Teachable AI is a grand vision. I think we are … showed with this work in Find My Things that the machine learning is there. We can do this, and it’s coming. And as we move into this new era of machine learning with these very large models, we’re going to need it there, too, because the larger the model, the more personalized we’re probably going to need the experience. In terms of Find My Things, we are also on that journey to finding the right opportunity to bring it out to the blind community.

HUIZINGA: So this has been fascinating. I’m … there’s so many more questions I want to ask, but we don’t have a lot of time to ask them all. I’m sure that we’re going to be watching as this unfolds and probably becomes part of all of our lives at some point thanks to the wonderful people doing the research. I like to end the podcast with a little future casting from each of my guests, and, Karolina, I’d like you to go first. I have a very specific question for you. Aside from your studies and your research work, you’ve said you’re on a mission. What’s that mission, and what does Mount Everest have to do with it?

PAKĖNAITĖ: So firstly, I’m hoping to complete my PhD this year. That’s my big priority for, for this year. And then, uh, I will be on a mission, an ambitious one that I feel a little bit nervous to share but also very excited. As an adventurer at heart, my dream is to summit Mount Everest. So before it’s always seemed like a fantasy, but I recently came back from an Everest base camp trek just a few months ago, and I met some mountaineers who were on their way to the top, and I found myself quietly saying, what if? And then, as I was thinking how I’m slowly losing my sight, I realized that if I do want to summit Everest, I would want to go there while I still can see with my remaining vision, so I realized that it would have to be now or never.

HUIZINGA: Right!

PAKĖNAITĖ: So when I came back, I decided … I just made some actions. So I reached out to different organizations and surprisingly a film production team is eager to document this journey and … yeah, it seems like something might be happening. So this mission isn’t just about me potentially becoming the first deaf-blind person to summit Everest but also a commitment to raising awareness and providing representation for the blind and deaf-blind community. I hope to stay in the research field, and I believe this mission has some potential for research. So I think that, for example, I’m looking for accessibility tools for, for me to climb Everest so that I can be the best climber I can be as a deaf-blind person, being independent but part of the team, or maybe make a documentary film a multisensory experience, accessible to a wider community, including deaf-blind. So, yeah, I’m actively looking for collaborators and would love to be contacted by anyone.

HUIZINGA: I love the fact that you are bringing awareness to the fact, first of all, that the deaf-blind community or even the blind community isn’t a one-size-fits-all. So, um, yeah, I hope you get to summit Everest to be able to see the world from the tallest peak in the world before anything progresses that way. Well, Cecily, I’d like to close with you. Go with me on a little forward-thinking, backward-thinking journey. You’re at the end of your career looking back. What have you accomplished as a researcher, and how has your work disrupted the field of accessible technology and made the world a better place?

MORRISON: Where would I like to be? I would say more like where would we like to be. So in collaboration with colleagues, I hope we have brought a sense of individual agency in people’s experience with AI systems, which allows them to shape those systems for their own unique experience, whoever they might be and wherever they might be in the world. And I think this idea is no less important, or perhaps it’s even more important, as we move into a world of large foundation models that underpin many or perhaps all of our experiences as we, as we go forward. And I think particularly large foundation models will bring really significant change to accessibility, and I hope the approach of teachability will be a significantly positive influence in making those experiences just what we need them to be. And I have to say, in my life role, I’m personally really very hopeful for my own blind child’s opportunities in the world of work in 10 years’ time. At the moment, only 25 percent of people who are blind or low vision work. I think technology can play a huge role in getting rid of this mismatch between the environment and a person and allowing many more people with disabilities to enjoy being in the workplace.

HUIZINGA: This is exciting research and really a wonderful collaboration. I’m so grateful, Cecily Morrison and Karolina Pakėnaitė, for coming on the show and talking about it with us today. Thank you so much.

MORRISON: Thank you, Gretchen, and thank you, Karolina.

PAKĖNAITĖ: Thank you.

The post Collaborators: Teachable AI with Cecily Morrison and Karolina Pakėnaitė appeared first on Microsoft Research.

Collaborators: Project InnerEye with Javier Alvarez and Raj Jena http://approjects.co.za/?big=en-us/research/podcast/collaborators-project-innereye-with-javier-alvarez-and-raj-jena/ Thu, 17 Aug 2023 13:18:40 +0000 http://approjects.co.za/?big=en-us/research/?p=962121 Microsoft Health Futures’ Javier Alvarez & oncologist Raj Jena have been collaborating for years on AI-assisted medical imaging. Today, their work is seeing real-world impact, helping doctors accelerate cancer patients’ access to treatment.

The post Collaborators: Project InnerEye with Javier Alvarez and Raj Jena appeared first on Microsoft Research.

black and white photos of Microsoft Health Futures’ Senior Director Javier Alvarez and Dr. Raj Jena, a radiation oncologist at Addenbrooke’s hospital, next to the Microsoft Research Podcast

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a new Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with.

In this episode, Dr. Gretchen Huizinga talks with Microsoft Health Futures Senior Director Javier Alvarez and Dr. Raj Jena, a radiation oncologist at Addenbrooke’s hospital, part of Cambridge University Hospitals in the United Kingdom, about Project InnerEye, a Microsoft Research effort that applies machine learning to medical image analysis. The pair shares how a 10-plus-year collaborative journey—and a combination of research and good software engineering—has resulted in the hospital’s creation of an AI system that is helping to decrease the time cancer patients have to wait to begin treatment. Alvarez and Jena chart the path of their collaboration in AI-assisted medical imaging, from Microsoft Research’s initiation of Project InnerEye and its decision to make the resulting research tools available in open source to Addenbrooke’s subsequent testing and validation of these tools to meet the regulatory requirements for use in a clinical setting. They also discuss supporting clinician productivity—and ultimately patient outcomes—and the important role patients play in incorporating AI into healthcare.

Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

JAVIER ALVAREZ: On the third iteration, we actually moved to deep learning, and we started using GPUs in the cloud.

RAJ JENA: I’m really interested in this part of the story, the “final mile” story, where you actually take something and instead of just topping out at saying, “Hey, we did something. Let’s write a paper” — which we did do! — you actually stick with it and get it all the way through to clinical impact.

ALVAREZ: So we started training models with 30 million parameters. And this was a huge breakthrough. So we started to get really good feedback from Raj and his colleagues at Addenbrooke’s. Uh, yeah, it was a great experience.

JENA: In 2016, some changes came to the team. Javi joined, and we were so excited because he was a software engineer, where before we had been researchers talking to researchers, and it was the ability to know that really good software engineering was going to be able to take something we built as research and make it good enough to plumb in the hospital as Javi described. That was a real exciting moment.

[TEASER ENDS]

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC ENDS]

I’m excited to be talking today with Javier Alvarez and Dr. Raj Jena. Javier is a Senior Director of Biomedical Imaging at Microsoft Health Futures in Cambridge, UK, and part of Project InnerEye, a machine learning technology designed to democratize AI for medical image analysis across the spectrum from research to practice. Raj is a radiation oncologist at Addenbrooke’s hospital, which is part of the Cambridge University Hospitals system, and he was also a collaborator with Project InnerEye during the research phase. Javier and Raj, welcome to the podcast. Now, before we peer into InnerEye, let’s get to know you a little bit better! Javier, I’ll start with you. Give us a brief overview of your training and expertise and then tell us about Microsoft Health Futures and your role there.

JAVIER ALVAREZ: Thank you for having me here. I’m Javier, and I lead the biomedical imaging team at Microsoft Health Futures. We are responsible for research, incubations, and moonshots that drive real-world impact across healthcare and life sciences inside MSR. Uh, yeah, my team is very diverse. We focus on end-to-end solutions. We collaborate with people like Raj, mostly clinicians, and we work on high-quality research, and we hope others can build on top of our work. We try to integrate our AI as a “friendly colleague.” And yeah, I have been in Microsoft for 10 years. My background is in computer science and engineering, and I have been always working on research and innovation projects, uh, focusing on high-risk/high-reward projects. And yeah, my first job at Microsoft was actually working on the first telemetry pipeline for Microsoft on, on the Azure cloud. And we helped several products like Skype, Xbox, Office, and Bing to get better insights into their data. And yeah, after that I joined Antonio Criminisi and Raj in 2016 to work on InnerEye. So yeah, I’m super, super excited to be here to share more about our work.

HUIZINGA: Well, Raj, our audience is a super smart one, but probably not all that well-versed on radiation therapy and neuro-oncology. So tell us about your work as a cancer doctor and a researcher, as well. What’s your background, and how would you define your role — or roles, plural — at Cambridge University Hospitals?

JENA: Thanks for the opportunity to join this discussion and to fly the flag for radiation oncology. It’s a really useful and very modern anti-cancer therapy. Half the people diagnosed with cancer who are cured will end up having radiation therapy as part of their treatment pathway. So I’m passionate about making radiation therapy as safe, as smart and accurate, and with as few side effects as possible. And I do that both in the context of my clinical work but also research work, where I focus mainly on sort of the analysis of images. We use an awful lot of imaging in radiation therapy to really target the radiation therapy. And it’s in that context, really, that I kind of started, you know, with this collaboration over 10 years ago now.

HUIZINGA: Wow. What would you say your “split” is? I mean, as a doctor or a researcher, how do you balance your time?

JENA: Some people would say I have the dream job because I do half and half. Half clinical work and half research work. And I really like that because it means that I can anchor myself in the clinic. I don’t lose track of why we’re trying to do these things. We’re trying to bring benefit to patients, to my patients. But it also means I’ve got the time to then explore on the research side and work with the best and brightest people, including, you know, many of the guys I’ve met at Microsoft Research.

HUIZINGA: Right. You know, as a side note, I just finished a book called The Butchering Art about Joseph Lister, who was both a surgeon, in the Victorian era, and also a researcher and sort of discovering this idea of germ theory and so on with Louis Pasteur, etc. So I’m, I’m ensconced in this idea of research and practice being so tightly woven together. So that’s really awesome. Well, before we get into specifics on the collaboration, Project InnerEye warrants a little bit of explication itself. From what you’ve described, I’d call it a “machine learning meets radiation therapy” love story, and it’s a match made in heaven, or at least the cloud. So what’s the catalyst for InnerEye, and how have the research findings changed the game? Raj, why don’t you talk about it from the medical angle?

JENA: Sure. So, um, as with many things, it started by chance. I went to a talk given by Antonio Criminisi, who Javi mentioned. He was the person that kind of established the InnerEye group at Microsoft Research back in 2011, I think. And he was talking about the way that his team, that did computer vision at the time, were using algorithms that had been developed to detect the human pose so that actually you could play video games without a controller. So this was technology that we all know and love in terms of systems like Kinect and the Xbox. You know, I had one of those! But I went to listen because Antonio wanted to apply it to medical imaging. So in the same way that they were using algorithms to mark out where the body was or where the hands were, could we also mark out tissues and structures within the body? So I said to him, after the end of this, you need to come and see what we do in radiation therapy because this really matters. And to his credit, he did! A couple of weeks later, he came to the department, and he went into a room where dozens of my colleagues were sitting in front of computers, working as fast and accurately as they could, to manually mark up all this normal anatomy on CT scans so we could get our patients onto radiotherapy as quickly as possible. And that was the light bulb moment where he realized, yeah, we need to make this better; we need to make this faster and use, initially, algorithms that came from computer vision, but now, you know, we’ve moved slowly over to things now that we would consider to be sort of machine learning and AI algorithms.

HUIZINGA: Right. Well, I should note that I’ve interviewed Antonio on this show, um, a few years back. And so if listeners want to go back to the archives and find the episode with Antonio Criminisi, that was a great one. So what you just described is sort of an “I can do this, but I can’t do it very fast” scenario. So let’s go into the geek side. Um, Javier, talk about the technical aspects of InnerEye and what it brought to the game. How has the research evolved? Where did it start, from your perspective, and where has it come in the cloud era?

ALVAREZ: Sure, yeah. I would be happy to geek out a bit! Um, so one of the biggest challenges that we faced in radiotherapy was working with CT scans. So CT scans are 3D images that contain around 20 million 3D pixels. We usually call them voxels. And we need to classify each of them as background, different organs, or tumor. And this actually requires a lot of compute and memory. So when we started in 2016, actually we started using very simple models called decision forests, and these can be trained on CPUs. So it was really easy to train them, but one of the problems with decision forests is that you actually have to do the feature extraction manually. So we had to code all that, and it’s a bit of a limitation of this approach. So in the second iteration, we started connecting the hospital to the cloud, and that gave us access to more compute, and we started introducing what we call the InnerEye-Gateway. So this actually helped to automatically route de-identified CT scans to the cloud and run the computation there. And we managed to integrate the model seamlessly into the workflow. So clinicians, when they go to open their CT scan, they already have the segmentation ready to be used on their favorite planning tool. They can review it and refine it. And then on the third iteration, we actually moved to deep learning, and we started using GPUs in the cloud. And this actually helped us create bigger models with more capacity to learn these complex tasks. So we started training models with 30 million parameters. And this was a huge breakthrough. So we started to get really good feedback from Raj and his colleagues at Addenbrooke’s. Uh, yeah, it was a great experience. We had to iterate many times and go to the hospital down the road here in Cambridge. And yeah, it wasn’t a straight path. We had to learn a lot about the radiotherapy workflow, and yeah, we actually learned that it’s actually very hard to deploy AI.
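
The InnerEye-DeepLearning toolkit Alvarez describes is open source on GitHub; the snippet below is not that codebase but a generic PyTorch sketch of voxel-wise CT segmentation, just to illustrate what classifying each of roughly 20 million voxels as background, organ, or tumor looks like in code. The architecture and sizes are illustrative assumptions only.

```python
# Generic PyTorch sketch of voxel-wise CT segmentation (background vs. organs
# vs. tumor), in the spirit of the models described above; not the InnerEye code.
import torch
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    """Maps a CT volume (1 input channel) to per-voxel class logits."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, num_classes, kernel_size=1),   # one logit per class per voxel
        )

    def forward(self, ct):                     # ct: (batch, 1, depth, height, width)
        return self.net(ct)

model = Tiny3DSegNet()
ct_patch = torch.randn(1, 1, 64, 128, 128)     # a cropped patch of a ~20M-voxel scan
logits = model(ct_patch)                        # (1, num_classes, 64, 128, 128)
labels = logits.argmax(dim=1)                   # per-voxel label map for clinician review
```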

HUIZINGA: Yeah. Every time we do a podcast, um, listeners can’t see the other person shaking their head, but Raj has been shaking his head the whole time Javier’s talking. Talk a little bit, Raj, about that marriage of workflow and machine learning. How did it change your world?

JENA: Yeah, I mean, I think I’m really interested in this part of the story, the “final mile” story, where you actually take something and instead of just topping out at saying, “Hey, we did something. Let’s write a paper” — which we did do! — you actually stick with it and get it all the way through to clinical impact. And actually, you know, from my point of view, in 2016, some changes came to the team. Javi joined, and we were so excited because he was a software engineer, where before we had been researchers talking to researchers. And it was the ability to know that really good software engineering was going to be able to take something we built as research and make it good enough to plumb in the hospital as Javi described. That was a real exciting moment. And then the second exciting moment that followed from that was the first time our clinicians saw the output from that third iteration that Javi mentioned, the deep learning model, and you looked at their reactions because they’re thinking, I couldn’t immediately tell this was done by AI.

HUIZINGA: Wow!

JENA: And that was the moment I will never forget. Because they were very kind to us. They evaluated the models at the beginning, when the output wasn’t good enough and they said, hey, this is interesting, but, you know, we’re not really going to use it. It’s not really going to save us time. And they stuck with us, you know, the clinician part of the team stuck with the researcher part of the team, and we kept going. And it was that moment really when everything came together and we thought, yeah, we’re onto something. That was … that was huge.

HUIZINGA: Yeah. It sounds like you’re talking about how you met, but I’m not sure if that’s the whole story. So let’s talk about the meet-up and how the two of you, specifically as collaborators, started working together. I always like to call this “how I met your mother,” but I’m interested to hear each side of the story because there’s always an “aha moment” on what my work could contribute to this and how theirs could contribute to mine – the kind of co-learning scenario? So, Raj, go a little further in describing how Javi and you got together, and then we’ll see if Javier can confirm or deny the story! [LAUGHS]

JENA: Yeah. So as, as I mentioned … so I had already been working with Antonio purely as research for a little while, and Antonio was tremendously excited because he said the team was going to expand, and Javier was one of the first hires that we actually had to join the team. And I remember Antonio coming in and said, “We’ve just interviewed and appointed this guy. You wait till you … you wait till you meet him,” kind of thing. And then Javi joined us. From my point of view, I am a doctor that likes to code, so I like seeing code come to action, and I know the joy that that brings. And there was this amazing time, shortly after Javi first joined us, where I would come and meet the team about once a week and we would say, hey, you know, maybe we should do this and maybe this would be the way to solve this particular problem, or we need to design a tool so we can visualize the imaging and the machine learning parts of our workflow together and work on them together. And I come back next week, and the thing was practically built! And, you know, to me, that was just the amazing thing … is what you realized is that where before we had been struggling along with just researchers trying to do their best — you know, we know the maths but not how to build things — all of a sudden, Javi comes along and just the rate and the pace at which stuff move forwards, it was incredible! So yeah, that’s my side of the story.

HUIZINGA: I love it. Um, in fact, a doctor that likes to code … I’m wondering if Javier is a computer scientist that likes to … I don’t even know how to fill in the blank on your end … radiotherapy? Dabble in operation? Javier, what’s your side of the story?

ALVAREZ: Yeah, I think for me, it was really amazing to work with Raj because he was telling us about all the physics about radiotherapy, and this was super exciting. We went on multiple trips to Addenbrooke’s to see the radiotherapy department. So actually, yeah, for me, I, I … that was my first project on healthcare, so I had to learn a lot. So yeah, it was super useful to work with Raj, learning about the workflow in radiotherapy, how the data moves, as well. It was super useful. I think actually we met here with Antonio during lunch in the lab. Uhh, yeah…

HUIZINGA: During lunch in the lab … ! [LAUGHS] It would be a good time now for me to just clarify that Addenbrooke’s is the old name of the hospital that’s part of … um, Raj, explain that!

JENA: That’s right. So we’re now called Cambridge University Hospitals to reflect the fact that we’re a big biomedical campus and we actually have multiple hospitals: Addenbrooke’s, the Rosie, uh, Papworth Hospital … but affectionately, people who have lived in Cambridge still call it Addenbrooke’s.

HUIZINGA: That’s good. We can call it both. Javier, as we’re recording this podcast, some big things are going on in the UK. Um, it’s the 75th anniversary of the National Health Service, or NHS, and you guys recently got an award from that organization. You’ve written a JAMA paper and even the prime minister posted something on LinkedIn about your work, which is pretty cool! Tell us about some of the accolades associated with InnerEye right now, from where it started — you know, as a twinkle in someone’s eye — to where it is now, what kind of attention it’s getting. What’s the buzz?

ALVAREZ: Yeah, absolutely. Yeah, maybe I’ll talk about the JAMA paper, and I will let Raj talk about the NHS part, because I think this has been mostly his work.

HUIZINGA: Perfect.

ALVAREZ: So yeah, I think when we started getting really good results with our models in Addenbrooke’s and sharing it with the clinicians, we thought that yeah, we wanted to run a bigger study on evaluating the models for prostate and head and neck. Uh, so we ran a study that was published in JAMA, and here we asked the question of, OK, are these models actually acceptable and accurate enough for radiotherapy planning? And can we actually reduce the time in the workflow? So we, we actually got around eight datasets from all around the world, very diverse datasets from radiotherapy planning, and we set aside a couple of them for external validation. So we didn’t use those for training. And then we used the, the rest of them for training the model. And we actually show in the paper that the model generalizes to the external datasets, so it’s quite robust, using different protocols in radiotherapy. And we also did some interobserver variability study to check that the variability of the AI model is similar to the variability that we observed between different clinicians. And, yeah, as part of the paper, we actually open-sourced all the code. This is how Addenbrooke’s actually started to think about deploying the models clinically. Uh, yeah, in fact this work was recognized with this NHS AI Award and now with the NHS anniversary, but, yeah, I’ll let Raj talk about this part in the hospital.
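
A common way to quantify the agreement Alvarez mentions, between AI contours and clinician contours or between clinicians themselves, is the Dice overlap score. A minimal version is sketched below; it is not the paper’s exact analysis code.

```python
# Minimal Dice-overlap metric of the kind used to compare AI contours with
# clinician contours (and clinicians with each other); not the paper's code.
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity between two boolean 3D masks (1.0 = perfect overlap)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# e.g. dice(ai_contour, clinician_contour) per organ, aggregated across the
# held-out external-validation datasets described above.
```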

HUIZINGA: Well, before we go to Raj, I want you to just clarify, because I think this is super interesting. You’ve got the paper and you’ve got practice. And what’s fascinating … I’ll say it again—I just finished the book—but what Joseph Lister did was practice and show how his theories and his work made a difference in his patients’ lives. But what you’re talking about, as you mentioned, Javier, is background, organ, tumor …

ALVAREZ: Yeah.

HUIZINGA: So those three things have to be differentiated in the radiologist’s workflow to say, I’m not going to shoot for the background or the organ; I want to get the tumor. And what you’re saying, Javier, is that this tool was able to do sort of human-level identification?

ALVAREZ: Yeah. Yeah, exactly. Yeah. This is what we, we showed in the JAMA paper. Yeah.

HUIZINGA: Well, Raj, talk about it from the medical angle. Um, what’s the buzz from your end?

JENA: Sure. Yeah. So, so InnerEye is a toolkit, and it was great to see it being used for all sorts of things, but in radiation therapy, we’re using that toolkit specifically to mark out the healthy organs that need to be shielded from radiation. At the moment, we’re not using InnerEye to try and mark out the tumor itself because tumors change a lot from person to person. And so what our design was, was to build something that very much assists rather than replacing the oncologist so that when the oncologist sits down to do this task, about 90 percent of the time is spent marking out all of the healthy organs and 10 percent of the time on the tumor. Actually, we’d love it to be the other way around. And that’s what this tool does. It means that when the oncologist sits down, all of the healthy organs that sit around the tumor that need to be shielded as much as possible from the radiation, that’s already done. So the oncologist goes through … they have to review it, obviously, and check each one is accurate. And in our real-world testing, we found out that about two times out of three, the tool does a good enough job that its output can be used directly without changing anything, which is really good.

HUIZINGA: Wow.

JENA: That means they can then focus on contouring the tumor, and it means the overall time taken to complete this task can be about two and a half times faster. Now, when you think, for the complex tumors that we deal with, that can take up to two hours, that’s a lot of time saving and that’s time given back to the oncologist to spend in front of the patient, basically. So from our point of view, Javi mentioned this, uh, NHS award—it was this AI award that we were given by our national healthcare service—and what that was charged to do was to pick up the baton, once Microsoft had turned InnerEye to an open-source tool, because to turn that open-source tool into a potential medical device that could be used in the cloud for real clinical care, needs a whole other level of sort of checks and evaluations. And that’s what we did, basically, in our team. We worked together with the team in our hospital that builds things as medical devices. Usually, in our hospital, that team builds what we call prosthetics. So things that you would put into a patient or onto a patient when they’ve been injured or something like that. They’d never done it for a software device. But it was great because we had some really strong starting points. First of all, we knew that the actual InnerEye code was fantastic, and secondly, we knew from the JAMA paper that the initial evaluations, in terms of how useful these things were, stood up very well. So that, together with our own clinical evaluations of having the tool plumbed in and seeing it being used, meant that we kind of already knew that this was going to be possible, that we were likely to succeed in this task.

HUIZINGA: Hmmm. Go back a little bit, Raj. You’ve mentioned that tumors change from patient to patient, so it’s not always the same. Do they also change over time?

JENA: Yes. Hopefully, they shrink after radiation therapy and the treatments that, that we give! And so yes, I mean, it’s a big part of what these sorts of tools will continue to be explored in the future is actually tracking how tumors change over time, and that’s a big area. But, you know, we chose to pick on something that was achievable, that wasn’t too risky, and that would already achieve real utility, you know, in, in a hospital. So we already did that with even what it does in terms of marking out the healthy organs. The tumor stuff will come, I’m sure, in time. But we already proved that you could use these tools and build them to be useful.

HUIZINGA: Right. Javier, you mentioned earlier that one of the mandates of the lab is high-risk/high-reward research. This seems to have super high reward, but it’s about now that I ask what could possibly go wrong to everybody that comes on the show. [LAUGHS] Some people hate it. Some have worried that AI will take jobs away from doctors, and I’m sure there’s other worries, as well. What thought have you given to potential consequences, intended and unintended, as you move forward with this work, and what strategies are you employing to mitigate them? Let’s hear from the technologist first, and then we’ll hear from the doctor.

ALVAREZ: Yeah, absolutely. I believe, uh, AI safety should be our top priority in any of our AI products in healthcare. And yeah, it is super important to consider the intended and unintended consequences of deploying these models into the clinical workflow. One of the top-of-mind concerns for the public is that AI might take jobs away from doctors, but actually, we need more doctors. So one out of five jobs in oncology are not being filled in the UK, and the way we are thinking about deploying these AI models is to augment the clinicians. So we want to help them be more productive and deliver better patient outcomes. So the models are working alongside the doctor. And in the case of InnerEye, we are delivering more accurate and faster segmentation. Other concerns could be biases in the models, and to mitigate this, we usually work with clinicians like Raj to build diverse and good datasets that are representative of the population. As always, we make sure the clinician has the ultimate decision and they approve the work of the AI model.

HUIZINGA: Raj, what’s your take on the “what could possibly go wrong” question?

JENA: Yeah, it’s an interesting one. You know, we’ve identified 500 risks, and we’ve gone through each and every one of them and made sure either that the software means that it can’t happen or we mitigate it, basically. Actually, though, the biggest thing that you can do to mitigate risk is talk to patients. And as part of this award, we got to do two really interesting consultations with patients, because then you understand the patient’s perspective. And two things, very briefly, that I took home from that: the first is, is that patients say, yeah, OK, this isn’t what I thought of when I think about AI. I understand that you’ve used incredibly advanced machine learning tools, but actually, this is a very simple task, and the risk is relevant to the task rather than the technology. So that was a useful thing. And the second thing is that they said, it’s all about who’s in control. I understand how this system works to assist an oncologist, and the oncologist retains ultimate control, and that is a huge thing in terms of enhancing trust. So I think as you move from these types of systems to systems where actually you start to push the envelope even further, it’s really important to take patients with you because they keep you grounded, and they will give you really good insights as to what those real risks are.

HUIZINGA: Right.

JENA: The other thing is, is that everyone knows, just like any job, you know, there are the bits that excite you and reward you. And then there are the bits that are kind of dull and tedious. And, you know, Eric Topol has this famous phrase that he said, you know, which is that good AI should give clinicians the gift of time, and that’s what you really want … is, is that you want the AI to allow you to spend more of the time that interests you, excites you, fascinates you, motivates you. And I think, you know, from my point of view, I’m a great believer that that’s what AI will do. It will actually, you know … doctors are very adaptive. They’ll learn to use new tools, whether it’s a robot from a surgeon’s point of view or a new AI algorithm, but they’ll use it in the best way possible to actually kind of still allow them to achieve that patient-centric care.

HUIZINGA: Well, that’s a lovely segue way into the next question I had for you anyway, which is what could possibly go right. And you, Raj, referred to the triple benefit of InnerEye. Go a little deeper into who this research helps and why and how.

JENA: I think it’s a really important illustration of how you can democratize AI. A lot of AI research work stays as research work, and people don’t really understand how these tools … they hear a lot about it, and they read a lot about it, but they don’t understand how it’s actually going to make a difference for them in the clinic. And I think that’s why, you know, stories like InnerEye are particularly meaningful. We’re not talking about building an AI that lets us understand something that the human couldn’t understand before. So it’s not earth shattering in that sense. And yet, even despite that simplicity, so many of my colleagues, they get it. They go, OK, you know, we really understand you’ve actually built something, and you’ve put it here into the clinic. And I think, you know, from my point of view, that’s the real value. There are other value propositions relating to the fact that it was open-source that lends itself to democratization and sharing and also because it runs in the cloud and that basically you don’t need a hospital that’s already got a quarter million-pound computer and only those hospitals with the latest kit can actually use it. So it means that it is just as easy to deploy in a small hospital as it is in a big hospital. So for me, those are the key messages, I think.

HUIZINGA: Javier, Raj just alluded to the open-source nature of this tool or toolkit. I want you to drill in a little more on that story. Um, I understand this lives on GitHub. How did that decision come about, and why do you believe this will benefit people in the future?

ALVAREZ: Yes. So the decision to make the code open-source came from the desire to democratize the access to these AI models. So we wanted to make sure everyone would be able to build on top of our research. And that was the way that we found to give access to Addenbrooke’s to create their own medical devices. We thought that also having open-source code allows us to be more transparent with our research and to gain trust on the technology. It also helps us, as well, to get help from the community on building this project. So we had people helping us to fix bugs and to make sure, uh, the algorithms are not biased. As part of the open-source, we made available three big components. One is the InnerEye-Gateway that routes the images to the AI models in the cloud and de-identifies the data. We also made available the InnerEye inference code that basically is an API that the InnerEye-Gateway uses to run the models. And also all the training code to be able to reproduce our work. Uh, yeah, we are super excited to see how people will use the open source in the future. We also have some startups that are using our code and trying to build products with it.
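
Conceptually, the Gateway’s job is to de-identify DICOM images and route them to a cloud model for inference. The sketch below illustrates that flow with pydicom and a placeholder endpoint; the tag list and URL are assumptions, not the real InnerEye-Gateway behavior, and real de-identification follows the full DICOM confidentiality profile rather than a short list of tags.

```python
# Conceptual sketch of what a gateway does: strip identifying DICOM tags,
# then send the series to a cloud inference endpoint. The tag list and the
# endpoint URL are placeholders, not the real InnerEye-Gateway behavior.
import io
import pydicom
import requests

IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]

def deidentify(path):
    ds = pydicom.dcmread(path)
    for tag in IDENTIFYING_TAGS:        # real systems follow the full DICOM
        if tag in ds:                   # confidentiality profile, not this short list
            setattr(ds, tag, "")
    return ds

def send_for_segmentation(paths, endpoint="https://example.org/segmentation/infer"):
    files = []
    for p in paths:
        buffer = io.BytesIO()
        deidentify(p).save_as(buffer)
        files.append(("files", buffer.getvalue()))
    return requests.post(endpoint, files=files)   # response carries the contours
```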

HUIZINGA: Go a little further, Javier, because this is interesting. Obviously, radiation therapy is one application of InnerEye, but I imagine it could be useful for other medical applications or other … actually, probably anything that you need to identify something, you know, the signal in the noise.

ALVAREZ: Yeah, um, segmentation in medical imaging is super important, so it allows you to actually take measurements from the images. So, yeah, it can be useful, as well, in some radiology scenarios like clinical trials where you want to track tumors over time. And also in surgery where you want to plan surgery, so you need to understand how vessels are feeding into the tumor. So, yeah, segmentation is super important, and I think the components that we have could be useful for many different scenarios in medical imaging.
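
One concrete example of taking a measurement from a segmentation: converting a mask plus the scan’s voxel spacing into a tumor volume that can be tracked across visits. The spacing values below are made up for illustration.

```python
# One concrete "measurement from an image": tumor volume from a segmentation
# mask plus the scan's voxel spacing, so change can be tracked across scans.
import numpy as np

def volume_ml(mask, spacing_mm=(1.0, 1.0, 3.0)):
    """mask: boolean 3D array; spacing_mm: voxel size in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> millilitres (cm^3)

# Comparing volume_ml(mask_march) with volume_ml(mask_june) quantifies how a
# tumor has responded to treatment over time.
```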

HUIZINGA: Well, Raj, I always like to know where the project is on the spectrum from lab to life, and as I understand it, after the InnerEye team completed the research and made the code open source, Addenbrooke’s took the regulatory baton for medical device approval in the UK, but it’s still not over. So continuing with that analogy: if this were a relay race and the idea was the first leg, who else is running, where are you in the race, and who brings it across the finish line?

JENA: Yeah, that’s a really good analogy. I, I might use that one in the future. So, uh, there are other commercial organizations that have systems that will perform this work. They are quite expensive, actually, to buy into if you want to buy them outright. There are some where, a bit like ours, you can scale it so that you pay as each patient’s data is processed. They also are quite expensive for some emerging, uh, healthcare markets, and by emerging healthcare markets, I include my own in the, in the NHS. To our knowledge, ours is the only cloud-based, open-source medical imaging device that’s actually being built within the NHS. So that is truly unique. And in terms of where we are on that journey to take the, you know, the InnerEye open source all the way through to a medical device that actually, you know, you can buy off the shelf and have all of the associated support and, you know, technical specifications that you need to use in practice, we’re at this point where the hospital has basically finished all of that work. The hospital has been incredibly supportive of this entire research for the last 10 years, but it can’t act as a manufacturer. It’s quite difficult to do that. So we’ll then partner with a manufacturer, actually a company that’s a friend to us in the hospital and to the InnerEye team, too, and they will be responsible for basically taking all of the work that we’ve done to prepare the medical device certification documents and then actually going through that device certification and bringing it to the market. So it’s very exciting, you know, to be literally at that final stage of the, of the story.

HUIZINGA: Right. Ready to run across the finish line. I like to end each podcast with a little vision-casting, and I’ve been shocked at how profoundly healthcare has advanced in just the last hundred and fifty years. So I won’t ask you to project a hundred and fifty years out, but if InnerEye is a truly game-changing technology, what does healthcare, and especially oncology, look like in the future, and how has your work disrupted the field and made the world a better place? Javier, why don’t you talk about it from the technical aspect, and then maybe Raj can bring the show home from the medical aspect.

ALVAREZ: Sure. Yeah. One exciting, uh, development on the horizon is the use of GPT-4 in radiology or maybe even in radiotherapy. We are also working on multimodal learning now and trying to expand the work that we have done with InnerEye to radiology, where there is a much bigger opportunity. Uh, with multimodal learning, we are trying to integrate multiple sources of data like medical images, text, audio, and also different types of modalities because we want to make sure we can use CT scans, MRI, x-rays … and yeah, this requires developing new types of models, and these models need to be able to generalize to many different tasks because we have a huge need for AI in healthcare, and the current way of, uh, building these models is we develop one model for every use case, and this is not scalable. So we need more general-purpose models that can be specialized really quickly to different needs. And I think the other thing that excites me is actually … maybe this is quite far away, but how do we create a digital copy of the human body for every person on the planet and we create some sort of digital twin that we can actually use to run simulations? And I think medical imaging is going to be a big, important part of this. And we can use that digital twin to run interventions and figure out how can we treat that patient, what is happening with that patient, so, yeah, I think it’s super exciting, the potential of AI in healthcare, but of course we need to make sure we look at the risks, as well, of using AI. But yeah, there are many positive opportunities.

HUIZINGA: Right. I’m just shaking my head and my jaw is dropped: my digital twin in the future! [LAUGHS] Raj?

JENA: I mean, I think it’s a tremendously exciting time, and we live in an exponential age where things are coming and new innovations are coming at a faster and faster rate. I think what we have to do is to really, as doctors, learn from history and adapt to make sure that we stay able to retrain and reconfigure ourselves, and reconfigure medicine, to keep up to speed with the digital technologies. You know, just to give an example to what you were talking about with Joseph Lister; it’s fascinating. You know, I always think about, you know, Semmelweis and a similar story. So he was a Hungarian obstetrician working in Vienna who, for the audience, a hundred and fifty years ago worked out that actually if you washed your hands before delivering a baby, the mother was less likely to get a fever and less likely to die. He was 29 when he worked that out, and yet it took nearly 20 years for him to convince the medical community basically because they felt threatened. And, you know, that was the key thing. They just, you know, there wasn’t that level of understanding of, you know, that we need to think and adapt and incorporate new ideas and new thinking. And we will be challenged, you know, severely, I think, in the years to come, with new technologies. I’ve just come back from a conference talking about foundation models and GPT in medical imaging and, um, you know, there was a huge amount of excitement. One really interesting point that I heard is that these models were built on all of the images, mainly generated by cameras, on the internet and social media sphere, and if you add up all of the medical imaging that’s ever been done, it’s only about 1 percent of that image data. So it’s always going to be hard. And of course, we can’t always access all of that information, you know, for patient confidentiality and, you know, numerous factors. So it may take a little while before we have these amazing, generalizable AI models in medicine, but I’m sure they’ll come, and I think the biggest thing that we can do is to be ready for them. And the way I believe that you do that is in little steps, is to start bringing very simple, explainable, transparent AI into your workplace—of which, you know, InnerEye is a really good example—so that, you know, you can look inside the box, start to ask questions, and understand how it works because then, when the next AI comes along, or maybe the AI after that, that integrates more data than the human mind can hold together to make a decision, then you need to be comfortable with your ability to query that, to interrogate that, and make it safe, you know, for your patients. Because at the end of the day, for thousands of years, doctors have evaluated things. And yeah, I think, I think those things won’t change, you know, but we just … we’ve got to up our game, you know, so I’ve got to be as good as Javi is in kind of understanding how these things, how these things work. So …

HUIZINGA: Well, I love my job because I learn something new every show. And this one has been a humdinger, as they say. Thank you so much for taking time to educate us on InnerEye today.

ALVAREZ: Thank you.

JENA: Thanks. It’s been a pleasure.

The post Collaborators: Project InnerEye with Javier Alvarez and Raj Jena appeared first on Microsoft Research.

Collaborators: Data-driven decision-making with Jina Suh and Shamsi Iqbal http://approjects.co.za/?big=en-us/research/podcast/collaborators-data-driven-decision-making-with-jina-suh-and-shamsi-iqbal/ Thu, 03 Aug 2023 13:00:00 +0000 http://approjects.co.za/?big=en-us/research/?p=958119 Researcher Jina Suh and manager Shamsi Iqbal are longtime collaborators. Learn how their history of working together and their unique perspectives are informing their development of tools to support decision-making for organizational leaders.

The post Collaborators: Data-driven decision-making with Jina Suh and Shamsi Iqbal appeared first on Microsoft Research.

black and white photos of Principal Researcher Dr. Jina Suh and Principal Applied and Data Science Manager Dr. Shamsi Iqbal, next to the Microsoft Research Podcast

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a new Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with.

In this episode of the podcast, Dr. Gretchen Huizinga welcomes Principal Researcher Dr. Jina Suh and Principal Applied and Data Science Manager Dr. Shamsi Iqbal to the show to discuss their most recent work together, a research project aimed at developing data-driven tools to support organizational leaders and executives in their decision-making. The longtime collaborators explore how a long history of collaboration helps them thrive in their work to help workplaces thrive, how their relationship has evolved over the years, particularly with Iqbal’s move from the research side to the product side, and how research and product can align to achieve impact.

Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

JINA SUH: So in this particular project we’re working on now, we’re focusing our attention on people leaders. And these people leaders have to make decisions that impact the work practices, work culture, you know, eventually the wellbeing of the team. And so the question we’re raising is how do we design tools that support these people leaders to attend to their work practices, their team, and their environment to enable more decisive and effective action in a data-driven way?

SHAMSI IQBAL: And so we need to think big, think from an organizational point of view. And then we need to think about if we walk it back, how does this impact teams? And if we want teams to function well, how do we enable and empower the individuals within those teams?

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC ENDS]

I’m here today with Dr. Jina Suh, a Principal Researcher in the Human Understanding and Empathy group at Microsoft Research, and Dr. Shamsi Iqbal, the Principal Applied and Data Science Manager for the Viva Insights team in the Microsoft Data Platform and Growth organization. Jina and Shamsi are collaborating on a research project for Viva Insights that they hope will help people leaders reduce uncertainties and make informed and data-driven decisions for their teams. Before we unpack how they hope to do that, let’s meet our collaborators. Jina, I’ll start with you. Tell us a bit more about the Human Understanding and Empathy group at Microsoft Research and your role there. So what lines of research does the group pursue? And what are your particular interests and passions?

JINA SUH: Thank you for having me here, first of all. So our group does exactly what the name says. [LAUGHTER] We use technologies to gain understanding about people, and we try to design and develop technologies that use this understanding towards human wellbeing. Um, so we try to build technologies that are more empathic, you know, therapeutic; they can assist and augment people and so forth. And so my particular area of interest is in, um, identifying challenges and barriers for mental wellbeing and designing technology interventions and tools to support mental wellbeing. And so while I’ve done this research in the clinical domains, my interest of late has been around workplace wellbeing or mental health in non-clinical domains.

HUIZINGA: Mm hmm. So tell me a little bit more; when you say human understanding and empathy, and yet you’re working with machines, um, are you focused more on the psychological aspects of understanding people and applying those to machine technologies?

SUH: That and, and some more. So we use technologies to gain better understanding about people, their psychologies, their physiology, the contexts around them, whether they’re at work in front of a computer or they’re working out. But we also use technology to bring interventions in a way that actually is more accessible than traditional, I guess, going to therapists, um, and seeing, seeing somebody in person. So we try to bring technologies and interventions in the moment of wherever you are.

HUIZINGA: Yeah, we could have a whole podcast on “can machines have empathy?” but we won’t … today! Maybe I’ll have you back for that. Uh, Shamsi, you’ve had research space in Building 99, and now you’re embedded in a product group. So give us a brief guided tour of Microsoft Viva, and then tell us about the specific work you’re doing there.

SHAMSI IQBAL: Well, thanks, Gretchen. I’m super excited to be here and especially with my good friend Jina today. So let me talk a little bit about Microsoft Viva first. So, um, this is an employee experience platform that is built for organizations, teams, and individuals to support their workplace experience needs. And by experience needs, what I mean is, uh, things that help them thrive at work. So you could imagine these are data-driven insights about how they work and how they achieve their goals, curated knowledge that helps them get to their goals. There are opportunities to foster employee communications and collaborations. There are also learning opportunities tailored to their needs … um, all elements that are really important for people thriving at work.

HUIZINGA: So give me like a 10,000-foot view. I understand there’s four sort of major elements to, to Viva, and you’re particularly in the Insights. What are the other ones, and what does each of them do kind of in the context of what you just said?

IQBAL: So there are a few, and there are a few that are also coming on board soon.

HUIZINGA: Ooohhh! [LAUGHS]

IQBAL: So there is, for example, there is Viva Engage, uh, that helps with employee communication and collaboration. There is Viva Goals that helps you with exactly what I said— goals—and helps you achieve outcomes. There is Viva Topics that helps you with the knowledge generation and knowledge curation and contextually, uh, help people get access to that knowledge. So, so these are a few examples of the modules that are within Viva. And I work in Viva Insights. I lead a team of applied scientists and data scientists, and our particular charter is to bring deep science into the product. And we do this through incubation of new ideas, where we get to partner with MSR and OAR and all these other cool research groups that exist in, in Microsoft. And then we are also tasked with translating some of these complex research findings into practical product directions. So these are kind of like the two main things that we are responsible for.

HUIZINGA: So without giving away any industry secrets, can you hint at what is coming online, or is that all under wraps right now?

IQBAL: Well, I mean, some things are obviously under wraps, so I shouldn’t be letting out all the secrets. But in general, right now, we are focusing on really organizations and organizational leaders and how we can help them achieve the outcomes that they’re trying to achieve. And we are trying to do this in a, in a data-driven way where we can show them the data that is going to be important for them, to help them, uh, reach their organizational decisions. And also at the same time, we want to be able to point them towards actions and provide them support for actions that will help them better achieve those outcomes.

HUIZINGA: Yeah. I’ll be interested when we get into the meat of the project, how what Jina does with human understanding and empathy plays into what you’re doing with kind of business/workplace productivity and I guess we’ll call it wellbeing, because if you’re doing a good job, you usually feel good, right?

IQBAL: Right. So yeah, I think that that’s really important, and that’s where Jina and I kind of started, is that thinking about productivity and wellbeing really being intertwined together. So I mean, if you’re not feeling good about yourself, you’re really not going to be productive. That is at an individual level …

HUIZINGA: … and vice versa. Shamsi, before we go on, I want you to talk a little bit more about your move from research to product. And I’ve often called this the human version of tech transfer, but what prompted the move, and how is your life the same or different now that you’re no longer sort of a research official?

IQBAL: Well, I still like to think of myself as a research official, but I’ll, I’ll give you a little bit of history behind that. So I was in Microsoft Research for, what, 13 years and kind of like settled into my role thinking that, well, going for the next big project and working on the research insights and kind of like making impact at the research level was satisfying. And then COVID happened. 2020. The workplace transformed completely. And I took a step back, and I was thinking that, well, I mean, this may be the time to take the research that we have done for so many years in the lab and take it to practice.

HUIZINGA: Yeah.

IQBAL: And so there was this opportunity in Viva Insights at that point, which was just announced, and I felt that, well, let me go and see if I can do actually some tech transfer there, so bring in some of the work that we have been doing in MSR for years and see how it applies in a practical setting.

HUIZINGA: Yeah. And, and obviously you’re connected to a lot of the people here like Jina. So having followed both of you—and not in a stalker kind of way—but following you and your work, um, over the last couple of years, I know you have a rich history of both research and academic collaborations, but I want to know what you’re working on now and kind of give us an explanation of the project and how it came about. I like to call this “how I met your mother.” Although you guys have known each other for years. So, Jina, why don’t you take the lead on this, and Shamsi can corroborate if the story’s accurate on how you guys got together on this, but what are you doing with this project, and, um, how did it come about?

SUH: Yeah, so I wanted to kind of go back to what Shamsi was saying before. We’ve been collaborating for a while, but our common passion area is really at the intersection of productivity and, and wellbeing. And I think this is like, I don’t know, a match made in heaven … is to help people be productive by, um, you know, helping them achieve wellbeing at work and vice versa. And so my focus area is individual wellbeing. And as prior literature should have warned me, and I had, I had been ignoring, helping individuals can only go so far. There are organizational factors that make it difficult for individuals to perform activities that help them thrive. And so Shamsi and I have been working on several projects that started as individual tools, but later it really revealed fundamental organizational issues where we couldn’t just ignore factors outside of an individual. So in this particular project we’re working on now, we’re focusing our attention on people leaders. These are organizational leaders and executives like C-suites, as well as middle managers and line managers. And these people leaders have to make decisions that impact the work practices, work culture, you know, eventually the wellbeing of the team. And sometimes these decisions are made with a hunch based on anecdotes or these decisions are not made at all. And so the question we’re raising is how do we design tools that support these people leaders to attend to their work practices, um, their team, and their environment to enable more decisive and effective action in a data-driven way? And so what is the role of data in this process, and how do we facilitate that, you know, reflexive conversation with data to reduce uncertainty about these decisions?

HUIZINGA: Mmmm. You know what I love, and this is just a natural give-and-take in the conversation, but the idea that the, the individual is a situated being in a larger cultural or societal or workplace setting, and you can’t just say, well, if I make you happy, everything’s cool. So you’ve got a lot of factors coming in. So it’s really interesting to see where this work might go. I love that. Some collaborators just started working with each other and you guys, because you’ve been together for some time, have an established relationship. How do you think, Shamsi, that that has impacted the work you do or … if at all? I mean, because we’re focusing on collaboration, I’m kind of interested to tease out some of the people on the show who have just started working together. They don’t know each other. They’re in different states, different countries. Are there any advantages to working together for years and now doing a collaboration?

IQBAL: Oh! I can name so many advantages! Jina and I have worked closely for so long. We know each other’s strengths, weaknesses, working styles. So I mean, when I moved into Viva Insights, I knew that Jina was going to be a close collaborator not just because of the projects that we had worked on and the natural connection and alignment to Viva Insights, but it’s just because I knew that whenever I have a question, I can go to Jina and she’s going to go and dig out all the research. She maybe already knew that I was going to ask that question, and she had that already ready. I don’t know. I mean, she seems to read my mind before even I know the questions that I was going to ask her. So this was very natural for me to continue with that collaboration. And my colleagues over in Viva Insights, they also know Jina from previous, uh, interactions. So whenever I say that, “Well, I’m collaborating with Jina Suh in MSR,” and they say, “Oh yeah, we know Jina! So we are in good hands.”

HUIZINGA: Jina, is that true? You can read minds?

SUH: I am …

HUIZINGA: Or just hers? [LAUGHTER]

SUH: I’m … I’m sweating right now! [LAUGHTER] I’m so nervous.

HUIZINGA: Oh my gosh! … Well, so how about you? I mean, you had talked earlier about sort of a language barrier when you don’t know each other and, and because you’ve both been researchers, so … what advantages can you identify from this kind of connection?

SUH: Yeah, I think having Shamsi in the product group and having Shamsi being, uh, in Microsoft Research before, she knows how to translate my words into the product words, and I, I, I’m pretty sure, Shamsi, you, you might have struggled at the beginning. I’m not sure … at the product group.

IQBAL: I did, I did. I still do! [LAUGHTER]

SUH: But I think that struggle, I mean, she knows how to amplify my research and how to, um, talk about it in a way that the product groups will appreciate it. And she finds and identifies opportunities where research is needed, where I could actually shine. You know, before it was like, “I did all this research! Come look at me! Come look at me!” Shamsi will like, you know, find me, find opportunities for me, and say, “Hey, we have this gap. Can you come and speak about that?” And so, I don’t know, having that bridge I think really helps. And Shamsi is more than a collaborator. She’s more of a mentor for me, um, in that regard.

HUIZINGA: Awesome … so academics definitely have a particular way of talking and writing … and communicating, and product people do, too. So what might be an example of what would need that kind of level of translation, if you will? What, what might not someone get?

IQBAL: I think one of the things that I am still learning, and it took me a while to get to that point where I really started understanding that I need to fill this gap because there is a substantial gap between where research findings are and how that actually resonates with a product team.

HUIZINGA: Interesting.

IQBAL: And I think that the biggest question there is how to take the research findings and situate them in a real-world problem. So in the product world, we talk a lot about customer needs. And coming from research, I had the idea that, well, I will identify the problems and if it’s a compelling problem and I have a good solution, the product teams will come. I have no responsibility in figuring out the customer need because, come on, I already identified a great problem. And I think that I have been humbled over the past couple of years that that is not quite how it works. But being in that space now allows me … every time I come across a research question or I’m setting up some kind of a hypothesis, I take a step back and think about, OK, so how does it relate to the product? What customer need are we solving for now, or what future customer need are we going to solve?

HUIZINGA: Right.

IQBAL: And I think that with Jina, she keeps me grounded in the research, but she also has an engineering background, as well. So it is not that she does not understand the space. She understands the constraints in implementation in building something. So that is really good for me because I can borrow that knowledge. And when I talk to my product colleagues, I, I can leverage that, as well.

HUIZINGA: That’s, that’s hilarious because you’ve just identified a third group, which is engineering. Jina, I’m interested to know how previous collaborations might have informed the approach to the work you do now, so briefly talk about your early work together and what learnings could you share from any early successes or failures?

SUH: Yeah, maybe this is a little one-sided for a story about a collaboration, but I’ve, I’ve always looked up to Shamsi as kind of like the expert in productivity and wellbeing, so …

IQBAL: I am now blushing! [LAUGHTER]

SUH: So I think I’m more on the receiving end in this collaborative relationship …

IQBAL: That is not true! [LAUGHTER]

SUH: So, you know, I’ve always been passionate about mental health and emotional wellbeing, and unfortunately, mental health isn’t a high priority for a lot of people and organizations. And, you know, in the workplace, it’s sometimes tricky whether this concept of mental health should or should not be part of the work equation. And I’ve always admired how Shamsi was able to naturally … I mean, it’s, it’s kind of amazing how seamlessly she’s integrating aspects of mental health into the research that she does without really calling it mental health. [LAUGHTER] So, for example, like helping people transition in and out of work and disengage from work. I mean, how close to mental health could that be? You know, taking breaks at work, helping people focus more and distract less … like all these studies around attention that she’s done for years. So these kinds of, um, way that Shamsi is able to bring aspects of something that I’m really passionate about into, uh, into the workplace and into a language where product groups and the businesses really care about, that’s one of my biggest learnings from looking up to Shamsi and working together. You know, she’s constantly helping me, trying to understand, um, how do we actually formulate and, and talk about wellbeing in the context of the workplace so that leaders and organizational leaders, as well as product and business owners, as well, Microsoft in general, appreciate the work that we do. So that’s been really a blessing to have Shamsi be my partner …

HUIZINGA: Yeah. Shamsi, do you want to spread the love back on her? [LAUGHS]

IQBAL: Yeah, I think that I get motivated by Jina every single day, and, um, I think one of the things, which … I was going to interrupt you, but you were, you were, you were articulating this so nicely that I felt that I needed to wait and not interrupt and then pick an opportune moment to interrupt! So I, I, I am bringing back my PhD thesis into this conversation. [LAUGHS]

HUIZINGA: Right on!

IQBAL: Yeah! So, so one thing which was super interesting to me when I moved over to the product group and I was starting to look deeply into how we can bring some of the wellbeing-related work into the product. And I started digging into the organizational behavior literature. And what was fascinating is that everything we talked about in terms of wellbeing had a different definition in terms of like workplace thriving. And so things about giving people mental space to work and giving people opportunity to grow and belonging and all of these constructs that we typically relate to mental health, those are actually important workplace wellbeing constructs that have a direct relationship to workplace outcomes. And so I tried to reframe it in that way so that it doesn’t come across as a “good to have”; it comes across as a “really necessary thing to have.” What has happened over the past few months or so, I would say, there has been a shift in how organizations are thinking about people and individuals and wellbeing and productivity, and this is just a function of how the world is right now, right? So organizations and leaders are thinking that maybe now is the time to bring the focus back onto outcomes— productivity, revenue. And it seems that all the other things that we were focusing on … 2020, 2021 … about really being employee-centric and allowing employees to bring their best selves to work, it seems on the surface that that has gone sideways. But I’m going to argue that we should not be doing that because at the end of the day, the individuals are the ones whose work is going to aggregate up to the organizational outcomes. So if we want organizations to succeed, we need individuals to succeed. And so, um, at the beginning of the podcast, Gretchen, you were talking about individuals being parts of organizations. So individuals are embedded in teams; teams are embedded in organizations. So if organizations figure out what they want to do, it kind of bubbles down to the individuals. And so we need to think big, think from an organizational point of view, because that keeps us scoped and constrained. And then we need to think about if we walk it back, how does this impact teams? And if we want teams to function well, how do we enable and empower the individuals within those teams? So, so it’s a, it’s a bigger construct than what we had originally started with. And I think that now I am also pushing myself to think beyond just individuals and thinking about how we can best support individuals to thinking about how that actually bubbles up to an organization.

HUIZINGA: Right.

SUH: This is exactly what I’m talking about. [LAUGHTER]

HUIZINGA: Yeah, no, I’m …

SUH: This is exactly … She’s helping me. [LAUGHS]

HUIZINGA: It’s a podcast and you can’t see Jina smiling and nodding her head as Shamsi’s talking. Umm, let’s, let’s drill in a little bit on alignment between product and research, because we talked a little bit earlier about the language barrier and sometimes the outcome difference. And it’s not necessarily conflicting, but it might be different. How do you get to what I’ll call the Goldilocks position of alignment, and what role do you think collaboration plays, if any, in facilitating that alignment?

IQBAL: So it is not easy. And, I mean, obviously, and I think that again, this is where I’m starting to learn how to do this better. I think that starting off with a problem that a product team cares about—and the leaders in the product team care about—I think that that’s where we really want to start. And in this particular collaboration that Jina and I are in right now, um, we started off with having a completely different focus, and then in January I came back and told Jina, scratch that; we’ll have to go back to the drawing board and change things! And she didn’t bat an eyelash. Because I was expecting that she would push back and say that, well, I, I have things to deliver, as well. You can’t come and randomize me. But I mean, knowing Jina, she was completely on board. I mean, I was worried. She was less worried than I was. But I think that, um, going back to your original question, I think that alignment in terms of picking a problem that product teams care about, I think that that’s super important. I think that then, going back to the original point about translating the research findings, for this particular project, what we are doing is that we are looking at something that is not going to block the product right now in any way. We are looking at something in the future that will hopefully help, and we are really trying to understand this population in, in a much deeper way than what we have done before.

HUIZINGA: Right, right. Jina on that same note, you know, Shamsi’s actually sitting over in product right now, so she’s talking about finding a problem that product people care about. But what about from the research angle? How do product people get you on board?

SUH: Yeah, I think for me, I, I wonder about my role in Microsoft Research. You know, why am I in Microsoft doing research? Why am I not doing research somewhere else? So I try to make a concerted effort to see if there are collaborators outside of just research to justify and to make sure that my impact is beyond just research. And so it was just natural, like, you know, me and Shamsi having shared interests, as well as her, you know … collaborating together with Shamsi and her moving to Viva was a natural kind of transition for me. And so having connections into Viva and making sure that I participate in, you know, Viva share-outs or other things where I learn about the product’s priorities, as well as the questions that they have, concerns, challenges that they’re facing. Those are all great opportunities for me to learn about what the product is going through and how I can maybe think about my research direction a little bit differently. You know, I feel like every research question can be morphed into different things, can be looked at it from different perspectives. But having that extra, um, signal from the product group helps me, you know, think about it in a different way, and then I can approach the product group and say, hey, I heard your challenges, and I thought about it. Here’s my research direction. I think it aligns. You know, it’s kind of a back-and-forth dance we have to play, and sometimes it doesn’t quite work out. Sometimes, you know, we just don’t have the resources or interests. But you know, in this case with Shamsi and Viva, I think our interests are just like perfectly aligned. So, you know, Shamsi talked about pivoting … Shamsi has given me enough warnings or, you know, kind of signals that things are changing early enough that I was already thinking about, OK, well, what does it mean for us to pivot? So it wasn’t that big of a deal.

HUIZINGA: Well, yeah, and we’ll get to pivot in a second. So the interesting thing to me right now is on this particular project, where you’re working on data-driven insights for people leaders to make data-driven decisions, how do you then balance say, you have a job at Microsoft Research, you have a lane that you’re running in in terms of deliverables for your bosses, does that impact you in terms of other things you’re doing? Do you have more than one project on the go at a time, or are you pretty focused on this? How does it look?

IQBAL: Jina is smiling! [LAUGHTER]

SUH: I think the DNA of a researcher is that you have way too many things going [LAUGHS] on than you have hands and arms to handle them, so, yes, I have a lot of things going on …

HUIZINGA: What about the researchers? Do the product people have to figure out something that the researchers care about?

IQBAL: So when we started first conceptualizing this project, I think that we started off with the intent that we will have research outcomes and research contributions, but that would be constrained within a particular product space. I think that that’s how we kept it kind of like both interesting for research and for product.

HUIZINGA: Got it.

IQBAL: I mean, one of the responsibilities that I have in my new role is that I also have to kind of deliver ideas that are not going to be immediately relevant maybe but towards the future. And so this gives me the opportunity to explore and incubate those new ideas. Maybe it works out; maybe it doesn’t. Maybe it creates a new direction. The product team is not going to hold me accountable for that because they have given me that, that flexibility that I can go and explore.

HUIZINGA: Have some runway …

IQBAL: Yeah. And so that’s, that’s why I tried to pick something—or Jina and I worked together to pick something—which would have enough interest as a research contribution as well as something that could be picked up by a product leader, as well.

HUIZINGA: That’s a perfect way of putting it. You know, speaking of incubation, in some ways, Microsoft is well known for its internships. And you have an intern working on this project right now. So it’s sort of a Microsoft/Microsoft Research/university, um, collaboration. Jina, tell us about the student you’re working with and then talk about how Microsoft Research internships are beneficial, maybe … and anchor that on this particular project.

SUH: So the intern that is working on our project is Pranav Khadpe. He’s a PhD student in the, uh, Human-Computer Interaction Institute at Carnegie Mellon University. So Pranav studies and builds infrastructures that strengthen interdependence and collaboration in occupational communities, which is, I think, really aligned to what this podcast is trying to do!

HUIZINGA: Yeah, absolutely.

SUH: So for example, he builds technologies that support people seeking help and getting feedback, getting mentoring and a sense of belonging through interaction with others within those communities. And I think internships primarily have the benefit of mentoring for the future generation of researchers, right? We’re actually building this pipeline of researchers into technology companies. We’re giving them opportunities to, to experience what it’s like to be in the industry research and use that experience and, and entice them to come work for us, right? [LAUGHS]

HUIZINGA: Right. It’s a farm team!

SUH: Right. [LAUGHTER] Um, so I feel like I have this dual role at Microsoft Research. On one hand, we are researchers like Shamsi and I. We need to continue to push the boundaries of scientific knowledge and disseminate it with the rest of the world. But on the other hand, I need to bring value of that research back into our products and business, right? And so, um, internships that are designed with product partners are really forcing functions for us to think about this dual role, right? It’s a learning opportunity for all of us involved. So from the research side, we learn how to find the right balance between research and product, um, and ensuring that we do successful technology transfer. But from the product perspective, they learn how to apply scientific rigor in their product decisions or decisions that they make … or designs that they make. And it’s a, it’s a really great opportunity for Pranav to be sitting in between the product and research. He’s not only learning what he’s already being trained to do in his PhD, being mainly an independent researcher, but he’s also learning how to bring that back into the product. So now he’s being trained not only to be a researcher in MSR but perhaps an applied scientist in the industry, as well. So I think there’s that benefit.

HUIZINGA: And that gets back to the individual being situated within an organization. And so being an independent researcher is cool, but you’re always going to be working with a team of some kind … if you want a paycheck. [LAUGHS] Or you can just go off and invent. Shamsi, I always ask what could possibly go wrong? Some people hate that question, but I think it’s worth asking. And while I know that data driven—quotation marks around that, air quotes around that—is generally a positive buzz-phrase in decision-making today, I wonder how you’re addressing this augment-not-replace mandate in the work you’re doing. How do you keep humans in the loop with real life wisdom and emotions and prevent the march toward what I would call outsourcing decision-making, writ large, to machines?

IQBAL: I think it’s a, it’s a great question, and it’s a very timely one, right? And, uh, the way that I like to think about it, being a human-computer interaction researcher is—who is now dealing with a lot of data—is that data can show but not necessarily tell. And I think that the “telling” part comes from the humans. Maybe in the future, AI and data will be able to do the telling job better, but right now, humans have the context. So a human being who has the context can look at the data and explain why it is showing certain things. And I think that that’s where I feel that the integration of the human in that process is so important. The challenge is showing them the right data. I think that that is also where the human comes in, in figuring out what they need to see. The data can show them that, and then the human gets to tell the story around it.

HUIZINGA: OK. Jina, do you have any insights on that?

SUH: One of the challenges is that, um, we’re not only just building tools to help people look at the data and contextualize it and explain it and understand it but also show them how powerful it can be in changing their behaviors, changing their organization. And it takes work. So one challenge that, you know, we were just having a discussion over lunch is that, how do we actually get people to be motivated enough to interact with the data, have a conversation with the data? And so these are some of the questions that we are still … uh, we don’t have an answer for; we’re still trying to answer. But, you know, our role is not just to feed data and help them look at it and tell a story about it, but also demonstrate that it’s empowering so that they can have more engaging experience with that data.

HUIZINGA: Tell me what you mean by having a conversation with the data. I mean, what does that look like?

SUH: The obvious … most obvious example would be with the large language models.

HUIZINGA: OK!

SUH: You can have a …

HUIZINGA: An actual conversation!

SUH: An actual conversation with the data. But you can also do that through user experience, right? You can be asking questions. I think a lot of these things happen naturally in your head. You’re formulating questions about data. You’re finding insights. You move on to the next question. You become curious. You ask the next question. You explain it. You bring your context to it and then you explain it. So that sort of experience. But I think that takes a lot of work. And we need to make sure that we entice them to, to make sure that there’s value in doing that extra work.
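
As a concrete illustration of the question-and-answer loop Suh describes, here is a small Python sketch of how a leader’s question might be combined with aggregated team metrics and passed to a language model. The metric names and values are invented examples, and call_llm is a stand-in for whatever model API a real system would use; nothing here reflects the actual Viva Insights implementation.

```python
# Illustrative "conversation with the data" loop: combine a question with summary
# metrics and ask a language model to draft an answer. All values are made up.
team_metrics = {
    "after_hours_collaboration_pct": 18,    # placeholder example values
    "average_meeting_hours_per_week": 14,
    "focus_time_hours_per_week": 9,
}


def call_llm(prompt: str) -> str:
    """Placeholder for a real large language model call; returns a canned reply."""
    return "Example response: after-hours collaboration looks elevated; consider reviewing meeting load."


def ask(question: str) -> str:
    """Ground the leader's question in the aggregated metrics and query the model."""
    context = "\n".join(f"{name}: {value}" for name, value in team_metrics.items())
    prompt = f"Team metrics:\n{context}\n\nQuestion: {question}\nAnswer briefly."
    return call_llm(prompt)


if __name__ == "__main__":
    print(ask("What stands out in this team's collaboration patterns?"))
```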

HUIZINGA: Well, and the fact that you’re embedded in a group called Human Understanding and Empathy and that your intern is on human-computer interaction, the human-centeredness is a huge factor in this kind of work today. Ummm. The path from lab to life, as they say—wait, it’s what I say!—is not necessarily straight and linear and sometimes you have to pivot or, as I like to say, add a kick ball change to the research choreography. How did this work begin? Shamsi, you told a story, um, early on, and I think people like stories. I’m going to have you both address this, but I want Shamsi to go first. How did it begin, and then how did it change? What were the forcing functions that made the pivot happen and that you both reacted to quite eagerly, um, both to meet the research and organizational outcomes?

IQBAL: As we said many times in this podcast, I mean, Jina and I, we, we naturally gravitate towards the same research problems. And so, we were looking at one aspect of Viva Insights last year with another intern, and that internship project, apart from being really impactful and well-received in Viva Insights, as well, I think it was just a lot of fun doing a joint internship project with Jina. And so this time when the application period came around, it was a no-brainer. We were going to submit another proposal. And at that point … based on some of the work that we had done last year … so we were really going to focus on something around how we can help people managers and their reports have better conversations and … towards their goals. Right, Jina? I think that that’s where we had kind of like decided that we were going to focus on. And then we started interviewing interns with that project in mind, and then it was December or January where things shifted, the tech world went through quite a tumultuous time, and, uh, we just had to pivot because our organization had also changed directions and figured that, well, we need to focus more on supporting organizations and organization leaders through this time of change. Which meant that we could still do our internship project, but it just didn’t seem right in terms of what we could do, in terms of impact and maybe immediate impact, too, uh, for the field. So Jina and I talked. I recommended that we shift the intern that we had talked to. I think that we had already talked to Pranav. I mean, he seemed really versatile and smart. And then we just decided, I think he’ll be OK. I think that he will be fine. I think that he would actually be even a better fit for the project that we have in mind.

HUIZINGA: Yeah. You know, as you’re talking, I’m thinking, OK, yeah, we think of the workers in a workplace, but the pressure on leaders or managers is intense to make the right decision. So it seems really in line with the empathy and the productivity to bring those two together, to, to help the people who have the severe pressure of making the right decisions at the right time for their teams, so that’s awesome. Jina, do you have anything to add to the pivot path? I mean, from your perspective.

SUH: Yeah, I mean, like I was saying, I think Shamsi was giving us, or me, plenty of signals that this might be happening. So it gave me plenty of opportunities to think about the research. And really, we didn’t make too many changes. I mean, I’d, I’d like to think that we’re, we’re trying to get at the same problem but from a slightly different angle. And so, you know, before it was individual and manager conversations. Now we’re going from, you know, managers to organizational leaders. At the end of the day, like the real transformative change in an organization happens through the leadership. And so, I think before, we were just kind of trying to connect the individual needs to their, to their immediate managers. But now I think we’re going at the problem in a more fundamental way, really tackling the organizational leaders, helping them make the right decisions to, to help their organizations thrive. And I’m more excited about this new direction than before.

HUIZINGA: Yeah. You know, I hadn’t asked this, and I should be asking it to everyone that comes in the booth or on screen. What are the primary methodologies you engage with in this research? I mean, quantitative, qualitative, mixed?

SUH: Yeah, we, we do everything, I think. [LAUGHS] Um, I, I think that’s the beauty of the human-computer interaction … the space is huge. So we do anything from qualitative interviews, uh, you know, contextual inquiry, like observing people, understanding their work practices, to interviewing people, as well as running survey studies. We’ve done purely quantitative studies, as well, looking at Viva Insights data and understanding the correlation between different signals that Viva Insights is providing with workplace stress at a large scale, at high fidelity, so…

IQBAL: I think another thing that I would add is that sometimes we also build prototypes based on the ideas that we come up with and so we get to evaluate those prototypes in, in smaller scale but in far more depth. And so those kinds of results are also super important for the product teams because that helps bring those ideas to life, is that well, I understand your research and I understand your research findings, but what do I do with it? And so if you have a prototype and that shows that, well, this is something that you might be able to do, and then it’s up to them to figure out whether or not this is actually scalable.

HUIZINGA: Well, as we wrap up, I’d like to give each of you, uh, the chance to do a little future envisioning. And I know that that’s kind of a researcher’s brain anyway, is to say what kind of a future do I want to help build? But how will the workplace be different or better because of the collaborative work you’ve done? Jina, why don’t you go first.

SUH: As researchers, I think it’s our job to get ahead of the curve and to really teach the world how to design technology in a, in a way that considers both its positive and negative impacts. So in the context of using data at work, or data about work, how the data gets used for and against people at work …

HUIZINGA: Ooohh!

SUH: … there’s a lot of fear. Yes, there’s a lot of fear about workplace surveillance.

HUIZINGA: Yeah!

SUH: And so the question for us is, you know, how do we demonstrate that this data can be used ethically and responsibly and that there is value in, in this data. So I, I am hoping that, you know, through this collaboration, I’m hoping that we can pave the way for how to design these technologies responsibly, um, and, and develop data-driven practices.

HUIZINGA: Shamsi, close the show with us and tell me your preferred future. What, what are you going to contribute to the workplace world?

IQBAL: So I would just add one more thing. I think that data responsibility, transparency, and ethical use of data, I think it’s at the core of Microsoft’s mission, and I think it’s on us to show that in our products. I think that the other thing, which is a little away from the data, I think that just going back to this concept of leaders and, uh, individuals, I have always maintained that there is oftentimes a tension between what an individual’s goals might be and what an organization’s goals might be. And I’m hoping through this work that we can kind of like help resolve some of those tensions, that once organization leaders are provided with the right kind of insights and data, they will be more motivated to take actions that will be also beneficial to individuals. Oftentimes, that connection is not very clear, uh, but I’m hoping that we can shed some light on it.

HUIZINGA: Jina, Shamsi, so good to see you again. Smiles so big. Thanks for coming in and sharing your “insights” today.

SUH: Thank you for having us.

IQBAL: Thank you so much. This was a lot of fun.

The post Collaborators: Data-driven decision-making with Jina Suh and Shamsi Iqbal appeared first on Microsoft Research.

Collaborators: Gaming AI with Haiyan Zhang http://approjects.co.za/?big=en-us/research/podcast/collaborators-gaming-ai-with-haiyan-zhang/ Thu, 20 Jul 2023 13:00:00 +0000 http://approjects.co.za/?big=en-us/research/?p=954777 For over a decade, Xbox has been leveraging AI to elevate gaming. Haiyan Zhang, GM of Gaming AI, explores the collaborations behind the work and the potential for generative AI to support better experiences for both players and game creators.

The post Collaborators: Gaming AI with Haiyan Zhang appeared first on Microsoft Research.

black and white photos of Haiyan Zhang, General Manager of Gaming AI at Microsoft Gaming, next to the Microsoft Research Podcast

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a new Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with.

In the world of gaming, Haiyan Zhang has situated herself where research meets real-world challenges, helping to bring product teams and researchers together to elevate the player experience with the latest AI advances even before the job became official with the creation of her current role, General Manager of Gaming AI. In this episode, she talks with host Dr. Gretchen Huizinga about the variety of expertise needed to avoid the discomfort experienced by players when they encounter a humanlike character displaying inhuman behavior, the potential for generative AI to make gaming better for both players and creators, and the games she grew up playing and what she plays now.

Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

HAIYAN ZHANG: And as animation rendering improves, we have the ability to get to a wholly life-like character. So how do we get there? You know, we’re working with animators. We want to bring in neuroscientists, game developers, user researchers. There’s a lot of great work happening across the games industry in machine learning research, looking at things like motion matching, helping to create different kinds of animations that are more human-like. And it is this bringing together of artistry and technology development that’s going to get us there. 

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC ENDS]

I’m delighted to be back in the booth today with Haiyan Zhang. When we last talked, Haiyan was Director of Innovation at Microsoft Research Cambridge in the UK and Technical Advisor to Lab Director Chris Bishop. In 2020, she moved across the pond and into the role of Chief of Staff for Microsoft Gaming. Now she’s the General Manager of Gaming AI at Xbox, and I normally say, let’s meet our collaborators right now, but today, we’ll be exploring several aspects of collaboration with a woman who has collaborating in her DNA. Welcome back, Haiyan! 

ZHANG: Hi, Gretchen. It’s so good to be here. Thank you for having me on the show.

HUIZINGA: Listen, I left out a lot of things that you did before your research work in the UK, and I want you to talk about that in a minute, but first, help us understand your new role as GM of Gaming AI. What does that mean, and what do you do?

ZHANG: Right. Uh, so I started this role about a year ago, last summer, actually. And those familiar with Xbox will know that Xbox has been shipping AI for over a decade and in deep collaborations with Microsoft Research, with technologies like TrueSkill and TrueMatch, uh, that came out of the Cambridge lab in the UK. So we have a rich history of transferring machine learning research into applied product launch, and I think more recently, we know that AI is experiencing this step change in its capabilities, what it can empower people to do. And as we looked across all the various teams in Xbox working on machine learning models, looking at these new foundation models, this role was really to say, hey, let’s bring everybody together. Let’s bring it together and figure out what’s our strategy moving forward? Are there new connections for collaboration that we can form across teams from the platform services team to the game development teams? Are there new avenues that we should be exploring with AI to really accelerate how we make games and how we deliver those great game experiences to our players? So my role is really, let’s bring together that strategy for Xbox AI and then to look at new innovations, new incubation, that my team is spearheading, uh, in new areas for game development and our gaming platform.

HUIZINGA: Fascinating. Are you the first person to have this role? I mean, has there been a Gaming AI person before you?

ZHANG: Did you … did you get that from [LAUGHS] what I just … yeah, did you get that hint from what I just said?

HUIZINGA: Right!

ZHANG: The role didn’t exist before I came into the role last summer. And sometimes, you know, when you step into a role that didn’t exist before, a part of that is just to define what the role is by looking at what the organization needs. So you are totally right. This is a completely new role, and it is very timely because we are at that intersection where AI is really transformational. I mean, we’ve seen machine learning gain traction with deep learning, uh, being applied in many more areas, and now with foundation models, with these large language models, we’re just going to see this completely new set of capabilities emerge through technology that we want to be ready for in Xbox.

HUIZINGA: You know, I’m going to say a phrase I hate myself for saying, but I want to double click on that for a minute. [LAUGHS] Um, AI for a decade or more in Xbox … how much did the new advances in large language models and the GPT phenom help and influence what you’re doing?

ZHANG: Well, so interestingly, Gretchen, I, I’ve actually been secretly doing this role for several years before they made it official. So back in Cambridge in the UK, with Microsoft Research, uh, I was working with our various AI teams that were working within gaming. So you might have talked before to Katja Hofmann’s team working on reinforcement learning.

HUIZINGA: Absolutely.

ZHANG: The team working on Bayesian inference with TrueSkill and TrueMatch. So I was helping the whole lab think about, hey, how do we take these research topics, how do we apply them into gaming? Coming across the pond from the UK to Redmond, meeting with different folks such as Phil Spencer, the CEO of Microsoft Gaming, the leadership team of Xbox and really trying to champion, how do we get more, infuse more, AI into the Xbox ecosystem? Every year, I would work with colleagues in Xbox, and we would run an internal gaming and AI conference where we bring together the best minds of Microsoft Research and the product teams to really kind of meet in the middle and talk about, hey, here are some research topics; here are some product challenges. So I think for the last five or six years, I’ve, I’ve been vying for this role to be created. And finally, it happened!

HUIZINGA: You sort of personify the role anyway, Haiyan. Was that conference kind of a hackathon kind of thing, or was it more formal and, um, bring-minds-together and … academic-wise?

ZHANG: Right. We saw a space for both. There needed to be more of a translation layer between, hey, here are some real product challenges we face in Xbox, both in terms of how we make the games and then how we build the networking platform that players join to access the games. So here are some real-world problems we face. And then to bring that together with, hey, here are some research topics that people might not have thought about applying to gaming. In recent times, we’ve looked at topics like imitation learning. You know, imitation learning can be applied to a number of different areas, but to apply it to video games is to say, hey, how can we take real player data and try to develop AI agents that can play, uh, in the style of those human players, personalized for human players? You know, this is where I think the exciting work happens between problems in the real world and, uh, researchers that are looking at bigger research topics and are looking for those real-world problems.
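
To make the idea of imitation learning a little more concrete, here is a minimal behavioral-cloning sketch in Python: a small network is trained with ordinary supervised learning to map game observations to the actions a recorded human player took in those situations. The observation size, action count, and data below are placeholder assumptions for illustration only, not the approach or tooling used by the teams discussed in this episode.

import torch
import torch.nn as nn

# Placeholder sizes: a real game would have far richer observations and actions.
OBS_DIM, NUM_ACTIONS = 32, 8

# Stand-in for logged human gameplay: (observation, action) pairs.
observations = torch.randn(1000, OBS_DIM)
actions = torch.randint(0, NUM_ACTIONS, (1000,))

# A small policy network that learns to imitate the recorded player.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Behavioral cloning: supervised learning on the player's own choices.
for epoch in range(10):
    logits = policy(observations)
    loss = loss_fn(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained policy picks the action the human would most likely have taken.
next_action = policy(torch.randn(1, OBS_DIM)).argmax(dim=-1)
print(next_action.item())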

HUIZINGA: Right. To kind of match them together. Well, speaking of all the things you’ve done, um, you’ve had sort of a Where in the World is Carmen Sandiego? kind of career path, um, everything from software engineering and user experience to hardware R&D, design thinking, blue-sky envisioning. Some of that you’ve just referred to in that spectrum of, you know, real-world problems and research topics. But now you’re doing gaming. So talk a little bit about the why and how of your technology trek, personally. How have the things you did before informed what you’re doing now? Sort of broad thinking.

ZHANG: Thanks, Gretchen. You know, I’ve been very lucky to work across a number of roles that I’ve had deep passion for and, you know, very lucky to work with amazing colleagues, both inside of Microsoft and outside of Microsoft. So it’s definitely not something that happened by plan; it’s more that just following my own passions has led me down this path, for better or for worse. [LAUGHTER] Um, so I mean, my career started in software engineering. So I worked in, uh, Windows application development, looking at data warehousing, developing software for biomedical applications, and I think there was a point where, you know, I, I really loved software architecture and writing code, and I, I wanted to get to the why and the what of what we build that would then lead to how we build it. So I wanted to get to a discipline where I could contribute to why and what we build.

HUIZINGA: Yeah.

ZHANG: And that led me on a journey to really focus in on user experience and design, to say, hey, why do we build things? Why do we build a piece of software? Why do we build a tool? It’s really to aid people in whatever tasks that they’re doing. And so that user experience, that why, is, is going to be so important. Um, and so I went off and did a master’s in design in Italy and then pursued this intersection of user research, user experience, and technology.

HUIZINGA: A master’s degree in Italy … That just sounds idyllic. [LAUGHS] So as you see those things building and following your passions and now you’re here, do you find that it has informed the way you see things in your current role, and, and how might that play out? Maybe even one example of how you see something you did before just found its way into, hey, this is what’s happening now; that, that fits, that connects.

ZHANG: You know, I think in this role and also being a team leader and helping to empower a team of technologists and designers to do their best work in this innovation space, it’s kind of tough. You know, it … sometimes I find it’s really difficult to wear many hats at the same time. So when I’m looking at a problem – and although I have experience in writing software, doing user research, doing user experience design – it’s really hard to bring all of those things together into a singular conversation. So I’m either looking at a problem purely through the coding, purely through the technology development, or purely through the user research. So I haven’t, I haven’t actually figured out a way to integrate all of those things. When I was in the UK, when I was working more with a smaller team and, uh, really driving the innovation on my own, it was probably easier to, uh, to bring everything together, but then I’m only making singular things. So, for example, you know, in the UK, I developed a watch that helped a young woman called Emma to overcome some of the tremor symptoms she has from her Parkinson’s disease, and I developed some software that helped a young woman – her name is Aman – who had memory loss; the software helped her – she was in high school – listen to her classes, record notes, and be able to reflect back on her notes. So innovation really comes in many forms: at the individual level, at the group level, at the society level. And I find it just really valuable to gain experience across a spectrum of design and technology, and in order to really make change at scale, I think the work is really, hey, how do we empower a whole team? How do we empower a whole village to work on this together? And that is a very unique skill set that I’m still on a journey to really grasp and learn together with all my colleagues.

HUIZINGA: Yeah. Well, let’s talk about gaming more specifically now and what’s going on in your world, and I’ll, I’ll start by saying that much of our public experience with AI began with games, where we’d see headlines like “AI Beats the World Chess Champion” or the World Go Champion or even, um, video games like Ms. Pac-Man. So in a sense, we’ve been conditioned to see AI as something sort of adversarial, and you’ve even said that, that it’s been a conceptual envisioning of it. But do you see a change in focus on the horizon, where we might see AI as less adversary and more collaborator, and what factors might facilitate that shift?

ZHANG: I think we could have a whole conversation about all the different aspects of popular culture that have made artificial intelligence exciting and personified it. So I can think of movies like WarGames or Short Circuit, just like really fun explorations into what might happen if a computer program gained some expanded intelligence. So, yes, we are … I think we are primed, and I think this speaks to some core of the human psyche that we love toying with these ideas of these personalities that we don’t totally understand. I mean, I think the same applies for alien movies. You know, we have an alien and it is completely foreign! It has an intelligence that we don’t understand. And sometimes we do project that these foreign personalities might be dangerous in some way. I also think that machine learning/AI research comes in leaps and bounds by being able to define state-of-the-art benchmarks that everyone rallies around and is either trying to replicate or beat. So the establishment of human performance benchmarks – where we take a model and say this model can perform this much better than the human benchmark – um, is a way for us to measure the progress of these models. So I think the establishment of these benchmarks has been really important to progress AI research. Now we see these foundation models being able to be generalized across many different tasks and performing well at many different tasks, and we are entering this phase where we transition from making the technology to building the tools. So the technology was all about benchmarks: how do we just get the technology there to do the thing that we need it to do? To now, how do we mold the technology into tools that actually assist us in our daily lives? And I think this is where we see more HCI researchers, more designers, bringing their perspectives into the tool building, and we will see this transition from beating human benchmarks to assistive tools – copilots – that are going to aid us in work and in play.

HUIZINGA: Yeah. Well, you have a rich history with gaming, both personally and professionally. Let me just ask, when did you start being a gamer?

ZHANG: Oh my goodness. Yeah, I actually prefer the term “player” because I feel like “gamer” has a lot of baggage in popular culture …

HUIZINGA: I like that. It’s probably true. When did you start gaming?

ZHANG: I have very fond memories of playing Minesweeper on my dad’s university PC. And so it really started very early on for me. It’s funny. So I was an only child, so I spent a lot of time on my own. [LAUGHS]

HUIZINGA: I have one!

ZHANG: And one of the first hobbies I picked up was, uh, programming on our home PC, programming in BASIC.

HUIZINGA: Right …

ZHANG: And, you know, it’s very … it’s a simple language. I think I was about 11 or 12, and I think the first programs that I wrote were trying to replicate music or trying to be simple games, because that’s a way for, you know, me as a kid, to see some exciting things happening on the screen.

HUIZINGA: Right …

ZHANG: Um, so I’d say that that was when I was trying to make games with BASIC on a PC and also just playing these early games like Decathlon or King’s Quest. So I’ve always loved gaming. I probably have more affinity to, you know, games of my, my childhood. But yeah, so it started very early on, um, and then progressed – I’d say I probably played a little too much Minecraft in university, with many sleepless nights. That was probably not good. Um, and then I think it has transitioned into mobile games like Candy Crush, just like quick sessions on a commute or something. And that’s why I prefer the term “player,” because there are billions of people in the world who play games, and I don’t think they think of themselves as gamers.

HUIZINGA: Right.

ZHANG: But you know, if you’re playing Solitaire, Candy Crush, you are playing video games.

HUIZINGA: Well, that was a preface to the, to the second part of the question, which is something you said that really piqued my interest. That video games are an interactive art form expressed through human creativity. But let’s get back to this gaming AI thing and, and talk a little bit about how you see creativity being augmented with AI as a collaborative tool. What’s up in that world?

ZHANG: Right. Yeah, I, I really fundamentally believe that, you know, video games are an experience. They’re a way for players to enter a whole new world, to have new experiences, to meet people in all walks of life and cultures that they’ve not met before; either they are fictional characters or other players in different parts of the world. So being able to help game creators express their creativity through technology is fundamental to what we do at Xbox. With AI, there’s this just incredible potential for that assistance in creativity to be expanded and accelerated. For example, you know, in 1950, Claude Shannon, who was the inventor of information theory, wrote a paper about computer chess. And this is a seminal paper where he outlined what a modern computer chess program could be – before there was really wide proliferation of computers – and in it, he argued that there were innate advantages that humans had that a computer program could never achieve. Things like, humans have imagination; humans have flexibility and adaptability. Humans can learn on the fly. And I think in the last seven decades, we’ve now seen that AI is exhibiting potential for imagination and flexibility. And imagination is something that generative AI is starting to demonstrate to us, but purely in terms of being able to express something that we ask of it. So I think everybody has this experience of, hey, I’d really like to create this. I’d really like to express this. But I can’t draw. [LAUGHTER] I want to take that photo, but why do my photos look so bad? Your camera is so much better than mine! And giving voice to that creativity is, I think, the strength of generative AI. So developing these tools that aid that creativity will be key to bringing along the next generation of game creators.

HUIZINGA: So do you think we’re going to find ourselves in a place where we can accept a machine as a collaborator? I mean, I know that this is a theme, even in Microsoft Research, where we look at it as augmenting, not replacing, and collaborating, you know, not canceling. But there’s a shift for me in thinking of tools as collaborators. Do you feel like there’s a bridge that needs to be crossed for people to accept collaboration, or do they … or do you feel like it just is so much cooler what we can do with this tool that we’re all going to discover this is a great thing?

ZHANG: I think, in many ways, we already use tools as collaborators, uh, whether that’s a hammer, whether that’s Microsoft Word. I mean, I love the spell-check feature! Oh, geez! Um, so we already use tools to help us do our work – to make us work faster, type faster, um, check our grammar, check our spelling. The next step change is that, with these tools, the collaboration is going to be more complex, more sophisticated. I am somebody that welcomes that because I have so much work and need to get it done. At the same time, I really advocate for us, as a society, as our community, to have an open dialogue about it.

HUIZINGA: Yeah.

ZHANG: Because we should be talking about, what is it to be human, and how do we want to frame our work and our values moving forward given these new assistive tools? You know, when we talk about art, the assistance provided by generative AI will allow us to express, through words, what we want and then to have those words appear as an image on the page, and I think this will really challenge the art world to push artists beyond perhaps what we think of as art today to new heights.

HUIZINGA: Yeah. Yeah. There’s a whole host of questions and issues and, um, concerns even that we’re going to face. And I think it may be a, like you say, a step change in even how we conceptualize what art … I mean, even now, Instagram filters, I mean, or Photoshop or any of the things that you could say, well, that’s not really the photo you took. But, um, well, listen, we’ve lived for a long time in an age of what I would call specialization or expertise, and we, we’ve embraced that, you know, talk to the experts, appeal to the experts. But the current zeitgeist in research is multidisciplinary, bringing many voices into the conversation. And even in video games, it’s multiplayer, multimodal. So talk for a minute about the importance of the prefix “multi” now and how more voices are better for collaborative innovation.

ZHANG: So I want to start with the word “culture,” because I fundamentally believe that when we have a team building an experience for our users and players, that team should reflect the diversity of those users and players, whether that’s diversity in terms of different abilities, in terms of cultural background. Teams that build these new experiences need to be inclusive, need to have many, many voices. So I want to start the “multi” discussion there, and I encourage every team building products to think about that diversity and to move to recruit towards diversity. Then, let’s talk about the technology: multi-modality. So games are a very rich medium. They combine 3D, 2D, behaviors, interactions, music, sounds, dialogue. And it’s only when these different modalities come together and converge and, and work in perfect synchronicity that you get that amazing immersive experience. And we’ve seen foundation models do great things with text, do amazing things with 2D, some 3D now we’re seeing, and this is what I’m trying to push, that we need that multi-modality in generative AI, and multi-modality in terms of bringing these different mediums together. Multi-disciplinary, you know, what I find interesting is that, um, foundation models, LLMs like GPT, are at the height of democratized technology. Writing natural language as a prompt to generate something is probably the simplest form of programming.

HUIZINGA: Oh interesting.

ZHANG: It does not get simpler than I literally write what I want, and the thing appears.

HUIZINGA: Wow.

ZHANG: So you can imagine that everybody in the world is going to be empowered with AI. And going from an idea, to defining that idea through natural language, to that idea becoming reality, whether that’s making an app for me or making an image for me … so when this is happening, “multidisciplinary” is going to be the natural outcome of this, that a designer’s going to be able to make products. A coder is going to be able to make user experience. A user researcher is going to go from interviewing people to showing people prototypes with very little gap in order to further explore their topic. So I think we will get multidisciplinary because the technology will be democratized.
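
As a rough illustration of the “prompt as program” idea Zhang describes, here is a minimal Python sketch in which a plain-language request is sent to a generative model sitting behind an HTTP endpoint. The URL, request fields, and model name are hypothetical placeholders for illustration, not a real product API.

import requests

# Hypothetical endpoint and schema -- stand-ins, not a real product API.
ENDPOINT = "https://example.com/v1/generate"

def generate(prompt: str) -> str:
    """Send a natural-language 'program' and return the model's output."""
    response = requests.post(
        ENDPOINT,
        json={"model": "some-foundation-model", "prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # assumed response field

# The "program" is just a sentence describing what we want.
print(generate("Write a two-line quest description for a friendly blacksmith NPC."))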

HUIZINGA: Right. You know, as you talk, I’m, I’m thinking “prompts as programming” is a fascinating … I never thought of it that way, but that’s exactly it. And you think about the layers of, um, barriers to entry in technology, that this has democratized those entry points for people who say, I can never learn to code. Um, but if you can talk or type, you can! [LAUGHS] So that’s really cool. So we don’t have time to cover all the amazing scientists and collaborators working on specific projects in AI and gaming, but it would be cool if you’d give us a little survey course on some of the interesting, uh, research that’s being done in this area. Can you just name one or two or maybe three things that you think are really cool and interesting that are, um, happening in this world?

ZHANG: Well, I definitely want to give kudos to all of my amazing research collaborators working with large language models, with other machine learning approaches like reinforcement learning, imitation learning. You know, we know that in a video game, the dialogue and the story is key. And, you know, I work with an amazing research team with, uh, Bill Dolan, um, Sudha Rao, who are leading the way in natural language processing and looking at grounded conversational players in games. So how do we bring a large language model like GPT into a game? How do we make it fun? How do we actually tell a story? You know, as I said, technology is becoming democratized, so it’s going to be easy to put GPT into games, but ultimately, how do we make that into an experience that’s really valuable for our players? On the reinforcement learning/imitation learning front, we’ve talked before about Project Paidia. How do we develop new kinds of AI agents that play games, that can help test games, that can really make games more fun by playing those games like real human players? That is our ultimate goal.

HUIZINGA: You know, again, every time you talk, something comes into my head, and I’m thinking GPTs for NPCs. [LAUGHS]

ZHANG: I like that.

HUIZINGA: Um, and I have to say I’m, I’m not a game player. I’m not the target market. But I did watch that movie Free Guy and got exposure to the non … what is it called?

ZHANG: Non-player characters.

HUIZINGA: That’s the one. Ryan Reynolds plays that, and that was a really fun experience. Well, I want to come back to the multi-modality dimension of the technical aspects of gaming and AI’s role in helping make animated characters more life-like. And that’s key for a lot of gaming companies: how real does it feel? So what different skills and expertise go into creating a world where players can maintain their sense of immersion and avoid the uncanny valley? Who are you looking to collaborate with to make that happen?

ZHANG: Right. So the uncanny valley is this space where, when you are creating a virtual character, you bring together animation – whether it’s facial animation or body movements – with sound, with their voice, with eye movement, with how they interact with the player, and the human brain has an ability to subconsciously recognize another human being. And when you play with that deep-seated instinct and you create a virtual character, but the virtual character kind of slightly doesn’t move in the right way, their eyes don’t blink in the right way, they don’t talk in the right way, it, it triggers, in the deep part of someone’s brain, a discomfort. And this is what we call the uncanny valley. And there are many games that we know are stylized worlds and are fictional, so we try to get the characters to a place where the player knows that it’s not a real person, but they’re happy to be immersed in this environment. And as animation rendering improves, we have the ability to get to a wholly life-like character. So how do we get there? You know, we’re working with animators. We want to bring in neuroscientists, game developers, user researchers. There’s a lot of great work happening across the games industry in machine learning research, looking at things like motion matching, helping to create different kinds of animations that are more human-like. And it is this bringing together of artistry and technology development that’s going to get us there.
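
Motion matching, which Zhang mentions here, is often described as a nearest-neighbour search over a database of captured animation frames: each frame of gameplay, the runtime builds a feature vector from the character’s current pose and desired trajectory, then jumps to the recorded frame whose features match best. The Python sketch below is only a conceptual illustration with made-up feature dimensions and random stand-in data, not any studio’s actual implementation.

import numpy as np

# Stand-in motion database: each row is a feature vector for one captured
# animation frame (e.g., foot positions, hip velocity, future trajectory points).
FEATURE_DIM = 24
motion_db = np.random.rand(50_000, FEATURE_DIM).astype(np.float32)

def match_frame(query_features: np.ndarray) -> int:
    """Return the index of the database frame closest to the query features."""
    distances = np.linalg.norm(motion_db - query_features, axis=1)
    return int(np.argmin(distances))

# Query built each frame from the character's current state and player input.
query = np.random.rand(FEATURE_DIM).astype(np.float32)
best = match_frame(query)
print(f"Play animation from database frame {best}")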

HUIZINGA: Yeah. Yeah, yeah. Well, speaking of the uncanny valley – and I’ve experienced that with several animated movies where they … it’s just like, eww, that’s weird – um, and we, we have a quest to avoid it. We, we want them to be more real. And I guess what we’re aiming for is to get to the point in gaming where the experience is so realistic that you can’t tell the difference between the character in the game and humans. So I have to ask, and I assume you’ve given some thought to it, what could possibly go wrong if indeed you get everything right?

ZHANG: So one thing is that video games are definitely a stylized visual art, so we also welcome game creators who want to create a completely cartoon universe, right?

HUIZINGA: Oh, interesting, yeah.

ZHANG: But for those creators that want to make that life-like visual experience, we want to have the technology ready. In terms of, hey, as you ask, what could possibly go wrong if indeed we got everything right, I think that throughout human history, we have a rich legacy of exploring new ideas and challenging topics through fiction. For example, the novels of Asimov, looking at robots and, and the laws of robotics, I believe that we can think of video games as another realm of fiction where we might be able to explore these ideas, where you can enter a world and say, hey, what if something went wrong when you have these life-like agents? It’s a safe place for players to be able to have those thoughts and experiences and to explore the different outcomes that might happen.

HUIZINGA: Yeah.

ZHANG: I also want to say that I think the bar for immersion might also move over time. So when you say, what could possibly go wrong if we got everything right? Well, it’s still a video game.

HUIZINGA: Yeah.

ZHANG: And it might look real, but it’s still in a game world. And I think once we experience that, the bar might shift. So for example, when the first black-and-white movies, uh, came out and people saw movies for the first time, I remember seeing a documentary where one of the first movies was somebody filming a train, a steam train, coming towards the camera, and the audience watching that jumped! They ran away because they thought there was a steam train coming at them. And I think since then, people have understood that that is not happening. But I think this bar of immersion that people have will move over time because ultimately it is not real. It is in a digital environment.

HUIZINGA: Right. Well, and that bar finds itself in many different milieus, as it were. Um, you know, radio came out with War of the Worlds, and everyone thought we were being invaded by aliens, but we weren’t. It was just fiction. Um, and we’re also dealing with both satire and misinformation in non-game places, so it’s up to humans, it sounds like, to sort of make the differentiation and adapt, which we’ve done for a long time.

ZHANG: I totally agree. And I think this podcast is a great starting point for us to have this conversation in society about topics like AGI, uh, topics like, hey, how is AI going to assist us and be copilots for us? We should be having more of that discussion.

HUIZINGA: And even specifically to gaming, I think one of the things that I hope people will come away with is that we’re thinking deeply about the experiences. We want them to be good experiences, but we also want people to not become, you know, fooled by it and so on. So these are big topics, and again, we won’t solve anything here, but I’m glad you’re thinking about it. So people have called gaming — and sometimes gamers — a lot of things, but rarely do you hear words like welcoming, safe, friendly, diverse, fun, inviting. And yet Xbox has worked really hard to kind of earn a reputation for inclusivity and accessibility and make gaming for everyone. Now I sound like I’m doing a commercial for Xbox, and I’m really not. I think this has been something that’s been foregrounded in the conversation in Xbox. So talk about some of the things you’ve done to, as you put it, work with the community rather than just for the community.

ZHANG: Right. I mean, we estimate there are 3 billion players in the world, so gaming has really become democratized. You can access it on your phone, on your computer, on your console, on your TV directly. And that means that we have to be thinking about, how do we make gaming for everybody? For every type of player? We believe that AI has this incredible ability to make gaming more fun for more people. You know, when we think about games, hey, how do we make gaming more adaptive, more personalized to every player? If I find this game is too hard for me, maybe the game can adapt to my needs. Or if I have different abilities or if I have disabilities that prohibit my access to the game, maybe the game can change to allow me to play with my friends. So this is an area that we are actively exploring. And, you know, even without AI, Xbox has this rich history of thinking about players with disabilities and how we can bring features to allow more people to play. So for example, recently Forza racing created a feature to allow people with sight impairment to be able to drive in the game by using sound. So they introduced new 3D sounds into the game, where someone who cannot see or can only partially see the screen can actually hear the hairpin turns in the road in order to drive their car and play alongside their family and friends.

HUIZINGA: Right.

ZHANG: We’ve also done things like speech-to-text in Xbox Party Chat. How do we allow somebody who has a disability to be able to communicate across countries, across cultures, across languages with other players? And we are taking someone’s spoken voice, turning it into text chat so that everybody within that party can understand them and can be able to communicate with them.

HUIZINGA: Right. That’s interesting. So across countries, you could have people that don’t speak the same language be able to play together …

ZHANG: Right. Exactly.

HUIZINGA: …and communicate.

ZHANG: Yeah.

HUIZINGA: Wow. Well, one of the themes on this show is the path that technologies travel from mind to market, or lab to life, as I like to say. And we talk about the spectrum of work from blue-sky ideas to blockbuster products used by millions. But Xbox is no twinkle in anyone’s eye. It’s been around for a long time. Um, even so, there are always new ideas that are making their way into established products and literally change the game, right? So without giving away any industry secrets, is there anything you can talk about that would give us a hint as to what might be coming soon to a console or a headset or a device near us?

ZHANG: Oh my goodness. You know I can’t give away any product secrets!

HUIZINGA: Dang it! I thought I would get you!

ZHANG: Um, I mean, I’m excited by our ability to bring more life-like visuals, more life-like behavior, allowing players to play games anywhere at a fidelity they’ve not seen before. I mean, these are all exciting futures for AI. At the same time, generative AI capabilities to really accelerate and empower game developers to make games at higher quality, at a faster rate, these are the things that the industry wants. How do we turn these models, these technologies, into real tools for game developers?

HUIZINGA: Yeah. I’m only thinking of players, and you’ve got this whole spectrum of potential participants.

ZHANG: I think the players benefit when the creators are empowered. It starts with the creators and helping them bring to life their games that the players ultimately experience.

HUIZINGA: Right. And even as you talk, I’m thinking there’s not a wide gap between creator and player. I mean, many of the creators are avid gamers themselves and …

ZHANG: And now when we see games like Minecraft or Roblox, where the tools of creativity are being brought to the players themselves and they can create their own experiences …

HUIZINGA: Yeah.

ZHANG: …I want to see more of those in the world, as well.

HUIZINGA: Exciting. Well, as we close, and I hate to close with you because you’re so much fun, um, I’d like to give you a chance to give an elevator pitch for your preferred future. I keep using that term, but I know we all think in terms of what, what we’d like to make in this world to make a mark for the next generation. You’ve already referred to that in this podcast. So we can close the circle, and I’ll ask you to do a bit of blue-sky envisioning again, back to your roots in your old life. What do video games look like in the future, and how would you like to have changed the gaming landscape with brilliant human minds and AI collaborators?

ZHANG: Oh my goodness. That’s such a tall order! Uh, I think ultimately my hope for the future is that we really explore our humanity and become even more human through the assistance of AI. And in gaming, how do we help people tell more human stories? How do we enable people to create stories, share those stories with others? Because ultimately, I think, since the dawn of human existence, we’ve been about storytelling. Sitting around a fire, drawing pictures on a cave wall, and now we are still there. How do we bring to life more stories? Because through these stories, we develop empathy for other people. We experience other lives, and that’s ultimately what’s going to make us better as a society, that empathy we develop, the mutual understanding and respect that we share. And I see AI as a tool to get us there.

HUIZINGA: From cave wall to console … Haiyan Zhang, it’s always a pleasure to talk to you. Thank you for joining us today.

ZHANG: Thank you so much, Gretchen. Thank you.


The post Collaborators: Gaming AI with Haiyan Zhang appeared first on Microsoft Research.

]]>
Collaborators: Holoportation™ communication technology with Spencer Fowers and Kwame Darko http://approjects.co.za/?big=en-us/research/podcast/collaborators-holoportation-communication-technology-with-spencer-fowers-and-kwame-darko/ Thu, 06 Jul 2023 13:00:00 +0000 http://approjects.co.za/?big=en-us/research/?p=952644 A global team of medical providers is leveraging Holoportation, a Microsoft 3D capture and communication technology, to widen access to specialized care. Computer engineer Spencer Fowers and plastic surgeon Kwame Darko discuss the collaboration.

The post Collaborators: Holoportation™ communication technology with Spencer Fowers and Kwame Darko appeared first on Microsoft Research.

]]>
black and white photos of Dr. Spencer Fowers, a member of the Special Projects Technical Staff at Microsoft Research and Dr. Kwame Darko, a plastic surgeon in the reconstructive plastic surgery and burns center in Ghana’s Korle Bu Teaching Hospital, next to the Microsoft Research Podcast

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a new Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with.

In this episode, host Dr. Gretchen Huizinga welcomes Dr. Spencer Fowers, a member of the Special Projects Technical Staff at Microsoft Research, and Dr. Kwame Darko, a plastic surgeon in the reconstructive plastic surgery and burns center in Ghana’s Korle Bu Teaching Hospital. The two are part of an intercontinental research project to study Holoportation, a Microsoft 3D capture and communication technology, in the medical setting. The goal of the study—led by Korle Bu, NHS Scotland West of Scotland Innovation Hub, and Canniesburn Plastic Surgery and Burns Unit—is to make specialized healthcare more widely available, especially to those in remote or underserved communities. Fowers and Darko break down how the technology behind Holoportation and the telecommunication device being built around it brings patients and doctors together when being in the same room isn’t an easy option and discuss the potential impact of the work.

Transcript

[TEASER]

[MUSIC PLAYS UNDER DIALOGUE]

SPENCER FOWERS: I work with a team that does moonshots for a living, so I’m always looking for, what can we shoot for? And our goal really is like, gosh, where can’t we apply this technology? I mean, just anywhere that it is at all difficult to get, you know, medical expertise, we can ease the burden of doctors by making it so they don’t have to travel to provide this specialized care and increase the access to healthcare to these people that normally wouldn’t be able to get access to it.

KWAME DARKO: So yeah, the scope is as far as the mind can imagine it.

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC ENDS]

On this episode, I’m talking to Dr. Spencer Fowers, a Principal Member of the Technical Staff at Microsoft Research, and Dr. Kwame Darko, a plastic surgeon at the National Reconstructive Plastic Surgery and Burns Centre at the Korle Bu Teaching Hospital in Accra, Ghana. Spencer and Kwame are working on 3D telemedicine, a project they hope will increase access to specialized healthcare in rural and underserved communities by using live 3D communication, or Holoportation. We’ll learn much more about that in this episode. But first, let’s meet our collaborators.

Spencer, I’ll start with you. Tell us about the technical staff in the Special Projects division of Microsoft Research. What kind of work do you do, what’s the research model, and what’s your particular role there?

SPENCER FOWERS: Hi, Gretchen. Thanks for having me on here. Yeah, um, so our group at Special Projects was kind of patterned after the Lockheed Martin Skunk Works methodology. You know, we are very much a sort of “try big moonshot projects” type group. Our goal is sort of to focus on any sort of pie-in-the-sky idea that has some sort of a business application. So you can imagine we’ve done things like build the world’s first underwater datacenter or do post-quantum cryptography, things like that. Anything that, uh, is a very ambitious project that we can try to iterate on and see what type of an application we can find for it in the real world. And I’m one of the, as you said, a principal member of the technical staff. That means I’m one of the primary researchers, so I wear a lot of different hats. My job is everything from, you know, managing the project, meeting with people like Kwame and the other surgeons that we’ve worked with, and then interfacing there and finding ways that we can take theoretical research and turn it into applied research, actually find a way that we can bring that theory into reality.

HUIZINGA: You know, that’s a really interesting characterization because normally you think of those things in two different buckets, right? The big moonshot research has got a horizon, a time horizon, that’s pretty far out, and the applied research is get it going so it’s productizable or monetizable fairly quickly, and you’re marrying those two kinds of research models?

FOWERS: Yeah. I mean, we fit kind of a really interesting niche here at Microsoft because we get to operate sort of like a startup, but we have the backing of a very large company, so we get to sort of, yeah, take on these moonshot projects that a, a smaller company might not be able to handle and really attack it with the full resources of, of a company like Microsoft.

HUIZINGA: So it’s a moonshot project, but hurry up, let’s get ’er done. [LAUGHS]

FOWERS: Right, yeah.

HUIZINGA: Well, listen, Kwame, you’re a plastic surgeon at Korle Bu in Ghana. In many circles, that term is associated with nonessential cosmetic surgery, but that’s not what we’re talking about here. What constitutes the bulk of your work, and what prompted you to pursue it in the first place?

KWAME DARKO: All right, thanks, Gretchen, and also thank you for having me on the show. So, um, just as you said, my name is Kwame Darko, and I am a plastic surgeon. I think a lot of my passion to become a plastic surgeon came from the fact that at the time, we didn’t have too many plastic surgeons in Ghana. I mean, at the time that I qualified as a plastic surgeon, I was the eighth person in the country. And at the time, there was a population of 20-something million. Currently, we’re around 33 million, 34 million, and we have … we’re still not up to 30 plastic surgeons. So there was quite a bit of work to be done, and my work spans all the way from what everybody tends to [associate] plastic surgery with, the cosmetic stuff, across to burns, across to trauma from people with serious accidents that need some parts of their body reconstructed, to tumors of all sorts. Um, one of my fortes is breast cancer and breast reconstruction, but not limited to that. We also [do] tumors of the leg. And we also help other surgeons to cover up spaces or defects that may have been created when they’ve taken off some sort of cancer or tumor or whatever it may be. So it’s a wide scope, as well as burn surgery and burn care, as well. So that’s the scope of the kind of work that I do.

HUIZINGA: You know, this wasn’t on my list to ask you, but I’m curious, um, both of you … Spencer, where did you get your training? What … what’s your background?

FOWERS: I actually got my PhD at university in computer engineering, uh, focused on computer vision. So a lot of my academic research was in, uh, you know, embedded systems and low-power systems and how we can get vision-based stuff to work without using a lot of processing. And it actually fits really well for this application here where we’re trying to find low-cost ways that we can bring really high-end vision stuff, you know, and put it inside a hospital.

HUIZINGA: Yeah. So, Kwame, what about you? Where did you get your training, and did you start out in plastic surgery thinking, “Hey, that’s what I want to be”? Or did you start elsewhere and say, “This is cool”?

DARKO: So my, my background is that I did my medical school training here in Ghana at the medical school in Korle Bu and then started my postgraduate training in surgery. Um, over here, you need to do a number of years in just surgery before you can branch out and do a specific type of surgery. So, after my three, four years in that, I decided to do plastic surgery once again here in Ghana. You spend another up to three years—minimum of three years—training in that, which I did. And then you become a plastic surgeon. But then I went on for a bit of extra training and more exposure from different places around the world. I spent some time in Cape Town in South Africa working in a hospital called Groote Schuur. Um, I’ve also had the opportunity to work in, in Glasgow, where this idea originated from, and various courses in different parts of the world from India, the US, and stuff like that.

HUIZINGA: Wow. You know, I could spend a whole podcast asking you what you’ve seen in your lifetime in terms of trauma and burns and all of that. But I won’t, uh, because let’s talk about how this particular project came about, and I’d like both of your perspectives on it. This is a sort of “how I met your mother” story. As I understand it, there were a lot of people and more than two countries involved. Spencer, how do you remember the meet-up?

FOWERS: Yeah, I mean, Holoportation has been around since 2015, but it was around 2018 that, uh, Steven Lo—he’s a plastic surgeon in Glasgow—he approached us with this idea, saying, “Hey, we want to, we want to use this Holoportation technology in a hospital setting.” At that point, he was already working with Kwame and, and other doctors. They have a partnership between the Canniesburn Plastic Surgery unit in Glasgow and the Korle Bu Teaching Hospital in Accra. And he, he approached us with this idea of saying we want to build a clinic remotely so that people can come and see this. There is, like Kwame mentioned, right, a very drastic lack of surgeons in Ghana for the size of the population, and so he wanted to find a way that he could provide reconstructive plastic surgery consultation to patients even though they’re very far away. Currently, the, you know, the Canniesburn unit, they do these trips every year, every couple of years, where they fly down to Ghana, perform surgeries. And the way it works is basically the surgeons get on an airplane, they fly down to Ghana, and then, you know, the next day they’re in the hospital, all day long, meeting these people that they’re going to operate on the next day, right? And trying to decide the day before the surgery what they’re going to operate on and what they’re going to do and get the consent from these patients. Is there a better way? Could we actually talk to these patients ahead of time? And 2D video calls just didn’t cut it. It wasn’t good enough, and Kwame can talk more about that. But his idea was, can we use something like this to make a 3D model of a patient, have a live conversation with them in 3D so that the surgeon can evaluate them before they go to Ghana and get an idea of what they’re going to do and be able to explain to the patient what they want to do before the surgery has to happen.

HUIZINGA: Yeah. So Microsoft Research, how did that group get involved?

FOWERS: Well, so we started with this technology back in, you know, 2015. And when he approached us with this idea, we were looking for ways that we could apply Holoportation to different areas, different markets. This came up as like one of those perfect fits for the technology, where we wanted to be able to use the system to image someone, it needed to be a live conversation, not a recording, and so that was, right there … was where we started working with them and designing the cameras that would go into the system they’re using today.

HUIZINGA: Right, right, right. Well, Kwame, in light of, uh, the increase, as we’ve just referred to, in 2D telemedicine, especially during COVID and, and post-COVID, people have gotten pretty used to talking to doctors over a screen as opposed to going in person. But there are drawbacks and shortcomings of 2D in your world. So how does 3D fill in those gaps, and, and what was attractive to you in this particular technology for the application you need?

DARKO: OK. So great, um, just as you’re saying, COVID really did spark the, uh, the spread of 2D telemedicine all over the world. But for myself, as a surgeon and particularly so as a plastic surgeon, we’re trying to think about, how is 2D video going to help me solve my problem or plan towards solving my problem for a patient? And you realize there is a significant shortfall when we’re not just dealing with the human being as a 2D object, uh, but 3D perspective is so important. So one of the most common things we’ve used this system to help us with is when we’re assessing a patient to decide which part of the body we’re going to move and use it to fit in the space that’s going to be created by taking out some form of tumor. And not only taking it out in 3D for us to know that it’s going to fit and be big enough but also demonstrating to the patient so they have a deeper understanding of exactly what is going to go and be used to reconstruct whichever part of their body and what defect is going to be left behind. So as against when you’re just having a straightforward consultation back and forth, answer and response, question and response, in this situation, we get the opportunity and have the ability to actually turn the patient around and then measure out specific problem … parts of the body that we’re going to take off and then transpose that on a different part of the body to make sure that it’s also going to be big enough to switch around and transpose. And when I’m saying transpose, I’m talking about maybe taking something off from the front part of your thigh and then filling that in with maybe massive parts of your back muscle.

FOWERS: To add on to what Kwame said, you know, for us, for Microsoft Research, when Steven approached us with this, I don’t think we really understood the impact that it could have. You know, we even asked him, why don’t you just use like a cell phone, or why don’t you just use a 2D telemedicine call? Like, why do you need all this technology to do this? And he explained it to us, and we said, OK, like we’re going to take your word for it. It wasn’t until I went over there the first time that it really clicked for me and we had set up the system and he brought in a patient that had had reconstructive plastic surgery. She had had a cancerous tumor that required the amputation of her entire shoulder. So she lost her arm and, you know, this is not something that we think of on a day-to-day basis, but you actually, you can’t wear a shirt if you don’t have a shoulder. And so he was actually taking her elbow and replacing the joint that he was removing with her elbow joint. So he did this entire transpose operation. The stuff that they can do is amazing. But …

HUIZINGA: Right.

FOWERS: … he had done this operation on her probably a year before. And so he was bringing her back in for just the postoperative consult to see how she was doing. He had her in the system, and while she’s sitting in the system, he’s able to rotate the 3D model of her around so that she can see her own back. And he drew on her: “OK, this is where your elbow is now, and this is where we took the material from and what we did.” And during the teleconference, she says, “Oh, that’s what you did. I never knew what you did!” Like … she had had this operation a year ago, never knew what happened to herself because she couldn’t see her own back that way and couldn’t understand it. And it finally clicked to us like, oh my gosh, like, this is why this is important. Like not just because it aids the doctors in planning for surgeries, but the tremendous impact that it has on patient satisfaction with their operation and patient understanding of what’s going to happen.

HUIZINGA: Wow. That’s amazing. Even as you describe that, it’s … ahh … we could go so deep into the strangeness of what they can do with plastic surgery. But let’s talk about technology for a minute. Um, this is a highly visual technology, and we’re just doing a podcast, and we will provide some links in the show notes for people to see this in action, I hope. But in the meantime, Spencer, can you give us a kind of word picture of 3D telemedicine and the technology behind Holoportation? How does it work?

FOWERS: Yeah, the idea behind this technology is, if we can take pictures of a person from multiple angles and we know where those cameras are very, very accurately, we can stitch all those images together to make like a 3D picture of a person. So for the 3D telemedicine system, we’re actually using the Azure Kinect. So it’s like Version 3 of the Kinect sensor that was introduced back in the Xbox days. And what that gives us is not just a color picture like you’re seeing on your normal 2D phone call, but it’s also giving us a depth picture so it can tell how far away you are from the camera. And we take that depth and that color information from 10 different cameras spaced around the room and stitch them all together in real time. So while we’re talking at, you know, normal conversation speed, it’s creating this 3D image of a person that the doctor, in this case, can actually rotate, pan, zoom in, and zoom out and be able to see them from any angle that they want without requiring that patient to get up and move around.
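
As a rough sketch of the geometry Fowers describes, the Python snippet below back-projects each camera’s depth image into 3D points using assumed pinhole intrinsics, moves those points into a shared world frame with that camera’s calibrated pose, and merges all ten views into one point cloud. The intrinsics, poses, and depth frames here are placeholder assumptions, and the real Holoportation pipeline adds calibration, color texturing, meshing, and real-time streaming on top.

import numpy as np

# Assumed pinhole intrinsics for one depth camera (placeholder values).
fx, fy, cx, cy = 250.0, 250.0, 80.0, 60.0

def backproject(depth, cam_to_world):
    """Turn a depth image (meters) into 3D points in a shared world frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    pts = pts[depth.reshape(-1) > 0]           # drop pixels with no depth reading
    return (cam_to_world @ pts.T).T[:, :3]     # express every point in world coordinates

# One calibrated 4x4 pose per camera; identity poses are stand-ins here.
camera_poses = [np.eye(4) for _ in range(10)]
# Stand-in low-resolution depth frames; real sensors are much higher resolution.
depth_frames = [np.random.uniform(0.5, 3.0, (120, 160)) for _ in range(10)]

# Merge all ten views into a single point cloud ready for meshing and rendering.
cloud = np.vstack([backproject(d, T) for d, T in zip(depth_frames, camera_poses)])
print(cloud.shape)  # (num_points, 3)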

HUIZINGA: Wow. And that speaks to what you just said. The patient can see it as well as the clinician.

FOWERS: Yeah, I mean, you also have this problem with a lot of these patients if they’d had, you know, a leg amputation or something. When we typically talk like we’re talking now – you know, the viewer, the listeners can’t see it, but on a typical 2D telemedicine call – you’re looking at me from like my shoulders up. Well, if that person has an amputation of their knee, how do you get it so that you can talk to them in a normal conversation and then look at their knee? You, you just can’t do that on a 2D call. But this system allows them to talk to them and turn and look at their knee and show them—if it’s on their back, wherever it is—what they’re going to do and explain it to them.

HUIZINGA: That’s amazing. Kwame, this project doesn’t just address geographical challenges for remote patients. It also addresses geographical challenges for remote experts. So tell us about the nature and makeup of what you call MDTs­—or multidisciplinary teams—that you collaborate with and how 3D telemedicine impacts the care you’re able to provide because of that.

DARKO: All right. So with an MDT, or multidisciplinary team, just as you said, the focus in medicine these days is to take out individual bias, and an individual knowledge base, in how we’re going to treat a particular patient. So now what we tend to do is we try and get a group of doctors who would be treating a particular ailment—more often than not, it’s a cancer case—and everybody brings their view on what is best to holistically find a solution to the patient’s … the most ideal remedy for the patient. Now let’s take skin cancer, for example. You’re going to need a plastic surgeon if you’re going to cut it out. You’re going to need a dermatologist who is going to be able to manage it. If it’s that severe, you’re also going to need an oncologist. You may even need a radiologist and, of course, a psychologist and your nursing team, as well. So with an MDT, you’d ideally have members from each of these specialties in a room at a time discussing individual patients and deciding what’s best to do for them. What happens when I don’t have a particular specialty? And what happens when, even though I am the representative of my specialty on this group, I may not have as in-depth knowledge as is needed for this particular patient? What do we do? Do we have access to other brains around the world? Well, with this system, yes, we do. And just as we said earlier, unlike a regular, let’s say, Teams meeting or whatever form of, uh, telemedicine meeting, in this one, where we have the 3D edge, we can actually have the patient around in the rig. And as we’re discussing and talking about—and people are giving their ideas—we can swing the patient around and say, well, on this aspect, it would work because this is far away from the ear or closer to the ear, or no, the ear is going to have to go with this; it’s too close. So what do we do? Can we get somebody else to do an ear reconstruction in addition? If it’s, um, something on the back, if we’re taking it all out, is this going to involve the muscle, as well? If so, how are we going to replace the muscle? It’s beyond my scope. But oh! What do you know? We have an expert who’s done this kind of thing from, let’s say, Korea or Singapore. And then they would log on and be able to see everything and give their input, as well. So this is another application which just crosses boundary … um, borders and gives us so much more scope to the application of this, uh, this new device.

HUIZINGA: So, so when we’re talking about multidisciplinary teams and, and we look at it from an expert point of view of having all these different disciplines in the room from the medical side, uh, Spencer, this collaboration includes technologists, as well as medical professionals, but it also includes patients. You, you talk about what you call a participatory development validation. What is the role of patients in developing this technology?

FOWERS: Well, similar to like that story I was mentioning, right, as we started using this system, the initial goal was to give doctors this better ability to be able to see patients in preparation for surgery. What we found as we started to show this to patients was that it drastically increased their satisfaction with the visits with the doctors because they were able to better understand the operation that was going to be performed. It’s surprising how many times like Kwame and Steven will talk to me and they’ll tell us stories about how like they explain a procedure to a patient about what they’re going to do, and the patient says, “Yeah, OK.” And then they get done and the patient’s like, “Wait, what did you do? Like that doesn’t … I didn’t realize you were going to do that,” you know, because it’s hard for them to understand when you’re just talking to them or drawing on a piece of paper. But when you actually have a picture of yourself in front of you that’s live and the doctor’s indicating on you what’s going to happen and what the surgery is going to be, it drastically increases the patient satisfaction. And so that was actually the direction of the randomized controlled trial that we’re conducting in Scotland right now: what kind of improvement in patient satisfaction does this type of system provide?

HUIZINGA: Hmm. It’s kind of speaking UX to me, like a patient experience as opposed to a user experience. Um, has it—any of this—fed into a sort of feedback loop on technology development, or is it more just on the user side of how I feel about it?

FOWERS: Um, as far as like technology that we use for the system, when we started with Holoportation, we were actually using kind of research-grade cameras and building our own depth cameras and stuff like that, which made for a very expensive system that wasn’t easy to use. That’s why we transitioned over to the Azure Kinect because it’s actually like the highest-resolution depth camera you can get on the market today for this type of information. And so, it’s, it’s really pushed us to find, what can we use that’s more of a compact, you know, all-in-one system so that we can get the data that we need?

HUIZINGA: Right, right, right. Well, Kwame, at about this time, I always ask what could possibly go wrong? But when we talked before, you, you kind of came at this from a cup-half-full outlook because of the nature of what’s already wrong in digital healthcare in general, but particularly for rural and underserved communities, um, and you’ve kind of said what’s wrong is why we’re doing this. So what are some of the problems that are already in the mix, and how does 3D telemedicine mitigate any of them? Things like privacy and connectivity and bandwidth and cost and access and hacking, consent—all of those things that we’re sort of like concerned about writ large?

DARKO: All right. So when I was talking about the cup being half full in terms of these, all of these issues, it’s because these problems already exist. So this technology doesn’t present itself and create a new problem. It’s just going to piggyback off the solutions of what is already in existence. All right? So you, you mentioned most of them anyway. I mean, talking about patient privacy, which is No. 1. Um, all of these things are done on a hospital server. They are not done on a public or an ad hoc server of any sort. So whatever fail-safes there are within the hospital in itself, whichever hospital network we’re using, whether here in Ghana, whether in Glasgow, whether somewhere remotely in India or in the US, doesn’t matter where, it would be piggybacking off a hospital server. So those fail-safes are there already. So if anybody can get into the network and observe or steal data from our system, then it’s because the hospital system isn’t secure, not because our system, in a manner of speaking, is not secure. All right? And then when I was saying that it’s half full, it’s because whatever lapses we have already in 2D telemedicine, this supersedes it. And not only does it supersede the 2D lapses, it goes further and gives significant patient feedback, like we were saying earlier, what Spencer also alluded to, in that now you have the ability to show the patient exactly what’s going on. And so whereas previously, think about it, even in an in-person consultation I would draw on a piece of paper and explain to them, “Well, I’m going to do this, this, this, and that,” now I actually have the patient’s own body, which they’re watching at the same time, being spun around and indicating that this actually is the spot I was talking about and this is how big my cut is going to be, and this is what I’m going to move out from here and use to fill in this space. So once again, my inclination on this is that, on our side, we can only get good, as opposed to looking for problems. The problems, I, I admit, will exist, but not as a separate entity from regular 2D medicine that’s … or 2D videography that we’re already encountering.

HUIZINGA: So you’re not introducing new risks with this. You’re just sort of serving on the other risks.

DARKO: We’re adding to the positives, basically.

HUIZINGA: Right. Yeah, Spencer, in the “what could go wrong” bucket on the other side of it, I’m looking at healthcare and the cost of it, uh, especially when you’re dealing with multiple specialists and complicated surgeries and so on. And I know healthcare policy is not on your research roadmap necessarily, but you have to be thinking about that as you’re, um, as you’re going along, how this will ultimately be implemented across cultures and so on. So have you given any thought to how this might play out in different countries, or is this just sort of “we’re going to make the technology and let the policy people and the wonks work it out later?”

FOWERS: [LAUGHS] Yeah, it’s a good question, and I think it’s something that we’re really excited to see how it can benefit. Luckily enough, where we’re doing the tests right now, like in, uh, Glasgow and in Ghana, they already have partnerships and so there’s already standards in place for being able to share doctors and technology across that. But yeah, we’ve definitely looked into like, what kind of an impact does this have? And one of the benefits that we see is using something like 3D telemedicine even to provide greater access for specialty doctors in places like rural or remote United States, where they just don’t have access to those specialists that they need. I mean, you know, Washington state, where I am, has a great example where you’ve got people that live out in Eastern Washington, and if they have some need to go see like a pediatric specialist, they’re going to have to drive all the way into Seattle to go to Seattle Children’s to see that person. What if we can provide a clinic that allows them to, you know, virtually, through 3D telemedicine, interface with that doctor without having to make that drive and all that commute until they know what they need to do. And so we actually look at it as being beneficial because this provides greater access to these specialists, to other regions. So it’s actually improving, improving healthcare reach and accessibility for everyone.

HUIZINGA: Yeah. Kwame, can you speak to accessibility of these experts? I mean, you would want them all on your team for a 3D telemedicine call, but how hard is it to get them all on the same … I mean, it’s hard to get people to come to a meeting, let alone, you know, a big consultation. Does that enter the picture at all or, is that … ?

DARKO: It does. It does. And I think, um, COVID is something, is something else that’s really changed how we do everyday, routine stuff. So for example, we here in Ghana have a weekly departmental meeting and, um—within the plastic surgery department and also within the greater department of surgery, weekly meeting—everything became remote. So all of a sudden, people who may not be able to make the meeting for whatever reason are now logging on. So it’s actually made accessibility to them much, much easier and swifter. I mean, where they are, what they’re doing at the time, we have no idea, but it just means that now we have access to them. So extrapolating this on to us getting in touch with specialists, um, if we schedule our timing right, it actually makes it easier for the specialists to log on. Now earlier we spoke about international MDTs, not just local, but then, have we thought about what would have happened if we did not have this ability to have this online international MDT? We’re talking about somebody getting a plane ticket, sitting on a plane, waiting in airports, airport delays, etcetera, etcetera, and flying over just to see the patient for 30 minutes and make a decision that, “Well, I can or cannot do this operation.” So now this jumps over all of this and makes it much, much easier for us. And now when we move on to the next stage of consultation, after the procedure has been done, when I’m talking about the surgery, now the patient doesn’t need to travel great distances for individual specialist review. Now in the case of plastic surgery, this may cover not only the surgeon but also the physiotherapist. And so, it’s not just before the consultation but also after the consultation.

HUIZINGA: Wow. Spencer, what you’re doing with 3D telemedicine through Holoportation is a prime example of how a technology developed for one thing turned out to have incredible uses for another. So give us just a brief history of the application for 3D communication and how it evolved from where it started to where it is now.

FOWERS: Yeah, I mean, 3D communication, at least from what we’re doing, really started with things like the original Xbox Kinect, right? With a gaming console and a way to kind of interact in a different way with your gaming system. What happened was, Microsoft released that initial Kinect and suddenly found that people weren’t buying the Kinect to play games with it. They were buying it to put on robots and buying it to use, you know, for different kinds of robotics applications and research applications. And that’s why, when the second Kinect was released, they had an Xbox version and they actually had a Kinect for Windows version, because they were expecting people to buy this sensor to plug it into their computers. And if you look at the form factor now with the Azure Kinect that we have, it’s a much more compact unit. It’s meant specifically for using on computers, and it’s built for robotics and computer vision applications, and so it’s been really neat to see how this thing that was developed as kind of a toy has become something that we now use in industrial applications.

HUIZINGA: Right. Yeah. And this … sort of the thing, the serendipitous nature of research, especially with, you know, big moonshot projects, is like this is going to be great for gaming and it actually turns out to be great for plastic surgery! Who’d have thunk? Um, Kwame, speaking to where this lives in terms of use, where is it now on the spectrum from lab to life, as I like to say?

DARKO: So currently, um, we have a rig in our unit, the plastic surgery unit in the Korle Bu Teaching Hospital. There’s a rig in Glasgow, and there’s a rig over in the Microsoft office. So currently what we’ve been able to do is to run a few tests between Ghana, Seattle, and Glasgow. So basically, we’ve been able to run MDTs, and we’ve been able to run patient assessments: pre-op assessments as well as post-operative assessments. So that’s where we are at the moment. It takes quite a bit of logistics to initiate, but we believe once we’re on a steady roll, we’ll be able to increase the numbers we’ve been able to do this on. I think currently those we’ve operated on and given a pre-op assessment and post-op assessment have been about six or seven patients. And it was great, basically. We’ve done MDTs with them, as well. So the full spectrum of use has been done: pre-op, MDT, and post-op assessments. So yeah, um, we have quite a bit more to do with numbers and to take out a few glitches, especially with remote access and stuff like that. But yeah, I think we’re, we’re, we’re making good progress.

HUIZINGA: Yeah. Spencer, do you see, or do you know of, hurdles that you’re going to have to jump through to get this into wider application?

FOWERS: For us, from a research setting, one of the things that we’ve been very clear about as we do this is that, while it’s being used in a medical setting, 3D telemedicine is actually just a communication technology, right? It’s a Teams call; it’s a communication device. We’re not actually performing surgery with the system, you know, or it’s not diagnosing or anything. So it’s not actually a medical device as much as it’s a telecommunication device that’s being used in a medical application.

HUIZINGA: Well, as we wrap up, I like to give each of you a chance to paint a picture of your preferred future. If your work is wildly successful, what does healthcare look like in five to 10 years? And maybe that isn’t the time horizon. It could be two to three; it could be 20 years. I don’t know. But how have you made a difference in specialized medicine with this communication tool?

FOWERS: Like going off of what Kwame was saying, right, back in November, when we flew down and were present for that first international MDT, it was really an eye-opening experience. I mean, these doctors, normally, they just get on an airplane, they fly down, and they meet these patients for the first time, probably the day before they have surgery. And this time, they were able to meet them and then spend time preparing for the surgery before they flew down. And then they did the surgeries. They flew back. And normally, once they flew back, they wouldn’t see that patient again. With 3D telemedicine, they jumped back on a phone call and there was the person in 3D, and they were able to talk to them, you know, turn them around, show them where the procedure was, ask them questions, and have this interaction that made it so much better of an experience for them and for the doctors involved. So when I look at kind of the future of where this goes, you know, our vision is, where else do we need this? Right now, it’s been showcased as this amazing way to bring international expertise to one operating theater, you know, with specialists from around the world, as needed. And I think that’s great. And I think we can apply that in so many different locations, right? Rural United States is a great example for us. We hope to expand out what we’re doing in Scotland to rural areas of Scotland where, you know, it’s very hard for people in the Scottish isles to be able to get to their hospitals. You know, other possible applications … like can we make this system mobile? You can imagine like a clinical unit where this system drives out to remote villages and is able to allow people that can’t make it in to a hospital to get that initial consultation, to know whether they should make the trip or whether they need other work done before they can start surgery. So kind of the sky’s the limit, right? I mean, it’s always good to look at like, what’s … I work with a team that does moonshots for a living, so I’m always looking for what can we shoot for? And our goal really is like, gosh, where can’t we apply this technology? I mean, anywhere that it is at all difficult to get, you know, medical expertise, we can ease the burden on doctors by making it so they don’t have to travel to provide this specialized care and increase access to healthcare for these people that normally wouldn’t be able to get it.

HUIZINGA: Kwame, what’s your, what’s your take?

DARKO: So to me, I just want to describe what the current situation is and what I believe the future situation will be. So, the current situation—and like, like, um, Spencer was saying, this just doesn’t apply to Ghana alone; it can apply in some parts of the US and some parts of the UK, as well—where a patient has a problem, is seen by a GP in the local area, has to travel close to 24 hours, sometimes sleep over somewhere, just to get access to a specialist to see what’s going on. The specialist now diagnoses, sees what’s happening, and then runs a barrage of tests and makes a decision, “Well, you’re going to have an operation, and the operation is going to be in, let’s say, four weeks, six weeks.” So what happens? The patient goes, spends another 24 hours-plus going all the way back home, waiting for the operation day or operation period, and then traveling all the way back. You can imagine the time and expense. And if this person can’t travel alone, that means somebody else needs to take a day off work to bring the person back and forth. So now what would happen in the future if everything goes the way we’re planning? We’d have a rig in every, let’s say, district or region. The person just needs to travel, assumedly, an hour or two to the rig. Gets the appointment. Everything is seen in 3D. All the local blood tests and stuff that can be done would be done locally, results sent across. Book a theater date. So the only time that the person really needs to travel is when they’re coming for the actual operation. And once again, if an MDT has to be run on this, on this patient, it will be done. And, um, they would be sitting in their rig remotely in the town or wherever it is. Those of us in the teaching hospitals across the country would also be in our places, and we’d run the MDT to be sure. Postoperatively, if it’s a review of the patient, we’d be able to do that, even if it’s an MDT review, as well, we could do that. And the extra application, which I didn’t highlight too much and I mentioned it, but I didn’t highlight it, is if this person needs to have physiotherapy and we need to make sure that they’re succeeding and doing it properly, we can actually do it through a 3D call and actually see the person walking in motion or wrist movement or hand extension or neck movements, whatever it is. We can do all this in 3D. So yeah, the, the scope is, is as far as the mind can imagine it!

HUIZINGA: You know, I’m even imagining it, and I hate to bring up The Jetsons as, you know, my, my anchor analogy, but, you know, at some point, way back, nobody thought they’d have the technology we have all in our rooms and on our bodies now. Maybe this is just like the beginning of everybody having 3D communication everywhere, and no one has to go to the doctor before they get the operation. [LAUGHS] I don’t know. Spencer Fowers, Kwame Darko. This is indeed a mind-blowing idea that has the potential to be a world-changing technology. Thanks for joining me today to talk about it.

DARKO: Thanks for having us, Gretchen.

FOWERS: Thanks.

The post Collaborators: Holoportation™ communication technology with Spencer Fowers and Kwame Darko appeared first on Microsoft Research.

Collaborators: Renewable energy storage with Bichlien Nguyen and David Kwabi http://approjects.co.za/?big=en-us/research/podcast/collaborators-renewable-energy-storage-with-bichlien-nguyen-and-david-kwabi/ Thu, 22 Jun 2023 13:00:00 +0000 http://approjects.co.za/?big=en-us/research/?p=947874 Researcher Bichlien Nguyen is an organic electrochemist turned technologist. Professor David Kwabi is a mechanical engineer. Their work uses ML to help discover organic compounds for renewable energy storage. Learn about their collaboration.

The post Collaborators: Renewable energy storage with Bichlien Nguyen and David Kwabi appeared first on Microsoft Research.

black and white photos of Microsoft Principal Researcher Dr. Bichlien Nguyen and Dr. David Kwabi, Assistant Professor of Mechanical Engineering at the University of Michigan, next to the Microsoft Research Podcast

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a new Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with.

In this episode, Microsoft Principal Researcher Dr. Bichlien Nguyen and Dr. David Kwabi, Assistant Professor of Mechanical Engineering at the University of Michigan, join host Dr. Gretchen Huizinga to talk about how their respective research interests—and those of their larger teams—are converging to develop renewable energy storage systems. They specifically explore their work in flow batteries and how machine learning can help more effectively search the vast organic chemistry space to identify compounds with properties just right for storing waterpower and other renewables for a not rainy day. The bonus? These new compounds may just help advance carbon capture, too.

Transcript

[MUSIC PLAYS UNDER DIALOGUE]

DAVID KWABI: I’m a mechanical engineer who sort of likes to hang out with chemists.

BICHLIEN NGUYEN: I’m an organic chemist by training, and I dabble in machine learning. Bryan’s a computational chemist who dabbles in flow cell work. Anne is, uh, a purely synthetic chemist who dabbles in almost all of our aspects.

KWABI: There’s really interesting synergies that show up just because there’s people, you know, coming from very different backgrounds.

NGUYEN: Because we have overlap, we, we have lower, I’m going to call it an activation barrier, in terms of the language we speak.

[MUSIC]

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC ENDS]

Today I’m talking to Dr. Bichlien Nguyen, a Principal Researcher at Microsoft Research, and Dr. David Kwabi, an Assistant Professor of Mechanical Engineering at the University of Michigan. Bichlien and David are collaborating on a fascinating project under the umbrella of the Microsoft Climate Research Initiative that brings organic chemistry and machine learning together to discover new forms of renewable energy storage. Before we unpack the “computational design and characterization of organic electrolytes for flow batteries and carbon capture,” let’s meet our collaborators.

Bichlien, I’ll start with you. Give us a bit more detail on what you do at Microsoft Research and the broader scope and mission of the Microsoft Climate Research Initiative.

BICHLIEN NGUYEN: Thanks so much, Gretchen, for the introduction. Um, so I guess I’ll start with my background. I have a background in organic electrochemistry, so it’s quite fitting, and as a researcher at Microsoft, really, it’s my job, uh, to come up with the newest technologies and keep abreast of what is happening around me so that I can actually, uh, fuse those different technology streams together and create something that’s, um, really valuable. And so as part of that, uh, the Microsoft Climate Research Initiative was really where a group of researchers came together and said, “How can we use the resources, the computational resources, and expertise at Microsoft to enable new technologies that will allow us to get to carbon negative by the year 2050? How can we do that?” And that, you know, as part of that, um, I just want to throw out that the Microsoft Climate Research Initiative really is focusing on three pillars, right. The three pillars are carbon accounting, because if you don’t know how much carbon is in the atmosphere, you can’t really, uh, do much to remedy it, right, if you don’t know what’s there. The other one is climate resilience. So how do people get affected by climate change, and how do we overcome that, and how can we help that with technology? And then the third is materials engineering, which is where I sit in the Microsoft Climate Research Initiative, and that’s more of, how do we either develop technologies that, um, are used to capture and store carbon, uh, or are used to enable the green energy transition?

HUIZINGA: So do you find yourself spread across those three? You say the last one is really where your focus is, but do you dip your toe in the other areas, as well?

NGUYEN: I love dipping my toe in all the areas because I think they’re all important, right? They’re all important. We have to really understand what the environmental impacts of all the materials, for example, that we’re making are. I mean, it’s so … so carbon accounting is really important. Environmental accounting is very important. Um, and then people are the ones that form the core, right? Why are we, do … why do we do what we do? It’s because we want to make sure that we can enable people and solve their problems.

HUIZINGA: Yeah, when you talk about carbon accounting and why you’re doing it, it makes me think about when you have to go on a diet and the doctor says, “You have to get really honest about what you’re eating. Don’t, don’t fake it.” [LAUGHS] So, David, you’re a professor at the University of Michigan, and you run the eponymous Kwabi Lab there. Tell us about your work in general. What are your research interests? Who do you work with, and what excites you most about what you do?

DAVID KWABI: Yeah, happy to! Thank you for the introduction and, and for having me on, on here today. So, um, uh, so as you said, I run the Kwabi Lab here at the University of Michigan, and the sort of headline in terms of what we’re interested in doing is that we like to design and study batteries that can store lots of renewable electricity, uh, on the grid, so, so that’s kind of our mission. Um, that’s not quite all of what we do, but it’s, uh, it’s how I like to describe it. Um, and the motivation, of course, is … comes back to what Bichlien just mentioned, is this need for us to transition from carbon-intensive, uh, ways of producing energy to renewables. And the thing about renewables is that they’re intermittent, so solar and wind aren’t there all the time. You need to find a way to store all that energy and store it cheaply for us to really make, make a dent, um, in carbon emissions from energy production. So we work on, on building systems or energy storage systems that can meet that goal, that can accomplish that task.

HUIZINGA: Both of you talked about having larger teams that support the work you’re doing or collaborate with you two as collaborators. Do you want to talk about the size and scope of those teams or, um, you know, this collaboration across collaboration?

KWABI: Yeah, so I could start with that. So my group, like you said, we’re in the mechanical engineering department, so we really are, um, we call ourselves electrochemical engineers, and electrochemistry is the science of batteries, but it’s a science of lots of other things besides that. But the interesting thing about energy storage systems or batteries in general is that you need to build and put these systems together, and they’re made of lots of different materials. And so what we like to do in my group is build and put together these systems and then essentially figure out how they perform, right. Uh, try to explore performance limits as a function of different chemistries and system configurations and so on. But the hope then is that this can inform materials chemists and computationalists in terms of what materials we want to make next, sort of, so, so there’s a lot of need for collaboration and interdisciplinary knowledge to, to make progress here.

HUIZINGA: Yeah. Bichlien, how about you in terms of the umbrella that you’re under at Microsoft Research?

NGUYEN: There are so many different disciplines, um, within Microsoft Research, but also with the team that we’re working, you know, with David. So we have actually two other collaborators from two different, I guess, departments. There’s the chemical engineering department, which Bryan Goldsmith is part of, and Anne McNeil, I believe, is part of the chemistry department. And you know, for this particular project, on flow battery electrolytes for energy storage, um, we do need a multidisciplinary team, right? We, we need to go from the atomistic, you know, simulation level all the way to the full system level. And I think that’s, that’s where, you know, that’s important.

HUIZINGA: Now that we’re on the topic of this collaboration, let’s talk about how it came about. I like to call this “how I met your mother.” Um, what was the initial felt need for the project, and who called who and said, “Let’s do some research on renewable climate-friendly energy solutions?” Bichlien, why don’t you go ahead and take the lead on this?

NGUYEN: Yeah. Um, so I’m pretty sure what happened—and David, correct me if I’m wrong—

[LAUGHTER]

HUIZINGA: Pretty sure … !

NGUYEN: I’m pretty sure, but not 100 percent sure—is that, um, while we were formulating how to … uh, what, what topics we wanted to target for the Microsoft Climate Research Initiative, we began talking to many different universities as a way to learn from them, to see what areas of interest and what areas they think are, uh, really important for the future. And one of those universities was the University of Michigan, and I believe David was one of few PIs on that initial Teams meeting. And David gave, I believe … David, was it like a 10-minute presentation? Very quick, right? Um, but it sparked this moment of, “Wow, I think we could accelerate this.”

HUIZINGA: David, how do you remember it?

KWABI: Yeah, I think I remember it. [LAUGHS] This is so almost like a, like a marriage. Like, how did you guys meet? Um, and then, and then the stories have to align in some way or …

HUIZINGA: Yeah, who liked who first?

KWABI: Yeah, exactly. Um, but yeah, I think, I think that’s what I recall. So basically, I’m part of … here at the university, I’m part of this group called the Global CO2 Initiative, uh, which is basically, uh, an institute here at the university that convenes research related to CO2 capture, CO2 utilization, um, and I believe the Microsoft team set up a meeting with the Global CO2 Initiative, which I joined, uh, in my capacity as a member, and I gave a little 10-minute talk, which apparently was interesting enough for a second look, so, um, that, that’s how the collaboration started. There was a follow-up meeting after that, and here we are today.

HUIZINGA: Well, it sounds like you’re compelling, so let’s get into it, David. Now would be a good time to talk about, uh, more detail on this research. I won’t call it Flow Batteries for Dummies, but assume we’re not experts. So what are flow batteries, what problems do they solve, and how are you going after your big research goals methodologically?

KWABI: OK, so one way to think about flow batteries is to, to think first about pumped hydro storage is, is how I like to introduce it. So, so a flow battery is just like a battery, the, the sort of battery that you have in your cell phone or laptop computer or electric vehicle, but it’s a … it has a very different architecture. Um, and to explain that architecture, I like to talk about pumped hydro. So pumped hydro is, I think, a technology many of us probably appreciate or know about. You have two reservoirs that, that hold water—so upper and lower reservoirs—and when you have extra electricity, or, or excess electricity, you can pump water up a mountain, if you like, from one reservoir to another. And when you need that electricity, water just flows down, spins some turbines, and produces electricity. You’re turning gravitational potential energy into electrical energy, or electricity, is the idea. And the nice thing about pumped hydro, um, is that if you want to store more energy, you just need a bigger reservoir. So you just need more water essentially, um, in, in the two reservoirs to get to longer and longer durations of storage, um, and so this is nice because more and more water is actually … is cheap. So the, the marginal cost of every stored, um, unit of energy is quite low. This isn’t really the case for the sort of batteries we have in our cell phones and laptop computers. So if we’re talking about grid storage, you want something like this, something that decouples energy and power, so we have essentially a low cost per unit of electricity. So, so flow batteries essentially mimic pumped hydro because instead of turning gravitational potential energy into electricity, you’re actually changing or turning chemical potential energy, if you like, into electricity. So essentially what you’re doing is just storing energy in, um, in the form of electrons that are sort of attached to molecules. So you have an electron at a really high energy that is like flowing onto another molecule that has a low energy. That’s essentially the transformation that you’re trying to do in a, in a flow battery. But that’s the analogy I like to, I like to draw. It’s sort of the high and low, uh, reservoirs you have. High and low chemical, uh, potential energy.

HUIZINGA: So what do these do better than the other batteries that you mentioned that we’re already using for energy storage?

KWABI: So the other batteries don’t have this decoupling. So in the flow battery, you have the, the energy being stored in these tanks, and the larger the tank, the more the energy. If you want to turn that energy into, uh, chemical energy into electricity, you, you, you run it through a reactor. So the reactor can stay the same size, but the tank gets bigger and bigger and you store more energy. In a laptop battery, you don’t have that. If you want more energy, you just want more battery, and that, the cost of that is the same. So there isn’t this big cost advantage, um, that comes from decoupling the energy capacity from the power capacity.
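
A rough back-of-envelope illustration of the decoupling described here: stored energy tracks the amount of charge sitting in the tanks times the cell voltage, while power tracks the size of the reactor stack. The sketch below uses made-up numbers purely for illustration; none of the concentrations, voltages, or areas come from this project’s chemistry.

# Back-of-envelope sketch of the energy/power decoupling in a flow battery.
# All numbers are illustrative assumptions, not values from this project.
F = 96485.0  # Faraday constant, coulombs per mole of electrons

def energy_kwh(conc_mol_per_l, tank_volume_l, electrons_per_molecule, cell_voltage_v):
    # Stored energy scales with the tank: total charge held in solution times cell voltage.
    charge_coulombs = conc_mol_per_l * tank_volume_l * electrons_per_molecule * F
    return charge_coulombs * cell_voltage_v / 3.6e6  # joules -> kilowatt-hours

def power_kw(current_density_a_per_cm2, stack_area_cm2, cell_voltage_v):
    # Power scales with the reactor (stack) area, independent of how big the tanks are.
    return current_density_a_per_cm2 * stack_area_cm2 * cell_voltage_v / 1000.0

# Doubling the tank doubles the energy; the stack, and hence the power, stays the same.
# (This ignores the second tank, depth of discharge, and losses, so real numbers are lower.)
print(round(energy_kwh(1.0, 1000.0, 1, 1.2), 1))  # ~32.2 kWh from one 1,000-liter tank
print(round(energy_kwh(1.0, 2000.0, 1, 1.2), 1))  # ~64.3 kWh
print(round(power_kw(0.1, 10000.0, 1.2), 1))      # ~1.2 kW from the same stack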

NGUYEN: David, would you, would you also say that, um, with, you know, redox organic flow batteries, you also kind of decouple the source, right, of the, the material, the battery material, so it’s no longer, for example, a rare earth metal or precious metal.

KWABI: Absolutely. So that’s, that’s then the thing. So when you … so you’ve got … you know, imagine these large systems, these giant tanks with, with molecules that store electricity. The question then is what molecules do you choose? Because if it’s like really expensive, then your electricity is also very expensive. Um, the particular type of battery we’re looking at uses organic molecules to store that electricity, and the hope is that these organic molecules can be made very cheaply from very abundant materials, and in the end, that means that this then translates to a really low cost of electricity.

HUIZINGA: Bichlien, I’m glad you brought that up because that is a great comparison in terms of the rare earth stuff, especially lithium mining right now is a huge cost, or tax, on the environment, and the more electric we have, the more lithium we need, so there’s other solutions that you guys are, are digging around for. Were you going to say something else, Bichlien?

NGUYEN: I was just going to say, I mean, another reason why, um, we thought David’s work was so interesting is because, you know, we’re looking at, um, energy, um, storage for renewables, and so to get to this green energy economy, we’ll need a ton of renewables, and then we’ll need a ton of ways to store the energy because renewables are, you know, they’re intermittent. I mean, sometimes the rain rains all the time, [LAUGHS] and sometimes it doesn’t. It’s really dry. Um, I don’t know why I say rain, but I assume … I probably …

HUIZINGA: Because you’re in Seattle, that’s why!

NGUYEN: That’s true. But like the sun shines; it doesn’t shine. Um, yeah, the wind blows, and sometimes it doesn’t.

HUIZINGA: Or doesn’t. Yeah. … Well, let’s talk about molecules, David, um, and getting a little bit more granular here, or maybe I should say atomic. You’re specifically looking for aqueous-soluble redox-active organic molecules, and you’ve noted that they’re really hard to find, um, these molecules that meet all the performance requirements for real-world applications. In other words, you have to swipe left a lot before you get to a good [LAUGHS] match, continuing with the marriage analogy. … So what are the properties necessary that you’re looking for, and why are they so hard to find?

KWABI: So the “aqueous soluble” just means soluble in water. You want the molecule to be able to dissolve into water at really high concentrations. So that’s one, um, property. You need it to last a really long time because the hope is that these flow battery installations are going to be there for decades. You need it to store electrons at the right energy. So, uh, I mentioned you have two tanks: one tank will store electrons at high energy; the other at low energy. So you need those energy levels to be set just right in a sense, so you want a high-voltage battery, essentially. You also want the molecule to be set that it doesn’t leak from one tank to the other through the reactor that’s in the middle of the two tanks, right. Otherwise, you’re essentially losing material, which is not, uh, desirable. And you want the molecule to be cheap. So that, that’s really important, obviously, because if we’re going to do this at, um, really large scale, and we want this to be low cost, that we want something that’s abundant and cheap. And finding a molecule that satisfies all of these requirements at the same time is really difficult. Um, you can find molecules that satisfy three or four or two, but finding something that hits all the, all the criteria is really hard, as is finding a good partner. [LAUGHS]
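
One way to picture the “check every box” problem described above is as a multi-criteria filter that nearly every candidate fails. The sketch below is purely illustrative: the property names, thresholds, and candidate values are hypothetical stand-ins, not the project’s actual screening criteria.

# A minimal screening sketch over the kinds of criteria listed above.
# Thresholds and candidates are hypothetical, chosen only to show the idea.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    solubility_m: float         # solubility in water, mol/L
    redox_potential_v: float    # versus a chosen reference electrode
    calendar_life_years: float  # projected stability in the electrolyte
    crossover_rate: float       # fraction lost through the membrane per year
    cost_per_kg: float          # dollars per kilogram

def passes_all(c: Candidate) -> bool:
    # A candidate is kept only if it clears every bar at once.
    return (c.solubility_m >= 1.0
            and abs(c.redox_potential_v) >= 0.5
            and c.calendar_life_years >= 10
            and c.crossover_rate <= 0.01
            and c.cost_per_kg <= 5.0)

candidates = [
    Candidate("molecule A", 1.4, -0.55, 12, 0.005, 3.0),
    Candidate("molecule B", 0.3, -0.70, 20, 0.002, 2.0),  # fails solubility
]
print([c.name for c in candidates if passes_all(c)])  # ['molecule A']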

HUIZINGA: Well, and even in, in other areas, you hear the phrase cheap, fast, good—pick two, right? So, um, yeah, finding them is hard, but have you identified some or one, or I mean where are you on this search?

KWABI: Right now, the state-of-the-art charge-storing molecule, if you like, is based on a rare earth … rare element called vanadium. So the most developed flow batteries now use vanadium, um, to store electricity. But, uh, vanadium is pretty rare in the Earth’s crust. If we start to scale this technology, um, to levels that would really make an impact on climate, it’s not clear there’s enough vanadium, uh, to, to do the job. So it fulfills a bunch of, a bunch of the criteria that I just mentioned, but not, not the cheap one, which is pretty important. We’re hoping, you know, with this project, that with organic molecules, we can find examples or particular compounds that really can fulfill all of these requirements, and, um, we’re excited because organic chemistry gives us, uh … there’s a wide design space with organic molecules, and you’re starting from abundant elements, and, you know, the hope is that we can really get to something that, that, you know, we can swipe left on … is it swipe left or right? I’m not sure.

NGUYEN: I have no idea.

HUIZINGA: Swipe left means …

[LAUGHTER]

HUIZINGA: I looked it up. I, I’ve been married for a really long time, so I did look it up, and it is left if you’re not interested and right if you are, apparently on Tinder, but, uh, not to beat that horse …

KWABI: You want to swipe right eventually.

HUIZINGA: Yes. Which leads me to, uh, Bichlien. What does machine learning have to do with natural sciences like organic chemistry? Why does computation play a role here, particularly generative models, for climate change science?

NGUYEN: Yeah so what, you know, the past decade or two, um, in computer science and machine learning have taught us is that ML is really good at pattern recognition, right, being able to, um, take complex datasets and pull out the most type … you know, relevant, uh, trends and information, and, and it’s good at classifying … you know, used as a classification tool. Uh, and what we know about nature is that nature is full of patterns, right. We see repeating patterns all the time in nature, at many different scales. And when we think, for example, of all of the combinations of carbons, carbon organic molecules, that you could make, you see around 10^60; it’s estimated to be 10^60. Um, and those are all connected somehow, you know, in this large, you know, space, this large, um, distribution. And we want to, for example, as David mentioned, we want to check the boxes on all these properties. So what is really powerful, we believe, is that generative models will allow us to sample this, this organic chemistry space and allow us to condition the outputs of these models on these properties that David wants to checkmark. And so in a way, it’s allowing us to do more efficient searching. And I like to think about it as like you’re trying to find a needle, right, in the ocean, and the ocean’s pretty vast; needles are really small. And instead of having the size of the ocean as your search space, maybe you have the size of a bathtub and so that we can narrow down the search space and then be able to test and validate some of the, the molecules that come out.
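
A schematic of the ocean-to-bathtub idea: rather than enumerating chemical space, a generative model proposes candidates, and only those whose predicted properties hit the targets are kept for lab validation. Everything below is a stand-in; the generator, the property predictions, and the thresholds are hypothetical placeholders, not the team’s actual models.

# Schematic sketch of property-conditioned candidate search.
# The "generative model" and property predictors here are random stand-ins.
import random

def generate_candidate():
    """Stand-in for a generative model that proposes a molecule (e.g., as a SMILES string)."""
    return {"smiles": f"C{random.randint(1, 20)}...",    # placeholder structure
            "solubility": random.uniform(0.0, 3.0),       # placeholder predicted properties
            "potential": random.uniform(-1.2, 1.2),
            "stability": random.uniform(0.0, 30.0)}

def meets_targets(mol):
    # Keep only candidates predicted to clear every target at once.
    return (mol["solubility"] >= 1.0
            and abs(mol["potential"]) >= 0.5
            and mol["stability"] >= 10)

# Sample many candidates, keep the few worth synthesizing and testing in the lab.
shortlist = [m for m in (generate_candidate() for _ in range(10000)) if meets_targets(m)]
print(len(shortlist), "candidates kept for lab testing out of 10,000 sampled")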

HUIZINGA: So do these models then eliminate a lot of the options, making the pool smaller? Is that how that works to make it a bathtub instead of an ocean?

NGUYEN: I, I wouldn’t say eliminates, but it definitely tells you where you should be … it helps focus where you’re searching.

HUIZINGA: Right, right, right. Well, David, you and I talked briefly and exchanged some email on “The Elements” song by Tom Lehrer, and it’s, it’s a guy who basically sings the entire periodic chart of the elements, really fast, to the piano. But at the end, he mentions the fact that there’s a lot that haven’t been discovered. There’s, there’s blanks in the chart. And so I wonder if, you know, this, this search for molecules, um, it just feels like is there just so much more out there to be discovered?

KWABI: I don’t know if there’s more elements to be discovered, per se, but certainly there’s ways of combining them in ways that produce …

HUIZINGA: Aaahhhhh …

KWABI: … new compounds or compounds with properties that, that we’re looking for, for example, in this project. So, um, that’s, I think, one of the things that’s really exciting about, uh, about this particular endeavor we’re, we’re, we’re engaged in. So, um, one of the ways that people have traditionally thought about finding new molecules for flow batteries is, you know, you go into the lab or you go online and order a chemical that you think is going to be promising [LAUGHS] … some people I know have done this, uh, myself included … but you, you order a chemical that you think is promising, you throw it in the flow battery, and then you figure out if it works or not, right. And if it doesn’t work, you move on to the next compound, or you, um …

NGUYEN: You tweak it!

KWABI: … if it does work, you publish it. Yeah, exactly—you tweak it, for example. Um, but one of the, one of the questions that we get to ask in this project is, well, rather than think about starting from a molecule and then deciding or figuring out whether it works, we, we actually start from the criteria that we’re looking for and then figure out if we can intelligently design, um, a molecule based on the criteria. Um, so it’s, it’s, uh, I think a more promising way of going about discovering new molecules. And, as Bichlien’s already alluded to, with organic chemistry, the possibilities are endless. We’ve seen this already in, like, the pharmaceutical industry for example, um, and lots of other industries where people think about, uh, this combinatorial problem of, how do I get the right structure, the right compound, that solves the problem of, you know, killing this virus or whatever it is. We’re hoping to do something similar for, uh, for flow batteries.

HUIZINGA: Yeah, in fact, as I mentioned at the very beginning of the show, you titled your proposal “The computational design and characterization of organic electrolytes for flow batteries,” so it’s kind of combining all of that together. David, sometimes research has surprising secondary uses. You start out looking for one thing and it turns out to be useful for something else. Talk about the dual purposes of your work, particularly how flow batteries both store energy and work as a sort of carbon capture version of the Ghostbusters’ Ecto-Containment Unit. [LAUGHS]

KWABI: Sure. Yeah, so this is where I sort of confess and say I wasn’t completely up front in the beginning when I said all we do is energy storage, but, um, another, um, application we’re very interested in is carbon capture in my group. And with regard to flow batteries, it turns out that you, you actually can take the same architecture that you use for a flow battery and actually use it to, to capture carbon, CO2 in particular. So the way this would work is, um, it turns out that some of the molecules that we’ve been talking about, some of the organic molecules, when you push an electron onto them—so you’re storing energy and you push an electron onto them—it turns out that some of these molecules also absorb hydrogen ions from water so those two processes sort of happen together. You push an electron onto the molecule, and then it picks up a hydrogen ion from water. Um, and if you remember anything about something from your chemistry classes in high school, that changes the pH of water. If you remove protons, uh, from water, that makes water more basic, or more alkaline. And alkaline electrolytes or alkaline water actually absorbs or reacts with CO2 to make bicarbonate. So that’s a chemical reaction that can serve as a mode, or a mechanism, for removing CO2 from, from the environment, so it could be air, or it could be, uh, flue gas or, you know, exhaust gas from a power plant. So imagine you, you run this process, you push the electron onto the molecule, you change the pH of the solution, you remove CO2 … that can then … you can actually concentrate that CO2 and then run the opposite reaction. So you pull the electron off the molecule; that then dumps protons back into solution, and then you can release all this pure CO2 all of a sudden. So, so now what you can do is take a flow battery that stores energy, but also, uh, use it to separate CO2, separate and concentrate CO2 from a, from a gaseous source. So this is, um, some work that we’ve been pursuing sort of in parallel with our work on energy storage, and the hope is that we can find molecules that, in principle, maybe could do both—could do the energy storage and also help with, uh, with CO2 separation.
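
Written out schematically, the pH-swing cycle outlined here looks roughly like the following, where R stands in for a generic redox-active organic molecule; this is a simplified illustration of the mechanism described, not the exact species used in the project.

$$
\begin{aligned}
\mathrm{R} + e^- + \mathrm{H_2O} &\rightarrow \mathrm{RH} + \mathrm{OH^-} && \text{(charging: protons are pulled from water, so the pH rises)} \\
\mathrm{CO_2} + \mathrm{OH^-} &\rightarrow \mathrm{HCO_3^-} && \text{(the alkaline solution absorbs CO}_2\text{ as bicarbonate)} \\
\mathrm{RH} &\rightarrow \mathrm{R} + e^- + \mathrm{H^+} && \text{(discharging returns protons to the solution)} \\
\mathrm{HCO_3^-} + \mathrm{H^+} &\rightarrow \mathrm{H_2O} + \mathrm{CO_2}\uparrow && \text{(the lower pH releases a concentrated CO}_2\text{ stream)}
\end{aligned}
$$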

HUIZINGA: Bichlien, is that part of the story that was attractive to Microsoft in terms of both storage for energy and getting rid of CO2 in the environment?

NGUYEN: Yeah, absolutely. Absolutely. Of course, the properties of, of, you know, both CO2 capture and the energy storage components are sometimes somewhat, uh—David, correct me if I’m wrong—kind of divergent. It’s, it’s hard to optimize for one and have the other one optimized, too. So it’s really a balance of, of things, and we’re targeting, just right now, for this project, our joint project, the, the energy storage aspect.

HUIZINGA: Yeah. On that note, and either one of you can take this, what do you do with it? I mean, when I, when I used the Ghostbusters’ Ecto-Containment Unit, I was being direct. I mean, you got to put it somewhere once you capture it, whether it’s storing it for good use or getting rid of it for bad use. So how are you managing that?

KWABI: Great question, so … Bichlien, were you going to … were you going to go?

NGUYEN: Oh, I mean, yeah, I was going to say that there are many ways, um, for I’ll call it CO2 abatement, um, once you have it. Um, there are people who are interested in storing it underground, so, uh, mineralizing it in basalt formations, rock formations. There are folks, um, like me, who are interested in, you know, developing new catalysts so that we can convert CO2 to different renewable feedstocks that can be used in different materials like different plastics, different, um, you know, essentially new fuels, things of that nature. And then there’s, you know, commercial applications for pure streams of CO2, as well. Uh, yeah, so I, I would say there’s a variety of things you can do with CO2.

HUIZINGA: What’s happening now? I mean, where does that generally … David we, I, I want to say, we talked about this issue, um, when we met before on some of the downsides of what’s current.

KWABI: Yeah, so currently, um, so there’s, as Bichlien has mentioned, there’s a number of things you could do with it. But right now, of all the sort of large projects that have been set up, large pilot plants for CO2 capture that have been set up, I think the main one is enhanced oil recovery, which is a little bit controversial, um, because what you’re doing with the CO2 there is you’re pumping it underground into an oil field that has become sort of less productive over time. And the goal there is to try to coax a little bit more oil, um, out of this field. So, so you pump the CO2 underground, it mixes in with the oil, and then you … that, that sort of comes back up to the surface, and you separate the CO2 from the oil, and you can, you can go off and, um, use the oil for whatever you use it for. So, so the economically attractive thing there is, there’s, uh, there’s, there’s going to be some sort of payoff. There’s a reason, a commercial incentive, for separating the CO2, uh, but of course the problem is you’re removing oil from the … you’re, you’re extracting more oil that’s going to end up with … in more CO2 emissions. So, um, there are, in principle, many potential options, but there aren’t very many that have both the sort of commercial … uh, where there’s sort of a commercial impact and there’s also sort of the scale to take care of the, you know, the gigatons of CO2 that we’re going to have to draw down, basically, so … .

NGUYEN: Yeah. And I, I think, I mean, you know, to David’s point, that’s true—that, that is what’s happening, you know, today because it provides value, right? The issue, I think, with CO2 capture and storage is that while there’s global utility, there’s no monetary value to it right now. Um, and so it makes it a challenge in terms of being able to industrialize, you know, industries to take care of the CO2. But I, I, I think, you know, as part of the MCRI initiative, you know, we’re very interested in both the carbon capture and the utilization aspect, um, and utilization would mean utilizing the CO2 in productive ways for long-term storage, so think about maybe using CO2, um, converting it electrochemically, for example, into, uh, different monomers. Those monomers maybe could be used in new plastics for long-term storage. Uh, maybe those are recyclable plastics. Maybe they’re plastics that are easily biodegradable. But, you know, one of the issues with using, or manufacturing, is there’s always going to be energy associated with manufacturing. And so that’s why we care a lot about renewables and, and the green energy transition. And, and that’s why, uh, you know, we’re collaborating with David and his team as, as part of that. It’s really full circle. We have to really think about it on a systems level, and the collaboration with David is, is one part of that system.

HUIZINGA: Well, that leads beautifully, and that’s probably an odd choice of words for this question, but it seems like “solving for X” in climate change is a no-lose proposition. It’s a good thing to do. But I always ask what could possibly go wrong, and in this case, I’m thinking about other solutions, some of which you’ve already mentioned, that had seemed environmentally friendly at first, but turned out to have unforeseen environmental impacts of their own. So even as you’re exploring new solutions to renewable energy sources, how are you making sure, or how are you mitigating, harming the environment while you’re trying to save it?

KWABI: That’s a great question. So it’s, it’s something that I think isn’t traditionally, at least in my field, isn’t traditionally sort of part of the “solve for X” when people are thinking about coming up with a new technology or new way of storing renewable electricity. So, you know, in our particular case, one of the things that’s really exciting about the project we’re working on is we’re looking at molecules that are fairly already quite ubiquitous, so, so they’re already being used in the food and textile industry, for example, derivatives of the molecules we’re using. So, you know, thinking about the materials you’re using and the synthetic routes that are necessary to produce them is sort of a pitfall that one can easily sort of get into if you don’t start thinking about this question at the very beginning, right? You might come up with a technology that’s, um, appealing and that works really well, performance-wise, but might not be very recyclable or might have some difficulties in terms of extraction and so on and so forth. So lithium-ion batteries, for example, come to mind. I think you were alluding to this earlier, that, you know, they’re a great technology for electric vehicles, but mining cobalt, extracting cobalt, comes with a lot of, um, just negative impacts in terms of child labor and so on in the Congo, et cetera. So how, how do we, you know, think about, you know, materials that don’t … that sort of avoid this? And I’ll, I’ll just highlight as one of our team members … so Anne McNeil, who’s in the chemistry department here, thinks quite a lot about this, and that’s appropriate because she’s sort of the synthetic chemist on the team. She’s the one who’s thinking a lot about, you know, given we have this molecule we want to make, what’s the most eco-friendly, sustainable route to making that molecule with materials that don’t require, you know, pillaging and polluting the earth to do it in a sense, right. And also materials … and also making it in a way that, you know, at end of life, it can be potentially recycled, right.

HUIZINGA: Right.

KWABI: So thinking about sustainable routes to making these molecules and potential sort of ways of recycling them are things that, um, we’re, we’re trying to, in some sense, to take into consideration. And by we, I mean Anne, specifically, is thinking quite seriously about …

NGUYEN: David … David, can I put words in your mouth?

KWABI: But, yeah. … Yeah, sure, go ahead.

NGUYEN: Um, you’re, you’re thinking of sustainability as being a first design principle for …

KWABI: Yes! I would take those words! Exactly.

NGUYEN: OK. [LAUGHS] Yeah, I mean, that’s really important. I, I agree and second what David said.

HUIZINGA: Bichlien, when we talked earlier, the term co-optimization came up, and I want to dig in here a little bit because whenever there’s a collaboration, each discipline can learn something from the other, but you can also learn about your own in the process. So what are some of the benefits you’ve experienced working across the sciences here for this project? Um, could you provide any specific insights or learnings from this project?

NGUYEN: I mean, I, I think maybe a naive … something that maybe seems naive is that we definitely have to work together in all three disciplines because what we’re also learning from David and Bryan is that there are different experimental and computational timelines that sometimes don’t agree, and sometimes do agree, and we really have to, uh, you know, work together in order to create a unified, I’m not going to call it a roadmap, but a unified research plan that works for everyone. For example, um, it takes much longer to run an experiment to synthesize a molecule … I, I think it takes much longer to synthesize a molecule than, for example, to run a, uh, flow cell, um, experiment. And then on the computational side, you could probably run it, you know, at night, on a weekend, you know, have it done relatively soon, generate molecules. And one of the things that we’re, you know, understanding is that the human feedback and the computational feedback, um, it takes a lot of balancing to make sure that we’re on the same track.

HUIZINGA: What do you think, David?

KWABI: Yeah, I think that’s definitely accurate, um, figuring out how we can work together in a way that sort of acknowledges these timelines is really important. And I think … I’m a big believer in the fact that people from somewhat different backgrounds working together, the diversity of background, actually helps to bring about, you know, really great innovative solutions to things. And there’s various ways that this has sort of shown up in our, in own work, I think, and in our, in our discussions. Like, you know, we’re currently working on a particular sort of molecular structure for, uh, for a compound that we think will be promising at storing electricity, and the way we, we came about with it is that my, my group, you know, we ran a flow cell and we saw some data that seemed to suggest that the molecule was decomposing in a certain way, and then Anne’s group, or one of Anne’s students, proposed a mechanism for what might be happening. And then Jake, who works with Bichlien, also … and then thought about, “Well, what, what about this other structure?” So that sort of … and then that’s now informing some of the calculations that are going on, uh, with Bryan. So there’s really interesting synergies that show up just because there’s people working from, you know, coming from very different backgrounds. Like I’m a mechanical engineer who sort of likes to hang out with chemists and, um, there’s actual chemists and then there’s, you know …

NGUYEN: But, David, I think …

KWABI: … the people who do computation, and so on …

NGUYEN: I think you’re absolutely right here in terms of the overlap, too, right? Because in a, in a way, um, I’m an organic chemist by training, and I dabble in machine learning. You’re a mechanical engineer who dabbles in chemistry. Uh, Bryan’s a computational chemist who dabbles in flow cell work. Uh, Anne is, uh, you know, a purely synthetic chemist who dabbles in, you know, almost all of our aspects. Because we have overlap, we have lower, I’m going to call it an activation barrier, [LAUGHS] in terms of the language we speak. I think that is something that, you know, we have to speak the same language, um, so that we can understand each other. And sometimes that can be really challenging, but oftentimes, it’s, it’s not.

HUIZINGA: Yeah, David, all successful research projects begin in the mind and make their way to the market. Um, where does this research sit on that spectrum from lab to life, and how fast is it moving as far as you’re concerned?

KWABI: Do you mean the research, uh, in general or this project?

HUIZINGA: This project, specifically.

KWABI: OK, so I’d say we’re, we’re still quite early at this stage. So there’s a system of classification called Technology Readiness Level, and I would say we’re probably on the low end of the scale, I don’t know, maybe like a 1 or 2.

NGUYEN: We just started six months ago!

KWABI: We just started six months ago! So …

[LAUGHTER]

HUIZINGA: OK, that’s early. Wait, how many levels are there? If there’s 1 or 2, what’s the high end?

KWABI: I think we go up to an 8 or so, an 8 or a 9. Um, so, so we’re quite early; we just started. But the, the nice thing about this field is that things can move really quickly. So in a year or two, who knows where we’ll be? Maybe four or five, but things are still early. There’s a lot of fundamental research right now that’s happening …

HUIZINGA: Which is so cool.

KWABI: Proof of concept. Which is necessary, I think, before you can get to the, the point where you’re, um, you’re spinning out a company or, or moving up to larger scales.

HUIZINGA: Right. Which lives very comfortably in the academic world. Bichlien, Microsoft Research is sort of a third space that allows for a longer horizon on that scale, in terms of how long it’s going to take for this to become something that could be financially viable for Microsoft. Is that just not a factor right now? Is it just, let’s go, let’s solve this problem because this is super-important?

NGUYEN: I guess I’ll say that it takes roughly 20 years or so to get a proof of concept to market at an industrial scale. So what I’m hoping is that with this collaboration, and with others, we can shorten the time for discovery so that we understand the fundamentals and have a good baseline of what we think can be achieved, so that we can go to, for example, a pilot scale, like a test scale, outside of the laboratory, not full industrial scale, but just a pilot scale, much faster than we would if we had to hand-iterate every single molecule.

HUIZINGA: So the generative models play a huge role in that shortening of the time frame …

NGUYEN: Yes, yes. That’s what we …

KWABI: Yeah, I think …

NGUYEN: Go ahead, David.

KWABI: Yeah. I think the idea of having a platform … so rather than, you know, you found this wonderful, precious molecule that you’re going to make a lot of, um … having a platform that can generate molecules, right, and proving that this actually works, gives you a lot more shots on goal, basically. And I think that if we’re able to show, in the next year or two, that there’s a proof of concept that this can go forward, then, um, in principle, we have many more chemistries to work with and play with than …

NGUYEN: Yeah, and, um, we might also be able to, you know, with, with this platform, discover molecules that have that dual purpose, right, of both energy storage and carbon capture.

HUIZINGA: Well, as we wrap up, I’d love to know in your fantastical ideal preferred future, what does your work look like … now, I’m going to say five to 10 years, but, Bichlien, you just said 20 years, [LAUGHS] so maybe I’m on the short end of it here. In the “future,” um, how have you changed the landscape of eco-friendly, cost-effective energy solutions?

KWABI: That’s a big question. I tend to think in more two- or three-year timelines sometimes. [LAUGHS] But I think in, you know, in like five, 10 years, if this research leads to a company that’s thriving and demonstrating that flow batteries can really make an impact in terms of low-cost energy storage, that would be a great place to land. That, and the demonstration that, you know, with artificial intelligence, you can create this platform that can, uh, custom design molecules that fulfill these criteria. I think that would be, um, that would be a fantastic outcome.

HUIZINGA: Bichlien, what about you?

NGUYEN: So I think in one to two years, but I also think about the 10-to-20-year timeline, and what I’m hoping for is, again, to demonstrate the value of AI in enabling a carbon negative economy so that we can all benefit from it. It sounds like a very polished answer, but I really think there are going to be accelerations in this space enabled by these new technologies that are coming out.

HUIZINGA: Hmm.

NGUYEN: And I hope so! We have to save the planet!

KWABI: There’s a lot more to AI than ChatGPT and, [LAUGHS] you know, language models and so on, I think …

HUIZINGA: That’s a perfect way to close the show. So … Bichlien Nguyen and David Kwabi, thank you so much for coming on. It’s been delightful—and informative!

NGUYEN: Thanks, Gretchen.

KWABI: Thank you very much.

The post Collaborators: Renewable energy storage with Bichlien Nguyen and David Kwabi appeared first on Microsoft Research.
