Transcript

Simon Peyton Jones: I like to put it like this: when the limestone of imperative programming has worn away, the granite of functional programming will be revealed underneath!
Host: You're listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research, and the scientists behind it. I'm your host, Gretchen Huizinga.

When we look at a skyscraper or a suspension bridge, a simple search engine box on a screen looks tiny by comparison. But Dr. Simon Peyton Jones would like to remind us that computer programs, with hundreds of millions of lines of code, are actually among the largest structures human beings have ever built. A principal researcher at the Microsoft Research Lab in Cambridge, England, co-developer of the programming language Haskell, and a Fellow of Britain's Royal Society, Simon Peyton Jones has dedicated his life to this very particular kind of construction work. Today, Dr. Peyton Jones shares his passion for functional programming research, reveals how a desire to help other researchers write and present better turned him into an unlikely YouTube star, and explains why, at least in the world of programming languages, purity is embarrassing, laziness is cool, and success should be avoided at all costs.

That and much more on this episode of the Microsoft Research Podcast.

Host: Simon, welcome. You're in the Programming Principles and Tools group at Microsoft Research in Cambridge. What do you spend most of your time doing there?

Simon Peyton Jones: Well, programming languages are the fundamental material out of which we build programs. When a builder builds a building, they can build out of bricks or out of straw or out of bananas or out of steel girders... And it makes a difference what you build out of: how ambitious your building can be, and how likely it is to fall down. So, when developers write programs, the material that they use, the fabric of their programs, the programming language, is super important to the robustness and longevity and reliability of their programs. So, programming language researchers study programming languages with the aim of building more robust building materials for developers to use.
Host: What role does research play in making good programming languages?

Simon Peyton Jones: Well, at first you might think that a programming language was - well, you just kind of throw it together. But actually, when you build a programming language, you want to be sure that you know what it means. That is to say, if you write a program, you'd like it to be clear what the program means, what should happen when you execute it. That's called its semantics. So, having a good way to specify in a rigorous way what that program means, what it does, is really important. So we need to find formalisms in which we can write down rigorously what a program means. And then we need to implement it. So, if we're going to build a compiler that, say, translates a high-level language program into low-level machine code that's going to run on your machine, you'd like to be confident that the compiler itself was correct. Right? That is, that it didn't change the meaning of the program along the way. And it would do so consistently and reliably, day after day, on program after program after program. So, programming language research is about methods and tools and techniques and ideas and theories that will enable people to build programming language designs and implementations that will be robust. I wouldn't say that programming languages tend to arise specifically from academics having clever ideas about what a language design might look like. They're very often born in a much more random way, in the white heat of, "Oh, I just need to get something done!" And then retrospectively, programming language designers and researchers start to look closely at the design and try to improve it. So, there have been, you know, dozens and dozens of papers about JavaScript, for example. But JavaScript was not designed initially by an academic.
Host: I've seen your talks, and you use some slides that show, sort of, the trajectory of a lot of different languages... You've suggested that there are hundreds of languages. Most of them share the fate of an early death, with only one or two at the memorial service. And then there are some that just resonate and take off. What does it take to "make it big," and is that something you should aim for?

Simon Peyton Jones: So, I think every computer scientist wants their language to be used. That's one of the exciting things of working at Microsoft Research: there's a real chance your stuff might get used and have impact. So, we all want to make the world a better place. In programming language research, I would say, though, that while everybody would aspire to have languages that have impact and are successful, it's pretty random which ones are. The ones that are wildly successful are not necessarily the ones that are technically beautiful, or well-designed. They just hit some sweet spot at some particular moment. So, it's a bit frustrating in a way. I think Haskell, the language that I've been involved in, has been quite successful, but it could easily not have been. There's a lot of randomness in the process.
Host: You mentioned two giants in computer science - Alan Turing and Alonzo Church - who came up with ideas at about the same time that have had a big impact on programming languages in two different streams. I think you talked about declarative and imperative languages...

Simon Peyton Jones: Yeah.
Host: Can you talk about that for a minute?

Simon Peyton Jones: So, my entire research life, ever since I first got excited about functional programming when I was studying at Cambridge in 1979 or thereabouts - my entire research life has been following through the idea of what purely functional programming might mean. And if you look back a long way, as you said, it does all date to Alonzo Church and Alan Turing, to pick just two giants from the literature. So, Turing said, "What is computation? What does it mean to compute something?" And he designed this thing that we now call the Turing Machine, which was very much step-at-a-time: do this, do this. Read a thing from the tape. Write that onto the tape. It was a very imperative machine. Meanwhile, at the very same time, and actually in the same place - it was in Princeton - Alonzo Church was designing the Lambda Calculus, which seems like a much more abstract, algebraic thing. It's like rewriting expressions. And he discovered this tiny language in which expression rewrites could also, apparently, model computation. So, then the obvious question was: is there anything you could compute with the Turing Machine that you couldn't compute with the Lambda Calculus, or vice versa? And in the end, it turns out, very surprisingly, that these two notions of computation were the same. That is, anything you could do with the Turing Machine you could do with the Lambda Calculus and vice versa. But, although they were equally powerful, in the sense of what you can, in principle, do, they gave rise to very different language streams. So, you could see Turing Machines - this is a bit of a retrospective justification - as the basis for all imperative languages, right? Do this, and then do that. Step-at-a-time computation, in which the program is a sequence of steps that you do, in sequence. The Lambda Calculus is then the grandmother of functional programming, in which a program executes by evaluation. You evaluate an expression. And it seems like a completely different way of thinking about your program. You have to think about programming in a completely different way. But nevertheless, they're equally expressive. So, the interest for me has been: what would it mean to take this much less popular, but nevertheless universal, programming power - functional programming - and really push it through, to see what that could mean in a practical way for writing practical programs?
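For reference, here is a minimal Haskell sketch of the "execution by evaluation" idea; the function name and numbers are illustrative only, not taken from the conversation:

    -- A pure function: calling it simply rewrites the call into its body.
    double :: Int -> Int
    double x = x + x

    -- Evaluation is expression rewriting, not a sequence of commands:
    --   double (3 + 4)  ~>  (3 + 4) + (3 + 4)  ~>  7 + 7  ~>  14
    -- (a compiler such as GHC may share the work of 3 + 4, but the answer is the same)
    main :: IO ()
    main = print (double (3 + 4))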
Host: So, talk about the difference between functional programming languages and other programming languages...

Simon Peyton Jones: The imperative approach, step-at-a-time programming, is what everybody's used to. It's what C is like, Java is like, C++ is like, Python is like, Perl is like, Ruby is like... you know. You name it, they're mostly imperative programming languages. Functional programming is very different. It's more like... everybody's used a spreadsheet, and in a spreadsheet cell, you say, "Here is a formula that gives the value of a cell." And you compute the value of a whole spreadsheet full of cells by computing each cell, perhaps one at a time, perhaps in parallel, but in data dependency order. If cell A1 depends on cell B3, you must compute cell B3 first, and then A1. But there's no notion of "open a valve" or "launch the missiles" or "print something." You can't do that in the middle of a formula. It wouldn't make sense. So, that's functional programming, right? All of Excel's built-in functions are functions. That is to say, they take some inputs and they produce some outputs. They have no side effects. And so, the surprising thing, really, is that this purely functional approach to programming is in fact universal. If you think about it in a spreadsheet way, you think, "Well, that's good if you're writing business plans, maybe, or computing my bank balance. But it couldn't do anything useful." Could you write a word processor in a spreadsheet? Well, probably not, right? But the insight of functional programming, which stems right back to Church, is that this programming paradigm is universal. You can do anything. And so, you know, functional programming language researchers have said, "Supposing we took that execution-by-evaluation idea and scaled it up? What would that mean?" And that's what my whole research life has been about, really.
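A minimal Haskell sketch of the spreadsheet analogy, with made-up cell names and values; evaluation follows the data dependencies rather than a list of commands:

    -- Each "cell" is a pure expression; b3 must be computed before a1
    -- because a1 depends on it, just as in a spreadsheet.
    b3 :: Double
    b3 = 1200 * 12        -- an invented value

    a1 :: Double
    a1 = b3 * 0.2         -- depends on b3

    main :: IO ()
    main = print a1       -- prints 2880.0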
Host: Why did you get interested in that, I mean, at the very beginning?

Simon Peyton Jones: Because it's like a radical and elegant attack on the entire enterprise of programming. Rather than just being, "Well, let's just try doing this a slightly different way," it's like saying, "Let's just attack programming from a completely different direction." Moreover, it's very close to mathematics. The whole idea of the Lambda Calculus really grew out of logic, and there are very beautiful dualities between programming on the one hand, and logic on the other. It's called the Curry-Howard Isomorphism. Let's say I have a function whose type says it takes two integers and produces an integer. Well, that type tells you something about the program. So, in a sense, it's a weak theorem about the program. It tells you something about the program, but not everything. And indeed, you could regard the program as a proof of that theorem. So, the idea of "types as theorems" and "programs as proofs" is a very deep connection between logic, on the one hand, and programming, on the other. And this duality is very immediate in functional programming. But it's rather distant in imperative programming. So, I've tried to give you a sense for what got me excited about it. I just got excited about it because I thought, "It's such a beautiful, simple, elegant way of thinking about the enterprise of programming. Let's see if we can make it practical."
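As a small illustration of "types as theorems, programs as proofs," here is a Haskell sketch; the function names are invented for the example:

    -- The type (a, b) -> a reads as the theorem "A and B implies A";
    -- this total, pure function is a proof of it.
    firstOfPair :: (a, b) -> a
    firstOfPair (x, _) = x

    -- A monomorphic type, like the one mentioned above, is a weaker theorem:
    -- it constrains the shape of the inputs and output, but says little else.
    addTwoInts :: Int -> Int -> Int
    addTwoInts x y = x + y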
Host: I love that. Now, how many people are in your - is camp the right word? Because you have people writing imperative languages all over the place. Is this something that needs to be evangelized, functional languages?

Simon Peyton Jones: Sure! Yes. So, you know, I like to put it like this: "When the limestone of imperative programming has worn away, the granite of functional programming will be revealed underneath." So, imperative programming is very appealing. Don't get me wrong, right? It's sort of what real machines do. If you look at what a microprocessor does, it does loads and stores and adds, and it sets things in registers that make valves go open or launches the missiles or prints something, right? Functional programming is a bit more abstract. So, that's why it's been sort of a minority pursuit for a long time. And over, I guess, the 40-year period of my, you know, adventure with functional programming, it's gradually infected the mainstream more and more. But not too fast. That's quite important, right? "Avoid success at all costs" is one of my little mottos, right? Because if you're too successful too quickly, you get sort of stuck and you can't change anything anymore. But functional programming has become more and more influential. We can talk about ways in which that has happened.
Host: Well, I do want to talk about Haskell, and what you've just said about the slow burn, the slow rise, and the benefits of not getting too successful too quickly, or dying an early death. But having the tenacity to stay there for long enough to start to grow and get more useful.

Simon Peyton Jones: Yes. So, for me, one of the glories and privileges of being a research computer scientist is that you're not just allowed, but actually paid, to work on a simple and elegant idea, and to do so for, you know, 35 or 40 years. That's amazing, that society allows us to do that! So as far as Haskell goes, I mean, you don't just want to work on abstract ideas. You want to work on things that have impact. So, Haskell was developed by a group of research colleagues around the world, including myself. And our idea was just to embody the current consensus among ourselves about what purely functional programming actually was... what pure, lazy functional programming might look like. And at that time, it was very much a university enterprise. But by having an actual language, and then turning it into an actual compiler that people could actually use to get their job done, and then extending the compiler so we could deal with input/output, and we could deal with foreign function interfaces, and talk to C and so forth, and we could develop the types that would actually be useful - over time, we've turned Haskell into something that is useful for practical applications, and now in fact it's really quite widely used by developers, mostly in small companies.
Host: So, let's talk about laziness for a little bit. When I was growing up, that wasn't a virtuous quality in our household, but somehow lazy functional computing is a good thing. Why is that?

Simon Peyton Jones: Oh yes! Yes! So, at first it was just an amazingly clever and elegant thing. So, laziness is the idea that if you call a function in a normal imperative language, or call-by-value language, then before calling the function you're going to evaluate the arguments to the values of those arguments, and then you'll pass them to the function. In a lazy functional language, you don't evaluate the arguments before passing them to the function; you create recipes, or suspensions, or thunks, which you pass to the function. And if it needs that argument, then it will evaluate it. So, you can write a function that might evaluate one or other, but not both, of its arguments. And that can be super important. Just think of a function like "if," a conditional, where you don't want to evaluate both the "then" branch and the "else" branch. So why did that happen? Well, firstly it was because we could. Because a program in the Lambda Calculus is an expression that you evaluate. And when you evaluate an expression - like, if I evaluate the arithmetic expression (3+4) times (7+8) - then I could evaluate the 3+4 first, or the 7+8 first. There isn't an inherent order in expression evaluation, except that I must evaluate the 3+4 and the 7+8 before I multiply them, right? So, there are some data dependencies, but there's a lot of fluidity about evaluation order. And it's the same with the Lambda Calculus. And it turns out there's a lot of study in the theoretical literature about evaluation order. And one of these evaluation orders, called normal order, naturally led to lazy evaluation. We thought, "Oh, that's interesting. It just sort of naturally arises. What would that be good for?" At first, we just thought it was cool. And then John Hughes wrote this very interesting paper called "Why Functional Programming Matters," in which he said, "Laziness is not just cool, it's useful." And he did that by describing how laziness gives you a new form of modularity. And his classic example was this: suppose I'm writing a program to play chess. Well, one thing I might do is explore the tree of possible moves. He can move this way, then I could move that way, then you could move that way. There's a big tree. Suppose I first generated the tree and then pruned it to figure out the best move. Well, that tree would be too big. So, usually, we would have to generate and prune at the same time. And John said, "Well, no. With lazy evaluation, you can generate in one piece of program and prune in another. And that gives you a new form of modularity." So, that was a really interesting idea, and it has worked out exactly that way.
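A small illustrative Haskell sketch of the generate-then-prune modularity that laziness buys; an infinite list stands in for the chess tree, purely for brevity:

    -- The generator: an infinite list of squares. Laziness means only the
    -- elements that are actually demanded ever get computed.
    squares :: [Integer]
    squares = [ n * n | n <- [1 ..] ]

    -- The pruner: a separate piece of program that takes only what it needs.
    firstFewBigSquares :: [Integer]
    firstFewBigSquares = take 3 (filter (> 50) squares)

    main :: IO ()
    main = print firstFewBigSquares   -- prints [64,81,100]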
Host: I love it. And I'll probably use it. That laziness is not just cool, but useful.

Simon Peyton Jones: It is not just cool but useful, yes.
Host: How did laziness and purity come together?

Simon Peyton Jones: So, Haskell's initial defining characteristic was that it was a lazy language. That's what brought that particular group of people together; that's what we thought was exciting and cool. But in retrospect, I now think what was much more important was that laziness forced Haskell to be a pure language. By which I mean, in a call-by-value functional language like ML or Lisp, if you wanted to print something, it was too tempting to have a "function," in quotes, which, when you call it, would print something as a side effect. That is, it wouldn't just return - well, what would print return? Unit or 3 or something - it would also print something on the side. We couldn't do that in a lazy language, because we couldn't predict the evaluation order well enough. So, laziness kept us pure. And purity was embarrassing for a long time, because you couldn't really do much by way of input/output. You couldn't print things or open files or launch missiles or sail the boat. So that forced us to invent what came to be called monadic input/output. And there was another classic example: Phil Wadler, my colleague at Glasgow, took ideas from the logic world - the theory of monads developed by various people. He particularly drew on the work of Eugenio Moggi, who was very much a theorist. Phil Wadler wrote this wonderful paper, "Comprehending Monads," in which he described monads as a programming idiom. And then he and I subsequently wrote a paper called "Imperative Functional Programming" which showed how you can apply monadic programming to do input/output, to affect the world. And that idea has been wildly infectious. It's spread to all sorts of places. So, people now use the monadic thought pattern as a design idea for designing their programming languages - you can see it all over the place now. But it only happened because we were stuck with purity, because we had laziness. It was another place where the theory both helped the practice and almost forced the practice, because we would have had to break with our principles too much to just have side effects. So, we were stuck with no side effects and were forced to invent this alternative way of going about things.
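For the curious, a minimal sketch of what monadic input/output looks like in present-day Haskell; this is standard-library code, not code discussed in the interview:

    -- An IO value is a description of an action; running main performs it.
    greet :: String -> IO ()
    greet name = putStrLn ("Hello, " ++ name)

    main :: IO ()
    main = do
      putStrLn "What is your name?"
      name <- getLine            -- sequencing is explicit in the IO monad
      greet name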
Host: Aside from your pioneering work in functional programming languages, a good part of what you do involves inspiring the next generation to take up the computer science baton and run with it. How have you gone about doing that? What have you done in the inspiration business for computer science?

Simon Peyton Jones: I started with this about 10 years ago, when my children were at school. We would sit round the dinner table and they would tell me what they thought they did at school. And they had complete contempt for their lessons in ICT, Information and Communication Technology. And so, in talking to them, I was unable to make any connection between the subject that I thought was SO interesting that I devoted my professional life to it, and this subject that they were learning in school. And that was different to, say, biology, where I think a biologist sitting round the dinner table with their children would be able to make a connection between the subject discipline that their children, even at primary school, were learning, and the subject discipline that they thought was so interesting they devoted their professional life to it. So that seemed like a very big disconnect. The more people I talked to, the more people said, "Well, yeah, it doesn't make sense, but that's the way it is." So, I helped start an outfit called Computing at School, which is based in the UK, but open to anybody anywhere in the world, whose sole mission was to ask, "What might it mean to teach computer science as a subject discipline to school children? And to teach it at the same levels, and for the same reasons, that we teach natural science or mathematics?" That is, not because they're going to become physicists or mathematicians, necessarily. A few will, but most will not. But because knowing some elementary principles about the physical or chemical or biological or digital world that surrounds them will make them more empowered, better-informed citizens. And that applies from primary school onwards. So, that was the mission of Computing at School.
Host: It's now part of the core curriculum in the UK...

Simon Peyton Jones: That's right. So, we were unexpectedly successful. We started in 2007-08, and it was like we felt as if we were at the bottom of a deep well, you know, shouting up towards the daylight, "Computer science is important, you know?" We got lucky. We wrote a curriculum. There was a review of the entire national curriculum, serendipitously, started by the then Conservative government. So, we were ready to make input to that curriculum debate. And in the end, we achieved almost all our policy goals. The new national curriculum for computing, in England, pretty much says, in black and white, all children should learn the fundamental principles of computer science, and should do so from primary school onwards. So that's amazing. And that came into force in 2014. But there's a big challenge after that. It's like when you scale one apparently insurmountable mountain, what do you find behind it? Another, bigger mountain! In this case, it's "how do we turn that aspirational idea into a tangible and living reality in every classroom in the land?" And that is a big challenge, because while teachers are willing and committed and hardworking and able, they're by and large not qualified in computer science. So, there's a lot to do. There's a lot to do. The state of things in this country is pockets of excellence, but overall, it's quite fragile.
Host: I think that, in various stages, most countries in the world are facing the same issues with policy goals and implementation, and then how do you prepare teachers? We're watching the UK, I think.

Simon Peyton Jones: Yeah, I think pretty much every nation in the world is thinking hard about what they teach their children about computing and how they go about teaching it. And I don't think anybody has a monopoly on truth here. We're all trying to figure it out as we go along.
Host: Do you think there's any room in the research community for this kind of line of inquiry?

Simon Peyton Jones: Oh, tremendous! Yes. First, among computer scientists: I think, individually and collectively, computer scientists should be active in talking to their local school teachers and being on school boards of governors, because there's a seismic change taking place. It's like establishing an entirely new subject at school level. And what is that entirely new subject? Well, it's called computer science, and who would know about that? Well, computer scientists. Particularly research computer scientists. So, we may not know how to teach, we may not know much about children, but we know the subject discipline, so we should get involved. But the other thing, at the research end, that we need is research in education, right? Because computer scientists know nothing about education. What is good pedagogy for computer science concepts? How might you teach computational thinking? What role does formative assessment play? How could you use, you know, hinge-point questions to teach computing more effectively? When we teach programming, does it make sense to start from a blank sheet of paper and say, "Write a program to do X"? Or should we instead spend a lot of time showing programs and saying, "Please explain to your neighbor how this works"? Or, "Here's a program with a bug in it. Please find the bug and explain what's wrong and fix it"? There are a lot of different approaches to how you go about teaching. And we need educational research, in the end, backed by research evidence, to say which of these approaches works better.
Host: I think you've just given any number of listeners to this podcast some ideas about where they might want to go with research in the future, if they have a passion for education and for computer science.

Simon Peyton Jones: Yeah, this is it. The intersection of education and computer science is a very rich area at the moment. And everybody wants to make a difference to the education that we give our children. Because many of us have children and want to see them succeed.
Host: Listen, let's talk about another intersection that you're really interested in: theory and practice.

Simon Peyton Jones: Computer science is unusual. Like, if you're in biology, then just finding out something that is true is progress. So, novelty has value in its own right. That's true of any natural science. In computer science, novelty has no value in and of itself. It's too easy to make up new stuff. It's kind of like a fractal discipline. Everywhere you dig, you can make new details, because we're creating ideas out of nothing, out of pure thought-stuff. Fred Brooks had this wonderful Newell Award lecture. It's called "The Computer Scientist as Toolsmith." And he says computer science and its theories only have value insofar as they demonstrate utility. So that's a question asked about every paper, every research proposal I see. It's not just ideas, but utility. So, to return to your question about theory and practice: nevertheless, it's much more fun if theory and practice live quite close together. If you can use a piece of theory to give practical results, you know, and make that crossover without bending the theory too much out of shape. And in functional programming that's particularly true. So, for example, take the compiler that we built for Haskell; it's called GHC. We were struggling, in the very early 90s, to think about what its intermediate language should be like. Haskell is a very large source language; we compile it into a small intermediate language that we then transform and optimize, transform and optimize, and then finally spit out machine code. What should that intermediate language be? We wanted it to be strongly typed itself. And I was worrying about, "Oh, where could we put the types, and how would they live, and how would they survive transformation?" And Phil Wadler said to me, "You know what, Simon? We should use System F." And I sort of rocked back in my chair and thought, System F? I learned about that in an extremely theoretical seminar that I went to, run by Samson Abramsky. I thought that was of purely theoretical interest. But it turned out we ended up directly implementing System F in GHC, and it's still there to this day. It's a very pure embodiment of an idea that was developed solely in a theory context, but turned out to have immediate practical utility. And that happens again and again in functional programming. I love that.
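Roughly, and purely as a schematic illustration rather than literal GHC output, a System F-style intermediate language makes type arguments explicit:

    -- Source Haskell: the polymorphic identity function.
    identity :: a -> a
    identity x = x

    -- In a System F-style intermediate language the type argument becomes
    -- explicit (schematic notation, assumed here for illustration):
    --   identity = /\a. \(x :: a). x        -- abstract over the type a
    --   identity @Int 3                     -- apply to the type Int, then to 3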
Host: I want to ask you about a couple of videos you're in that have tens of thousands of downloads on YouTube, about how to write a research paper and how to give a research talk. Could you talk about that a little bit, and why that was important to you, and how that came about, that you became a video star on YouTube?

Simon Peyton Jones: Well, a lot of research is about communicating. As I say in these talks, no matter how brilliant you are, if you sit in a sealed room and have fantastic ideas but don't tell anybody, then all you've done is heat up the universe. You've not really made it a better place. So, communication is key. I think I wrote the first of these, the one about how to give a talk, with John Hughes and John Launchbury when we were colleagues in the same department. We had been to a lot of research talks and started talking to each other about, "Couldn't a lot of these talks be a lot better with some quite simple suggestions?" So, then we wrote them down in a SIGPLAN Notices paper, and I gave a talk about it. Then, subsequently, I developed a talk about how to write a research paper, which has been extremely popular. And it arose in the same way. I just thought, I'm reading a lot of papers, I'm reviewing a lot of papers, and some quite simple ideas, I feel, could make them a lot better. And so, I thought that it was worth putting a bit of effort into trying to articulate or distill the techniques or ideas that I used, in the hope they'd be useful to others. And to my astonishment, they seem to have been quite widely looked at, including by people in completely different disciplines, like psychology and history.
Host: Absolutely.

Simon Peyton Jones: It's really strange; I get email from the most remarkable places. Yeah, I think, in terms of citations or views or webpage hits, all the rest of this functional programming stuff is, you know, nothing.
Host: Dwarfed.

Simon Peyton Jones: That's right, dwarfed. By this "how to write a research paper."
Host: One of the most interesting things I heard you say is that computer programs are among the largest structures, or the largest things, humans have ever built. And when we look at other structures they seem enormous to our eyes, but people don't usually see the millions of lines of code behind a very small thing like a search engine box. Why do you tell that story, and what's important for us to understand about that?

Simon Peyton Jones: Well, because I think that, by and large, 99.9% of the population has no visceral, sort of, gut feel for just how complicated, remarkable and fragile our software infrastructure is. The search box looks simple, but there are millions of lines of code behind it... If you could see that in the way that you can see an aircraft carrier, or some complicated machine whose insides you can see, you'd have a more visceral sense for how amazing it is that it works at all, still less that it works so well. But you don't get that sense from a computer program, because it's so tiny, right? All of my intellectual output for my entire life, including GHC, would easily fit on a USB stick. On that little thumbnail-size thing, I've just changed some 1s to 0s and some 0s to 1s, and all the 1s and 0s were there to begin with. All I've done is change the state of some of them, as my entire professional output. And yet, these artifacts are so complex and so large that they need entirely new techniques for dealing with them. So, if you think about how a large piece of software is built, we build it with layer upon layer of abstraction. We build libraries which hide their insides but provide an API that you can call. And you build another library on top of that, and another library on top of that. And so, we manage the complexity of these gigantic systems by building abstractions, and learning how to describe those abstractions. That's another big part of what programming language people are interested in, right? So, why is that important? One, I would like people who are not computer scientists to have the idea that there is something rather amazing going on. And also, that it's so complicated, it's not surprising if it goes wrong occasionally. We shouldn't place too much trust in it, right? It's not magic. Sometimes I think people are too guilelessly trusting of computers. But also, for computer scientists, or people thinking about whether this is a field they'd like to be interested in, there's the idea of this whole remarkable wonderland of interesting complexity and creativity, right? Programming is one of the most creative disciplines in the world, where you can create completely new things that nobody has ever built before. That's something I'd like to get across to people.
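A tiny Haskell sketch of the "hide the insides, expose an API" idea; the Counter module is invented for this example:

    -- The export list exposes the Counter type and its operations,
    -- but not the constructor, so clients never see the representation.
    module Counter (Counter, newCounter, tick, count) where

    newtype Counter = Counter Int

    newCounter :: Counter
    newCounter = Counter 0

    tick :: Counter -> Counter
    tick (Counter n) = Counter (n + 1)

    count :: Counter -> Int
    count (Counter n) = n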
Host: What's the best thing about being a researcher, to you, and why would a young computer scientist want to follow in your footsteps in the field of research?

Simon Peyton Jones: Well, for me, it's been a great privilege just to be able to take one idea and follow it through. Take the idea of functional programming and run with it. And I've been able to do that, both at university for about 17 years, and then subsequently at Microsoft for rather longer now, actually - coming up on 20 years at Microsoft. And for me, this mixture of elegant, theoretical ideas that have direct, practical impact has always been a powerful motivator. So, why might a young person want to be interested in computing, whether in research or not? Because you can build amazing things out of this pure thought-stuff. Why might somebody want to go into research specifically? Well, typically, if you're working in industry you're building amazing programs out of nothing, right? In research, you build amazing ideas out of nothing.
Host: So, as we close, what thoughts would you share about your long life of research that would give the next generation, say, a vision for what might be next?

Simon Peyton Jones: So, I never had a long-term research plan. I never had an "Oh, here are the three big things I'm going to do with my life, and I'm on this 20-year trajectory to do it." I was always just doing the next thing. So, I'm not really a very long-range planner. But I did have hold of one idea, this functional programming idea. I didn't know how it would turn out. But I just found it fascinating. So, I would suggest to younger people: just start with something. I remember when I started as a researcher at University College London, I didn't have a PhD. My head of department gave me some time off to do research. But I had no idea what to do. So, I just sat there with a sharp pencil and a blank sheet of paper, hoping for great ideas to come, which of course they didn't. And then my colleague, John Washbrook, said to me, "Simon, just do something. Anything. No matter how humble and simple. Just start something." And so, I did. I wrote a little parser generator for a functional language called SASL. And that eventually turned into a research paper, as it happened. So, the wonderful thing about computer science is you can start with almost anything and it'll turn into something interesting. Don't be too worried; just get started on something that interests you.
Host: Simon Peyton Jones, thanks for coming all the way over from England on Skype with us today.

Simon Peyton Jones: Oh, it's been fun.
Host: To learn more about Dr. Simon Peyton Jones, and his work in the field of lazy functional programming languages, visit Microsoft.com/research.