December 31, 2015

2009-2010 Weekly Colloquium Series

Location: Cambridge, MA, U.S.

  • Andrzej Skrzypacz, Stanford

    When: Wednesday, July 21, 2010

    Description

    We consider a seller who wishes to sell K goods by time T. Potential buyers enter IID over time and are patient. At any point in time, profit is maximized by awarding the good to the agent with the highest valuation exceeding a cutoff. These cutoffs are characterized by a one-period-look-ahead rule and are deterministic, depending only on the number of units left and the time remaining. The cutoffs decrease over time and in the inventory size, with the hazard rate of sales increasing as the deadline approaches. In the continuous-time limit, the optimal allocation can be implemented by posted prices with an auction at time T. Unlike the cutoffs, the prices depend on the history of past sales.
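
    To make the cutoff recursion concrete, here is a minimal dynamic-programming sketch under illustrative assumptions that are not from the talk (one buyer per period, valuations Uniform[0,1], and welfare rather than revenue as the objective); the one-period-look-ahead structure is the same: a unit is awarded exactly when the arrival's valuation exceeds the marginal value of carrying the unit forward.

        # Illustrative sketch only: one Uniform[0,1] buyer per period,
        # no discounting, welfare-maximizing seller.
        def cutoffs(K, T):
            # W[k][t]: expected surplus with k units and t periods remaining.
            W = [[0.0] * (T + 1) for _ in range(K + 1)]
            c = [[0.0] * (T + 1) for _ in range(K + 1)]
            for t in range(1, T + 1):
                for k in range(1, K + 1):
                    # One-period-look-ahead: award iff the valuation exceeds
                    # the marginal continuation value of keeping the unit.
                    c[k][t] = W[k][t - 1] - W[k - 1][t - 1]
                    # E[(v - c)^+] = (1 - c)^2 / 2 for v ~ Uniform[0,1].
                    W[k][t] = W[k][t - 1] + (1 - c[k][t]) ** 2 / 2
            return c

        cut = cutoffs(K=2, T=10)
        # Cutoffs fall as the deadline nears and as inventory grows,
        # matching the comparative statics described in the abstract.
        print([round(cut[1][t], 3) for t in range(1, 11)])
        print([round(cut[2][t], 3) for t in range(1, 11)])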

    Biography

    Andrzej (Andy) Skrzypacz is the Theodore J. Kreps Professor of Economics at the Stanford Graduate School of Business. His research is on microeconomics, industrial organization, auctions, bargaining and market design. His recent papers consider auction design, innovation incentives, and collusion in markets. He teaches microeconomics in the MBA program and a PhD class on auctions, bargaining and pricing. He received his PhD in Economics from the University of Rochester in 2000 and has since been working at the Stanford GSB. He is currently an associate editor for the American Economic Review, the RAND Journal of Economics and Theoretical Economics.

  • Ted Adelson, MIT

    When: Wednesday, July 14, 2010 No recording available

    Description

    Objects are “things” but they are made of “stuff.” In both human and machine vision, there has been much research on object recognition, but rather little on material perception. My lab is exploring material perception at various levels. At the low level, we have identified certain image statistics, such as subband skewness, that are correlated with surface properties such as gloss. We suggest that there are mechanisms in early vision that are sensitive to such statistics, which could be easily computed with neural mechanisms. For our high-level studies, we are evaluating material recognition (categorization): the ability to look at something and decide whether it is, say, leather or plastic or cloth. We have assembled some image databases for use in exploring material recognition. In humans, material recognition can occur at high speed, even when the images are quite diverse. We have used machine vision and machine learning techniques in an attempt to achieve automated categorization of images in our database. While our system outperforms prior systems, it still falls far short of human performance.
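
    As a concrete illustration of the low-level statistics mentioned here, the following sketch computes the skewness of a crudely band-pass-filtered image; everything in it (the box-filter "subband", the random stand-in image) is my illustrative assumption, not the lab's actual pipeline.

        import numpy as np

        def skewness(x):
            # Third standardized moment: E[(x - mu)^3] / sigma^3.
            x = x.ravel().astype(float)
            mu, sigma = x.mean(), x.std()
            return ((x - mu) ** 3).mean() / sigma ** 3

        def highpass_skewness(img):
            # Crude bandpass: subtract a local mean (3x3 box blur).
            # A real subband decomposition would use an oriented filter pyramid.
            pad = np.pad(img.astype(float), 1, mode="edge")
            box = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)) / 9.0
            return skewness(img - box)

        # Glossier surfaces tend to produce more positively skewed
        # subband histograms, per the statistics described above.
        img = np.random.rand(64, 64)  # stand-in for a grayscale surface image
        print(skewness(img), highpass_skewness(img))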

    Biography

    Edward H. Adelson is the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences at MIT, as well as a member of the Computer Science and Artificial Intelligence Lab. His research areas include human vision, machine vision, image processing, and computer graphics. He has made significant contributions on such topics as surface perception, motion perception, multiscale representation, image coding, and computational photography. He has been awarded the Adolph Lomb Medal of the Optical Society of America, the Rank Prize in Optoelectronics, and the Longuet-Higgins award of the IEEE Computer Society. He is a member of the National Academy of Sciences and a fellow of the American Academy of Arts and Sciences.

  • Judith Donath, MIT

    When: Wednesday, June 23, 2010 No recording available

    Description

    Deception – indicating that you are smarter, nicer, or possessed of better genes than is actually the case – can be quite beneficial. Yet if deception were rampant, communication would cease to function. Signaling theory provides an economic model that shows how sufficient honesty is maintained to keep communication working. This model, which was developed in the field of theoretical biology, has also been used to understand a variety of human behaviors, e.g. cooperative hunting and religious rituals. Yet human communication differs significantly from animal signaling: we can rely on cheap conventional signals because we have coordinated sanctioning; our cultural evolution allows for a rapidly changing signal vocabulary; we can imagine other minds and deliberately manipulate impressions; we have internalized morality; and we are very creative – there are no signals, no matter how seemingly reliable, that we will not attempt to fake. In this talk I will introduce signaling theory and describe how to adapt it for modeling human communication. I will then discuss two examples of applying this theory – first to analyze the social phenomenon of fashion (in clothing, arts, academic concentrations) and second to design new interfaces for online communication.

    Biography

    Judith Donath synthesizes knowledge from fields such as urban design, evolutionary biology and cognitive science to build innovative interfaces for on-line communities and virtual identities. A Harvard Berkman Faculty Fellow and formerly director of the Sociable Media Group at MIT Media Lab, she is known internationally for her writing on identity, interface design, and social communication. She created several of the earliest social applications for the web, including the original postcard service and the first interactive juried art show. Her work with the Sociable Media Group has been shown in museums and galleries worldwide, and was recently the subject of a major exhibition at the MIT Museum. Her current research focuses on understanding the social economics underlying communication, both face to face and online. This work brings a fresh understanding of the meaning embodied in everyday phenomena, such as fashion, faces, gifts, etc. – and it provides a strong foundation for designing engaging social environments that promote cooperation and trust. She has two books in progress, one on the design of sociable media and one which explores how we signal identity in both mediated and face-to-face interactions. She received her doctoral and master’s degrees in Media Arts and Sciences from MIT, her bachelor’s degree in History from Yale University, and has worked professionally as a designer and builder of educational software and experimental media.

  • Christian Sandvig, Illinois

    When: Wednesday, June 16, 2010 No recording available

    Description

    Addresses and identifiers of all kinds wash over us in a typical day: KQED, 98.5 FM, channel 9, http://whitehouse.gov/, 1-800-HOLIDAY, lonelygirl15, Christian Sandvig, and 90210. In networks, the design of the address has serious implications for routing, usability, scalability, efficiency, security, privacy, identity, safety, intellectual property, and more. Yet in technical systems each new pool of identifiers is often created as if some of these dimensions were wholly unproblematic. In this talk I will consider addressing very broadly (including both naming and numbering) in networked infrastructures of the past in order to ask what sorts of public controversies regularly recur due to addressing. The goal will be to use older systems to learn about addressing on the Internet, as well as the reverse: to ask what lessons the Internet might hold for other infrastructures.

    Biography

    Christian Sandvig is Associate Professor in Communication, Media, and the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign (on leave). He is also a fellow of the Berkman Center for Internet & Society at Harvard University and a visiting researcher at the Sloan School of Management, MIT. His research investigates the social science and public policy of new communication infrastructure. He received the Ph.D. in Communication from Stanford University and was previously Markle Foundation Information Policy Fellow at Oxford University. He is a past recipient of the NSF CAREER award in Human-Centered Computing and was named a “next-generation leader” in science and technology policy by the American Association for the Advancement of Science.

  • Slava Gerovitch, MIT

    When: Wednesday, June 2, 2010 No recording available

    Description

    The role of onboard computers in human spaceflight gradually grew in importance from a crew’s computational aid to a central element of the spacecraft control system. At the same time, astronauts’ attitudes toward onboard computers evolved from mistrust and resistance to close integration. All essential actions of space crews are now mediated by computers. In the inherently stressful conditions of spaceflight, effective use of technology crucially depends on the division of control functions between the crew and the computer, on the trust the humans place in the system, and on the subjective sense of competition between the human and the machine. This talk will review several episodes from the history of American and Soviet spaceflight in which human-computer interaction on board played a critical role for mission success or crew survival. Lessons from history suggest focusing attention on the ways computer use restructures work practices and reshapes the professional identity of the user.

    Biography

    Slava Gerovitch is a visiting scholar in the Mathematics Department at MIT. Trained as an applied mathematician, he received his first PhD in philosophy of science from the Russian Academy of Sciences and his second PhD in history and social study of science and technology from the Science, Technology and Society Program at MIT. He is the author of From Newspeak to Cyberspeak: A History of Soviet Cybernetics (2002), a co-author of The Future of Human Spaceflight: Objectives and Policy Implications in a Global Context (2009), and the creator of the online historical project Computing in the Soviet Space Program. At MIT he has taught courses on the history of scientific computing and on the cultural history of mathematics. He is currently working on a book on human-machine issues in the Soviet space program and on a documentary film, Cyber-dreams of the Superpowers.

  • Eszter Hargittai, Northwestern

    When: Wednesday, May 26, 2010

    Description

    Much enthusiasm surrounds the potential of the Internet to improve people’s lives in numerous domains from health matters to education, from creative expression to financial independence, from political engagement to on-the-job performance. While it is easy to come up with hypotheticals on how the Web may improve people’s life chances, we know relatively little about the extent to which such potential is being met and who is more or less likely to benefit from the various opportunities. Drawing on unique data collected about a diverse group of young American adults’ Internet uses, this presentation will look at predictors of various types of online engagement and participation. In particular, the talk will point out variation in people’s online skills and how differences in Web know-how influence what people do online.

    Biography

    Eszter Hargittai is Associate Professor of Communication Studies and Faculty Associate of the Institute for Policy Research at Northwestern University, where she heads the Web Use Project. She is also a Fellow at Harvard’s Berkman Center for Internet & Society, where she spent the 2008/09 academic year in residence. Hargittai’s research focuses on the social and policy implications of information technologies, with a particular interest in how IT may contribute to or alleviate social inequalities. She is editor of the recently published book “Research Confidential”, which presents a behind-the-scenes look at the realities of doing empirical social science research. For more information, see http://www.eszter.com and http://www.webuse.org.

  • Ben Shneiderman, U of MD

    When: Thursday, May 20, 2010 (Special Date & Time) No recording available

    Description

    Interactive information visualization tools provide researchers with remarkable capabilities to support discovery. These telescopes for high-dimensional data combine powerful statistical methods with user-controlled interfaces. Users can begin with an overview, zoom in on areas of interest, filter out unwanted items, and then click for details-on-demand. With careful design and efficient algorithms, the dynamic queries approach to data exploration can provide 100msec updates even for million-record databases. This talk will start by reviewing the growing commercial success stories such as www.spotfire.com, www.smartmoney.com/marketmap and www.hivegroup.com. Then it will cover recent research progress for visual exploration of large time series data applied to financial, medical, and genomic data (www.cs.umd.edu/hcil/timesearcher). These strategies of unifying statistics with visualization are applied to electronic health records (www.cs.umd.edu/hcil/lifelines2) and social network data (www.cs.umd.edu/hcil/socialaction and www.codeplex.com/nodexl). Demonstrations will be shown.

    Biography

    Ben Shneiderman (http://www.cs.umd.edu/~ben) is a Professor in the Department of Computer Science and Founding Director (1983-2000) of the Human-Computer Interaction Laboratory (http://www.cs.umd.edu/hcil/) at the University of Maryland. He was elected a Fellow of the Association for Computing Machinery (ACM) in 1997, a Fellow of the American Association for the Advancement of Science (AAAS) in 2001, and a member of the National Academy of Engineering in 2010. He received the ACM SIGCHI Lifetime Achievement Award in 2001. Ben is the co-author with Catherine Plaisant of “Designing the User Interface: Strategies for Effective Human-Computer Interaction” (5th ed., 2010, http://www.awl.com/DTUI/). With Stu Card and Jock Mackinlay, he co-authored “Readings in Information Visualization: Using Vision to Think” (1999). With Ben Bederson he co-authored “The Craft of Information Visualization” (2003). His book “Leonardo’s Laptop” (MIT Press, 2002, http://mitpress.mit.edu/leonardoslaptop) won the IEEE book award for Distinguished Literary Contribution.

  • Joel Spencer, Courant Institute, NYU

    When: Wednesday, May 12, 2010

    Description

    Let S_1,…,S_n be subsets of {1,…,n}. A quarter century ago I showed that there is a two-coloring of the points so that all sets have discrepancy at most K\sqrt{n}, K an absolute constant. The colors \chi(j) are +1,-1 and the discrepancy of a set is the absolute value of the sum of the colors of its elements. (In matrix terms: given an n by n matrix A with all coefficients in [-1,+1], there is a vector x with values -1,+1 so that Ax has L-infinity norm at most K\sqrt{n}.) I had long conjectured that there would not be an algorithm to find such a coloring. Very recently Nikhil Bansal (IBM) has found an algorithm, which I shall describe. At its heart it uses semidefinite programming. The colors \chi(j) “float” in [-1,+1], each performing a discretized Brownian motion until being caught by the boundary. Unlike the recent Moser-Tardos work on the Lovász Local Lemma, this does not provide a new proof — the known arguments are used to prove feasibility of certain semidefinite programs. Along the way, this provides a fresh view of the original argument via a “cost equation.”
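
    In display form (my restatement, with the standard random-coloring benchmark added for comparison):

        \[
        \operatorname{disc}(\mathcal{S}) \;=\; \max_{i}\,\Bigl|\sum_{j\in S_i}\chi(j)\Bigr|,
        \qquad \chi:\{1,\dots,n\}\to\{-1,+1\}.
        \]
        % A uniformly random coloring gives disc = O(sqrt(n log n)): each set's
        % sum is concentrated by a Chernoff bound, and a union bound covers the
        % n sets. Spencer's 1985 theorem ("six standard deviations suffice")
        % removes the log factor, giving disc <= 6 sqrt(n); Bansal's algorithm
        % finds such a coloring in polynomial time.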

    Biography

    Joel Spencer is a Professor of Mathematics and Computer Science at the Courant Institute (New York). He co-founded the journal Random Structures and Algorithms and co-authored The Probabilistic Method. His Erdős number is one.

  • Sham M. Kakade (opens in new tab), Wharton

    When: Wednesday, May 5, 2010

    Description

    We consider the problem of revenue-optimal dynamic mechanism design in settings where agents’ types evolve over time as a function of their (both public and private) experience with items that are auctioned repeatedly over an infinite horizon. A central question here is understanding what natural restrictions on the environment permit the design of optimal mechanisms (note that even in the simpler static setting, optimal mechanisms are characterized only under certain restrictions). We provide a structural characterization of a natural “separable” multi-armed bandit environment (where the evolution and incentive structure of the a-priori type is decoupled from the subsequent experience in a precise sense) where dynamic optimal mechanism design is possible. Here, we present the Virtual Index Mechanism, an optimal dynamic mechanism, which maximizes the (long-term) virtual surplus using the classical Gittins algorithm. The mechanism optimally balances exploration and exploitation, taking incentives into account. We pay close attention to the applicability of our results to the (repeated) ad auctions used in sponsored search, where a given ad space is repeatedly allocated to advertisers. The value of an ad allocation to a given advertiser depends on multiple factors such as the probability that a user clicks on the ad, the likelihood that the user performs a valuable transaction (such as a purchase) on the advertiser’s website and, ultimately, the value of that transaction. Furthermore, some of the private information is learned over time, for example, as the advertiser obtains better estimates of the likelihood of a transaction occurring. We provide a dynamic mechanism that extracts the maximum feasible revenue given the constraints imposed by the need to repeatedly elicit information. One interesting implication of our results is a certain revenue equivalence between public and private experience, in these separable environments. The optimal revenue is no less than if agents’ private experience (which they are free to misreport, if they are not incentivized appropriately) were instead publicly observed by the mechanism.
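
    For readers without the mechanism-design background: the static notion that "virtual surplus" generalizes is Myerson's virtual value (standard background, not a result of this paper),

        \[
        \varphi(v) \;=\; v \;-\; \frac{1-F(v)}{f(v)},
        \]
        % where F (with density f) is the distribution of a bidder's type.
        % In a static auction, expected revenue equals expected virtual
        % surplus, so the optimal mechanism allocates to the highest
        % nonnegative virtual value. The Virtual Index Mechanism maximizes
        % the long-run analogue of this quantity, with exploration handled
        % by Gittins indices.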

    Biography

    Sham Kakade is currently an associate professor of statistics at the Wharton School at the University of Pennsylvania. He received his B.A. in Physics from the California Institute of Technology and his Ph.D. from the Gatsby Computational Neuroscience Unit affiliated with University College London. He spent the following two years as a Postdoctoral Researcher at the Department of Computer and Information Science at the University of Pennsylvania, with Michael Kearns. Subsequently, he joined the Toyota Technological Institute, where he was an assistant professor for four years. His research focuses on artificial intelligence and machine learning, and their connections to other areas such as game theory and economics.

  • Dirk Bergemann, Yale

    When: Wednesday, April 28, 2010 No recording available

    Description

    We develop solution concepts for games with incomplete information which are robust to the private and strategic information held by the agents in the game. We present epistemic foundations for these solution concepts and establish relationships between them. In particular, we characterize the solution concepts for supermodular games and potential games with private information, including a version of the beauty contest and oligopoly competition. We analyze the sensitivity of the equilibrium set to the private information and relate it to the (partial) identification problem. (Joint work with Stephen Morris, Princeton University)

    Biography

    Dirk Bergemann is Douglass and Marian Campbell Professor of Economics at Yale University. Dirk received his B.A. in economics at J.W. Goethe University in Frankfurt and his Ph.D. in economics from the University of Pennsylvania. He joined Yale in 1995 as an assistant professor, having previously served as a faculty member at Princeton University. He has been affiliated with the Cowles Foundation for Research in Economics at Yale since 1996. His research is concerned with game theory, contract theory and mechanism design. His research has been supported by grants from the National Science Foundation, the Alfred P. Sloan Research Fellowship and the German National Science Foundation. Bergemann is foreign editor for the Review of Economic Studies, and associate editor of several publications, including the American Economic Journal, Econometrica, Games and Economic Behavior and the Journal of Economic Theory. For more information see http://cowles.econ.yale.edu/faculty/bergemann.htm.

  • George Hart, Stony Brook

    When: Wednesday, April 21, 2010 No recording available

    Description

    George Hart will present and discuss examples of his mathematically informed sculptures, which generally apply computer technology in their design and/or fabrication. These include works made of metal, wood, plastic, or found objects, and often use laser-cutting, plasma-cutting or additive freeform fabrication technologies in their realization.

    Also shown will be brief videos of the assembly of some larger commissions. Mathematical and computer science aspects of these designs will be discussed. Hart is currently on sabbatical, working as Chief Content Officer in the formation of The Museum of Mathematics, which will open in New York City within two years. Some discussion and pictures of the Museum activities will also be presented.

    Biography

    George W. Hart is a Research Professor in the Computer Science Department at Stony Brook University. His primary research area is the application of mathematical and algorithmic ideas to sculpture. His artworks can be seen at many locations around the world, including MIT, Princeton, and U.C. Berkeley. For more information, see http://georgehart.com and http://momath.org.

  • Helen Nissenbaum, NYU

    When: Thursday, April 15, 2010 No recording available

    Description

    Privacy is one of the most urgent issues associated with information technology and digital media. This book claims that what people really care about when they complain and protest that privacy has been violated is not the act of sharing information itself (most people understand that this is crucial to social life) but the inappropriate, improper sharing of information. Arguing that privacy concerns should not be limited solely to concern about control over personal information, Helen Nissenbaum counters that information ought to be distributed and protected according to norms governing distinct social contexts, whether the workplace, health care, schools, or among family and friends. She warns that basic distinctions between public and private, informing many current privacy policies, in fact obscure more than they clarify. In truth, contemporary information systems should alarm us only when they function without regard for social norms and values, and thereby weaken the fabric of social life.

    Biography

    Helen Nissenbaum is Professor of Media, Culture and Communication, and Computer Science and Senior Fellow of the Information Law Institute at New York University. She is the coeditor of Academy and the Internet (2004) and Computers, Ethics, and Social Values (1995), and the author of Emotion and Focus (1985).

  • Costis Maglaras, Columbia

    When: Wednesday, April 7, 2010 No recording available

    Description

    I propose a multiclass queueing model of a limit order book, and study questions of optimal limit order placement, market impact, and optimal trade execution. The motivating application is quantitative trade execution.

    Biography

    Costis Maglaras is the David and Lyn Silfen Professor of Business at Columbia University. His research focuses on quantitative pricing and revenue management, the economics, design and operations of service systems, and financial engineering. He is the author of many research articles spanning the theory and application of stochastic modeling in a variety of fields, more recently the application of quantitative modeling in the pricing, risk management and valuation of multi-unit real estate portfolios, and in the design of portfolio trading systems and portfolio trading algorithms. He holds editorial positions in many of the flagship journals of his fields of study, he is the recipient of several research and teaching awards, and he teaches and serves as faculty director for the executive education course on Risk Management offered by Columbia Business School.

  • Galen Pickard, MIT

    When: Wednesday, March 24, 2010 No recording available

    Description

    On 5 December 2009, DARPA held a “Network Challenge” in honor of the 40th anniversary of the Internet. DARPA deployed ten moored, eight-foot wide, red weather balloons at undisclosed locations around the United States, and offered $40,000 to the first team to locate all ten balloons. DARPA allowed teams up to nine days to find the balloons, but a team of five students from the Human Dynamics group at MIT’s Media Lab completed the challenge in under nine hours. As one of these five students, I will explain how existing social networks like Facebook and Twitter were used, combined with an idea borrowed from behavioral economics that was described as “recursive incentives,” to win DARPA’s Network Challenge.
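
    The "recursive incentives" scheme is simple enough to sketch in a few lines; the dollar figures below are the ones reported at the time and should be treated as illustrative: the finder of a balloon receives $2,000, whoever recruited the finder receives $1,000, that person's recruiter $500, and so on, halving up the referral chain.

        def payouts(chain, base=2000.0):
            # chain: finder first, then the person who recruited the finder,
            # and so on up the recruitment tree for one balloon.
            out = {}
            reward = base
            for person in chain:
                out[person] = reward
                reward /= 2.0  # each step up earns half the step below
            return out

        # Example: Dana found a balloon; Carol recruited Dana; Bob recruited Carol.
        print(payouts(["dana", "carol", "bob"]))
        # {'dana': 2000.0, 'carol': 1000.0, 'bob': 500.0}
        # The geometric series bounds the cost per balloon below 2 * base =
        # $4,000 no matter how deep the chain is, which kept the scheme viable
        # while still rewarding everyone who helped spread the word.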

    Biography

    Galen is a Ph.D. student at MIT. He received undergraduate degrees in Computer Science and in Cognitive Science from MIT in 2005, and currently studies at MIT’s Media Lab under the direction of Prof. Sandy Pentland. In December, he was one of 5 organizers of the “MIT Red Balloon Challenge Team” that won the DARPA Network Challenge.

  • Nir Shavit, Tel-Aviv University and Sun Labs

    When: Wednesday, March 10, 2010 No recording available

    Description

    As multicore machines become our mainstream computing platform, their unique architectural properties (supporting synchronization and coherence) and the new classes of algorithms they are intended to run are forcing us to rethink our basic algorithms. The result is a flurry of activity in the design of concurrent algorithms and techniques to improve concurrent programming.

    This talk will provide a glimpse into the work in this area. I will use concurrent hashing as an example of this new research wave, presenting two new high performance multicore hash-table algorithms: split-ordered hashing and hopscotch hashing. These algorithms are examples of how the new hardware has necessitated alternative algorithmic approaches, and how in turn these new algorithmics carry into the sequential world.
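
    The core trick behind split-ordered hashing fits in a few lines (a sketch of the ordering idea only, not of the concurrent algorithm itself): items are kept in a single list sorted by bit-reversed hash, so doubling the number of buckets never moves an item; it only adds shortcut pointers.

        def split_order(key, bits=8):
            # Bit-reverse the hash so that growing the table from 2^i to
            # 2^(i+1) buckets splits each bucket into two contiguous runs.
            return int(format(key % (1 << bits), f"0{bits}b")[::-1], 2)

        keys = [1, 2, 3, 4, 5, 6, 7, 8]
        print(sorted(keys, key=split_order))  # [8, 4, 2, 6, 1, 5, 3, 7]
        # All even keys (bucket 0 mod 2) precede all odd keys (bucket 1),
        # and within each run the same property holds recursively for
        # mod-4 buckets, so a bucket "split" is just one new pointer.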

    Biography

    Nir Shavit is a professor of computer science at Tel Aviv University and a part-time member of Sun Labs (now part of Oracle). Nir is known for his contributions to the mathematical computability theory underlying multiprocessor machines, work for which he received the 2004 Gödel Prize. Prof. Shavit is a past program chair of the ACM PODC and SPAA conferences, and a co-author of “The Art of Multiprocessor Programming”, a textbook for multicore programming used in many universities around the world.

  • Jeff Hancock, Cornell

    When: Wednesday, February 17, 2010

    Description

    Deception is a significant and pervasive social phenomenon. At the same time, technologies have suffused almost all aspects of human communication. The intersection between deception and information technology gives rise to many questions about deception in the digital age. How does communication technology change the way we lie? Why and how do people lie in online relationships? Can people detect if they are being lied to in an email? Can computer programs identify word patterns that reveal whether someone is lying or not? This talk will examine these questions and describe some recent research that may shed some light on the answers.

    Biography

    Jeff Hancock is an Associate Professor in the Department of Communication and in the Faculty of Computing and Information Science, and co-Director of Cognitive Science at Cornell University. His work is concerned with how information technologies – such as email, instant messaging, and social networking sites – affect the way we understand and relate to one another, with a particular emphasis on deception. His research is supported by funding from the National Science Foundation and the Department of Defense, and his work on lying online has been featured in the New York Times, CNN, NPR, BBC and ABC News and the CBC documentary The Truth About Lying. Dr. Hancock earned his PhD in cognitive psychology at Dalhousie University, Canada, and joined Cornell in 2002.

  • Christos Papadimitriou, Berkeley

    When: Tuesday, February 9, 2010 No recording available

    Description

    A series of lectures from acclaimed authors and popular thinkers in research, business and technology. As the first lecture series for 2010, this program will feature Christos Papadimitriou, author of LOGICOMIX.

    Abstract

    LOGICOMIX is about the quest for the foundations of mathematics in the beginning of the 20th century, the subtle ways in which its ultimate failure brought about the computer and the curious fact that so many of its protagonists had serious brushes with insanity. This talk discusses logic as one of the three intellectual currents which eventually brought us computation.

    Biography

    Christos Papadimitriou studied in Greece and at Princeton and has taught at Harvard, MIT, Stanford and UCSD. He is currently the C. Lester Hogan Professor of Computer Science at the University of California, Berkeley. His research is about algorithms and complexity and their applications to a variety of fields such as databases, optimization, AI and robotics, game theory and economics, and evolution. He has published several of the standard textbooks in Computer Science and the novel Turing: A Novel About Computation. He is a member of the National Academy of Sciences, the National Academy of Engineering and the American Academy of Arts and Sciences. His graphic novel LOGICOMIX, written with Apostolos Doxiadis, with art by Alecos Papadatos and Annie di Donna, topped the New York Times bestseller list of the genre for 11 weeks upon its publication in the fall of 2009.

  • Tarleton Gillespie, Cornell, Communication Department

    When: Wednesday, January 20, 2010

    Description

    If the character of public discourse, long chaperoned by the broadcast networks and major publishers, is now increasingly in the hands of online media platforms like YouTube, Facebook, and Flickr, then we must ask some quite old questions about how these commercial information providers navigate protecting free speech, meeting community standards, avoiding legal liability for their content, and pursuing their own business imperatives. Though they often make the promise to openly and impartially host all content, they are of course actively making decisions about where the edges of these platforms should be: what should and should not appear, how content should be organized, what should be featured or squirreled away, and how it should be patrolled. And they’re experimenting with both traditional and novel techniques for managing this discourse: not just removal and rating, but also technical mechanisms for marking content or making it inaccessible, and emerging techniques of depending on community governance. My aim is to sketch this array of interventions and see them together as structuring contemporary public discourse, and situate them in the history of commercial obligations around free speech.

    Biography

    Tarleton Gillespie is an Assistant Professor in the Communication Department of Cornell University, with affiliations in the Department of Science and Technology Studies and the Information Science program. He is also a fellow with the Center for Internet and Society at the Stanford School of Law. His research examines the ways in which cultural discourse is structured by legal, political, and economic arrangements; much of this work has focused on debates about digital copyright. His book on the subject, Wired Shut: Copyright and the Shape of Digital Culture (http://www.wiredshut.org/), was published by MIT Press in 2007; in 2009, it was awarded Outstanding Book of the Year by the International Communication Association (ICA) and the Communication and Information Technology division of the American Sociological Association (CITASA).

  • Larry Samuelson, Dept. of Economics, Yale

    When: Friday, December 18, 2009

    Description

    We examine a repeated interaction between an agent, who undertakes experiments, and a principal who provides the requisite funding for these experiments. The agent’s actions are hidden, and the principal, who makes the offers, cannot commit to future actions. We identify the unique Markovian equilibrium (whose structure depends on the parameters) and characterize the set of all equilibrium payoffs, uncovering a collection of non-Markovian equilibria that can Pareto dominate and reverse the qualitative properties of the Markovian equilibrium. The prospect of lucrative continuation payoffs makes it more expensive for the principal to incentivize the agent, giving rise to a dynamic agency cost. As a result, constrained efficient equilibrium outcomes call for nonstationary outcomes that front-load the agent’s effort and that either attenuate or terminate the relationship inefficiently early. Link to paper: http://pantheon.yale.edu/~ls529/papers/VentureCapital08-08-31-2009.pdf

    Biography

    Larry Samuelson received his Ph.D. in economics from the University of Illinois and is now the A. Douglas Melamed Professor of Economics at Yale University. He is interested in game theory and its applications, with concentrations in evolutionary games and repeated games. “Incentives for Experimenting Agents” investigates the complications that arise in creating incentives in agency relationships that are repeated rather than isolated.

  • Stephen Morris, Princeton University

    When: Wednesday, December 16, 2009

    Description

    Economists’ analysis of mechanism design under incomplete information relies on strong assumptions about economic agents’ beliefs and common knowledge of those beliefs. This talk will survey an approach to mechanism design that aims to relax those assumptions and thus derives more robust mechanisms. The relationship to prior-free optimal mechanism design in the computer science literature will be discussed.

    Biography

    Stephen Morris is Professor of Economics at Princeton University. His research has focused on theoretical and applied game theory, with a particular focus on the role of beliefs and higher order beliefs.

  • Roz Picard, MIT Media Lab

    When: Wednesday, December 9, 2009

    Description

    This talk will demonstrate new technologies that help measure and communicate emotion, and describe ways in which these open up whole new lines of research. The technologies to be shown include software for reading real-time emotional expressions from the face, as well as autonomic arousal from the body. While most of this research has been focused on helping people on the autism spectrum and others with communication challenges, there are also hard research questions to address in understanding customer experiences and the role of emotion in decision-making. This talk will highlight ways we are advancing the state of the art (at the Media Lab) in these areas.

    Biography

    Professor Rosalind W. Picard, Sc.D. is founder and director of the Affective Computing Research Group at the Massachusetts Institute of Technology (MIT) Media Laboratory, co-director of the Things That Think Consortium, the largest industrial sponsorship organization at the lab, and leader of the new and growing Autism Communication Technology Initiative at MIT. In April 2009 she co-founded Affectiva, Inc. with Dr. Rana el Kaliouby, to commercialize technologies for emotion measurement and communication. Picard holds a Bachelor’s in Electrical Engineering with highest honors from the Georgia Institute of Technology, and Master’s and Doctorate degrees, both in Electrical Engineering and Computer Science, from MIT. Prior to completing her doctorate at MIT, she was a Member of the Technical Staff at AT&T Bell Laboratories where she designed VLSI chips for digital signal processing and developed new methods of image compression and analysis. In 1991 she joined the MIT Media Lab faculty, where she became internationally known for constructing powerful mathematical models for content-based retrieval of images, for creating new tools such as the Photobook system, and for pioneering methods of automated search and annotation in digital video. The year before she was up for tenure, she published the award-winning book Affective Computing, which was instrumental in starting a new field by that name. Picard has been awarded dozens of distinguished and named lectureships internationally and in 2005 was honored as a Fellow of the IEEE for contributions to image and video analysis and affective computing. Picard interacts regularly with industry and has consulted for companies such as Apple, AT&T, BT, HP, iRobot, and Motorola. She is a popular keynote speaker and her group’s achievements have been featured in forums for the general public such as The New York Times, The London Independent, National Public Radio, Scientific American Frontiers, ABC’s Nightline and World News Tonight with Peter Jennings, Time, Vogue, Wired, Voice of America Radio, New Scientist, and BBC’s “The Works” and “The Big Byte.”

  • Joe Silverman, Brown

    When: Wednesday, December 2, 2009 No recording available

    Description

    Public key cryptography is now 30+ years old, with new and exciting results appearing all the time, but the underlying hard mathematical problems on which (practical) public key cryptography is based are few and far between. In this talk I will discuss some of the history of public key cryptography and then describe the scant four mathematical problems, two from the 1970s, one from the 1980s, and one from the 1990s, that underlie virtually all current practical public key constructions. The emphasis will be on how these hard problems differ and on how their differences lead to public key cryptosystems with differing operating characteristics, especially in terms of speed, memory requirements, and other special properties. (Don’t Panic! Mathematical prerequisites will be kept to a minimum.)
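
    As a concrete taste of one of those four problems, the discrete logarithm (one of the 1970s pair), here is a toy Diffie-Hellman exchange; the parameter choices are my illustration, not a usable construction.

        import secrets

        # Toy Diffie-Hellman over Z_p* (illustrative only; real deployments
        # use 2048+ bit moduli or elliptic-curve groups).
        p, g = 2**127 - 1, 3              # 2^127 - 1 is a Mersenne prime

        a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
        b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent

        A = pow(g, a, p)  # sent in the clear; recovering a is a discrete log
        B = pow(g, b, p)  # sent in the clear; recovering b is a discrete log

        # Both parties compute the same shared secret without transmitting it.
        assert pow(B, a, p) == pow(A, b, p)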

    Biography

    Joseph Silverman is Professor of Mathematics at Brown University and currently a visitor at Microsoft Research New England. He is the author of seven books, including two graduate texts on elliptic curves that were awarded the American Mathematical Society Steele Prize for mathematical exposition and a recent undergraduate textbook on mathematical cryptography. He has written more than 100 research papers and is a co-inventor, with Jeff Hoffstein and Jill Pipher, of the NTRU public key cryptosystem. His primary research interests include elliptic curves, arithmetic geometry, mathematical foundations of cryptography, and number theoretic properties of dynamical systems.

  • Vijay Vazirani, GA Tech

    When: Wednesday, November 18, 2009 No recording available

    Description

    “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard for their own interest.” Each participant in a competitive economy is “led by an invisible hand to promote an end which was no part of his intention.” — Adam Smith, 1776.

    With his treatise, The Wealth of Nations, 1776, Adam Smith initiated the field of economics, and his famous quote provided this field with its central guiding principle. The pioneering work of Walras (1874) gave a mathematical formulation for this statement, using his notion of market equilibrium, and opened up the possibility of a formal ratification. Mathematical ratification came with the celebrated Arrow-Debreu Theorem (1954), which established existence of equilibrium in a very general model of the economy; however, an efficient mechanism for finding an equilibrium has remained elusive. The question of algorithmic ratification was taken up in earnest within theoretical computer science a decade ago, and attention soon gravitated to markets under piecewise-linear, concave utility functions. As it turned out, the recent resolution of this open problem did not yield the hoped-for mechanism; however, it did mark the end of the road for the current approach. It is now time to step back and plan a fresh attack, using the powerful tools of modern complexity theory and algorithms. After providing a summary of key developments through the ages and a gist of the recent results, we will discuss some ways of moving forward. (Based in part on recent work with Mihalis Yannakakis.)
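
    For readers new to the formalism, the object whose computation is at issue is the Walrasian (competitive) equilibrium; stated as standard background, not as part of the talk's new results:

        \[
        x_i \in \arg\max\bigl\{\,u_i(x) : p\cdot x \le p\cdot e_i\,\bigr\}
        \;\;\text{for every agent } i,
        \qquad
        \sum_i x_i \;=\; \sum_i e_i,
        \]
        % where p is the price vector and e_i agent i's endowment: at
        % equilibrium prices every agent buys a utility-maximizing bundle
        % within budget, and all markets clear. Arrow-Debreu guarantees such
        % prices exist; the open algorithmic question concerns computing
        % them, e.g. when each u_i is piecewise-linear and concave.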

    Biography

    Vijay Vazirani got his Bachelor’s degree from MIT in 1979, his Ph.D. from U.C. Berkeley in 1983, and is currently Professor of Computer Science at Georgia Tech. His research career has been centered around the design of algorithms, together with work on complexity theory, cryptography, coding theory, and game theory. He is best known for his work on efficient algorithms for the classical maximum matching problem (1980s), fundamental complexity-theoretic results obtained using randomization (1980s), approximation algorithms for basic NP-hard optimization problems (1990s), and efficient algorithms for computing market equilibria (current). In 2001 he published what is widely viewed as the definitive book on approximation algorithms. This book has been translated into Japanese, French and Polish, and Persian and Chinese translations are forthcoming. In 2005 he initiated work on a comprehensive volume on algorithmic game theory; the co-edited volume appeared in 2007.

  • Nigel Shadbolt, U of Southampton

    When: Wednesday, November 11, 2009

    Description

    The World Wide Web has changed almost every aspect of modern life and touches us all. We use it to shop, date, entertain, communicate and research. Its billions of pages, links and other resources comprise the largest information fabric in the history of humanity. It is fundamentally a socio-technical system connecting hundreds of millions of people in networks that are constantly changing and evolving. How much of this do we understand? From a series of straightforward engineering protocols we see the emergence of large-scale structure. What evolutionary patterns have driven the Web’s growth, and will they persist? How are tipping points reached, and can they be predicted or altered? What trends might fragment the Web? What properties create social effects, and how do social norms influence the viral uptake of Web capabilities? Answers to any of these questions would enhance our ability to maintain the Web as an accessible information technology to help humankind prosper. This talk will argue the case for a Science of the Web. This new interdisciplinary enterprise will require insights and methods from many disciplines. It demands that we understand the Web as an engineered construct that demands scientific analysis. It requires that we see the Web as a social construct that embodies all our human hopes and fears, interests and appetites. The talk will review progress to date as we seek to establish Web Science, discuss the major research insights that are emerging and look forward to the challenges ahead.

    Biography

    Nigel Shadbolt is Professor of Artificial Intelligence (AI) and Deputy Head (Research) of the School of Electronics and Computer Science at the University of Southampton. He is a Founding Director of the Web Science Research Initiative, a joint endeavour between the University of Southampton and MIT. Together with Sir Tim Berners-Lee, he has recently been given a special role by the Prime Minister to help transform public access to Government information. In its 50th anniversary year, 2006-07, Nigel was President of the British Computer Society. He is a Fellow of both the Royal Academy of Engineering and the British Computer Society. Between 2000 and 2007, he was the Director of the £7.5m EPSRC Interdisciplinary Research Collaboration in Advanced Knowledge Technologies (AKT). He has recently been awarded a further £2m by the EPSRC to build on this work. He has been involved in a wide range of entrepreneurial activities. In 2006 he was one of three founding Directors and Chief Technology Officer of Garlik Ltd, a company specialising in consumer products and services to put people and their families in control of their own digital information. In 2008 Garlik was awarded Technology Pioneer status by the Davos World Economic Forum and won the prestigious UK national BT Flagship IT Award. He is the co-author of The Spy in the Coffee Machine and has an interest in issues to do with privacy and trust in the digital age.

  • Ran Raz, Weizmann

    When: Wednesday, November 4, 2009 No recording available

    Description

    I will give an introduction to the problem of parallel repetition of two-prover games and its applications to theoretical computer science, mathematics and physics. I will then describe a recent counterexample to the strong parallel repetition conjecture (a conjecture that would have had important applications). In a two-prover (alternatively, two-player) game, a referee chooses questions $(x,y)$ according to a (publicly known) distribution, and sends $x$ to the first player and $y$ to the second player. The first player responds by $a=a(x)$ and the second by $b=b(y)$ (without communicating with each other). The players jointly win if a (publicly known) predicate $V(x,y,a,b)$ holds. The value of the game is the maximal probability of success that the players can achieve, where the maximum is taken over all protocols $a=a(x),b=b(y)$. A parallel repetition of a two-prover game is a game where the players try to win $n$ copies of the original game simultaneously. More precisely, the referee generates questions $x=(x_1,…,x_n), y=(y_1,…,y_n)$, where each pair $(x_i,y_i)$ is chosen independently according to the original distribution. The players respond by $a=(a_1,…,a_n)$ and $b=(b_1,…,b_n)$. The players win if they win simultaneously on all the coordinates, that is, if for every $i$, $V(x_i,y_i,a_i,b_i)$ holds. The parallel repetition theorem states that for any two-prover game with value $1-\epsilon$ (for, say, $\epsilon < 1/2$), the value of the game repeated in parallel $n$ times is at most $(1-\epsilon^3)^{\Omega(n/s)}$, where $s$ is the answers’ length (of the original game). The theorem has important applications in complexity theory, quantum computation, and geometry. Several researchers asked whether this bound could be improved to $(1-\epsilon)^{\Omega(n/s)}$; this question is usually referred to as the strong parallel repetition problem. A positive answer would have had important applications. We show that the answer for this question is negative.
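
    Side by side, the proven bound and the refuted strengthening (restating the abstract's formulas; the trivial bound is added for orientation):

        \[
        \mathrm{val}(G^n) \;\le\; (1-\epsilon^{3})^{\Omega(n/s)}
        \quad\text{(parallel repetition theorem)},
        \qquad
        \mathrm{val}(G^n) \;\le\; (1-\epsilon)^{\Omega(n/s)}
        \quad\text{(strong conjecture; refuted)}.
        \]
        % Here val(G) = 1 - epsilon and s is the answer length. Playing each
        % coordinate independently already gives val(G^n) >= val(G)^n =
        % (1-epsilon)^n, so the strong conjecture would have been essentially
        % the best decay one could hope for.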

    Biography

    Ran Raz is a Professor of Mathematics and Computer Science at the Weizmann Institute of Science. He received his B.Sc. in mathematics and physics and his PhD in mathematics from the Hebrew University, and after a short postdoc at Princeton University joined the Weizmann Institute. His main research area is complexity theory, with emphasis on proving lower bounds for computational models. More specifically, he is interested in Boolean and arithmetic circuit complexity, communication complexity, propositional proof theory, probabilistically checkable proofs, quantum computation and communication, and randomness and derandomization. He was a member at the Institute for Advanced Study (2000-2001, and fall 2002) and a visiting researcher at Microsoft Research Redmond (spring 2006). He is currently visiting Microsoft Research New England (July-December 2009).

  • Elizabeth Goodman, Berkeley

    When: Wednesday, October 28, 2009 No recording available

    Description

    Urban and suburban green space, as a technology of living well together, provides many benefits. But the stewardship of urban green space isn’t usually associated with information technology. This talk maps out various directions in information technologies for promoting the creation and maintenance of urban green space. It discusses how assumptions about environmentalism, technology, nature, and our relationship with living things shape the kinds of technologies people can and do make. In particular, it explores the possibilities of treating urban green spaces not as individual sites, but as parts of ecological networks that connect institutions, organizations, databases, sensors, water, plants, animals, chemicals — and of course, people.

    Biography

    After brief summer research stints at Intel Research Berkeley and Fuji-Xerox Palo Alto focusing on social interaction in public places, I joined Intel’s User Centered Design group as a design researcher. There, I focused on people-centered research and design for health and wellness. Since September 2006 I have been in the PhD program at UC Berkeley’s School of Information.

  • Peter Winkler, Dartmouth

    When: Wednesday, October 21, 2009 No recording available

    Description

    With the recent federal crackdown on banks that support online gambling, the state courts are under pressure to decide whether various games constitute gambling. The usual criterion is whether a game is “predominantly” a game of chance, as opposed to skill. Can mathematicians (or statisticians) help the courts? After all, it should be an easy matter to examine tournament data and determine whether luck or skill is the predominant factor, right? We will discuss the obstacles to this approach and propose some ways to get around them.

    Biography

    Peter Winkler is Professor of Mathematics and Computer Science, and Albert Bradley Third Century Professor in the Sciences, at Dartmouth College. A winner of the Mathematical Association of America’s Lester R. Ford Award for mathematical exposition, he is the author of about 135 mathematical research papers and holds a dozen patents in computing, cryptology, holography, optical networking and marine navigation. His research is primarily in combinatorics, probability, and the theory of computing, with forays into statistical physics. Prof. Winkler has also authored two collections of mathematical puzzles, a portfolio of compositions for ragtime piano, and (just completed) a book on uses of cryptography in the game of bridge.

  • Parag Pathak, MIT, Dept of Economics

    When: Wednesday, October 14, 2009 No recording available

    Description

    This paper introduces a method to compare direct mechanisms based on their vulnerability to manipulation or deviation from truthful reporting. We explore the following idea: if a player can manipulate mechanism psi whenever some player can manipulate mechanism phi, then psi is more manipulable than phi. Our notion generates a partial ordering on mechanisms based on their degree of manipulability. We illustrate the concept by comparing several well-known mechanisms in the matching and auction literature. The applications include comparisons between stable matching mechanisms, school choice mechanisms, auctions for internet advertising, and multi-unit auctions. This talk is based on joint work with Tayfun Sonmez of Boston College.
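
    Stated formally (a direct restatement of the criterion above, in display form):

        \[
        \psi \text{ is more manipulable than } \varphi
        \iff
        \text{for every problem, if some player can manipulate } \varphi,
        \text{ then some player can manipulate } \psi.
        \]
        % Since two mechanisms may each be manipulable where the other is
        % not, this yields only a partial order; the comparisons of matching
        % and auction mechanisms in the talk are exactly the pairs that the
        % order does rank.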

    Biography

    Parag Pathak is an economic theorist best known for his work on matching markets. He has been involved in designing the system used by the New York City Department of Education to assign students to high schools and the student assignment system in Boston Public Schools. He has also worked on various topics in applied microeconomics, including urban economics and the economics of education. He received his A.B. and S.M. from Harvard University in Applied Mathematics in 2002, and his Ph.D. from Harvard in 2007. Formerly a Junior Fellow at Harvard’s Society of Fellows, he is currently the Career Development Assistant Professor of Economics at the Massachusetts Institute of Technology.

  • Raissa D’Souza, UC Davis, Depts. of Mechanical Engineering and Computer Science

    When: Wednesday, October 7, 2009

    Description

    Networks with complex structures and functions are pervasive in the modern world, spanning social, biological and technological systems. One feature common to many such networks is a broad variation in the number of edges incident to each node, and growth by preferential attachment, whereby the rich get richer, has been assumed as an explanatory axiom. I will show that an underlying local optimization mechanism can in fact give rise to preferential attachment. Another complex feature of evolving networks is that they exhibit phase transitions, such as the sudden emergence of large-scale connectivity. I show that a variant of the classic Erdős-Rényi model of network formation (using the power of two choices) can alter the location and also the nature of the phase transition, making for an explosive onset of connectivity. Finally, in the past ten years a general theory of networks has been developing, but it applies only to isolated networks. In reality, individual networks are increasingly interdependent (e.g., the Internet and the power grid, globalization of financial markets and of social networks). I show that interactions between different types of networks can actually lower the critical threshold, allowing large-scale connectivity to be achieved with fewer overall connections, with implications for the spread of disease across geographic regions and the design of simple communications networks.
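
    A minimal simulation of the "power of two choices" variant mentioned above (the product rule shown here is one standard choice; treat the details as illustrative): at each step two random candidate edges are drawn and the one whose endpoints lie in smaller components is added, which delays the emergence of large-scale connectivity and sharpens its onset.

        import random

        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def grow(n, steps, two_choices):
            parent, size, largest, history = list(range(n)), [1] * n, 1, []
            for _ in range(steps):
                cands = [(random.randrange(n), random.randrange(n)) for _ in range(2)]
                if two_choices:
                    # Product rule: keep the edge joining smaller components.
                    score = lambda e: (size[find(parent, e[0])] *
                                       size[find(parent, e[1])])
                    u, v = min(cands, key=score)
                else:
                    u, v = cands[0]               # plain Erdos-Renyi growth
                ru, rv = find(parent, u), find(parent, v)
                if ru != rv:
                    if size[ru] < size[rv]:
                        ru, rv = rv, ru
                    parent[rv] = ru
                    size[ru] += size[rv]
                    largest = max(largest, size[ru])
                history.append(largest / n)       # largest-component fraction
            return history

        n = 2000
        for rule in (False, True):
            h = grow(n, int(0.75 * n), two_choices=rule)
            print("two-choices" if rule else "one-choice ",
                  round(h[n // 2], 3), round(h[-1], 3))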

    Biography

    Raissa D’Souza is an Associate Professor at UC Davis in Mechanical Engineering and Computer Science, as well as an External Professor at the Santa Fe Institute. She received a PhD in statistical physics from MIT, then was a postdoctoral scholar, first at Bell Labs, then in the Theory Group at Microsoft Research. Dr. D’Souza’s research focuses on self-organization, phase transitions and networked interactions occurring in natural and engineered systems, and her publications span the fields of physics, computer science, and applied mathematics.

  • Salvatore Torquato, Princeton, Dept of Chemistry

    When: Wednesday, September 30, 2009 No recording available

    Description

    Packing problems, such as how densely nonoverlapping particles fill d-dimensional Euclidean space R^d, are ancient and still provide fascinating challenges for scientists and mathematicians [1,2]. Bernal has remarked that “heaps” (particle packings) were the first things that were ever measured, in the form of basketfuls of grain for the purpose of trading or of collection of taxes. While maximally dense packings are intimately related to classical ground states of matter, disordered sphere packings have been employed to model glassy states of matter. There has been a resurgence of interest in maximally dense sphere packings in high-dimensional Euclidean spaces [3,4], which is directly related to the optimal way of sending digital signals over noisy channels. I begin by describing “order” maps to classify jammed sphere packings, which enable one to view a host of packings with varying degrees of disorder as extremal structures. I then discuss work that provides a putative exponential improvement on the 100-year-old lower bound, due to Minkowski, on the maximal packing density in R^d in the asymptotic limit d → ∞ [4]. Our study suggests that disordered (rather than ordered) sphere packings may be the densest for sufficiently large d – a counterintuitive possibility. Finally, I describe recent work to find and characterize dense packings of three-dimensional nonspherical particles of various shapes, including the Platonic and Archimedean solids [5]. We conjecture that the densest packings of the Platonic and Archimedean solids with central symmetry are given by their corresponding densest lattice packings. This is the analogue of Kepler’s sphere conjecture for these solids.
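
    For orientation, the classical bound in question is Minkowski's (standard background, not the talk's new result):

        \[
        \phi_{\max}(d) \;\ge\; \frac{\zeta(d)}{2^{\,d-1}},
        \qquad
        \zeta(d)=\sum_{k\ge 1} k^{-d},
        \]
        % so the guaranteed density decays roughly like 2^{-d}. The
        % "exponential improvement" described above concerns lower bounds of
        % the form 2^{-cd} with c < 1, conjecturally achieved by disordered
        % rather than ordered packings.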

    Biography

    Salvatore Torquato is Professor of Chemistry and the Princeton Institute for the Science and Technology of Materials at Princeton University. He is a Senior Faculty Fellow in the Princeton Center for Theoretical Science. He also holds appointments in four departments at Princeton: Physics, Applied and Computational Mathematics, Chemical Engineering, and Mechanical & Aerospace Engineering. He is broadly interested in the fundamental microscopic understanding of the structure and bulk properties of condensed matter using statistical mechanics. His current work has been focused on self-assembly theory, particle packing problems, quasicrystals, optimal multifunctional material design and cancer modeling. He has published over 280 journal articles and a book entitled “Random Heterogeneous Materials.” He is the recipient of numerous awards/honors, including the American Physical Society 2009 Adler Lectureship Award for Materials Physics, Society for Industrial and Applied Mathematics 2007 Ralph E. Kleinman Prize and Society of Engineering Science 2004 William Prager Medal.

  • Yiling Chen, Harvard, Dept of Computer Science

    When: Wednesday, September 23, 2009 No recording available

    Description

    Situated in a de facto standard market maker mechanism, this talk presents some recent results on analyzing and designing prediction markets for information aggregation. We take both economic and computational perspectives. From the economic perspective, we use game-theoretic analysis to investigate the equilibrium behavior of informed traders in prediction markets. We examine which information structures lead to truthful play by traders, meaning that traders reveal all of their information honestly as soon as they are able, and when traders have an incentive to lie about their own information, intending to strategically mislead other traders and profit later by correcting their errors. From the computational perspective, we design expressive betting languages for combinatorial prediction markets and examine the computational problem of pricing such markets. In our combinatorial markets, traders can submit bets of the form “horse A finishes in position 1, 2, or 5”, “horse C beats horse D”, “a Democrat wins Ohio and Florida”, or “Duke wins a third-round game in the NCAA basketball tournament”. We find that pricing such markets is computationally intractable except for single-elimination tournament betting, which is the first example of a tractable market-maker-driven combinatorial market. Joint work with Stanko Dimitrov, Lance Fortnow, Sharad Goel, Rica Gonen, Robin D. Hanson, Nicolas Lambert, David M. Pennock, Daniel M. Reeves, Rahul Sami, and Jennifer Wortman Vaughan.
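
    The “de facto standard market maker mechanism” here is presumably Hanson’s logarithmic market scoring rule (LMSR). Below is a minimal sketch of how such a market maker quotes prices and charges for trades; the liquidity parameter b is an illustrative choice.

    ```python
    import math

    def lmsr_cost(q, b=100.0):
        """Hanson's LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
        m = max(qi / b for qi in q)  # shift for numerical stability
        return b * (m + math.log(sum(math.exp(qi / b - m) for qi in q)))

    def lmsr_prices(q, b=100.0):
        """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b)."""
        m = max(qi / b for qi in q)
        w = [math.exp(qi / b - m) for qi in q]
        total = sum(w)
        return [wi / total for wi in w]

    def trade_cost(q, delta, b=100.0):
        """A trader buying the share bundle delta pays C(q + delta) - C(q)."""
        q_new = [qi + di for qi, di in zip(q, delta)]
        return lmsr_cost(q_new, b) - lmsr_cost(q, b)

    # Two-outcome market with no trades yet:
    q = [0.0, 0.0]
    print(lmsr_prices(q))              # [0.5, 0.5]
    print(trade_cost(q, [10.0, 0.0]))  # ~5.1: slightly above 10 shares * 0.5
    ```

    The computational difficulty the abstract refers to arises when the outcome space is combinatorial: the sums over outcomes inside C(q) can then range over exponentially many terms.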

    Biography

    Yiling Chen is an Assistant Professor of Computer Science at Harvard University. She received her Ph.D. in Information Sciences and Technology from the Pennsylvania State University. Prior to working at Harvard, she spent two years at the Microeconomic and Social Systems group of Yahoo! Research in New York City. Her general research interests are on the border of computer science and economics. She is interested in designing and analyzing social computing systems according to both computational and economic objectives.

  • Elchanan Mossel, Weizmann Institute and U.C. Berkeley

    When: Wednesday, September 16, 2009 No recording available

    Description

    Results established in economics in the 1950s-70s imply that there is no rational way to rank or elect a winner when there are 3 or more alternatives. We will consider the following question: is it possible to rank or elect a winner in a way that is typically rational, even if for some specific, unlikely preferences of the voters it is not? The talk will provide historical background on the topic as well as hints at some of the exotic mathematical tools that are used in this theory. These include inverse hypercontractive estimates, non-linear invariance and a new isoperimetric theory involving interfaces between 3 bodies.
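
    The classical impossibility alluded to can be seen concretely in a three-voter example, where pairwise majority preferences cycle (the Condorcet paradox). A tiny sketch with a hypothetical preference profile:

    ```python
    from itertools import combinations

    # Three voters with cyclic preferences over {A, B, C}; each list
    # ranks the alternatives from most to least preferred.
    profile = [["A", "B", "C"],
               ["B", "C", "A"],
               ["C", "A", "B"]]

    def majority_prefers(profile, x, y):
        """True if a strict majority of voters rank x above y."""
        wins = sum(1 for ranking in profile
                   if ranking.index(x) < ranking.index(y))
        return wins > len(profile) / 2

    # Pairwise majority votes form a cycle: A beats B, B beats C, C beats A.
    for x, y in combinations("ABC", 2):
        if majority_prefers(profile, x, y):
            print(x, "beats", y)
        elif majority_prefers(profile, y, x):
            print(y, "beats", x)
    ```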

    Biography

    Prof. Elchanan Mossel studies mathematical, probabilistic, and algorithmic problems arising in the theory of computing, as well as in such areas as molecular biology, evolution and social choice. He received his PhD from the Hebrew University, conducted postdoctoral studies in the theory group of Microsoft’s research division, and was a Miller fellow at U.C. Berkeley. He is now a faculty member in the departments of Statistics and Computer Science at U.C. Berkeley and a member of the faculty of Mathematics and Computer Science at the Weizmann Institute. He has received a number of prestigious grants and awards, including an Alfred P. Sloan Fellowship in Mathematics, a National Science Foundation Career Award and a BSF Bergman prize.

  • Bill Freeman, MIT

    When: Wednesday, September 9, 2009

    Description

    I’ll give my view of the frontier of computer vision and where I feel computer vision needs advances from computer science and machine learning. This will cover where computer vision works well (finding cars and faces, operating in controlled environments) and where it doesn’t work well (the uncontrolled settings of daily life). There will be a quick history of object recognition. I’ll list a number of computer vision problems, describe their structure, and tell where we need help.

    Biography

    Bill Freeman is Professor of Computer Science at the Massachusetts Institute of Technology, having joined the faculty in 2001. His current research interests include machine learning applied to computer vision and graphics, and computational photography. Dr. Freeman is active in the program and organizing committees of the major computer vision, graphics, and machine learning conferences, and was the program co-chair for the International Conference on Computer Vision (ICCV) in 2005. From 1981 to 1987, he worked at Polaroid, developing image processing algorithms for electronic cameras and printers. In 1987-88, Dr. Freeman lived in China, working as a Foreign Expert at the Taiyuan University of Technology, P.R. China. From 1992 to 2001 he worked at Mitsubishi Electric Research Labs (MERL), in Cambridge, MA, most recently as Senior Research Scientist and Associate Director. He holds 25 patents and is an IEEE Fellow. A hobby is flying cameras in kites.

  • Erik Demaine, MIT

    When: Wednesday, September 2, 2009

    Description

    When I was six years old, my father Martin Demaine and I designed and made puzzles as the Erik and Dad Puzzle Company, which distributed its puzzles to toy stores across Canada. So began our journey into the interactions between algorithms and the arts (here, puzzle design). More and more, we find that our mathematical research and artistic projects converge, with the artistic side inspiring the mathematical side and vice versa. Mathematics itself is an art form, and through other media such as sculpture, puzzles, and magic, the beauty of mathematics can be brought to a wider audience. These artistic endeavors also provide us with deeper insights into the underlying mathematics, by providing physical realizations of objects under consideration, by pointing to interesting special cases and directions to explore, and by suggesting new problems to solve (such as the metapuzzle of how to solve a puzzle). This talk will give several examples in each category, from how our first font design led to a universality result in hinged dissections, to how studying curved creases in origami led to sculptures at MoMA. The audience will be expected to participate in some live magic demonstrations.

    Biography

    Erik Demaine is Associate Professor in computer science at the Massachusetts Institute of Technology. Demaine’s research interests range throughout algorithms, from data structures for improving web searches to the geometry of understanding how proteins fold to the computational difficulty of playing games. He received a MacArthur Fellowship (2003) as a “computational geometer tackling and solving difficult problems related to folding and bending–moving readily between the theoretical and the playful, with a keen eye to revealing the former in the latter”. He cowrote a book about the theory of folding, together with Joseph O’Rourke, called Geometric Folding Algorithms: Linkages, Origami, Polyhedra (Cambridge University Press, 2007), and a book about the computational complexity of games, together with Robert Hearn, called Games, Puzzles, and Computation (A K Peters, 2009). His interests span the connections between mathematics and art, particularly sculpture and performance, including curved origami sculptures in the permanent collection of Museum of Modern Art (MoMA), New York.

  • Eric Maskin, Institute for Advanced Study

    When: Tuesday, September 1, 2009

    Description

    We show that there is a sense in which the Condorcet method (simple majority rule) is less vulnerable to strategic voting than any other reasonable voting rule satisfying independence of irrelevant alternatives (IIA). If we drop the requirement of IIA, then Condorcet and the Borda method (rank-order voting) are jointly the least vulnerable to strategizing. Joint work of P. Dasgupta and E. Maskin.
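
    As a point of reference (not part of the paper), here is a toy comparison of the two rules on a hypothetical profile where the Condorcet and Borda winners differ:

    ```python
    def condorcet_winner(profile, alts):
        """Alternative beating every other in pairwise majority, if one exists."""
        def beats(x, y):
            return sum(r.index(x) < r.index(y) for r in profile) > len(profile) / 2
        for x in alts:
            if all(beats(x, y) for y in alts if y != x):
                return x
        return None

    def borda_winner(profile, alts):
        """Rank-order voting: place k (0-indexed) earns len(alts) - 1 - k points."""
        scores = {a: 0 for a in alts}
        for r in profile:
            for k, a in enumerate(r):
                scores[a] += len(alts) - 1 - k
        return max(scores, key=scores.get)

    # Hypothetical profile: A wins every pairwise majority contest,
    # but B collects enough second-place points to win under Borda.
    alts = ["A", "B", "C"]
    profile = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
    print(condorcet_winner(profile, alts))  # A
    print(borda_winner(profile, alts))      # B
    ```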

    Biography

    Eric Maskin is an economic theorist best known for his work on the theory of mechanism design. For laying the foundations of this field he shared the 2007 Nobel Memorial Prize in Economics. He has also made contributions to game theory, social choice theory, voting theory, monetary theory, contract theory, and the economics of intellectual property, among other areas. He is currently Albert O. Hirschman Professor of Social Science at the Institute for Advanced Study, Princeton. A.B., Harvard University, 1972; Ph.D., Harvard, 1976; Research Fellow, Jesus College, Cambridge University, 1976-77; Assistant and Associate Professor, Massachusetts Institute of Technology, 1977-81; Professor, MIT, 1981-84; Professor, Harvard, 1985-2000; Louis Berkman Professor of Economics, Harvard, 1997-2000; Albert O. Hirschman Professor, Institute for Advanced Study, 2000-; Visiting Lecturer, Princeton University, 2000-; Editor, Quarterly Journal of Economics, 1984-1990; Editor, Economics Letters, 1992-; Director, Summer School in Economic Theory, Hebrew University of Jerusalem, 2008-; President-elect, Game Theory Society, 2008-2010; Guggenheim Fellow, 1981; Sloan Foundation Fellow, 1983-85; Fellow, American Academy of Arts and Sciences; Fellow, Econometric Society (President, 2003); Member, National Academy of Sciences; Corresponding Fellow, British Academy; Honorary Fellow, St John’s College, Cambridge; Kempe Award in Environmental Economics, 2007; Honorary Professor, Wuhan, Tsinghua, and Shenzhen Universities and Higher School of Economics, Moscow; Nobel Memorial Prize in Economic Sciences, 2007; Doctor of Humane Letters, Bard College, 2008; Doctor honoris causa, Corvinus University of Budapest, 2008; Grand Medal of the City of Marseille, 2009.