{"id":247136,"date":"2016-07-01T15:38:47","date_gmt":"2016-07-01T22:38:47","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=247136"},"modified":"2022-08-31T12:43:43","modified_gmt":"2022-08-31T19:43:43","slug":"microsoft-research-colloquium","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/microsoft-research-colloquium\/","title":{"rendered":"Microsoft Research Colloquium"},"content":{"rendered":"\n\n\n\n\n

Important update: Due to COVID-19, the MSR New England Colloquium Series has been put on hiatus until further notice. We look forward to being able to host externally facing colloquiums again soon.

The Microsoft Research Colloquium at Microsoft Research New England focuses on research in the foundational aspects of computer science, mathematics, economics, anthropology, and sociology. With an interdisciplinary flavor, this colloquium series features some of the foremost researchers in their fields talking about their research, breakthroughs, and advances. The agenda typically consists of approximately 50 minutes of prepared presentation and a brief Q&A, followed immediately by a reception* to meet the speaker and address detailed questions. We welcome members of the local academic community to attend.


Mobiles and Micropayments as Tools in Global Development – Bill Thies

Bill Thies, MSR New England | December 04, 2019

Abstract:

With the global proliferation of mobile phones has also come a rise of inclusive financial services, many of them accessible via mobile phones. This talk will describe two projects, both works in progress, that seek to leverage a combination of mobiles and micropayments to advance development goals in low-income communities. In the Learn2Earn project, we seek to leverage mobile payments to bolster the effectiveness of public awareness campaigns, while in Project Karya, we seek to build a mobile platform for dignified crowdsourced work. Both systems are being piloted in various settings in rural India, and have ripe opportunities for collaboration as they transition to scaled deployments.

Biography:

Bill Thies has been a researcher at Microsoft Research since 2008, and at Microsoft Research New England since 2017. His research focuses on building computer systems that positively impact lower-income communities, primarily in developing countries. Previously, Bill worked on programming languages and compilers for multicore architectures and microfluidic chips. He earned his B.S., M.Eng., and Ph.D. degrees from the Massachusetts Institute of Technology.

Where Do Trends Come From? – Devon Powers

Devon Powers, Temple University | November 20, 2019

Abstract:

We're all familiar with trending: the word we use to describe "relevant right now" and a prominent feature of our social media platforms. Yet before there was trending, there were trends: broad dynamics that redirect culture, traceable shifts in dress, language, beliefs, and ways of doing and being. For businesses, the ability to identify, understand, and anticipate trends brings enormous benefits, since they can adjust their planning and behavior accordingly. Since trends suggest the direction that culture might go, they have given rise to an industry dedicated to finding them, interpreting them, and turning them into corporate products that help steer the future. This talk will tell the story of that industry, exploring how and why the trends business came to be and examining how trend professionals do their work. Human trend forecasters digest abundant information in search of patterns and use these patterns to make educated guesses about the course of cultural change. In the talk, I will discuss some of the methods and strategies trend forecasters use, how these strategies came to be understood as "futurism," and why forecasters are adamant that their work will never be automated, even as trend companies experiment with machine learning, artificial intelligence, and data science. The talk draws from my recent book On Trend: The Business of Forecasting the Future (2019).

Biography:

Devon Powers is Associate Professor of Advertising at Temple University. She is the author of On Trend: The Business of Forecasting the Future (2019) and Writing the Record: The Village Voice and the Birth of Rock Criticism (2013), and co-editor of Blowing Up the Brand: Critical Perspectives on Promotional Culture (2010). Her research explores consumer culture, cultural circulation, and promotion. Her work has appeared in the Journal of Consumer Culture, New Media & Society, Critical Studies in Media and Communication, and Popular Communication, among other venues, and she is the chair (2018-2020) of the Popular Communication division of the International Communication Association (ICA).

Designing Restorative Approaches to Moderating Adversarial Online Interactions – Cliff Lampe

Cliff Lampe, University of Michigan | November 06, 2019 | Video

Abstract:

Restorative justice is the idea that remediating a "crime" should focus on supporting the victim, restoring the community, and returning the perpetrator to good community standing. In the U.S. justice system, restorative approaches have been used in parallel with more common retributive approaches and have had strong outcomes in terms of reduced recidivism. The concept of "crime" in online space is more generally referred to as harassment, bullying, trolling, or a variety of other terms related to adversarial interactions. Content moderation that uses retributive approaches is constrained in its effectiveness, so our project is looking to design content moderation approaches that use restorative justice. The project I will describe is in the early stages of applying restorative approaches to moderating adversarial online interactions. I will address theories of retributive and restorative justice, and show how those have been connected to different moderation mechanisms. I will present research we've done on "retributive harassment" and connect that to an upcoming research agenda related to restorative justice.

Biography:

Cliff Lampe is a Professor at the University of Michigan School of Information. His work on social media and online communities has been widely cited over the past 15 years. Cliff's general interest is how interaction in social computing platforms leads to positive outcomes, and how to overcome barriers to those positive outcomes. His work touches on social capital development via social media interactions, the benefits of anonymity, civic technology, alt-right organization in online spaces, and similar topics. Dr. Lampe is also the Director of the Citizen Interaction Design Program at the University of Michigan, which works closely with city governments to improve citizenship opportunities through the targeted application of information technology. He is a Distinguished Member of the ACM, and has various service roles in the HCI research community.

Efficient Computing for AI and Robotics – Vivienne Sze

Vivienne Sze, MIT | October 30, 2019

Abstract:

Computing near the sensor is preferred over the cloud due to privacy and/or latency concerns for a wide range of applications including robotics/drones, self-driving cars, smart Internet of Things, and portable/wearable electronics. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to the throughput and accuracy requirements of the application. In this talk, we will describe how joint algorithm and hardware design can be used to reduce energy consumption while delivering real-time and robust performance for applications including deep learning, computer vision, autonomous navigation/exploration, and video/image processing. We will show how energy-efficient techniques that exploit correlation and sparsity to reduce compute, data movement, and storage costs can be applied to various tasks including image classification, depth estimation, super-resolution, localization, and mapping.
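
As a rough illustration of the sparsity idea mentioned above (a minimal Python sketch, not code from the talk; the function and data are invented for the example), skipping zero-valued activations in a matrix-vector product avoids multiply-accumulate operations and the associated data movement:

import numpy as np

def sparse_matvec(weights, activations):
    # Compute weights @ activations while skipping zero activations.
    # With highly sparse inputs (e.g., after a ReLU), this saves
    # multiply-accumulates and data movement, one source of the
    # energy savings described in the abstract.
    out = np.zeros(weights.shape[0])
    for j in np.flatnonzero(activations):   # only visit useful columns
        out += weights[:, j] * activations[j]
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))
a = rng.standard_normal(256) * (rng.random(256) < 0.1)  # roughly 90% zeros
assert np.allclose(sparse_matvec(W, a), W @ a)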

Biography:

Vivienne Sze is an Associate Professor at MIT in the Electrical Engineering and Computer Science Department. Her research interests include energy-aware signal processing algorithms, and low-power circuit and system design for portable multimedia applications, including computer vision, deep learning, autonomous navigation, and video processing/coding. Prior to joining MIT, she was a Member of Technical Staff in the R&D Center at TI, where she designed low-power algorithms and architectures for video coding. She also represented TI in the JCT-VC committee of the ITU-T and ISO/IEC standards body during the development of High Efficiency Video Coding (HEVC), which received a Primetime Engineering Emmy Award. She is a co-editor of the book "High Efficiency Video Coding (HEVC): Algorithms and Architectures" (Springer, 2014). Prof. Sze received the B.A.Sc. degree from the University of Toronto in 2004, and the S.M. and Ph.D. degrees from MIT in 2006 and 2010, respectively. In 2011, she received the Jin-Au Kong Outstanding Doctoral Thesis Prize in Electrical Engineering at MIT. She is a recipient of the 2018 Facebook Faculty Award, the 2018 & 2017 Qualcomm Faculty Award, the 2018 & 2016 Google Faculty Research Award, the 2016 AFOSR Young Investigator Research Program (YIP) Award, the 2016 3M Non-Tenured Faculty Award, the 2014 DARPA Young Faculty Award, the 2007 DAC/ISSCC Student Design Contest Award, and a co-recipient of the 2018 Symposium on VLSI Circuits Best Student Paper Award, the 2017 CICC Outstanding Invited Paper Award, the 2016 IEEE Micro Top Picks Award, and the 2008 A-SSCC Outstanding Design Award. For more information about research in the Energy-Efficient Multimedia Systems Group at MIT visit: http://www.rle.mit.edu/eems/

Computer Security and Safety for Victims of Intimate Partner Violence – Nicki Dell

Nicki Dell, Cornell Tech | October 02, 2019

Abstract:

Digital technologies, including mobile devices, cloud computing services, and social networks, play a nuanced role in intimate partner violence (IPV) settings, including domestic abuse, stalking, and surveillance of victims by abusive partners. In this talk, I'll cover our recent and ongoing work on understanding technology's role in IPV, improving the privacy and security of current technologies, and designing new tools and systems that increase security, privacy, and safety for victims.

Biography:

Nicki Dell is an Assistant Professor at Cornell University based at the Cornell Tech campus in New York City. Her research interests are in human-computer interaction (HCI) and information and communication technologies and development (ICTD), with a focus on designing, building, and evaluating novel computing systems that improve the lives of underserved populations in the US and around the world. At Cornell, Nicki is part of the Center for Health Equity, the Digital Life Initiative, and the Atkinson Center for a Sustainable Future, and she co-leads a research team studying computer security and privacy in the context of intimate partner violence. Her research is funded by the NSF, RWJF, Google, Facebook, and others.

Work of the Past, Work of the Future – David Autor

David Autor, MIT | September 04, 2019

Abstract:

David plans on presenting a version of his Ely Lecture, Work of the Past, Work of the Future, which can be found here: https://www.nber.org/papers/w25588

Biography:

David Autor, one of the leading labor economists in the world and a member of the American Academy of Arts and Sciences, is a professor and associate department head of the Massachusetts Institute of Technology Department of Economics. He is also a faculty research associate of the National Bureau of Economic Research and editor in chief of the Journal of Economic Perspectives. His current fields of specialization include human capital and earnings inequality, labor market impacts of technological change and globalization, disability insurance and labor supply, and temporary help and other intermediated work arrangements. Dr. Autor received a BA in psychology from Tufts University and a PhD in public policy at Harvard University's Kennedy School of Government.

Large Networks, from Mathematics to Machine Learning – Christian Borgs

Christian Borgs, MSR NE | Wednesday, August 28, 2019

Abstract:

Graphons were invented to model the limit of large, dense graphs. While this led to interesting applications in combinatorics, most applications require limits of sparse graphs. In this talk, I will review the notion of graph limits for both dense and sparse graphs, and discuss a couple of applications: non-parametric modelling of sparse graphs, and recommendation systems where the matrix of known ratings is so sparse that two typical users have never rated the same item, making standard similarity-based recommendation algorithms challenging. This is joint work with Jennifer Chayes, Henry Cohn, and several others.

Biography:

Christian Borgs is deputy managing director and co-founder of Microsoft Research New England in Cambridge, Massachusetts. He studied physics at the University of Munich, the University Pierre et Marie Curie in Paris, the Institut des Hautes Etudes in Bures-sur-Yvettes, and the Max-Planck-Institute for Physics in Munich. He received his Ph.D. in mathematical physics from the University of Munich, held a postdoctoral fellowship at ETH Zurich, and received his Habilitation in mathematical physics from the Free University in Berlin. After his Habilitation he became the C4 Chair for Statistical Mechanics at the University of Leipzig, and in 1997 he joined Microsoft Research to co-found the Theory Group. He was a manager of the Theory Group until 2008, when he co-founded Microsoft Research New England. Christian Borgs is well known for his work on the mathematical theory of first-order phase transitions and finite-size effects, for which he won the 1993 Karl-Scheel Prize of the German Physical Society. Since joining Microsoft, Christian Borgs has become one of the world leaders in the study of phase transitions in combinatorial optimization and, more generally, the use of methods from statistical physics and probability theory in problems of interest to computer science and technology. He is one of the top researchers in the modeling and analysis of self-organized networks (such as the Internet, the World Wide Web, and social networks), as well as the analysis of processes and algorithms on networks.

Deploying Differential Privacy for the 2020 Census of Population and Housing – Simson L. Garfinkel

Simson L. Garfinkel, US Census Bureau / MIT | Wednesday, August 21, 2019

Abstract:

When differential privacy was created more than a decade ago, the motivating example was statistics published by an official statistics agency. In theory there is no difference between theory and practice, but in practice there is. In attempting to transition differential privacy from theory to practice, and in particular for the 2020 Census of Population and Housing, the U.S. Census Bureau has encountered many challenges unanticipated by differential privacy's creators. Many of these challenges had less to do with the mathematics of differential privacy and more to do with operational requirements that differential privacy's creators had not discussed in their writings. These challenges included obtaining qualified personnel and a suitable computing environment, the difficulty of accounting for all uses of the confidential data, the lack of release mechanisms that align with the needs of data users, the expectation on the part of data users that they will have access to micro-data, the difficulty in setting the value of the privacy-loss parameter, ε (epsilon), the lack of tools and trained individuals to verify the correctness of differential privacy, and push-back from some members of the data user community. Addressing these concerns required developing a novel hierarchical algorithm that makes extensive use of a high-performance commercial optimizer; transitioning the computing environment to the cloud; educating insiders about differential privacy; engaging with academics, data users, and the general public; and redesigning both data flows inside the Census Bureau and some of the final data publications to be in line with the demands of formal privacy.
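
For readers unfamiliar with the privacy-loss parameter ε mentioned above, a minimal sketch of the basic Laplace mechanism (illustrative only; the Census Bureau's actual hierarchical algorithm is far more involved) shows how ε trades privacy against noise:

import numpy as np

def laplace_release(true_count, epsilon, sensitivity=1.0, rng=None):
    # Adding Laplace noise with scale sensitivity/epsilon gives an
    # epsilon-differentially-private answer to a counting query.
    # Smaller epsilon means stronger privacy but a noisier statistic.
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(0)
for eps in (0.1, 1.0, 10.0):
    print(eps, round(laplace_release(1000, eps, rng=rng), 1))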

Biography:

Simson Garfinkel received undergraduate degrees in Chemistry, Political Science, and the Science, Technology and Society program from the Massachusetts Institute of Technology in 1987; an MS in Journalism from Columbia University in 1988; and a PhD in Computer Science from MIT in 2005. He has over 30 years of research and development experience with over 50 publications in peer-reviewed journals and conferences. His research interests include digital forensics, usable security, and technology transfer. In 2017 Garfinkel was appointed the Senior Computer Scientist for Confidentiality and Data Access at the US Census Bureau; he was previously a Senior Advisor at the US National Institute of Standards and Technology, and an Associate Professor in the Computer Science Department at the Naval Postgraduate School. He is a fellow of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers, and teaches as an adjunct faculty member at George Mason University in Vienna, Virginia. Garfinkel shared the 2017 NIST Information Technology Laboratory Outstanding Standards Document Award for NIST SP 800-188, Trustworthy Email, and the 2011 Department of Defense Value Engineering Achievement Award for his leadership in the Bulk Extractor Program. He has received three Best Paper awards at the DFRWS digital forensics research symposium, as well as multiple national awards for his work in technology journalism. Garfinkel is the author or co-author of fifteen books on computing. His most recent book is The Computer Book, which features 250 chronologically arranged milestones in the history of computing from the ancient abacus (c. 2500 BCE) to the limits of computation far in the future. He is also known for Database Nation, which explored privacy issues, and Practical UNIX and Internet Security, which sold more than 250,000 copies.

Shifting the Balance Between Individuals & Data Processors: Asserting Cooperative Autonomy Over Personal Data While Generating New Value – Katrina Ligett

Katrina Ligett, Hebrew University | Wednesday, August 14, 2019

Abstract:

Technology makes it possible to measure and digitize almost every aspect of our existence, introducing serious information risks to both individuals and society. The societal benefits from uses of personal data, meanwhile, are far from fully realized. We will discuss the potential and perils of one particular approach to revising the data ecosystem, the introduction of a new layer, "data co-ops", between individuals and those who use their data. While the idea of co-ops has been around for decades, we argue that perhaps its time has finally come, and invite engagement with the broad array of crucial research questions it raises. Based on joint work with Kobbi Nissim.

Biography:

Katrina Ligett is an Associate Professor of Computer Science at the Hebrew University of Jerusalem, where her research interests include data privacy, algorithmic fairness, algorithmic economics, and machine learning theory. Her work has been recognized with an NSF CAREER award and a Microsoft Faculty Fellowship. She is currently a Visiting Researcher at MSR-NE.

Language Deprivation and the American Sign Language Lexicon – Naomi Caselli

Naomi Caselli, Boston University | Wednesday, August 07, 2019

Abstract:

The majority of deaf children experience a period of limited exposure to language (spoken or signed) during early childhood, which has cascading effects on many aspects of learning and development. In order to better support deaf children, Dr. Caselli's lab has developed two tools for research and clinical practice. The first is ASL-LEX, a lexical database cataloguing more than 50 pieces of information about 2,500 ASL signs including, for example, how to 'pronounce' the sign or how frequently the sign occurs. The second is the ASL CDI, a sign language vocabulary assessment designed for children from birth to five years old. Both ASL-LEX and the ASL CDI have research applications in psychology, education, and computer science, as well as clinical applications for teachers and speech pathologists. In this talk, Caselli will describe how she has used these tools to investigate how deaf children naturally build a vocabulary in sign language, and how limited exposure to language during early childhood affects vocabulary acquisition.

Biography:

Naomi Caselli, PhD, is an Assistant Professor in the Programs in Deaf Studies at Boston University. She is the PI on three NIH- and NSF-funded grants examining the vocabulary of ASL and how language deprivation affects how people learn and process ASL signs. She earned a joint PhD in Psychology and Cognitive Science from Tufts University, as well as an Ed.M. in Deaf Education and an M.A. in Psychology from Boston University. She is hearing, and a native speaker of both ASL and English.

Electric Opium War: Case Studies in Screen Addiction and Accumulation – Moira Weigel

Moira Weigel, Harvard | Wednesday, July 31, 2019

Abstract:

In this talk, I take up the concept of screen or Internet addiction and place it in a global context, by comparing official and popular discourses about pathological Internet use in the United States and the People's Republic of China. I show that in the first case the harms of screen addiction have been characterized primarily as dehumanizing, while in the second, they have more often been construed as damaging to suzhi (素质), or "quality." The contrast does not only demonstrate that this ostensibly biomedical phenomenon is, in fact, (also) socially and culturally constructed. It highlights the role that paradigms for understanding screen addiction play in constructing the ideal users of digital networks. Taking this comparative analysis as a point of departure, I then turn to theoretical and methodological questions. How can we account for the kinds of variability that such distinct constructions of users introduce into the global system alternately described as "platform capitalism," "surveillance capitalism," or "data colonialism"? I propose that feminist and postcolonial traditions offer crucial conceptual tools for analyzing how successful platforms sustain socially reproductive data relations, and for understanding how the global history of digital capital is always locally instantiated.

Biography:

I am a Junior Fellow at the Harvard Society of Fellows. I recently received my PhD in Comparative Literature and Film and Media from Yale University. Before Yale, I earned a BA (summa cum laude) from Harvard University, and an M.Phil. from the University of Cambridge, where I was the Harvard Scholar in residence at Emmanuel College. My dissertation explores the prehistory of posthumanism from the perspective of cinema and media studies. My goal is to show how, before explicit discourses concerning "The End of Man" emerged after World War II, the cinema served as an alternative public sphere, where new relations between "society" and "nature" were modeled and negotiated. My first book, "Labor of Love: The Invention of Dating," was published by Farrar, Straus, and Giroux in 2016. In a series of interlinking essays, LOL investigates the shape-shifting institution of dating, which, I contend, names the logic of courtship under consumer-driven capitalism.

Designing for Low-Literate Users – Indrani Medhi Thies

Indrani Medhi Thies, MSR India | Wednesday, July 24, 2019

Abstract:

About 800 million people in the world are completely non-literate and many are able to read only with great difficulty and effort. Even though mobile phone penetration is growing very fast, people with low levels of literacy have been found to avoid complex functions, and primarily use mobile phones for voice communication only. "Text-Free UIs" are design principles and recommendations for computer-human interfaces that would allow a first-time, non-literate person, on first contact with a PC or a mobile phone, to immediately realize useful interaction with minimal or no external assistance. We followed an ethnographic design and iterative prototyping process involving hundreds of hours spent in the field among low-income, low-literate communities across rural and urban India, the Philippines, and South Africa.

Biography:

Indrani Medhi Thies is a Researcher in the Technology for Emerging Markets group at Microsoft Research in Bangalore, India. She is currently based at MSR New England. Her research interests are in the area of User Interfaces, User Experience Design, and ICTs for Global Development. Over the years, Indrani's primary work has been in user interfaces for low-literate and novice technology users. Her distinctions include an MIT TR35 award and an ACM SIGCHI Social Impact Award. Indrani has a PhD from the Industrial Design Centre, IIT Bombay, India.

Tabloid Trump and the Political Imaginary – Geoffrey Baym

Geoffrey Baym, Temple | Wednesday, July 17, 2019

Abstract:

Years before Twitter, Fox News, or reality TV, Donald Trump became a public figure through his presence in tabloid media. Much of that focused on sex and spectacle, but early tabloid coverage of Trump was also surprisingly political, with speculation about a possible presidential campaign beginning as early as 1987. Although that coverage has been largely overlooked, this study reveals that tabloid media played a central role in building the foundations of Trump's political identity. It tracks the early articulation of the Trump character and its simultaneous politicization within a media space outside the ostensibly legitimate arena of institutional public-affairs journalism. In so doing, it reveals the deeper contours of an imagined political world in which a Trump presidency could be conceivable in the first instance – a political imaginary adjacent to the deep assumptions of liberal democracy, and therefore long invisible to most serious observers of presidential politics.

Biography:

Geoffrey Baym is Professor of Media Studies at the Klein College of Media and Communication, Temple University. The author of From Cronkite to Colbert: The Evolution of Broadcast News, he has published numerous articles and chapters exploring the hybridization of news, public affairs media, and political discourse.

Jump-Starting America: How Breakthrough Science Can Revive Economic Growth and the American Dream – Jonathan Gruber

Jonathan Gruber, MIT | Wednesday, July 10, 2019

Abstract:

This new book, by Jonathan Gruber and Simon Johnson, argues that a return to large public investments in research and development, implemented in a place-based fashion, can lead to faster and more equitable growth in the US.

Biography:

Dr. Jonathan Gruber is the Ford Professor of Economics at the Massachusetts Institute of Technology, where he has taught since 1992. He is also the Director of the Health Care Program at the National Bureau of Economic Research, and President of the American Society of Health Economists. He is a member of the Institute of Medicine, the American Academy of Arts and Sciences, the National Academy of Social Insurance, and the Econometric Society. He has published more than 160 research articles, has edited six research volumes, and is the author of Public Finance and Public Policy, a leading undergraduate text, and Health Care Reform, a graphic novel. In 2006 he received the American Society of Health Economists Inaugural Medal for the best health economist in the nation aged 40 and under. During the 1997-1998 academic year, Dr. Gruber was on leave as Deputy Assistant Secretary for Economic Policy at the Treasury Department. From 2003-2006 he was a key architect of Massachusetts' ambitious health reform effort, and in 2006 became an inaugural member of the Health Connector Board, the main implementing body for that effort. During 2009-2010 he served as a technical consultant to the Obama Administration and worked with both the Administration and Congress to help craft the Patient Protection and Affordable Care Act. In 2011 he was named "One of the Top 25 Most Innovative and Practical Thinkers of Our Time" by Slate Magazine. In both 2006 and 2012 he was rated as one of the top 100 most powerful people in health care in the United States by Modern Healthcare Magazine. Dr. Gruber is the Chair of the Industry Advisory Board for Flare Capital Partners and is on the board of the Health Care Cost Institute.

Higher Fidelity Systems for Online Discussion – David Karger

David Karger, MIT | Wednesday, June 26, 2019 | Video

Abstract:

My group develops systems to help people manage information and share it with others. We study both text (online discussion tools) and structured data (information visualization and management applications). Our guiding principle is that humans are powerful and creative information managers, and that the key challenge is to build systems that can accurately store and present the sophisticated thinking that people apply to their information. In this talk I'll take a rapid tour through several of our online discussion projects, emphasizing this common solution principle. I'll discuss NB, an online-education tool for discussing course content in the margins; Murmur, a system that modernizes the mailing list to address its drawbacks while preserving its great utility; Squadbox, a tool that scaffolds workflows to help protect targets of online harassment; and Wikum, a system that bridges between discussion forums and wikis by helping forum participants work together to build a summary of a long discussion's main points and conclusions.

Biography:

David R. Karger is a Professor of Electrical Engineering and Computer Science at MIT's Computer Science and Artificial Intelligence Laboratory. David earned his Ph.D. at Stanford University in 1994 and has since contributed to many areas of computer science, publishing in algorithms, machine learning, information retrieval, personal information management, networking, peer-to-peer systems, databases, coding theory, and human-computer interaction. A general interest has been to make it easier for people to create, find, organize, manipulate, and share information. He formed and leads the Haystack group to investigate these issues.

Capitalism and Entrepreneurship in Socialist China – Adam Frost

Adam Frost, Harvard | Wednesday, May 08, 2019 | Video

Abstract:

Contrary to popular belief, the modern Chinese economy did not spring into being in 1978 with Deng Xiaoping's call for "Reform and Opening Up." Rather, it was a product of an incremental, bottom-up transformation, decades in the making. As my research shows, throughout China's socialist era, citizens at all levels of society, from farmers who illegally traded ration coupons to state officials who colluded with underground factories to manufacture goods, actively subverted state control to profit from inefficiencies in planning and, more generally, to make things work. In the absence of "good institutions," they formed illicit networks that subsumed the ordinary functions of markets (e.g., coordination, information aggregation, risk sharing, lending) and created productive assemblages of capital, labor, and knowledge. Drawing upon an array of unconventional sources that have never before been examined by scholars, I will argue that capitalism and entrepreneurship not only supported the functioning of China's socialist economy, but fundamentally reshaped it and created the conditions for subsequent economic growth.

Biography:

Adam Frost is a Ph.D. student at Harvard University who specializes in the economic history of modern China. His research broadly explores the history of informal economies, from unlicensed taxi drivers in 1920s Shanghai to underground entrepreneurs in socialist China to beggars in contemporary Xi'an. In addition to conducting traditional archival research, Adam draws heavily upon ethnography, oral history, and contraband documents. Most recently he completed a documentary on the everyday lives of beggars in Northwest China entitled The End of Bitterness.

New Algorithms for Interpretable Machine Learning in High-Stakes Decisions – Cynthia Rudin

Cynthia Rudin, Duke | Wednesday, December 12, 2018

Abstract:

With widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed models for medical imaging, and poor bail and parole decisions in criminal justice. Explanations for black box models are not reliable, and can be misleading. If we use interpretable models, they come with their own explanations, which are faithful to what the model actually computes. I will present work on (i) optimal decision lists, (ii) interpretable neural networks for computer vision, and (iii) optimal scoring systems (sparse linear models with integer coefficients). In our applications, we have always been able to achieve interpretable models with the same accuracy as black box models.
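
To make "scoring system" concrete, here is a toy sketch of a sparse linear model with small integer coefficients (the features and point values are made up for illustration; this is not one of the models from the talk):

def risk_score(patient):
    # A scoring system is just a handful of integer point values,
    # so the prediction can be computed and audited by hand.
    points = 0
    points += 2 * int(patient["age"] >= 60)
    points += 3 * int(patient["prior_events"] >= 2)
    points -= 1 * int(patient["on_treatment"])
    return points  # compare against a threshold to get the prediction

print(risk_score({"age": 72, "prior_events": 1, "on_treatment": True}))  # -> 1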

Biography:

Cynthia Rudin is an associate professor of computer science, electrical and computer engineering, and statistics at Duke University, and directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo, and a PhD in applied and computational mathematics from Princeton University. She is the recipient of the 2013 and 2016 INFORMS Innovative Applications in Analytics Awards and an NSF CAREER award, was named one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. Work from her lab has won 10 best paper awards in the last 5 years. She is past chair of the INFORMS Data Mining Section, and is currently chair of the Statistical Learning and Data Science section of the American Statistical Association. She has served on committees for DARPA, the National Institute of Justice, and AAAI. She has served on three committees for the National Academy of Sciences, including the Committee on Applied and Theoretical Statistics, the Committee on Law and Justice, and the Committee on Analytic Research Foundations for the Next-Generation Electric Grid.

Will AI Cure Healthcare? – Ernest Fraenkel

Ernest Fraenkel, MIT | Wednesday, October 03, 2018

Abstract:

Artificial intelligence (AI) is widely touted as the solution to almost every problem in society. AI is predicted to transform the workplace, manufacturing, farming, marketing, banking, insurance, transportation, policing, education, and even dating. What are the prospects for applying AI to healthcare? What problems are ripe for data-driven approaches? Which solutions are within reach if we plan properly, and which remain in the distant future? I will provide somewhat opinionated answers to these questions and look forward to a healthy discussion.

Biography:

Ernest Fraenkel is a Professor of Biological Engineering at the Massachusetts Institute of Technology. His laboratory seeks to understand diseases from the perspective of systems biology. They develop computational and experimental approaches for finding new therapeutic strategies by analyzing molecular networks, clinical and behavioral data. He received his PhD in Biology from MIT after graduating summa cum laude from Harvard College with an AB in Chemistry and Physics.

Biased data, biased predictions, and disparate impacts: Evaluating risk assessment instruments in criminal justice – Alex Chouldechova

Alex Chouldechova, Carnegie Mellon University | Wednesday, August 29, 2018

Abstract:

Risk assessment tools are widely used around the country to inform decision making within the criminal justice system. Recently, considerable attention has been paid to whether such tools may suffer from predictive racial bias, and whether their use may result in racially disparate impact. Evaluating a tool for predictive bias typically entails a comparison of different predictive accuracy metrics across racial groups. Problematically, such evaluations are conducted with respect to target variables that may represent biased measurements of an unobserved outcome of more central interest. For instance, while it would be desirable to predict whether an individual will commit a future crime (reoffend), we only observe proxy outcomes such as rearrest and reconviction. My talk will focus on how this issue of "target variable bias" affects evaluations of a tool's predictive bias. I will also discuss various reasons why risk assessment tools may result in racially disparate impact.
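
To make the evaluation described above concrete, a minimal sketch (synthetic data and invented names, not the study's actual analysis) compares one predictive accuracy metric, the false positive rate, across two groups:

import numpy as np

def false_positive_rate(y_true, y_pred):
    # Fraction of truly negative cases that the tool flags as high risk.
    negatives = (y_true == 0)
    return float(np.mean(y_pred[negatives] == 1))

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)    # two groups, labeled 0 and 1
y_true = rng.integers(0, 2, size=1000)   # observed proxy outcome (e.g., rearrest)
y_pred = rng.integers(0, 2, size=1000)   # tool's high-risk flag

for g in (0, 1):
    m = group == g
    print(f"group {g}: FPR = {false_positive_rate(y_true[m], y_pred[m]):.2f}")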

Biography:

Alexandra Chouldechova is an Assistant Professor of Statistics and Public Policy at Carnegie Mellon University's Heinz College. Her research over the past few years has centered on fairness in predictive modeling, particularly in the context of criminal justice and public services applications. Some of her recent work has delved into tradeoffs between different notions of predictive bias, disparate impact of data-driven decision-making, and inferential challenges stemming from common forms of "data bias". A statistician by training, Alex received her Ph.D. in Statistics from Stanford University in 2014.

Delegating Computation – Yael Kalai

Yael Kalai, MSR-NE | Wednesday, August 22, 2018

Abstract:

Efficient verification of computation, also known as delegation of computation, is one of the most fundamental notions in computer science, and in particular it lies at the heart of the P vs. NP question. In this talk I will give a brief overview of the evolution of proofs in computer science, and show how this evolution is instrumental to solving the problem of delegating computation. I will highlight a curious connection between the problem of delegating computation and the notion of no-signaling strategies from quantum physics.

Biography:

Before joining Microsoft Research, Yael was most recently an Assistant Professor of Computer Science at Georgia Tech. Before that, she was a post-doc at the Weizmann Institute in Israel and at Microsoft Research in Redmond. She graduated from MIT, working in cryptography under the superb supervision of Shafi Goldwasser.

The Dynamics of Network Formation and Some Social and Economic Consequences – Matt Jackson

Matt Jackson, Stanford | Wednesday, August 15, 2018

Abstract:

Technological advances are changing interaction patterns from world trade to social network patterns. Two different implications of evolving networks are discussed – one is changing trade patterns and their impact on military alliances and wars, and the other is the formation and evolution of friendships among students, and resulting academic performance.

Biography:

Matthew O. Jackson is the William D. Eberle Professor of Economics at Stanford University and an external faculty member of the Santa Fe Institute and a senior fellow of CIFAR. He was at Northwestern University and Caltech before joining Stanford, and received his BA from Princeton University and PhD from Stanford. Jackson's research interests include game theory, microeconomic theory, and the study of social and economic networks, on which he has published many articles and the books Social and Economic Networks and The Human Network. He teaches an online course on networks and co-teaches two others on game theory. Jackson is a Member of the National Academy of Sciences, a Fellow of the American Academy of Arts and Sciences, a Fellow of the Econometric Society, a Game Theory Society Fellow, and an Economic Theory Fellow, and his other honors include the von Neumann Award of the Rajk Laszlo College, a Guggenheim Fellowship, the Social Choice and Welfare Prize, and the B.E. Press Arrow Prize.

The Welfare Effects of Information – Cass Sunstein

Cass Sunstein, Harvard | Wednesday, August 08, 2018

Abstract:

Some information is beneficial; it makes people's lives go better. Some information is harmful; it makes people's lives go worse. Some information has no welfare effects at all; people neither gain nor lose from it. Under prevailing executive orders, agencies must investigate the welfare effects of information by reference to cost-benefit analysis. Federal agencies have (1) claimed that quantification of benefits is essentially impossible; (2) engaged in "break-even analysis"; (3) projected various endpoints, such as health benefits or purely economic savings; and (4) relied on private willingness-to-pay for the relevant information. All of these approaches run into serious objections. With respect to (4), people may lack the information that would permit them to say how much they would pay for (more) information; they may not know the welfare effects of information; and their tastes and values may shift over time, in part as a result of information. These points suggest the need to take the willingness-to-pay criterion with many grains of salt, and to learn more about the actual effects of information, and of the behavioral changes produced by information, on people's experienced well-being.

Biography:

Cass R. Sunstein is currently the Robert Walmsley University Professor at Harvard. From 2009 to 2012, he was Administrator of the White House Office of Information and Regulatory Affairs. He is the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School. Mr. Sunstein has testified before congressional committees on many subjects, and he has been involved in constitution-making and law reform activities in a number of nations. Mr. Sunstein is author of many articles and books, including Republic.com (2001), Risk and Reason (2002), Why Societies Need Dissent (2003), The Second Bill of Rights (2004), Laws of Fear: Beyond the Precautionary Principle (2005), Worst-Case Scenarios (2007), Nudge: Improving Decisions about Health, Wealth, and Happiness (with Richard H. Thaler, 2008), Simpler: The Future of Government (2013), and most recently Why Nudge? (2014) and Conspiracy Theories and Other Dangerous Ideas (2014). He is now working on group decision making and various projects on the idea of liberty.

Planar Graph Perfect Matching is in NC – Vijay Vazirani

Vijay Vazirani, University of California, Irvine | Wednesday, August 01, 2018 | Video
Please Note: This seminar is of a more technical nature than our typical colloquium talks.

Abstract:

Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of Random NC matching algorithms. Within this question, the case of planar graphs has remained an enigma: On the one hand, counting the number of perfect matchings is far harder than finding one (the former is #P-complete and the latter is in P), and on the other, for planar graphs, counting has long been known to be in NC whereas finding one has resisted a solution! The case of bipartite planar graphs was solved by Miller and Naor in 1989 via a flow-based algorithm. In 2000, Mahajan and Varadarajan gave an elegant way of using counting matchings to finding one, hence giving a different NC algorithm. However, non-bipartite planar graphs still didn't yield: the stumbling block being odd tight cuts. Interestingly enough, these are also a key to the solution: a balanced tight odd cut leads to a straightforward divide-and-conquer NC algorithm. However, a number of ideas are needed to find such a cut in NC; the central one being an NC algorithm for finding a face of the perfect matching polytope at which Ω(n) new conditions, involving constraints of the polytope, are simultaneously satisfied. Paper available at: https://arxiv.org/pdf/1709.07822.pdf Joint work with Nima Anari.

Biography:

Vijay Vazirani received his BS at MIT and his Ph.D. from the University of California, Berkeley. He is currently Distinguished Professor at the University of California, Irvine. He has made seminal contributions to the theory of algorithms, in particular to the classical maximum matching problem, approximation algorithms, and complexity theory. Over the last decade and a half, he has contributed widely to an algorithmic study of economics and game theory. Vazirani is the author of a definitive book on Approximation Algorithms, published in 2001, and translated into Japanese, Polish, French, and Chinese. He was McKay Fellow at U.C. Berkeley in Spring 2002, and Distinguished SISL Visitor at Caltech during 2011-12. He is a Guggenheim Fellow and an ACM Fellow.

Can Intricate Structure Occur by Accident? – Henry Cohn

Henry Cohn, MSR-NE | Wednesday, July 25, 2018

Abstract:

Many topics in science and engineering involve a delicate interplay between order and disorder. For example, this occurs in the study of interacting particle systems, as well as related problems such as designing error-correcting codes for noisy communication channels. Some solutions of these optimization problems exhibit beautiful long-range order while others are amorphous. Finding a clear basis for this dichotomy is a fundamental mathematical problem, sometimes called the crystallization problem. It's natural to assume that any occurrence of dramatic structure must happen for a good reason, but is that really true? I wish I knew. In this talk (intended to be accessible to an interdisciplinary audience) we'll take a look at some test cases.

Biography:

Henry Cohn's mathematical interests include symmetry and exceptional structures; more generally, he enjoys any area in which concrete problems are connected in surprising ways with abstract mathematics. He came to MSR as a postdoc in 2000 and joined the theory group long-term in 2001. In 2007 he became head of the cryptography group, and in 2008 he moved to Cambridge with Jennifer Chayes and Christian Borgs to help set up Microsoft Research New England. He stays up late at night worrying about why the 16th dimension isn't like the 8th or 24th.

The Simple Economics of Artificial Intelligence – Avi Goldfarb

Avi Goldfarb, University of Toronto | Wednesday, July 18, 2018

Abstract:

Recent excitement in artificial intelligence has been driven by advances in machine learning. In this sense, AI is a prediction technology. It uses data you have to fill in information you don't have. These advances can be seen as a drop in the cost of prediction. This framing generates powerful, but easy-to-understand implications. As the cost of something falls, we will do more of it. Cheap prediction means more prediction. Also, as the cost of something falls, it affects the value of other things. As machine prediction gets cheap, human prediction becomes less valuable while data and human judgment become more valuable. Business models that are constrained by uncertainty can be transformed, and organizations with an abundance of data and a good sense of judgment have an advantage. Based on the book Prediction Machines by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.

Biography:

Avi Goldfarb is the Rotman Professor of Artificial Intelligence and Healthcare at the Rotman School of Management, University of Toronto, and coauthor of the Globe & Mail bestselling book Prediction Machines: The Simple Economics of Artificial Intelligence. Avi is also Senior Editor at Marketing Science, Chief Data Scientist at the Creative Destruction Lab, and Research Associate at the National Bureau of Economic Research, where he helps run the initiatives around digitization and artificial intelligence. Avi's research focuses on the opportunities and challenges of the digital economy. He has published over 60 academic articles in a variety of outlets in marketing, statistics, law, computing, management, and economics. This work has been discussed in White House reports, Congressional testimony, European Commission documents, The Economist, Globe and Mail, National Public Radio, The Atlantic, New York Times, Financial Times, Wall Street Journal, and elsewhere. He holds a Ph.D. in economics from Northwestern University.

You Can Lead a Horse to Water: Spatial Learning and Path Dependence in Consumer Search – Greg Lewis

Greg Lewis, MSR-NE | Wednesday, July 11, 2018 | Video

Abstract:

We introduce a model of search by imperfectly informed consumers with unit demand. Consumers learn spatially: sampling the payoff to one product causes them to update their payoffs about all products that are nearby in some attribute space. Search is costly, and so consumers face a trade-off between "exploring" far-apart regions of the attribute space and "exploiting" the areas they already know they like. We present evidence of spatial learning in data on online camera purchases, as consumers who sample unexpectedly low-quality products tend to subsequently sample products that are far away in attribute space. We develop a flexible parametric specification of the model where consumer utility is sampled as a Gaussian process and use it to estimate demand in the camera data using Markov Chain Monte Carlo (MCMC) methods. We conclude with a counterfactual experiment in which we manipulate the initial product shown to a consumer, finding that a bad initial experience can lead to early termination of search. Product search rankings can therefore substantially affect consumer search paths and purchase decisions.
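
As a rough sketch of the modelling idea above (not the paper's estimation code; the kernel and parameters are invented for the example), consumer utilities can be drawn from a Gaussian process over a one-dimensional product attribute, so products that are close in attribute space receive correlated payoffs:

import numpy as np

def rbf_kernel(x, length_scale=2.0, var=1.0):
    # Squared-exponential covariance: nearby attributes -> correlated utilities.
    d = x[:, None] - x[None, :]
    return var * np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
attr = np.linspace(0.0, 10.0, 50)                  # e.g., a camera attribute such as zoom
cov = rbf_kernel(attr) + 1e-8 * np.eye(attr.size)  # jitter for numerical stability
utility = rng.multivariate_normal(np.zeros(attr.size), cov)

# Sampling one product's payoff is informative about nearby products,
# which is what drives the spatial learning in consumer search.
print(utility[:5].round(2))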

Biography:

Greg Lewis is an economist whose main research interests lie in industrial organization, market design, and applied econometrics. He received his bachelor's degree in economics and statistics from the University of the Witwatersrand in South Africa, and his MA and PhD both from the University of Michigan. He then served on the economics faculty at Harvard, as assistant and then associate professor. Recently, his time has been spent analyzing strategic learning by firms in the British electricity market, suggesting randomized mechanisms for price discrimination in online display advertising, developing econometric models of auction markets, and evaluating the design of procurement auctions.

New Media, New Work, and the New Call to Intimacy: The Case of Musicians – Nancy Baym

Nancy Baym, MSR-NE | Wednesday, June 27, 2018

Abstract:

The architectures and norms of new media push people toward sharing everyday intimacies they might historically have kept to close friends and family. As more people are pushed toward gig work, the original gig workers – musicians – provide an exemplary lens for exploring the implications of this widespread blurring of interpersonal communication into everyday practices of professional viability. This talk, based on the new book Playing to the Crowd: Musicians, Audiences, and the Intimate Work of Connection, draws on nearly a decade of work to show how the pressure to be "authentic" in communicating with audiences, combined with the designs and materialities of new communication technologies, raises dialectic tensions that musicians – and many others – must manage as social media platforms become integral to professional life.

Biography:

Nancy Baym is a Principal Researcher at Microsoft Research in Cambridge, Massachusetts. After earning her Ph.D. in 1994 in the Department of Speech Communication at the University of Illinois, she was on the faculty of Communication departments for 18 years before joining Microsoft in 2012. With Steve Jones (and others), she was a founder of the Association of Internet Researchers and served as its second President. She is the author of Personal Connections in the Digital Age (Polity Press), now in its second edition, Tune In, Log On: Soaps, Fandom and Online Community (Sage Press), and co-editor of Internet Inquiry: Conversations About Method (Sage Press) with Annette Markham. She serves on the editorial boards of New Media & Society, the Journal of Communication, the Journal of Computer Mediated Communication, and numerous other journals. Her book Playing to the Crowd: Musicians, Audiences, and the Intimate Work of Connection will be published in July by NYU Press. More information, most of her articles, and some of her talks are available at nancybaym.com.

\n\t\t\t\tCustodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media- Tarleton Gillespie\t\t\t<\/h4>\n
\n

\n<\/p>

Tarleton Gillespie (opens in new tab)<\/span><\/a>, MSR- NE | Wednesday, June 13, 2018<\/p>\n

Abstract:<\/h2>\n

This talk will give an overview of my new book, and highlight the public debate about content moderation and its implications for those studying or building information systems that host user content. Most social media users want their chosen platforms free from harassment and porn. But they also want to see the content they choose to see. This means platforms face an irreconcilable contradiction: while platforms promise an open space for participation and community, every one of them imposes rules of some kind. In the early days of social media, content moderation was hidden away, even disavowed. But the illusion of the open platform has, in recent years, begun to crumble. Today, content moderation has never been more important, or more controversial. In this book, I discuss how social media platforms police what we post online \u2013 and the societal impact of these decisions. Content moderation still receives too little public scrutiny. How and why platforms moderate can shape societal norms and alter the contours of public discourse, cultural production, and the fabric of society \u2014 and the very fact of moderation should change how we understand what platforms are.<\/p>\n

Biography:<\/h2>\n

Tarleton Gillespie is a principal researcher at Microsoft Research, an affiliated associate professor in Cornell\u2019s Department of Communication and Department of Information Science, co-founder of the blog Culture Digitally, author of Wired Shut: Copyright and the Shape of Digital Culture (MIT, 2007), co-editor of Media Technologies: Essays on Communication, Materiality, and Society (MIT, 2014), and the author of the forthcoming Custodians of the Internet (Yale, 2018). <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tBuilding Machines That Learn and Think Like People- Josh Tenenbaum\t\t\t<\/h4>\n
\n

\n<\/p>

Josh Tenenbaum (opens in new tab)<\/span><\/a>, MIT | Wednesday, June 6, 2018<\/p>\n

Abstract:<\/h2>\n

Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.<\/p>\n

Biography:<\/h2>\n

Joshua Tenenbaum is Professor of Cognitive Science and Computation at MIT. He and his colleagues in the Computational Cognitive Science group want to understand that most elusive aspect of human intelligence: our ability to learn so much about the world, so rapidly and flexibly. While their core interests are in human learning and reasoning, they also work actively in machine learning and artificial intelligence. These two programs are inseparable: bringing machine-learning algorithms closer to the capacities of human learning should lead to more powerful AI systems as well as more powerful theoretical paradigms for understanding human cognition. Their current research explores the computational basis of many aspects of human cognition: learning concepts, judging similarity, inferring causal connections, forming perceptual representations, learning word meanings and syntactic principles in natural language, noticing coincidences and predicting the future, inferring the mental states of other people, and constructing intuitive theories of core domains, such as intuitive physics, psychology, biology, or social structure. He is known for contributions to mathematical psychology and Bayesian cognitive science. Tenenbaum previously taught at Stanford University, where he was the Wasow Visiting Fellow from October 2010-January 2011. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tThe Cloud is a Factory: A Material History of the Digital Economy- Nathan Ensmenger\t\t\t<\/h4>\n
\n

\n<\/p>

Nathan Ensmenger (opens in new tab)<\/span><\/a>, Indiana | Wednesday, May 2, 2018<\/p>\n

Abstract:<\/h2>\n

Drawing on the literature on environmental history, this paper surveys the multiple ways in which humans, environment, and computing technology have been in interaction over the past several centuries. From Charles Babbage\u2019s Difference Engine (a product of an increasingly global British maritime empire) to Herman Hollerith\u2019s tabulating machine (designed to solve the problem of \u201cseeing like a state\u201d in the newly trans-continental American Republic) to the emergence of the ecological sciences and the modern petrochemical industry, information technologies have always been closely associated with the human desire to understand and manipulate their physical environment. More recently, humankind has started to realize the environmental impacts of information technology, including not only the toxic byproducts associated with their production, but also the polluting effects of the massive amounts of energy and water required by data centers at Google and Facebook (whose physicality is conveniently and deliberately camouflaged behind the disembodied, ethereal \u201ccloud\u201d). More specifically, this paper will explore the global life-cycle of a digital commodity \u2014 in this case a unit of the virtual currency Bitcoin \u2014 from lithium mines in post-colonial South America to the factory city-compounds of southern China to a \u201cserver farm\u201d in the Pacific Northwest to the \u201ccomputer graveyards\u201d outside of Agbogbloshie, Ghana. The goal is to ground the history of information technology in the material world by focusing on the relationship between \u201ccomputing power\u201d and more traditional processes of resource extraction, exchange, management, and consumption.<\/p>\n

Biography:<\/h2>\n

Nathan Ensmenger is an Associate Professor in the School of Informatics, Computing and Engineering at Indiana University, where he also serves as the Chair of the Informatics department. He specializes in the social and labor history of computing, gender and computing, and the relationship between computing and the environment. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tAlan Turing: Pioneer of the Information Age- Jack Copeland\t\t\t<\/h4>\n
\n

\n<\/p>

Jack Copeland (opens in new tab)<\/span><\/a>, University of Canterbury | Wednesday, April 11, 2018<\/p>\n

Abstract:<\/h2>\n

At the turn of the millennium Time magazine listed Alan Turing among the twentieth century\u2019s 100 greatest minds, alongside the Wright brothers, Albert Einstein, Crick and Watson, and Alexander Fleming. Turing\u2019s achievements during his short life of 42 years were legion. Best known as the genius who broke some of Germany\u2019s most secret codes during the war of 1939-45, Turing was also the father of the modern computer. Today, all who click or touch to open are familiar with the impact of his ideas. To Turing we owe the concept of storing applications, and the other programs necessary for computers to do our bidding, inside the computer\u2019s memory, ready to be opened when we wish. We take for granted that we use the same slab of hardware to shop, manage our finances, type our memoirs, play our favourite music and videos, and send instant messages across the street or around the world. Like many great ideas this one now seems as obvious as the cart and the arch, but with this single invention\u2014the stored-program universal computer\u2014Turing changed the world. Turing was a theoretician\u2019s theoretician, yet he also had immensely practical interests. In 1945 he designed a large stored-program electronic computer called the Automatic Computing Engine, or ACE. Turing\u2019s sophisticated ACE design achieved commercial success as the English Electric Company\u2019s DEUCE, one of the earliest electronic computers to go on the market. In those days\u2014the first eye-blink of the Information Age\u2014the new machines sold at a rate of no more than a dozen or so a year. But in a handful of decades, Turing\u2019s ideas transported us from an era where \u2018computer\u2019 was the term for a human clerk who did the sums in the back office of an insurance company or science lab, into a world where many have never known life without the Internet.<\/p>\n

Biography:<\/h2>\n

Jack Copeland FRS NZ is Distinguished Professor of Philosophy at the University of Canterbury in New Zealand. He is also Co-Director and Permanent Visiting Fellow of the Turing Centre at the Swiss Federal Institute of Technology in Zurich, and Honorary Research Professor of Philosophy at the University of Queensland, Australia, and is currently the John Findlay Visiting Professor of Philosophy at Boston University. In 2016 Jack received the international Covey Award in recognition of \u201ca substantial record of innovative research in the field of computing and philosophy\u201d, and in 2017 his name was added to the IT History Society Honor Roll, which the Society describes as \u201ca listing of a select few that have made an out-of-the-ordinary contribution to the information industry\u201d. He has just been awarded the annual Barwise Prize by the American Philosophical Association for \u201csignificant and sustained contributions to areas relevant to philosophy and computing\u201d. A Londoner by birth, Jack gained a D.Phil. in mathematical logic from the University of Oxford. His books include The Essential Turing (Oxford University Press); Colossus: The Secrets of Bletchley Park\u2019s Codebreaking Computers (Oxford University Press); Alan Turing\u2019s Electronic Brain (Oxford University Press); Computability: Turing, G\u00f6del, Church, and Beyond (MIT Press); Logic and Reality (Oxford University Press), and Artificial Intelligence (Blackwell). He has published more than 100 journal articles on the history and philosophy of both computing and mathematical logic. In 2014 Oxford University Press published his highly accessible paperback biography Turing, and in 2017 released his latest book The Turing Guide. Jack has been script advisor, co-writer, and scientific consultant for a number of historical documentaries. One of them, Arte TV\u2019s The Man Who Cracked the Nazi Codes is based on his bio Turing and won the audience\u2019s Best Documentary prize at the 2015 FIGRA European film festival; another, the BBC\u2019s Code-Breakers: Bletchley Park\u2019s Lost Heroes won two BAFTAs and was listed as one of the year\u2019s three best historical documentaries at the 2013 Media Impact Awards in New York City. Jack was Visiting Professor of Information Science at Copenhagen University in 2014-15 and Visiting Professor in Computer Science and Philosophy at the Swiss Federal Institute of Technology in 2013-14; and in 2012 he was the Royden B. Davis Visiting Chair of Interdisciplinary Studies in the Department of Psychology at Georgetown University, Washington DC. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tConnected self-ownership and implications for online networks and privacy rights- Ann Cudd\t\t\t<\/h4>\n
\n

\n<\/p>

Ann Cudd (opens in new tab)<\/span><\/a>, BU | Wednesday, April 4, 2018<\/p>\n

Abstract:<\/h2>\n

This talk explores the concept of the connected self-owner, which takes account of the metaphysical significance of relations among persons for persons\u2019 capacities to be owners. This concept of the self-owner conflicts with the traditional, libertarian understanding of the self as atomistic or essentially separable from all others. I argue that the atomistic self cannot be a self-owner. A self-owner is a moral person with intentions, desires, and thoughts. But to have intentions, desires, and thoughts, a being has to relate to others through language and norm-guided behavior. Individual beings require the pre-existence of norms and norm-givers to bootstrap their selves, and norms, norm-givers, and norm-takers are necessary to continue to support the self. That means, I argue, that the self who can be an owner is essentially connected. Next, I ask how humans become connected selves and whether that matters morally. I distinguish among those connections that support the development of valuable capacities, one of which is individual autonomy. I argue that the social connections that allow the development of autonomous individuals have moral value and should be fostered; oppressive social connections, on the other hand, tend to thwart autonomy. This has implications for how we should think about privacy rights and online networks.<\/p>\n

Biography:<\/h2>\n

Ann E. Cudd is Dean of the College and Graduate School of Arts & Sciences and Professor of Philosophy at Boston University. In her role as dean, she has worked to increase diversity and inclusion at the College, fundraising tirelessly for need-based scholarships and taking concrete steps to increase faculty diversity. She has identified five areas of research and teaching excellence within the College as priorities for investment: the digital revolution in the arts & sciences, neuroscience, climate change and sustainability, the humanities, and the study of inequality. A champion of the arts and sciences, she has been instrumental in launching a dialogue among Boston colleges and universities about the importance of liberal education. Prior to joining BU in 2015, Cudd was Vice Provost and Dean of Undergraduate Studies and University Distinguished Professor of Philosophy at the University of Kansas. Among other roles during her 27 years at KU, she directed the Women, Gender, and Sexuality Studies program and served as associate dean for the humanities in the College of Liberal Arts & Sciences. Her award-winning 2006 book, Analyzing Oppression (Oxford University Press), examines the economic, social, and psychological causes and effects of oppression. She is co-editor or co-author of six other books, including Capitalism For and Against: A feminist debate (with Nancy Holmstrom; Cambridge University Press). Her recent work concerns the moral value of capitalism, conceptions of domestic violence in international law, the injustice of educational inequality, and the self-ownership debate in liberal political philosophy. She is past president and founding member of the Society for Analytical Feminism and vice president and president-elect of the American section of the International Society for the Philosophy of Law and Social Philosophy (AMINTAPHIL). She received her BA in Mathematics and Philosophy from Swarthmore College and an MA in Economics and PhD in Philosophy from the University of Pittsburgh. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tPrelaunch Demand Estimation- Juanjuan Zhang\t\t\t<\/h4>\n
\n

\n<\/p>

Juanjuan Zhang (opens in new tab)<\/span><\/a>, MIT | Wednesday, December 13, 2017<\/p>\n

Abstract:<\/h2>\n

Demand estimation is important for new-product strategies, but is challenging in the absence of actual sales data. We develop a cost-effective method to estimate the demand of new products based on incentive-aligned choice experiments. Our premise is that there exists a structural relationship between manifested demand and the probability of consumer choice being realized. We illustrate the mechanism using a theory model, in which consumers learn their product valuation through costly effort and their effort incentive depends on the realization probability. We run a large-scale choice experiment on a mobile game platform, where we randomize the price and realization probability when selling a new product. We find reduced-form support for the theoretical prediction and the decision effort mechanism. We then estimate a structural model of consumer choice. The structural estimates allow us to infer actual demand using data from incentive-aligned choice experiments with small to moderate realization probabilities.<\/p>\n

Biography:<\/h2>\n

Juanjuan Zhang is the Epoch Foundation Professor of International Management and Professor of Marketing at the MIT Sloan School of Management. Zhang studies marketing strategies in today\u2019s social context. Her research covers industries such as consumer goods, social media, and healthcare, and functional areas such as product development, pricing, and sales. She has received the Frank Bass Award for the best marketing thesis, and is a four-time finalist for the John Little Award for the best marketing paper and a two-time finalist for the INFORMS Society for Marketing Science Long Term Impact Award. Zhang currently serves as Department Editor of Management Science, and Associate Editor of the Journal of Marketing Research, Marketing Science, and Quantitative Marketing and Economics. She also serves as a VP of the INFORMS Society for Marketing Science (ISMS). Zhang teaches Marketing Management at MIT Sloan. She is a recipient of the MIT d\u2019Arbeloff Fund for Excellence in Education and MIT Sloan\u2019s highest teaching award, the Jamieson Prize. She holds a B. Econ. from Tsinghua University and a Ph.D. in Business Administration from the University of California, Berkeley. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tAre we over-testing? Using machine learning to understand physician decision making- Ziad Obermeyer\t\t\t<\/h4>\n
\n

\n<\/p>

Ziad Obermeyer (opens in new tab)<\/span><\/a>, Harvard | Wednesday, December 06, 2017<\/p>\n

Abstract:<\/h2>\n

Low-value health care\u2014care that provides little health benefit relative to its cost\u2014is a central concern for policy makers. Identifying exactly which care is likely to be of low value ex ante, however, has proven challenging. We use a novel machine learning approach to gauge the extent of low-value care. We focus on the decision to perform high-cost tests on emergency department (ED) patients whose symptoms suggest they might be having a heart attack\u2014notoriously difficult to differentiate from more benign causes. We build an algorithm to predict whether a particular patient is in fact having a heart attack using a training sample of randomly selected patients, and apply it to a new population of patients the algorithm has never seen. We find that a large set of patients tested by doctors have extremely low ex ante predicted risk of having a heart attack, and these patients do indeed have a very low rate of positive test results when tested. Our focus on testing decisions on the margin reveals that the rate of over-testing is substantially higher than we would think if we simply measured the overall effectiveness of the test: the marginal test has much lower value than the average test, and our approach can quantify this difference. We also find that many patients who go untested in fact appear high risk to the algorithm. Doctors\u2019 decisions not to test these patients do not appear to reflect private information: we find that these patients develop serious complications (or die) at remarkably high rates in the months after emergency visits. By isolating specific conditions under which patients in emergency departments are quasi-randomly assigned to doctors, we are able to minimize the influence of unobservables. These results suggest that both under-testing and over-testing are prevalent. We conclude with exploratory analysis of the behavioral mechanisms underlying under-testing, by examining those high-risk beneficiaries whom physicians fail to test. These patients often have concurrent health issues with symptoms similar to heart attack that may lead physicians to anchor prematurely on an alternative diagnosis. Applying deep learning to electrocardiographic waveform data from these patients, we can also isolate specific physiological characteristics of the heart attacks that doctors overlook.<\/p>\n
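
As a rough illustration of the approach described in the abstract (train a risk model on the patients doctors chose to test, then score every visit), here is a schematic sketch. It is not the authors\u2019 pipeline; the data file, column names, and cutoffs are all hypothetical.<\/p>\n

<pre>
# Schematic sketch of the testing-on-the-margin analysis described above.
# Not the authors' pipeline; the data file, columns, and cutoffs are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

ed_visits = pd.read_csv('ed_visits.csv')                      # hypothetical file
features = ['age', 'chest_pain', 'diabetes', 'prior_mi']      # hypothetical columns
tested = ed_visits[ed_visits['was_tested'] == 1].copy()

# 1) Learn ex ante heart-attack risk from the patients doctors chose to test.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    tested[features], tested['heart_attack'], test_size=0.3, random_state=0)
risk_model = GradientBoostingClassifier().fit(X_train, y_train)

# 2) Score every ED visit, including patients who were never tested.
ed_visits['predicted_risk'] = risk_model.predict_proba(ed_visits[features])[:, 1]
tested['predicted_risk'] = risk_model.predict_proba(tested[features])[:, 1]

# 3) Over-testing: the lowest-risk decile of *tested* patients should show a
#    very low rate of positive tests if the model captures true risk.
low_risk_tested = tested.nsmallest(int(0.1 * len(tested)), 'predicted_risk')
print('test yield in lowest-risk decile:', low_risk_tested['heart_attack'].mean())

# 4) Under-testing: untested patients with high predicted risk can be followed
#    up for complications or death in the months after the visit.
untested_high_risk = ed_visits[(ed_visits['was_tested'] == 0) &
                               (ed_visits['predicted_risk'] > 0.2)]
print('untested but high-risk visits:', len(untested_high_risk))
<\/pre>\n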

Biography:<\/h2>\n

Ziad Obermeyer is an Assistant Professor at Harvard Medical School and an emergency physician at the Brigham and Women\u2019s Hospital, both in Boston. His lab applies machine learning to solve clinical problems. As patients age and medical technology advances, the complexity of health data strains the capabilities of the human mind. Using a combination of machine learning and traditional methods, his work seeks to find hidden signal in health data, help doctors make better decisions, and drive innovations in clinical research. He is a recipient of multiple research awards from NIH (including the Office of the Director and the National Institute on Aging) and private foundations, and a faculty affiliate at ideas42, Ariadne Labs, and the Harvard Institute for Quantitative Social Science. He holds an A.B. (magna cum laude) from Harvard and an M.Phil. from Cambridge, and worked as a consultant at McKinsey & Co. in Geneva, New Jersey, and Tokyo, before returning to Harvard for his M.D. (magna cum laude). <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tSpatial Pricing in Ride-Sharing Networks- Kostas Bimpikis\t\t\t<\/h4>\n
\n

\n<\/p>

Kostas Bimpikis (opens in new tab)<\/span><\/a>, Stanford | Wednesday, November 29, 2017<\/p>\n

Abstract:<\/h2>\n

We explore spatial price discrimination in the context of a ride-sharing platform that serves a network of locations. Riders are heterogeneous in terms of their destination preferences and their willingness to pay for receiving service. Drivers decide whether, when, and where to provide service so as to maximize their expected earnings, given the platform\u2019s prices. Our findings highlight the impact of the demand pattern on the platform\u2019s prices, profits, and the induced consumer surplus. In particular, we establish that profits and consumer surplus are maximized when the demand pattern is \u201cbalanced\u201d across the network\u2019s locations. In addition, we show that they both increase monotonically with the balancedness of the demand pattern (as formalized by its structural properties). Furthermore, if the demand pattern is not balanced, the platform can benefit substantially from pricing rides differently depending on the location they originate from. Finally, we consider a number of alternative pricing and compensation schemes that are commonly used in practice and explore their performance for the platform. (joint work with Ozan Candogan and Daniela Saban)<\/p>\n

Biography:<\/h2>\n

Kostas Bimpikis is an Associate Professor of Operations, Information and Technology at Stanford University\u2019s Graduate School of Business. Prior to joining Stanford, he spent a year as a postdoctoral research fellow at the Microsoft Research New England Lab. Professor Bimpikis received his PhD in Operations Research from the Massachusetts Institute of Technology in 2010, an MS in Computer Science from the University of California, San Diego, and a BS in Electrical and Computer Engineering from the National Technical University of Athens, Greece. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tMatching Pennies on the Campaign Trail: An Empirical Study of Senate Elections and Media Coverage- Pinar Yildirim\t\t\t<\/h4>\n
\n

\n<\/p>

Pinar Yildirim (opens in new tab)<\/span><\/a>, UPENN | Wednesday, November 15, 2017<\/p>\n

Abstract:<\/h2>\n

We study the strategic interaction between the media and Senate candidates during elections. While the media is instrumental for candidates to communicate with voters, candidates and media outlets have conflicting preferences over the contents of the reporting. In competitive electoral environments such as most US Senate races, this can lead to a strategic environment resembling a matching pennies game. Based on this observation, we develop a model of bipartisan races where media outlets report about candidates, and candidates make decisions on the type of constituencies to target with their statements along the campaign trail. We develop a methodology to classify news content as suggestive of the target audience of candidate speech, and show how data on media reports and poll results, together with the behavioral implications of the model, can be used to estimate its parameters. We implement this methodology on US Senatorial races for the period 1980-2012, and find that Democratic candidates have stronger incentives to target their messages towards turning out their core supporters than Republicans. We also find that the cost in swing-voter support from targeting core supporters is larger for Democrats than for Republicans. These effects balance each other, making media outlets willing to cover candidates from both parties at similar rates. Joint work with Camilo Garcia Jimeno, Department of Economics, University of Pennsylvania, and NBER.<\/p>\n

Biography:<\/h2>\n

Pinar Yildirim (opens in new tab)<\/span><\/a> is Assistant Professor of Marketing at the Wharton School of the University of Pennsylvania and is also a Senior Fellow at the Leonard Davis Institute. Pinar\u2019s research areas are media, information economics, and network science. She focuses on applied theory and applied economics problems relevant to online platforms, advertising, networks, media and political economy. Her research has appeared in top management and marketing journals, including Marketing Science, Journal of Marketing Research, Management Science, and Journal of Marketing. Pinar is on the editorial board of Marketing Science and recently received the 2017 MSI Young Scholar award. She holds Ph.D. degrees in Marketing and Business Economics, as well as Industrial Engineering from the University of Pittsburgh. She joined the Wharton School in 2012 and has been teaching in the Executive, MBA, and undergraduate programs since. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tSocial Order in the Age of Big Data: Exploring the Knowledge Problem and the Freedom Problem\u2013 Nick Couldry\t\t\t<\/h4>\n
\n

\n<\/p>

Nick Couldry (opens in new tab)<\/span><\/a>, LSE | Wednesday, November 1, 2017 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

This talk will explore how to use social theory to understand problems of social order and its relationship to an era of Big Data. I take as a starting point of the talk the neglected late work of theorist Norbert Elias and the concept of figurations, which I draw upon in my recent book (The Mediated Construction of Reality, with Andreas Hepp, Polity 2016), as a way of thinking better about the social world\u2019s real complexity. I will sketch the historical background that shapes what we know about the role of communications in the growth of industrial capitalism in the 19th century to argue that we are in a parallel phase of major transformation today. This raises two problems on which the talk will reflect: first, what are the distinctive features of the social knowledge that is today being generated through big data processes, compared with the 19th century\u2019s rise of statistics as the primary generator of social knowledge; second, what are the implications of the data collection processes on which Big Data is founded for the enduring value of freedom?<\/p>\n

Biography<\/h2>\n

Nick Couldry is Full Professor of Media, Communications and Social Theory at the London School of Economics and Political Science, UK. From August 2014 to August 2017, he was chair of LSE\u2019s Department of Media and Communications. He is the author of 8 books, including most recently The Mediated Construction of Reality (with Andreas Hepp, Polity 2016). He has since 2015 been joint coordinating author of the chapter on media and communications in the International Panel on Social Progress: https:\/\/www.ipsp.org\/ (opens in new tab)<\/span><\/a>. For more information, please go to http:\/\/www.nickcouldry.org\/ (opens in new tab)<\/span><\/a>. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tThree principles of data science: predictability, stability, and computability\u2013 Bin Yu\t\t\t<\/h4>\n
\n

\n<\/p>

Bin Yu (opens in new tab)<\/span><\/a>, UC Berkeley | Thursday, October 26, 2017 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

In this talk, I will discuss the intertwining importance and connections of three principles of data science. The three principles will be demonstrated in the context of two neuroscience projects and through analytical connections. In particular, the first project adds stability to predictive models used for reconstruction of movies from fMRI brain signals to gain interpretability of the predictive models. The second project employs predictive transfer learning and stable (manifold) deep dream images to characterize the difficult V4 neurons in primate visual cortex. Our results lend support, to a certain extent, to the resemblance of Convolutional Neural Networks (CNNs) to the primate brain.<\/p>\n

Biography<\/h2>\n

Bin Yu is Chancellor\u2019s Professor in the Departments of Statistics and of Electrical Engineering & Computer Sciences at the University of California at Berkeley. Her current research interests focus on statistics and machine learning algorithms and theory for solving high-dimensional data problems. Her lab is engaged in interdisciplinary research with scientists from genomics, neuroscience, precision medicine and political science. She obtained her B.S. degree in Mathematics from Peking University in 1984, and her M.A. and Ph.D. degrees in Statistics from the University of California at Berkeley in 1987 and 1990, respectively. She held faculty positions at the University of Wisconsin-Madison and Yale University and was a Member of Technical Staff at Bell Labs, Lucent. She was Chair of the Department of Statistics at UC Berkeley from 2009 to 2012, and is a founding co-director of the Microsoft Lab on Statistics and Information Technology at Peking University, China, and Chair of the Scientific Advisory Committee of the Statistical Science Center at Peking University. She is a Member of the U.S. National Academy of Sciences and a Fellow of the American Academy of Arts and Sciences. She was a Guggenheim Fellow in 2006, an Invited Speaker at ICIAM in 2011, and the Tukey Memorial Lecturer of the Bernoulli Society in 2012. She was President of IMS (Institute of Mathematical Statistics) in 2013-2014 and the Rietz Lecturer of IMS in 2016. She is a Fellow of IMS, ASA, AAAS and IEEE. She served on the Board of Mathematics Sciences and Applications (BMSA) of NAS and as co-chair of the SAMSI advisory committee, and on the Board of Trustees at ICERM and the Scientific Advisory Board of IPAM. She has served or is serving on many editorial boards, including the Journal of Machine Learning Research (JMLR), Annual Reviews in Statistics, the Annals of Statistics, and the Journal of the American Statistical Association (JASA). <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tConsumer Reviews and Regulation: Evidence from NYC Restaurants\u2013 Chiara Farronato\t\t\t<\/h4>\n
\n

\n<\/p>

Chiara Farronato (opens in new tab)<\/span><\/a>, Harvard | Wednesday, October 25, 2017<\/p>\n

Description<\/h2>\n

We investigate complementarities and substitutabilities between two signals of restaurant quality: health inspections and online reviews. To protect consumers from unsafe dining, health inspections periodically evaluate restaurants on hygiene quality and assign them health grades. Recently, consumers have increasingly been able to rate restaurant quality online, through platforms like Yelp. We first investigate whether online consumer reviews detect hygienic conditions that health inspectors evaluate. To do this, we implement a text analysis machine learning algorithm to predict individual restaurant violations from the text of Yelp reviews. We preliminarily find that consumer reviews are good predictors of food handling violations, but are poor predictors of facilities and maintenance violations. We then investigate how the hygienic information contained in online reviews affects consumer demand and supply incentives. On the demand side, we preliminarily find that conditional on hygiene quality contained in online reviews, customers still use health grades to choose restaurants. On the supply side, we find that relative to restaurants not on Yelp, restaurants reviewed on Yelp score better on hygiene dimensions detectable by customers than on dimensions not detectable by customers. The paper\u2019s results have implications for the design of government regulation in a world where consumers rate their service experiences online.<\/p>\n
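
For readers curious what such a text-analysis step might look like, here is a minimal, generic sketch of predicting a hygiene violation from review text with a bag-of-words classifier. It is not the authors\u2019 model; the data file, columns, and labels are hypothetical.<\/p>\n

<pre>
# Minimal sketch of predicting an inspection violation from review text.
# Generic illustration; the data file, columns, and labels are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# One row per restaurant-inspection pair: the Yelp review text since the previous
# inspection, plus an indicator for whether a food-handling violation was cited.
reviews = pd.read_csv('reviews_with_inspections.csv')   # hypothetical file

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5, stop_words='english'),
    LogisticRegression(max_iter=1000),
)

auc = cross_val_score(model, reviews['text'], reviews['food_handling_violation'],
                      scoring='roc_auc', cv=5)
print('cross-validated AUC, food-handling violations:', auc.mean())

# Per the abstract's preliminary findings, the same pipeline trained on
# facilities-and-maintenance violations would be expected to predict much worse.
<\/pre>\n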

Biography<\/h2>\n

Chiara Farronato is an assistant professor of business administration in the Technology and Operations Management Unit at Harvard Business School. Based on a broad interest in the economics of innovation and the Internet, she concentrates her research on the evolution of e-commerce and peer-to-peer online platforms, including platform adoption, economies of scale, and drivers of heterogeneous platform success. Chiara has investigated such phenomena as the shift in e-commerce from auctions to posted prices; matching supply and demand on peer-to-peer platforms for local and time-sensitive services; and the effect of peer-to-peer entry on the market structure of existing industries. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tValuing Alternative Work Arrangements\u2013 Amanda Pallais\t\t\t<\/h4>\n
\n

\n<\/p>

Amanda Pallais (opens in new tab)<\/span><\/a>, Harvard | Wednesday, October 11, 2017<\/p>\n

Description<\/h2>\n

We employ a discrete choice experiment in the employment process for a national call center to estimate the willingness to pay distribution for alternative work arrangements relative to traditional office positions. Most workers are not willing to pay for scheduling flexibility, though a tail of workers with high valuations allows for sizable compensating differentials. The average worker is willing to give up 20% of wages to avoid a schedule set by an employer on short notice, and 8% for the option to work from home. We also document that many jobseekers are inattentive, and we account for this in estimation.<\/p>\n
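
To make the willingness-to-pay figures concrete, here is a back-of-the-envelope sketch of how WTP is typically read off a conditional-logit choice experiment: the ratio of an amenity\u2019s coefficient to the wage coefficient. The coefficients below are invented purely to reproduce magnitudes of the order reported in the abstract; they are not estimates from the paper.<\/p>\n

<pre>
# Back-of-the-envelope sketch: in a conditional-logit choice experiment,
# willingness to pay for an amenity is (amenity coefficient) / (wage coefficient).
# All numbers below are invented for illustration, not estimates from the paper.

beta_wage = 0.50                 # utility per dollar of hourly wage
beta_work_from_home = 0.60       # utility of a work-from-home option
beta_employer_schedule = -1.60   # disutility of an employer-set, short-notice schedule

wtp_work_from_home = beta_work_from_home / beta_wage        # dollars per hour
wtp_avoid_irregular = -beta_employer_schedule / beta_wage   # dollars per hour

base_wage = 16.0
print('WTP for working from home: '
      f'{wtp_work_from_home:.2f}/hr '
      f'(~{100 * wtp_work_from_home / base_wage:.0f}% of a {base_wage:.0f}/hr wage)')
print('WTP to avoid an employer-set schedule: '
      f'{wtp_avoid_irregular:.2f}/hr '
      f'(~{100 * wtp_avoid_irregular / base_wage:.0f}% of a {base_wage:.0f}/hr wage)')
<\/pre>\n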

Biography<\/h2>\n

Amanda Pallais is the Paul Sack Associate Professor of Political Economy and Social Studies at Harvard University. Her research studies the labor market performance and educational investment decisions of disadvantaged and socially excluded groups such as women, ethnic minorities, and individuals from low-income families. She is also interested in online platforms and how technology will change the nature of work and education. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tNew Frontiers in Imitation Learning\u2013 Yisong Yue\t\t\t<\/h4>\n
\n

\n<\/p>

Yisong Yue (opens in new tab)<\/span><\/a>, Caltech | Wednesday, September 6, 2017 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

The ongoing explosion of spatiotemporal tracking data has now made it possible to analyze and model fine-grained behaviors in a wide range of domains. For instance, tracking data is now being collected for every NBA basketball game with players, referees, and the ball tracked at 25 Hz, along with annotated game events such as passes, shots, and fouls. Other settings include laboratory animals, people in public spaces, professionals in settings such as operating rooms, actors speaking and performing, digital avatars in virtual environments, and even the behavior of other computational systems. In this talk, I will describe ongoing research in using imitation learning to develop predictive models of fine-grained behavior. Imitation learning is a branch of machine learning that deals with learning to imitate dynamic demonstrated behavior. I will provide a high-level overview of the basic problem setting, as well as specific projects in modeling laboratory animals, professional sports, speech animation, and expensive computational oracles.<\/p>\n
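
As a concrete (if simplified) instance of the problem setting, the sketch below shows behavioral cloning, the most basic form of imitation learning: fit a supervised policy from tracked states to demonstrated actions. It is a generic illustration with synthetic data, not the speaker\u2019s method.<\/p>\n

<pre>
# A minimal behavioral-cloning sketch: the simplest form of imitation learning,
# fitting a supervised policy from tracked states to demonstrated actions.
# Synthetic data for illustration only; not the speaker's method.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# States could be player/ball positions sampled at 25 Hz; actions the
# demonstrated next movement (dx, dy). Here both are synthetic.
states = rng.normal(size=(5000, 10))
actions = states[:, :2] * 0.3 + rng.normal(scale=0.05, size=(5000, 2))

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
policy.fit(states, actions)

# Roll the learned policy forward on an unseen state.
new_state = rng.normal(size=(1, 10))
print('imitated action:', policy.predict(new_state))
<\/pre>\n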

Biography<\/h2>\n

Yisong Yue is an assistant professor in the Computing and Mathematical Sciences Department at the California Institute of Technology. He was previously a research scientist at Disney Research. Before that, he was a postdoctoral researcher in the Machine Learning Department and the iLab at Carnegie Mellon University. He received a Ph.D. from Cornell University and a B.S. from the University of Illinois at Urbana-Champaign. Yisong\u2019s research interests lie primarily in the theory and application of statistical machine learning. His research is largely centered around developing integrated learning-based approaches that can characterize complex structured and adaptive decision-making settings. Current focus areas include developing novel methods for spatiotemporal reasoning, structured prediction, interactive learning systems, and learning with humans in the loop. In the past, his research has been applied to information retrieval, recommender systems, text classification, learning from rich user interfaces, analyzing implicit human feedback, data-driven animation, behavior analysis, sports analytics, policy learning in robotics, and adaptive routing & allocation problems. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tFast Quantification of Uncertainty and Robustness with Variational Bayes \u2013 Tamara Broderick\t\t\t<\/h4>\n
\n

\n<\/p>

Tamara Broderick (opens in new tab)<\/span><\/a>, MIT | Wednesday, August 23, 2017 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

In Bayesian analysis, the posterior follows from the data and a choice of a prior and a likelihood. These choices may be somewhat subjective and reasonably vary over some range. Thus, we wish to measure the sensitivity of posterior estimates to variation in these choices. While the field of robust Bayes has been formed to address this problem, its tools are not commonly used in practice. We demonstrate that variational Bayes (VB) techniques are readily amenable to fast robustness analysis. Since VB casts posterior inference as an optimization problem, its methodology is built on the ability to calculate derivatives of posterior quantities with respect to model parameters. We use this insight to develop local prior robustness measures for mean-field variational Bayes (MFVB), a particularly popular form of VB due to its fast runtime on large data sets. MFVB, however, has a well-known major failing: it can severely underestimate uncertainty and provides no information about covariance. We generalize linear response methods from statistical physics to deliver accurate uncertainty estimates for MFVB\u2014both for individual variables and coherently across variables. We call our method linear response variational Bayes (LRVB).<\/p>\n
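
The underestimation problem mentioned above has a textbook illustration: for a correlated Gaussian posterior, the mean-field optimum recovers the means exactly but reports the (much smaller) conditional variances and no covariance. The snippet below is purely pedagogical and is not code from the talk.<\/p>\n

<pre>
# Pedagogical illustration of the MFVB failing noted above: for a bivariate
# Gaussian posterior with correlation rho, mean-field VB reports variances equal
# to the conditional variances (too small) and zero covariance.
import numpy as np

rho = 0.9
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])      # true posterior covariance
Lambda = np.linalg.inv(Sigma)       # precision matrix

true_marginal_var = np.diag(Sigma)  # [1.0, 1.0]
mfvb_var = 1.0 / np.diag(Lambda)    # [1 - rho**2, 1 - rho**2] = [0.19, 0.19]

print('true marginal variances:', true_marginal_var)
print('mean-field VB variances:', mfvb_var)
# Linear response (LRVB) corrections recover the full covariance by
# differentiating the variational optimum with respect to model perturbations.
<\/pre>\n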

Biography<\/h2>\n

Tamara Broderick is the ITT Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science at MIT. She is a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), the MIT Statistics and Data Science Center, and the Institute for Data, Systems, and Society (IDSS). She completed her Ph.D. in Statistics with Professor Michael I. Jordan at the University of California, Berkeley in 2014. Previously, she received an AB in Mathematics from Princeton University (2007), a Master of Advanced Study for completion of Part III of the Mathematical Tripos from the University of Cambridge (2008), an MPhil by research in Physics from the University of Cambridge (2009), and an MS in Computer Science from the University of California, Berkeley (2013). Her recent research has focused on developing and analyzing models for scalable Bayesian machine learning\u2014especially Bayesian nonparametrics. She has been awarded a Google Faculty Research Award, the ISBA Lifetime Members Junior Researcher Award, the Savage Award (for an outstanding doctoral dissertation in Bayesian theory and methods), the Evelyn Fix Memorial Medal and Citation (for the Ph.D. student on the Berkeley campus showing the greatest promise in statistical research), the Berkeley Fellowship, an NSF Graduate Research Fellowship, a Marshall Scholarship, and the Phi Beta Kappa Prize (for the graduating Princeton senior with the highest academic average). <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tMusic, Deep Learning, and Acoustic Invariances\u2013 Sham Kakade\t\t\t<\/h4>\n
\n

\n<\/p>

Sham Kakade (opens in new tab)<\/span><\/a>, University of Washington | Wednesday, August 16, 2017<\/p>\n

Description<\/h2>\n

Given a recorded (polyphonic) performance of, say, classical music, how can we learn to identify which notes are being played at any given time? by which instruments? if the notes are whole notes, half notes, etc.? Can we even make progress in modeling the composition process? Taking a machine learning viewpoint, one method is to learn a classifier for these tasks. Recently, we have created the MusicNet dataset to aid in the process of supervised learning on music. MusicNet consists of freely-licensed classical music recordings along with instrument\/note labels, including over 40 hours of polyphonic music, covering 10 instruments and 10 composers, resulting in over 1 million temporal labels with an average of about 50 distinct notes per instrument. Given such a large-scale supervised dataset, what supervised learning approaches capture the natural invariances in music? Inspired by the impressive successes of convolutional neural networks, one approach would be to train a convolutional neural network directly on the acoustic signal. We argue (both empirically and based on the nature of acoustic signals) that this approach is lacking. Instead, we consider a different architecture designed to capture invariances that are more natural to the way in which people perceive pitch in music (an approach which is possibly appropriate to speech recognition as well). We train this architecture in an end-to-end manner. Our current results already significantly outperform commercially available software for tasks such as the aforementioned one, along with the task of music transcription. Joint work with John Thickstun, Zaid Harchaoui, and Dean Foster.<\/p>\n

Biography<\/h2>\n

Sham Kakade joins the University of Washington this fall with a joint position in Computer Science & Engineering and Statistics. He was most recently a principal research scientist at Microsoft Research. Prior to Microsoft, Sham held faculty positions at the University of Pennsylvania and the Toyota Technological Institute in Chicago. Sham researches artificial intelligence, statistical machine learning and signal processing, developing methods of teaching computers to make predictions based on data collection and computation time. Sham is interested in applying scalable and efficient algorithms to solve core scientific and economic problems involving complex data mining. He explores artificial neural networks that process sight and sound like the human brain and studies how machine learning intersects with neuroscience and computational biology. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tThe FlowNet Project: Transnational Investigations of Social Media Use and \u2018Internet Freedom\u2013 Lisa Parks\t\t\t<\/h4>\n
\n

\n<\/p>

Lisa Parks (opens in new tab)<\/span><\/a>, MIT | Wednesday, August 9, 2017<\/p>\n

Description<\/h2>\n

This talk will provide an overview of findings from a three-year, interdisciplinary research project called FlowNet funded by the US State Department (2014-2017). Our team used qualitative, field-based methods to investigate social media use and internet freedom climates in Mongolia, Turkey, and Zambia. Working with local partners and translators, we interviewed nearly 200 people on the frontlines of free speech struggles in these countries \u2014 including journalists, lawyers, elected officials, LGBTQ, environmental, and social activists, media company owners, artists, and political dissidents \u2014 in an effort to understand how social media and internet freedom are culturally understood and practiced across diverse national contexts. These findings were shared with computer scientists on our team who developed an app called SecurePost, which enables anonymous, verified, group communication using existing social media platforms. The talk will conclude with a discussion of design challenges in developing contexts and the ethical considerations of developing tools and applications to support internet freedoms.<\/p>\n

Biography<\/h2>\n

Lisa Parks is Professor of Comparative Media Studies and Director of the Global Media Technologies and Cultures Lab at MIT. Her research is focused on uses of information and media technologies across diverse, transnational cultural contexts. She is the author of Cultures in Orbit: Satellites and the Televisual (Duke UP, 2005) and Coverage: Vertical Mediation and the War on Terror (forthcoming). She is also co-editor of Life in the Age of Drone Warfare (Duke UP, 2017), Signal Traffic: Critical Studies of Media Infrastructures (U of Illinois, 2015), Down to Earth: Satellite Technologies, Industries, and Cultures (Rutgers UP, 2012), and Planet TV: A Global Television Reader (NYU Press, 2003). Parks has held visiting appointments at the Institute for Advanced Study in Berlin, University of Southern California, and the Annenberg School of Communication at the University of Pennsylvania. She has been a PI on major research grants from the National Science Foundation and the US State Department. Before joining the MIT faculty in 2017, she was Professor of Film and Media Studies at UC Santa Barbara, where she also served as the Director of the Center for Information Technology and Society (2012-2015). <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tSubmodularity, Optimal Transport and Machine Learning: New Applications and Algorithms\u2013 Stefanie Jegelka\t\t\t<\/h4>\n
\n

\n<\/p>

Stefanie Jegelka (opens in new tab)<\/span><\/a>, MIT | Wednesday, August 2, 2017<\/p>\n

Description<\/h2>\n

Submodularity and Optimal Transport (OT) are two powerful mathematical concepts with multiple potential benefits to be exploited in machine learning. In this talk, I will outline connections and some new applications and algorithms for these concepts. First, we show how submodularity can be leveraged to solve a non-convex saddle point problem to global optimality (under conditions). The underlying method uses connections to OT. The resulting algorithm solves a robust budget allocation (or bipartite influence maximization) problem with uncertain parameters. Second, we combine submodularity and OT for a new, structured assignment model that encourages coupled assignments and has applications from domain adaptation to NLP. If time permits, I will also sketch new results on scalable computation of Wasserstein barycenters with applications to parallel Bayesian inference. This talk is based on joint work with Matthew Staib, David Alvarez Melis, Sebastian Claici, Tommi Jaakkola and Justin Solomon.<\/p>\n
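
Optimal transport computations of the kind referenced above are often built on entropy-regularized OT solved by Sinkhorn iterations; the generic sketch below shows that building block. It is not code from the talk, and the histograms and regularization strength are arbitrary.<\/p>\n

<pre>
# Generic Sinkhorn sketch for entropy-regularized optimal transport, a common
# building block for the OT computations mentioned above. Values are arbitrary.
import numpy as np

def sinkhorn(a, b, cost, reg=0.05, n_iter=2000):
    # Returns an approximate transport plan between histograms a and b.
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Two small histograms supported on points of the unit interval.
x = np.linspace(0.0, 1.0, 20)
a = np.exp(-((x - 0.25) ** 2) / 0.01); a /= a.sum()
b = np.exp(-((x - 0.75) ** 2) / 0.01); b /= b.sum()
cost = (x[:, None] - x[None, :]) ** 2

plan = sinkhorn(a, b, cost)
print('regularized transport cost:', (plan * cost).sum())
print('largest column-marginal error:', np.abs(plan.sum(0) - b).max())
<\/pre>\n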

Biography<\/h2>\n

Stefanie Jegelka is an X-Consortium Career Development Assistant Professor in the Department of EECS at MIT. She is a member of the Computer Science and AI Lab (CSAIL), the Center for Statistics and an affiliate of IDSS and ORC. Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received an NSF CAREER Award, a DARPA Young Faculty Award, a Google research award, the German Pattern Recognition Award and a Best Paper Award at the International Conference for Machine Learning (ICML). Her research interests span the theory and practice of algorithmic machine learning. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tSubgraphs, clusters and hierarchy in complex networks \u2013 Johan van Leeuwaarden\t\t\t<\/h4>\n
\n

\n<\/p>

Johan van Leeuwaarden (opens in new tab)<\/span><\/a>, Eindhoven University of Technology | Wednesday, July 26, 2017<\/p>\n

Description<\/h2>\n

Real-world networks often have power-law degrees and scale-free properties such as ultra-small distances and ultra-fast information spreading. We provide evidence of a third universal property: correlations that suppress, among other things, the creation of triangles and signal the presence of hierarchy. We first quantify this property in terms of c(k), the probability that two neighbors of a degree-k node are neighbors themselves. We investigate how c(k) scales with k and discover a universal curve that consists of three k-ranges where c(k) remains flat, starts declining, and eventually settles on a power law with an exponent that depends on the power law of the degree distribution. We test these results against ten contemporary real-world networks (please approach us if you have your own network data set that needs to be tested) and then generalize our theory to any finite subgraph. Understanding the natural scale of all subgraphs might prove useful for community detection algorithms and establishing graph limits. Joint work with Clara Stegehuis, Remco van der Hofstad and Guido Janssen.<\/p>\n
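
The curve c(k) described above is straightforward to measure on any network: compute each node\u2019s local clustering coefficient and average within degree classes. The sketch below does this with networkx on a synthetic graph; it is a generic illustration, not the authors\u2019 code or data.<\/p>\n

<pre>
# Sketch of measuring the degree-dependent clustering curve c(k): the average
# local clustering coefficient of nodes with degree k. Generic illustration on a
# synthetic preferential-attachment graph, not the authors' code or data.
from collections import defaultdict
import networkx as nx

G = nx.barabasi_albert_graph(n=5000, m=3, seed=1)   # any edge list works the same way

local_clustering = nx.clustering(G)
by_degree = defaultdict(list)
for node, c in local_clustering.items():
    by_degree[G.degree(node)].append(c)

c_of_k = {k: sum(vals) / len(vals) for k, vals in by_degree.items()}
for k in sorted(c_of_k)[:10]:
    print(k, round(c_of_k[k], 4))
# On real power-law networks the abstract reports that c(k) stays flat for small k,
# then decays, settling on a power law whose exponent depends on the degree exponent.
<\/pre>\n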

Biography<\/h2>\n

Johan van Leeuwaarden (1978) is professor of mathematics at Eindhoven University of Technology. He chairs the group Stochastic Networks and Applied Probability and investigates phenomena arising in complex networks, such as communication networks, social networks and biological networks, primarily through stochastic models (random graphs, interacting particles and queueing networks), in particular their scaling limits and asymptotic behavior. Johan is member of the Young Academy (part of The Royal Netherlands Academy of Arts and Sciences) and promotes the role of mathematics and data in the networked society. He also co-founded the large multidisciplinary research program NETWORKS (www.thenetworkcenter.nl (opens in new tab)<\/span><\/a>). <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tGender Salience and Racial Frames, Potholes for Women in Science: Understanding the Context Before and the Potential Consequences of Sexual Harassment\u2013 Anna Branch\t\t\t<\/h4>\n
\n

\n<\/p>

Anna Branch (opens in new tab)<\/span><\/a>, UMass Amherst | Wednesday, July 19, 2017 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

This talk grounds the experience of harassment sociologically, drawing attention to the way that race and gender interact to shape who experiences gender harassment and how they respond to it. Gender salience explains how gender moves from the background to the foreground when women experience harassment, and racial frames account for the hypersexualization of women of color that makes them more vulnerable to harassment. Dealing with the culture of harassment is integral to creating climates where women can fully contribute to science. Conceiving of the challenges to diversifying science as a pipeline problem fails to appreciate the hazards, such as harassment, that women experience along the way. I introduce the road with exits, pathways, and potholes to articulate the ideas of agency and constraint for women in science and offer suggestions of what we can do to ease their journey.<\/p>\n

Biography<\/h2>\n

Enobong Hannah Branch is an Associate Professor of Sociology and the Chancellor\u2019s Faculty Advisor for Diversity & Inclusive Excellence at the University of Massachusetts-Amherst. Her research interests are in race, racism, and inequality; intersectional theory; work and occupations; and diversity in science. Her book <em>Opportunity Denied: Limiting Black Women to Devalued Work (2011)<\/em> provides an overview of the historical evolution of Black women\u2019s work and the social-economic structures that have located them in particular and devalued places in the U.S. labor market. She is also the editor of <em>Pathways, Potholes, and the Persistence of Women in Science: Reconsidering the Pipeline (2016)<\/em>, which outlines the inadequacy of the pipeline metaphor in understanding the challenges of entry and persistence in science and offers an alternative model that better articulates the ideas of agency, constraint, and variability along the path to scientific careers for women. Dr. Branch is also the author of several articles. Her current research investigates rising employment insecurity in the post-industrial era through the lens of racial and gender inequality, as well as the implementation of the Computer Science for All educational goals within the context of existing inequality in urban school districts in Western Massachusetts. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tUsing Technology for Social Good: Successes, Failures, and Opportunities\u2013 Bill Thies\t\t\t<\/h4>\n
\n

\n<\/p>

Bill Thies (opens in new tab)<\/span><\/a>, MSR India | Thursday, July 13, 2017<\/p>\n

Description<\/h2>\n

While technology has offered benefits for many of us, rarely do those benefits extend equally across all members of society.  A growing community of researchers is specifically targeting their work to underserved populations, developing methods and tools that result in direct and measurable social good.  This talk will focus on those living in extreme poverty, where the latest technologies are often out of reach.  Instead of making things faster, bigger, and more futuristic, can we make things radically cheaper, simpler, and more inclusive?  I will describe some of our successes, failures, and lessons learned in deploying such \u201cfrugal technologies\u201d in India over the past eight years.  Drawing on projects in health and citizen reporting, I will synthesize our recommendations for having social impact with technology, and outline opportunities for future work.<\/p>\n

Biography<\/h2>\n

Bill Thies is a Senior Researcher at Microsoft Research India, where he has worked since 2008. His research focuses on building appropriate information and communication technologies that contribute to the socio-economic development of low-income communities, a field known as ICTD.  Previously, Bill earned his B.S., M.Eng., and Ph.D. degrees from MIT, where he worked on programming languages and compilers for multicore processors as well as microfluidic chips. His distinctions include the John C. Reynolds Doctoral Dissertation Award, a CHI Best Paper Award, and a 2016 MacArthur Fellowship. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tInside Job or Deep Impact? Using Extramural Citations to Assess Economic Scholarship\u2013 Josh Angrist\t\t\t<\/h4>\n
\n

\n<\/p>

Josh Angrist (opens in new tab)<\/span><\/a>, MIT | Wednesday, July 5, 2017<\/p>\n

Description<\/h2>\n

Does academic economic research produce material of broader scientific value, or are academic economists writing only for their peers? Is economics scholarship especially insular? We address these questions by quantifying interactions between economics and other disciplines. Changes in the impact of economic scholarship are measured here by the way other disciplines cite us and by the extent to which we cite others. We document a clear rise in the extramural influence of economic research, while also showing that economics is increasingly likely to reference other social sciences. A breakdown of extramural citations by economics fields shows broad field impact. Differentiating between theoretical and empirical papers classified using machine learning, we see that much of the rise in extramural influence reflects growth in citations to empirical work. This parallels a growing share of empirical cites within economics. Joint work with Pierre Azoulay, Glenn Ellison, Ryan Hill, and Susan Lu.<\/p>\n

Biography<\/h2>\n

Joshua Angrist is the Ford Professor of Economics at MIT, a director of MIT\u2019s School Effectiveness and Inequality Initiative (opens in new tab)<\/span><\/a>,  and a Research Associate at the National Bureau of Economic Research (opens in new tab)<\/span><\/a>. A dual U.S. and Israeli citizen, he taught at Harvard and the Hebrew University of Jerusalem before coming to MIT in 1996. Angrist received his B.A. from Oberlin College in 1982 and completed his Ph.D. in Economics at Princeton in 1989. Angrist\u2019s research interests include the economics of education and school reform; social programs and the labor market; the effects of immigration, labor market regulation and institutions; and econometric methods for program and policy evaluation.  Angrist is a Fellow of the American Academy of Arts and Sciences and the Econometric Society, and has served on many editorial boards and as a Co-editor of the Journal of Labor Economics. He received an honorary doctorate from the University of St Gallen (Switzerland) in 2007 and is the author (with Steve Pischke) of Mostly Harmless Econometrics: An Empiricist\u2019s Companion<\/em> and Mastering \u2018Metrics: The Path from Cause to Effect<\/em>, both published by Princeton University Press. Angrist and Pischke hope to bring undergraduate econometrics instruction out of the Stone Age. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tNetwork Pricing: How to Induce Optimal Flows Under Strategic Link Operators\u2013 Evdokia Nikolova\t\t\t<\/h4>\n
\n

\n<\/p>

Evdokia Nikolova (opens in new tab)<\/span><\/a>, UT Austin | Wednesday, June 21, 2017 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

Network pricing games provide a framework for modeling real-world settings with two types of strategic agents: users of the network and owners (operators) of the network. Owners of the network post a price for usage of the link they own; users of the network select routes based on price and level of use by other users. One challenge in these games is that there are two levels of competition: first, among the owners to attract users to their link so as to maximize profit; and second, among users of the network to select routes that are cheap yet not too congested. Interestingly, we observe that: (i) an equilibrium may not exist; (ii) it might not be unique; and (iii) the network performance at equilibrium can be arbitrarily inefficient. Our main result is to observe that a slight regulation on the network owners\u2019 market solves all three issues above. Specifically, if the authority can set appropriate caps (upper bounds) on the tolls (prices) operators can charge, then the game among the link operators has a unique strong Nash equilibrium and the users\u2019 game results in a Wardrop equilibrium that achieves the optimal total delay. We call any price vector with these properties a great set of tolls. We then ask, can we compute great tolls that minimize total users\u2019 payments? We show that this optimization problem reduces to a linear program in the case of single-commodity series-parallel networks. Starting from the same linear program, we obtain multiplicative approximation results for arbitrary networks with polynomial latencies of bounded degree, while in the single-commodity case we obtain a surprising bound, which only depends on the topology of the network.<\/p>\n
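
As a concrete illustration of the user side of such games (an illustrative toy example, not the paper\u2019s model): on two parallel links with linear latencies and fixed prices, the Wardrop condition says users split so that no one can lower their cost (latency plus price) by switching links. The sketch below computes that split; all names and parameters are our own choices.<\/p>\n

<pre><code>
def wardrop_two_links(a, b, tolls, demand=1.0):
    """User equilibrium on two parallel links with latency a[i]*x + b[i] and
    fixed price tolls[i]: flow splits so that perceived cost (latency + price)
    is equal on both used links, clipped to a corner if one link goes unused."""
    (a1, a2), (b1, b2), (t1, t2) = a, b, tolls
    x = (a2 * demand + (b2 + t2) - (b1 + t1)) / (a1 + a2)   # flow on link 1
    x = min(max(x, 0.0), demand)
    total_delay = x * (a1 * x + b1) + (demand - x) * (a2 * (demand - x) + b2)
    return x, total_delay

# Pigou-style example with latencies x and 1: the untolled equilibrium sends all
# traffic onto the first link (total delay 1.0), while a toll of 0.5 on that link
# induces the delay-optimal half-and-half split (total delay 0.75).
print(wardrop_two_links(a=(1, 0), b=(0, 1), tolls=(0, 0)))     # (1.0, 1.0)
print(wardrop_two_links(a=(1, 0), b=(0, 1), tolls=(0.5, 0)))   # (0.5, 0.75)
<\/code><\/pre>\n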

Biography<\/h2>\n

Evdokia Nikolova is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin, where she is a member of the Wireless Networking & Communications Group. She graduated with a BA in Applied Mathematics with Economics from Harvard University, an MS in Mathematics from Cambridge University, U.K., and a Ph.D. in Computer Science from MIT. Evdokia Nikolova\u2019s research aims to improve the design and efficiency of complex systems (such as networks and electronic markets) by integrating stochastic, dynamic and economic analysis. Her recent work examines how human risk aversion transforms traditional computational models and solutions. One of her algorithms has been adapted in the MIT CarTel (opens in new tab)<\/span><\/a> project for traffic-aware routing. She currently focuses on developing algorithms for risk mitigation in networks, with applications to transportation and energy. She is a recipient of an NSF CAREER award. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tRed Hangover: Legacies of 20th Century Communism\u2013 Kristen Ghodsee\t\t\t<\/h4>\n
\n

\n<\/p>

Kristen Ghodsee (opens in new tab)<\/span><\/a>, Bowdoin | Wednesday, June 7, 2017<\/p>\n

Description<\/h2>\n

In Red Hangover Kristen Ghodsee examines the legacies of twentieth-century communism twenty-five years after the Berlin Wall fell. Ghodsee reflects on the lived experience of postsocialism and how many ordinary men and women across Eastern Europe suffered from the massive social and economic upheavals in their lives after 1989. Ghodsee shows how recent major crises\u2014from the Russian annexation of Crimea and the Syrian Civil War to the rise of Islamic State and the influx of migrants in Europe\u2014are linked to mistakes made after the collapse of the Eastern Bloc when fantasies about the triumph of free markets and liberal democracy blinded Western leaders to the human costs of \u201cregime change.\u201d Just as the communist ideal has become permanently tainted by its association with the worst excesses of twentieth-century Eastern European regimes, today the democratic ideal is increasingly sullied by its links to the ravages of neoliberalism. An accessible introduction to the history of European state socialism and postcommunism, Red Hangover reveals how the events of 1989 continue to shape the world today.<\/p>\n

Biography<\/h2>\n

Kristen Ghodsee is a professor and the director of the Gender, Sexuality, and Women\u2019s Studies Program at Bowdoin College. She is the author of seven books, including The Left Side of History: World War Two and the Unfulfilled Promise of Communism in Eastern Europe (Duke University Press, 2015) and Red Hangover: Legacies of 20th Century Communism, forthcoming from Duke University Press in October 2017. Ghodsee has held visiting fellowships at Harvard University, the Institute for Advanced Study in Princeton, and the Freiburg Institute for Advanced Studies in Germany. In 2012, she won a Guggenheim Fellowship for her work in Anthropology and Cultural Studies. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tSome Recent Advances in Scalable Optimization\u2013 Mladen Kolar\t\t\t<\/h4>\n
\n

\n<\/p>

Mladen Kolar (opens in new tab)<\/span><\/a>, U Chicago | Wednesday, May 3, 2017<\/p>\n

Description<\/h2>\n

In this talk, I will present two recent ideas that can help solve large scale optimization problems. In the first part, I will present a method for solving ell-1 penalized linear and logistic regression problems where data are distributed across many machines. In such a scenario it is computationally expensive to communicate information between machines. Our proposed method requires a small number of rounds of communication to achieve the optimal error bound. Within each round, every machine only communicates a local gradient to the central machine, and the central machine solves an ell-1 penalized shifted linear or logistic regression. In the second part, I will discuss the use of sketching as a way to solve linear and logistic regression problems with large sample size and many dimensions. This work is aimed at solving large scale optimization problems on a single machine, while the extension to a distributed setting is work in progress.<\/p>\n
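
To make the communication pattern concrete, here is a minimal sketch (our own illustration in the spirit of communication-efficient surrogate methods, not necessarily the talk\u2019s exact algorithm): in one round, each machine sends its local gradient at the current iterate, and the central machine solves a shifted ell-1 penalized least-squares problem by proximal gradient descent. The function names, step size, and iteration count are illustrative assumptions.<\/p>\n

<pre><code>
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def local_grad(X, y, w):
    # Gradient of the local least-squares loss (1/2n)||Xw - y||^2.
    return X.T @ (X @ w - y) / len(y)

def one_round(data, w, lam=0.1, step=0.01, iters=500):
    """One communication round: every machine sends its local gradient at w;
    the center (machine 0) then solves a shifted ell-1 penalized problem."""
    grads = [local_grad(X, y, w) for X, y in data]   # one vector communicated per machine
    shift = np.mean(grads, axis=0) - grads[0]        # corrects the center's loss toward the global one
    X0, y0 = data[0]
    v = w.copy()
    for _ in range(iters):                           # proximal gradient on the surrogate objective
        g = local_grad(X0, y0, v) + shift
        v = soft_threshold(v - step * g, step * lam)
    return v

# Example with synthetic data split across 4 machines.
rng = np.random.default_rng(0)
w_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
data = []
for _ in range(4):
    X = rng.normal(size=(200, 5))
    data.append((X, X @ w_true + 0.1 * rng.normal(size=200)))
print(one_round(data, w=np.zeros(5)))
<\/code><\/pre>\n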

Biography<\/h2>\n

Mladen Kolar is Assistant Professor of Econometrics and Statistics at the University of Chicago Booth School of Business. His research is focused on high-dimensional statistical methods, graphical models, varying-coefficient models and data mining, driven by the need to uncover interesting and scientifically meaningful structures from observational data. Particular applications arise in studies of dynamic regulatory networks and social media analysis. His research has appeared in several publications including the Journal of Machine Learning Research, Annals of Applied Statistics, and the Electronic Journal of Statistics. He also regularly presents his research at the top machine learning conferences, including Advances in Neural Information Processing Systems and the International Conference on Machine Learning. Kolar was awarded a prestigious Facebook Fellowship in 2010 for his work on machine learning and network models. He spent a summer with Facebook\u2019s ads optimization team working on a large scale system for click-through rate prediction. His other past research included work with INRIA Rocquencourt in Paris, France and the Joint Research Center in Ispra, Italy. Kolar earned his PhD in Machine Learning in 2013 from Carnegie Mellon University, as well as a diploma in Computer Engineering from the University of Zagreb. For his Ph.D. thesis work on \u201cUncovering Structure in High-Dimensions: Networks and Multi-task Learning Problems,\u201d Kolar received the 2014 SIGKDD Dissertation Award honorable mention. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tThe \u2018Mastery\u2019 of the Swipe: Smartphones and Precarity in a Culture of Narcissism \u2013 Sharif Mowlabocus\t\t\t<\/h4>\n
\n

\n<\/p>

Sharif Mowlabocus (opens in new tab)<\/span><\/a>, University of Sussex | Wednesday, March 1, 2017 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

What do you do when you\u2019re waiting for the bus?<\/em> Or waiting for a class to start? <\/em> Or waiting at the doctor\u2019s office?<\/em> Or in line at the grocery store?<\/em> In this paper, I establish a dialogue between two discrete critical methodologies in order to consider the role of \u2018distracted\u2019 smartphone use within a socio-political context. By \u2018distracted\u2019 I am referring to the banal, everyday interactions we have with our smartphones throughout our day; the processes of swiping, tapping and gazing at our handheld devices, which occur dozens, if not hundreds, of times a day, and which have taken on the appearance of a habit or social \u2018tic\u2019 (see also Caronia, 2005; Bittman et al. 2009). Drawing on the work of Winnicott (1971), Lasch (1991), Silverstone (1993), Ribak (2009) and Kullman (2010), I commute between psychoanalytic and political-economy methods in order to connect an analysis of distracted smartphone use to a broader discussion of social, political and economic precarity<\/em>. Such an approach allows me to explore the relationship between the individual and society in order to identify how contemporary digital media practice is both a product of, and a response to, political, social and economic uncertainty.<\/p>\n

Biography<\/h2>\n

Sharif Mowlabocus is a visiting researcher from the University of Sussex, UK, where he is a Senior Lecturer (Assoc. Prof.) in Digital Media. Sharif\u2019s research is located at the intersection of LGBTQ studies and digital media studies. His work touches upon themes of digital embodiment, sexual representation and LGBTQ politics. While at MSRNE he will be working on a variety of projects that map onto these themes in different ways. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tSchelling models in mathematics, social science, physics, and computer science \u2013 Richard Elwes\t\t\t<\/h4>\n
\n

\n<\/p>

Richard Elwes (opens in new tab)<\/span><\/a>, University of Leeds | Wednesday, December 7, 2016<\/p>\n

Description<\/h2>\n

In 1969, the economist Thomas Schelling devised some very simple theoretical models of racial segregation. His interest was in examples of the decoupling of \u201cmicromotives\u201d from \u201cmacrobehaviour\u201d, i.e. groups of agents who, by each acting according to their individual local preferences, cause a global effect desired by none of them. Schelling was unaware of the close resemblance of his models to others studied in depth since the early 20th century by statistical physicists. Later, similar constructions would appear within computer science as models of cascading phenomena on networks, and as neural nets. In this talk we will introduce Schelling models and survey some recent progress on their rigorous mathematical analysis.<\/p>\n
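
For readers who want to see the mechanism in miniature, the following toy simulation (our illustration; the ring size, neighbourhood radius, threshold and update rule are arbitrary choices, not Schelling\u2019s original parameters) shows local preferences producing global segregation on a ring.<\/p>\n

<pre><code>
import random

def step(ring, radius=2, threshold=0.5):
    """One asynchronous update of a 1-D Schelling-style model on a ring.
    ring holds 'A', 'B', or None (empty). An unhappy agent -- one whose
    occupied neighbourhood contains a below-threshold share of its own
    type -- relocates to a uniformly random empty cell."""
    n = len(ring)
    agents = [i for i, c in enumerate(ring) if c is not None]
    i = random.choice(agents)
    neigh = [ring[(i + d) % n] for d in range(-radius, radius + 1) if d != 0]
    occupied = [c for c in neigh if c is not None]
    same = sum(1 for c in occupied if c == ring[i])
    if occupied and same / len(occupied) < threshold:   # locally "unhappy"
        empties = [j for j, c in enumerate(ring) if c is None]
        if empties:
            j = random.choice(empties)
            ring[j], ring[i] = ring[i], None
    return ring

# 60 agents of each type plus 30 empty cells, then many asynchronous updates.
ring = ['A'] * 60 + ['B'] * 60 + [None] * 30
random.shuffle(ring)
for _ in range(20000):
    step(ring)
print(''.join(c if c else '.' for c in ring))   # long same-type runs indicate segregation
<\/code><\/pre>\n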

Biography<\/h2>\n

Richard Elwes is a mathematician at University of Leeds (UK). His research background is in mathematical logic, but in recent years his interests have evolved to include the analysis of random processes arising in other sciences, and the interface between the two subjects. He is the author of five books on maths aimed at the general public, including Math 1001<\/em> and Chaotic Fishponds & Mirror Universes,<\/em> and delivers regular talks and masterclasses to audiences of all types and levels. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tNewtonian Regulation, Einsteinian Actors: The Magic Shoebox and the Contradictions of US Share Trading \u2013 Donald Mackenzie\t\t\t<\/h4>\n
\n

\n<\/p>

Donald Mackenzie (opens in new tab)<\/span><\/a>, University of Edinburgh <\/strong>| Wednesday, November 2, 2016<\/p>\n

Description<\/h2>\n

The \u2018magic shoebox\u2019 is a 38-mile coil of optical fibre in a computer datacentre in Secaucus, NJ, known as \u2018NY4\u2019. For the last year, the world of professional share trading in the US has been convulsed by controversy about the shoebox. The shoebox is deployed by a new stock exchange, IEX, which will be familiar to the readers of Michael Lewis\u2019s 2014 bestseller Flash Boys<\/em>. The purpose of the coil is to slow trading down, albeit by less than a thousandth of a second. The coil and the controversy surrounding it, MacKenzie will argue, have a much longer history than that sketched in Flash Boys.<\/em> Their roots are in a decision in the late 1970s that was implicitly about how US share trading should be configured technologically, a decision whose long-term effects continue to shape how trading is regulated today. Regulation abstracts away from the finite speed of signals and from the measurement of time and simultaneity, but in the \u2018machine time\u2019 of today\u2019s high-frequency trading these issues matter. MacKenzie will sketch five features of today\u2019s share trading that in part reflect the 1970s\u2019 decision. The \u2018shoebox\u2019 is a response to these features, but in many ways epitomises the contradictions that afflict share trading rather than (as Lewis might have it) resolving them. The talk will be based on an extensive, largely oral-historical study of the development of high-frequency trading (including the technologies and electronic trading venues that make it possible) and of its interactions with market regulation and \u2013 indirectly \u2013 with the US political system.<\/p>\n

Biography<\/h2>\n

Donald MacKenzie is a professor of sociology at the University of Edinburgh. His current research is on the sociology of financial markets, especially the development of automated high-frequency trading (HFT), of the technologies and electronic trading venues that make it possible, and the interaction between HFT, regulation and the political system. His books include Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance (MIT Press, 1990) and An Engine, Not a Camera: How Financial Models Shape Markets (MIT Press, 2006). He writes regularly about financial markets in the London Review of Books. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tA neurally-inspired model of habit and its empirical implications \u2013 Colin Camerer \t\t\t<\/h4>\n
\n

\n<\/p>

Colin Camerer (opens in new tab)<\/span><\/a>, Caltech <\/strong>| Wednesday, October 5, 2016 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

The busy human brain creates fast, low-cost habits when choices are frequent and provide stable rewards. Using evidence from animal learning and cognitive neuroscience, we model a two-controller system in which habit and model-based choice coexist. The key inputs are reward prediction error (RPE) and the absolute magnitude of RPE. As the RPEs from a choice move toward zero, habits form. When the magnitude of averaged RPE exceeds a threshold, habits are overridden by model-based choice. The model contrasts with the long-standing approach in economics (which relies on complementarity of consumption choice) and has several interesting properties that can be tested with behavioral and cognitive data.<\/p>\n
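
As a rough numerical illustration of the two-controller idea (our own toy parameterization, not the authors\u2019 model), the sketch below tracks a learned value with a delta rule, lets a habit form as the running magnitude of the reward prediction error shrinks toward zero, and hands control back to the model-based system when that magnitude crosses a threshold.<\/p>\n

<pre><code>
def simulate(rewards, alpha=0.2, habit_cutoff=0.05, override=0.5):
    """Illustrative two-controller loop: value V is learned from reward
    prediction errors (RPE); 'habit' mode switches on when the running
    average |RPE| falls below habit_cutoff, and back off when it exceeds
    the override threshold (e.g. after the reward contingency shifts)."""
    V, avg_abs_rpe, mode = 0.0, 1.0, 'model-based'
    history = []
    for r in rewards:
        rpe = r - V
        V += alpha * rpe                                   # standard delta-rule update
        avg_abs_rpe += alpha * (abs(rpe) - avg_abs_rpe)    # running magnitude of RPE
        if mode == 'model-based' and avg_abs_rpe < habit_cutoff:
            mode = 'habit'
        elif mode == 'habit' and avg_abs_rpe > override:
            mode = 'model-based'
        history.append((round(V, 2), round(avg_abs_rpe, 2), mode))
    return history

# A stable reward of 1.0 eventually induces a habit; a shift to 0.0 produces
# large RPEs that push control back to the model-based system.
print(simulate([1.0] * 40 + [0.0] * 10))
<\/code><\/pre>\n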

Biography<\/h2>\n

Professor Colin F. Camerer is the Robert Kirby Professor of Behavioral Finance and Economics at the California Institute of Technology (located in Pasadena, California), where he teaches cognitive psychology and economics. Professor Camerer earned a BA degree in quantitative studies from Johns Hopkins in 1977, and an MBA in finance (1979) and a Ph.D. in decision theory (1981, at age 22) from the University of Chicago Graduate School of Business. Before coming to Caltech in 1994, Camerer worked at the Kellogg, Wharton, and University of Chicago business schools. He studies both behavioral and experimental economics. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tPrediction and Causation\u2013 Sendhil Mullainathan \t\t\t<\/h4>\n
\n

\n<\/p>

Sendhil Mullainathan (opens in new tab)<\/span><\/a>, Harvard <\/strong>| Wednesday, September 7, 2016<\/p>\n

Description<\/h2>\n

Machine learning provides powerful new tools for prediction. These tools are often criticized for being weak on causal inference. In this talk, I will describe how they can play a central role in areas that are thought to be causal in nature: policy making, theory testing and experimentation. I will illustrate using applications from crime, finance and behavioral economics.<\/p>\n

Biography<\/h2>\n

Sendhil Mullainathan is Professor of Economics at Harvard and a MacArthur Fellow. He is author with Eldar Shafir of Scarcity: How Having Too Little Means So Much<\/em> and has done influential research on the way human psychology shapes economic decisions, especially in developing countries and among the poor in developed countries. He has founded or led a number of institutions that have helped apply ideas from his research to improve the lives of the global poor (through the Jameel Poverty Action Lab and Ideas42, which he co-founded) and the poor in the United States (through serving as Chief Economist of the Consumer Financial Protection Bureau). In recent years his research has increasingly turned, in a collaboration with Jon Kleinberg, to human-computer interaction and the ways that machine learning can both help us understand economic choice and improve the choices individuals make. He has also been working on automated discovery of patterns in health data that may help save lives. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tThin Spanning Trees and their Algorithmic Applications\u2013 Amin Saberi\t\t\t<\/h4>\n
\n

\n<\/p>

Amin Saberi (opens in new tab)<\/span><\/a>, Stanford<\/strong> | Wednesday, August 24, 2016<\/p>\n

Description<\/h2>\n

Motivated by Jaeger\u2019s modular orientation conjecture, Goddyn asked the following question: A spanning tree of a graph G is called epsilon-thin if it contains at most an epsilon fraction of the edges of each cut in that graph. Is there a function f:(0,1)\u2192\u2124 such that every f(epsilon)-edge-connected graph has an epsilon-thin spanning tree? I will talk about our journey in search of such thin trees, their applications concerning traveling salesman problems, and unexpected connections to graph sparsification and the Kadison-Singer problem.<\/p>\n
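
To fix the definition, here is a brute-force check (an illustrative toy, exponential in the number of vertices and obviously not how one would search for thin trees): it reports the largest fraction of any cut\u2019s edges that a given spanning tree uses, i.e. the smallest epsilon for which that tree is epsilon-thin.<\/p>\n

<pre><code>
from itertools import combinations

def thinness(edges, tree_edges, nodes):
    """Largest fraction of any cut's edges used by the tree; the tree is
    epsilon-thin exactly when this value is at most epsilon. Brute force:
    enumerate every proper vertex subset S and the cut it induces."""
    worst, nodes = 0.0, list(nodes)
    for k in range(1, len(nodes)):
        for S in combinations(nodes, k):
            S = set(S)
            cut = [e for e in edges if (e[0] in S) != (e[1] in S)]
            if cut:
                worst = max(worst, sum(1 for e in cut if e in tree_edges) / len(cut))
    return worst

# K4 on vertices 0..3; the path 0-1-2-3 is a spanning tree.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
tree = {(0, 1), (1, 2), (2, 3)}
print(thinness(edges, tree, range(4)))   # 0.75: this tree is epsilon-thin for epsilon >= 3/4
<\/code><\/pre>\n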

Biography<\/h2>\n

Amin Saberi is Associate Professor and 3COM Faculty Scholar at Stanford University. He received his B.Sc. from Sharif University of Technology and his Ph.D. in Computer Science from Georgia Institute of Technology. His research interests include algorithms, design and analysis of social networks, and applications. He is a recipient of the Terman Fellowship, the Alfred P. Sloan Fellowship and a number of best paper awards. Amin is also co-founder and chairman of NovoEd, a social learning environment designed in his research lab and used by universities such as Stanford, UC Berkeley, and University of Michigan as well as non-profit and for-profit institutions for offering courses to hundreds of thousands of learners around the world. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tHow Can Natural Language Processing Help Cure Cancer?\u2013 Regina Barzilay\t\t\t<\/h4>\n
\n

\n<\/p>

Regina Barzilay (opens in new tab)<\/span><\/a>, MIT<\/strong> | Wednesday, August 17, 2016<\/p>\n

Description<\/h2>\n

Cancer inflicts a heavy toll on our society. One out of seven women will be diagnosed with breast cancer during their lifetime, a fraction of them contributing to about 450,000 deaths annually worldwide. Despite billions of dollars invested in cancer research, our understanding of the disease, treatment, and prevention is still limited. The majority of cancer research today takes place in biology and medicine. Computer science plays a minor supporting role in this process, if it plays one at all. In this talk, I hope to convince you that NLP as a field has a chance to play a significant role in this battle. Indeed, free-form text remains the primary means by which physicians record their observations and clinical findings. Unfortunately, this rich source of textual information is severely underutilized by predictive models in oncology. Current models rely primarily on structured data. In the first part of my talk, I will describe a number of tasks where NLP-based models can make a difference in clinical practice. For example, these include improving models of disease progression, preventing over-treatment, and narrowing down to the cure. This part of the talk draws on active collaborations with oncologists from Massachusetts General Hospital (MGH). In the second part of the talk, I will push beyond standard tools, introducing new functionalities and avoiding annotation-hungry training paradigms ill-suited for clinical practice. In particular, I will focus on interpretable neural models that provide rationales underlying their predictions, and semi-supervised methods for information extraction.<\/p>\n

Biography<\/h2>\n

Regina Barzilay is a professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. Her research interests are in natural language processing. She is a recipient of various awards including the NSF Career Award, the MIT Technology Review TR-35 Award, Microsoft Faculty Fellowship and several Best Paper Awards at NAACL and ACL. She received her Ph.D. in Computer Science from Columbia University, and spent a year as a postdoc at Cornell University. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tLearning and Equilibrium in Games \u2013 Drew Fudenberg\t\t\t<\/h4>\n
\n

\n<\/p>

Drew Fudenberg (opens in new tab)<\/span><\/a>, Harvard<\/strong> | Wednesday, August 3, 2016 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

When and why will observed play in a game approximate an equilibrium, and what sort of equilibria will persist? To understand this, we study the long-run outcomes of rational non-equilibrium learning. In one-shot simultaneous-move games, steady states of such processes must be Nash equilibria, but this is not true in extensive-form games, where mistaken beliefs about opponents\u2019 play and non-Nash outcomes can persist due to the tradeoff between exploration and exploitation. When players are patient, learning leads players to have the correct beliefs about the path of play and so to a subset of the Nash equilibria. Ongoing research analyzes this subset for the class of signalling games, which are known to have many implausible Nash equilibria.<\/p>\n

Biography<\/h2>\n

Drew Fudenberg is the Paul A. Samuelson Professor of Economics at MIT. He is best known for his work on game theory, which ranges from foundational work on learning and equilibrium to the study of particular games used to study e.g. reciprocal altruism, reputation effects, and competition between firms. More recently he has also worked on topics in behavioral economics and decision theory. He is the author of four books, including a leading game theory text. He received an A.B. in applied mathematics from Harvard College in 1978, and a Ph.D. in economics from MIT in 1981, followed by faculty appointments at UC Berkeley, MIT, and Harvard. He is a Fellow of the Econometric Society, and will be its President in 2017. He is a member of both the National Academy of Sciences and the American Academy of Arts and Sciences. He is also a past editor of Econometrica <\/em>and a co-founder of the open access journal Theoretical Economics<\/em>. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tBlack + Twitter: A Cultural Informatics Approach \u2013 Andre Brock\t\t\t<\/h4>\n
\n

\n<\/p>

Andre Brock (opens in new tab)<\/span><\/a>, U Michigan<\/strong> | Wednesday, July 27, 2016 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

Chris Sacca, activist investor, recently argued that Twitter IS Black Twitter. African American usage of the service often dominates user metrics in the United States, despite African Americans\u2019 minority demographic status among computer users. This talk unpacks Black Twitter use from two perspectives: analysis of the interface and associated practice alongside discourse analysis of Twitter\u2019s utility and audience. Using examples of Black Twitter practice, I offer that Twitter\u2019s feature set and ubiquity map closely onto Black discursive identity. Thus, Twitter\u2019s outsized function as a mechanism for cultural critique and political activism can be understood as the awakening of Black digital practice and an abridging of a digital divide.<\/p>\n

Biography<\/h2>\n

Andr\u00e9 Brock is an Assistant Professor of Communication Studies at the University of Michigan. Brock is one of the preeminent scholars of Black Cyberculture. His work bridges Science and Technology Studies and Critical Discourse Analysis, showing how the communicative affordances of online media align with those of Black communication practices. Through December 2016, he is a Visiting Researcher with the Social Media Collective at Microsoft Research New England. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tEngaging Patients In Their Own Health: Will Predictive Analytics Help To Sharpen Blunt Instruments? \u2013 Niteesh Choudhry\t\t\t<\/h4>\n
\n

\n<\/p>

Niteesh Choudhry (opens in new tab)<\/span><\/a>, Harvard Medical School<\/strong> | Wednesday, July 20, 2016<\/p>\n

Description<\/h2>\n

Many highly effective health interventions are never widely adopted into routine care. For example, each year only 40% of adults receive a flu vaccine and only half of patients who have had a heart attack continue to take their cardiac medications over the long-term. Overcoming these gaps in health care implementation requires patients and providers to be more actively involved in healthcare delivery. Engagement techniques that rely on strategies from behavioral economics, marketing, cognitive psychology, information technology and other related disciplines have shown promise, although even the best of these interventions have changed behavior to only a modest extent. This may result from either the interventions themselves having limited effectiveness or their not being optimally targeted to those who may benefit the most. In this lecture, I will use the example of medication non-adherence, an exceptionally common public health problem that annually accounts for hundreds of billions of dollars of potentially avoidable health spending in the U.S. alone, to describe how predictive analytics is being used to refine patient engagement interventions. I will review both what appears to be possible today (as in identifying who is likely to be non-adherent in the future and when they will exhibit this behavior) and describe what else needs to be developed to fully capture an individual\u2019s behavioral response phenotype.<\/p>\n

Biography<\/h2>\n

Niteesh K. Choudhry, MD, PhD, is an Associate Professor at Harvard Medical School and Executive Director of the Center for Healthcare Delivery Sciences (www.c4hds.org) at Brigham and Women\u2019s Hospital, where he is also an Associate Physician in the Division of Pharmacoepidemiology and Pharmacoeconomics. Much of Dr. Choudhry\u2019s recent work has dealt with the design and implementation of large simple clinical trials embedded in real-world health systems. He was the principal investigator of the Post-MI Free Rx and Event and Economic Evaluation (MI FREEE) trial, on the basis of which Aetna has changed its benefits to waive medication copayments for post-MI secondary prevention medications. He is also the principal investigator of several other large pragmatic clinical trials designed to engage patients in improving their own health care. These include the Randomized Evaluation to Measure Improvements in Nonadherence from low-cost Devices (REMIND) trials, the NHLBI-funded Study of a Telepharmacy Intervention for Chronic disease to Improve Treatment adherence (STIC 2 IT), the Targeted Adherence intervention to Reach Glycemic control with Insulin Therapy for Diabetes patients (TARGIT \u2013 Diabetes) study, and the ENhancing outcomes through Goal Assessment and Generating Engagement in Diabetes Mellitus (ENGAGE-DM) trial. He is also the Co-Principal Investigator of the Mail Outreach To Increase Vaccine Acceptance Through Engagement (MOTIVATE) trial, conducted in partnership with the White House Social and Behavioral Sciences Team and the Center for Medicare and Medicaid Services; it seeks to increase rates of influenza vaccination among Medicare beneficiaries. Dr. Choudhry attended McGill University, received his M.D. and completed his residency training in Internal Medicine at the University of Toronto and then served as Chief Medical Resident for the Toronto General and Toronto Western Hospitals. He earned his Ph.D. in Health Policy from Harvard University. He has published over 190 scientific papers in leading medical and policy journals and has won awards from AcademyHealth, the Society of General Internal Medicine, the International Society of Pharmacoeconomics and Outcomes Research, and the National Institute of Health Care Management for his research. His work is supported by both public and private funders including the National Heart, Lung, and Blood Institute, the Agency for Healthcare Quality and Research, CVS Caremark, Aetna, the Robert Wood Johnson Foundation, the Commonwealth Fund, the Arnold Foundation, Merck, Sanofi, AstraZeneca and the Pharmaceutical Research and Manufacturers of America. Dr. Choudhry practices inpatient general internal\/hospital medicine and has won numerous awards for teaching excellence. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tPersonal Control of Data \u2013 Butler Lampson\t\t\t<\/h4>\n
\n

\n<\/p>

Butler Lampson (opens in new tab)<\/span><\/a>, MSR New England<\/strong> | Wednesday, July 13, 2016 | Video (opens in new tab)<\/span><\/a><\/p>\n

Description<\/h2>\n

People around the world are concerned that more and more of their personal data is on the Internet, where it\u2019s easy to find, copy, and link up with other data. Data about people\u2019s presence and actions in the physical world (from cameras, microphones, and other sensors) soon will be just as important as data that is born digital. What people most often want is a sense of control over their data (even if they don\u2019t exercise this control very often). Control means that you can tell who has your data, limit what they can do with it, and change your mind about the limits. Many people feel that this control is a fundamental human right (thinking of personal data as an extension of the self), or an essential part of your property rights to your data. Regulators are starting to respond to these concerns. Because societies around the world have different cultural norms and governments have different priorities, there will not be a single worldwide regulatory regime. However, it does seem possible to have a single set of basic technical mechanisms that support regulation, based on the idea of requiring data holders to respect the current policy of data subjects about how their data is used.<\/p>\n

Biography<\/h2>\n

Butler is a Technical Fellow at Microsoft and an Adjunct Professor at MIT.  He has worked on computer architecture, local area networks, raster printers, page description languages, operating systems, remote procedure call, programming languages and their semantics, programming in the large, fault-tolerant computing, transaction processing, computer security, WYSIWYG editors, and tablet computers.  He was one of the designers of the SDS 940 time-sharing system, the Alto personal distributed computing system, the Xerox 9700 laser printer, two-phase commit protocols, the Autonet LAN, the SPKI system for network security, the Microsoft Tablet PC software, the Microsoft Palladium high-assurance stack, and several programming languages. He received the ACM Software Systems Award in 1984 for his work on the Alto, the IEEE Computer Pioneer award in 1996 and von Neumann Medal in 2001, the Turing Award in 1992, and the NAE\u2019s Draper Prize in 2004. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tIncentive Alignment for Machine Learning \u2013 Yiling Chen\t\t\t<\/h4>\n
\n

\n<\/p>

Yiling Chen (opens in new tab)<\/span><\/a>, Harvard<\/strong> | Wednesday, June 29, 2016<\/p>\n

Description<\/h2>\n

We are blessed with unprecedented abilities to connect with people all over the world: buying and selling products, sharing information and experiences, asking and answering questions, collaborating on projects, borrowing and lending money, and exchanging excess resources. These activities result in rich data that scientists can use to understand human social behavior, generate accurate predictions, and make policy recommendations. Machine learning traditionally takes such data as given, often treating them as independent samples drawn from some underlying true distribution. However, such data are possessed or generated by (potentially strategic) people in the context of specific interaction rules. Hence, what data become available depends on the interaction rules. In this talk, I argue that a holistic view that jointly considers data acquisition, inference, and learning is important for machine learning. As an example, I will present a project on incentivizing strategic agents to generate high-quality data for the purpose of regression.<\/p>\n

Biography<\/h2>\n

Yiling Chen is the Gordon McKay Professor of Computer Science at Harvard University. She received her Ph.D. in Information Sciences and Technology from the Pennsylvania State University. Prior to working at Harvard, she spent two years at Yahoo! Research in New York City. Her current research focuses on topics in the intersection of computer science and economics. Her awards include an ACM EC Outstanding Paper Award, an AAMAS Best Paper Award, an NSF Career award and The Penn State Alumni Association Early Career Award, and she was selected by IEEE Intelligent Systems as one of \u201cAI\u2019s 10 to Watch\u201d in 2011. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tRigorous Foundations for Privacy in Statistical Databases \u2013 Adam Smith\t\t\t<\/h4>\n
\n

\n<\/p>

Adam Smith (opens in new tab)<\/span><\/a>, Penn State<\/strong> | Wednesday, June 22, 2016<\/p>\n

Description<\/h2>\n

Consider an agency holding a large database of sensitive personal information \u2014 medical records, census survey answers, web search records, or genetic data, for example. The agency would like to discover and publicly release global characteristics of the data (say, to inform policy or business decisions) while protecting the privacy of individuals\u2019 records. This problem is known variously as \u201cstatistical disclosure control\u201d, \u201cprivacy-preserving data mining\u201d or \u201cprivate data analysis\u201d. I will begin by discussing what makes this problem difficult, and exhibit some of the nontrivial issues that plague simple attempts at anonymization and aggregation. Motivated by this, I will present differential privacy, a rigorous definition of privacy in statistical databases that has received significant attention. I\u2019ll explain some recent results on the design of differentially private algorithms, as well as the application of these ideas in contexts with no (previously) apparent connection to privacy.<\/p>\n
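
As one concrete instance of a differentially private algorithm (textbook material included here for illustration, not a result from the talk): the Laplace mechanism answers a counting query by adding noise scaled to the query\u2019s sensitivity.<\/p>\n

<pre><code>
import numpy as np

def laplace_count(records, predicate, epsilon):
    """Release a counting query with epsilon-differential privacy.
    Adding or removing one record changes the count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Toy example: a noisy count of people aged 65 or over, released with epsilon = 0.5.
ages = [34, 71, 68, 45, 90, 23, 67]
print(laplace_count(ages, lambda a: a >= 65, epsilon=0.5))
<\/code><\/pre>\n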

Biography<\/h2>\n

Adam Smith is a professor of Computer Science and Engineering at Penn State. His research interests lie in data privacy and cryptography and their connections to information theory, statistical learning and quantum computing. He received his Ph.D. from MIT in 2004 and was subsequently a visiting scholar at the Weizmann Institute of Science and UCLA and a visiting professor at Boston University and Harvard. He received a 2009 Presidential Early Career Award for Scientists and Engineers (PECASE) and the 2016 Theory of Cryptography Test of Time Award (with Dwork, McSherry and Nissim). <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tIncentive Auctions and Spectrum Repacking: A Case Study for Deep Optimization \u2013 Kevin Leyton-Brown\t\t\t<\/h4>\n
\n

\n<\/p>

Kevin Leyton-Brown (opens in new tab)<\/span><\/a>, U British Columbia<\/strong> | Wednesday, June 15, 2016<\/p>\n

Description<\/h2>\n

This talk will discuss the FCC\u2019s \u201cincentive auction\u201d\u2013currently underway!\u2013which proposes to give television broadcasters an opportunity to sell their broadcast rights, to repack remaining broadcasters into a smaller block of spectrum, and to resell the freed airwaves to telecom companies. The stakes for this auction are huge\u2013projected tens of billions of dollars in revenue for the government\u2013justifying the design of a special-purpose descending-price auction mechanism. An inner-loop problem in this mechanism is determining whether a given set of broadcasters can be repacked into a smaller block of spectrum while respecting radio interference constraints. This is an instance of a (worst-case intractable) graph coloring problem; however, stations\u2019 broadcast locations and interference constraints are all known in advance. Early efforts to solve this problem considered hand-crafted mixed-integer programming formulations, but were unable to reliably solve realistic, national-scale problem instances. We advocate instead for a \u201cdeep optimization\u201d approach that applies abundant offline computation to tailor an algorithm to the problem at hand. In particular, we leveraged automatic algorithm configuration and algorithm portfolio techniques, alongside constraint graph decomposition; novel caching mechanisms that allow reuse of partial solutions from related, solved problems; and the marriage of local-search and complete SAT solvers. We show that our approach solves virtually all of a set of problems derived from auction simulations within the short time budget required in practice.<\/p>\n
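
To make the inner-loop feasibility problem concrete, the toy sketch below (a naive backtracking search written for illustration; it bears no resemblance to the SAT-based portfolio the talk describes) checks whether each station can be assigned a channel so that no two interfering stations share one.<\/p>\n

<pre><code>
def repackable(stations, interferes, channels):
    """Toy spectrum-repacking feasibility check, phrased as graph coloring.
    stations: list of ids; interferes: set of frozenset pairs that must not
    share a channel; channels: available channels. Returns an assignment
    dict or None. Exponential in the worst case, which is exactly why
    realistic national-scale instances need heavier machinery."""
    assignment = {}

    def assign(i):
        if i == len(stations):
            return True
        s = stations[i]
        for c in channels:
            ok = all(assignment[t] != c or frozenset((s, t)) not in interferes
                     for t in assignment)
            if ok:
                assignment[s] = c
                if assign(i + 1):
                    return True
                del assignment[s]
        return False

    return assignment if assign(0) else None

# Three mutually interfering stations fit into three channels but not two.
pairs = {frozenset(p) for p in [('a', 'b'), ('b', 'c'), ('a', 'c')]}
print(repackable(['a', 'b', 'c'], pairs, [1, 2]))      # None
print(repackable(['a', 'b', 'c'], pairs, [1, 2, 3]))   # e.g. {'a': 1, 'b': 2, 'c': 3}
<\/code><\/pre>\n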

Biography<\/h2>\n

Kevin Leyton-Brown is a professor of computer science at the University of British Columbia. He studies the intersection of computer science and microeconomics, addressing computational problems in economic contexts and incentive issues in multiagent systems. He also applies machine learning to the automated design and analysis of algorithms for solving hard computational problems. Lately, he has been involved in designing an algorithm to clear the FCC\u2019s upcoming \u201cincentive auction\u201d for radio spectrum; applying deep learning to model human behavior in games and discrete choice settings; building an online market for agricultural commodities in Uganda (\u201cKudu\u201d); and building a system for TA-supported student peer grading (\u201cMechanical TA\u201d) and analyzing its game theoretic properties. Kevin received his PhD from Stanford University. He is the recipient of UBC\u2019s 2015 Charles A. McDowell Award for Excellence in Research, a 2014 NSERC E.W.R. Steacie Memorial Fellowship, and a 2013 Outstanding Young Computer Science Researcher Prize from the Canadian Association of Computer Science. <\/p>\n

<\/p><\/div>\n

\n\t\t\t\tDecision making at scale: Algorithms, Mechanisms, and Platforms \u2013 Ashish Goel\t\t\t<\/h4>\n
\n

\n<\/p>

Ashish Goel (opens in new tab)<\/span><\/a>, Stanford <\/strong>| Wednesday, May 25, 2016<\/p>\n

Description<\/h2>\n

YouTube competes with Hollywood as an entertainment channel, and also supplements Hollywood by acting as a distribution mechanism. Twitter has a similar relationship to news media, and Coursera to Universities. But there are no online alternatives for making democratic decisions at large scale as a society. In this talk, we will describe two algorithmic approaches towards large scale decision making that we are exploring. We will also describe our experience with helping implement participatory budgeting in close to two dozen cities and municipalities, including Cambridge, MA, and briefly comment on issues of fairness.<\/p>\n

    \n
  1. Knapsack voting and participatory budgeting: All budget problems are knapsack problems at their heart, since the goal is to pack the largest amount of societal value into a budget. This naturally leads to \u201cknapsack voting\u201d where each voter solves a knapsack problem, or comparison-based voting where each voter compares pairs of projects in terms of benefit-per-dollar. We analyze natural aggregation algorithms for these mechanisms, and natural utility models for voters, and show that knapsack voting is strategy-proof under these models (see the illustrative sketch below).<\/li>\n
  2. Triadic consensus: Here, we divide individuals into small groups (say groups of three) and ask them to come to consensus; the results of the triadic deliberations in each round form the input to the next round. We show that this method is efficient and incentivizes truth-telling in fairly general settings, whereas no pair-wise deliberation process can have the same properties.<\/li>\n<\/ol>\n

    This is joint work with Tanja Aitamurto, Brandon Fain, Anilesh Krishnaswamy, David Lee, Kamesh Munagala, and Sukolsak Sakshuwong.<\/p>\n
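
    For illustration only, here is a minimal sketch of one simple aggregation rule in the spirit of the knapsack-voting idea above (greedy packing by approval count; the rule actually analyzed in the talk, and its strategy-proofness guarantee, are more refined):<\/p>\n

<pre><code>
def aggregate(ballots, costs, budget):
    """Each ballot is a set of project ids whose total cost fits the budget.
    Fund projects greedily by approval count (ties broken by lower cost)
    until the budget runs out -- one simple illustrative rule, not
    necessarily the aggregation mechanism analyzed in the talk."""
    votes = {}
    for ballot in ballots:
        for p in ballot:
            votes[p] = votes.get(p, 0) + 1
    funded, remaining = [], budget
    for p in sorted(votes, key=lambda p: (-votes[p], costs[p])):
        if costs[p] <= remaining:
            funded.append(p)
            remaining -= costs[p]
    return funded

costs = {'park': 60, 'library': 50, 'bikes': 30, 'wifi': 20}
ballots = [{'park', 'bikes'}, {'library', 'bikes', 'wifi'}, {'park', 'wifi'}]
print(aggregate(ballots, costs, budget=100))   # ['wifi', 'bikes', 'library']
<\/code><\/pre>\n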

    Biography<\/h2>\n

    Ashish Goel is a Professor of Management Science and Engineering and (by courtesy) Computer Science at Stanford University, and a member of Stanford\u2019s Institute for Computational and Mathematical Engineering. He received his PhD in Computer Science from Stanford in 1999, and was an Assistant Professor of Computer Science at the University of Southern California from 1999 to 2002. His research interests lie in the design, analysis, and applications of algorithms; current application areas of interest include social networks, participatory democracy, Internet commerce, and large scale data processing. Professor Goel is a recipient of an Alfred P. Sloan faculty fellowship (2004-06), a Terman faculty fellowship from Stanford, an NSF Career Award (2002-07), and a Rajeev Motwani mentorship award (2010). He was a co-author on the paper that won the best paper award at WWW 2009, and an Edelman Laureate in 2014. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tHow Spreadsheets Shape Organizational Life: A Case Study in the Materialities of Information \u2013 Paul Dourish\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Paul Dourish<\/b> (opens in new tab)<\/span><\/a>, UC Irvine<\/b> | Wednesday, May 18, 2016 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Material manifestations of digital representations \u2014 as bits of a disk, signals in a wire, or images on a screen \u2014 shape, constrain, and enable various forms of human and social action. Based on a book in progress that draws on a range of examples including network protocols and database architectures, this talk will focus on the example of spreadsheets to make this argument. A number of recent studies have examined the role that Powerpoint plays in organizational life. Organizational scholars like Wanda Orlikowski and Joanne Yates have looked at the Powerpoint presentation as a particular genre of organizational practice; critics like Edward Tufte have bemoaned the dumbing-down of Powerpoint-driven communication. In ethnographic work on large-scale science, my colleagues and I were struck by a related phenomenon, which is the prevalence of spreadsheets, not just as a document format, but as something that gets incorporated into meetings. This talk will show how the ways spreadsheets are designed \u2014 the constraints and shapes they offer and require \u2014 structure talk and action in particular ways that get organizational work done.<\/p>\n

    Biography<\/h2>\n

    Paul Dourish is Professor of Informatics at UC Irvine, with courtesy appointments in Computer Science and Anthropology; he also has visiting appointments at the University of Melbourne and with Comparative Media Studies at MIT. His research lies primarily in human-computer interaction, software studies, science and technology studies, and cultural studies of digital media. Before joining the faculty at Irvine, he was Senior Research Scientist in the Computer Science Laboratory at Xerox PARC. His current book project, for MIT Press, explores the representational materialities of digital information through a range of case studies including Internet routing algorithms, databases, and spreadsheets. He is a Fellow of the ACM, a member of the ACM SIGCHI Academy, and has received the AMIA Diana Forsythe Award and the CSCW \u201cLasting Impact\u201d award. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tGenetic Screens with CRISPR: A New Hope in Functional Genomics \u2013 John Doench\t\t\t<\/h4>\n
    \n

    \n<\/p>

    John Doench<\/b> (opens in new tab)<\/span><\/a>, Broad Institute<\/b> | Wednesday, May 4, 2016 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Functional genomics attempts to understand the genome by disrupting the flow of information from DNA to RNA to protein and then observing how the cell or organism changes in response. Both RNAi and CRISPR technologies are simply hacks of systems that originally evolved to silence viruses, reprogrammed to target genes we\u2019re interested in studying, as decoding the function of genes is a critical step towards understanding how gene dysfunction leads to disease. Here we will discuss the development and optimization of CRISPR technology for genome-wide genetic screens and its application to multiple biological problems.<\/p>\n

    Biography<\/h2>\n

    John Doench is the Associate Director of the Genetic Perturbation Platform at the Broad Institute. He develops and applies the latest approaches in functional genomics, including RNAi, ORF, and CRISPR technologies, to understand the function of genes and how gene dysfunction leads to disease. John collaborates with researchers across the community to develop faithful biological models and execute genetic screens. Prior to joining the Broad in 2009, John did his postdoctoral work at Harvard Medical School, received his PhD from the biology department at MIT, and majored in history at Hamilton College. John lives in Jamaica Plain, MA with his wife and daughter, where he enjoys coaching soccer, cheering on the Red Sox and Patriots, playing volleyball, running, and avoiding imminent death while navigating the streets of Boston on a bicycle. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tCollective Graph Identification \u2013 Lise Getoor\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Lise Getoor<\/b> (opens in new tab)<\/span><\/a>, UC Santa Cruz<\/b> | Wednesday, April 13, 2016<\/p>\n

    Description<\/h2>\n

    Graph data (e.g., communication data, financial transaction networks, data describing biological systems, collaboration networks, the Web, etc.) is ubiquitous. While this observational data is useful, it is usually noisy, often only partially observed, and only hints at the actual underlying social, scientific or technological structures that give rise to the interactions. For example, an email communication network provides useful insight, but is not the same as the \u201creal\u201d social network among individuals. In this talk, I introduce the problem of graph identification, i.e., the discovery of the true graph structure underlying an observed network. This involves inferring the nodes, edges, and node labels of a hidden graph based on evidence provided by the observed graph. I show how this can be cast as a collective probabilistic inference task and describe a scalable approach to solving this problem.<\/p>\n

    Biography<\/h2>\n

    Lise Getoor is a professor in the Computer Science Department at the University of California, Santa Cruz. Her research areas include machine learning, data integration and reasoning under uncertainty, with an emphasis on graph and network data. She has over 200 publications and extensive experience with machine learning and probabilistic modeling methods for graph and network data. She is a Fellow of the Association for Artificial Intelligence, an elected board member of the International Machine Learning Society, serves on the board of the Computing Research Association (CRA), and was co-chair for ICML 2011. She is a recipient of an NSF Career Award and ten best paper and best student paper awards. She received her PhD from Stanford University in 2001, her MS from UC Berkeley, and her BS from UC Santa Barbara, and was a professor in the Computer Science Department at the University of Maryland, College Park from 2001-2013. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tReframing the Financial Inclusion Debate: Evidence from an Up-Close View of Check Cashers and Payday Lenders \u2013 Lisa Servon\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Lisa Servon<\/b> (opens in new tab)<\/span><\/a>, New School<\/b> | Wednesday, April 6, 2016 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    What do a Mexican immigrant living in the South Bronx, a twenty-something graduate student, and a telemarketer in Dallas have in common? All three are victims of our dysfunctional mainstream bank and credit system. As banks have grown larger and focused less on serving ordinary consumers, many consumers have begun to get their financial needs met by alternative financial services providers like check cashers and payday lenders. Although these businesses are labeled as predatory and sleazy, their customers find that they offer three things banks no longer provide: less expensive products and services, greater transparency, and better service. At a time when 57 percent of Americans are struggling financially, and trust in banks is at an all-time low, it\u2019s imperative that we understand how we got here, and what we can do to make financial health a reality for all Americans.<\/p>\n

    Biography<\/h2>\n

    Lisa J. Servon is Professor and former dean at the Milano School of International Affairs, Management, and Urban Policy at The New School. She is currently a Scholar at the Russell Sage Foundation. Professor Servon holds a PhD in Urban Planning from the University of California, Berkeley, an MA in the History of Art from the University of Pennsylvania and a BA from Bryn Mawr College. She teaches and conducts research in the areas of urban poverty, community development, economic development, and issues of gender and race. Her current research focuses on the alternative financial services industry. Her book, The Unbanking of America: How the New Middle Class Survives<\/i>, will be published by Houghton Mifflin Harcourt in 2017. She spent 2004-2005 as Senior Research Fellow at the New America Foundation in Washington, DC. Servon is the author or editor of numerous journal articles and four books: Bridging the Digital Divide: Technology, Community, and Public Policy<\/i> (Blackwell 2002), Bootstrap Capital: Microenterprises and the American Poor<\/i> (Brookings 1999), Gender and Planning: A Reader<\/i> (with Susan Fainstein, Rutgers University Press 2005), and Otra Vida es Posible: Practicas Economicas Alternativas Durante la Crisis<\/i> (with Manuel Castells, Joana Conill, Amalia Cardenas and Sviatlana Hlebik, UOC Press 2012). <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tSecurity Games: Key Algorithmic Principles, Deployed Applications and Research Challenges \u2013 Milind Tambe\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Milind Tambe<\/b> (opens in new tab)<\/span><\/a>, USC <\/b>| Wednesday, March 16, 2016 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Security is a critical concern around the world, whether it is the challenge of protecting ports, airports and other critical infrastructure, protecting endangered wildlife, forests and fisheries, suppressing urban crime or security in cyberspace. Unfortunately, limited security resources prevent full security coverage at all times; instead, we must optimize the use of limited security resources. To that end, our \u201csecurity games\u201d framework \u2014 based on computational game theory, while also incorporating elements of human behavior modeling, AI planning under uncertainty and machine learning \u2014 has led to building and deployment of decision aids for security agencies in the US and around the world. These decision aids are in use by agencies such as the US Coast Guard, the Federal Air Marshals Service and by various police agencies at university campuses, airports and metro trains. Moreover, recent work on \u201cgreen security games\u201d has led our decision aids to begin assisting NGOs in protection of wildlife; and \u201copportunistic crime security games\u201d have focused on suppressing urban crime. I will discuss our use-inspired research in security games that is leading to new research challenges, including algorithms for scaling up security games as well as for handling significant adversarial uncertainty and learning models of human adversary behaviors. Joint work with a number of current and former PhD students, postdocs all listed here (opens in new tab)<\/span><\/a>.<\/p>\n

    Biography<\/h2>\n

    Milind Tambe is the Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC). He is a fellow of AAAI and ACM, as well as recipient of the ACM\/SIGART Autonomous Agents Research Award, Christopher Columbus Fellowship Foundation Homeland security award, INFORMS Wagner prize for excellence in Operations Research practice, Rist Prize of the Military Operations Research Society, IBM Faculty Award, Okawa foundation faculty research award, RoboCup scientific challenge award, and other local awards such as the Orange County Engineering Council Outstanding Project Achievement Award, USC Associates award for creativity in research and USC Viterbi use-inspired research award. Prof. Tambe has contributed several foundational papers in AI in areas such as multiagent teamwork, distributed constraint optimization (DCOP) and security games. For this research, he has received the influential paper award and a dozen best paper awards at conferences such as AAMAS, IJCAI, IAAI and IVA. In addition, Prof. Tambe\u2019s pioneering real-world deployments of \u201csecurity games\u201d have led him and his team to receive the US Coast Guard Meritorious Team Commendation from the Commandant, US Coast Guard First District\u2019s Operational Excellence Award, Certificate of Appreciation from the US Federal Air Marshals Service and special commendation given by LA Airport police from the city of Los Angeles. For his teaching and service, Prof. Tambe has received the USC Steven B. Sample Teaching and Mentoring award and the ACM recognition of service award. He has also co-founded a company based on his research, ARMORWAY, where he serves as the director of research. Prof. Tambe received his Ph.D. from the School of Computer Science at Carnegie Mellon University. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tParadoxes of Openness and Distinction in the Sharing Economy \u2013 Juliet Schor\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Juliet Schor<\/b> (opens in new tab)<\/span><\/a>, Boston College <\/b>| Wednesday, December 16, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Since the 1980s, Pierre Bourdieu\u2019s influence in sociology has increased markedly, including on the study of consumption and economic life (Sallaz and Zavisca 2007). Bourdieu\u2019s formulation of multiple types of capital (economic, cultural and social) and their role in producing and reproducing durable inequality has been highly productive in a variety of contexts. However, while scholars have examined practices of distinction, the structure of particular fields, and the role of specific capitals in social reproduction, there has been less attention to economic exchanges at a micro, interactional level (King 2000). In this paper, we use a Bourdieusian approach to study new kinds of exchanges in the \u201csharing economy\u201d and the ways in which distinction and inequality operate within them. To do this, we extend Bourdieu by bringing in conceptual tools from relational economic sociology. This literature, pioneered by Viviana Zelizer (2010, 2005b, 2012), emphasizes the importance of meaning, the role of culture in structuring economic activity, and the idea that economic exchanges require ongoing interpersonal negotiations. We use relational analysis to study how people deploy, convert, and use their capital. In particular, we show how cultural capital is used to establish superior position in the context of various types of exchanges. Thus, our contribution is an investigation into how Bourdieusian inequality is reproduced via interpersonal relations in the context of exchange.<\/p>\n

    Biography<\/h2>\n

    Juliet Schor is Professor of Sociology at Boston College. She is also a member of the MacArthur Foundation Connected Learning Research Network. Schor\u2019s research focuses on issues of time use, consumption and environmental sustainability. A graduate of Wesleyan University, Schor received her Ph.D. in economics at the University of Massachusetts. Before joining Boston College, she taught at Harvard University for 17 years, in the Department of Economics and the Committee on Degrees in Women\u2019s Studies. In 2014 Schor received the American Sociological Association\u2019s award for Public Understanding of Sociology. She also served as the Matina S. Horner Distinguished Visiting Professor at the Radcliffe Institute at Harvard University. Schor\u2019s most recent books are Sustainable Lifestyles and the Quest for Plenitude: Case Studies of the New Economy (Yale University Press, 2014), which she co-edited with Craig Thompson, and True Wealth: How and Why Millions of Americans are Creating a Time-Rich, Ecologically Light, Small-Scale, High-Satisfaction Economy (The Penguin Press, 2011; previously published as Plenitude). As part of her work with the MacArthur Foundation, Schor is currently researching the \u201cconnected economy\u201d via a series of case studies of sharing platforms and their participants. She is also studying the relation between working hours, carbon emissions and economic growth. Schor\u2019s previous books include the national best-seller The Overworked American: The Unexpected Decline of Leisure (Basic Books, 1992) and The Overspent American: Why We Want What We Don\u2019t Need (Basic Books, 1998). She appears frequently on national and international media, and profiles of her and her work have appeared in scores of magazines and newspapers, including The New York Times, Wall Street Journal, Newsweek, and People magazine. She has appeared on 60 Minutes, the Today Show, Good Morning America, The Early Show on CBS, numerous stories on network news, as well as many other television and radio news programs. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tThe Strange Logic of Galton-Watson Trees \u2013 Joel Spencer\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Joel Spencer<\/b> (opens in new tab)<\/span><\/a>, NYU<\/b> | Wednesday, December 2, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    The Galton-Watson tree is a basic demographic model. The classic equation for a Galton-Watson tree being infinite has two solutions, only one of which is \u201ccorrect.\u201d What about other properties? (Example: some node has precisely two children.) We show that when the property is what is called first order, there is a unique solution to the corresponding equation. We consider \u201ctree automata\u201d and the situation for monadic second order properties.<\/p>\n
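
    As background for readers new to the model (a standard fact stated here from general knowledge, not taken from the talk), the \u201cclassic equation\u201d above is the fixed-point equation for the extinction probability, written below in conventional notation:<\/p>\n

    <pre>
% Standard Galton-Watson background (not from the talk).
% Offspring probability generating function and extinction probability q:
f(s) = \sum_{k \ge 0} p_k s^k, \qquad q = f(q), \qquad \Pr[\text{tree is infinite}] = 1 - q.
% When the mean offspring number exceeds 1, the equation f(s) = s has two roots
% in [0,1]; the smaller root is the true extinction probability -- the sense in
% which only one solution is "correct".
    <\/pre>\n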

    Biography<\/h2>\n

    Joel Spencer is a Professor of Mathematics and Computer Science at the Courant Institute, New York University. His work is at the fecund intersection of Probability, Discrete Math and Logic, with a strong asymptotic flavor. He is a disciple of Paul Erd\u0151s. A new edition of his book (with Noga Alon) The Probabilistic Method will be published in December. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tThe Rise of the Sharing Economy: Estimating the Impact of Airbnb on the Hotel Industry \u2013 Giorgos Zervas\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Giorgos Zervas<\/b> (opens in new tab)<\/span><\/a>, BU<\/b> | Wednesday, November 18, 2015<\/p>\n

    Description<\/h2>\n

    A number of decentralized peer-to-peer markets, now colloquially known as the sharing economy, have emerged as alternative suppliers of goods and services traditionally provided by long-established industries. A central question surrounding the sharing economy regards its long-term impact: will peer-to-peer platforms materialize as viable mainstream alternatives to traditional providers, or will they languish as niche markets? In this paper, we study Airbnb, a sharing economy pioneer offering short-term accommodation. Combining data from Airbnb and the Texas hotel industry, we estimate the impact of Airbnb\u2019s entry into the Texas market on hotel room revenue, and study the market response of hotels. To identify Airbnb\u2019s causal impact on hotel room revenue, we use a difference-in-differences empirical strategy that exploits the significant spatiotemporal variation in the patterns of Airbnb adoption across city-level markets. We estimate that in Austin, where Airbnb supply is highest, the impact on hotel revenue is roughly 8-10%. We find that Airbnb\u2019s impact is non-uniformly distributed, with lower-priced hotels, and hotels not catering to business travel being the most affected segments. Finally, we find that affected hotels have responded by reducing prices, an impact that benefits all consumers, not just participants in the sharing economy. Our work provides empirical evidence that the sharing economy is making inroads by successfully competing with, and acquiring market share from, incumbent firms.<\/p>\n
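
    For readers unfamiliar with the empirical strategy, a difference-in-differences regression of this general shape can be sketched as follows; the data, column names, and specification below are hypothetical illustrations, not the paper\u2019s actual model.<\/p>\n

    <pre>
# Minimal difference-in-differences sketch (illustrative only; synthetic data,
# hypothetical column names, not the paper's specification).
import pandas as pd
import statsmodels.formula.api as smf

cities = ["Austin", "Dallas", "Houston", "San Antonio"]
df = pd.DataFrame({
    "city": cities * 3,
    "month": sorted([1, 2, 3] * 4),
    "log_revenue":   [5.1, 4.9, 4.8, 4.8, 5.0, 4.9, 4.8, 4.9, 4.8, 4.9, 4.8, 4.8],
    "airbnb_supply": [0.2, 0.0, 0.0, 0.0, 0.5, 0.1, 0.0, 0.0, 0.9, 0.1, 0.1, 0.0],
})

# Exploit spatiotemporal variation in Airbnb supply while absorbing
# city and month fixed effects.
fit = smf.ols("log_revenue ~ airbnb_supply + C(city) + C(month)", data=df).fit()
print(fit.params["airbnb_supply"])   # estimated effect of Airbnb supply on log revenue
    <\/pre>\n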

    Biography<\/h2>\n

    Georgios Zervas is an assistant professor of Marketing at Questrom School of Business at Boston University. Before joining BU in 2013 he was a Simons postdoctoral fellow at Yale, and an affiliate at the Center for Research on Computation and Society at Harvard. He received his PhD in Computer Science in 2011 from Boston University. He is broadly interested in understanding the strategic interactions of firms and consumers participating in internet markets using large-scale data collection and econometric analysis. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tComics and Stuff: An Introduction \u2013 Henry Jenkins\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Henry Jenkins<\/b> (opens in new tab)<\/span><\/a>, USC<\/b> | Wednesday, November 11, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    The status and nature of comics are in transition, as comics move from a disposable medium to one which is perceived as having enduring value. The emergence of the so-called \u201cgraphic novel\u201d represents a shift in how comics are published, in terms of what kind of cultural status they command, in terms of who reads and writes them, in terms of how people access them, and in terms of what kinds of stories they tell. Comics artists and readers have historically been collectors who sorted through this \u201ctrash\u201d medium to decide what should be kept and discarded. And today\u2019s graphic novels often tell \u201ccollecting stories,\u201d that is, stories by, for and about collectors, using their protagonists\u2019 relationships with material objects as a means of sorting through their own relationship to this evolving medium. This talk will draw insights from contemporary work in cultural anthropology, literary criticism, and art history that speaks about \u201cstuff,\u201d \u201cobjects,\u201d and \u201cthings,\u201d to think about the ways contemporary comics represent our relationship to the material world and, through this, reflect on our relationship to issues of memory, nostalgia, and history.<\/p>\n

    Biography<\/h2>\n

    Henry Jenkins is the Provost\u2019s Professor of Communication, Journalism, Cinematic Art, and Education at the University of Southern California. He is the author or editor of 17 books on various aspects of media change and popular culture, including Textual Poachers: Television Fans and Participatory Culture, Convergence Culture: Where Old and New Media Collide, Spreadable Media: Creating Meaning and Value in a Networked Society, Participatory Culture in a Networked Era, and By Any Media Necessary: The New Activism of American Youth. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tRecovering usable hidden structure using exploratory data analyses on genomic data \u2013 Barbara Engelhardt\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Barbara Engelhardt<\/b> (opens in new tab)<\/span><\/a>, Princeton<\/b> | Wednesday, November 4, 2015<\/p>\n

    Description<\/h2>\n

    Methods for exploratory data analysis have been the recent focus of much attention in \u2018big data\u2019 applications because of their ability to quickly allow the user to explore structure in the underlying data in a controlled and interpretable way. In genomics, latent factor models are commonly used to identify population substructure, identify gene clusters, and control noise in large data sets. In this talk I will describe a series of statistical models for exploratory data analysis to illustrate the structure that they are able to identify in large genomic data sets. I will consider several downstream uses for the recovered latent structure: understanding technical noise in the data, developing undirected networks from the recovered structure, and using this latent structure to study genomic differences among people.<\/p>\n
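
    As a toy illustration of the kind of latent factor analysis mentioned above (a generic sketch on synthetic data; it does not reproduce the specific statistical models from the talk):<\/p>\n

    <pre>
# Generic latent-factor sketch on synthetic data (illustrative only; not the
# specific models from the talk).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
expression = rng.normal(size=(200, 50))     # 200 samples x 50 genes (synthetic)

fa = FactorAnalysis(n_components=5, random_state=0)
scores = fa.fit_transform(expression)       # per-sample latent factor scores
loadings = fa.components_                   # factor-by-gene loadings
print(scores.shape, loadings.shape)         # (200, 5) and (5, 50)
    <\/pre>\n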

    Biography<\/h2>\n

    Barbara Engelhardt is an assistant professor in the Computer Science Department and the Center for Statistics and Machine Learning at Princeton University. Prior to that, she was at Duke University as an assistant professor in Biostatistics and Bioinformatics and Statistical Sciences. She graduated from Stanford University and received her Ph.D. from the University of California, Berkeley, advised by Professor Michael Jordan. She did postdoctoral research at the University of Chicago, working with Professor Matthew Stephens. Interspersed among her academic experiences, she spent two years working at the Jet Propulsion Laboratory, a summer at Google Research, and a year at 23andMe, a personal genomics company. Professor Engelhardt received an NSF Graduate Research Fellowship, the Google Anita Borg Memorial Scholarship, and the Walter M. Fitch Prize from the Society for Molecular Biology and Evolution. She also received the NIH NHGRI K99\/R00 Pathway to Independence Award. Professor Engelhardt is currently a PI on the Genotype-Tissue Expression (GTEx) Consortium. Her research interests involve statistical models and methods for analysis of high-dimensional data, with a goal of understanding the underlying biological mechanisms of complex phenotypes and human diseases. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tImproving Urban Public Education: Lessons from Charter Schools \u2013 Parag Pathak\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Parag Pathak<\/b> (opens in new tab)<\/span><\/a>, MIT<\/b> | Wednesday, October 28, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Charter schools represent one of the fastest growing, yet most controversial, innovations in education reform. In this talk, I will review measures of urban school performance from a series of papers by MIT\u2019s School Effectiveness and Inequality Initiative using data from Boston, New York City, Denver, and New Orleans. In addition to discussing the broader debates on sources of achievement gaps, I will also briefly touch upon some new methodological issues emerging from this work.<\/p>\n

    Biography<\/h2>\n

    Parag A. Pathak is a Professor of Economics at MIT, founding co-director of the NBER Working Group on Market Design, and founder of MIT\u2019s School Effectiveness and Inequality Initiative (SEII), a laboratory focused on education, human capital, and the income distribution. His work on market design and education was recognized with a Presidential Early Career Award for Scientists and Engineers, an Alfred P. Sloan Fellowship, the Shapley Lectureship, and the 2016 Social Choice and Welfare Prize. More than a million students have been assigned to school in choice systems he has helped to design in Boston, Chicago, Denver, New Orleans, New York, and Washington DC. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tThe Contextual Bandits Problem: A New, Fast, and Simple Algorithm \u2013 Robert Schapire\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Robert Schapire<\/b> (opens in new tab)<\/span><\/a>, MSR-NYC <\/b>| <\/b>Wednesday, October 14, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    We study the general problem of how to learn through experience to make intelligent decisions. In this setting, called the contextual bandits problem, the learner must repeatedly decide which action to take in response to an observed context, and is then permitted to observe the received reward, but only for the chosen action. The goal is to learn through experience to behave nearly as well as the best policy (or decision rule) in some possibly very large and rich space of candidate policies. Previous approaches to this problem were all highly inefficient and often extremely complicated. In this work, we present a new, fast, and simple algorithm that learns to behave as well as the best policy at a rate that is (almost) statistically optimal. Our approach assumes access to a kind of oracle for classification learning problems which can be used to select policies; in practice, most off-the-shelf classification algorithms could be used for this purpose. Our algorithm makes very modest use of the oracle, which it calls far less than once per round, on average, a huge improvement over previous methods. These properties suggest this may be the most practical contextual bandits algorithm among all existing approaches that are provably effective for general policy classes. This is joint work with Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford and Lihong Li.<\/p>\n
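
    To make the setting concrete, here is a deliberately simple epsilon-greedy sketch in which an off-the-shelf classifier plays the role of the policy oracle. It illustrates the oracle-based idea only and is not the algorithm from the talk, which achieves near-optimal regret while calling the oracle far less often; the environment and all parameters below are invented.<\/p>\n

    <pre>
# Illustrative epsilon-greedy contextual bandit with a classifier as "oracle".
# NOT the algorithm from the talk; a minimal sketch of the oracle-based idea.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_actions, dim, epsilon = 2, 5, 0.1
oracle, fitted = LogisticRegression(max_iter=1000), False
X_log, a_log = [], []

for t in range(2000):
    x = rng.normal(size=dim)                      # observe a context
    if not fitted or rng.random() < epsilon:      # explore uniformly at random
        a = int(rng.integers(n_actions))
    else:                                         # exploit the oracle's current policy
        a = int(oracle.predict(x.reshape(1, -1))[0])
    best = int(x[0] > 0)                          # simulated environment (unknown to learner)
    reward = int(rng.random() < (0.8 if a == best else 0.2))   # reward seen only for chosen action
    if reward:                                    # keep rewarded (context, action) pairs as labels
        X_log.append(x)
        a_log.append(a)
    if t % 200 == 199 and len(set(a_log)) > 1:    # occasional call to the classification oracle
        oracle.fit(np.array(X_log), np.array(a_log))
        fitted = True
    <\/pre>\n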

    Biography<\/h2>\n

    Robert Schapire is a Principal Researcher at Microsoft Research in New York City. He received his PhD from MIT in 1991. After a short post-doc at Harvard, he joined the technical staff at AT&T Bell Laboratories (later, AT&T Labs) in 1991. In 2002, he became a Professor of Computer Science at Princeton University, where he was later named the David M. Siegel \u201983 Professor in Computer Science. He joined Microsoft Research in 2014. His awards include the 1991 ACM Doctoral Dissertation Award, the 2003 G\u00f6del Prize, and the 2004 Kanellakis Theory and Practice Award (both of the last two with Yoav Freund). He is a fellow of the AAAI, and a member of the National Academy of Engineering. His main research interest is in theoretical and applied machine learning, with particular focus on boosting, online learning, game theory, and maximum entropy. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tAn algorithm for precision medicine \u2013 Matt Might\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Matt Might<\/b> (opens in new tab)<\/span><\/a>, Harvard Medical School<\/b> | <\/b>Wednesday, September 30, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    President Obama recently launched the Precision Medicine Initiative, a confluence of efforts in data science, bioinformatics, systems biology and genomics. Precision medicine\u2019s promise of \u201cthe right medicine to the right patient at the right time\u201d is predicated on the assumption that a patient\u2019s health data may be mapped directly to the \u201cright medicine.\u201d It is reasonable to assume that such a mapping exists (in theory), but it is not yet clear how complex the implementation of that mapping will become. With the claim that genomic data will be a key driver in precision medicine, rare genetic disorders offer a window into the genome-guided aspects of precision medicine. This talk provides a cautionary yet optimistic portrait of what full-scale precision medicine will entail, illustrated by the speaker\u2019s first-hand experience with the aftermath of the discovery that his son was the first known patient of a novel and ultra-rare genetic disorder \u2014 NGLY1 deficiency.<\/p>\n

    Biography<\/h2>\n

    Matt Might is an associate professor of computer science at the University of Utah and a visiting associate professor in computer science at the Harvard Medical School. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tLocal views and global conclusions \u2013 Nati Linial\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Nati Linial<\/b> (opens in new tab)<\/span><\/a>, Hebrew University of Jerusalem<\/b> | Thursday, September 17, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    We start by describing a challenge in bioinformatics to illustrate a universal phenomenon: The Protein Interaction graph G of an organism has one vertex for each of its proteins and an edge for each pair of interacting proteins. Several competing theories attempt to describe how such graphs emerge in evolution, and we wish to tell which theory provides a better explanation. A major difficulty in resolving such problems is that G is huge, so it is unrealistic to calculate most of its nontrivial graph parameters. But even a huge graph G can be efficiently sampled. Given a small integer k (say k=10), the k-profile of G is a distribution on k-vertex graphs. It is derived by randomly sampling k vertices in G and observing the subgraph that they induce. A theory largely developed in MSR (\u201cTheory of graph limits\u201d \u2013 Lovasz, Szegedy, Chayes, Borgs, Cohn, Friedman\u2026) offers a clue. It says essentially that to decide whether a series of large graphs is derived from a given statistical model it is enough to check that the graphs\u2019 profiles behave as they should. I will give you some sense of the theory of graph limits and then move to discuss profiles. The two main questions are: (i) Which profiles are possible? (ii) What global properties of G can you derive, based on its profiles? <\/b> <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tPersonalized Health with Gaussian Processes \u2013 Neil Lawrence\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Neil Lawrence<\/b> (opens in new tab)<\/span><\/a>, University of Sheffield <\/b>| Wednesday, August 19, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Modern data connectivity gives us different views of the patient, which need to be unified for truly personalized health care. I\u2019ll give a personal perspective on the type of methodological and social challenges we expect to arise in this domain and motivate Gaussian process models as one approach to dealing with the explosion of data.<\/p>\n
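
    For readers who have not seen these models, a minimal Gaussian process regression sketch follows; the data are synthetic and the kernel choice is an arbitrary illustration, not a recommendation from the talk.<\/p>\n

    <pre>
# Minimal Gaussian process regression sketch on synthetic data (illustrative
# only; the kernel and data are invented, not the models from the talk).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, size=30)).reshape(-1, 1)   # observation times
y = np.sin(t).ravel() + 0.1 * rng.normal(size=30)         # noisy measurements

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(t, y)
t_new = np.linspace(0, 10, 100).reshape(-1, 1)
mean, std = gp.predict(t_new, return_std=True)            # predictions with uncertainty
    <\/pre>\n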

    Biography<\/h2>\n

    Neil Lawrence received his bachelor\u2019s degree in Mechanical Engineering from the University of Southampton in 1994. Following a period as a field engineer on oil rigs in the North Sea, he returned to academia to complete his PhD in 2000 at the Computer Lab in Cambridge University. He spent a year at Microsoft Research in Cambridge before leaving to take up a Lectureship at the University of Sheffield, where he was subsequently appointed Senior Lecturer in 2005. In January 2007 he took up a post as a Senior Research Fellow at the School of Computer Science in the University of Manchester, where he worked in the Machine Learning and Optimisation research group. In August 2010 he returned to Sheffield to take up a collaborative Chair in Neuroscience and Computer Science. Neil\u2019s main research interest is machine learning through probabilistic models. He focuses on both the algorithmic side of these models and their application. He has a particular focus on applications in personalized health and computational biology, but happily dabbles in other areas such as speech, vision and graphics. Neil was Associate Editor in Chief for IEEE Transactions on Pattern Analysis and Machine Intelligence (from 2011-2013) and is an Action Editor for the Journal of Machine Learning Research. He was the founding editor of the JMLR Workshop and Conference Proceedings (2006) and is currently series editor. He was an area chair for the NIPS conference in 2005, 2006, 2012 and 2013, Workshops Chair in 2010 and Tutorials Chair in 2013. He was General Chair of AISTATS in 2010 and AISTATS Programme Chair in 2012. He was Program Chair of NIPS in 2014 and is General Chair for 2015. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tCan We Agree on Science? Measuring the Ideological Alignment of Science with Book Co-purchase Data \u2013 James Evans\t\t\t<\/h4>\n
    \n

    \n<\/p>

    James Evans<\/b> (opens in new tab)<\/span><\/a>, University of Chicago <\/b>| Wednesday, August 12, 2015<\/p>\n

    Description<\/h2>\n

    Does science constitute a \u201cpublic sphere\u201d for reasoned debate in the United States? Attacks on science in the media and the liberal credentials of most scientists could suggest no, but recent surveys find the public does not regard science as liberal and overwhelmingly acknowledges scientific contributions to society. We used millions of Alpha XR recommendations based on co-purchases between political and scientific books as a behavioral indicator for whether science bridges or deepens political divides. Findings reveal that books from the social sciences and hot-button fields (e.g., climatology) are most politically relevant, but books from general scientific disciplines (e.g., physics, astronomy, and zoology) are more co-purchased with liberal books, while those in practical, commercially relevant fields (e.g., medicine, criminology, and geology) are more co-purchased with conservative books. Moreover, liberal books tend to be co-purchased with a much broader sample of science books, indicating that conservatives have more selective interest in science. We conclude that the political left and right share an interest in science in general, but not science in particular.<\/p>\n

    Biography<\/h2>\n

    James Evans is Director of Knowledge Lab (http:\/\/knowledgelab.org), senior fellow at the Computation Institute, associate professor of Sociology and the College, and member of the Committee on Conceptual and Historical Studies of Science at the University of Chicago. He is founding director of the Masters program in Computational Social Science (starting 2016) at the University of Chicago. His research focuses on the collective system of thinking and knowing, ranging from the distribution of attention and intuition, the origin of ideas and shared habits of reasoning to processes of agreement (and dispute), accumulation of certainty (and doubt), and the texture\u2013novelty, ambiguity, topology\u2013of human understanding. Evans is especially interested in innovation\u2013how new ideas and practices emerge\u2013and the role that social and technical institutions (e.g., the Internet, markets, collaborations) play in collective cognition and discovery. Much of Evans\u2019s work has focused on areas of modern science and technology, but he is also interested in other domains of knowledge\u2013news, law, religion, gossip, hunches and historical modes of thinking and knowing. Evans supports the creation of novel observatories for human understanding and action through crowd sourcing, information extraction from text and images, and the use of distributed sensors (e.g., RFID tags, cell phones). He uses machine learning, generative modeling, social and semantic network representations to explore knowledge processes, scale up interpretive and field-methods, and create alternatives to current discovery regimes. His research is funded by the National Science Foundation, the National Institutes of Health, the Templeton Foundation and other sources, and has been published in Science, American Journal of Sociology, American Sociological Review, Social Studies of Science, Administrative Science Quarterly, PLoS Computational Biology and other journals. His work has been featured in Nature, the Economist, Atlantic Monthly, Wired, NPR, BBC, El Pa\u00eds, CNN and many other outlets. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tLearning and Efficiency in Games with Dynamic Population \u2013 Eva Tardos\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Eva Tardos<\/b> (opens in new tab)<\/span><\/a>, Cornell <\/b>| Wednesday, July 29, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Selfish behavior can often lead to suboptimal outcomes for all participants. This is especially true in dynamically changing environments where the game or the set of participants can change at any time without even the players realizing it. Over the last decade we have developed a good understanding of how to quantify the impact of strategic user behavior on overall performance by studying equilibria of the games. In this talk we will consider the quality of outcomes in games when the population of players is dynamically changing, and where participants have to adapt to the dynamic environment. We show that in large classes of games (including congestion games), if players use a form of learning that helps them to adapt to the changing environment, this guarantees high social welfare, even under very frequent changes. A main technical tool for our analysis is a connection between differential privacy and high efficiency of learning outcomes in frequently changing repeated games. Joint work with Thodoris Lykouris and Vasilis Syrgkanis.<\/p>\n

    Biography<\/h2>\n

    \u00c9va Tardos is the Jacob Gould Schurman Professor of Computer Science at Cornell University, and was department chair 2006-2010. She received her BA and PhD from E\u00f6tv\u00f6s University in Budapest. She has been elected to the National Academy of Engineering, the National Academy of Sciences, the American Academy of Arts and Sciences, is an external member of the Hungarian Academy of Sciences, and is the recipient of a number of fellowships and awards including the Packard Fellowship, the G\u00f6del Prize, Dantzig Prize, Fulkerson Prize, and the IEEE Technical Achievement Award. She was editor-in-chief of the SIAM Journal on Computing 2004-2009, is currently editor of several other journals including the Journal of the ACM and Combinatorica, and has served as program committee member and chair for many conferences. Tardos\u2019s research interest is algorithms and algorithmic game theory, the subarea of theoretical computer science concerned with designing systems and algorithms for selfish users. Her research focuses on algorithms and games on networks. She is best known for her work on network-flow algorithms, approximation algorithms, and quantifying the efficiency of selfish routing. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tPrivacy Protection, Personalized Medicine and Genetic Testing \u2013 Catherine Tucker\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Catherine Tucker<\/b> (opens in new tab)<\/span><\/a>, MIT<\/b> | <\/b>Wednesday, July 15, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Personalized medicine \u2013 where the treatment is as individual as the patient \u2013 has been discussed as the future of medicine. But the use of a person\u2019s genetic code to personalize treatment raises new and difficult privacy concerns. Professor Tucker will discuss current approaches to regulating genetic privacy at the state level and the degree of success such approaches have had at promoting the spread of personalized medicine and testing.<\/p>\n

    Biography<\/h2>\n

    Catherine Tucker is the Mark Hyman Jr. Career Development Professor and Associate Professor of Management Science at MIT Sloan. Her research interests lie in how technology allows firms to use digital data to improve their operations and marketing and in the challenges this poses for regulations designed to promote innovation. She has particular expertise in online advertising, digital health, social media, and electronic privacy. Generally, most of her research lies at the interface between Marketing, Economics and Law. She has received an NSF CAREER award for her work on digital privacy, the Erin Anderson Award for Emerging Marketing Scholar and Mentor, the Paul E. Green Award for contributions to the practice of Marketing Research and a Garfield Award for her work on electronic medical records. She has testified before Congress on privacy regulation, as well as presenting her research on privacy to the FCC, FTC and OECD. In addition to her work on privacy and digital data, she has also written extensively on how the online and technology environment changes and challenges intellectual property regimes in the sphere of patent assertion entities, trademarks used as search terms, and copyright issues for online aggregators. Her more practitioner-oriented research in marketing tackles the challenge of how to design online advertising campaigns which do not appear intrusive to the viewer, and have the potential to be spread virally. Dr. Tucker is Associate Editor at Management Science, Co-Editor at Quantitative Marketing and Economics and Co-Editor of the recent NBER volume on the Economics of Digitization. She is a Research Associate at the National Bureau of Economic Research. She teaches MIT Sloan\u2019s MBA Elective on \u2018Pricing\u2019 and the Executive MBA course \u2018Marketing Management for the Senior Executive\u2019. She also teaches in various specialized executive education programs on entrepreneurship, creating thriving platform ecosystems and innovation. She has received the Jamieson Prize for Excellence in Teaching as well as being voted \u2018Teacher of the Year\u2019 at MIT Sloan. She holds a PhD in economics from Stanford University, and a BA in Politics, Philosophy and Economics from Oxford University. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tUnbalanced Random Matching Markets: The Stark Effect of Competition \u2013 Itai Ashlagi\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Itai Ashlagi<\/b> (opens in new tab)<\/span><\/a>, MIT <\/b>| Wednesday, July 1, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Stability is used often as a criterion in organizing clearinghouses for two-sided matching markets, where agents on both sides of the market have preferences over potential matches. We study competition in matching markets with random heterogeneous preferences by considering markets with an unequal number of agents on either side. First, we show that even the slightest imbalance yields an essentially unique stable matching. Second, we give a tight description of stable outcomes, showing that matching markets are extremely competitive. Each agent on the short side of the market is matched to one of his top preferences and each agent on the long side does almost no better than being matched to a random partner. Our results suggest that any matching market is likely to have a small core, explaining why empirically small cores are ubiquitous and solving a longstanding puzzle.<\/p>\n

    Biography<\/h2>\n

    Itai Ashlagi is an Assistant Professor of Operations Management at Sloan, MIT. He graduated from the Technion and did his postdoc at Harvard Business School. Ashlagi is mainly interested in market design. He received the outstanding paper award at the ACM Conference on Electronic Commerce and the NSF CAREER award. He is a Franz Edelman laureate for his work on kidney exchange, which shaped policies of numerous kidney exchange programs. He will join MS&E at Stanford in the fall of 2015. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tSome limitations and possibilities toward data-driven optimization \u2013 Yaron Singer\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Yaron Singer<\/b> (opens in new tab)<\/span><\/a>, Harvard<\/b> | Wednesday, June 24, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    As we grow highly dependent on big data for making predictions, we translate these predictions into models that help us make informed decisions. But how do the guarantees we have on predictions translate to guarantees on decisions? In many cases, we learn models from sampled data and then aim to use these models to make decisions. In some cases, despite having access to large data sets, the current frameworks we have for learnability do not suffice to guarantee desirable outcomes. In other cases, the learning techniques we have introduce estimation errors which can result in poor outcomes and stark impossibility results. In this talk we will formalize some of these ideas using convex and combinatorial optimization.<\/p>\n

    Biography<\/h2>\n

    Yaron Singer is an Assistant Professor of Computer Science at Harvard University. He was previously a postdoctoral researcher at Google Research and obtained his PhD from UC Berkeley. He is the recipient of the NSF CAREER award, 2012 Best Student Paper Award at the ACM conference on Web Search and Data Mining, the 2010 Facebook Fellowship, the 2009 Microsoft Research Fellowship, and several awards for entrepreneurial work on social networks. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tLearning, mixing, and complexity: a free ride on the second law \u2013 James Lee\t\t\t<\/h4>\n
    \n

    \n<\/p>

    James Lee<\/b> (opens in new tab)<\/span><\/a>, University of Washington <\/b>| <\/b>Monday, June 1, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    The principle of maximum entropy states that, given some data, among all hypothetical probability distributions that agree with the data, the one of maximum entropy best represents the current state of knowledge. It might be natural to expect that this philosophy often yields a \u201csimple\u201d hypothesis since it tries to avoid making the hypothesis more informative than it deserves to be. Viewing entropy as an additional resource to be optimized is an extremely powerful idea with a wide range of applications (and a correspondingly large array of names: boosting, entropy-regularized gradient descent, multiplicative weights update, log-Sobolev inequalities, Gibbs measures, etc.). I will focus specifically on the role of entropy maximization in encouraging simplicity. This has a number of surprising applications in discrete mathematics and the theory of computation. We\u2019ll see three instantiations of this principle: in additive number theory, functional analysis, and complexity theory. For the last application, it will turn out that one needs to extend max-entropy to the setting of quantum information and von Neumann entropy. The philosophy and applications will be discussed at a high level suitable for a general scientific audience.<\/p>\n
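
    For concreteness, the principle can be written as a constrained optimization problem whose solution is a Gibbs (exponential-family) distribution; the formulation below is standard background rather than material from the talk.<\/p>\n

    <pre>
% Standard maximum-entropy formulation (background, not from the talk).
\max_{p} \; H(p) = -\sum_{x} p(x) \log p(x)
\quad \text{subject to} \quad \sum_{x} p(x) f_j(x) = c_j, \;\; \sum_{x} p(x) = 1.
% The maximizer has the Gibbs / exponential-family form
p(x) \propto \exp\Big( \sum_{j} \lambda_j f_j(x) \Big),
% with the multipliers \lambda_j chosen to satisfy the constraints.
    <\/pre>\n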

    Biography<\/h2>\n

    James R. Lee is an Associate Professor of Computer Science at the University of Washington. His research leverages tools from probability and analysis to attack fundamental problems in discrete mathematics, algorithms, and complexity theory. His work has been recognized by an NSF CAREER award, a Sloan Research Fellowship, and recently a best paper award at STOC 2015 (with P. Raghavendra and D. Steurer) for an application of entropy maximization to computational lower bounds. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tBetter Science Through Better Bayesian Computation \u2013 Ryan Adams\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Ryan Adams<\/b> (opens in new tab)<\/span><\/a>, Harvard<\/b> | Wednesday, May 20, 2015<\/p>\n

    Description<\/h2>\n

    As we grapple with the hype of \u201cbig data\u201d in computer science, it is important to remember that the data are not the central objects: we collect data to answer questions and inform decisions in science, engineering, policy, and beyond. In this talk, I will discuss my work in developing tools for large-scale data analysis, and the scientific collaborations in neuroscience, chemistry, and astronomy that motivate me and keep this work grounded. I will focus on two lines of research that I believe capture an important dichotomy in my work and in modern probabilistic modeling more generally: identifying the \u201cbest\u201d hypothesis versus incorporating hypothesis uncertainty. In the first case, I will discuss my recent work in Bayesian optimization, which has become the state-of-the-art technique for automatically tuning machine learning algorithms, finding use across academia and industry. In the second case, I will discuss scalable Markov chain Monte Carlo and the new technique of Firefly Monte Carlo, which is the first provably correct MCMC algorithm that can take advantage of subsets of data.<\/p>\n

    Biography<\/h2>\n

    Ryan Adams is an Assistant Professor of Computer Science at Harvard. He received his Ph.D. in Physics at Cambridge as a Gates Scholar. He was a CIFAR Junior Research Fellow at the University of Toronto before joining the faculty at Harvard. He has won paper awards at ICML, AISTATS, and UAI, and his Ph.D. thesis received Honorable Mention for the Savage Award for Theory and Methods from the International Society for Bayesian Analysis. He also received the DARPA Young Faculty Award and the Sloan Fellowship. Dr. Adams is the CEO of Whetlab, a machine learning startup, and co-hosts the popular Talking Machines podcast. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tAssessing the Creepy Factor: Shifting from regulatory ethics models to more proactive approaches to \u2018doing the right thing\u2019 in technology research \u2013 Annette Markham\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Annette Markham<\/b> (opens in new tab)<\/span><\/a>, Aarhus University <\/b>| <\/b>Wednesday, May 6, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    What constitutes ethical design of technologies, ethical use of data, and ethical research? How can we pay better attention to the ways in which some aspects of our research or outcomes of our designs might seem \u2018creepy\u2019? In this talk, I begin with the premise that \u201cdoing the right thing\u201d is an outcome of rhetorically powerful tangles of human and non-human elements, embedded in deep\u2014often invisible\u2014structures of software, politics, and habits. Every action by individuals\u2014whether designers, programmers, marketers, researchers, policy makers or consumers\u2014reinforces, resists, and reconfigures existing ethical boundaries for what is acceptable and just. Despite the development of nuanced approaches for ethics in digital and technology studies, the general language surrounding ethics has remained ensconced in that of regulations, requirements, and concepts, born from biomedical models that don\u2019t fit well with contemporary research environments and practices. In this talk, I suggest a framework of ethics in digital research that focuses less on \u2018ethics\u2019 and more on what might be potentially \u2018creepy\u2019 about what we\u2019re doing in our everyday research and design. This is combined with a future oriented \u2018what if\u2019 approach. Placing more responsibility on one\u2019s personal choices is not the most comfortable position, but as the world grows more technologically mediated and digitally saturated, it is particularly important to speculate about future possibilities and harms. I hope to conclude this talk by introducing and getting feedback on sample scenarios that could be used to help Microsoft Researchers include ethical considerations in both conceptual and practical research contexts.<\/p>\n

    Biography<\/h2>\n

    Annette Markham is Associate Professor of Information Studies at Aarhus University in Denmark and Affiliate Professor of Digital Ethics in the School of Communication at Loyola University in Chicago. She earned her PhD in organizational communication (Purdue University, 1998), with a strong emphasis in interpretive, qualitative, and ethnographic methods. Annette\u2019s early research focused on how identity, relationships, and cultural formations are constructed in and influenced by digitally saturated socio-technical contexts. Her pioneering sociological work related to digital identity is well represented in her book Life Online: Researching real experience in virtual space (Altamira 1998). Her more recent research focuses on innovative qualitative methodologies for studying networked sociality and ethics of social research and interaction design. Her work can be found in a range of international journals, handbooks, and edited collections, including the book Internet Inquiry (2009, co-edited with Nancy Baym). <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tThe Limits of Reputation in Platform Markets: An Empirical Analysis and Field Experiment \u2013 Steve Tadelis\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Steve Tadelis<\/b> (opens in new tab)<\/span><\/a>, Berkeley<\/b> | Wednesday, April 22, 2015<\/p>\n

    Description<\/h2>\n

    Reputation mechanisms used by platform markets suffer from two problems. First, buyers may draw conclusions about the quality of the platform from single transactions, causing a reputational externality. Second, reputation measures may be coarse or biased, preventing buyers from making proper inferences. We document these problems using eBay data and claim that platforms can benefit from identifying and promoting higher quality sellers. Using an unobservable measure of seller quality we demonstrate the benefits of our approach through a large-scale controlled experiment. Highlighting the importance of reputational externalities, we chart an agenda that aims to create more realistic models of platform markets.<\/p>\n

    Biography<\/h2>\n

    These days my research primarily revolves around e-commerce and the economics of the internet. During the 2011-2013 academic years I was on leave at eBay research labs, where I hired and led a team of research economists. Our work focused on the economics of e-commerce, with particular attention to creating better matches of buyers and sellers, reducing market frictions by increasing trust and safety in eBay\u2019s marketplace, understanding the underlying value of different advertising and marketing strategies, and exploring the market benefits of different pricing structures. Aside from the economics of e-commerce, my main fields of interest are the economics of incentives and organizations, industrial organization, and microeconomics. Some of my past research aspired to advance our understanding of the roles played by two central institutions\u2014firms and contractual agreements\u2014and how these institutions facilitate the creation of value. Within this broader framework, I explored firm reputation as a valuable, tradable asset; the effects of contract design and organizational form on firm behavior with applications to outsourcing and privatization; public and private sector procurement and award mechanisms; and the determinants of trust. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tMeasuring Rhetoric: Statistical Language Models in Social Science \u2013 Matt Taddy\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Matt Taddy<\/b> (opens in new tab)<\/span><\/a>, University of Chicago <\/b>| Wednesday, April 8, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Social scientists are embracing the idea of using \u2018text as data\u2019 as a way to quantify and evaluate social theories. I\u2019ll discuss a brief history of how this strategy has worked and evolved, and pitch some new approaches for combining social measurement with state-of-the-art natural language processing. We\u2019ll focus on the massive multinomial regression models that serve as a basis for text analysis and the distributed computing strategies that allow inference on truly Big Data. I\u2019ll then work through a number of examples of social science questions being asked and answered via statistical NLP, with data from online reviews on Yelp, the US congressional record, and communications between buyers and sellers on eBay.<\/p>\n

    Biography<\/h2>\n

    Matt Taddy is Associate Professor of Econometrics and Statistics at the University of Chicago Booth School of Business. His research is focused on statistical methodology and data mining, driven by applications in business and engineering. He developed and teaches the MBA \u2018Big Data\u2019 course at Chicago Booth. Taddy works on building robust solutions for large scale data analysis problems, at the interface of econometrics and machine learning. This involves dimension reduction techniques for massive datasets and development of models for inference on the output of these algorithms. He has collaborated both with small start-ups and with large research agencies, including NASA Ames and the Lawrence Livermore, Sandia, and Los Alamos National Laboratories, and is a scientist at eBay research labs. Taddy earned his PhD in Applied Math and Statistics in 2008 from the University of California, Santa Cruz, as well as a BA in Philosophy and Mathematics and an MSc in Mathematical Statistics from McGill University. He joined the Chicago Booth faculty in 2008. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tThe Eureka Myth: Creators, Innovators and Everyday Intellectual Property \u2013 Jessica Silbey\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Jessica Silbey<\/b> (opens in new tab)<\/span><\/a>, Suffolk<\/b> | <\/b>Wednesday, March 11, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Are innovation and creativity helped or hindered by our intellectual property laws? In the two hundred plus years since the Constitution enshrined protections for those who create and innovate, we\u2019re still debating the merits of IP laws and whether or not they actually work as intended. Artists, scientists, businesses, and the lawyers who serve them, as well as the Americans who benefit from their creations, all still wonder: what facilitates innovation and creativity in our digital age? And what role, if any, do our intellectual property laws play in the growth of innovation and creativity in the United States? Incentivizing the \u201cprogress of science and the useful arts\u201d has been the goal of intellectual property law since our constitutional beginnings. The Eureka Myth cuts through the current debates and goes straight to the source: the artists and innovators themselves. Silbey makes sense of the intersections between intellectual property law and creative and innovative activity by centering on the stories told by artists, scientists, their employers, lawyers and managers, describing how and why they create and innovate and whether or how IP law plays a role in their activities. Their employers, business partners, managers, and lawyers also describe their role in facilitating the creative and innovative work. The connections and distinctions Silbey draws between the stories and the statutes serve to inform present and future innovative and creative communities. Breaking new ground in its examination of the U.S. economy and cultural identity, The Eureka Myth draws out new and surprising conclusions about the sometimes misinterpreted relationships between creativity and intellectual property protections.<\/p>\n

    Biography<\/h2>\n

    Professor Jessica Silbey teaches at Suffolk University Law School in Boston in the areas of intellectual property and constitutional law. Professor Silbey received her B.A. from Stanford University and her J.D. and Ph.D. (Comparative Literature) from the University of Michigan. After clerking for Judge Robert E. Keeton on the United States District Court for the District of Massachusetts and Judge Levin Campbell on the United States Court of Appeals for the First Circuit, she practiced law in the disputes department of the Boston office of Foley Hoag LLP focusing on intellectual property, bankruptcy and reproductive rights. Professor Silbey\u2019s scholarly expertise is in the cultural analysis of law, exploring the law beyond its doctrine to the contexts and processes in which legal relations develop and become significant for everyday actors. In the field of intellectual property, Professor Silbey\u2019s scholarship focuses on the humanistic and sociological dimensions of the legal regulation of creative and innovative work. Some of her IP publications include The Eureka Myth: Creators, Innovators and Everyday Intellectual Property (Stanford University Press 2014); Patent Variation: Discerning Diversity Among Patent Functions, 45 Loy. U. Chi. L. Rev. 441 (2013); Harvesting Intellectual Property, \u2018Inspired Beginnings and \u2018Work Makes Work\u2019: Two Stages in the Creative Process of Artists and Innovators, 86 Notre Dame L. R. 2091 (2011), Comparative Tales of Origins and Access: The Future of Intellectual Property Law, 61 Case Wes. Res. L. R. 195 (2011), and Mythical Beginnings of Intellectual Property, 15 Geo. Mason L. R. 319 (2008). Professor Silbey has also published widely in the field of law and film, exploring how film is used as a legal tool and how it becomes an object of legal analysis in light of its history as a cultural object and art form. Representative publications include Law and Justice on the Small Screen (Hart, 2012) (with Peter Robson); Evidence Verit\u00e9 and the Law of Film, 31 Cardozo L. R. 1257 (2010); Cross-Examining Film, 8 U. Md. J. Race, Religion & Gender & L. 101 (2009); and Judges as Film Critics: New Approaches to Filmic Evidence, 39 Mich. J. L. Reform 493 (2004). <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tAfter Math: Following Mathematics into the Digital \u2013 Stephanie Dick\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Stephanie Dick<\/b> (opens in new tab)<\/span><\/a>, Harvard <\/b>| Wednesday, March 4, 2015 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    The advent of modern digital computing in the mid-twentieth century precipitated many transformations in the practices of mathematical knowledge production. However, early computing practitioners throughout the United States subscribed to complicated and conflicting visions of just how much the computer could contribute to mathematics \u2013 each suggesting a different division of mathematical labor between humans and computers and a hierarchization of the tasks involved. Some imagined computers as mere plodding \u201cslaves\u201d who would take over tedious and mechanical elements of mathematical research. Others imagined them more generously as \u201cmentors\u201d or \u201ccollaborators\u201d that could offer novel insight and direction to human mathematicians. Still others believed that computers would eventually become autonomous agents of mathematical research. And computing communities did not simply imagine the potential of the computer differently; they also built those different visions right into computer programs that enabled new ways of doing mathematics with computers. With a focus on communities based in the United States in the second half of the twentieth century, this talk will explore different visions of the computer as a mathematical agent, the software that was crafted to animate those imaginings, and the communities and practices of mathematical knowledge-making that emerged in tandem.<\/p>\n

    Biography<\/h2>\n

    Stephanie Dick is a Junior Fellow with the Harvard Society of Fellows. She recently completed a PhD in the Department of History of Science at Harvard University. Her work explores the history of mathematics and computing in the postwar United States. She focuses on the history of mathematical software and its epistemological significance. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tPerspectives on Recombination \u2013 Elizabeth Pontikes\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Elizabeth Pontikes<\/b> (opens in new tab)<\/span><\/a>, University of Chicago <\/b>| Wednesday, February 25, 2015<\/p>\n

    Description<\/h2>\n

    Research in economics and sociology over the past century has pointed to recombination as the source for novel social and economic developments. This study suggests that the categorical structure a person uses to understand a domain is fundamental to this concept. This is studied in an investigation of venture capital financing of software organizations. Findings show that venture capitalists are more likely to invest in companies that engage in recombination based on market categories, but that traditional measures of recombination based on patent classes do not have predictive value. Results are strongest for private equity venture capitalists and weakest for corporate venture capitalists, suggesting that people who value novelty based on breaking down existing boundaries will favor recombination, while those who prefer progress that reinforces existing categories will avoid it.<\/p>\n

    Biography<\/h2>\n

    Elizabeth Pontikes is an Associate Professor of Organizations and Strategy at the University of Chicago Booth School of Business. Her research focuses on market classification, innovation, and knowledge development. In her research, Pontikes shows that in systems of market classification, categories vary in how constraining they are, and that category leniency affects how organizational members are evaluated, and when new categories emerge in a classification system. She has studied these ideas in the context of the software industry, the computer industry, and also has studied negative categorization in the context of the Red Scare in Hollywood. In addition, Pontikes is currently working on a project that investigates how rap artists define their identity in their lyrics, and how this is received by both popular and critical audiences. Pontikes has been published in a number of scholarly journals including Administrative Science Quarterly<\/i>, American Sociological Review<\/i>, Management Science<\/i>, and Sociological Science<\/i>. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tOptimal Design for Social Learning \u2013 Johannes Horner\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Johannes Horner<\/b> (opens in new tab)<\/span><\/a>, Yale<\/b> | Wednesday, December 3, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    We study the design of a recommender system for organizing social learning on a product. The optimal design trades off fully transparent social learning to improve incentives for early experimentation, by selectively over-recommending a product in the early phase of the product release. Under the optimal scheme, experimentation occurs faster than under full transparency but slower than under the first-best optimum, and the rate of experimentation increases over an initial phase and lasts until the posterior becomes sufficiently bad, at which point the recommendation stops along with experimentation on the product. Fully transparent recommendation may become optimal if the (socially-benevolent) designer does not observe the agents\u2019 costs or the agents choose the timing of receiving a recommendation.<\/p>\n

    Biography<\/h2>\n

    Johannes H\u00f6rner is Professor of Economics, Department of Economics, and Cowles Foundation for Research in Economics, Yale University. He received his Ph.D. in economics from the University of Pennsylvania in 2000, and previously held positions at the Kellogg School of Management, Northwestern University (2000\u20132008). His academic interests range from game theory to the theory of industrial organization. His research has focused on repeated games, dynamic games, and auctions. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tPhysics-inspired algorithms and phase transitions in community detection \u2013 Cristopher Moore\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Cristopher Moore<\/b> (opens in new tab)<\/span><\/a>, Santa Fe Institute<\/b> | Tuesday, November 18, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Detecting communities, and labeling nodes, is a ubiquitous problem in the study of networks. Recently, we developed scalable Belief Propagation algorithms that update probability distributions of node labels until they reach a fixed point. In addition to being of practical use, these algorithms can be studied analytically, revealing phase transitions in the ability of any algorithm to solve this problem. Specifically, there is a detectability transition<\/i> in the stochastic block model, below which no algorithm can label nodes better than chance. This transition was subsequently established rigorously by Mossel, Neeman, and Sly, and Massoulie. I\u2019ll explain this transition, and give an accessible introduction to Belief Propagation and the analogy with free energy and the cavity method of statistical physics. We\u2019ll see that the consensus of many good solutions is a better labeling than the \u201cbest\u201d solution \u2014 something that is true for many real-world optimization problems. While many algorithms overfit, and find \u201ccommunities\u201d even in random graphs where none exist, our method lets us focus on statistically-significant communities. In physical terms, we focus on the free energy rather than the ground state energy. I\u2019ll then turn to spectral methods. It\u2019s popular to classify nodes according to the first few eigenvectors of the adjacency matrix or the graph Laplacian. However, in the sparse case these operators get confused by localized eigenvectors, focusing on high-degree nodes or dangling trees rather than large-scale communities. As a result, they fail significantly above the detectability transition. I will describe a new spectral algorithm based on the non-backtracking matrix, which avoids these localized eigenvectors: it appears to be optimal in the sense that it succeeds all the way down to the transition. Making this rigorous will require us to prove an interesting conjecture in the theory of random matrices and random graphs. This is joint work with Aurelien Decelle, Florent Krzakala, Elchanan Mossel, Joe Neeman, Mark Newman, Allan Sly, Lenka Zdeborova, and Pan Zhang.<\/p>\n
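
    As a concrete reference point, the detectability transition mentioned above can be stated for the symmetric two-group stochastic block model; the threshold below is the standard statement from this literature, quoted from general knowledge rather than from the talk itself.<\/p>\n

    <pre>
% Symmetric two-group stochastic block model with average within-group degree
% c_in and between-group degree c_out (standard statement, not from the talk).
% Detection better than chance is possible, and belief propagation succeeds, when
(c_{\mathrm{in}} - c_{\mathrm{out}})^2 > 2\,(c_{\mathrm{in}} + c_{\mathrm{out}}),
% and below this threshold no algorithm can label nodes better than chance.
    <\/pre>\n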

    Biography<\/h2>\n

    Cristopher Moore is a Professor at the Santa Fe Institute. He received his B.A. in Physics, Mathematics, and Integrated Science from Northwestern University, and his Ph.D. in Physics from Cornell. In 2000, he joined the University of New Mexico faculty, with joint appointments in Computer Science, and Physics and Astronomy. In 2012, Moore left the University of New Mexico and became full-time resident faculty at the Santa Fe Institute. He has published over 120 papers at the boundary between physics and computer science, ranging from quantum computing, to phase transitions in NP-complete problems, to the theory of social networks and efficient algorithms for analyzing their structure. With Stephan Mertens, he is the author of The Nature of Computation, published by Oxford University Press. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tMapping single cells: A geometric approach \u2013 Dana Pe’er\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Dana Pe\u2019er<\/b> (opens in new tab)<\/span><\/a>, Columbia<\/b> | Wednesday, November 5, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

High dimensional single cell technologies are on the rise, rapidly increasing in accuracy and throughput. These offer computational biology both a challenge and an opportunity. One of the big challenges with this data-type is to understand regions of density in this multi-dimensional space, given millions of noisy measurements. Underlying many of our approaches is mapping this high-dimensional geometry into a nearest neighbor graph and characterizing single-cell behavior using this graph structure. We will discuss a number of approaches: (1) An algorithm that harnesses the nearest neighbor graph to order cells according to their developmental maturity and its use to identify novel progenitor B-cell sub-populations. (2) Using reweighted density estimation to characterize cellular signal processing in T-cell activation. (3) New clustering and dimensionality reduction approaches to map heterogeneity between cells, with an application to characterizing tumor heterogeneity in Acute Myeloid Leukemia.<\/p>\n
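As a minimal sketch of the nearest-neighbor-graph construction described above (not code from the talk; the matrix X and the choice k = 15 are illustrative assumptions), the following Python snippet builds a k-nearest-neighbor graph over single-cell measurements with scikit-learn:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# Illustrative data: 1,000 cells measured on 30 markers (rows = cells).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))

# Sparse adjacency matrix of the k-nearest-neighbor graph (k = 15, Euclidean metric).
# Downstream steps (trajectory ordering, clustering) would operate on this graph.
knn_graph = kneighbors_graph(X, n_neighbors=15, mode="distance", include_self=False)

print(knn_graph.shape, knn_graph.nnz)  # (1000, 1000), 15 * 1000 nonzero edges
```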

    Biography<\/h2>\n

    Dana Pe\u2019er is an associate professor in the Departments of Biological Sciences and Computer Science. Her lab endeavors to understand the organization, function, and evolution of molecular networks, particularly how variation in DNA sequence alters regulatory networks and leads to the vivid phenotypic diversity of life. Her team develops computational methods that integrate diverse high-throughput data to provide a holistic, systems-level view of molecular networks. She is particularly interested in exploring how systems biology can be used to personalize care for people with cancer. By developing models that can predict how individual tumors will respond to certain drugs and drug combinations, her goal is to develop ways to determine the best drug regime for each patient. Her interest is not only in understanding which molecular components go wrong in cancer cells, but also in using this information to improve cancer therapeutics. Dr. Pe\u2019er is the recipient of the 2014 Overton Prize, and has been recognized with the Burroughs Wellcome Fund Career Award, an NIH Directors New Innovator Award, an NSF CAREER Award, and a Stand Up To Cancer Innovative Research Grant. She was also named a Packard Fellow in Science and Engineering. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tBarack Obama and the politics of social media for national policy-making \u2013 James Katz\t\t\t<\/h4>\n
    \n

    \n<\/p>

    James Katz (opens in new tab)<\/span><\/a><\/b>, Boston University<\/b> | Wednesday, October 15, 2014<\/p>\n

    Description<\/h2>\n

    Social media help people do most everything, ranging from meeting new friends and finding new restaurants to overthrowing dictatorships. This includes political campaigning; one need look no further than Barack Obama\u2019s successful presidential campaigns to see how these communication technologies can alter the way politics is conducted. Yet social media have not had much import for setting national policy as part of regular administrative routines. This is the case despite the fact that, since his election in 2008, President Obama has on several occasions proclaimed that he wanted his administration to draw on social media to make the federal government run better. While there have been some modifications to governmental procedures due to the introduction of social media, the Obama administration practices have fallen far short of its leader\u2019s audacious vision. Despite voluminous attention to social media in other spheres of activity, there has been little to point to in terms of successfully drawing on the public to help set national policies. What might account for this? I try to answer this question in my talk by exploring the attempts by the Obama White House to use social media tools and the consequences arising from such attempts. I also suggest some potential reasons behind the particular uses and outcomes that have emerged in terms of presidential-level social media outreach. As part of my conclusion, I outline possible future directions.<\/p>\n

    Biography<\/h2>\n

    James E. Katz, Ph.D., is the Feld Family Professor of Emerging Media at Boston University\u2019s College of Communication where he directs its Center for Mobile Communication Studies and Division of Emerging Media. His research on the internet, social media and mobile communication has been internationally recognized, and he is frequently invited to address high-level industry, governmental and academic groups on his research findings. His latest book, with Barris and Jain, is The Social Media President: Barack Obama and the Politics of Citizen Engagement <\/i>on which this talk is based. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tCooperation on Social Networks \u2013 Nageeb Ali\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Nageeb Ali<\/b> (opens in new tab)<\/span><\/a>, UCSD<\/b> | Wednesday, October 1, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

At most places, and at most times, cooperation takes place in the absence of legal or contractual enforcement. What motivates players to cooperate? A growing literature in the social sciences emphasizes the importance of future interactions and social mechanisms by which defectors are punished both by their victims and by third parties. This perspective has, in recent years, influenced our understanding of contractual and lending relationships in developing economies, reputations in market platforms such as eBay, and even indirect reciprocity in theoretical biology. In this talk, I will describe how the nature and strength of these incentives vary with the social network, how a player may cooperate so as to preserve his reputation in a social network, and what guarantees that a victim of defection truthfully reveals to others that someone else has violated the social norm. We will see that dividing society into cliques, and allowing a modicum of forgiveness, can facilitate cooperation. We may also see that a common assumption made in much of the literature on cooperation\u2014that victims always reveal when someone else has defected\u2014may be less innocuous than it seems.<\/p>\n
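For readers who want a concrete anchor (a textbook illustration of cooperation sustained by future interactions, not the model in the talk): in a standard repeated prisoner's dilemma with mutual-cooperation payoff R, temptation payoff T, and mutual-defection payoff P, grim-trigger punishment sustains cooperation whenever the discount factor satisfies

```latex
% Textbook sustainability condition for cooperation under grim trigger
% (R = mutual cooperation, T = temptation, P = mutual defection payoffs):
\delta \;\ge\; \frac{T - R}{T - P}.
```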

    Biography<\/h2>\n

    S. Nageeb Ali is an assistant professor of economics at UCSD. He studies game-theoretic models of cooperation, social learning, political economy, and behavioral economics. He received his Ph.D. from Stanford University in 2007, and is a frequent Microsoft visitor. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tThe origins of common sense: Modeling human intelligence with probabilistic programs and program induction \u2013 Joshua Tenenbaum\t\t\t<\/h4>\n
    \n

    \n<\/p>

    <\/b>Joshua Tenenbaum (opens in new tab)<\/span><\/a><\/b>, MIT<\/strong> | Wednesday, September 17, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

Our work seeks to understand the roots of human thinking by looking at the core cognitive capacities and learning mechanisms of young children and infants. We build computational models of these capacities with the twin goals of explaining human thought in more principled, rigorous \u201creverse engineering\u201d terms, and engineering more human-like AI and machine learning systems. This talk will focus on two ways in which the intelligence of very young children goes beyond the conventional paradigms in machine learning: (1) Scene understanding, where we detect not only objects and their locations, but also what is happening, what will happen next, who is doing what to whom and why, in terms of our intuitive theories of physics (forces, masses) and psychology (beliefs, desires, \u2026); (2) Learning concepts from examples, where just a single example is often sufficient to grasp a new concept and generalize in richer ways than machine learning systems can typically do even with hundreds or thousands of examples. I will show how we are beginning to capture these reasoning and learning abilities in computational terms using techniques based on probabilistic programs and program induction, embedded in a broadly Bayesian framework for inference under uncertainty.<\/p>\n
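As an illustrative sketch in the spirit of learning a concept from a single example (the hypothesis space, prior, and example below are invented for illustration and are not from the talk), a Bayesian learner can weight hypotheses by the "size principle," preferring smaller hypotheses that still contain the observed example:

```python
import numpy as np

# Hypotheses: integer intervals [lo, hi] within 1..20 (a toy concept space).
hypotheses = [(lo, hi) for lo in range(1, 21) for hi in range(lo, 21)]
example = 16  # a single positive example of the unknown concept

# Likelihood under the size principle: P(example | h) = 1/|h| if example is in h, else 0.
def likelihood(h, x):
    lo, hi = h
    size = hi - lo + 1
    return (1.0 / size) if lo <= x <= hi else 0.0

prior = np.ones(len(hypotheses)) / len(hypotheses)   # uniform prior
post = prior * np.array([likelihood(h, example) for h in hypotheses])
post /= post.sum()                                    # posterior over hypotheses

# Generalization: probability that a new item y belongs to the concept.
def p_member(y):
    return sum(p for h, p in zip(hypotheses, post) if h[0] <= y <= h[1])

print(round(p_member(17), 3), round(p_member(5), 3))  # nearby items score higher than far ones
```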

    Biography<\/h2>\n

    Josh Tenenbaum studies learning, reasoning and perception in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. His current work focuses on building probabilistic models to explain how people come to be able to learn new concepts from very sparse data, how we learn to learn, and the nature and origins of people\u2019s intuitive theories about the physical and social worlds. He is Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at MIT, and is a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his Ph.D. from MIT in 1999, and was a member of the Stanford University faculty in Psychology and (by courtesy) Computer Science from 1999 to 2002. His papers have received awards at numerous conferences, including CVPR (the IEEE Computer Vision and Pattern Recognition conference), ICDL (the International Conference on Learning and Development), NIPS, UAI, IJCAI and the Annual Conference of the Cognitive Science Society. He is the recipient of early career awards from the Society for Mathematical Psychology (2005), the Society of Experimental Psychologists, and the American Psychological Association (2008), and the Troland Research Award from the National Academy of Sciences (2011). <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tRobust Probabilistic Inference \u2013 Yishay Mansour\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Yishay Mansour<\/b> (opens in new tab)<\/span><\/a>, MSR Israel <\/b>| Wednesday, August 27, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

Probabilistic inference is the task of deducing the probability of various outcomes given a certain set of observations. This is a very basic task both in statistics and in machine learning. Robust probabilistic inference is an extension of probabilistic inference in which some of the observations are adversarially corrupted. Examples of where such a model may be relevant are spam detection, where spammers try adversarially to fool the spam detectors, or failure detection and correction, where the failure can be modeled as a \u201cworst case\u201d failure. The framework can also be used to model selection among a few alternative models that possibly generate the data. Technically, we model robust probabilistic inference as a zero-sum game between an adversary, who can select a modification rule, and a predictor, who wants to accurately predict the state of nature. Our main result is an efficient near-optimal algorithm for the robust probabilistic inference problem. More specifically, given black-box access to Bayesian inference in the classic (adversary-free) setting, our near-optimal policy runs in time polynomial in the number of observations and the number of possible modification rules. This is joint work with Aviad Rubinstein and Moshe Tennenholtz.<\/p>\n
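In symbols (a schematic statement for orientation, not the paper's exact formulation): if the predictor chooses a policy \pi mapping possibly corrupted observations to predictions, and the adversary chooses a modification rule m from a known set M, the robust inference problem is the zero-sum game

```latex
% Schematic zero-sum formulation of robust probabilistic inference:
% pi = prediction policy, m = adversarial modification rule, theta = state of nature,
% o = uncorrupted observations, L = prediction loss.
\min_{\pi} \ \max_{m \in M} \ \mathbb{E}_{\theta,\, o}\!\left[ L\big(\pi(m(o)),\, \theta\big) \right].
```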

    Biography<\/h2>\n

Prof. Yishay Mansour received his PhD from MIT in 1990; following that, he was a postdoctoral fellow at Harvard and a Research Staff Member at the IBM T. J. Watson Research Center. Since 1992 he has been at Tel-Aviv University, where he is currently a Professor of Computer Science and served as head of the School of Computer Science during 2000-2002. Prof. Mansour has held visiting positions with Bell Labs, AT&T Research Labs, IBM Research, and Google Research. He has published over 50 journal papers and over 100 proceedings papers in various areas of computer science, with special emphasis on communication networks, machine learning, and algorithmic game theory. Prof. Mansour is currently an associate editor of a number of distinguished journals and has been on numerous conference program committees. He was the program chair of COLT (1998) and served on the COLT steering committee. He has supervised over a dozen graduate students in various areas including communication networks, machine learning, algorithmic game theory and theory of computing. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tNerves and Synapses – A General Preview \u2013 Michal Linial\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Michal Linial<\/b> (opens in new tab)<\/span><\/a><\/b>, Hebrew University <\/b>| Wednesday, August 13, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    My talk is a brief preview of neuroscience (pre-101..). I will share with you some of the brain\u2019s mysteries and will illustrate the capacity of neurons to rewire and thus to learn (and forget). To do so, we will discuss (briefly) how neurons convey information, what are the principles underlying neuronal communication and the fundamental rules of electrical and chemical messengers. The uniformity and the variability of neurons that are involved in high brain functions (mathematics?) and those that make sure that we quickly remove our finger from a hot plate will be discussed. I will mention the capacity of the human brain vis-\u00e0-vis that of our cousins, the chimps, and other nerve systems. Is our brain really so different? (Probably so), what makes us human? (I have no clue..), why are we all fascinated by the brain? (easy to demonstrate). I will introduce you to synapses and describe classical and novel approaches to understand the brain (or at least better describe it). Importantly, I will emphasize how essential it is to study the brain at different levels of resolution and by applying an interdisciplinary approach. I promise to pose more questions than answers\u2026<\/p>\n

    Biography<\/h2>\n

Michal Linial is a Professor of Biochemistry at The Hebrew University, Jerusalem, Israel, and Director of the Israel Institute for Advanced Studies. She has published over 180 scientific papers, abstracts, book chapters, and reviews on diverse topics in molecular biology, cellular biology, bioinformatics, and neuroscience, and on the integration of tools to improve knowledge extraction. She heads a combined experimental and computational laboratory, and she is the founder and leader of the first established educational program in Israel for Computer Science and Life Science (from 1999) for undergraduate and graduate studies. Her expertise in the synapse led to the study of protein families and protein-protein interactions, with a global view of protein networks and their regulation. Molecular biology, cell biology, and biochemical methods are applied in all research initiated in her laboratory, and she and her group develop new computational and technological tools for large-scale cell biological research. Linial and her colleagues apply mass-spectrometry-based and genomics (DNA chip) approaches to study changes in neuronal development and to pursue disease-oriented research. Solid informatics approaches are used for large database storage and the constant updating of several systems for classification, validation, and functional prediction. She and her students have been active participants in NIH structural genomics initiatives, including target selection for the Structural Genomics effort. She and her colleagues have created several global classification systems used by the biomedical and biology communities, most notably ProtoNet, EVEREST, PANDORA, miRror-Suite, and ClanTox. All of these web systems are provided as open resources for investigators. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tExplore or Exploit? Reflections on an Ancient Dilemma in the Age of the Web \u2013 Robert Kleinberg\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Robert Kleinberg<\/b> (opens in new tab)<\/span><\/a>, Cornell <\/b>| Wednesday, August 6, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Learning and decision-making problems often boil down to a balancing act between exploring new possibilities and exploiting the best known one. For more than fifty years, the multi-armed bandit problem has been the predominant theoretical model for investigating these issues. The emergence of the Web as a platform for sequential experimentation at a massive scale is leading to shifts in our understanding of this fundamental problem as we confront new challenges and opportunities. I will present two recent pieces of work addressing these challenges. The first concerns the misalignment of incentives in systems, such as online product reviews and citizen science platforms, that depend on a large population of users to explore a space of options. The second concerns situations in which the learner\u2019s actions consume one or more limited-supply resources, as when a ticket seller experiments with prices for an event with limited seating.<\/p>\n
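As background for the multi-armed bandit framing (a standard textbook algorithm, not the new work described above), the UCB1 rule balances exploration and exploitation by always pulling the arm with the highest optimistic estimate; the reward probabilities below are illustrative assumptions.

```python
import math
import random

# Illustrative Bernoulli arms; the true means are unknown to the learner.
true_means = [0.3, 0.5, 0.7]
counts = [0] * len(true_means)   # pulls per arm
sums = [0.0] * len(true_means)   # total reward per arm

for t in range(1, 10_001):
    if t <= len(true_means):
        arm = t - 1                                 # pull each arm once to initialize
    else:
        # UCB1: empirical mean plus an exploration bonus that shrinks with more pulls.
        arm = max(range(len(true_means)),
                  key=lambda a: sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a]))
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    sums[arm] += reward

print(counts)  # the best arm (mean 0.7) should receive the vast majority of pulls
```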

    Biography<\/h2>\n

    Robert Kleinberg is an Associate Professor of Computer Science at Cornell University. His research studies the design and analysis of algorithms, and their relations to economics, learning theory, and networks. Prior to receiving his doctorate from MIT in 2005, Kleinberg spent three years at Akamai Technologies, where he assisted in designing the world\u2019s largest Internet Content Delivery Network. He is the recipient of a Microsoft Research New Faculty Fellowship, an Alfred P. Sloan Foundation Fellowship, and an NSF CAREER Award. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tVisual Nearest Neighbor Search \u2013 Shai Avidan\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Shai Avidan<\/b> (opens in new tab)<\/span><\/a>, Tel-Aviv University<\/b> | <\/b>Wednesday, July 30, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

Template matching finds the best match in an image to a given template and is used in a variety of computer vision applications. I will discuss several extensions to template matching: first, dealing with the case where we have millions of templates that must be matched at once; second, dealing with RGBD images, where depth information is available; and finally, presenting a fast algorithm for template matching under 2D affine transformations with global approximation guarantees. Joint work with Simon Korman, Yaron Eshet, Eyal Ofek, Gilad Tsur and Daniel Reichman.<\/p>\n
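For orientation (a baseline sketch of plain template matching, not the extensions in the talk), sliding-window matching scores every placement of a template against an image; here with a sum-of-squared-differences score in pure NumPy, with the image and template sizes chosen arbitrarily.

```python
import numpy as np

def match_template_ssd(image, template):
    """Return the (row, col) of the best placement of `template` in `image`
    under a sum-of-squared-differences score (lower is better)."""
    ih, iw = image.shape
    th, tw = template.shape
    scores = np.empty((ih - th + 1, iw - tw + 1))
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            patch = image[r:r + th, c:c + tw]
            scores[r, c] = np.sum((patch - template) ** 2)
    return np.unravel_index(np.argmin(scores), scores.shape)

# Illustrative example: plant a template inside a noisy image and recover its location.
rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))
template = image[20:28, 30:38].copy()
row, col = match_template_ssd(image, template)
print(int(row), int(col))  # expected: 20 30
```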

    Biography<\/h2>\n

Shai Avidan is an Associate Professor at the School of Electrical Engineering at Tel-Aviv University, Israel. He earned his PhD at the Hebrew University, Jerusalem, Israel, in 1999. Later, he was a Postdoctoral Researcher at Microsoft Research, a Project Leader at MobilEye, a startup company developing camera-based driver assistance systems, a Research Scientist at Mitsubishi Electric Research Labs (MERL), and a Senior Researcher at Adobe. He has published extensively in the fields of object tracking in video and 3-D object modeling from images. Recently, he has been working on Computational Photography. Dr. Avidan is an Associate Editor of PAMI and was on the program committee of multiple conferences and workshops in the fields of Computer Vision and Computer Graphics. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tA Grand Gender Convergence: Its Last Chapter \u2013 Claudia Goldin\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Claudia Goldin (opens in new tab)<\/span><\/a><\/b>, Harvard<\/b> | Wednesday, July 23, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    The converging roles of men and women are among the grandest advances in society and the economy in the last century. These aspects of the grand gender convergence are figurative chapters in a history of gender roles. But what must the \u201clast\u201d chapter contain for there to be equality in the labor market? The answer may come as a surprise. The solution does not (necessarily) have to involve government intervention and it need not make men more responsible in the home (although that wouldn\u2019t hurt). But it must involve changes in the labor market, in particular how jobs are structured and remunerated to enhance temporal flexibility. The gender gap in pay would be considerably reduced and might vanish altogether if firms did not have an incentive to disproportionately reward individuals who labored long hours and worked particular hours. Such change has taken off in various sectors, such as technology, science and health, but is less apparent in the corporate, financial and legal worlds.<\/p>\n

    Biography<\/h2>\n

Claudia Goldin is the Henry Lee Professor of Economics at Harvard University and director of the NBER\u2019s Development of the American Economy program. Goldin is an economic historian and a labor economist. Her research has covered a wide array of topics, such as slavery, emancipation, the post-bellum south, women in the economy, the economic impact of war, immigration, New Deal policies, inequality, technological change, and education. Most of her research interprets the present through the lens of the past and explores the origins of current issues of concern. In the past several years her work has concerned the rise of mass education in the United States and its impact on economic growth and wage inequality. More recently she has focused her attention on college women\u2019s achievement of career and family. She is the author and editor of several books, among them Understanding the Gender Gap: An Economic History of American Women (Oxford 1990), The Regulated Economy: A Historical Approach to Political Economy (with G. Libecap; University of Chicago Press 1994), The Defining Moment: The Great Depression and the American Economy in the Twentieth Century (with M. Bordo and E. White; University of Chicago Press 1998), and Corruption and Reform: Lessons from America\u2019s Economic History (with E. Glaeser; Chicago 2006). Her most recent book is The Race between Education and Technology (with L. Katz; The Belknap Press, 2008), winner of the 2008 R.R. Hawkins Award for the most outstanding scholarly work in all disciplines of the arts and sciences. Goldin is best known for her historical work on women in the U.S. economy. Her most recent papers in that area have concerned the history of women\u2019s quest for career and family, coeducation in higher education, the impact of the \u201cpill\u201d on women\u2019s career and marriage decisions, women\u2019s surnames after marriage as a social indicator, and the reasons why women are now the majority of undergraduates. She has recently embarked on a wide-ranging project on the family and career transitions of male and female graduates of selective universities from the late 1960s to the present. Goldin is the current president of the American Economic Association. In 2007 Goldin was elected a member of the National Academy of Sciences and was the Gilman Fellow of the American Academy of Political and Social Science. She is a fellow of the American Academy of Arts and Sciences, the Society of Labor Economists (SOLE), the Econometric Society, and the Cliometric Society. In 2009 SOLE awarded Goldin the Mincer Prize for life-time contributions to the field of labor economics. Goldin completed her term as the President of the Economic History Association in 2000. In 1991 she was elected Vice President of the American Economic Association. From 1984 to 1988 she was editor of the Journal of Economic History and is currently an associate editor of the Quarterly Journal of Economics and a member of various editorial boards. She is the recipient of various teaching awards. Goldin received her B.A. from Cornell University and her Ph.D. from the University of Chicago. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tWhy Not Be Evil? The Costs and Benefits of Corporate Social Responsibility \u2013 Siva Vaidhyanathan\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Siva Vaidhyanathan<\/b> (opens in new tab)<\/span><\/a>, University of Virginia <\/strong>| Wednesday, July 9, 2014<\/p>\n

    Description<\/h2>\n

    Corporate Social Responsibility (CSR) and its Silicon Valley cousin, Social Entrepreneurship, have a rich but recent history. This talk will briefly explore the roots of these schools of thought and practice and examine their rise through business-school curricula and scholarship in the late 20th Century. Why did they come about when they came about? What are their effects on the world? Do they affect consumer behavior and investor behavior? And to what ends? Most seriously, does the identification of a company with particular values or social goals have the effect of depoliticizing an otherwise democratic republic?<\/p>\n

    Biography<\/h2>\n

    Siva Vaidhyanathan is the Robertson Professor of Media Studies at the University of Virginia and the author, most recently, of The Googlization of Everything \u2014 and Why We Should Worry (University of California Press, 2011) <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tRethinking Machine Learning In The 21St Century: From Optimization To Equilibration \u2013 Sridhar Mahadevan\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Sridhar Mahadevan<\/strong> (opens in new tab)<\/span><\/a>, UMASS Amherst <\/strong>| Wednesday, June 11, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

The past two decades have seen machine learning (ML) transformed from an academic curiosity to a multi-billion dollar industry, and a centerpiece of our economic, social, scientific, and security infrastructure. Much work in machine learning has drawn on research in optimization, motivated by large-scale applications requiring analysis of massive high-dimensional data. In this talk, I\u2019ll argue that the growing importance of networked data environments, from the Internet to cloud computing, requires a fundamental rethinking of our basic analytic tools. My thesis will be that ML needs to shift from its current focus on optimization to equilibration, from modeling the world as uncertain, but stationary and benign, to one where the world is non-stationary, competitive, and potentially malicious. Adapting to this new world will require developing new ML frameworks and algorithms. My talk will introduce one such framework \u2014 equilibration using variational inequalities and projected dynamical systems \u2014 which not only generalizes optimization, but is better suited to the distributed networked cloud-oriented future that ML faces. To explain this paradigm change, I\u2019ll begin by summarizing the au courant optimization-based approach to ML using recent research in the Autonomous Learning Laboratory. I will then present an equilibration-based framework using variational inequalities and projected dynamical systems, which originated in mathematics for solving partial differential equations in physics, but has since been widely applied in its finite-dimensional formulation to network equilibrium problems in economics, transportation, and other areas. I\u2019ll describe a range of algorithms for solving variational inequalities, showing that their scope allows ML to extend beyond optimization, to finding game-theoretic equilibria, solving complementarity problems, and many other areas.<\/p>\n
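For reference (standard definitions stated schematically, not the talk's own notation), a finite-dimensional variational inequality asks for a point of a convex set K at which the map F "points outward," and the basic projection method iterates a projected step:

```latex
% Variational inequality VI(F, K): find x* in the convex set K such that
\langle F(x^*),\, x - x^* \rangle \;\ge\; 0 \quad \text{for all } x \in K.
% Basic projection method (Pi_K = Euclidean projection onto K, alpha > 0 a step size):
x_{k+1} \;=\; \Pi_K\!\big(x_k - \alpha\, F(x_k)\big).
```

Convex minimization over K is recovered when F is a gradient field, which is one sense in which equilibration generalizes optimization.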

    Biography<\/h2>\n

    Professor Sridhar Mahadevan directs the Graduate Program at the School of Computer Science at the University of Massachusetts, Amherst. He is a co-director of the Autonomous Learning Laboratory, one of the oldest academic research centers for machine learning in the US, which has graduated more than 30 doctoral students in its three decade history, and includes 3 AAAI fellows among its alumni. The lab currently includes 14 PhD students, who work in a variety of areas in machine learning, including equilibration algorithms, optimization, reinforcement learning, and unsupervised learning. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tDo Neighborhoods Matter for Disadvantaged Families? Long-Term Evidence from the Moving to Opportunity Experiment \u2013 Larry Katz\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Larry Katz<\/b> (opens in new tab)<\/span><\/a>, Harvard<\/b> | Wednesday, May 21, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

We examine long-term neighborhood effects on low-income families using data from the Moving to Opportunity (MTO) randomized housing-mobility experiment, which offered some public-housing families but not others the chance to move to less-disadvantaged neighborhoods. MTO succeeded in moving families to lower-poverty and safer residential neighborhoods, but MTO moves did not substantially improve the quality of schools attended by the children. We show that 10-15 years after baseline, MTO improves adult physical and mental health, has no detectable effect on economic outcomes, youth schooling, or physical health, and yields mixed results by gender on other youth outcomes, with girls doing better on some measures and boys doing worse. Despite the somewhat mixed pattern of impacts on traditional behavioral outcomes, MTO moves substantially improve adult subjective well-being. And when opportunities to move with housing vouchers lead to better schools for the children, such moves do have long-run positive impacts on youth education and reduce youth risky behaviors.<\/p>\n

    Biography<\/h2>\n

    Lawrence F. Katz is the Elisabeth Allison Professor of Economics at Harvard University and a Research Associate of the National Bureau of Economic Research. His research focuses on issues in labor economics and the economics of social problems. He is the author (with Claudia Goldin) of The Race between Education and Technology (Harvard University Press, 2008), a history of U.S. economic inequality and the roles of technological change and the pace of educational advance in affecting the wage structure. Katz also has been studying the impacts of neighborhood poverty on low-income families as the principal investigator of the long-term evaluation of the Moving to Opportunity program, a randomized housing mobility experiment. And Katz is working with Claudia Goldin on a major project studying the historical evolution of career and family choices and outcomes for U.S. college men and women. His past research has explored a wide range of topics including U.S. and comparative wage inequality trends, educational wage differentials and the labor market returns to education, the impact of globalization and technological change on the labor market, the economics of immigration, unemployment and unemployment insurance, regional labor markets, the evaluation of labor market programs, the problems of low-income neighborhoods, and the social and economic consequences of the birth control pill. Professor Katz has been editor of the Quarterly Journal of Economics since 1991 and served as the Chief Economist of the U.S. Department of Labor for 1993 and 1994. He is the co-Scientific Director of J-PAL North America, current President of the Society of Labor Economists, and has been elected a fellow of the National Academy of Sciences, American Academy of Arts and Sciences, the Econometric Society, and the Society of Labor Economists. Katz serves on the Panel of Economic Advisers of the Congressional Budget Office as well as on the Boards of the Russell Sage Foundation and the Manpower Demonstration Research Corporation. He graduated from the University of California at Berkeley in 1981 and earned his Ph.D. in Economics from the Massachusetts Institute of Technology in 1985. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tPrincipled Approaches for Learning Latent Variable Models \u2013 Anima Anandkumar\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Anima Anandkumar<\/b> (opens in new tab)<\/span><\/a>, UC Irvine<\/b> | <\/b>Wednesday, May 14, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

In any learning task, it is natural to incorporate latent or hidden variables which are not directly observed. For instance, in a social network, we can observe interactions among the actors, but not their hidden interests\/intents; in gene networks, we can measure gene expression levels but not the detailed regulatory mechanisms; and so on. I will present a broad framework for unsupervised learning of latent variable models, addressing both statistical and computational concerns. We show that higher order relationships among observed variables have a low rank representation under natural statistical constraints such as conditional-independence relationships. We also present efficient computational methods for finding these low rank representations. These findings have implications in a number of settings such as finding hidden communities in networks, discovering topics in text documents and learning about gene regulation in computational biology. I will also present principled approaches for learning overcomplete models, where the latent dimensionality can be much larger than the observed dimensionality, under natural sparsity constraints. This has implications in a number of applications such as sparse coding and feature learning.<\/p>\n
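To make the "higher order relationships have a low rank representation" point concrete (a generic illustration of the method-of-moments idea, with symbols chosen here for exposition rather than taken from the talk): in an exchangeable three-view model where a hidden state takes value i with probability w_i and each view has conditional mean \mu_i, the third-order cross moment of the observations has the low-rank tensor form

```latex
% Exchangeable three-view latent variable model: hidden state h = i with probability w_i,
% each view x_1, x_2, x_3 has conditional mean mu_i given h = i.
\mathbb{E}[\,x_1 \otimes x_2 \otimes x_3\,] \;=\; \sum_{i=1}^{k} w_i\, \mu_i \otimes \mu_i \otimes \mu_i .
```

Recovering the weights w_i and the vectors \mu_i then reduces to decomposing this tensor, which is the kind of low-rank structure the abstract refers to.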

    Biography<\/h2>\n

Anima Anandkumar has been a faculty member in the EECS Dept. at U.C. Irvine since August 2010. Her research interests are in the area of large-scale machine learning and high-dimensional statistics. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at the Stochastic Systems Group at MIT from 2009 to 2010. She is the recipient of the Alfred P. Sloan Fellowship, Microsoft Faculty Fellowship, ARO Young Investigator Award, NSF CAREER Award, IBM Fran Allen PhD fellowship, thesis award from the ACM SIGMETRICS society, and paper awards from the ACM SIGMETRICS and IEEE Signal Processing societies. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tEconomies of Visibility: Girl Empowerment Organizations and the Market for Empowerment \u2013 Sarah Banet-Weiser\t\t\t<\/h4>\n
    \n

    \n<\/p>

Sarah Banet-Weiser<\/b> (opens in new tab)<\/span><\/a>, USC Annenberg\u2019s School of Communication | <\/b>Wednesday, April 30, 2014<\/p>\n

    Description<\/h2>\n

    In the past two decades, the invocation of \u201cgirl power\u201d as an increasingly normative discourse to describe young girls and women in their everyday practices has been met with both excitement and challenge. However, while many have theorized how the \u201cgirl\u201d in girl power is a racially and class specific girl, one that has economic and cultural privilege to access power, the \u201cpower\u201d in girl power still needs rigorous theorization. In this talk, I examine what the \u201cpower\u201d of girl power means in the current moment, arguing that for the most part, this form of power is legible within an economy of media visibility, where media incessantly look at and invite us to look at girls. More specifically, I examine the construction of a market within the contemporary economy of visibility: the market for empowerment. Looking at girl empowerment organizations, I analyze this market in both a US and international development context, and argue that it works to consolidate a specific kind of empowerment that is personal and individual.<\/p>\n

    Biography<\/h2>\n

    Sarah Banet-Weiser is Professor of Communication at the Annenberg School of Communication and Journalism and in the Department of American Studies and Ethnicity at the University of Southern California. She is the author of The Most Beautiful Girl in the World: Beauty Pageants and National Identity (1999), Kids Rule! Nickelodeon and Consumer Citizenship (2007), and Authentic\u2122: The Politics of Ambivalence in a Brand Culture (winner of the Outstanding Book Award at the International Communication Association). She is the co-editor of Cable Visions: Television Beyond Broadcasting and Commodity Activism: Cultural Resistance in Neoliberal Times. She edited the NYU press book series Critical Cultural Communication until 2012, and is currently the editor of American Quarterly. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tThose Of You Who Need a Little More Time \u2013 Jonathan Sterne\t\t\t<\/h4>\n
    \n

    \n<\/p>

Jonathan Sterne<\/b> (opens in new tab)<\/span><\/a>, McGill<\/b> | Wednesday, April 16, 2014<\/p>\n

    Description<\/h2>\n

    This talk examines the lesser-known work and legacy of Dennis Gabor. Gabor was a physicist famous for inventing holography. But he also applied quantum theory to sound, and in so doing offered an important corrective to prevailing interpretations of wave theories of sound derived from Joseph Fourier\u2019s work. To prove his point, Gabor built a device called the \u201ckinematic frequency compressor,\u201d which could time-stretch or pitch-shift audio independently of the other operation, a feat previously considered impossible in the analog domain. After considering the machine, I trace its technical and cultural descendants in advertising, cinema, avant-garde music, and today in the world\u2019s most popular audio software, Ableton Live.<\/p>\n
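For context on the physics referenced here (a standard formulation, not drawn from the talk itself), the time-frequency uncertainty relation at the heart of Gabor's acoustic quanta can be stated as follows; it is the constraint that made independent time-stretching and pitch-shifting appear impossible under a purely Fourier view of sound.

```latex
% Gabor's acoustic uncertainty relation: the spreads of a signal in time and
% frequency (standard deviations sigma_t, sigma_f) cannot both be made small.
\sigma_t \, \sigma_f \;\ge\; \frac{1}{4\pi}.
```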

    Biography<\/h2>\n

    Jonathan Sterne teaches in the Department of Art History and Communication Studies and the History and Philosophy of Science Program at McGill University. He is author of MP3: The Meaning of a Format (Duke 2012), The Audible Past: Cultural Origins of Sound Reproduction (Duke, 2003); and numerous articles on media, technologies and the politics of culture. He is also editor of The Sound Studies Reader (Routledge, 2012). His new projects consider instruments and instrumentalities; histories of signal processing; and the intersections of disability, technology and perception. Visit his website at http:\/\/sterneworks.org (opens in new tab)<\/span><\/a>. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tDeceptive Products \u2013 Botond Koszegi\t\t\t<\/h4>\n
    \n

    \n<\/p>

Botond Koszegi<\/b> (opens in new tab)<\/span><\/a>, Central European University<\/b><\/strong> | Wednesday, April 2, 2014<\/p>\n

    Description<\/h2>\n

    A literature in behavioral economics documents that in a number of retail markets, some consumers misunderstand key fees or other central product features, and many argue that this leads firms to offer contracts and products that take advantage of such naive consumers. This talk will give an overview of some theoretical research on the market for deceptive products. Questions might include (i) what kinds of contracts will be offered in the presence of naive consumers; (ii) how naive and sophisticated consumers affect each other in the market; (iii) how firms attempt to discriminate between naive and sophisticated consumers, and how this affects economic welfare; (iv) whether and when firms have an incentive to \u201ccome clean\u201d regarding their products; and (v) what kinds of products will be sold in a deceptive way. Based on joint work with Paul Heidhues and Takeshi Murooka<\/p>\n

    Biography<\/h2>\n

Botond Koszegi has been Professor in the Department of Economics at Central European University in Budapest, Hungary, since August 1, 2012. He was previously Professor of Economics at the University of California at Berkeley, and has held visiting positions at the Massachusetts Institute of Technology, Cambridge, MA, and CEU. He earned his BA in mathematics from Harvard University in 1996, and his Ph.D. in economics from the Massachusetts Institute of Technology in 2000. His research interests are primarily in the theoretical foundations of behavioral economics. He has produced research on self-control problems and the consumption of harmful products, self-image and anticipatory utility, reference-dependent preferences and loss aversion, and focusing and attention. Recently, he has been studying how firms respond to consumers\u2019 psychological tendencies, especially in the pricing of products and the design of credit and other financial contracts. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tMechanism Design for Data Science \u2013 Jason Hartline\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Jason Hartline<\/b> (opens in new tab)<\/span><\/a>, Northwestern <\/strong>| <\/strong>Wednesday, March 19, 2014 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

The promise of data science is that system data can be analyzed and its understanding can be used to improve the system (i.e., to obtain good outcomes). For this promise to be realized, the necessary understanding must be inferable from the data. Whether or not this understanding is inferable often depends on the system itself. Therefore, the system needs to be designed both to obtain good outcomes and to admit good inference. This talk will explore this issue in a mechanism design context where the designer would like to use past bid data to adapt an auction mechanism to optimize revenue. Data analysis is necessary for revenue optimization in auctions, but revenue optimization is at odds with good inference. The revenue-optimal auction for selling an item is typically parameterized by a reserve price, and the appropriate reserve price depends on how much the bidders are willing to pay. This willingness to pay could potentially be learned by inference, but a reserve price precludes learning anything about the willingness-to-pay of bidders who are not willing to pay the reserve price. The auctioneer could never learn that lowering the reserve price would give a higher revenue (even if it would). To address this impossibility, the auctioneer could sacrifice revenue-optimality in the initial auction to obtain better inference properties so that the auction\u2019s parameters can be adapted to changing preferences in the future. In this talk, I will develop a theory for optimal auction design subject to good inference.<\/p>\n
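As a concrete instance of the tension described above (a textbook fact about optimal reserve prices, not the paper's new result): with i.i.d. bidder values drawn from a regular distribution F with density f, the revenue-optimal reserve price r* solves the monopoly-price condition, and the seller observes nothing about demand below it.

```latex
% Myerson's optimal reserve for a regular value distribution F with density f:
r^* \;-\; \frac{1 - F(r^*)}{f(r^*)} \;=\; 0 .
% Example: values uniform on [0, 1] give r* = 1/2, so the seller never observes
% willingness-to-pay below 1/2 and cannot learn from bid data whether a lower
% reserve would raise revenue.
```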

    Biography<\/h2>\n

Prof. Hartline is on sabbatical at the Harvard Economics and Computer Science Departments for the 2014 calendar year (January 2014-December 2014). Prof. Hartline\u2019s current research interests lie in the intersection of the fields of theoretical computer science, game theory, and economics. With the Internet developing as the single most important arena for resource sharing among parties with diverse and selfish interests, traditional algorithmic and distributed systems approaches are insufficient. Instead, in protocols for the Internet, game-theoretic and economic issues must be considered. A fundamental research endeavor in this new field is the design and analysis of auction mechanisms and pricing algorithms. Dr. Hartline joined the EECS department (and MEDS, by courtesy) in January of 2008. He was a researcher at Microsoft Research, Silicon Valley from 2004 to 2007, where his research covered the foundational topic of algorithmic mechanism design<\/em> and applications to auctions for sponsored search<\/em>. He was an active researcher in the San Francisco bay area algorithmic game theory<\/em> community and was a founding organizer of the Bay Algorithmic Game Theory Symposium. In 2003, he held a postdoctoral research fellowship at the Aladdin Center at Carnegie Mellon University. He received his Ph.D. in Computer Science from the University of Washington in 2003 with advisor Anna Karlin and B.S. degrees in Computer Science and Electrical Engineering from Cornell University in 1997. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tAn Experiment in Hiring Discrimination Via Online Social Networks \u2013 Alessandro Acquisti\t\t\t<\/h4>\n
    \n

    \n<\/p>

Alessandro Acquisti<\/b> (opens in new tab)<\/span><\/a>, CMU<\/b> | Wednesday, Feb 26, 2014<\/p>\n

    Description<\/h2>\n

Surveys of U.S. employers suggest that numerous firms seek information about job applicants online. However, little is known about how this information gathering influences employers\u2019 hiring behavior. We present results from two complementary randomized experiments (a field experiment and an online experiment) on the impact of online information on U.S. firms\u2019 hiring behavior. We manipulate candidates\u2019 personal information that is protected under either federal laws or some state laws, and may be risky for employers to enquire about during interviews, but which may be inferred from applicants\u2019 online social media profiles. In the field experiment, we test responses of over 4,000 U.S. employers to a Muslim candidate relative to a Christian candidate, and to a gay candidate relative to a straight candidate. We supplement the field experiment with a randomized, survey-based online experiment with over 1,000 subjects (including subjects with previous human resources experience) testing the effects of the manipulated online information on hypothetical hiring decisions and perceptions of employability. The results of the field experiment suggest that a minority of U.S. firms likely searched online for the candidates\u2019 information. Hence, the overall effect of the experimental manipulations on interview invitations is small and not statistically significant. However, in the field experiment, we find evidence of discrimination linked to political party affiliation. Following the Gallup Organization\u2019s segmentation of U.S. states by political ideology, we use results from the 2012 presidential election and find evidence of discrimination against the Muslim candidate compared to the Christian candidate among employers in more Romney-leaning states and counties. These results are robust to controlling for firm characteristics, state fixed effects, and a host of county-level variables. We find no evidence of discrimination against the gay candidate relative to the straight candidate. Results from the online experiment are consistent with those from the field experiment: we find more evidence of bias among subjects who self-report a more politically conservative party affiliation.<\/p>\n

    Biography<\/h2>\n

    Alessandro Acquisti is a professor of information technology and public policy at the Heinz College, Carnegie Mellon University (CMU) and the co-director of CMU Center for Behavioral and Decision Research. He has held visiting positions at the Universities of Rome, Paris, and Freiburg (visiting professor); Harvard University (visiting scholar); University of Chicago (visiting fellow); Microsoft Research (visiting researcher); and Google (visiting scientist). Alessandro investigates economic, policy, and technological issues surrounding privacy. His studies have spearheaded the application of behavioral economics to the analysis of privacy and information security decision making, and the analysis of privacy risks and disclosure behavior in online social networks. Alessandro has been the recipient of the PET Award for Outstanding Research in Privacy Enhancing Technologies, the IBM Best Academic Privacy Faculty Award, multiple Best Paper awards, and the Heinz College School of Information\u2019s Teaching Excellence Award. He has testified before the U.S. Senate and House committees on issues related to privacy policy and consumer behavior, and was a TED Global 2013 speaker. Alessandro\u2019s findings have been featured in national and international media outlets, including the Economist, the New York Times, the Wall Street Journal, the Washington Post, the Financial Times, Wired.com, NPR, CNN, and CBS 60 Minutes. His 2009 study on the predictability of Social Security numbers was featured in the \u201cYear in Ideas\u201d issue of the New York Times Magazine. Alessandro holds a PhD from UC Berkeley, and Master degrees from UC Berkeley, the London School of Economics, and Trinity College Dublin. He has been a member of the National Academies\u2019 Committee on public response to alerts and warnings using social media. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tEconomic Models as Analogies \u2013 Larry Samuelson\t\t\t<\/h4>\n
    \n

    \n<\/p>

Larry Samuelson<\/b> (opens in new tab)<\/span><\/a>, Yale<\/b> | Wednesday, December 18, 2013<\/p>\n

    Description<\/h2>\n

    People often wonder why economists analyze models whose assumptions are known to be false, while economists feel that they learn a great deal from such exercises. We suggest that part of the knowledge generated by academic economists is case-based rather than rule-based. That is, instead of offering general rules or theories that should be contrasted with data, economists often analyze models that are \u201ctheoretical cases\u201d, which help understand economic problems by drawing analogies between the model and the problem. According to this view, economic models, empirical data, experimental results and other sources of knowledge are all on equal footing, that is, they all provide cases to which a given problem can be compared. We offer complexity arguments that explain why case-based reasoning may sometimes be the method of choice and why economists prefer simple cases. Joint work with Itzhak Gilboa, Andrew Postlewaite, and David Schmeidler<\/p>\n

    Biography<\/h2>\n

    Samuelson is a Fellow of the Econometric Society and a Fellow of the American Academy of Arts and Sciences. He has been a Co-editor of Econometrica and is currently a Co-editor of the American Economic Review. His research spans microeconomic theory and game theory. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tTools for Large Scale Public Engagement in Research \u2013 Krzysztof Gajos\t\t\t<\/h4>\n
    \n

    \n<\/p>

Krzysztof Gajos<\/b> (opens in new tab)<\/span><\/a>, Harvard<\/b> | Wednesday, December 4, 2013<\/p>\n

    Description<\/h2>\n

    Non-scientists have long been contributing to research: by gathering observations on plant and animal behavior, by gazing at the sky through private amateur telescopes, or by participating in psychology experiments. The Internet has created entirely new opportunities for enabling public participation in research, both in terms of the scale of public participation and the kinds of activities that the non-professional scientists can perform in support of scientific inquiry. Yet, inclusion of the broader publics in one\u2019s research program remains an exception rather than a norm, presumably because of concerns related to technical infrastructure, recruitment, and reliability of contributions. I will highlight two strands of research in my group that contribute toward wider involvement of broader publics in research. In the first strand, we have specifically focused on methods for studying human motor performance on computer input tasks. We have developed and validated mechanisms for collecting lab-quality data in three settings: 1. unobtrusively in situ from observations of a user\u2019s natural interactions with a computer; 2. on Amazon Mechanical Turk; 3. with unpaid online volunteers through our Lab in the Wild platform. Our recent study with 500,000 participants allowed us to replicate several past results and also to conduct new analyses that were not possible before. For example, we provided fine grained estimates of when in life basic abilities (such as cognitive processing speed, fine motor control, and gross motor control) peak. In the second strand, we focused on developing procedures to enable non-experts to perform expert-level analytical tasks accurately and at scale. Specifically, we have developed PlateMate, a system for crowdsourcing nutritional analysis from food photographs. In an ongoing project, we are studying the behavioral and nutritional factors impacting preterm birth. A key technical enabler of this project is a mechanism, based on our PlateMate system, for scalable nutritional analysis, which will make it possible to track the nutritional intake of 400 pregnant women for several months each.<\/p>\n

    Biography<\/h2>\n

    Krzysztof Z. Gajos is an associate professor of computer science at the Harvard School of Engineering and Applied Sciences. Krzysztof is primarily interested in intelligent interactive systems, an area that spans human-computer interaction, artificial intelligence, and applied machine learning. Krzysztof received his B.Sc. and M.Eng. degrees in Computer Science from MIT. Subsequently he was a research scientist at the MIT Artificial Intelligence Laboratory, where he managed The Intelligent Room Project. In 2008, he received his Ph.D. in Computer Science from the University of Washington in Seattle. Before coming to Harvard in September of 2009, he spent a year as a post-doctoral researcher in the Adaptive Systems and Interaction group at Microsoft Research. URL: http:\/\/www.eecs.harvard.edu\/~kgajos\/ (opens in new tab)<\/span><\/a>. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tUnderstanding Audition Via Sound Synthesis \u2013 Josh McDermott\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Josh McDermott<\/b> (opens in new tab)<\/span><\/a>, MIT<\/b> | <\/b>Wednesday, November 20, 2013<\/p>\n

    Description<\/h2>\n

    Humans infer many important things about the world from the sound pressure waveforms that enter the ears. In doing so we solve a number of difficult and intriguing computational problems. We recognize sound sources despite large variability in the waveforms they produce, extract behaviorally relevant attributes that are not explicit in the input to the ear, and do so even when sound sources are embedded in dense mixtures with other sounds. This talk will describe recent progress in understanding these remarkable auditory abilities. The work stems from the premise that a theory of the perception of some property should enable the synthesis of signals that appear to have that property. Sound synthesis can thus be used to test theories of perception and to explore representations of sound. I will describe several examples of this approach.<\/p>\n

    Biography<\/h2>\n

    Josh McDermott is a perceptual scientist studying sound, hearing, and music in the Department of Brain and Cognitive Sciences at MIT. His research addresses human and machine audition using tools from experimental psychology, engineering, and neuroscience. He is particularly interested in using the gap between human and machine competence to both better understand biological hearing and design better algorithms for analyzing sound. McDermott obtained a BA in Brain and Cognitive Science from Harvard, an MPhil in Computational Neuroscience from University College London, a PhD in Brain and Cognitive Science from MIT, and postdoctoral training in psychoacoustics at the University of Minnesota and in computational neuroscience at NYU. He is the recipient of a Marshall Scholarship, a National Defense Science and Engineering fellowship, and a James S. McDonnell Foundation Scholar Award. He is currently an Assistant Professor in the Department of Brain and Cognitive Sciences at MIT. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tGraphical approaches to Biological Problems \u2013 Ernest Fraenkel\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Ernest Fraenkel<\/b> (opens in new tab)<\/span><\/a>, MIT<\/b> | <\/b>Wednesday, November 6, 2013 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Biology has been transformed by new technologies that provide detailed descriptions of the molecular changes that occur in diseases. However, it is difficult to use these data to reveal new therapeutic insights for several reasons. Despite their power, each of these methods still only captures a small fraction of the cellular response. Moreover, when different assays are applied to the same problem, they provide apparently conflicting answers. I will show that network modeling reveals the underlying consistency of the data by identifying small, functionally coherent pathways linking the disparate observations. We have used these methods to analyze how oncogenic mutations alter signaling and transcription and to prioritize experiments aimed at discovering therapeutic targets.<\/p>\n

    Biography<\/h2>\n

    Ernest Fraenkel was first introduced to computational biology in high school when the field did not yet have a name. His early experiences with Professor Cyrus Levinthal of Columbia University taught him that biological insights often come from unexpected disciplines. After graduating summa cum laude from Harvard College in Chemistry and Physics he obtained his Ph.D. at MIT in the department of Biology and did post-doctoral work at Harvard. As the field of Systems Biology began to emerge, he established a research group in this area at the Whitehead Institute and then moved to the Department of Biological Engineering at the Massachusetts Institute of Technology. His research group takes a multi-disciplinary approach involving tightly connected computational and experimental methods to uncover the molecular pathways that are altered in cancer, neurodegenerative diseases, and diabetes. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tSocial Norms and the Impact of Laws \u2013 Matt Jackson\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Matt Jackson<\/b> (opens in new tab)<\/span><\/a>, Stanford <\/b>| Wednesday, September 18, 2013<\/p>\n

    Description<\/h2>\n

    We examine the impact of laws in a model of social norms. Agents each choose a level of behavior (e.g., a speed of driving, an amount of corruption, etc.). Agents choose behaviors not only based on their personal preferences but also based on a preference to match or conform to the behaviors of other agents with whom they interact. A law caps the level of behavior, and a law-abiding agent may whistle-blow on an agent who is breaking the law, correcting the behavior of the latter and making him or her pay a fine. The impact of a law is endogenous to the social norm (equilibrium of behavior) and, as such, laws can have nonmonotone effects: a strict law may be broken more frequently than a lax one. Moreover, law-breakers may choose more extreme behavior as a law becomes stricter. Historical behavior can influence the impact of a law: exactly the same law can have drastically different impacts in two different societies depending on past social norms.<\/p>\n

    Biography<\/h2>\n

    Matthew O. Jackson is the Eberle Professor of Economics at Stanford University, an external faculty member of the Santa Fe Institute, and a fellow of CIFAR. Jackson\u2019s research interests include game theory, microeconomic theory, and the study of social and economic networks, including diffusion, learning, and network formation. He was at Northwestern and Caltech before joining Stanford, and has a PhD from Stanford and a BA from Princeton. Jackson is a Fellow of the Econometric Society and the American Academy of Arts and Sciences, and a former Guggenheim Fellow. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tCrowdsourcing Audio Production Interfaces \u2013 Bryan Pardo\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Bryan Pardo<\/b> (opens in new tab)<\/span><\/a>, Northwestern <\/b>| Wednesday, September 11, 2013 | Video (opens in new tab)<\/span><\/a><\/p>\n

    Description<\/h2>\n

    Potential users of audio production software, such as audio equalizers, may be discouraged by the complexity and lack of clear affordances of typical interfaces. We seek to simplify interfaces for tasks such as audio production (e.g. mastering a music album with ProTools), audio tools (e.g. equalizers) and related consumer devices (e.g. hearing aids). Our approach is to combine an evaluative paradigm (\u201cI like this sound better than that sound\u201d) with descriptive language (e.g. \u201cMake the violin sound \u2018warmer.\u2019\u201d). To achieve this goal, a system must be able to tell whether the stated goal is appropriate for the selected tool (e.g. making the violin \u201cwarmer\u201d with a panning tool does not make sense). If the goal is appropriate for the tool, it must know what actions need to be taken (e.g. add some reverberation). Further, the tool should not impose a vocabulary on users, but rather understand the vocabulary users prefer. In this talk, Bryan Pardo describes iQ, an equalizer that uses an evaluative control paradigm, and SocialEQ, a web-based project to crowdsource a vocabulary of actionable audio descriptors.<\/p>\n

    Biography<\/h2>\n

    Bryan Pardo, head of the Northwestern University Interactive Audio Lab, is an associate professor in the Northwestern University Department of Electrical Engineering and Computer Science. Prof. Pardo received an M.Mus. in Jazz Studies in 2001 and a Ph.D. in Computer Science in 2005, both from the University of Michigan. He has authored over 70 peer-reviewed publications. He has developed speech analysis software for the Speech and Hearing Department of the Ohio State University and statistical software for SPSS, and has worked as a machine learning researcher for General Dynamics. While finishing his doctorate, he taught in the Music Department of Madonna University. When he\u2019s not programming, writing or teaching, he performs throughout the United States on saxophone and clarinet at venues such as Albion College, the Chicago Cultural Center, the Detroit Concert of Colors, Bloomington Indiana\u2019s Lotus Festival and Tucson\u2019s Rialto Theatre. <\/p>\n

    <\/p><\/div>\n

    \n\t\t\t\tSeeing the invisible; Predicting the unexpected \u2013 Michal Irani\t\t\t<\/h4>\n
    \n

    \n<\/p>

    Michal Irani (opens in new tab)<\/span><\/a><\/b>, Weizmann <\/b>| Wednesday, September 4, 2013<\/p>\n

    Description<\/h2>\n

    In this talk I will show how complex visual inference tasks can be performed, with no prior examples, by exploiting internal redundancy within visual data. Comparing and integrating local pieces of visual information gives rise to complex notions of visual similarity and to a general \u201cInference by Composition\u201d approach. This makes it possible to infer the likelihood of new visual data that was never seen before, and to make inferences about complex static and dynamic visual information without any prior examples. I will demonstrate the power of this approach on several example problems (as time permits):<\/p>\n

      \n
    1. Detecting complex objects and actions.<\/li>\n
    2. Prediction of missing visual information.<\/li>\n
    3. Inferring the \u201clikelihood\u201d of \u201cnever-before-seen\u201d visual data.<\/li>\n
    4. Detecting the \u201cirregular\u201d and \u201cunexpected\u201d.<\/li>\n
    5. Spatial super-resolution (from a single image) & Temporal super-resolution (from a single video).<\/li>\n
    6. Generating visual summaries (of images and videos).<\/li>\n
    7. Segmentation of complex visual data.<\/li>\n<\/ol>\n

      Biography<\/h2>\n

      Michal Irani is a Professor at the Weizmann Institute of Science, in the Department of Computer Science and Applied Mathematics. She received a B.Sc. degree in Mathematics and Computer Science from the Hebrew University of Jerusalem in 1985, and M.Sc. and Ph.D. degrees in Computer Science from the same institution in 1989 and 1994, respectively. From 1993 to 1996, she was a member of the technical staff of the Vision Technologies Laboratory at the David Sarnoff Research Center (Princeton, New Jersey, USA). She joined the Weizmann Institute in 1997. Michal\u2019s research interests center around computer vision, image processing, and video information analysis. Michal\u2019s prizes and honors include the David Sarnoff Research Center Technical Achievement Award (1994), the Yigal Allon three-year Fellowship for Outstanding Young Scientists (1998), and the Morris L. Levinson Prize in Mathematics (2003). At the European Conference on Computer Vision, she received awards for Best Paper in 2000 and in 2002, and was awarded an Honorable Mention for the Marr Prize at the IEEE International Conference on Computer Vision in 2001 and in 2005. <\/p>\n

      <\/p><\/div>\n

      \n\t\t\t\tDifferential Privacy: Theoretical and Practical Challenges \u2013 Salil Vadhan\t\t\t<\/h4>\n
      \n

      \n<\/p>

      Salil Vadhan<\/b> (opens in new tab)<\/span><\/a>, Harvard <\/b>| Wednesday, August 14, 2013<\/p>\n

      Description<\/h2>\n

      Differential Privacy is a framework for enabling the analysis of privacy-sensitive datasets while ensuring that individual-specific information is not revealed. The concept was developed in a body of work in theoretical computer science starting about a decade ago, largely coming from Microsoft Research. It is now flourishing as an area of theory research, with deep connections to many other topics in theoretical computer science. At the same time, its potential for addressing pressing privacy problems in a variety of domains has attracted the interest of scholars from many other areas, including statistics, databases, medical informatics, law, social science, computer security and programming languages. In this talk, I will give a general introduction to differential privacy, and discuss some of the theoretical and practical challenges for future work in this area. I will also describe a large, multidisciplinary research project at Harvard, called \u201cPrivacy Tools for Sharing Research Data,\u201d in which we are working on some of these challenges as well as others associated with the collection, analysis, and sharing of personal data for research in social science and other fields.<\/p>\n

      Biography<\/h2>\n

      Salil Vadhan is the Vicky Joseph Professor of Computer Science (opens in new tab)<\/span><\/a> and Applied Mathematics at the School of Engineering & Applied Sciences (opens in new tab)<\/span><\/a> at Harvard University (opens in new tab)<\/span><\/a>. He is a member of the Theory of Computation (opens in new tab)<\/span><\/a> research group. His research areas include computational complexity, cryptography, randomness in computation, and data privacy. <\/p>\n

      <\/p><\/div>\n

      \n\t\t\t\tTechnologies of Choice? \u2013 ICTs, development and the capabilities approach \u2013 Dorothea Kleine\t\t\t<\/h4>\n
      \n

      \n<\/p>

      Dorothea Kleine<\/b> (opens in new tab)<\/span><\/a>, University of London <\/b>| Wednesday, July 31, 2013 | Video (opens in new tab)<\/span><\/a><\/p>\n

      Description<\/h2>\n

      ICT for development (ICT4D) scholars claim that the internet, radio and mobile phones can support development. Yet the dominant paradigm of development as economic growth is too limiting to understand the full potential of these technologies. One key rival to such econocentric understandings is Amartya Sen\u2019s capabilities approach to development \u2013 focusing on a pluralistic understanding of people\u2019s values and the lives they want to lead. In her book, Technologies of Choice?<\/i> (MIT Press 2013), Dorothea Kleine translates Sen\u2019s approach into policy analysis and ethnographic work on technology adaptation. She shows how technologies are not neutral, but imbued with values that may or may not coincide with the values of users. The case study analyses Chile\u2019s pioneering ICT policies in the areas of public access, digital literacy, and online procurement, and the sobering reality of one of the most marginalised communities in the country where these policies play out. The book shows how both neoliberal and egalitarian ideologies are written into technologies as they permeate the everyday lives and livelihoods of women and men in the town. Technologies of Choice?<\/i> examines the relationship between ICTs, choice, and development. It argues for a people-centred view of development that has individual and collective choice at its heart.<\/p>\n

      Biography<\/h2>\n

      Dorothea Kleine is Senior Lecturer in Human Geography and Director of the interdisciplinary ICT4D Centre at Royal Holloway, University of London (www.ict4dc.org (opens in new tab)<\/span><\/a>). In 2013 the Centre was named among the top 10 global think tanks in science and technology (U of Penn survey of experts) and has a highly recognized PhD and Masters program in ICT for development. Dorothea\u2019s work focuses on the relationship between notions of \u201cdevelopment\u201d, choice and individual agency, sustainability, gender and technology. She has published widely on these subjects, and has worked as an advisor to UNICEF, UNEP, EUAid, DFID, GIZ and to NGOs. The Centre runs various collaborative research projects with international agencies and private sector partners. <\/p>\n

      <\/p><\/div>\n

      \n\t\t\t\tThe Cryptographic Lens \u2013 Shafi Goldwasser\t\t\t<\/h4>\n
      \n

      \n<\/p>

      Shafi Goldwasser<\/b> (opens in new tab)<\/span><\/a>, MIT<\/b> | <\/span>Wednesday, July 17, 2013<\/p>\n

      Description<\/h2>\n

      Going beyond the basic challenge of private communication<\/i>, in the last 35 years, cryptography has become the general study of correctness and privacy of computation<\/i> in the presence of a computationally bounded adversary, and as such has changed how we think of proofs, reductions, randomness, secrets, and information. In this talk I will discuss some beautiful developments in the theory of computing through this cryptographic lens, and the role cryptography can play in the next successful shift from local to global computation.<\/p>\n

      Biography<\/h2>\n

      Goldwasser is the RSA Professor of Electrical Engineering and Computer Science at MIT and a professor of computer science and applied mathematics at the Weizmann Institute of Science. Goldwasser received a BS (1979) in applied mathematics from CMU and a PhD (1984) in computer science from UC Berkeley. Goldwasser is the 2012 recipient of the ACM Turing Award. <\/p>\n

      <\/p><\/div>\n

      \n\t\t\t\tDoes the Classic Microfinance Model Discourage Entrepreneurship Among the Poor? Experimental Evidence from India \u2013 Erica Field\t\t\t<\/h4>\n
      \n

      \n<\/p>

      Erica Field (opens in new tab)<\/span><\/a>, Duke <\/b>| <\/span>Wednesday, July 10, 2013 | Video (opens in new tab)<\/span><\/a><\/p>\n

      Description<\/h2>\n

      Do the repayment requirements of the classic microfinance contract inhibit investment in high-return but illiquid business opportunities among the poor? Using a field experiment, we compare the classic contract, which requires that repayment begin immediately after loan disbursement, to a contract that includes a two-month grace period. The provision of a grace period increased short-run business investment and long-run profits but also default rates. The results thus indicate that debt contracts that require early repayment discourage illiquid risky investment and thereby limit the potential impact of microfinance on microenterprise growth and household poverty.<\/p>\n

      Biography<\/h2>\n

      Erica M. Field joined the Duke faculty as an associate professor in 2011. She is also a faculty research fellow at the National Bureau of Economic Research. Professor Field received her Ph.D. and M.A. in economics from Princeton University in 2003 and her B.A. in economics and Latin American studies from Vassar College in 1996. Since receiving her doctorate, she has worked at Princeton, Stanford, and most recently Harvard, where she was a professor for six years before coming to Duke. <\/p>\n

      <\/p><\/div>\n

      \n\t\t\t\tMachine Learning for Complex Social Processes \u2013 Hanna Wallach\t\t\t<\/h4>\n
      \n

      \n<\/p>

      Hanna Wallach<\/b> (opens in new tab)<\/span><\/a>, UMass Amherst <\/b>| Wednesday, July 3, 2013<\/p>\n

      Description<\/h2>\n

      From the activities of the US Patent Office or the National Institutes of Health to communications between scientists or political legislators, complex social processes\u2014groups of people interacting with each other in order to achieve specific and sometimes contradictory goals\u2014underlie almost all human endeavor. In order to draw thorough, data-driven conclusions about complex social processes, researchers and decision-makers need new quantitative tools for exploring, explaining, and making predictions using massive collections of interaction data. In this talk, I will discuss the development of machine learning methods for modeling interaction data. I will concentrate on exploratory analysis of communication networks \u2014 specifically, discovery and visualization of topic-specific subnetworks in email data sets. I will present a new Bayesian latent variable model of network structure and content and explain how this model can be used to analyze intra-governmental email networks.<\/p>\n

      Biography<\/h2>\n

      In fall 2010, Hanna Wallach started as an assistant professor in the Department of Computer Science (opens in new tab)<\/span><\/a> at the University of Massachusetts Amherst. She is one of five core faculty members involved in UMass\u2019s new Computational Social Science Initiative (opens in new tab)<\/span><\/a>. Prior to this, Hanna was a senior postdoctoral research associate, also at UMass, where she developed statistical machine learning techniques for analyzing complex data regarding communication and collaboration within scientific and technological innovation communities. Hanna\u2019s Ph.D. work, undertaken at the University of Cambridge (opens in new tab)<\/span><\/a>, introduced new methods for statistically modeling text using structured topic models\u2014models that automatically infer semantic information from unstructured text and information about document structure, ranging from sentence structure to inter-document relationships. Hanna holds an M.Sc. from the University of Edinburgh (opens in new tab)<\/span><\/a>, where she specialized in neural computing and learning from data, and was awarded the University of Edinburgh\u2019s 2001\/2002 prize for Best M.Sc. Student in Cognitive Science. Hanna received her B.A. from the University of Cambridge Computer Laboratory (opens in new tab)<\/span><\/a> in 2001. Her undergraduate project, \u201cVisual Representation of Computer-Aided Design Constraints,\u201d won the award for the best computer science student in the 2001 U.K. Science Engineering and Technology Awards (opens in new tab)<\/span><\/a>. In addition to her many papers on statistical machine learning techniques for analyzing structured and unstructured data, Hanna\u2019s tutorial on conditional random fields is extremely widely cited and used in machine learning courses around the world. Her recent work (with Ryan Prescott Adams and Zoubin Ghahramani) on infinite belief networks won the best paper award at AISTATS 2010. As well as her research, Hanna works to promote and support women\u2019s involvement in computing. In 2006, she co-founded an annual workshop (opens in new tab)<\/span><\/a> for women in machine learning, in order to give female faculty, research scientists, postdoctoral researchers, and graduate students an opportunity to meet, exchange research ideas, and build mentoring and networking relationships. In her not-so-spare time, Hanna is a member of Pioneer Valley Roller Derby (opens in new tab)<\/span><\/a>, where she is better known as Logistic Aggression. <\/p>\n

      <\/p><\/div>\n

      \n\t\t\t\tCrowd Computing \u2013 Rob Miller\t\t\t<\/h4>\n
      \n

      \n<\/p>

      Rob Miller<\/b> (opens in new tab)<\/span><\/a>, MIT <\/b>|  Wednesday, June 19, 2013 | Video (opens in new tab)<\/span><\/a><\/p>\n

      Description<\/h2>\n

      Crowd computing harnesses the power of people out on the web to do tasks that are hard for individual users or computers to do alone. Like cloud computing, crowd computing offers elastic, on-demand human resources that can drive new applications and new ways of thinking about technology. This talk will describe several prototype systems we have built, including:<\/p>\n