{"id":247136,"date":"2016-07-01T15:38:47","date_gmt":"2016-07-01T22:38:47","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=247136"},"modified":"2022-08-31T12:43:43","modified_gmt":"2022-08-31T19:43:43","slug":"microsoft-research-colloquium","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/microsoft-research-colloquium\/","title":{"rendered":"Microsoft Research Colloquium"},"content":{"rendered":"\n\n\n\n\n
Important update: Due to COVID-19, the MSR New England Colloquium Series has been put on hiatus until further notice. We look forward to being able to host externally facing colloquia again soon.

The Microsoft Research Colloquium at Microsoft Research New England focuses on research in the foundational aspects of computer science, mathematics, economics, anthropology, and sociology. With an interdisciplinary flavor, this colloquium series features some of the foremost researchers in their fields talking about their research, breakthroughs, and advances. The agenda typically consists of approximately 50 minutes of prepared presentation and brief Q&A, followed immediately by a brief reception* to meet the speaker and address detailed questions. We welcome members of the local academic community to attend.

Bill Thies, MSR New England | December 04, 2019

The global proliferation of mobile phones has brought with it a rise of inclusive financial services, many of them accessible via mobile phones. This talk will describe two projects, both works in progress, that seek to leverage a combination of mobiles and micropayments to advance development goals in low-income communities. In the Learn2Earn project, we seek to leverage mobile payments to bolster the effectiveness of public awareness campaigns, while in Project Karya, we seek to build a mobile platform for dignified crowdsourced work. Both systems are being piloted in various settings in rural India, and have ripe opportunities for collaboration as they transition to scaled deployments.

Bill Thies has been a researcher at Microsoft Research since 2008, and at Microsoft Research New England since 2017. His research focuses on building computer systems that positively impact lower-income communities, primarily in developing countries. Previously, Bill worked on programming languages and compilers for multicore architectures and microfluidic chips. He earned his B.S., M.Eng., and Ph.D. degrees from the Massachusetts Institute of Technology.

Devon Powers, Temple University | November 20, 2019

We're all familiar with trending: the word we use to describe "relevant right now" and a prominent feature of our social media platforms. Yet before there was trending, there were trends: broad dynamics that redirect culture, traceable shifts in dress, language, beliefs, and ways of doing and being. For businesses, the ability to identify, understand, and anticipate trends brings enormous benefits, since they can adjust their planning and behavior accordingly. Since trends suggest the direction that culture might go, they have given rise to an industry dedicated to finding them, interpreting them, and turning them into corporate products that help steer the future. This talk will tell the story of that industry, exploring how and why the trends business came to be and examining how trend professionals do their work. Human trend forecasters digest abundant information in search of patterns and use these patterns to make educated guesses about the course of cultural change.
In the talk, I will discuss some of the methods and strategies trend forecasters use, how these strategies came to be understood as "futurism," and why forecasters are adamant that their work will never be automated, even as trend companies experiment with machine learning, artificial intelligence, and data science. The talk draws from my recent book On Trend: The Business of Forecasting the Future (2019).

Devon Powers is Associate Professor of Advertising, Temple University. She is the author of On Trend: The Business of Forecasting the Future (2019) and Writing the Record: The Village Voice and the Birth of Rock Criticism (2013), and co-editor of Blowing Up the Brand: Critical Perspectives on Promotional Culture (2010). Her research explores consumer culture, cultural circulation, and promotion. Her work has appeared in the Journal of Consumer Culture, New Media & Society, Critical Studies in Media and Communication, and Popular Communication, among other venues, and she is the chair (2018-2020) of the Popular Communication division of the International Communication Association (ICA).

Cliff Lampe, Michigan | November 06, 2019 | Video

Restorative justice is the idea that remediating a "crime" should focus on supporting the victim, restoring the community, and returning the perpetrator to good community standing. In the U.S. justice system, restorative approaches have been used in parallel with more common retributive approaches and have had strong outcomes in terms of reduced recidivism. The concept of "crime" in online space is more generally referred to as harassment, bullying, trolling, or a variety of other terms related to adversarial interactions. Content moderation that uses retributive approaches is constrained in its effectiveness, so our project is looking to design content moderation approaches that use restorative justice. The project I will describe is in the early stages of applying restorative approaches to moderating adversarial online interactions. I will address theories of retributive and restorative justice, and show how those have been connected to different moderation mechanisms. I will present research we've done on "retributive harassment" and connect that to an upcoming research agenda related to restorative justice.

Cliff Lampe is a Professor at the University of Michigan School of Information. His work on social media and online communities has been widely cited over the past 15 years. Cliff's general interest is how interaction in social computing platforms leads to positive outcomes, and how to overcome barriers to those positive outcomes. His work touches on social capital development via social media interactions, the benefits of anonymity, civic technology, alt-right organization in online spaces, and similar topics. Dr. Lampe is also the Director of the Citizen Interaction Design Program at the University of Michigan, which works closely with city governments to improve citizenship opportunities through the targeted application of information technology. He is a Distinguished Member of the ACM, and has various service roles in the HCI research community.

Vivienne Sze, MIT | October 30, 2019

Computing near the sensor is preferred over the cloud due to privacy and/or latency concerns for a wide range of applications including robotics/drones, self-driving cars, smart Internet of Things, and portable/wearable electronics. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to the throughput and accuracy requirements of the application. In this talk, we will describe how joint algorithm and hardware design can be used to reduce energy consumption while delivering real-time and robust performance for applications including deep learning, computer vision, autonomous navigation/exploration, and video/image processing. We will show how energy-efficient techniques that exploit correlation and sparsity to reduce compute, data movement, and storage costs can be applied to various tasks including image classification, depth estimation, super-resolution, localization, and mapping.

Vivienne Sze is an Associate Professor at MIT in the Electrical Engineering and Computer Science Department. Her research interests include energy-aware signal processing algorithms, and low-power circuit and system design for portable multimedia applications, including computer vision, deep learning, autonomous navigation, and video processing/coding. Prior to joining MIT, she was a Member of Technical Staff in the R&D Center at TI, where she designed low-power algorithms and architectures for video coding. She also represented TI in the JCT-VC committee of the ITU-T and ISO/IEC standards bodies during the development of High Efficiency Video Coding (HEVC), which received a Primetime Engineering Emmy Award. She is a co-editor of the book "High Efficiency Video Coding (HEVC): Algorithms and Architectures" (Springer, 2014). Prof. Sze received the B.A.Sc. degree from the University of Toronto in 2004, and the S.M. and Ph.D. degrees from MIT in 2006 and 2010, respectively. In 2011, she received the Jin-Au Kong Outstanding Doctoral Thesis Prize in Electrical Engineering at MIT. She is a recipient of the 2018 Facebook Faculty Award, the 2018 & 2017 Qualcomm Faculty Award, the 2018 & 2016 Google Faculty Research Award, the 2016 AFOSR Young Investigator Research Program (YIP) Award, the 2016 3M Non-Tenured Faculty Award, the 2014 DARPA Young Faculty Award, the 2007 DAC/ISSCC Student Design Contest Award, and a co-recipient of the 2018 Symposium on VLSI Circuits Best Student Paper Award, the 2017 CICC Outstanding Invited Paper Award, the 2016 IEEE Micro Top Picks Award, and the 2008 A-SSCC Outstanding Design Award. For more information about research in the Energy-Efficient Multimedia Systems Group at MIT visit: http://www.rle.mit.edu/eems/

Nicki Dell, Cornell Tech | October 02, 2019

Digital technologies, including mobile devices, cloud computing services, and social networks, play a nuanced role in intimate partner violence (IPV) settings, including domestic abuse, stalking, and surveillance of victims by abusive partners.
In this talk, I'll cover our recent and ongoing work on understanding technology's role in IPV, improving the privacy and security of current technologies, and designing new tools and systems that increase security, privacy, and safety for victims.

Nicki Dell is an Assistant Professor at Cornell University based at the Cornell Tech campus in New York City. Her research interests are in human-computer interaction (HCI) and information and communication technologies and development (ICTD), with a focus on designing, building, and evaluating novel computing systems that improve the lives of underserved populations in the US and around the world. At Cornell, Nicki is part of the Center for Health Equity, the Digital Life Initiative, and the Atkinson Center for a Sustainable Future, and she co-leads a research team studying computer security and privacy in the context of intimate partner violence. Her research is funded by the NSF, RWJF, Google, Facebook, and others.

David Autor, MIT | September 04, 2019

David plans on presenting a version of his Ely Lecture, Work of the Past, Work of the Future, which can be found here: https://www.nber.org/papers/w25588

David Autor, one of the leading labor economists in the world and a member of the American Academy of Arts and Sciences, is a professor and associate department head of the Massachusetts Institute of Technology Department of Economics. He is also a faculty research associate of the National Bureau of Economic Research and editor in chief of the Journal of Economic Perspectives. His current fields of specialization include human capital and earnings inequality, labor market impacts of technological change and globalization, disability insurance and labor supply, and temporary help and other intermediated work arrangements. Dr. Autor received a BA in psychology from Tufts University and a PhD in public policy at Harvard University's Kennedy School of Government.

Christian Borgs, MSR NE | Wednesday, August 28, 2019

Graphons were invented to model the limit of large, dense graphs. While this led to interesting applications in combinatorics, most applications require limits of sparse graphs. In this talk, I will review the notion of graph limits for both dense and sparse graphs, and discuss a couple of applications: non-parametric modelling of sparse graphs, and recommendation systems where the matrix of known ratings is so sparse that two typical users have never rated the same item, making standard similarity-based recommendation algorithms challenging. This is joint work with Jennifer Chayes, Henry Cohn, and several others.

Christian Borgs is deputy managing director and co-founder of Microsoft Research New England in Cambridge, Massachusetts. He studied physics at the University of Munich, the University Pierre et Marie Curie in Paris, the Institut des Hautes Etudes in Bures-sur-Yvette, and the Max-Planck-Institute for Physics in Munich. He received his Ph.D. in mathematical physics from the University of Munich, held a postdoctoral fellowship at ETH Zurich, and received his Habilitation in mathematical physics from the Free University in Berlin.
After his Habilitation he became the C4 Chair for Statistical Mechanics at the University of Leipzig, and in 1997 he joined Microsoft Research to co-found the Theory Group. He was a manager of the Theory group until 2008, when he co-founded Microsoft Research New England. Christian Borgs is well known for his work on the mathematical theory of first-order phase transitions and finite-size effects, for which he won the 1993 Karl-Scheel Prize of the German Physical Society. Since joining Microsoft, Christian Borgs has become one of the world leaders in the study of phase transitions in combinatorial optimization, and more generally, the use of methods from statistical physics and probability theory in problems of interest to computer science and technology. He is one of the top researchers in the modeling and analysis of self-organized networks (such as the Internet, the World Wide Web, and social networks), as well as the analysis of processes and algorithms on networks.

Simson L. Garfinkel, US Census Bureau / MIT | Wednesday, August 21, 2019

When differential privacy was created more than a decade ago, the motivating example was statistics published by an official statistics agency. In theory there is no difference between theory and practice, but in practice there is. In attempting to transition differential privacy from theory to practice, and in particular for the 2020 Census of Population and Housing, the U.S. Census Bureau has encountered many challenges unanticipated by differential privacy's creators. Many of these challenges had less to do with the mathematics of differential privacy and more to do with operational requirements that differential privacy's creators had not discussed in their writings. These challenges included obtaining qualified personnel and a suitable computing environment, the difficulty of accounting for all uses of the confidential data, the lack of release mechanisms that align with the needs of data users, the expectation on the part of data users that they will have access to micro-data, the difficulty in setting the value of the privacy-loss parameter, ε (epsilon), the lack of tools and trained individuals to verify the correctness of differential privacy, and push-back from some members of the data user community. Addressing these concerns required developing a novel hierarchical algorithm that makes extensive use of a high-performance commercial optimizer; transitioning the computing environment to the cloud; educating insiders about differential privacy; engaging with academics, data users, and the general public; and redesigning both data flows inside the Census Bureau and some of the final data publications to be in line with the demands of formal privacy.

Simson Garfinkel received undergraduate degrees in Chemistry, Political Science, and the Science, Technology and Society program from the Massachusetts Institute of Technology in 1987; an MS in Journalism from Columbia University in 1988; and a PhD in Computer Science from MIT in 2005. He has over 30 years of research and development experience with over 50 publications in peer-reviewed journals and conferences. His research interests include digital forensics, usable security, and technology transfer.
In 2017 Garfinkel was appointed the Senior Computer Scientist for Confidentiality and Data Access at the US Census Bureau; he was previously a Senior Advisor at the US National Institute of Standards and Technology, and an Associate Professor in the Computer Science Department at the Naval Postgraduate School. He is a fellow of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers, holds a PhD in Computer Science from MIT, and teaches as an adjunct faculty member at George Mason University in Vienna, Virginia. Garfinkel shared the 2017 NIST Information Technology Laboratory Outstanding Standards Document Award for NIST SP 800-188, Trustworthy Email, and the 2011 Department of Defense Value Engineering Achievement Award for his leadership in the Bulk Extractor Program. He has received three Best Paper awards at the DFRWS digital forensics research symposium, as well as multiple national awards for his work in technology journalism. Garfinkel is the author or co-author of fifteen books on computing. His most recent book is The Computer Book, which features 250 chronologically arranged milestones in the history of computing, from the ancient abacus (c. 2500 BCE) to the limits of computation far in the future. He is also known for Database Nation, which explored privacy issues, and Practical UNIX and Internet Security, which sold more than 250,000 copies.

Katrina Ligett, Hebrew University | Wednesday, August 14, 2019

Technology makes it possible to measure and digitize almost every aspect of our existence, introducing serious information risks to both individuals and society. The societal benefits from uses of personal data, meanwhile, are far from fully realized. We will discuss the potential and perils of one particular approach to revising the data ecosystem: the introduction of a new layer, "data co-ops", between individuals and those who use their data. While the idea of co-ops has been around for decades, we argue that perhaps its time has finally come, and invite engagement with the broad array of crucial research questions it raises. Based on joint work with Kobbi Nissim.

Katrina Ligett is an Associate Professor of Computer Science at the Hebrew University of Jerusalem, where her research interests include data privacy, algorithmic fairness, algorithmic economics, and machine learning theory. Her work has been recognized with an NSF CAREER award and a Microsoft Faculty Fellowship. She is currently a Visiting Researcher at MSR-NE.

Naomi Caselli, Boston University | Wednesday, August 07, 2019

The majority of deaf children experience a period of limited exposure to language (spoken or signed) during early childhood, which has cascading effects on many aspects of learning and development. In order to better support deaf children, Dr. Caselli's lab has developed two tools for research and clinical practice. The first is ASL-LEX, a lexical database cataloguing more than 50 pieces of information about 2,500 ASL signs including, for example, how to 'pronounce' the sign or how frequently the sign occurs. The second is the ASL CDI, a sign language vocabulary assessment designed for children from birth to five years old.
Both ASL-LEX and the ASL CDI have research applications in psychology, education, and computer science, as well as clinical applications for teachers and speech pathologists. In this talk, Caselli will describe how she has used these tools to investigate how deaf children naturally build a vocabulary in sign language, and how limited exposure to language during early childhood affects vocabulary acquisition.

Naomi Caselli, PhD, is an Assistant Professor in the Programs in Deaf Studies at Boston University. She is the PI on three NIH- and NSF-funded grants examining the vocabulary of ASL and how language deprivation affects how people learn and process ASL signs. She earned a joint PhD in Psychology and Cognitive Science from Tufts University, as well as an Ed.M. in Deaf Education and an M.A. in Psychology from Boston University. She is hearing, and a native speaker of both ASL and English.

Moira Weigel, Harvard | Wednesday, July 31, 2019

In this talk, I take up the concept of screen or Internet addiction and place it in a global context, by comparing official and popular discourses about pathological Internet use in the United States and the People's Republic of China. I show that in the first case the harms of screen addiction have been characterized primarily as dehumanizing, while in the second, they have more often been construed as damaging to suzhi (素质), or "quality." The contrast does not only demonstrate that this ostensibly biomedical phenomenon is, in fact, (also) socially and culturally constructed. It highlights the role that paradigms for understanding screen addiction play in constructing the ideal users of digital networks. Taking this comparative analysis as a point of departure, I then turn to theoretical and methodological questions. How can we account for the kinds of variability that such distinct constructions of users introduce into the global system alternately described as "platform capitalism," "surveillance capitalism," or "data colonialism"? I propose that feminist and postcolonial traditions offer crucial conceptual tools for analyzing how successful platforms sustain socially reproductive data relations, and for understanding how the global history of digital capital is always locally instantiated.

I am a Junior Fellow at the Harvard Society of Fellows. I recently received my PhD in Comparative Literature and Film and Media from Yale University. Before Yale, I earned a BA (summa cum laude) from Harvard University, and an M.Phil. from the University of Cambridge, where I was the Harvard Scholar in residence at Emmanuel College. My dissertation explores the prehistory of posthumanism from the perspective of cinema and media studies. My goal is to show how, before explicit discourses concerning "The End of Man" emerged after World War II, the cinema served as an alternative public sphere, where new relations between "society" and "nature" were modeled and negotiated. My first book, "Labor of Love: The Invention of Dating", was published by Farrar, Straus and Giroux in 2016. In a series of interlinking essays, LOL investigates the shape-shifting institution of dating, which, I contend, names the logic of courtship under consumer-driven capitalism.

Indrani Medhi Thies, MSR India | Wednesday, July 24, 2019

About 800 million people in the world are completely non-literate and many are able to read only with great difficulty and effort. Even though mobile phone penetration is growing very fast, people with low levels of literacy have been found to avoid complex functions, and primarily use mobile phones for voice communication only. "Text-Free UIs" are design principles and recommendations for computer-human interfaces that would allow a first-time, non-literate person, on first contact with a PC or a mobile phone, to immediately realize useful interaction with minimal or no external assistance. We followed an ethnographic design and iterative prototyping process involving hundreds of hours spent in the field among low-income, low-literate communities across rural and urban India, the Philippines, and South Africa.

Indrani Medhi Thies is a Researcher in the Technology for Emerging Markets group at Microsoft Research in Bangalore, India. She currently sits with MSR New England. Her research interests are in the area of User Interfaces, User Experience Design, and ICTs for Global Development. Over the years, Indrani's primary work has been in user interfaces for low-literate and novice technology users. Her distinctions include an MIT TR35 award and an ACM SIGCHI Social Impact Award. Indrani has a PhD from the Industrial Design Centre, IIT Bombay, India.

Geoffrey Baym, Temple | Wednesday, July 17, 2019

Years before Twitter, Fox News, or reality TV, Donald Trump became a public figure through his presence in tabloid media. Much of that focused on sex and spectacle, but early tabloid coverage of Trump was also surprisingly political, with speculation about a possible presidential campaign beginning as early as 1987. Although that coverage has been largely overlooked, this study reveals that tabloid media played a central role in building the foundations of Trump's political identity. It tracks the early articulation of the Trump character and its simultaneous politicization within a media space outside the ostensibly legitimate arena of institutional public-affairs journalism. In so doing, it reveals the deeper contours of an imagined political world in which a Trump presidency could be conceivable in the first instance: a political imaginary adjacent to the deep assumptions of liberal democracy, and therefore long invisible to most serious observers of presidential politics.

Geoffrey Baym is Professor of Media Studies at the Klein College of Media and Communication, Temple University. The author of From Cronkite to Colbert: The Evolution of Broadcast News, he has published numerous articles and chapters exploring the hybridization of news, public affairs media, and political discourse.

Jonathan Gruber, MIT | Wednesday, July 10, 2019

This new book, by Jonathan Gruber and Simon Johnson, argues that a return to large public investments in research and development, implemented in a place-based fashion, can lead to faster and more equitable growth in the US.

Dr. Jonathan Gruber is the Ford Professor of Economics at the Massachusetts Institute of Technology, where he has taught since 1992.
He is also the Director of the Health Care Program at the National Bureau of Economic Research, and President of the American Society of Health Economists. He is a member of the Institute of Medicine, the American Academy of Arts and Sciences, the National Academy of Social Insurance, and the Econometric Society. He has published more than 160 research articles, has edited six research volumes, and is the author of Public Finance and Public Policy, a leading undergraduate text, and Health Care Reform, a graphic novel. In 2006 he received the American Society of Health Economists Inaugural Medal for the best health economist in the nation aged 40 and under. During the 1997-1998 academic year, Dr. Gruber was on leave as Deputy Assistant Secretary for Economic Policy at the Treasury Department. From 2003-2006 he was a key architect of Massachusetts' ambitious health reform effort, and in 2006 became an inaugural member of the Health Connector Board, the main implementing body for that effort. During 2009-2010 he served as a technical consultant to the Obama Administration and worked with both the Administration and Congress to help craft the Patient Protection and Affordable Care Act. In 2011 he was named "One of the Top 25 Most Innovative and Practical Thinkers of Our Time" by Slate Magazine. In both 2006 and 2012 he was rated as one of the top 100 most powerful people in health care in the United States by Modern Healthcare Magazine. Dr. Gruber is the Chair of the Industry Advisory Board for Flare Capital Partners and is on the board of the Health Care Cost Institute.

David Karger, MIT | Wednesday, June 26, 2019 | Video

My group develops systems to help people manage information and share it with others. We study both text (online discussion tools) and structured data (information visualization and management applications). Our guiding principle is that humans are powerful and creative information managers, and that the key challenge is to build systems that can accurately store and present the sophisticated thinking that people apply to their information. In this talk I'll take a rapid tour through several of our online discussion projects, emphasizing this common solution principle. I'll discuss NB, an online-education tool for discussing course content in the margins; Murmur, a system that modernizes the mailing list to address its drawbacks while preserving its great utility; Squadbox, a tool that scaffolds workflows to help protect targets of online harassment; and Wikum, a system that bridges between discussion forums and wikis by helping forum participants work together to build a summary of a long discussion's main points and conclusions.

David R. Karger is a Professor of Electrical Engineering and Computer Science at MIT's Computer Science and Artificial Intelligence Laboratory. David earned his Ph.D. at Stanford University in 1994 and has since contributed to many areas of computer science, publishing in algorithms, machine learning, information retrieval, personal information management, networking, peer-to-peer systems, databases, coding theory, and human-computer interaction. A general interest has been to make it easier for people to create, find, organize, manipulate, and share information. He formed and leads the Haystack group to investigate these issues.

Adam Frost, Harvard | Wednesday, May 08, 2019 | Video

Contrary to popular belief, the modern Chinese economy did not spring into being in 1978 with Deng Xiaoping's call for "Reform and Opening Up." Rather, it was a product of an incremental, bottom-up transformation, decades in the making. As my research shows, throughout China's socialist era, citizens at all levels of society, from farmers who illegally traded ration coupons to state officials who colluded with underground factories to manufacture goods, actively subverted state control to profit from inefficiencies in planning and, more generally, to make things work. In the absence of "good institutions," they formed illicit networks that subsumed the ordinary functions of markets (e.g., coordination, information aggregation, risk sharing, lending) and created productive assemblages of capital, labor, and knowledge. Drawing upon an array of unconventional sources that have never before been examined by scholars, I will argue that capitalism and entrepreneurship not only supported the functioning of China's socialist economy, but fundamentally reshaped it and created the conditions for subsequent economic growth.

Adam Frost is a Ph.D. student at Harvard University who specializes in the economic history of modern China. His research broadly explores the history of informal economies, from unlicensed taxi drivers in 1920s Shanghai to underground entrepreneurs in socialist China to beggars in contemporary Xi'an. In addition to conducting traditional archival research, Adam draws heavily upon ethnography, oral history, and contraband documents. Most recently he completed a documentary on the everyday lives of beggars in Northwest China entitled The End of Bitterness.

Cynthia Rudin, Duke | Wednesday, December 12, 2018

With widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed models for medical imaging, and poor bail and parole decisions in criminal justice. Explanations for black box models are not reliable, and can be misleading. If we use interpretable models, they come with their own explanations, which are faithful to what the model actually computes. I will present work on (i) optimal decision lists, (ii) interpretable neural networks for computer vision, and (iii) optimal scoring systems (sparse linear models with integer coefficients). In our applications, we have always been able to achieve interpretable models with the same accuracy as black box models.

Cynthia Rudin is an associate professor of computer science, electrical and computer engineering, and statistics at Duke University, and directs the Prediction Analysis Lab, whose main focus is in interpretable machine learning. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo, and a PhD in applied and computational mathematics from Princeton University. She is the recipient of the 2013 and 2016 INFORMS Innovative Applications in Analytics Awards, an NSF CAREER award, was named as one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015.
Work from her lab has won 10 best paper awards in the last 5 years. She is past chair of the INFORMS Data Mining Section, and is currently chair of the Statistical Learning and Data Science section of the American Statistical Association. She has served on committees for DARPA, the National Institute of Justice, and AAAI. She has served on three committees for the National Academy of Sciences, including the Committee on Applied and Theoretical Statistics, the Committee on Law and Justice, and the Committee on Analytic Research Foundations for the Next-Generation Electric Grid.

Ernest Fraenkel, MIT | Wednesday, October 03, 2018

Artificial intelligence (AI) is widely touted as the solution to almost every problem in society. AI is predicted to transform the workplace, manufacturing, farming, marketing, banking, insurance, transportation, policing, education, and even dating. What are the prospects for applying AI to healthcare? What problems are ripe for data-driven approaches? Which solutions are within reach if we plan properly, and which remain in the distant future? I will provide somewhat opinionated answers to these questions and look forward to a healthy discussion.

Ernest Fraenkel is a Professor of Biological Engineering at the Massachusetts Institute of Technology. His laboratory seeks to understand diseases from the perspective of systems biology. They develop computational and experimental approaches for finding new therapeutic strategies by analyzing molecular networks, clinical data, and behavioral data. He received his PhD in Biology from MIT after graduating summa cum laude from Harvard College with an AB in Chemistry and Physics.

Alex Chouldechova, Carnegie Mellon University | Wednesday, August 29, 2018

Risk assessment tools are widely used around the country to inform decision making within the criminal justice system. Recently, considerable attention has been paid to whether such tools may suffer from predictive racial bias, and whether their use may result in racially disparate impact. Evaluating a tool for predictive bias typically entails a comparison of different predictive accuracy metrics across racial groups. Problematically, such evaluations are conducted with respect to target variables that may represent biased measurements of an unobserved outcome of more central interest. For instance, while it would be desirable to predict whether an individual will commit a future crime (reoffend), we only observe proxy outcomes such as rearrest and reconviction. My talk will focus on how this issue of "target variable bias" affects evaluations of a tool's predictive bias. I will also discuss various reasons why risk assessment tools may result in racially disparate impact.

Alexandra Chouldechova is an Assistant Professor of Statistics and Public Policy at Carnegie Mellon University's Heinz College. Her research over the past few years has centered on fairness in predictive modeling, particularly in the context of criminal justice and public services applications. Some of her recent work has delved into tradeoffs between different notions of predictive bias, disparate impact of data-driven decision-making, and inferential challenges stemming from common forms of "data bias". A statistician by training, Alex received her Ph.D. in Statistics from Stanford University in 2014.
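
The "target variable bias" issue in the abstract above lends itself to a small worked example. The Python sketch below is purely illustrative (invented data and rates, not material from the talk): it evaluates a hypothetical risk tool's false positive rate per group twice, once against the observed proxy (rearrest) and once against the unobserved outcome of interest (reoffense). When the chance that reoffending actually results in rearrest differs across groups, the two evaluations disagree.

```python
# Illustrative sketch of "target variable bias" (all data invented).
# A tool's false positive rate (FPR) is computed against the observed
# proxy (rearrest) and against the unobserved outcome (reoffense).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)            # two demographic groups
risk = rng.uniform(0, 1, n)              # risk score, same distribution per group
predicted = risk > 0.5                   # the tool's binary decision

reoffend = rng.uniform(0, 1, n) < risk   # outcome of interest (unobserved in practice)
# Proxy: rearrest requires reoffending AND being caught; assume the probability
# of being caught differs by group -- the source of the bias in this example.
p_caught = np.where(group == 0, 0.5, 0.8)
rearrest = reoffend & (rng.uniform(0, 1, n) < p_caught)

def fpr(pred, outcome, mask):
    """P(pred = 1 | outcome = 0) within the masked subpopulation."""
    negatives = mask & ~outcome
    return (pred & negatives).sum() / negatives.sum()

for g in (0, 1):
    m = group == g
    print(f"group {g}:  FPR vs proxy = {fpr(predicted, rearrest, m):.3f}"
          f"   FPR vs true outcome = {fpr(predicted, reoffend, m):.3f}")
```

Against the true outcome both groups show the same FPR (about 0.25 in this setup), but against the rearrest proxy the less-policed group appears to have a substantially higher FPR, so a fairness audit run on the proxy reaches a different conclusion than one run on the outcome of interest.
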
Yael Kalai, MSR-NE | Wednesday, August 22, 2018

Efficient verification of computation, also known as delegation of computation, is one of the most fundamental notions in computer science, and in particular it lies at the heart of the P vs. NP question. In this talk I will give a brief overview of the evolution of proofs in computer science, and show how this evolution is instrumental to solving the problem of delegating computation. I will highlight a curious connection between the problem of delegating computation and the notion of no-signaling strategies from quantum physics.

Yael Kalai was most recently an Assistant Professor of Computer Science at Georgia Tech. Before this, she was a post-doc at the Weizmann Institute in Israel and Microsoft Research in Redmond. She graduated from MIT, working in cryptography under the superb supervision of Shafi Goldwasser.

Matt Jackson, Stanford | Wednesday, August 15, 2018

Technological advances are changing interaction patterns, from world trade to social networks. Two different implications of evolving networks are discussed: one is changing trade patterns and their impact on military alliances and wars, and the other is the formation and evolution of friendships among students, and resulting academic performance.

Matthew O. Jackson is the William D. Eberle Professor of Economics at Stanford University and an external faculty member of the Santa Fe Institute and a senior fellow of CIFAR. He was at Northwestern University and Caltech before joining Stanford, and received his BA from Princeton University and PhD from Stanford. Jackson's research interests include game theory, microeconomic theory, and the study of social and economic networks, on which he has published many articles and the books Social and Economic Networks and The Human Network. He teaches an online course on networks and co-teaches two others on game theory. Jackson is a Member of the National Academy of Sciences, a Fellow of the American Academy of Arts and Sciences, a Fellow of the Econometric Society, a Game Theory Society Fellow, and an Economic Theory Fellow, and his other honors include the von Neumann Award of the Rajk Laszlo College, a Guggenheim Fellowship, the Social Choice and Welfare Prize, and the B.E. Press Arrow Prize.

Cass Sunstein, Harvard | Wednesday, August 08, 2018

Some information is beneficial; it makes people's lives go better. Some information is harmful; it makes people's lives go worse. Some information has no welfare effects at all; people neither gain nor lose from it. Under prevailing executive orders, agencies must investigate the welfare effects of information by reference to cost-benefit analysis. Federal agencies have (1) claimed that quantification of benefits is essentially impossible; (2) engaged in "break-even analysis"; (3) projected various endpoints, such as health benefits or purely economic savings; and (4) relied on private willingness-to-pay for the relevant information. All of these approaches run into serious objections. With respect to (4), people may lack the information that would permit them to say how much they would pay for (more) information; they may not know the welfare effects of information; and their tastes and values may shift over time, in part as a result of information.
These points suggest the need to take the willingness-to-pay criterion with many grains of salt, and to learn more about the actual effects of information, and of the behavioral changes produced by information, on people's experienced well-being.

Cass R. Sunstein is currently the Robert Walmsley University Professor at Harvard. From 2009 to 2012, he was Administrator of the White House Office of Information and Regulatory Affairs. He is the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School. Mr. Sunstein has testified before congressional committees on many subjects, and he has been involved in constitution-making and law reform activities in a number of nations. Mr. Sunstein is author of many articles and books, including Republic.com (2001), Risk and Reason (2002), Why Societies Need Dissent (2003), The Second Bill of Rights (2004), Laws of Fear: Beyond the Precautionary Principle (2005), Worst-Case Scenarios (2007), Nudge: Improving Decisions about Health, Wealth, and Happiness (with Richard H. Thaler, 2008), Simpler: The Future of Government (2013), and most recently Why Nudge? (2014) and Conspiracy Theories and Other Dangerous Ideas (2014). He is now working on group decision making and various projects on the idea of liberty.

Vijay Vazirani, University of California, Irvine | Wednesday, August 01, 2018 | Video

Please note: This seminar is of a more technical nature than our typical colloquium talks.

Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of Random NC matching algorithms. Within this question, the case of planar graphs has remained an enigma: On the one hand, counting the number of perfect matchings is far harder than finding one (the former is #P-complete and the latter is in P), and on the other, for planar graphs, counting has long been known to be in NC whereas finding one has resisted a solution! The case of bipartite planar graphs was solved by Miller and Naor in 1989 via a flow-based algorithm. In 2000, Mahajan and Varadarajan gave an elegant way of using the counting of matchings to find one, hence giving a different NC algorithm. However, non-bipartite planar graphs still didn't yield: the stumbling block being odd tight cuts. Interestingly enough, these are also a key to the solution: a balanced tight odd cut leads to a straightforward divide-and-conquer NC algorithm. However, a number of ideas are needed to find such a cut in NC; the central one being an NC algorithm for finding a face of the perfect matching polytope at which Ω(n) new conditions, involving constraints of the polytope, are simultaneously satisfied. Paper available at: https://arxiv.org/pdf/1709.07822.pdf. Joint work with Nima Anari.

Vijay Vazirani received his BS at MIT and his Ph.D. from the University of California, Berkeley. He is currently Distinguished Professor at the University of California, Irvine. He has made seminal contributions to the theory of algorithms, in particular to the classical maximum matching problem, approximation algorithms, and complexity theory. Over the last decade and a half, he has contributed widely to an algorithmic study of economics and game theory.
Vazirani is the author of a definitive book on Approximation Algorithms, published in 2001, and translated into Japanese, Polish, French, and Chinese. He was McKay Fellow at U.C. Berkeley in Spring 2002, and Distinguished SISL Visitor at Caltech during 2011-12. He is a Guggenheim Fellow and an ACM Fellow.

Henry Cohn, MSR-NE | Wednesday, July 25, 2018

Many topics in science and engineering involve a delicate interplay between order and disorder. For example, this occurs in the study of interacting particle systems, as well as related problems such as designing error-correcting codes for noisy communication channels. Some solutions of these optimization problems exhibit beautiful long-range order while others are amorphous. Finding a clear basis for this dichotomy is a fundamental mathematical problem, sometimes called the crystallization problem. It's natural to assume that any occurrence of dramatic structure must happen for a good reason, but is that really true? I wish I knew. In this talk (intended to be accessible to an interdisciplinary audience) we'll take a look at some test cases.

Henry Cohn's mathematical interests include symmetry and exceptional structures; more generally, he enjoys any area in which concrete problems are connected in surprising ways with abstract mathematics. He came to MSR as a postdoc in 2000 and joined the theory group long-term in 2001. In 2007 he became head of the cryptography group, and in 2008 he moved to Cambridge with Jennifer Chayes and Christian Borgs to help set up Microsoft Research New England. He stays up late at night worrying about why the 16th dimension isn't like the 8th or 24th.

Avi Goldfarb, University of Toronto | Wednesday, July 18, 2018

Recent excitement in artificial intelligence has been driven by advances in machine learning. In this sense, AI is a prediction technology. It uses data you have to fill in information you don't have. These advances can be seen as a drop in the cost of prediction. This framing generates powerful, but easy-to-understand implications. As the cost of something falls, we will do more of it. Cheap prediction means more prediction. Also, as the cost of something falls, it affects the value of other things. As machine prediction gets cheap, human prediction becomes less valuable while data and human judgment become more valuable. Business models that are constrained by uncertainty can be transformed, and organizations with an abundance of data and a good sense of judgment have an advantage. Based on the book Prediction Machines by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.

Avi Goldfarb is the Rotman Professor of Artificial Intelligence and Healthcare at the Rotman School of Management, University of Toronto, and coauthor of the Globe & Mail bestselling book Prediction Machines: The Simple Economics of Artificial Intelligence. Avi is also Senior Editor at Marketing Science, Chief Data Scientist at the Creative Destruction Lab, and Research Associate at the National Bureau of Economic Research, where he helps run the initiatives around digitization and artificial intelligence. Avi's research focuses on the opportunities and challenges of the digital economy. He has published over 60 academic articles in a variety of outlets in marketing, statistics, law, computing, management, and economics.
This work has been discussed in White House reports, Congressional testimony, European Commission documents, The Economist, Globe and Mail, National Public Radio, The Atlantic, New York Times, Financial Times, Wall Street Journal, and elsewhere. He holds a Ph.D. in economics from Northwestern University.

Greg Lewis, MSR-NE | Wednesday, July 11, 2018 | Video

We introduce a model of search by imperfectly informed consumers with unit demand. Consumers learn spatially: sampling the payoff to one product causes them to update their beliefs about the payoffs of all products that are nearby in some attribute space. Search is costly, and so consumers face a trade-off between "exploring" far-apart regions of the attribute space and "exploiting" the areas they already know they like. We present evidence of spatial learning in data on online camera purchases, as consumers who sample unexpectedly low quality products tend to subsequently sample products that are far away in attribute space. We develop a flexible parametric specification of the model where consumer utility is sampled as a Gaussian process and use it to estimate demand in the camera data using Markov Chain Monte Carlo (MCMC) methods. We conclude with a counterfactual experiment in which we manipulate the initial product shown to a consumer, finding that a bad initial experience can lead to early termination of search. Product search rankings can therefore substantially affect consumer search paths and purchase decisions.

Greg Lewis is an economist whose main research interests lie in industrial organization, market design, and applied econometrics. He received his bachelor's degree in economics and statistics from the University of the Witwatersrand in South Africa, and his MA and PhD both from the University of Michigan. He then served on the economics faculty at Harvard, as assistant and then associate professor. Recently, his time has been spent analyzing strategic learning by firms in the British electricity market, suggesting randomized mechanisms for price discrimination in online display advertising, developing econometric models of auction markets, and evaluating the design of procurement auctions.

Nancy Baym, MSR-NE | Wednesday, June 27, 2018

The architectures and norms of new media push people toward sharing everyday intimacies they might historically have kept to close friends and family. As more people are pushed toward gig work, the original gig workers, musicians, provide an exemplary lens for exploring the implications of this widespread blurring of interpersonal communication into everyday practices of professional viability. This talk, based on the new book Playing to the Crowd: Musicians, Audiences, and the Intimate Work of Connection, draws on nearly a decade of work to show how the pressure to be "authentic" in communicating with audiences, combined with the designs and materialities of new communication technologies, raises dialectic tensions that musicians, and many others, must manage as social media platforms become integral to professional life.

Nancy Baym is a Principal Researcher at Microsoft Research in Cambridge, Massachusetts. After earning her Ph.D.
in 1994 in the Department of Speech Communication at the University of Illinois, she was on the faculty of Communication departments for 18 years before joining Microsoft in 2012. With Steve Jones (and others), she was a founder of the Association of Internet Researchers and served as its second President. She is the author of Personal Connections in the Digital Age (Polity Press), now in its second edition, Tune In, Log On: Soaps, Fandom and Online Community (Sage Press), and co-editor of Internet Inquiry: Conversations About Method (Sage Press) with Annette Markham. She serves on the editorial boards of New Media & Society, the Journal of Communication, the Journal of Computer Mediated Communication, and numerous other journals. Her book Playing to the Crowd: Musicians, Audiences, and the Intimate Work of Connection will be published in July by NYU Press. More information, most of her articles, and some of her talks are available at nancybaym.com.

Tarleton Gillespie, MSR-NE | Wednesday, June 13, 2018

This talk will give an overview of my new book, and highlight the public debate about content moderation and its implications for those studying or building information systems that host user content. Most social media users want their chosen platforms free from harassment and porn. But they also want to see the content they choose to see. This means platforms face an irreconcilable contradiction: while platforms promise an open space for participation and community, every one of them imposes rules of some kind. In the early days of social media, content moderation was hidden away, even disavowed. But the illusion of the open platform has, in recent years, begun to crumble. Today, content moderation has never been more important, or more controversial. In this book, I discuss how social media platforms police what we post online, and the societal impact of these decisions. Content moderation still receives too little public scrutiny. How and why platforms moderate can shape societal norms and alter the contours of public discourse, cultural production, and the fabric of society, and the very fact of moderation should change how we understand what platforms are.

Tarleton Gillespie is a principal researcher at Microsoft Research, an affiliated associate professor in Cornell's Department of Communication and Department of Information Science, co-founder of the blog Culture Digitally, author of Wired Shut: Copyright and the Shape of Digital Culture (MIT, 2007), co-editor of Media Technologies: Essays on Communication, Materiality, and Society (MIT, 2014), and the author of the forthcoming Custodians of the Internet (Yale, 2018).

Josh Tenenbaum, MIT | Wednesday, June 6, 2018

Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.

Joshua Tenenbaum is Professor of Cognitive Science and Computation at MIT. He and his colleagues in the Computational Cognitive Science group want to understand that most elusive aspect of human intelligence: our ability to learn so much about the world, so rapidly and flexibly. While their core interests are in human learning and reasoning, they also work actively in machine learning and artificial intelligence. These two programs are inseparable: bringing machine-learning algorithms closer to the capacities of human learning should lead to more powerful AI systems as well as more powerful theoretical paradigms for understanding human cognition. Their current research explores the computational basis of many aspects of human cognition: learning concepts, judging similarity, inferring causal connections, forming perceptual representations, learning word meanings and syntactic principles in natural language, noticing coincidences and predicting the future, inferring the mental states of other people, and constructing intuitive theories of core domains, such as intuitive physics, psychology, biology, or social structure. He is known for contributions to mathematical psychology and Bayesian cognitive science. Tenenbaum previously taught at Stanford University, where he was the Wasow Visiting Fellow from October 2010 to January 2011.

Nathan Ensmenger, Indiana | Wednesday, May 2, 2018

Drawing on the literature on environmental history, this paper surveys the multiple ways in which humans, environment, and computing technology have been in interaction over the past several centuries. From Charles Babbage's Difference Engine (a product of an increasingly global British maritime empire) to Herman Hollerith's tabulating machine (designed to solve the problem of "seeing like a state" in the newly trans-continental American Republic) to the emergence of the ecological sciences and the modern petrochemical industry, information technologies have always been closely associated with the human desire to understand and manipulate their physical environment. More recently, humankind has started to realize the environmental impacts of information technology, including not only the toxic byproducts associated with their production, but also the polluting effects of the massive amounts of energy and water required by data centers at Google and Facebook (whose physicality is conveniently and deliberately camouflaged behind the disembodied, ethereal "cloud").
Joshua Tenenbaum is Professor of Cognitive Science and Computation at MIT. He and his colleagues in the Computational Cognitive Science group want to understand that most elusive aspect of human intelligence: our ability to learn so much about the world, so rapidly and flexibly. While their core interests are in human learning and reasoning, they also work actively in machine learning and artificial intelligence. These two programs are inseparable: bringing machine-learning algorithms closer to the capacities of human learning should lead to more powerful AI systems as well as more powerful theoretical paradigms for understanding human cognition. Their current research explores the computational basis of many aspects of human cognition: learning concepts, judging similarity, inferring causal connections, forming perceptual representations, learning word meanings and syntactic principles in natural language, noticing coincidences and predicting the future, inferring the mental states of other people, and constructing intuitive theories of core domains, such as intuitive physics, psychology, biology, or social structure. He is known for contributions to mathematical psychology and Bayesian cognitive science. Tenenbaum previously taught at Stanford University, where he was the Wasow Visiting Fellow from October 2010 to January 2011.

Nathan Ensmenger, Indiana | Wednesday, May 2, 2018

Drawing on the literature on environmental history, this paper surveys the multiple ways in which humans, environment, and computing technology have interacted over the past several centuries. From Charles Babbage’s Difference Engine (a product of an increasingly global British maritime empire) to Herman Hollerith’s tabulating machine (designed to solve the problem of “seeing like a state” in the newly trans-continental American Republic) to the emergence of the ecological sciences and the modern petrochemical industry, information technologies have always been closely associated with the human desire to understand and manipulate the physical environment. More recently, humankind has started to realize the environmental impacts of information technology, including not only the toxic byproducts associated with its production, but also the polluting effects of the massive amounts of energy and water required by data centers at Google and Facebook (whose physicality is conveniently and deliberately camouflaged behind the disembodied, ethereal “cloud”). More specifically, this paper will explore the global life-cycle of a digital commodity (in this case, a unit of the virtual currency Bitcoin) from lithium mines in post-colonial South America to the factory city-compounds of southern China to a “server farm” in the Pacific Northwest to the “computer graveyards” outside of Agbogbloshie, Ghana. The goal is to ground the history of information technology in the material world by focusing on the relationship between “computing power” and more traditional processes of resource extraction, exchange, management, and consumption.

Nathan Ensmenger is an Associate Professor in the School of Informatics, Computing and Engineering at Indiana University, where he also serves as the Chair of the Informatics department. He specializes in the social and labor history of computing, gender and computing, and the relationship between computing and the environment.

Jack Copeland, University of Canterbury | Wednesday, April 11, 2018

At the turn of the millennium Time magazine listed Alan Turing among the twentieth century’s 100 greatest minds, alongside the Wright brothers, Albert Einstein, Crick and Watson, and Alexander Fleming. Turing’s achievements during his short life of 42 years were legion. Best known as the genius who broke some of Germany’s most secret codes during the war of 1939-45, Turing was also the father of the modern computer. Today, all who click or touch to open are familiar with the impact of his ideas. To Turing we owe the concept of storing applications, and the other programs necessary for computers to do our bidding, inside the computer’s memory, ready to be opened when we wish. We take for granted that we use the same slab of hardware to shop, manage our finances, type our memoirs, play our favourite music and videos, and send instant messages across the street or around the world. Like many great ideas this one now seems as obvious as the cart and the arch, but with this single invention, the stored-program universal computer, Turing changed the world. Turing was a theoretician’s theoretician, yet he also had immensely practical interests. In 1945 he designed a large stored-program electronic computer called the Automatic Computing Engine, or ACE. Turing’s sophisticated ACE design achieved commercial success as the English Electric Company’s DEUCE, one of the earliest electronic computers to go on the market. In those days, the first eye-blink of the Information Age, the new machines sold at a rate of no more than a dozen or so a year. But in a handful of decades, Turing’s ideas transported us from an era where ‘computer’ was the term for a human clerk who did the sums in the back office of an insurance company or science lab, into a world where many have never known life without the Internet.

Jack Copeland FRSNZ is Distinguished Professor of Philosophy at the University of Canterbury in New Zealand. He is also Co-Director and Permanent Visiting Fellow of the Turing Centre at the Swiss Federal Institute of Technology in Zurich, Honorary Research Professor of Philosophy at the University of Queensland, Australia, and currently the John Findlay Visiting Professor of Philosophy at Boston University.
In 2016 Jack received the international Covey Award in recognition of “a substantial record of innovative research in the field of computing and philosophy”, and in 2017 his name was added to the IT History Society Honor Roll, which the Society describes as “a listing of a select few that have made an out-of-the-ordinary contribution to the information industry”. He has just been awarded the annual Barwise Prize by the American Philosophical Association for “significant and sustained contributions to areas relevant to philosophy and computing”. A Londoner by birth, Jack gained a D.Phil. in mathematical logic from the University of Oxford. His books include The Essential Turing (Oxford University Press); Colossus: The Secrets of Bletchley Park’s Codebreaking Computers (Oxford University Press); Alan Turing’s Electronic Brain (Oxford University Press); Computability: Turing, Gödel, Church, and Beyond (MIT Press); Logic and Reality (Oxford University Press); and Artificial Intelligence (Blackwell). He has published more than 100 journal articles on the history and philosophy of both computing and mathematical logic. In 2014 Oxford University Press published his highly accessible paperback biography Turing, and in 2017 released his latest book The Turing Guide. Jack has been script advisor, co-writer, and scientific consultant for a number of historical documentaries. One of them, Arte TV’s The Man Who Cracked the Nazi Codes, is based on his biography Turing and won the audience’s Best Documentary prize at the 2015 FIGRA European film festival; another, the BBC’s Code-Breakers: Bletchley Park’s Lost Heroes, won two BAFTAs and was listed as one of the year’s three best historical documentaries at the 2013 Media Impact Awards in New York City. Jack was Visiting Professor of Information Science at Copenhagen University in 2014-15 and Visiting Professor in Computer Science and Philosophy at the Swiss Federal Institute of Technology in 2013-14; and in 2012 he was the Royden B. Davis Visiting Chair of Interdisciplinary Studies in the Department of Psychology at Georgetown University, Washington DC.

Ann Cudd, BU | Wednesday, April 4, 2018

This talk explores the concept of the connected self-owner, which takes account of the metaphysical significance of relations among persons for persons’ capacities to be owners. This concept of the self-owner conflicts with the traditional, libertarian understanding of the self as atomistic or essentially separable from all others. I argue that the atomistic self cannot be a self-owner. A self-owner is a moral person with intentions, desires, and thoughts. But to have intentions, desires, and thoughts, a being has to relate to others through language and norm-guided behavior. Individual beings require the pre-existence of norms and norm-givers to bootstrap their selves, and norms, norm-givers, and norm-takers are necessary to continue to support the self. That means, I argue, that the self who can be an owner is essentially connected. Next, I ask how humans become connected selves and whether that matters morally. I distinguish among those connections that support the development of valuable capacities. One such capacity is the capacity to become an autonomous individual.
I argue that the social connections that allow the development of autonomous individuals have moral value and should be fostered; oppressive social connections, on the other hand, tend to thwart autonomy. This has implications for how we should think about privacy rights and online networks.

Ann E. Cudd is Dean of the College and Graduate School of Arts & Sciences and Professor of Philosophy at Boston University. In her role as dean, she has worked to increase diversity and inclusion at the College, fundraising tirelessly for need-based scholarships and taking concrete steps to increase faculty diversity. She has identified five areas of research and teaching excellence within the College as priorities for investment: the digital revolution in the arts & sciences, neuroscience, climate change and sustainability, the humanities, and the study of inequality. A champion of the arts and sciences, she has been instrumental in launching a dialogue among Boston colleges and universities about the importance of liberal education. Prior to joining BU in 2015, Cudd was Vice Provost and Dean of Undergraduate Studies and University Distinguished Professor of Philosophy at the University of Kansas. Among other roles during her 27 years at KU, she directed the Women, Gender, and Sexuality Studies program and served as associate dean for the humanities in the College of Liberal Arts & Sciences. Her award-winning 2006 book, Analyzing Oppression (Oxford University Press), examines the economic, social, and psychological causes and effects of oppression. She is co-editor or co-author of six other books, including Capitalism, For and Against: A Feminist Debate (with Nancy Holmstrom; Cambridge University Press). Her recent work concerns the moral value of capitalism, conceptions of domestic violence in international law, the injustice of educational inequality, and the self-ownership debate in liberal political philosophy. She is past president and founding member of the Society for Analytical Feminism and vice president and president-elect of the American section of the International Society for the Philosophy of Law and Social Philosophy (AMINTAPHIL). She received her BA in Mathematics and Philosophy from Swarthmore College and an MA in Economics and PhD in Philosophy from the University of Pittsburgh.

Juanjuan Zhang, MIT | Wednesday, December 13, 2017

Demand estimation is important for new-product strategies, but it is challenging in the absence of actual sales data. We develop a cost-effective method to estimate the demand for new products based on incentive-aligned choice experiments. Our premise is that there exists a structural relationship between manifested demand and the probability of consumer choice being realized. We illustrate the mechanism using a theory model, in which consumers learn their product valuation through costly effort and their effort incentive depends on the realization probability. We run a large-scale choice experiment on a mobile game platform, where we randomize the price and realization probability when selling a new product. We find reduced-form support for the theoretical prediction and the decision-effort mechanism. We then estimate a structural model of consumer choice.
The structural estimates allow us to infer actual demand using data from incentive-aligned choice experiments with small to moderate realization probabilities.
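Below is a stylized numerical sketch of that logic (the data-generating process, the logit fit, and all parameter values are assumptions for illustration; the paper’s structural model is richer). Demand is manifested under randomized prices and realization probabilities, and the fitted relationship is extrapolated to a realization probability of one, i.e., actual demand:

```python
# Stylized sketch of prelaunch demand estimation (illustrative assumptions,
# not the paper's structural model): randomize price and the probability
# that a stated choice is realized, fit manifested demand on both, then
# extrapolate to realization probability 1 to recover actual demand.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
price = rng.choice([1.0, 2.0, 3.0], size=n)
realize_p = rng.choice([0.05, 0.2, 0.5], size=n)   # randomized realization prob.

# Hypothetical data-generating process: low realization probability lowers
# the incentive to exert decision effort, shifting manifested demand.
true_utility = 2.5 - 1.2 * price + rng.logistic(size=n)
effort_shift = 0.4 * np.log(realize_p)
buy = (true_utility + effort_shift > 0).astype(int)

X = np.column_stack([price, np.log(realize_p)])
model = LogisticRegression().fit(X, buy)

# Extrapolate to realization probability 1 (log 1 = 0): inferred actual demand.
grid = np.column_stack([np.array([1.0, 2.0, 3.0]), np.zeros(3)])
print(dict(zip([1.0, 2.0, 3.0], model.predict_proba(grid)[:, 1].round(3))))
```

Because the toy model is well specified, the logit recovers the assumed price and realization-probability coefficients, so the extrapolated demand curve matches the “true” demand that would be observed at full realization.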
Juanjuan Zhang is the Epoch Foundation Professor of International Management and Professor of Marketing at the MIT Sloan School of Management. Zhang studies marketing strategies in today’s social context. Her research covers industries such as consumer goods, social media, and healthcare, and functional areas such as product development, pricing, and sales. She has received the Frank Bass Award for the best marketing thesis, is a four-time finalist for the John Little Award for the best marketing paper, and is a two-time finalist for the INFORMS Society for Marketing Science Long Term Impact Award. Zhang currently serves as Department Editor of Management Science, and Associate Editor of the Journal of Marketing Research, Marketing Science, and Quantitative Marketing and Economics. She also serves as a VP of the INFORMS Society for Marketing Science (ISMS). Zhang teaches Marketing Management at MIT Sloan. She is a recipient of the MIT d’Arbeloff Fund for Excellence in Education and MIT Sloan’s highest teaching award, the Jamieson Prize. She holds a B.Econ. from Tsinghua University and a Ph.D. in Business Administration from the University of California, Berkeley.

Ziad Obermeyer, Harvard | Wednesday, December 06, 2017

Low-value health care (care that provides little health benefit relative to its cost) is a central concern for policy makers. Identifying exactly which care is likely to be of low value ex ante, however, has proven challenging. We use a novel machine learning approach to gauge the extent of low-value care. We focus on the decision to perform high-cost tests on emergency department (ED) patients whose symptoms suggest they might be having a heart attack, a condition notoriously difficult to differentiate from more benign causes. We build an algorithm to predict whether a particular patient is in fact having a heart attack using a training sample of randomly selected patients, and apply it to a new population of patients the algorithm has never seen. We find that a large set of patients tested by doctors have extremely low ex ante predicted risk of having a heart attack, and these patients do indeed have a very low rate of positive test results when tested. Our focus on testing decisions on the margin reveals that the rate of over-testing is substantially higher than we would think if we simply measured the overall effectiveness of the test: the marginal test has much lower value than the average test, and our approach can quantify this difference. We also find that many patients who go untested in fact appear high risk to the algorithm. Doctors’ decisions not to test these patients do not appear to reflect private information: we find that these patients develop serious complications (or death) at remarkably high rates in the months after emergency visits. By isolating specific conditions under which patients in emergency departments are quasi-randomly assigned to doctors, we are able to minimize the influence of unobservables. These results suggest that both under-testing and over-testing are prevalent. We conclude with exploratory analysis of the behavioral mechanisms underlying under-testing, by examining those high-risk beneficiaries whom physicians fail to test. These patients often have concurrent health issues with symptoms similar to heart attack that may lead physicians to anchor prematurely on an alternative diagnosis. Applying deep learning to electrocardiographic waveform data from these patients, we can also isolate specific physiological characteristics of the heart attacks that doctors overlook.
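The following schematic, run on synthetic data with made-up features and coefficients (not the authors’ code or data), mirrors the two-sided test in the abstract: train a risk model on tested patients, then inspect test yield among the lowest-risk tested patients (over-testing) and adverse-event rates among high-risk untested patients (under-testing):

```python
# Schematic of the empirical strategy on synthetic data (illustration only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 30_000
X = rng.normal(size=(n, 5))                    # stand-in for ED patient features
risk = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2.5)))
tested = rng.random(n) < 0.3 + 0.4 * risk      # doctors test riskier patients more
positive = rng.random(n) < risk                # true heart-attack status

# Train only on tested patients (the population with observed test results).
clf = GradientBoostingClassifier().fit(X[tested], positive[tested])
p_hat = clf.predict_proba(X)[:, 1]

# (1) Over-testing: yield among the lowest-risk decile of tested patients.
low = tested & (p_hat < np.quantile(p_hat[tested], 0.10))
print("yield in lowest-risk tested decile:", positive[low].mean().round(3))

# (2) Under-testing: untested patients the model flags as high risk.
high_untested = ~tested & (p_hat > np.quantile(p_hat[tested], 0.90))
print("untested but high predicted risk:", high_untested.sum(), "patients |",
      "their event rate:", positive[high_untested].mean().round(3))
```

In the real setting the second step is harder than this sketch suggests, since doctors may act on information the model never sees; the quasi-random assignment of patients to doctors described in the abstract is what lets the authors discount that explanation.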
Ziad Obermeyer is an Assistant Professor at Harvard Medical School and an emergency physician at the Brigham and Women’s Hospital, both in Boston. His lab applies machine learning to solve clinical problems. As patients age and medical technology advances, the complexity of health data strains the capabilities of the human mind. Using a combination of machine learning and traditional methods, his work seeks to find hidden signal in health data, help doctors make better decisions, and drive innovations in clinical research. He is a recipient of multiple research awards from the NIH (including the Office of the Director and the National Institute on Aging) and private foundations, and a faculty affiliate at ideas42, Ariadne Labs, and the Harvard Institute for Quantitative Social Science. He holds an A.B. (magna cum laude) from Harvard and an M.Phil. from Cambridge, and worked as a consultant at McKinsey & Co. in Geneva, New Jersey, and Tokyo before returning to Harvard for his M.D. (magna cum laude).

Kostas Bimpikis, Stanford | Wednesday, November 29, 2017

We explore spatial price discrimination in the context of a ride-sharing platform that serves a network of locations. Riders are heterogeneous in terms of their destination preferences and their willingness to pay for receiving service. Drivers decide whether, when, and where to provide service so as to maximize their expected earnings, given the platform’s prices. Our findings highlight the impact of the demand pattern on the platform’s prices, profits, and the induced consumer surplus. In particular, we establish that profits and consumer surplus are maximized when the demand pattern is “balanced” across the network’s locations. In addition, we show that they both increase monotonically with the balancedness of the demand pattern (as formalized by its structural properties). Furthermore, if the demand pattern is not balanced, the platform can benefit substantially from pricing rides differently depending on the location they originate from. Finally, we consider a number of alternative pricing and compensation schemes that are commonly used in practice and explore their performance for the platform. (Joint work with Ozan Candogan and Daniela Saban.)

Kostas Bimpikis is an Associate Professor of Operations, Information and Technology at Stanford University’s Graduate School of Business. Prior to joining Stanford, he spent a year as a postdoctoral research fellow at the Microsoft Research New England lab. Professor Bimpikis received his PhD in Operations Research from the Massachusetts Institute of Technology in 2010, an MS in Computer Science from the University of California, San Diego, and a BS in Electrical and Computer Engineering from the National Technical University of Athens, Greece.

Pinar Yildirim, UPENN | Wednesday, November 15, 2017

We study the strategic interaction between the media and Senate candidates during elections. While the media is instrumental for candidates to communicate with voters, candidates and media outlets have conflicting preferences over the contents of the reporting. In competitive electoral environments such as most US Senate races, this can lead to a strategic environment resembling a matching pennies game. Based on this observation, we develop a model of bipartisan races where media outlets report about candidates, and candidates make decisions on the type of constituencies to target with their statements along the campaign trail. We develop a methodology to classify news content as suggestive of the target audience of candidate speech, and show how data on media reports and poll results, together with the behavioral implications of the model, can be used to estimate its parameters. We implement this methodology on US Senate races for the period 1980-2012, and find that Democratic candidates have stronger incentives than Republicans to target their messages towards turning out their core supporters. We also find that the cost in swing-voter support from targeting core supporters is larger for Democrats than for Republicans. These effects balance each other, making media outlets willing to cover candidates from both parties at similar rates. (Joint work with Camilo Garcia Jimeno, Department of Economics, University of Pennsylvania and NBER.)
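For intuition, here is the textbook matching-pennies computation the abstract alludes to, with made-up payoffs (the estimated model in the paper is far richer): each side mixes so as to leave the other indifferent, which is what makes outlets willing to cover both parties at similar rates in equilibrium:

```python
# Matching-pennies intuition behind the model (toy payoffs, not estimates):
# the candidate chooses whom to target, the outlet chooses what to cover;
# neither has a pure-strategy best response, so each mixes to keep the
# other indifferent. Solve a 2x2 zero-sum game via indifference conditions.
import numpy as np

# Row player: candidate (row 0 = target core base, row 1 = target swing voters).
# Column player: media outlet (col 0 = cover base appeal, col 1 = cover swing appeal).
# Entries are the candidate's payoffs; the outlet (zero-sum here) prefers mismatches.
A = np.array([[ 1.0, -1.0],
              [-0.5,  0.5]])

denom = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
# Outlet's mix q (prob of col 0) makes the candidate indifferent across rows.
q = (A[1, 1] - A[0, 1]) / denom
# Candidate's mix p (prob of row 0) makes the outlet indifferent across columns.
p = (A[1, 1] - A[1, 0]) / denom

print(f"candidate targets base with prob {p:.2f}; outlet covers base with prob {q:.2f}")
```

With these toy payoffs the outlet covers each type of message half the time, while the candidate targets the base a third of the time; shifting the payoff asymmetries shifts the mixes, which is the comparative static the paper takes to the data.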
Pinar Yildirim is Assistant Professor of Marketing at the Wharton School of the University of Pennsylvania and a Senior Fellow at the Leonard Davis Institute. Pinar’s research areas are media, information economics, and network science. She focuses on applied theory and applied economics problems relevant to online platforms, advertising, networks, media, and political economy. Her research has appeared in top management and marketing journals including Marketing Science, Journal of Marketing Research, Management Science, and Journal of Marketing. Pinar is on the editorial board of Marketing Science and recently received the 2017 MSI Young Scholar award. She holds Ph.D. degrees in Marketing and Business Economics, as well as Industrial Engineering, from the University of Pittsburgh. She joined the Wharton School in 2012 and has been teaching in the Executive, MBA, and undergraduate programs since.

Nick Couldry, LSE | Wednesday, November 1, 2017 | Video

This talk will explore how to use social theory to understand problems of social order and its relationship to an era of Big Data. I take as the starting-point of the talk the neglected late work of theorist Norbert Elias and the concept of figurations, which I draw upon in my recent book (The Mediated Construction of Reality, with Andreas Hepp, Polity 2016), as a way of thinking better about the social world’s real complexity. I will sketch the historical background that shapes what we know about the role of communications in the growth of industrial capitalism in the 19th century to argue that we are in a parallel phase of major transformation today. This raises two problems on which the talk will reflect: first, what are the distinctive features of the social knowledge that is today being generated through big data processes, compared with the 19th century’s rise of statistics as the primary generator of social knowledge; second, what are the implications for the enduring value of freedom of the data collection processes on which Big Data is founded?

Nick Couldry is Full Professor of Media, Communications and Social Theory at the London School of Economics and Political Science, UK. From August 2014 to August 2017, he was chair of LSE’s Department of Media and Communications. He is the author of 8 books, including most recently The Mediated Construction of Reality (with Andreas Hepp, Polity 2016). Since 2015 he has been joint coordinating author of the chapter on media and communications in the International Panel on Social Progress: https://www.ipsp.org/. For more information, please go to http://www.nickcouldry.org/.

Bin Yu, UC Berkeley | Thursday, October 26, 2017 | Video

In this talk, I will discuss the intertwining importance and connections of three principles of data science: predictability, stability, and computability. The three principles will be demonstrated in the context of two neuroscience projects and through analytical connections. In particular, the first project adds stability to predictive models used for reconstruction of movies from fMRI brain signals, to gain interpretability of the predictive models. The second project employs predictive transfer learning and stable (manifold) deep dream images to characterize the difficult V4 neurons in the primate visual cortex. Our results lend support, to a certain extent, to the resemblance between Convolutional Neural Networks (CNNs) and a primate brain.
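As a generic illustration of the stability principle (a minimal sketch under assumed synthetic data, not the talk’s fMRI pipeline), one common recipe is to refit a sparse predictive model across bootstrap perturbations of the data and retain only the features that are selected consistently:

```python
# Minimal stability-under-perturbation sketch (generic illustration only):
# refit a sparse model on bootstrap resamples and keep features that are
# selected in the vast majority of refits.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, d = 200, 50
X = rng.normal(size=(n, d))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)   # two true signals

B = 100
selected = np.zeros(d)
for _ in range(B):
    idx = rng.integers(0, n, size=n)                   # bootstrap resample
    coef = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
    selected += np.abs(coef) > 1e-8

stable = np.where(selected / B > 0.9)[0]               # kept in >90% of refits
print("stably selected features:", stable)             # expect features 0 and 3
```

The point of the exercise is interpretability: features that survive perturbation are far more plausible candidates for scientific claims than features that appear in a single fit.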
Bin Yu is Chancellor’s Professor in the Departments of Statistics and of Electrical Engineering & Computer Sciences at the University of California at Berkeley. Her current research interests focus on statistics and machine learning algorithms and theory for solving high-dimensional data problems. Her lab is engaged in interdisciplinary research with scientists from genomics, neuroscience, precision medicine, and political science. She obtained her B.S. degree in Mathematics from Peking University in 1984, and her M.A. and Ph.D. degrees in Statistics from the University of California at Berkeley in 1987 and 1990, respectively. She held faculty positions at the University of Wisconsin-Madison and Yale University and was a Member of Technical Staff at Bell Labs, Lucent. She was Chair of the Department of Statistics at UC Berkeley from 2009 to 2012, and is a founding co-director of the Microsoft Lab on Statistics and Information Technology at Peking University, China, and Chair of the Scientific Advisory Committee of the Statistical Science Center at Peking University. She is a Member of the U.S. National Academy of Sciences and a Fellow of the American Academy of Arts and Sciences. She was a Guggenheim Fellow in 2006, an Invited Speaker at ICIAM in 2011, and the Tukey Memorial Lecturer of the Bernoulli Society in 2012. She was President of the IMS (Institute of Mathematical Statistics) in 2013-2014 and the Rietz Lecturer of the IMS in 2016. She is a Fellow of the IMS, ASA, AAAS, and IEEE. She served on the Board on Mathematical Sciences and Their Applications (BMSA) of the NAS, as co-chair of the SAMSI advisory committee, on the Board of Trustees at ICERM, and on the Scientific Advisory Board of IPAM. She has served or is serving on many editorial boards, including the Journal of Machine Learning Research (JMLR), the Annual Review of Statistics and Its Application, the Annals of Statistics, and the Journal of the American Statistical Association (JASA).

Talks in this series:

- Mobiles and Micropayments as Tools in Global Development – Bill Thies
- Where Do Trends Come From? – Devon Powers
- Designing Restorative Approaches to Moderating Adversarial Online Interactions – Cliff Lampe
- Efficient Computing for AI and Robotics – Vivienne Sze
- Computer Security and Safety for Victims of Intimate Partner Violence – Nicki Dell
- Work of the Past, Work of the Future – David Autor
- Large Networks, from Mathematics to Machine Learning – Christian Borgs
- Deploying Differential Privacy for the 2020 Census of Population and Housing – Simson L. Garfinkel
- Shifting the Balance Between Individuals & Data Processors: Asserting Cooperative Autonomy Over Personal Data While Generating New Value – Katina Ligett
- Language Deprivation and the American Sign Language Lexicon – Naomi Caselli
- Electric Opium War: Case Studies in Screen Addiction and Accumulation – Moira Weigel
- Designing for Low-Literate Users – Indrani Medhi Thies
- Tabloid Trump and the Political Imaginary – Geoffrey Baym
- Jump Starting America: How Breakthrough Science Can Revive Economic Growth and the American Dream – Jonathan Gruber
- Higher Fidelity Systems for Online Discussion – David Karger
- Capitalism and Entrepreneurship in Socialist China – Adam Frost
- New Algorithms for Interpretable Machine Learning in High Stakes Decisions – Cynthia Rudin
- Will AI Cure Healthcare? – Ernest Fraenkel
- Biased data, biased predictions, and disparate impacts: Evaluating risk assessment instruments in criminal justice – Alex Chouldechova
- Delegating Computation – Yael Kalai
- The Dynamics of Network Formation and Some Social and Economic Consequences – Matt Jackson
- The Welfare Effects of Information – Cass Sunstein
- Planar Graph Perfect Matching is in NC – Vijay Vazirani
- Can Intricate Structure Occur by Accident? – Henry Cohn
- The Simple Economics of Artificial Intelligence – Avi Goldfarb
- You Can Lead a Horse to Water: Spatial Learning and Path Dependence in Consumer Search – Greg Lewis
- New Media, New Work, and the New Call to Intimacy: The Case of Musicians – Nancy Baym
- Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media – Tarleton Gillespie
- Building Machines That Learn and Think Like People – Josh Tenenbaum
- The Cloud is a Factory: A Material History of the Digital Economy – Nathan Ensmenger
- Alan Turing: Pioneer of the Information Age – Jack Copeland
- Connected self-ownership and implications for online networks and privacy rights – Ann Cudd
- Prelaunch Demand Estimation – Juanjuan Zhang
- Are we over-testing? Using machine learning to understand physician decision making – Ziad Obermeyer
- Spatial Pricing in Ride-Sharing Networks – Kostas Bimpikis
- Matching Pennies on the Campaign Trail: An Empirical Study of Senate Elections and Media Coverage – Pinar Yildirim
- Social Order in the Age of Big Data: Exploring the Knowledge Problem and the Freedom Problem – Nick Couldry
- Three principles of data science: predictability, stability, and computability – Bin Yu
- Consumer Reviews and Regulation: Evidence from NYC Restaurants – Chiara Farronato