Faculty Summit 2017
July 17, 2017 - July 18, 2017

Faculty Summit 2017: The Edge of AI

Location: Redmond, WA

Monday, July 17

  • Speaker: Eric Horvitz, Microsoft

    [Full Video]

    Fielding AI solutions in the open world requires systems to grapple with incompleteness and uncertainty. This session addresses several promising areas of research in open-world AI, including enhancing robustness by leveraging algorithmic portfolios, learning from experiences in rich simulation environments, harnessing approaches to transfer learning, and learning and personalization from small training sets. In addition, this session covers mechanisms for engaging people to identify and address uncertainties, failures, and blind spots in AI systems.

  • Speaker: Barbara J. Grosz, Harvard University

    [Full Video]

    For much of its history, AI research has aimed toward building intelligent machines independently of their interactions with people. As the world of computing has evolved, and systems pervade ever more facets of life, the challenges of building computer systems smart enough to work effectively with people, in groups as well as individually, have become increasingly important. Furthermore, recent advances in AI-capable systems raise societal and ethical questions about the effects of such systems on people and societies at large. In this talk, Barbara argues that the ability to work with people is essential for truly intelligent behavior, identifies fundamental scientific questions this teamwork requirement raises, describes research by her group on computational models of collaboration and their use in supporting health-care coordination, and briefly discusses ethical challenges AI-capable systems pose, along with approaches to those challenges.

  • Speakers: Isabelle Augenstein, University College London; Jianfeng Gao, Microsoft; Percy Liang, Stanford University; Rangan Majumder, Microsoft

    [Video Abstract | Full Video]

    Teaching machines to read, process and comprehend natural language documents and images is a coveted goal in modern AI. We see growing interest in machine reading comprehension (MRC) due to potential industrial applications as well as technological advances, especially in deep learning and the availability of various MRC datasets that can benchmark different MRC systems. Despite the progress, many fundamental questions remain unanswered: Is question answering (QA) the proper task to test whether a machine can read? What is the right QA dataset to evaluate the reading capability of a machine? For speech recognition, the Switchboard dataset was a research goal for 20 years, so why is there such a proliferation of datasets for machine reading? How important is model interpretability and how can it be measured? This session brings together experts at the intersection of deep learning and natural language processing to explore these topics.

  • Speakers: Dan Bohus, Microsoft; Louis-Philippe Morency, Carnegie Mellon University; Besmira Nushi, Microsoft

    [Video Abstract | Full Video]

    Over the last decade, algorithmic developments coupled with increased computation and data resources have led to advances in well-defined verticals of AI such as vision, speech recognition, natural language processing, and dialog technologies. However, the science of engineering larger, integrated systems that are efficient, robust, transparent, and maintainable is still very much in its infancy. Efforts to develop end-to-end intelligent systems that encapsulate multiple competencies and act in the open world have brought into focus new research challenges. Making progress towards this goal requires bringing together expertise from AI and systems, and this progress can be sped up with shared best practices, tools and platforms. This session highlights opportunities and challenges for research and development for integrative AI systems. The speakers address various aspects of integrative AI systems, from multimodal learning and troubleshooting to development through shared platforms.

  • Speakers: Jeffrey Bigham, Carnegie Mellon University; Shaun Kane, University of Colorado Boulder; Walter Lasecki, University of Michigan

    [Video Abstract | Full Video]

    Advances in AI technologies have important ramifications for the development of accessible technologies. These technologies can augment the capabilities of people with sensory disabilities, enabling new and empowering experiences. In this session, we present examples of how breakthroughs in AI can support key tasks for diverse user populations. Examples of such applications include image labeling on behalf of people with visual impairments, fast audio captioning for people who are hard-of-hearing, and better word prediction for people who rely on communication augmentation tools to speak.

  • Speakers: Jackie Chi Kit Cheung, McGill University; Michel Galley, Microsoft; Ian Lane, Carnegie Mellon University; Alan Ritter, Ohio State University; Lucy Vanderwende, Microsoft; Jason Williams, Microsoft

    [Video Abstract | Full Video]

    Recent research in recurrent neural models, combined with the availability of massive amounts of dialog data, have together spurred the development of a new generation of conversational systems. Where past approaches focused on task-oriented dialog and relied on a pipeline of modules (e.g., language understanding, state tracking, etc.), new techniques learn end-to-end models trained exclusively on massive text transcripts of conversations. While promising, these new methods raise important questions: how can neural models go beyond chat-style dialog and interface with structured domain knowledge and programmatic APIs? How can these techniques be applied in domains where there is no existing dialog data? What new system behaviors are possible with these techniques and resources? This session brings together experts at the intersection of deep learning and conversational systems to explore these topics through their on-going work and expectations for the future.

  • Speakers: Rama Chellappa, University of Maryland; Katsu Ikeuchi, Microsoft; Song-Chun Zhu, University of California, Los Angeles

    [Video Abstract | Full Video]

    Computer vision is arguably one of the most challenging subfields of AI. To better address its key challenges, the vision research community long ago branched off from the general AI community to focus on its core problems. In recent years, we have witnessed tremendous progress in visual sensing due to big data and more powerful learning machines. However, we still lack a holistic view of how visual sensing relates to more general intelligence. This session brings researchers together to discuss research trends in computer vision, the role of visual sensing in more integrated general intelligence systems, and how visual sensing systems will interact with other sensing modalities from a computational perspective.

  • Speakers: Olaf Blanke, Ecole Polytechnique Fédérale de Lausanne; Mel Slater, University of Barcelona; Ana Tajadura-Jiménez, Universidad Loyola Andalucía & University College London

    [Video Abstract | Full Video]

    Scientists have long explored the different sensory inputs to better understand how humans perceive the world and control their bodies. Many of the great discoveries about the human perceptual system were first found through laboratory experiments that stimulated inbound sensory inputs as well as outbound sensory predictions. These aspects of cognitive neuroscience have important implications when building technologies, as we learn to transfer abilities that are natural to humans to leverage the strengths of machines. Machines can also be used to learn further about human perception, because technology allows scientists to reproduce impossible events and observe how humans would respond and adapt to those events. This loop from human to machine and back again can help transfer what we learn from our evolutionary intelligence to future machines and AI. This session addresses progress and challenges in applying human perception to machines, and vice versa.

Tuesday, July 18

  • Speaker: Amy Greenwald, Brown University

    [Full Video]

    Humans make hundreds of routine decisions daily. More often than not, the impact of our decisions depends on the decisions of others. As AI progresses, we are offloading more and more of these decisions to artificial agents. This research is aimed at building AI agents that make effective decisions in multiagent—part human, part artificial—environments. Current efforts are relevant to economic domains, mostly in the service of perfecting market designs. This talk covers AI agent design in applications ranging from renewable energy markets and online ad exchanges to wireless spectrum auctions.

  • Speakers: Sham Kakade, University of Washington; Ravi Kannan, Microsoft; Santosh Vempala, Georgia Institute of Technology

    [Video Abstract | Full Video]

    Machine learning (ML) has demonstrated success in various domains such as web search, ads, computer vision, natural language processing (NLP), and more. These success stories have led to a big focus on democratizing ML and building robust systems that can be applied to a variety of domains, problems, and data sizes. However, because typical ML algorithms are often poorly understood, even an expert must resort to hit-and-miss experimentation to get a system working, which limits the types and applications of ML systems. Hence, designing provable and rigorous algorithms is critical to the success of such large-scale, general-purpose ML systems. The goal of this session was to bring together researchers from various communities (ML, algorithms, optimization, statistics, and more) along with researchers from more applied ML communities such as computer vision and NLP, with the intent of understanding challenges involved in designing end-to-end robust, rigorous, and predictable ML systems.

  • Speakers: Rich Caruana, Microsoft; Jung Hee Cheon, Seoul National University; Kristin Lauter, Microsoft

    [Video Abstract | Full Video]

    As the volume of data goes up, the quality of machine learning models, predictions, and services will improve. Once models are trained, predictive cloud services can be built on them, but users who want to take advantage of the services have serious privacy concerns about exposing consumer and enterprise data—such as private health or financial data—with machine learning services running in the cloud. Recent developments in cryptography provide tools to build and enable “Private AI,” including private predictive services that do not expose user data to the model owner, and that also provide the means to train powerful models across several private datasets that can be shared only in encrypted form. This session examines the state of the art for these tools, and discusses important directions for the future of Private AI.
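    The additive homomorphism that such private predictive services build on can be illustrated with a toy Paillier cryptosystem, one classical example of an additively homomorphic encryption scheme (the session's own toolset is not specified here, and the tiny hard-coded primes below are purely for demonstration and provide no real security):

```python
import math
import random

# Toy Paillier parameters -- real deployments use ~1024-bit primes.
p, q = 17, 19
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard generator choice
lam = math.lcm(p - 1, q - 1)    # Carmichael function lambda(n)
mu = pow(lam, -1, n)            # mu = lambda^-1 mod n (valid when g = n + 1)

def encrypt(m):
    """Encrypt m < n as c = g^m * r^n mod n^2, with random r coprime to n."""
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so a server can aggregate encrypted values without ever decrypting them.
c1, c2 = encrypt(42), encrypt(100)
assert decrypt((c1 * c2) % n2) == 142
```

In a private prediction setting, this property lets a cloud service compute linear operations over encrypted user inputs while the decryption key stays with the data owner.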

  • Speakers: Tanya Berger-Wolf, University of Illinois at Chicago; Carla Gomes, Cornell University; Milind Tambe, University of Southern California

    [Video Abstract | Full Video]

    Human society is faced with an unprecedented challenge to mitigate and adapt to changing climates, ensure resilient water supplies, sustainably feed a population of 10 billion, and stem a catastrophic loss of biodiversity. Time is too short, and resources too thin, to achieve these outcomes without the exponential power and assistance of AI. Early efforts are encouraging, but current solutions are typically one-off attempts that require significant engineering beyond what’s available from the AI research community. In this session we explore, in collaboration with the Computational Sustainability Network (a twice-funded National Science Foundation (NSF) Expedition) the latest applications of AI research to sustainability challenges, as well as ways to streamline environmental applications of AI so they can work with traditional academic programs. The speakers in this session set the scene for the state of the art in AI for Earth research and frame the agenda for the next generation of AI applications.

  • Speakers: Sayan Pathak, Microsoft; Yanmin Qian, Shanghai Jiaotong University; Cha Zhang, Microsoft

    [Video Abstract | Full Video]

    Microsoft Cognitive Toolkit (CNTK) is a production-grade, open-source, deep-learning library. In the spirit of democratizing AI tools, CNTK embraces fully open development, is available on GitHub, and provides support for both Windows and Linux. The recent 2.0 release (currently in release candidate) packs in several enhancements—most notably Python/C++ API support, easy-to-onboard tutorials (as Python notebooks) and examples, and an easy-to-use Layers interface. These enhancements, combined with unparalleled scalability on NVIDIA hardware, were demonstrated by both NVIDIA at SuperComputing 2016 and Cray at NIPS 2016. CNTK also supported Microsoft in its recent breakthrough in speech recognition, reaching human parity in conversational speech. The toolkit is used for all kinds of deep learning workloads, spanning image, video, speech, and text data. The speakers discuss the current features of the toolkit’s release and its application to deep learning projects.

  • Speakers: Taesoo Kim, Georgia Institute of Technology; Dawn Song, University of California, Berkeley; Michael Walker, Defense Advanced Research Projects Agency

    [Video Abstract | Full Video]

    In the future, every company will be using AI, which means that every company will need a secure infrastructure that addresses AI security concerns. At the same time, the domain of computer security has been revolutionized by AI techniques, including machine learning, planning, and automatic reasoning. What are the opportunities for researchers in both fields—security infrastructure and AI—to learn from each other and continue this fruitful collaboration? This session covers two main topics. In the first half, we discuss how AI techniques have changed security, using a case study of the DARPA Cyber Grand Challenge, where teams built systems that can reason about security in real time. In the second half, we talk about security issues inherent in AI. How can we ensure the integrity of decisions from the AI that drives a business? How can we defend against adversarial control of training data? Together, we identify common problems for future research.

  • Panelists: Justine Cassell, Carnegie Mellon University; Jonathan Gratch, University of Southern California; Daniel McDuff, Microsoft; Louis-Philippe Morency, Carnegie Mellon University

    [Full Video]

    Social signals and emotions are fundamental to human interactions and influence memory, decision-making and wellbeing. As AI systems, in particular, intelligent agents, become more advanced, there is increasing interest in applications that can fulfill task goals, social goals and respond to emotional states. Research has shown that cognitive agents with these capabilities can increase empathy, rapport and trust with their users, amongst other things. However, designing such agents is extremely complex, as most human knowledge of emotion is implicit/tacit and defined by unwritten rules. Furthermore, these rules are culturally dependent and not universal. This session focuses on research into intelligent cognitive agents. It covers the measurement and understanding of verbal and non-verbal cues, the computational modeling of emotion and the design of sentient virtual agents.

  • Speakers: Helmut Katzgraber, Texas A&M; Matthias Troyer, Microsoft; Nathan Wiebe, Microsoft

    [Video Abstract | Full Video]

    In 1982, Richard Feynman first proposed using a “quantum computer” to simulate physical systems with exponential speedup over conventional computers. Quantum algorithms can solve problems in number theory, chemistry, and materials science that would otherwise take longer than the lifetime of the universe to solve on an exascale machine. Quantum computers offer new methods for machine learning, including training Boltzmann machines and perceptron models. These methods have the potential to dramatically improve upon today’s machine learning algorithms used in almost every device, from cell phones to cars. But can quantum models make it possible to probe altogether different types of questions and solutions? If so, how can we take advantage of new representations in machine learning? How will we handle large amounts of data and input/output on a quantum computer? This session focuses on both known improvements and open challenges in using quantum techniques for machine learning and optimization.
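    For reference, the classical perceptron training loop that quantum perceptron-training methods aim to accelerate can be sketched in a few lines (the toy dataset, epoch count, and function names here are illustrative, not taken from the session):

```python
def train_perceptron(data, epochs=20):
    """Classical perceptron training; data is a list of (features, label)
    pairs with labels in {-1, +1}."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            # A point is misclassified when y * (w.x + b) <= 0;
            # on a mistake, nudge the weights toward the correct label.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

# Toy linearly separable problem (logical AND, with -1/+1 labels).
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

assert all(predict(x) == y for x, y in data)
```

Quantum approaches target the same model but seek speedups in how mistakes are located or how many samples must be examined per update.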

  • Speakers: Eric Horvitz, Microsoft; Subbarao Kambhampati, Arizona State University; Milind Tambe, University of Southern California

    [Video Abstract | Full Video]

    The new wave of excitement about AI in recent years has been based on successes in perception tasks or on domains with limited and known dynamics. Machines have achieved human parity in accuracy for image recognition and speech recognition and have beaten human champions at games such as Go and poker, creating an impression of a future in which AI systems function alone. However, for more complex and open-ended tasks, current AI technologies have limitations. Future deployments of AI systems in daily life are likely to emerge from the complementary abilities of humans and machines and require close partnerships between them. The goal of this session was to highlight the potential of human-machine partnership through real-world applications. In addition, the speakers identified challenges for research and development that, when solved, will build towards successful AI systems that can partner with people.

  • Speakers: Cristian Danescu-Niculescu-Mizil, Cornell University; Daniel McDuff, Microsoft; Christopher Potts, Stanford University

    [Video Abstract]

    How do we make AI agents appear to be more “human”? The goal of this session was to bring together researchers in human-computer interaction, linguistics, machine learning, speech, and natural language processing to discuss what is required of AI that goes beyond functional intelligence, and that helps agents display social and cultural intelligence. We present an overview of the research that we are doing at Microsoft Research India toward the goal of building socially and culturally aware AI, such as chatbots for young, urban India, and socio-linguistic norms in multilingual communities. This was followed by a panel discussion entitled “AI for socio-culturally enriching interactions: What is it and when is it a success?” This panel discussed what constitutes socio-culturally aware AI, what the metrics of success are, and what the desired outcomes might be.

  • Speakers: Solon Barocas, Microsoft; Carla Gomes, Cornell University; Percy Liang, Stanford University; Gireeja Ranade, Microsoft

    [Full Video]

    Advances in AI promise great benefit to people and organizations. However, as we push the science of AI forward, we need to consider potential downsides, unintended consequences and costly outcomes. Challenges include ethical and legal issues with the use of autonomous systems, end-user distrust in reasoning, errors and biases in reasoning, the rise of inadvertent side effects, and criminal uses of AI. We discuss rising concerns with the influences of AI on people and society, and promising directions for addressing them.

  • Speaker: Christopher Bishop, Microsoft

    [Full Video]

    Today, thousands of scientists and engineers are applying machine learning to an extraordinarily broad range of domains, and over the last five decades, researchers have created literally thousands of machine learning algorithms. Traditionally, an engineer wanting to solve a problem using machine learning must choose one or more of these algorithms to try, and the choice is often constrained by familiarity with an algorithm, or by the availability of software implementations. In this talk, we discuss ‘model-based machine learning’, a new approach in which a custom solution is formulated for each new application. We show how probabilistic graphical models, coupled with efficient inference algorithms, provide a flexible foundation for model-based machine learning, and we describe several large-scale commercial applications of this framework. We also introduce the concept of ‘probabilistic programming’ as a powerful approach to model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
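    The model-based workflow (specify a generative model, then hand it to a generic inference engine) can be sketched with a toy example: inferring a coin's bias from observed flips by exact enumeration over a discretized parameter. This stands in for the efficient message-passing inference engines used by frameworks such as Infer.NET; the function and variable names below are illustrative:

```python
def posterior_bias(flips, grid_size=101):
    """Posterior over a coin's bias given 0/1 flip observations.

    Model: bias ~ Uniform(0, 1) (discretized); each flip ~ Bernoulli(bias).
    Inference: exact enumeration over the grid (Bayes' rule).
    Returns a list of (bias, posterior probability) pairs.
    """
    heads = sum(flips)
    tails = len(flips) - heads
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    prior = [1.0 / grid_size] * grid_size
    likelihood = [th ** heads * (1 - th) ** tails for th in grid]
    unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
    z = sum(unnorm)                       # normalizing constant
    return list(zip(grid, [u / z for u in unnorm]))

post = posterior_bias([1, 1, 1, 0, 1, 1, 1, 1])   # 7 heads, 1 tail
mode = max(post, key=lambda pair: pair[1])[0]     # posterior mode, near 7/8
```

The point of the model-based style is that only the model specification changes between applications; the inference machinery (here, brute-force enumeration; in Infer.NET, compiled message passing) is reused unchanged.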