Our research group investigates a wide variety of topics within distributed systems, operating systems, and networking, while drawing on techniques from machine learning, theory, and privacy to build more powerful systems.
We are strong believers in responsible AI—our goal is to apply AI to systems in a way that is minimally disruptive, synergistic with human solutions, and safe. Our current research agendas reflect this goal:
- Harvesting randomness: We are combining reinforcement learning with the ability to ask counterfactual (“what if”) questions about any decision-making system, allowing us to synthesize better decision-making policies (see the first sketch after this list).
- HAIbrid (Human + AI) algorithms: We are synergizing human solutions to classical problems—ranging from data structure design to chess—with AI techniques, achieving human-AI collaboration rather than one supplanting the other.
- Safeguards: We are designing a general abstraction called a “safeguard” for protecting any AI system from violating a safety specification, while allowing the system (and the safeguard) to continuously adapt and learn (see the second sketch after this list).
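
To make the counterfactual bullet concrete, here is a minimal sketch of one standard way to ask “what if” of a logged, randomized decision-maker: off-policy evaluation via inverse propensity scoring. The function name, the cache-sizing scenario, and the reward model are all illustrative assumptions, not the group’s actual system.

```python
import random

def ips_estimate(logs, target_policy):
    """Estimate the average reward a target policy *would have* earned,
    using only logs from a randomized decision-maker (inverse propensity
    scoring). Each log entry is (context, action, reward, logged_prob)."""
    total = 0.0
    for context, action, reward, logged_prob in logs:
        # Reweight each logged reward by how much more (or less) likely
        # the target policy is to take the logged action.
        total += reward * target_policy(context, action) / logged_prob
    return total / len(logs)

# Hypothetical logs: a system that picked between two cache sizes
# uniformly at random and recorded the probability of each choice.
random.seed(0)
logs = []
for _ in range(10_000):
    load = random.random()                               # context: current load
    action = random.choice(["small", "large"])           # logged randomized choice
    reward = (1.0 if action == "large" else 0.6) * load  # observed reward
    logs.append((load, action, reward, 0.5))

# Counterfactual question: what if we had always chosen the large cache?
always_large = lambda ctx, a: 1.0 if a == "large" else 0.0
print(ips_estimate(logs, always_large))  # ~0.5, the true value of that policy
```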
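And here is one plausible reading of the safeguard abstraction: a wrapper that intercepts a system’s proposed actions, checks them against a safety specification, and substitutes a known-safe fallback when the check fails. The class name, predicate signature, and speed-limit example are assumptions for illustration, and the safeguard’s own adaptation is omitted for brevity.

```python
class Safeguard:
    """Wraps any decision-making system and blocks actions that would
    violate a safety specification, substituting a safe fallback."""

    def __init__(self, policy, is_safe, fallback):
        self.policy = policy      # the (possibly learning) AI system
        self.is_safe = is_safe    # specification: (state, action) -> bool
        self.fallback = fallback  # safe action to take when blocked

    def act(self, state):
        action = self.policy(state)
        if self.is_safe(state, action):
            return action
        return self.fallback(state)

# Illustrative use: a controller that must never exceed a speed limit.
guarded = Safeguard(
    policy=lambda s: s["requested_speed"],       # learned, possibly unsafe
    is_safe=lambda s, a: a <= s["speed_limit"],  # the safety specification
    fallback=lambda s: s["speed_limit"],         # clamp to the limit
)
print(guarded.act({"requested_speed": 120, "speed_limit": 100}))  # -> 100
```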
Our team is inherently interdisciplinary, as is our mission. We engage in deep collaborations with other research groups, including world experts in machine learning, computational social science, economics, and FATE (Fairness, Accountability, Transparency, and Ethics). In all of our work, impact is a top priority: we publish at the top academic conferences and journals, in systems and beyond, and our work has led to multiple tech transfers with product groups across the company (Azure, Skype, MSN, Xbox, etc.), resulting in real business value.