About
Hi! I’m Director of Research Engineering for Aether, Microsoft’s internal group on AI, Engineering and Ethics. My team focuses on putting Responsible AI research into the hands of practitioners through open-source tools, libraries, and integrations into ML platforms.
I co-founded the InterpretML framework, which is widely used by data scientists and ML engineers for building interpretable models and explaining opaque model predictions. I’ve also contributed to a number of other open-source machine learning libraries across the Python ecosystem. Lately I’ve been focused on Guidance, a library that helps developers build better prompts and control the outputs of large language models (LLMs).
My current research interests are in interpretability, privacy-preserving machine learning (via differential privacy), fairness, and machine learning for healthcare. I’ve published on these topics at conferences like ICML, NeurIPS, KDD, CHI, AAAI, and USENIX ATC (see my Google Scholar page for details).
Prior to joining Aether, I worked as an applied scientist on problems like malware detection, large-scale experimentation, and time-series forecasting. I’m also a graduate of the Georgia Institute of Technology. If you’re interested in research engineering roles, responsible AI, or potential collaborations, feel free to send me an email!
Featured content
Capabilities of GPT-4 on Medical Challenge Problems
We present a comprehensive evaluation of GPT-4, a state-of-the-art LLM, on medical competency examinations and benchmark datasets. Our results show that GPT-4, without any specialized prompt crafting, exceeds the passing score on USMLE by over 20 points and outperforms earlier general-purpose models (GPT-3.5) as well as models specifically fine-tuned on medical knowledge (Med-PaLM, a prompt-tuned version of Flan-PaLM 540B).
InterpretML: A Unified Framework for Machine Learning Interpretability
InterpretML is an open-source Python package which exposes machine learning interpretability algorithms to practitioners and researchers. It covers two types of interpretability: glassbox models, which are machine learning models designed for interpretability (e.g., linear models, rule lists, generalized additive…
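To make the glassbox style concrete, here is a minimal sketch of training an interpretable model and viewing its explanations with InterpretML; the module and class names below (interpret.glassbox.ExplainableBoostingClassifier, interpret.show) reflect my reading of the package layout and are worth checking against the current documentation.

```python
# Minimal sketch: train a glassbox model and inspect its explanations with InterpretML.
# Module/class names are assumed from the package layout; check the InterpretML docs
# for the version you have installed.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glassbox: the model itself is interpretable, so its explanations are exact
# descriptions of what the model learned rather than post-hoc approximations.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())                        # per-feature shape functions and importances
show(ebm.explain_local(X_test[:5], y_test[:5]))   # explanations for individual predictions
```

The sketch only covers the glassbox path; the package also wraps blackbox explanation methods behind a similar explain-and-show workflow.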
Accuracy, Interpretability, and Differential Privacy via Explainable Boosting
We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little…
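For readers curious what DP-EBM training might look like in practice, here is a rough sketch. The class name DPExplainableBoostingRegressor, the interpret.privacy module path, and the epsilon/delta parameter names are my assumptions about the package layout, so please verify them against the InterpretML documentation before relying on them.

```python
# Rough sketch of training a differentially private EBM.
# Class name, module path, and the epsilon/delta keyword arguments are assumptions;
# consult the InterpretML documentation for the exact API in your installed version.
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

from interpret.privacy import DPExplainableBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The privacy budget (epsilon, delta) controls the accuracy/privacy trade-off:
# smaller epsilon means stronger privacy and, typically, lower accuracy.
dp_ebm = DPExplainableBoostingRegressor(epsilon=1.0, delta=1e-6)
dp_ebm.fit(X_train, y_train)

print("Held-out R^2:", dp_ebm.score(X_test, y_test))
```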