
Harsha Nori

Director, Research Engineering

About

Hi! I’m Director of Research Engineering for Aether, Microsoft’s internal group on AI, Engineering and Ethics. My team focuses on bringing Responsible AI research into the hands of practitioners through open-source tools, libraries, and integrations into ML platforms.

I co-founded the InterpretML framework, which is widely used by data scientists and ML engineers for building interpretable models and explaining opaque model predictions. I’ve also contributed to a number of other open-source machine learning libraries across the Python ecosystem. Lately I’ve been focused on Guidance, a library that helps developers build better prompts and control the outputs of large language models (LLMs).

My current research interests are interpretability, privacy-preserving machine learning (via differential privacy), fairness, and machine learning for healthcare. I’ve published on these topics at conferences including ICML, NeurIPS, KDD, CHI, AAAI, and USENIX ATC (see my Google Scholar page for details).

Prior to joining Aether, I worked as an applied scientist on problems such as malware detection, large-scale experimentation, and time-series forecasting. I’m a graduate of the Georgia Institute of Technology. If you’re interested in research engineering roles, responsible AI, or potential collaborations, feel free to send me an email!