Research Focus: Week of October 28, 2024
New Research | FLASH: Workflow automation agent for diagnosing recurring incidents; MetaReflection: Learning instructions for language agents using past reflections; Boosting LLM training efficiency through faster communication between GPUs; and more.
Research Focus: Week of September 23, 2024
Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft. Time-series forecasting is a technique used to predict future values based on previously…
Research Focus: Week of April 29, 2024
In this edition: Can LLMs transform natural language into formal method postconditions; Semantically aligned question + code generation for automated insight generation; Explaining CLIP performance disparities on blind/low vision data; plus recent news.
Intelligent monitoring: Towards AI-assisted monitoring for cloud services
| Anjaly Parayil, Ayush Choure, Fiza Husain, Avi Nayak, Piyali Jana, Rujia Wang, Chetan Bansal, and Saravan Rajmohan
Integrating AI into cloud service monitoring improves incident detection accuracy, reduces unnecessary alerts, and enhances overall system reliability. This helps organizations better align with business goals and increase customer satisfaction.
TaskWeaver: A code-first agent framework for efficient data analytics and domain adaptation
| Shilin He, Liqun Li, Xu Zhang, Bo Qiao, Chaoyun Zhang, Yu Kang, Rujia Wang, Qingwei Lin, Saravan Rajmohan, and Dongmei Zhang
AI-backed virtual assistants face challenges in handling complex data structures. TaskWeaver helps users build assistants that understand diverse domain questions, follow examples, and efficiently execute customizable algorithms on complex data structures.
LLMLingua: Innovating LLM efficiency with prompt compression
| Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu
Advanced prompting techniques for LLMs can produce excessively long prompts, driving up cost and latency. Learn how LLMLingua compresses prompts by up to 20x while preserving output quality, reducing latency, and improving the user experience.
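The intuition behind prompt compression can be shown with a toy sketch. LLMLingua's actual method uses a small language model to score how informative each token is; the hard-coded filler-word list below is only a stand-in for that signal, and `compress_prompt` is an illustrative function, not LLMLingua's API.

```python
# Toy illustration of prompt compression: drop common low-information
# words so the remaining tokens carry most of the meaning. LLMLingua's
# real method scores token informativeness with a small language model;
# this stopword list is a crude stand-in for that signal.

FILLER = {
    "the", "a", "an", "of", "to", "and", "in", "that", "is", "are",
    "please", "very", "really", "just", "basically",
}

def compress_prompt(prompt: str) -> tuple[str, float]:
    """Return the compressed prompt and the achieved compression ratio."""
    tokens = prompt.split()
    kept = [t for t in tokens if t.lower().strip(".,") not in FILLER]
    ratio = len(tokens) / max(len(kept), 1)
    return " ".join(kept), ratio

compressed, ratio = compress_prompt(
    "Please summarize the main findings of the report in a very short paragraph."
)
print(compressed)   # the surviving high-information tokens
print(f"{ratio:.2f}x")
```

Even this crude filter shrinks the example prompt by roughly half; a learned informativeness score can prune far more aggressively while keeping the answer quality intact.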
Rethinking trust in direct messages in the AI era
| Kim Laine, Shrey Jain, Betül Durak, Radames Cruz Moreno, and Robert Sim
Microsoft researchers are proposing a new way to ensure greater trust and accountability in email, texts, direct messages on social platforms, and even phone calls, to help mitigate sophisticated threats from AI-enabled scams and fraud.
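One basic building block for trustworthy messaging is attaching a cryptographic tag the recipient verifies before trusting the content. The sketch below uses a shared-key HMAC purely to illustrate that verify-before-trust pattern; the researchers' actual proposal may rely on different machinery (e.g., public-key signatures tied to identity infrastructure).

```python
# Illustrative message-authentication sketch using an HMAC tag.
# This shows the generic verify-before-trust pattern in a shared-key
# setting; it is not the specific scheme the researchers propose.
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> str:
    """Attach a tag that only holders of `key` can produce."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(key, message), tag)

key = b"shared-secret"
msg = b"Wire $500 to account 12345"
tag = sign(key, msg)

print(verify(key, msg, tag))                   # original message: accepted
print(verify(key, b"Wire $5000 to ...", tag))  # tampered message: rejected
```

A scammer who alters the message, or sends one without a valid tag, fails verification, which is the accountability property the teaser describes.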
Large language models for automatic cloud incident management
| Rujia Wang, Chetan Bansal, Supriyo Ghosh, Tom Zimmermann, Xuchao Zhang, and Saravan Rajmohan
This research was accepted by the IEEE/ACM International Conference on Software Engineering (ICSE), which is a forum for researchers, practitioners, and educators to gather, present, and discuss the most recent innovations, trends, experiences, and issues in…
FLUTE: A scalable federated learning simulation platform
| Dimitrios Dimitriadis, Mirian Hipolito Garcia, Daniel Eduardo Madrigal Diaz, Andre Manoel, and Robert Sim
Federated learning has become a major area of machine learning (ML) research in recent years due to its versatility in training complex models over massive amounts of data without the need to share that data with a centralized entity. However,…
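The training loop that federated learning platforms like FLUTE simulate can be sketched as federated averaging (FedAvg): each client takes local training steps on its private data, and the server aggregates only the resulting model parameters. The code below is a minimal scalar illustration of that loop, not FLUTE's API; the 1-D least-squares objective is chosen only so the sketch stays self-contained.

```python
# Minimal federated averaging (FedAvg) sketch: clients train locally on
# private data and share only model parameters, which the server averages.
# Illustrates the general technique platforms like FLUTE simulate at scale.
from statistics import mean

def local_step(weight: float, data: list[float], lr: float = 0.1) -> float:
    """One gradient step of 1-D least squares: minimize mean (w - x)^2."""
    grad = mean(2 * (weight - x) for x in data)
    return weight - lr * grad

def fedavg(client_data: list[list[float]], rounds: int = 50) -> float:
    w = 0.0  # shared global model (a single scalar weight)
    for _ in range(rounds):
        # Each client trains on its own data; raw data never leaves the client.
        local_weights = [local_step(w, data) for data in client_data]
        w = mean(local_weights)  # server aggregates parameters only
    return w

# Three clients whose private values are 1.0, 2.0, and 3.0: the global
# model converges toward their average without any client sharing data.
w = fedavg([[1.0], [2.0], [3.0]])
print(round(w, 2))
```

Real deployments add sampling of clients per round, multiple local epochs, weighting by dataset size, and privacy mechanisms; simulation platforms exist precisely to study those choices before running on real devices.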