{"id":1025451,"date":"2024-04-17T09:00:00","date_gmt":"2024-04-17T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=1025451"},"modified":"2024-06-03T09:17:09","modified_gmt":"2024-06-03T16:17:09","slug":"research-focus-week-of-april-15-2024","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/research-focus-week-of-april-15-2024\/","title":{"rendered":"Research Focus: Week of April 15, 2024"},"content":{"rendered":"\n

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.

\"Research<\/figure>\n\n\n\n

NEW RESEARCH

Appropriate reliance on Generative AI: Research synthesis

Appropriate reliance on AI happens when people accept correct AI outputs and reject incorrect ones. It requires users of AI systems to know when to trust the AI and when to trust themselves. But fostering appropriate reliance comes with new complexities when generative AI (genAI) systems are involved. Though their capabilities are advancing, genAI systems, which use generative models to produce content such as text, music, images, and videos, have limitations as well. Inappropriate reliance on genAI, whether under-reliance or overreliance, can have negative consequences, such as poor task performance and even product abandonment.

In a recent paper: Appropriate reliance on Generative AI: Research synthesis, researchers from Microsoft, who reviewed 50 papers from various disciplines, provide an overview of the factors that affect overreliance on genAI, the effectiveness of different mitigation strategies for overreliance on genAI, and potential design strategies to facilitate appropriate reliance on genAI.

Read the paper

NEW RESEARCH

Characterizing Power Management Opportunities for LLMs in the Cloud

Cloud providers and datacenter operators are grappling with increased demand for graphics processing units (GPUs) driven by the expanding use of large language models (LLMs). To keep up, they are exploring measures such as power oversubscription and adding more servers. Better power usage analysis and management could help providers meet this demand safely and more efficiently.

In a recent paper: Characterizing Power Management Opportunities for LLMs in the Cloud, researchers from Microsoft analyze power usage patterns for several popular, open-source LLMs across commonly used configurations and identify opportunities to improve power management for LLMs in the cloud. They present POLCA, a new framework that enables power oversubscription in LLM inference clouds. POLCA is robust, reliable, and readily deployable. In simulations that use open-source models to replicate power patterns observed in production, POLCA could deploy 30% more servers in existing clusters while incurring minimal power throttling events. POLCA improves power efficiency, reduces the need for additional energy sources and datacenters, and helps promptly meet demand for running additional LLM workloads.
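To make the oversubscription arithmetic concrete, here is a minimal, hypothetical Python sketch: provisioning by observed peak draw instead of rated power admits more servers under the same power budget, provided a throttling path exists for the rare intervals when aggregate draw approaches the limit. The function names and numbers are illustrative assumptions for this post, not part of POLCA.

```python
# Illustrative sketch (not POLCA): power oversubscription with a throttling
# safety valve. All names and numbers here are hypothetical.

def servers_admitted(budget_kw, rated_kw, observed_peak_kw):
    """How many servers fit when provisioning by rated power vs. observed peak draw."""
    conservative = int(budget_kw // rated_kw)            # classic worst-case provisioning
    oversubscribed = int(budget_kw // observed_peak_kw)  # oversubscribed provisioning
    return conservative, oversubscribed

def throttle_events(aggregate_kw_per_interval, budget_kw):
    """Count intervals where aggregate draw exceeds the budget, i.e., where the
    cluster would need to cap GPU power to stay within the provisioned limit."""
    return sum(1 for kw in aggregate_kw_per_interval if kw > budget_kw)

if __name__ == "__main__":
    print(servers_admitted(budget_kw=1000, rated_kw=10, observed_peak_kw=7.5))  # (100, 133)
    print(throttle_events([920, 985, 1010, 950], budget_kw=1000))               # 1
```

The gap between rated and observed power is the headroom the paper exploits; the reported result is roughly 30% more servers with only minimal throttling events.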

Read the paper

NEW RESEARCH

LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression

Various prompting techniques, such as chain-of-thought (CoT), in-context learning (ICL), and retrieval-augmented generation (RAG), can empower large language models (LLMs) to handle complex and varied tasks through rich and informative prompts. However, these prompts are lengthy, sometimes exceeding tens of thousands of tokens, which increases computational and financial overhead and degrades the LLMs' ability to perceive information. Recent efforts to compress prompts in a task-aware manner, without losing essential information, have produced shorter prompts tailored to a specific task or query. This typically enhances performance on downstream tasks, particularly question answering, but the task-specific features present challenges in efficiency and generalizability.

In a recent paper: LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression, researchers from Microsoft and Tsinghua University propose a data distillation procedure that derives knowledge from an LLM (GPT-4) to compress prompts without losing crucial information. They introduce an extractive text compression dataset containing pairs of original texts from MeetingBank and their compressed versions. Despite its small size, their model shows significant performance gains over strong baselines and generalizes robustly across different LLMs. The new model is 3x-6x faster than existing prompt compression methods, while accelerating end-to-end latency by 1.6x-2.9x at compression ratios of 2x-5x.
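Conceptually, this kind of task-agnostic compression reduces to a token-level keep/drop decision. The sketch below only illustrates that interface: keep_probability is a hypothetical stand-in for a trained token classifier, and nothing here is the released LLMLingua-2 code or API.

```python
# Minimal sketch of extractive prompt compression as token classification.
# `keep_probability` is a hypothetical stand-in for a trained classifier.

def keep_probability(token: str) -> float:
    """Toy scorer: content-bearing tokens get higher keep scores than stopwords."""
    stopwords = {"the", "a", "an", "of", "to", "and", "is", "are", "that"}
    return 0.1 if token.lower() in stopwords else 0.9

def compress_prompt(prompt: str, target_ratio: float = 0.5) -> str:
    """Keep the highest-scoring tokens up to the target ratio, preserving order."""
    tokens = prompt.split()
    budget = max(1, int(len(tokens) * target_ratio))
    ranked = sorted(range(len(tokens)), key=lambda i: keep_probability(tokens[i]), reverse=True)
    kept = sorted(ranked[:budget])
    return " ".join(tokens[i] for i in kept)

print(compress_prompt("The meeting minutes indicate that the budget proposal is approved", 0.5))
# -> "meeting minutes indicate budget proposal"
```

In the actual system, the keep/drop signal is learned from the GPT-4-distilled compression data described above rather than from hand-written rules.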

Read the paper

NEW RESEARCH

AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages

Despite recent progress in scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging. Evaluation is often performed using n-gram matching metrics such as BLEU, which typically correlate weakly with human judgments. Learned metrics like COMET correlate better; however, the lack of evaluation data with human ratings for under-resourced languages, the complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and the limited language coverage of multilingual encoders have hampered their applicability to African languages.

In a recent paper: AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages, researchers from University College London, the University of Maryland, Unbabel, Microsoft, and the Masakhane Community address these challenges by creating high-quality human evaluation data, with simplified MQM guidelines for error detection and direct assessment (DA) scoring, for 13 typologically diverse African languages. They also develop AfriCOMET, a set of COMET-based MT evaluation metrics for African languages, by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLMR). The resulting metrics achieve state-of-the-art Spearman-rank correlation with human judgments (0.441).
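The headline figure (0.441) is a Spearman-rank correlation between metric scores and human DA ratings. A minimal example of computing that statistic for any metric's segment scores, using scipy, is shown below; the numbers are invented for illustration and are not AfriCOMET outputs.

```python
# Spearman-rank correlation between a metric's segment scores and human
# direct-assessment (DA) ratings, the statistic quoted above. The scores
# below are made up for illustration.
from scipy.stats import spearmanr

human_da     = [78, 42, 90, 55, 63, 81, 30, 70]                   # human DA ratings
metric_score = [0.71, 0.35, 0.88, 0.52, 0.49, 0.80, 0.28, 0.66]   # candidate metric scores

rho, p_value = spearmanr(human_da, metric_score)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```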

Read the paper

NEW RESEARCH

Comparing the Agency of Hybrid Meeting Remote Users in 2D and 3D Interfaces of the Hybridge System

Video communication often lacks the inclusiveness and simultaneity enabled by physical presence in a shared space. This is especially apparent during hybrid meetings, where some attendees meet physically in a room while others join remotely. Remote participants are at a disadvantage, unable to navigate the physical space the way in-room participants can.

In a Late Breaking Work paper to be presented at CHI 2024: Comparing the Agency of Hybrid Meeting Remote Users in 2D and 3D Interfaces of the Hybridge System, Microsoft researchers present an experimental system for exploring designs that improve the inclusion of remote attendees in hybrid meetings. In-room users see remote participants on individual displays positioned around a table. Remote participants see video feeds from the room integrated into a digital twin of the meeting room, choosing where they appear in the room and from where they view it. The researchers designed both a 2D and a 3D version of the interface. They found that 3D outperformed 2D in participants' perceived sense of awareness, sense of agency, and physical presence, and a majority of participants preferred 3D over 2D. The next step in this research will test the inclusiveness of Hybridge 3D meetings against fully in-room meetings and traditional hybrid meetings.

Read the paper

NEW RESEARCH

FeatUp: A Model-Agnostic Framework for Features at Any Resolution

Deep features are a cornerstone of computer vision research, capturing image semantics and enabling the community to solve downstream tasks even in the zero- or few-shot regime. However, these features often lack the spatial resolution needed to directly perform dense prediction tasks like segmentation and depth prediction, because models such as transformers and convolutional networks aggressively pool information over large areas.

In a paper published at ICLR 2024: FeatUp: A Model-Agnostic Framework for Features at Any Resolution, researchers from Microsoft and external colleagues introduce a task- and model-agnostic framework that restores lost spatial information in deep features. The paper presents two variants of FeatUp: one that guides features with a high-resolution signal in a single forward pass, and one that fits an implicit model to a single image to reconstruct features at any resolution. Both approaches use a multiview consistency loss with deep analogies to neural radiance fields (NeRFs), a deep learning method for building 3D representations of a scene from sparse 2D images. The upsampled features retain their original semantics and can be swapped into existing applications to yield resolution and performance gains, even without re-training. FeatUp significantly outperforms other feature upsampling and image super-resolution approaches in class activation map generation, transfer learning for segmentation and depth prediction, and end-to-end training for semantic segmentation.
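To give a flavor of the multiview consistency idea, here is a simplified PyTorch sketch: the upsampled, image-resolution features, after a small random "view" transform (a pixel shift here) and pooling back to the backbone's resolution, should agree with the backbone's features of the transformed image. The backbone and upsampler arguments are hypothetical callables, and this simplified loss is not the released FeatUp implementation (see the code link below).

```python
# Simplified multiview consistency loss in the spirit of FeatUp. `backbone` maps an
# image to low-resolution features; `upsampler` predicts image-resolution features.
# Both are hypothetical callables; the real implementation is linked below.
import torch
import torch.nn.functional as F

def multiview_consistency_loss(image, backbone, upsampler, num_views=4, max_shift=8):
    hires = upsampler(image, backbone(image))          # (B, C, H, W) at image resolution
    loss = 0.0
    for _ in range(num_views):
        dy, dx = (int(s) for s in torch.randint(-max_shift, max_shift + 1, (2,)))
        view_img = torch.roll(image, shifts=(dy, dx), dims=(2, 3))    # jittered "view"
        view_feat = torch.roll(hires, shifts=(dy, dx), dims=(2, 3))   # same jitter on features
        target = backbone(view_img)                                   # low-res features of the view
        pooled = F.adaptive_avg_pool2d(view_feat, target.shape[-2:])  # back to low res
        loss = loss + F.mse_loss(pooled, target)
    return loss / num_views
```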

Read the paper
Project page
Code
Related video
