{"id":1045134,"date":"2024-06-12T09:00:00","date_gmt":"2024-06-12T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=1045134"},"modified":"2024-06-18T10:42:55","modified_gmt":"2024-06-18T17:42:55","slug":"research-focus-week-of-june-10-2024","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/research-focus-week-of-june-10-2024\/","title":{"rendered":"Research Focus: Week of June 10, 2024"},"content":{"rendered":"\n

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code\/datasets, new hires and other milestones from across the research community at Microsoft.<\/em><\/p><\/blockquote><\/figure>\n\n\n\n

\"Research<\/figure>\n\n\n\n

NEW RESEARCH<\/h3>\n\n\n\n

RELEVANCE: Automatic evaluation framework for LLM responses<\/h2>\n\n\n\n

Relevance in AI refers to how useful a piece of information or an action is for a specific task or query. It helps determine the accuracy, effectiveness, and efficiency of content from search engines, chatbots, and other AI systems, as well as user satisfaction with that content.<\/p>\n\n\n\n

RELEVANCE<\/a> (Relevance and Entropy-based Evaluation with Longitudinal Inversion Metrics) is a generative AI evaluation framework designed by researchers at Microsoft to automatically evaluate creative responses from large language models (LLMs). RELEVANCE combines custom-tailored relevance assessments with mathematical metrics to ensure AI-generated content aligns with human standards and remains consistent. Monitoring these metrics over time enables automatic detection of when the LLM\u2019s relevance evaluation starts to slip or hallucinate.<\/p>\n\n\n\n

Custom relevance evaluation alone involves scoring responses against predefined criteria. While these scores provide a direct assessment, they might not capture the full complexity and dynamics of response patterns across multiple evaluations or different sets of data (e.g., model hallucination and model slip). To address this, RELEVANCE integrates mathematical techniques with custom evaluations to ensure LLM response accuracy over time and adaptability to evolving LLM behaviors, without requiring manual review.<\/p>\n\n\n\n
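As a minimal sketch of the longitudinal idea (not RELEVANCE\u2019s actual metrics; the entropy binning, window size, and threshold below are illustrative assumptions), monitoring relevance scores over time can flag slip automatically:

```python
import math
from collections import Counter

def entropy(scores, bins=5):
    # Shannon entropy of a discretized score distribution (scores in [0, 1]);
    # a sudden entropy shift can signal unstable evaluation behavior.
    counts = Counter(min(int(s * bins), bins - 1) for s in scores)
    n = len(scores)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def drift_alert(history, window=20, delta=0.2):
    # Flag when mean relevance over the latest window drops noticeably
    # below the mean of the preceding window (illustrative threshold).
    if len(history) < 2 * window:
        return False
    prev = history[-2 * window:-window]
    cur = history[-window:]
    return (sum(prev) / window) - (sum(cur) / window) > delta

stable = [0.8] * 40
slipping = [0.8] * 20 + [0.4] * 20
assert not drift_alert(stable)
assert drift_alert(slipping)
```

The point of the sketch is only that scores tracked longitudinally, rather than inspected one at a time, make degradation detectable without manual review.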

\n
Learn more<\/a><\/div>\n<\/div>\n\n\n\n
\n\n\n\n

NEW RESEARCH<\/h3>\n\n\n\n

Recyclable vitrimer-based printed circuit boards for sustainable electronics<\/h2>\n\n\n\n

Printed circuit boards (PCBs) are ubiquitous in electronics and make up a substantial fraction of environmentally hazardous electronic waste when devices reach end-of-life. Recycling them is challenging because they are manufactured with irreversibly cured thermoset epoxies. Researchers at Microsoft and the University of Washington aim to tackle this challenge<\/a>, potentially paving the way for sustainability transitions in the electronics industry. In a recent paper published in Nature Sustainability, Recyclable vitrimer-based printed circuit boards for sustainable electronics<\/a>, they present a PCB formulation using transesterification vitrimers (vPCBs) and an end-to-end fabrication process compatible with standard manufacturing ecosystems. A cradle-to-cradle life cycle assessment shows that vPCBs substantially reduce environmental impact relative to conventional PCBs in 11 categories. The team manufactured functional internet-of-things prototypes transmitting 2.4\u2009GHz radio signals on vPCBs with electrical and mechanical properties that meet industry standards. Fractures and holes in vPCBs are repairable, with comparable performance retained over multiple repair cycles. The researchers also demonstrate a non-destructive recycling process based on polymer swelling with small-molecule solvents. Unlike traditional solvolysis recycling, this swelling process does not degrade the materials: dynamic mechanical analysis finds negligible catalyst loss, minimal changes in storage modulus, and an equivalent polymer backbone composition across multiple recycling cycles. The process achieves 98% polymer recovery, 100% fiber recovery, and 91% solvent recovery to create new vPCBs without performance degradation, potentially enabling circularity in electronics.<\/p>\n\n\n\n

\n
Read the paper<\/a><\/div>\n<\/div>\n\n\n\n\t
\n\t\t\n\n\t\t

\n\t\ton-demand event<\/span>\n\t<\/p>\n\t\n\t

\n\t\t\t\t\t\t
\n\t\t\t\t\n\t\t\t\t\t\"Microsoft\n\t\t\t\t<\/a>\n\t\t\t<\/div>\n\t\t\t\n\t\t\t
\n\n\t\t\t\t\t\t\t\t\t

Microsoft Research Forum Episode 3<\/h2>\n\t\t\t\t\n\t\t\t\t\t\t\t\t

Dive into the importance of globally inclusive and equitable AI, get updates on AutoGen and MatterGen, explore novel use cases for AI, and more.<\/p>\n\t\t\t\t\n\t\t\t\t\t\t\t\t

\n\t\t\t\t\t
\n\t\t\t\t\t\t\n\t\t\t\t\t\t\tWatch on-demand\t\t\t\t\t\t<\/a>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t<\/div>\nOpens in a new tab<\/span>\t<\/div>\n\t\n\n\n

NEW RESEARCH<\/h3>\n\n\n\n

LeanAttention: Hardware-Aware Scalable Attention Mechanism for the Decode-Phase of Transformers<\/h2>\n\n\n\n

Transformer-based models are among the most widely used architectures for natural language processing, natural language generation, and image generation. State-of-the-art models have reached billions of parameters, requiring large amounts of memory and incurring significant inference latency, even on cutting-edge AI accelerators such as graphics processing units (GPUs). Attempts to meet the low-latency demands of applications that rely on such large models do not cater to the computationally distinct phases of inference, and thus fail to utilize the underlying hardware efficiently.<\/p>\n\n\n\n

In a recent paper: Lean Attention: Hardware-Aware Scalable Attention Mechanism for the Decode-Phase of Transformers<\/a>, researchers from Microsoft propose a scalable technique for computing self-attention in the token-generation phase (decode phase) of decoder-only transformer models. LeanAttention scales the attention mechanism to the challenging case of long context lengths by redesigning the execution flow of the decode phase. The researchers show that the associative property of online softmax can be treated as a reduction operation, allowing them to parallelize the attention computation over these large context lengths. They extend the \u201cstream-K\u201d style reduction of tiled calculation to self-attention to enable this parallel computation, resulting in near 100% GPU utilization, an average 2.6x attention execution speedup over FlashAttention-2, and up to an 8.33x speedup for 512k context lengths.<\/p>\n\n\n\n
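A toy illustration of why online softmax admits a parallel reduction (a sketch of the general principle, not the paper\u2019s GPU implementation): partial (max, sum-of-exponentials) states computed on separate chunks of the score vector can be combined associatively, so the normalizer can be reduced across workers instead of computed serially:

```python
import math

def partial_state(chunk):
    # Per-chunk online-softmax state: (running max, sum of exp(x - max)).
    m = max(chunk)
    d = sum(math.exp(x - m) for x in chunk)
    return m, d

def combine(a, b):
    # Associative merge of two partial states: rescale each sum to the
    # shared max before adding, so no precision is lost to overflow.
    (m1, d1), (m2, d2) = a, b
    m = max(m1, m2)
    return m, d1 * math.exp(m1 - m) + d2 * math.exp(m2 - m)

scores = [0.5, 2.0, -1.0, 3.0, 0.1, 1.5]
# Split across two "workers", reduce, and compare with the serial result.
m, d = combine(partial_state(scores[:3]), partial_state(scores[3:]))
serial = sum(math.exp(x - max(scores)) for x in scores)
assert abs(d - serial) < 1e-12
```

Because `combine` is associative, the same merge can be applied tree-style over any number of chunks, which is what makes a stream-K-style partitioning of attention across GPU compute units possible.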

\n
Read the paper<\/a><\/div>\n<\/div>\n\n\n\n
\n\n\n\n

NEW RESEARCH<\/h3>\n\n\n\n

WaveCoder: Widespread and Versatile Enhanced Instruction Tuning with Refined Data Generation<\/h2>\n\n\n\n

Recent research demonstrates that an LLM fine-tuned on a high-quality instruction dataset can acquire impressive abilities to address code-related tasks. However, existing methods for instruction data generation often produce duplicate data and offer insufficient control over data quality.<\/p>\n\n\n\n

In a recent paper: WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation<\/a>, researchers from Microsoft extend the generalization of instruction tuning by classifying instruction data into four code-related tasks and propose an LLM-based generator-discriminator framework that produces diverse, high-quality instruction data from open-source code. They introduce CodeSeaXDataset, a dataset of 19,915 instruction instances across four universal code-related tasks, and present WaveCoder, a code LLM fine-tuned with widespread and versatile enhanced instruction tuning, designed specifically to enhance the instruction tuning of code LLMs. Their experiments show that WaveCoder models outperform other open-source models in generalization across different code-related tasks at the same fine-tuning scale. Moreover, WaveCoder exhibits high efficiency in previous code generation tasks.<\/p>\n\n\n\n
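A rough sketch of a generator-discriminator data pipeline of this flavor (the LLM calls are stubbed out; `generate_instruction`, `judge_quality`, and the task names below are hypothetical stand-ins, not the paper\u2019s actual prompts, models, or task taxonomy):

```python
def generate_instruction(code_snippet, task):
    # Stand-in for an LLM generator that turns raw open-source code
    # into an instruction-tuning instance for a given task.
    return {"task": task,
            "instruction": f"Explain: {code_snippet}",
            "code": code_snippet}

def judge_quality(example, seen):
    # Stand-in for an LLM discriminator: reject duplicates and
    # trivially short snippets to control data quality.
    return example["instruction"] not in seen and len(example["code"]) > 10

# Illustrative task labels only (four code-related task categories).
TASKS = ["generation", "summarization", "translation", "repair"]

def build_dataset(snippets):
    dataset, seen = [], set()
    for i, snippet in enumerate(snippets):
        ex = generate_instruction(snippet, TASKS[i % len(TASKS)])
        if judge_quality(ex, seen):
            seen.add(ex["instruction"])
            dataset.append(ex)
    return dataset
```

The generator proposes candidates and the discriminator filters them, which is what gives this style of pipeline its handle on duplication and quality compared with one-shot data generation.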

\n
Read the paper<\/a><\/div>\n\n\n\n
Get the code<\/a><\/div>\n<\/div>\n\n\n\n
\n\n\n\n

TRAINING COURSE<\/h3>\n\n\n\n

New course offers AutoGen training<\/h2>\n\n\n\n

DeepLearning.AI (opens in new tab)<\/span><\/a>, in collaboration with Microsoft and Penn State University, is offering a short training course: AI Agentic Design Patterns with AutoGen<\/strong> (opens in new tab)<\/span><\/a>, centered on the multi-agent framework for next-generation AI applications. Taught by AutoGen creators Chi Wang, principal researcher at Microsoft Research AI Frontiers, and Qingyun Wu, assistant professor at Penn State, the course explores how to use AutoGen to build and customize multi-agent systems, enabling agents to take on different roles and collaborate to accomplish complex tasks. You can find more details in this video (opens in new tab)<\/span><\/a>.<\/p>\n\n\n\n

AutoGen<\/a> was designed to simplify the orchestration, optimization, and automation of LLM workflows, and is widely adopted as a generic programming framework for agentic AI. It offers customizable and conversable agents that leverage the strongest capabilities of the most advanced LLMs, such as GPT-4, while addressing their limitations by integrating with humans and tools and by enabling conversations between multiple agents via automated chat.<\/p>\n\n\n\n

\n
Training course<\/a><\/div>\n<\/div>\n\n\n\n
\n\t\n\t
\n\t\t
\n\t\t\t
\n\t
\n\n\t\t\n\t\t
\n\t\t\t\t\t\t\n\t\t\t\t\t<\/div>\n\t<\/div>\n<\/div>\t\t<\/div>\n\t<\/div>\n\n\t<\/div>\nOpens in a new tab<\/span>","protected":false},"excerpt":{"rendered":"

In this issue: RELEVANCE automatically evaluates creative LLM responses; Recyclable vitrimer-based printed circuit boards; Lean Attention: Hardware-aware scalable attention mechanism; WaveCoder: a fine-tuned code LLM; New AutoGen training course.<\/p>\n","protected":false},"author":37583,"featured_media":1045137,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556,13562,13563,13552,13547,13568],"msr-region":[],"msr-event-type":[],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[199560],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[811276,983424],"related-projects":[1041426,973047],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Arman Salimi","user_id":43338,"display_name":"Arman Salimi","author_link":"Arman Salimi<\/a>","is_active":false,"last_first":"Salimi, Arman","people_section":0,"alias":"armansalimi"},{"type":"user_nicename","value":"Kali Frost","user_id":41284,"display_name":"Kali Frost","author_link":"Kali Frost<\/a>","is_active":false,"last_first":"Frost, Kali","people_section":0,"alias":"kalifrost"},{"type":"user_nicename","value":"Jake Smith","user_id":40891,"display_name":"Jake Smith","author_link":"Jake Smith<\/a>","is_active":false,"last_first":"Smith, Jake","people_section":0,"alias":"jakesmith"},{"type":"user_nicename","value":"Bichlien Nguyen","user_id":35942,"display_name":"Bichlien Nguyen","author_link":"Bichlien Nguyen<\/a>","is_active":false,"last_first":"Nguyen, 
Bichlien","people_section":0,"alias":"bnguy"},{"type":"user_nicename","value":"Rya Sanovar","user_id":43320,"display_name":"Rya Sanovar","author_link":"Rya Sanovar<\/a>","is_active":false,"last_first":"Sanovar, Rya","people_section":0,"alias":"t-ryasanovar"},{"type":"user_nicename","value":"Srikant Bharadwaj","user_id":41644,"display_name":"Srikant Bharadwaj","author_link":"Srikant Bharadwaj<\/a>","is_active":false,"last_first":"Bharadwaj, Srikant","people_section":0,"alias":"srbharadwaj"},{"type":"user_nicename","value":"Renee St. Amant","user_id":43080,"display_name":"Renee St. Amant","author_link":"Renee St. Amant<\/a>","is_active":false,"last_first":"St. Amant, Renee","people_section":0,"alias":"reneestamant"},{"type":"user_nicename","value":"Victor Ruehle","user_id":41027,"display_name":"Victor Ruehle","author_link":"Victor Ruehle<\/a>","is_active":false,"last_first":"Ruehle, Victor","people_section":0,"alias":"virueh"},{"type":"user_nicename","value":"Saravan Rajmohan","user_id":41039,"display_name":"Saravan Rajmohan","author_link":"Saravan Rajmohan<\/a>","is_active":false,"last_first":"Rajmohan, Saravan","people_section":0,"alias":"saravar"},{"type":"user_nicename","value":"Yangyu Huang","user_id":41488,"display_name":"Yangyu Huang","author_link":"Yangyu Huang<\/a>","is_active":false,"last_first":"Huang, Yangyu","people_section":0,"alias":"yanghuan"},{"type":"user_nicename","value":"Can Xu","user_id":40108,"display_name":"Can Xu","author_link":"Can Xu<\/a>","is_active":false,"last_first":"Xu, Can","people_section":0,"alias":"caxu"},{"type":"user_nicename","value":"Wenxiang Hu","user_id":39763,"display_name":"Wenxiang Hu","author_link":"Wenxiang Hu<\/a>","is_active":false,"last_first":"Hu, Wenxiang","people_section":0,"alias":"wenxh"},{"type":"user_nicename","value":"Qiufeng Yin","user_id":33296,"display_name":"Qiufeng Yin","author_link":"Qiufeng Yin<\/a>","is_active":false,"last_first":"Yin, 
Qiufeng","people_section":0,"alias":"qfyin"},{"type":"user_nicename","value":"Chi Wang","user_id":31406,"display_name":"Chi Wang","author_link":"Chi Wang<\/a>","is_active":false,"last_first":"Wang, Chi","people_section":0,"alias":"chiw"}],"msr_type":"Post","featured_image_thumbnail":"\"Research","byline":"","formattedDate":"June 12, 2024","formattedExcerpt":"In this issue: RELEVANCE automatically evaluates creative LLM responses; Recyclable vitrimer-based printed circuit boards; Lean Attention: Hardware-aware scalable attention mechanism; WaveCoder: a fine-tuned code LLM; New AutoGen training course.","_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1045134"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/37583"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=1045134"}],"version-history":[{"count":15,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1045134\/revisions"}],"predecessor-version":[{"id":1048410,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1045134\/revisions\/1048410"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1045137"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1045134"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=1045134"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=1045134"},{"taxonomy":"msr-research-area","embeddable":true,"
href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1045134"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1045134"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=1045134"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1045134"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1045134"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=1045134"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=1045134"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}