{"id":1083645,"date":"2024-09-12T09:00:00","date_gmt":"2024-09-12T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=1083645"},"modified":"2024-11-05T06:41:25","modified_gmt":"2024-11-05T14:41:25","slug":"research-focus-week-of-september-9-2024","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/research-focus-week-of-september-9-2024\/","title":{"rendered":"Research Focus: Week of September 9, 2024"},"content":{"rendered":"\n

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code\/datasets, new hires and other milestones from across the research community at Microsoft.<\/p><\/blockquote><\/figure>\n\n\n\n

\"Decorative<\/figure>\n\n\n\n
\n

NEW RESEARCH<\/h2>\n\n\n\n

Can LLMs be Fooled? Investigating Vulnerabilities in LLMs<\/h2>\n\n\n\n

Large language models (LLMs) are the de facto standard for numerous machine learning tasks, ranging from text generation and summarization to code generation, and they play an integral role in many natural language processing (NLP) tasks. However, recent studies show they are susceptible to adversarial attacks, including prompt injection, jailbreaking, and other strategies. As people and organizations increasingly rely on LLMs, it is critical to understand these vulnerabilities and to mitigate them before deploying LLMs in real-world scenarios. <\/p>\n\n\n\n

In a recent paper: Can LLMs be Fooled? Investigating Vulnerabilities in LLMs<\/a>, researchers from Microsoft examine multiple vulnerability categories, including model-based, training-time, and inference-time vulnerabilities, and then discuss mitigation strategies. These include \u201cmodel editing,\u201d which aims to modify LLMs\u2019 behavior, and \u201cchroma teaming,\u201d which leverages the synergy of different teaming strategies to make LLMs more resilient. This paper synthesizes the findings from each vulnerability category and proposes new directions for research and development. Understanding the focal points of current vulnerabilities will help people better anticipate and mitigate future risks, paving the way for more robust and secure LLMs. <\/p>\n\n\n\n

\n
Read the paper<\/a><\/div>\n<\/div>\n\n\n\n
\n\n\n\n

NEW RESEARCH<\/h2>\n\n\n\n

Total-Duration-Aware Duration Modeling for Text-to-Speech Systems<\/h2>\n\n\n\n

For many text-to-speech (TTS) applications, it is crucial that the total duration of the generated speech can be accurately adjusted to the target duration by modifying the speech rate. For example, in a video dubbing scenario, the output speech must match or closely approximate the duration of the source audio to ensure synchronization with the video. However, the impact of adjusting the speech rate on speech quality, such as intelligibility and speaker characteristics, has been underexplored. <\/p>\n\n\n\n

In a recent paper: Total-Duration-Aware Duration Modeling for Text-to-Speech Systems<\/a>, researchers from Microsoft propose a novel total-duration-aware (TDA) duration model for TTS, where phoneme durations are predicted not only from the text input but also from an additional input specifying the total target duration. They propose a MaskGIT-based duration model that enhances the diversity and quality of the predicted phoneme durations. Test results show that the proposed TDA duration models achieve better intelligibility and speaker similarity than baseline models across various speech-rate configurations. The proposed MaskGIT-based model also generates phoneme durations with higher quality and diversity than its regression and flow-matching counterparts.<\/p>\n\n\n\n

\n
Read the paper<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n\t
\n\t\t\n\n\t\t

\n\t\tSpotlight: Blog post<\/span>\n\t<\/p>\n\t\n\t

\n\t\t\t\t\t\t
\n\t\t\t\t\n\t\t\t\t\t\"White\n\t\t\t\t<\/a>\n\t\t\t<\/div>\n\t\t\t\n\t\t\t
\n\n\t\t\t\t\t\t\t\t\t

Eureka: Evaluating and understanding progress in AI<\/h2>\n\t\t\t\t\n\t\t\t\t\t\t\t\t

How can we rigorously evaluate and understand state-of-the-art progress in AI? Eureka is an open-source framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. Learn more about the extended findings.\u00a0<\/p>\n\t\t\t\t\n\t\t\t\t\t\t\t\t

\n\t\t\t\t\t
\n\t\t\t\t\t\t\n\t\t\t\t\t\t\tRead more\t\t\t\t\t\t<\/a>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t<\/div>\n\t<\/div>\n\t\n\n\n
\n

NEW RESEARCH<\/h2>\n\n\n\n

GEMS: Generative Expert Metric System through Iterative Prompt Priming<\/h2>\n\n\n\n

Metrics and measurements are fundamental to identifying challenges, informing decisions, and resolving conflicts across engineering domains. Despite the abundance of data available, a single expert may struggle to work across multi-disciplinary data, while non-experts may find it unintuitive to create effective measures or transform theories into appropriate context-specific metrics. <\/p>\n\n\n\n

In a recent technical report: GEMS: Generative Expert Metric System through Iterative Prompt Priming<\/a>, researchers from Microsoft and the University of Illinois Urbana-Champaign address this challenge. They examine software communities within large software corporations, where different measures are used as proxies to locate counterparts within the organization and transfer tacit knowledge. They propose a prompt-engineering framework inspired by neural mechanisms, demonstrating that generative models can extract and summarize theories and perform basic reasoning, thereby transforming concepts into context-aware metrics grounded in software repository data that support software communities. While this research focused on software communities, the framework\u2019s applicability could extend across various fields, showcasing expert-theory-inspired metrics that aid in triaging complex challenges.<\/p>\n\n\n\n

\n
Read the paper<\/a><\/div>\n<\/div>\n\n\n\n
\n<\/div>\n\n\n\n
\n

NEW RESEARCH<\/h2>\n\n\n\n

On the Criticality of Integrity Protection in 5G Fronthaul Networks<\/h2>\n\n\n\n

The modern 5G fronthaul, which connects base stations to radio units in cellular networks, is designed to deliver microsecond-level performance guarantees using Ethernet-based protocols. Unfortunately, due to potential performance overheads, as well as misconceptions about the low risk and impact of possible attacks, integrity protection is not considered a mandatory feature in the 5G fronthaul standards. <\/p>\n\n\n\n

In a recent paper: On the Criticality of Integrity Protection in 5G Fronthaul Networks<\/a>, researchers from Microsoft and external colleagues show how the lack of protection can be exploited, making attacks easier and more powerful. They present a novel class of powerful attacks and a set of traditional attacks, both of which can be fully launched from software over open packet-based interfaces, to cause performance degradation or denial of service to users over large geographical regions. These attacks do not require a physical radio presence or signal-based attack mechanisms, do not disrupt the network\u2019s normal operation (e.g., they do not crash the radios), and are highly severe (e.g., impacting multiple cells). The researchers demonstrate that adversaries could degrade the performance of connected users by more than 80%, completely block a subset of users from ever attaching to the cell, or even generate signaling storm attacks of more than 2,500 signaling messages per minute, with just two compromised cells and four mobile users. They also present an analysis of countermeasures that meet the strict performance requirements of the fronthaul.<\/p>\n\n\n\n

\n
Read the paper<\/a><\/div>\n<\/div>\n\n\n\n
\n<\/div>\n\n\n\n
\n\t\n\t
\n\t\t
\n\t\t\t
\n\t
\n\n\t\t\n\t\t
\n\t\t\t\t\t\t\n\t\t\t\t\t<\/div>\n\t<\/div>\n<\/div>\t\t<\/div>\n\t<\/div>\n\n\t<\/div>\n","protected":false},"excerpt":{"rendered":"

Investigating vulnerabilities in LLMs; A novel total-duration-aware (TDA) duration model for text-to-speech (TTS); Generative expert metric system through iterative prompt priming; Integrity protection in 5G fronthaul networks:<\/p>\n","protected":false},"author":42735,"featured_media":1083654,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"categories":[1],"tags":[],"research-area":[13556,243062,13545,13560,13558,13559,13547],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[269148,243984,269142],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-1083645","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-research-area-audio-acoustics","msr-research-area-human-language-technologies","msr-research-area-programming-languages-software-engineering","msr-research-area-security-privacy-cryptography","msr-research-area-social-sciences","msr-research-area-systems-and-networking","msr-locale-en_us","msr-post-option-approved-for-river","msr-post-option-blog-homepage-featured","msr-post-option-include-in-river"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[199565],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[144860,783091],"related-projects":[922440],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Sara Abdali","user_id":42405,"display_name":"Sara Abdali","author_link":"Sara Abdali<\/a>","is_active":false,"last_first":"Abdali, 
Sara","people_section":0,"alias":"saraabdali"},{"type":"user_nicename","value":"Sefik Emre Eskimez","user_id":38655,"display_name":"Sefik Emre Eskimez","author_link":"Sefik Emre Eskimez<\/a>","is_active":false,"last_first":"Eskimez, Sefik Emre","people_section":0,"alias":"seeskime"},{"type":"user_nicename","value":"Xiaofei Wang","user_id":38658,"display_name":"Xiaofei Wang","author_link":"Xiaofei Wang<\/a>","is_active":false,"last_first":"Wang, Xiaofei","people_section":0,"alias":"xiaofewa"},{"type":"user_nicename","value":"Manthan Thakker","user_id":39627,"display_name":"Manthan Thakker","author_link":"Manthan Thakker<\/a>","is_active":false,"last_first":"Thakker, Manthan","people_section":0,"alias":"mathakke"},{"type":"user_nicename","value":"Jinyu Li","user_id":32312,"display_name":"Jinyu Li","author_link":"Jinyu Li<\/a>","is_active":false,"last_first":"Li, Jinyu","people_section":0,"alias":"jinyli"},{"type":"user_nicename","value":"Sheng Zhao","user_id":41137,"display_name":"Sheng Zhao","author_link":"Sheng Zhao<\/a>","is_active":false,"last_first":"Zhao, Sheng","people_section":0,"alias":"szhao"},{"type":"user_nicename","value":"Naoyuki Kanda","user_id":38661,"display_name":"Naoyuki Kanda","author_link":"Naoyuki Kanda<\/a>","is_active":false,"last_first":"Kanda, Naoyuki","people_section":0,"alias":"nakanda"},{"type":"user_nicename","value":"Carmen Badea","user_id":38544,"display_name":"Carmen Badea","author_link":"Carmen Badea<\/a>","is_active":false,"last_first":"Badea, Carmen","people_section":0,"alias":"cabadea"},{"type":"user_nicename","value":"Christian Bird","user_id":31346,"display_name":"Christian Bird","author_link":"Christian Bird<\/a>","is_active":false,"last_first":"Bird, Christian","people_section":0,"alias":"cbird"},{"type":"user_nicename","value":"Rob DeLine","user_id":33370,"display_name":"Rob DeLine","author_link":"Rob DeLine<\/a>","is_active":false,"last_first":"DeLine, 
Rob","people_section":0,"alias":"rdeline"},{"type":"user_nicename","value":"Nicole Forsgren","user_id":40150,"display_name":"Nicole Forsgren","author_link":"Nicole Forsgren<\/a>","is_active":false,"last_first":"Forsgren, Nicole","people_section":0,"alias":"niforsgr"},{"type":"user_nicename","value":"Denae Ford Robinson","user_id":38637,"display_name":"Denae Ford Robinson","author_link":"Denae Ford Robinson<\/a>","is_active":false,"last_first":"Ford Robinson, Denae","people_section":0,"alias":"denae"},{"type":"user_nicename","value":"Xenofon Foukas","user_id":39276,"display_name":"Xenofon Foukas","author_link":"Xenofon Foukas<\/a>","is_active":false,"last_first":"Foukas, Xenofon","people_section":0,"alias":"xefouk"}],"msr_type":"Post","featured_image_thumbnail":"\"Research","byline":"","formattedDate":"September 12, 2024","formattedExcerpt":"Investigating vulnerabilities in LLMs; A novel total-duration-aware (TDA) duration model for text-to-speech (TTS); Generative expert metric system through iterative prompt priming; Integrity protection in 5G fronthaul 
networks:","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1083645"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/42735"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=1083645"}],"version-history":[{"count":11,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1083645\/revisions"}],"predecessor-version":[{"id":1085505,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1083645\/revisions\/1085505"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1083654"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1083645"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=1083645"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=1083645"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1083645"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1083645"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=1083645"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=108364
5"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1083645"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1083645"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=1083645"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=1083645"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}