Research Focus: Week of September 9, 2024

Published

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.

Decorative graphic with wavy shapes in the background in blues and purples. Text overlay in center left reads: “Research Focus: September 9, 2024”

Can LLMs be Fooled? Investigating Vulnerabilities in LLMs

Large language models (LLMs) are the de facto standard for numerous machine learning tasks, ranging from text generation and summarization to even code generation. They also play an integral role in various natural language processing (NLP) tasks. However, recent studies show they are susceptible to adversarial attacks, including prompt injections, jailbreaking and other strategies. As people and organizations increasingly rely on LLMs, it is crucial to be aware of these vulnerabilities and take precautions when deploying them in real-world scenarios. Therefore, understanding and mitigating these vulnerabilities is critical. 

In a recent paper: Can LLMs be Fooled? Investigating Vulnerabilities in LLMs, researchers from Microsoft examine multiple vulnerability categories, including model-based, training-time, and inference-time vulnerabilities, and then discuss mitigation strategies. These include “model editing,” which aims to modify LLMs’ behavior, and “chroma teaming,” which leverages the synergy of different teaming strategies to make LLMs more resilient. This paper synthesizes the findings from each vulnerability category and proposes new directions for research and development. Understanding the focal points of current vulnerabilities will help people better anticipate and mitigate future risks, paving the road for more robust and secure LLMs.  


Total-Duration-Aware Duration Modeling for Text-to-Speech Systems

For many text-to-speech (TTS) applications, it is crucial that the total duration of the generated speech can be accurately adjusted to the target duration by modifying the speech rate. For example, in a video dubbing scenario, the output speech must match or closely approximate the duration of the source audio to ensure synchronization with the video. However, the impact of adjusting the speech rate on speech quality, such as intelligibility and speaker characteristics, has been underexplored. 

In a recent paper: Total-Duration-Aware Duration Modeling for Text-to-Speech Systems, researchers from Microsoft propose a novel total-duration-aware (TDA) duration model for TTS, where phoneme durations are predicted not only from the text input but also from an additional input of the total target duration. They propose a MaskGIT-based duration model that enhances the diversity and quality of the predicted phoneme durations. Test results show that the proposed TDA duration models achieve better intelligibility and speaker similarity for various speech rate configurations compared to baseline models. The proposed MaskGIT-based model can also generate phoneme durations with higher quality and diversity compared to its regression or flow-matching counterparts.

About Microsoft Research

Advancing science and technology to benefit humanity

GEMS: Generative Expert Metric System through Iterative Prompt Priming

Metrics and measurements are fundamental to identifying challenges, informing decisions, and resolving conflicts across engineering domains. Despite the abundance of data available, a single expert may struggle to work across multi-disciplinary data, while non-experts may find it unintuitive to create effective measures or transform theories into appropriate context-specific metrics. 

In a recent technical report: GEMS: Generative Expert Metric System through Iterative Prompt Priming, researchers from Microsoft and University of Illinois Urbana-Champaign address this challenge. They examine software communities within large software corporations, where different measures are used as proxies to locate counterparts within the organization to transfer tacit knowledge. They propose a prompt-engineering framework inspired by neural mechanisms, demonstrating that generative models can extract and summarize theories and perform basic reasoning, thereby transforming concepts into context-aware metrics to support software communities given software repository data. While this research focused on software communities, the framework’s applicability could extend across various fields, showcasing expert-theory-inspired metrics that aid in triaging complex challenges.


On the Criticality of Integrity Protection in 5G Fronthaul Networks

The modern 5G fronthaul, which connects base stations to radio units in cellular networks, is designed to deliver microsecond-level performance guarantees using Ethernet-based protocols. Unfortunately, due to potential performance overheads, as well as misconceptions about the low risk and impact of possible attacks, integrity protection is not considered a mandatory feature in the 5G fronthaul standards. 

In a recent paper: On the Criticality of Integrity Protection in 5G Fronthaul Networks, researchers from Microsoft and external colleagues show how the lack of protection can be exploited, making attacks easier and more powerful. They present a novel class of powerful attacks and a set of traditional attacks, which can both be fully launched from software over open packet-based interfaces, to cause performance degradation or denial of service to users over large geographical regions. These attacks do not require a physical radio presence or signal-based attack mechanisms, do not affect the network’s operation (e.g., not crashing the radios), and are highly severe (e.g., impacting multiple cells). The researchers demonstrate that adversaries could degrade performance of connected users by more than 80%, completely block a subset of users from ever attaching to the cell, or even generate signaling storm attacks of more than 2,500 signaling messages per minute, with just two compromised cells and four mobile users. They also present an analysis of countermeasures that meet the strict performance requirements of the fronthaul.


Microsoft Research in the news

Microsoft works with students to launch 'Golden Record 2.0' into space 

Geekwire | September 5, 2024

Forty-seven years after NASA sent a “Golden Record” into deep space to document humanity’s view of the world, Microsoft’s Project Silica is teaming up with a citizen-science effort to lay the groundwork — or, more aptly, the glasswork — for doing something similar. 

Related: Collaborators: Silica in space with Richard Black and Dexter Greene 

Related publications

Continue reading

See all blog posts