{"id":1150818,"date":"2025-10-06T07:03:54","date_gmt":"2025-10-06T14:03:54","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=1150818"},"modified":"2025-10-21T07:52:58","modified_gmt":"2025-10-21T14:52:58","slug":"when-ai-meets-biology-promise-risk-and-responsibility","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/when-ai-meets-biology-promise-risk-and-responsibility\/","title":{"rendered":"When AI Meets Biology: Promise, Risk, and Responsibility"},"content":{"rendered":"\n
\"Paraphrase<\/figure>\n\n\n\n

<p>Advances in AI are opening extraordinary frontiers in biology. AI-assisted protein engineering holds the promise of new medicines, new materials, and breakthroughs in scientific understanding. Yet these same technologies also introduce biosecurity risks and may lower the barriers to designing harmful toxins or pathogens. This “dual-use” potential, where the same knowledge can be harnessed for good or misused to cause harm, poses a critical dilemma for modern science.</p>

<h2>Great Promise and Potential Threat</h2>

<p>I’m excited about the potential for AI-assisted protein design to drive breakthroughs in biology and medicine. At the same time, I’ve also studied how these tools could be misused. In computer-based studies, we found that AI protein design (AIPD) tools could generate modified versions of proteins of concern, such as ricin. Alarmingly, these reformulated proteins were able to evade the biosecurity screening systems used by DNA synthesis companies, the systems scientists rely on when ordering AI-generated sequences for experimental use.</p>
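<p>Synthesis screening is broadly based on comparing ordered sequences against databases of sequences of concern. Real screeners use far more sophisticated alignment and homology-search methods, but a toy k-mer similarity check (with made-up sequences, not real protein data) illustrates the underlying principle: a near-copy of a flagged sequence is caught, while a heavily redesigned variant can fall below the similarity threshold.</p>

```python
def kmer_set(seq, k=3):
    """All overlapping k-mers of an amino-acid sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(query, reference, k=3):
    """Jaccard similarity between the k-mer sets of two sequences."""
    a, b = kmer_set(query, k), kmer_set(reference, k)
    return len(a & b) / len(a | b)

def flag(query, references, threshold=0.5, k=3):
    """Flag a query if it resembles any reference sequence of concern."""
    return any(similarity(query, ref, k) >= threshold for ref in references)

# Hypothetical toy sequences, chosen only for illustration:
reference  = "MKLVTAAGILSERDNQWPHY"   # stand-in "sequence of concern"
near_copy  = "MKLVTAAGILSERDNQWPHF"   # one substitution: still flagged
redesigned = "MQIVSGAGLLTDREGQWAHY"   # many substitutions: slips through

print(flag(near_copy, [reference]))   # True
print(flag(redesigned, [reference]))  # False
```

<p>The point of the sketch is only that similarity-based screening has a threshold, and a redesign that preserves function while diverging in sequence can land below it; the paper's contribution was patching screeners to better detect such AI-redesigned variants.</p>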

<p>In our paper published in <em>Science</em> on October 2, “Strengthening nucleic acid biosecurity screening against generative protein design tools,” we describe a two-year confidential project we began in late 2023 while preparing a case study for a workshop on AI and biosecurity.</p>

<p>We worked confidentially with partners across organizations and sectors for 10 months to develop AI biosecurity “red-teaming” methods that allowed us to better understand vulnerabilities and craft practical solutions: “patches” that have now been adopted globally, making screening systems significantly more resilient to AI-designed sequences.</p>

\"An
Summary of AIPD red-teaming workflow.<\/figcaption><\/figure>\n\n\n\n

<p>For the structure, methods, and process of our study, we took inspiration from the cybersecurity community, where “zero-day” vulnerabilities are kept confidential until a protective patch is developed and deployed. After a small group of workshop attendees recognized what amounted to a zero-day vulnerability for AI in biology, we worked closely with stakeholders, including DNA synthesis companies, biosecurity organizations, and policymakers, to rapidly create and distribute patches that improved detection of AI-redesigned protein sequences. We delayed public disclosure until protective measures were in place and widely adopted.</p>

<h2>The Dilemma of Disclosure</h2>

<p>The dual-use dilemma also complicates how we share information about vulnerabilities and safeguards. Across AI and other fields, researchers face a core question:</p>

<blockquote>
<p>How can scientists share potentially risk-revealing methods and results in ways that enable progress without offering a roadmap for misuse?</p>
</blockquote>

<p>We recognized that our work itself, detailing methods and failure modes, could be exploited by malicious actors if published openly. To guide decisions about what to share, we held a multi-stakeholder deliberation involving government agencies, international biosecurity organizations, and policy experts. Opinions varied: some urged full transparency to maximize reproducibility and help others build on our work; others stressed restraint to minimize risk. It was clear that a new model of <em>scientific communication</em> was needed, one that could balance openness and security.</p>

<h2>A Novel Framework</h2>

<p>The risk of sharing dangerous information through biological research has become a growing concern. We have participated in community-wide discussions of these challenges, including a recent National Academies of Sciences, Engineering, and Medicine workshop and study.</p>

<p>In preparing our manuscript for publication, we designed a process to limit the spread of dangerous information while still enabling scientific progress.</p>

<p>To address these dual challenges, we devised a tiered access system for data and methods, implemented in partnership with the International Biosecurity and Biosafety Initiative for Science (IBBIS), a nonprofit dedicated to advancing science while reducing catastrophic risks. The system works as follows:</p>