**Description**

This research challenge is to develop policy and practice recommendations for regulators and trade organizations in the biopharma industry, to enable trustworthy use of foundation models and generative AI in the drug development process. According to most estimates, it takes an average of 10-15 years and costs between $1B and $2.5B to bring a drug to the US market. AI/ML could significantly accelerate development timelines, reduce costs, and optimize current processes, while enabling a future of personalized medicine.
Generative AI has demonstrated remarkable capabilities in data structuring and causal reasoning, and further research involving multimodal and longitudinal datasets could transform healthcare at both the individual and population level. Despite the potential benefits, and the opportunity costs of not adopting advanced technologies, there is a lack of shared best practices and regulatory guidance on employing AI in the drug development process. There are also potential barriers to collaboration such as competitive pressures, ethical issues, and cultural dynamics. Establishing appropriate regulatory frameworks and standards will involve building trust and supporting transparency. It may also require policymakers and industry leaders to rethink traditional business models and approaches.
This challenge is timely, as promising use cases for generative AI across the drug development lifecycle are being explored and regulators such as the FDA, EMA, and MHRA are actively seeking feedback. Thus, we propose exploring regulatory frameworks and other conditions necessary for the successful implementation of AI/ML in the biopharma industry. We expect to participate in industry working groups and convene multidisciplinary industry stakeholders to develop balanced, risk-based, and well-grounded regulatory policy and practice proposals.
**Ideal candidate**

The Scientific & Regulatory Affairs team within Health Futures is well placed to support a fellow for this research topic. We have a small but diverse team that combines clinical, legal, compliance, and regulatory expertise in healthcare and life sciences. We translate legal and regulatory requirements into practices that can be operationalized within our dynamic business. We take an agile approach to developing programs that balance compliance (quality systems, safety, privacy, security, ethics, etc.) with support for innovation.
We are seeking a regulatory affairs attorney with an additional degree or strong background in pharmaceutical development and/or healthcare policy (ideally both). A fellow with this expertise and skillset would complement our team by bringing experience developing new policy proposals and putting them into practice in real-world biopharma settings. This would provide our team with an outside-in perspective and ground our work in practical experience.
The ideal candidate is also a futurist with a general understanding of different types of AI systems who appreciates their potential to transform the drug development process as we know it. They would have a track record of applying innovative approaches to law, regulation, and policy, and would develop policy positions in a highly complex and dynamic area. Although proposed standards may have little precedent, they would be informed by a background in the drug development field and an understanding of implementation challenges.
Alongside this expertise, the fellow should have demonstrated success in policy or academic writing, as well as in leading policy discussions, so that our research questions can be turned into productive, high-quality outputs.
Fellowship candidates are still being considered for this research challenge, and we may announce additional fellows during the month of February. If you have applied to this challenge, you can expect to receive your proposal status notification by the end of February 2024. For questions, please contact msfellow@microsoft.com.

**Additional details**

The use of AI, including generative AI, in drug development is becoming increasingly popular due to its ability to accelerate the discovery of new drugs, reduce R&D spending, and efficiently analyze real-world data. AI has the potential to generate $100B in value across the pharmaceutical and medical product industries, according to McKinsey¹. However, there are concerns about the trustworthiness of generative AI models and their ability to produce reliable results.

This research challenge is to develop policy recommendations for regulators and relevant trade organizations in the biopharma industry to enable use of generative AI in the drug development process. We would examine existing regulatory frameworks for the use of AI in medical devices and propose measures to address specific risks associated with the use of generative AI models. This work aligns with Microsoft's work on Responsible AI generally, and with the AI innovations being developed within Health Futures specifically, including AI technologies that generate real-world evidence to support regulatory decision-making.

An appropriate regulatory framework must deal with questions about accountability, transparency, explainability, privacy, fairness, and reliability and safety. For example:
- AI for drug discovery and repurposing: AI can be used to identify new drug candidates, optimize existing drugs, or find new indications for approved drugs by analyzing large and complex datasets. However, the underlying algorithms and models may not be fully interpretable or explainable, which raises questions about how to validate their accuracy, reliability, and safety. The industry needs clear guidance on how to ensure data quality, model transparency, and algorithmic accountability for AI-based drug discovery and repurposing.
- AI for clinical trial design and execution: AI can be used to enhance various aspects of clinical trials, such as patient recruitment, stratification, randomization, monitoring, adherence, endpoint assessment, and analysis. However, the use of AI may introduce new sources of variability, uncertainty, or bias that may affect the validity and integrity of the trial results. The industry needs clear guidance on how to ensure scientific rigor, ethical conduct, and regulatory compliance for AI/ML-based clinical trials.
In addition to proposing an appropriate risk-based regulatory framework, our research will include a consideration of implementation science, to facilitate the successful implementation of AI systems and processes within the biopharma industry. This work may involve conducting interviews to understand common workflows and developing a change management protocol, to enable stakeholders to adapt their internal processes and culture to successfully implement these technologies.

Medical drug and device regulators, such as the US FDA and UK MHRA, have not yet provided guidance on how to ensure the trustworthiness of AI in the drug development process, and this area is ripe for regulatory innovation proposals. We propose examining specific use cases to assess which AI systems are fit for which purposes, and whether some systems should be classified for research use only rather than for use in regulatory reviews. Regulators are seeking feedback from stakeholders² to shape the future landscape, and this is an opportune time to consider policy proposals to advance the use of LLMs and other foundation and generative models in this critical industry.