{"id":995550,"date":"2024-01-05T08:03:59","date_gmt":"2024-01-05T16:03:59","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=995550"},"modified":"2025-03-31T12:31:51","modified_gmt":"2025-03-31T19:31:51","slug":"afmr-responsible-ai","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/afmr-responsible-ai\/","title":{"rendered":"AFMR: Responsible AI"},"content":{"rendered":"
\n\t
\n\t\t
\n\t\t\t\"white\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
Accelerating Foundation Models Research

Responsible AI
"Academic research plays such an important role in advancing science, technology, culture, and society. This grant program helps ensure this community has access to the latest and leading AI models."

Brad Smith, Vice Chair and President
\"green<\/figure>\n\n\n\n

AFMR Goal: Align AI with shared human goals, values, and preferences via research on models<\/h2>\n\n\n\n

which enhances safety, robustness, sustainability, responsibility, and transparency, while ensuring rapid progress can be measured via new evaluation methods<\/p>\n<\/div>\n\n\n\n

<\/div>\n<\/div>\n\n\n\n
These projects aim to make AI more responsible by focusing on safety, preventing misinformation, and making auditing easier to understand. They investigate defenses against harmful attacks and inappropriate responses, the use of feedback and fact-checking to combat misinformation, and the incorporation of logical reasoning for better auditing. They also address the safety of personalized AI models, reduce bias by involving multiple perspectives, and build a thorough evaluation system for responsible AI. The methods include comparing different approaches, fact-checking, integrating reasoning into the auditing framework, human collaboration, and benchmarking. Expected outcomes include stronger defenses against certain types of attacks, improved factual accuracy, safer personalized AI models, less biased solutions, and an evolving evaluation system for responsible AI.
Alabama A&M University<\/strong>: Xiang (Susie) Zhao (PI)<\/p>\n\n\n\n

Environmental justice analysis fosters the fair treatment and involvement of all people, regardless of race, color, national origin, or income, in economic development, sustainability, resource allocation, environmental protection, and related decisions. In particular, it plays a critical role in intelligent disaster recovery and city planning, where it saves lives, assets, and energy. Many government agencies, including NASA, NOAA, the CDC, and the EPA, provide full and open access to datasets that can support environmental justice research and identify vulnerable populations and environmental challenges. However, researchers and students at HBCUs/MSIs find these datasets difficult to understand and use because of varied and complex data formats, limited computing resources, and heavy workloads. This project aims to bridge that gap and strengthen research and education capabilities at HBCUs/MSIs using Microsoft foundation models and the Azure cloud platform. Azure OpenAI GPT-4 and DALL-E 2 will be used for natural language processing to survey and process scientific literature, government reports, and blogs related to environmental justice, disaster recovery, and city planning. An RA-Bot will be developed to assist researchers and decision makers by answering inquiries, generating summaries, and performing classification and sentiment analysis.
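To illustrate the kind of Azure OpenAI call such an assistant would make, here is a minimal sketch of summarizing and labeling one report, assuming a GPT-4 deployment named "gpt-4" and endpoint/key environment variables; the `summarize_and_classify` helper and the prompt wording are illustrative, not part of the project.

```python
import os
from openai import AzureOpenAI  # Azure-flavored client from the openai Python SDK

# Assumed environment: AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and a GPT-4
# deployment named "gpt-4" in the Azure OpenAI resource (names are illustrative).
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def summarize_and_classify(document: str) -> str:
    """Ask the model for a short summary plus a sentiment label for one report."""
    response = client.chat.completions.create(
        model="gpt-4",  # Azure deployment name, not the raw model id
        messages=[
            {"role": "system",
             "content": "You summarize environmental justice and disaster recovery "
                        "documents and label their overall sentiment."},
            {"role": "user",
             "content": "Summarize in 3 sentences and give a sentiment label "
                        f"(positive/neutral/negative):\n\n{document}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_and_classify("Example text from a disaster recovery report..."))
```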

Monash University Malaysia: Sailaja Rajanala (PI)

The proposal aims to enhance auditing of large language models (LLMs) by integrating causal and logical reasoning into the Selection-Inference (SI) framework, offering a deeper understanding of how LLMs function and make decisions. It seeks to identify and mitigate biases and to ensure that LLM-generated content is ethically compliant. The research also aims to create auditing pipelines that could be transferred to other AI systems.
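Selection-Inference prompting alternates between a selection step that picks relevant known facts and an inference step that derives one new fact from them, which is what makes each reasoning step inspectable for auditing. The following is a minimal sketch of that alternation under stated assumptions; the `llm` callable, prompt wording, and stopping rule are placeholders, not the project's actual pipeline.

```python
from typing import Callable, List

def selection_inference(
    llm: Callable[[str], str],   # placeholder: any function mapping a prompt to text
    facts: List[str],
    question: str,
    max_steps: int = 5,
) -> List[str]:
    """Alternate selection and inference steps, logging each step for auditing."""
    trace = []            # audit trail: one "SELECT ... => INFER ..." record per step
    known = list(facts)
    for _ in range(max_steps):
        # Selection: ask the model which known facts are relevant to the question.
        selection = llm(
            "Facts:\n" + "\n".join(known) +
            f"\n\nQuestion: {question}\nSelect the facts needed for the next step."
        )
        # Inference: derive exactly one new fact from the selected facts only.
        inference = llm(
            f"Selected facts:\n{selection}\n\nInfer one new fact that follows."
        )
        trace.append(f"SELECT: {selection} => INFER: {inference}")
        known.append(inference)
        # An auditing pipeline could insert logical/causal consistency checks here.
        if "answer" in inference.lower():
            break
    return trace
```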

University of Texas at Arlington: Faysal Hossain Shezan (PI)

The prevalence of vulnerable code poses a significant threat to software security, allowing attackers to exploit weaknesses and compromise systems. Traditional manual vulnerability detection is expensive and requires substantial domain expertise. Automated approaches, particularly those based on program analysis techniques such as symbolic execution, have shown promise but face challenges in path convergence, scalability, accuracy, and handling complex language features. We propose a hybrid approach that combines a large language model (LLM), such as GPT-4, with a state-of-the-art symbolic execution tool such as KLEE. Our approach aims to enhance symbolic execution by mitigating its inherent challenges. The strategy dynamically prioritizes execution paths based on contextual relevance and potential vulnerability disclosure: the LLM guides symbolic execution toward paths likely to yield significant outcomes, adapting strategies as the context and analysis information evolve. Additionally, we will incorporate semantic information from the LLM to generate more meaningful constraints, reducing constraint complexity and directing symbolic execution toward pertinent paths.
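As a rough illustration of the path-prioritization idea only (this is not KLEE's actual API; the state representation, the `successors` function, and the `llm_score` helper are assumptions for the sketch), a worklist-style search loop could rank pending symbolic states by an LLM-provided relevance score before choosing which path to explore next.

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass(order=True)
class SymbolicState:
    priority: float                        # lower value = explored first
    path_constraints: List[str] = field(compare=False)
    location: str = field(compare=False)   # e.g. "parse_header: line 42"

def explore(
    initial: SymbolicState,
    successors: Callable[[SymbolicState], List[SymbolicState]],
    llm_score: Callable[[SymbolicState], float],  # assumed: LLM rates relevance in 0..1
    budget: int = 1000,
) -> List[SymbolicState]:
    """Worklist-style symbolic exploration where the LLM score drives path priority."""
    worklist: List[SymbolicState] = []
    initial.priority = -llm_score(initial)         # negate so high relevance pops first
    heapq.heappush(worklist, initial)
    explored = []
    while worklist and budget > 0:
        state = heapq.heappop(worklist)
        explored.append(state)
        budget -= 1
        for nxt in successors(state):              # branch targets with extended constraints
            nxt.priority = -llm_score(nxt)         # contextual relevance / likely vulnerability
            heapq.heappush(worklist, nxt)
    return explored
```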

Related papers: