{"id":1019952,"date":"2024-04-02T09:02:54","date_gmt":"2024-04-02T16:02:54","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=1019952"},"modified":"2024-12-02T10:35:10","modified_gmt":"2024-12-02T18:35:10","slug":"earn-trust-ai-for-good","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/earn-trust-ai-for-good\/","title":{"rendered":"Earn Trust – AI for Good"},"content":{"rendered":"
\n\t
\n\t\t
\n\t\t\t\"A\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\tAI for Good Lab\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n

Earn trust<\/h1>\n\n\n\n

<\/p>\n\n\n\n

Learn more<\/a><\/div>\n\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/section>\n\n\n\n\n\n
\"AI4Good<\/figure>
\n

Building trust with ethical AI<\/h2>\n\n\n\n

To create positive impact with technology, people must be able to trust the technologies they use and the companies behind them. That\u2019s why we\u2019re committed to the responsible use of AI, protecting privacy, and advancing digital safety and cybersecurity.<\/p>\n\n\n\n

\n
How do we best govern AI?<\/a><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n
\n\t\n\t
\n\t\t
\n\t\t\t

How we work<\/h2>\n\n\n\n
\n
\n
\"a<\/figure>\n\n\n\n
<\/div>\n\n\n\n

Responsible AI<\/h4>\n\n\n\n

Microsoft is dedicated to advancing ethical AI principles that guide us to build safe, secure, and transparent AI systems designed to benefit society. As technological advancements accelerate, our efforts to govern AI responsibly must evolve accordingly.<\/p>\n\n\n\n

\n
Visit site<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n
\n
\"Woman<\/figure>\n\n\n\n
<\/div>\n\n\n\n

Privacy<\/h4>\n\n\n\n

At Microsoft, we value, protect, and defend privacy. We believe in transparency, so that people and organizations can control their data and have meaningful choices in how it is used.<\/p>\n\n\n\n

\n
Visit site<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n
\n
\"Three<\/figure>\n\n\n\n
<\/div>\n\n\n\n

Digital safety<\/h4>\n\n\n\n

Microsoft works to preserve digital safety while respecting human rights like privacy, freedom of speech, and security. Safe online spaces help people create, connect, and share knowledge.<\/p>\n\n\n\n

\n
Visit site<\/a><\/div>\n<\/div>\n<\/div>\n<\/div>\t\t<\/div>\n\t<\/div>\n\n\t<\/div>\n\n\n\n
\n\t\n\t
\n\t\t
\n\t\t\t

Our stories: protecting privacy, advancing safety<\/h2>\n\n\n\n
\n
\n
\"Women<\/figure>\n\n\n\n
<\/div>\n\n\n\n

Preserving privacy with synthetic data<\/h4>\n\n\n\n

Our research explores the effectiveness of synthetic data in critical fields like healthcare and humanitarian action, where data privacy is non-negotiable. We found that models trained on synthetic data can match the performance of those trained on real data, while upholding privacy and fairness standards.<\/p>\n\n\n\n

\n
Publication<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n
\n
\"Man<\/figure>\n\n\n\n
<\/div>\n\n\n\n

Multifactor authentication protection<\/h4>\n\n\n\n

Our study reveals that implementing multifactor authentication (MFA) offers outstanding protection, with over 99.99% of MFA-enabled accounts remaining secure during the investigation period. Moreover, MFA reduces the risk of compromise by 99.22% across the entire population.<\/p>\n\n\n\n

\n
Publication<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n
\n
\"AI<\/figure>\n\n\n\n
<\/div>\n\n\n\n

Unreliable website engagement<\/h4>\n\n\n\n

Misinformation is a growing concern in today\u2019s digital world. Our recent study, co-authored with researchers from Princeton, reveals that most people engage with unreliable websites from search because they actively seek them out, rather than being unknowingly led there.<\/p>\n\n\n\n

\n
Publication<\/a><\/div>\n\n\n\n
Article<\/a><\/div>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n
\n
\n
\"Crowd<\/figure>\n\n\n\n
<\/div>\n\n\n\n

Abusive AI-generated content<\/h4>\n\n\n\n

AI-generated deepfakes are increasingly used for fraud, abuse, and manipulation, especially targeting children and seniors. Tackling abusive AI-generated content requires collaboration across sectors. Together we can harness AI\u2019s potential for good while creating a safer and more trustworthy digital environment.<\/p>\n\n\n\n

\n
Publication<\/a><\/div>\n\n\n\n
Article<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n
\n
\"two<\/figure>\n\n\n\n
<\/div>\n\n\n\n

Detecting deepfakes with AI<\/h4>\n\n\n\n

AI-generated content is increasingly hard to detect, with only 61% of participants in a recent study able to distinguish real photos from AI-generated ones. To combat this, Microsoft and TrueMedia have developed an AI-powered app to help identify deepfakes and fight the spread of misinformation.<\/p>\n\n\n\n

\n
Video<\/a><\/div>\n\n\n\n
Quiz<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n
\n
<\/div>\n<\/div>\n<\/div>\t\t<\/div>\n\t<\/div>\n\n\t<\/div>\n\n\n\n\n\n

<\/p>\n","protected":false},"excerpt":{"rendered":"

To create positive impact with technology, people must be able to trust the technologies they use and the companies behind them. That\u2019s why we\u2019re committed to the responsible use of AI, protecting privacy, and advancing digital safety and cybersecurity.<\/p>\n","protected":false},"featured_media":1033074,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13556],"msr-locale":[268875],"msr-impact-theme":[261676],"msr-pillar":[],"class_list":["post-1019952","msr-project","type-msr-project","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"","related-publications":[],"related-downloads":[],"related-videos":[],"related-groups":[696544],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[],"msr_research_lab":[],"msr_impact_theme":["Trust"],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1019952","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":18,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1019952\/revisions"}],"predecessor-version":[{"id":1108500,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1019952\/revisions\/1108500"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/1033074"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1019952"}],"wp:term":[{"ta
xonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1019952"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1019952"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1019952"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=1019952"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}