Responsible AI Archives | The Microsoft Cloud Blog

More value, less risk: How to implement generative AI across the organization securely and responsibly
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/
Mon, 04 Nov 2024 16:00:00 +0000

The technology landscape is undergoing a massive transformation, and AI is at the center of this change—presenting new opportunities as well as new threats. While adversaries can use AI to execute malicious activities, it can also be a game changer for organizations, helping them defeat cyberattacks at machine speed. Already, generative AI stands out as a transformative technology that can boost innovation and efficiency. To maximize its advantages, we need to strike a balance between addressing the potential risks and embracing innovation. In our recent strategy paper, “Minimize Risk and Reap the Benefits of AI,” we provide a comprehensive guide to navigating the challenges and opportunities of using generative AI.

Addressing security concerns and implementing safeguards

According to a recent survey conducted by ISMG, the top concerns for both business executives and security leaders using generative AI in their organizations range from data security and governance to transparency, accountability, and regulatory compliance.1 In this paper, the first in a series on AI compliance, governance, and safety from the Microsoft Security team, we provide business and technical leaders with an overview of potential security risks when deploying generative AI, along with insights into recommended safeguards and approaches for adopting the technology responsibly and effectively.

Learn how to deploy generative AI securely and responsibly

In the paper, we explore five critical areas to help ensure the responsible and effective deployment of generative AI: data security, managing hallucinations and overreliance, addressing biases, legal and regulatory compliance, and defending against threat actors. Each section provides essential insights and practical strategies for navigating these challenges. 

Infographic: the top five security and business leader concerns are data security, hallucinations, threat actors, biases, and legal and regulatory compliance.

Data security

Data security is a top concern for business and cybersecurity leaders. Specific worries include data leakage, over-permissioned data, and improper internal sharing. Traditional methods like applying data permissions and lifecycle management can enhance security. 
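One concrete way to apply existing data permissions to generative AI is to filter retrieved documents against the requesting user's access rights before anything reaches a prompt. The sketch below is a minimal illustration of that idea; the document store, group names, and helper function are hypothetical and not part of any specific Microsoft product.

```python
# Minimal sketch: enforce existing data permissions before grounding an AI response.
# The documents, groups, and user lookup below are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)  # groups permitted to read this document

def retrieve_for_user(query: str, user_groups: set, corpus: list) -> list:
    """Return only documents the user is already allowed to read."""
    readable = [d for d in corpus if d.allowed_groups & user_groups]
    # A real system would rank `readable` by relevance to `query`;
    # here we simply return the permitted subset.
    return readable

corpus = [
    Document("hr-001", "Salary bands for FY25 ...", {"hr"}),
    Document("kb-042", "How to reset your VPN token ...", {"all-employees"}),
]

# A user in "all-employees" but not "hr" never sees the HR document,
# so the model cannot leak it, no matter how the prompt is phrased.
context = retrieve_for_user("reset VPN", {"all-employees"}, corpus)
print([d.doc_id for d in context])  # ['kb-042']
```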

Managing hallucinations and overreliance

Generative AI hallucinations can lead to inaccurate information and flawed decisions. We explore techniques to help ensure the accuracy of AI output and minimize overreliance risks, including grounding responses in trusted data sources and using AI red teaming.
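As a rough illustration of grounding, the sketch below builds a prompt that instructs a model to answer only from retrieved passages and to say so when the passages do not contain the answer. The passage list is made up, and the model call is left as a placeholder for whatever endpoint you actually use.

```python
# Minimal grounding sketch: constrain the model to trusted passages.
# The passages are illustrative; the model call is a placeholder, not a specific product API.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the numbered passages below. "
        "Cite passage numbers in your answer. If the passages do not contain the answer, "
        "reply exactly: 'I don't know based on the provided sources.'\n\n"
        f"Passages:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

passages = [
    "The 2024 expense policy caps meal reimbursement at $75 per day.",
    "Travel must be booked through the approved corporate portal.",
]
prompt = build_grounded_prompt("What is the daily meal reimbursement cap?", passages)
# response = call_model(prompt)  # send to the model endpoint of your choice
print(prompt)
```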

Defending against threat actors

Threat actors use AI for cyberattacks, making safeguards essential. We cover protecting against malicious model instructions, AI system jailbreaks, and AI-driven attacks, emphasizing authentication measures and insider risk programs. 

Addressing biases

Reducing bias is crucial to help ensure fair AI use. We discuss methods to identify and mitigate biases from training data and generative systems, emphasizing the role of ethics committees and diversity practices.

Legal and regulatory compliance

Navigating AI regulations is challenging due to unclear guidelines and global disparities. We offer best practices for aligning AI initiatives with legal and ethical standards, including establishing ethics committees and leveraging frameworks like the NIST AI Risk Management Framework.

Explore concrete actions for the future

As your organization adopts generative AI, it’s critical to implement responsible AI principles—including fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability. In this paper, we provide an effective approach that uses the “map, measure, and manage” framework as a guide, and we explore the importance of experimentation, efficiency, and continuous improvement in your AI deployment.

I’m excited to launch this series on AI compliance, governance, and safety with a strategy paper on minimizing risk and enabling your organization to reap the benefits of generative AI. We hope this series serves as a guide to unlock the full potential of generative AI while ensuring security, compliance, and ethical use—and trust the guidance will empower your organization with the knowledge and tools needed to thrive in this new era for business.

Additional resources

Get more insights from Bret Arsenault on emerging security challenges in his Microsoft Security blogs, which cover topics like next-generation built-in security, insider risk management, managing hybrid work, and more.


1, 2 ISMG’s First annual generative AI study – Business rewards vs. security risks: Research report, ISMG.

AI safety first: Protecting your business and empowering your people
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/10/31/ai-safety-first-protecting-your-business-and-empowering-your-people/
Thu, 31 Oct 2024 15:00:00 +0000

Every technology can be used for good or bad. This was as true for fire and for writing as it is for search engines and for social networks, and it is very much true for AI. You can probably think of many ways that these latter two have helped and harmed in your own life—and you can probably think of the ways they’ve harmed more easily, because those stick out in our minds, while the countless ways they helped (finding your doctor, navigating to their office, the friends you made, the jobs you got) fade into the background of life. You’re not wrong to think this: when a technology is new it’s unfamiliar, and every aspect of it attracts our attention—how often do you get astounded by the existence of writing nowadays?—and when it doesn’t work, or gets misused, it attracts our attention a lot.

The job of the people who build technologies is to make them as good as possible at helping, and as bad as possible at harming. That’s what my job is: as CVP and Deputy CISO of AI Safety and Security at Microsoft, I have the rare privilege of leading a team whose job is to look at every aspect of every AI system we build, and figure out ways to make them safer and more effective. We use the word “safety” very intentionally, because our work isn’t just about security, or privacy, or abuse; our scope is simply “if it involves AI, and someone or something could get hurt.”

But the thing about tools is that no matter how safe you make them, they can go wrong and they can be misused, and if AI is going to be a major part of our lives—which it almost certainly is—then we all need to learn how to understand it, how to think about it, and how to keep ourselves safe both with and from it. So as part of Cybersecurity Awareness Month, we’ve created some resources like the Be Cybersmart Kit to help individuals and organizations learn about some of the most important risks and how to protect themselves.

I’d like to focus on the three risks that are most likely to affect you directly as individuals and organizations in the near future: overreliance, deepfakes, and manipulation. The most important lesson is that AI safety is about a lot more than how it’s built—it’s about the ways we use it.

Overreliance on AI

Because my job has “security” in the title, when people ask me about the number one risk from AI they often expect me to talk about sophisticated cyberattacks. But the reality is that the number one way in which people get hurt by AI is by not knowing when (not) to trust it. If you were around in the late 1990s or early 2000s, you might remember a similar problem with search engines: people were worried that if someone saw something on the Internet, all nicely written and formatted, they would assume whatever they read was true—and unfortunately, this worry was well-founded. This might seem ridiculous to us with twenty years of additional experience with the Internet; didn’t people know that the Internet was written by people? Had they ever met people? But at the time, very few people ever encountered professionally-formatted text with clean layouts that wasn’t the result of a lengthy editorial process; our instincts for what “looked reputable” were wrong. Today’s AI raises a similar concern because it communicates with you, and we aren’t used to things that speak to us in natural language yet don’t understand basic things about our lives.

We call this problem “overreliance,” and it comes in four basic shapes:

  • Naive overreliance happens when users simply don’t realize that just because responses from AI sound intelligent and well-reasoned, that doesn’t mean the responses actually are smart. They treat the AI like an expert instead of like a helpful, but sometimes naive, assistant.
  • Rushed overreliance happens when people know they need to check, but they just don’t have time to—maybe they’re in a fast-paced environment, or they have too many things to check one by one, or they’ve just gotten used to clicking “accept.”
  • Forced overreliance is what happens when users can’t check, even if they want to; think of an AI helping a non-programmer write a complex website (are you going to check the code for bugs?) or vision augmentation for the blind.
  • Motivated overreliance is maybe the sneakiest: it happens when users have an answer they want to get, and keep asking around (or rephrasing the question, or looking at different information) until they get it.

In each case, the problem with overreliance is that it undermines the human role in oversight, validation, and judgment, which is crucial in preventing AI mistakes from leading to negative outcomes.

How to stay safe

The most important thing you can do to protect yourself is to understand that AI systems aren’t the infallible computers of science fiction. The best way to think of them is as earnest, smart, junior colleagues—excited to help and sometimes really smart but sometimes also really dumb. In fact, this rule applies to a lot more than just overreliance: we’ve found that asking “how would I make this safe if it were a person instead of an AI?” is one of the most reliable ways to secure an AI system against a huge range of risks.

  1. Treat AI as a tool, not a decision-maker: Always verify the AI’s output, especially in critical areas. You wouldn’t hand a key task to a new hire and assume what they did is perfect; treat AI the same way. Whether it’s generating code or producing a report, review it carefully before relying on it.
  2. Maintain human oversight: Think of this as building a business process. If you’re going to be using an AI to help make decisions, who is going to cross-check that? Will someone be overseeing the results for compliance, maybe, or doing a final editorial pass? This is especially true in high-stakes or regulated environments where errors could have serious consequences.
  3. Use AI for brainstorming: AI is at its best when you ask it to lean into its creativity. It’s especially good at helping come up with ideas and interactively brainstorming. Don’t ask AI to do the job for you; ask AI to come up with an idea for your next step, think about it and maybe tweak it a bit, then ask it about its thoughts for what to do next. This way its creativity is boosting yours, while your eye is still on whether the result is what you want.

Train your team to know that AI can make mistakes. When people understand AI’s limitations, they’re less likely to trust it blindly.

Impersonation using AI

Deepfakes are highly realistic images, recordings, and videos created by AI. They’re called “fakes” when they’re used for deceptive purposes—and both this threat and the next one are about deception. Impersonation is when someone uses a deepfake to convince you that you’re talking to someone that you aren’t. This threat can have serious implications for businesses, as bad actors can use deepfake technology to deceive others into making decisions based on fraudulent information.

Imagine someone creates a deepfake of your chief financial officer’s voice and uses it to convince an employee to authorize a fraudulent transfer. This isn’t hypothetical—it already happened. A company in Hong Kong was taken for $25.6 million using this exact technique.1

The real danger lies in how convincingly these AI-generated voices and videos can mimic trusted individuals, making it hard to know who you’re talking to. Traditional methods of identifying people—like hearing their voice on the phone or seeing them on a video call—are no longer reliable.

How to stay safe

As deepfakes become more compelling, the best defense is to communicate with people in ways where recognizing their face or voice isn’t the only thing you’re relying on. That means using authenticated communication channels like Microsoft Teams or email rather than phone calls or SMS, which are trivial to fake. Within those channels, you need to check that you’re talking to the person you think you’re talking to, and that software (if built right) can help you do that.

In the Hong Kong example above, the bad actor sent an email from a fake but realistic-looking email address inviting the victim to a Zoom meeting on an attacker-controlled but realistically-named server, where they had a conversation with “coworkers” who were actually all deepfakes. Email services such as Outlook can prevent situations like this by vividly highlighting that this is a message from an unfamiliar email address and one that isn’t part of your company; enterprise video conferencing (VC) systems like Teams can identify that you’re connecting to a system outside your own company as a guest. Use tools that provide indicators like these and pay attention to them.

If you find that you need to talk over an unauthenticated channel—say, you get a phone call from a family member in a bad situation and desperately needing you to send them money, or you get a WhatsApp message from an unfamiliar number—consider pre-arranging some secret code words with people you know so you can identify that they’re really who they say they are.

All of these are examples of a familiar technique that we use in security called multi-factor authentication (MFA), which is about using multiple means to verify someone is who they say they are. If you communicate over an authenticated channel, an attacker has to both compromise an account on your service (which itself should be protected by multiple factors) and create a convincing deepfake of that particular person. Forcing attackers to pull off multiple different attacks against the same target at once makes the job exponentially harder for them. Most important services you use (email, social networks, and so on) allow you to set up MFA, and you should always do this when you can—preferably using “strong” MFA methods like physical keys or mobile apps, rather than weak methods like SMS, which are easily faked. According to our latest Microsoft Digital Defense Report, implementing modern MFA reduces the likelihood of account compromise by 99.2%, significantly strengthening security and making it much harder for attackers to gain unauthorized access. Although MFA techniques reduce the risk of identity compromise, many organizations have been slow to adopt them. So, in January 2020, Microsoft introduced “security defaults” that turn on MFA while turning off basic and legacy authentication for new tenants and those with simple environments. The impact is clear: tenants that use security defaults experience 80% fewer compromises than tenants that don’t.
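To make the “strong MFA via mobile apps” point concrete, the sketch below shows how time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, are generated and checked. It uses the open-source pyotp library and illustrates the concept only; it is not Microsoft’s implementation, and the service name and account are placeholders.

```python
# Illustration of app-based MFA using TOTP (the scheme behind most authenticator apps).
# Requires the open-source `pyotp` package: pip install pyotp
import pyotp

# Enrollment: the service generates a shared secret and shows it to the user
# (usually as a QR code) so their authenticator app can store it.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleService"))

# Sign-in: the user types the six-digit code currently shown in their app.
code_from_user = totp.now()  # in real life this comes from the user's device

# Verification: the server checks the code against the shared secret.
# valid_window=1 tolerates small clock drift between device and server.
print(totp.verify(code_from_user, valid_window=1))  # True
```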

Scams, phishing, and social manipulation

Beyond impersonating someone you know, AI can be used to power a whole range of attacks against people. The most expensive part of running a scam is taking the victim from the moment they first pick up the bait—answering an email message, perhaps—to the moment the scammers get what they want, be it your password or your money. Phishing campaigns often require work to create cloned websites to steal your credentials. Spear-phishing requires crafting a targeted set of lures for each potential victim. All of these are things that bad actors can do much more quickly and easily with AI tools to help them; they are, after all, the same tools that good actors use to automate customer service, website building, or document creation.

On top of scams, an increasingly important use of AI is in social manipulation, especially by actors with political goals—whether they be real advocacy organizations or foreign intelligence services. Since the mid-2010s, a key goal of many governments has been to sow confusion in the information world in order to sway political outcomes. This can include:

  • Convincing you that something is true when it isn’t—maybe that some kind of crime is rampant and you need to be protected from it, or that your political enemies have been doing something awful.
  • Convincing you that something isn’t true when it is—maybe that the bad things they were caught doing are actually deepfakes and frauds.
  • Simply convincing you that you can’t know what’s true, and you can’t do anything about it anyway, so you should just give up and stay home and not try to affect things.

There are a lot of tricks to doing this, but the most important ones are to make it feel like “everybody feels” something (by making sure you see just enough comments saying something that you figure it must be right, and you start repeating them, making other people believe it even more) and to tell you what you want to hear—creating false stories that line up with what you’re already expecting to believe. (Remember motivated overreliance? This is the same thing!)

AI is supercharging this space as well; it used to be that if you wanted to make sure that every hot conversation about a subject had people voicing your opinion, you needed either very non-human-sounding scripts or a room full of hired operators. Today, all you need is a computer.

You can learn more about these attacks on our threat intelligence website, Microsoft Security Insider.

How to stay safe

Take your current habits for being aware of potential scams or phishing attempts, and turn them up a notch. Just because something showed up at the top of search results doesn’t mean it’s legitimate. Look at things like URLs and source email addresses carefully, and see if you’re looking at something genuine or not.

To detect sophisticated phishing attempts, always verify both the source and the information with trusted channels. Cybercriminals often create a false sense of urgency, use amplification tactics, and mimic trustworthy sources to make their emails or content appear legitimate. Stay especially cautious when approached by unfamiliar individuals online, as most fraud or influence operations begin with a simple social media reply or a seemingly innocent “wrong number” message. (More sophisticated attacks will send friend requests to people, and once you get one person to say yes, your further requests to their friends will look more legitimate, since they now have mutual “friends” with the attacker.)

Social manipulation can affect you both directly (you see messages created by a threat actor) or indirectly (your friends saw those messages and unwittingly repeated them). This means that just because you hear something from someone you trust, you can’t be sure they didn’t get fooled too. If you’re forming your opinion about something, or if you need to make an important decision about whether you believe something or not, do some research, and figure out where a story came from. (And don’t forget that “they won’t tell you about this!” is a common thing to add to frauds, just to make you believe that the lack of news coverage makes it more true.)

But on the other hand, don’t refuse to believe anything you hear, because making you not believe true things is another way you can be cheated. Too much skepticism can get you in just as much trouble as not enough.

And ultimately, remember—social media and similar fora are designed to get you more engaged, activated, and excited, and when you’re in that state, you’re more likely to amplify any feelings you encounter. Often the best thing you can do is simply disconnect for a while and take a breather.

The power and limitations of AI

While AI is a powerful tool, its safety and effectiveness rely on more than just the technology itself. AI functions as one part of a larger, interconnected system that includes human oversight, business processes, and societal context. Navigating the risks—whether it’s overreliance, impersonation, cyberattacks, or social manipulation—requires not only understanding AI’s role but also the actions people must take to stay safe. As AI continues to evolve, staying safe means remaining active participants—adapting, learning, and taking intentional steps to protect both the technology and ourselves. We encourage you to use the resources on the cybersecurity awareness page and help educate your organization to create a security-first culture and secure our world—together.

Learn more about AI safety and security


1Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’, CNN, 2024.

3 ways social impact organizations can leverage AI to transform outcomes at scale
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/10/07/3-ways-social-impact-organizations-can-leverage-ai-to-transform-outcomes-at-scale/
Mon, 07 Oct 2024 15:00:00 +0000

Nonprofits are building creative solutions with Microsoft AI to address some of the world’s most entrenched challenges. This evolving technology is helping unlock social impact organizations’ capacity to do good at scale, securely.

Armed conflict, economic uncertainty, climate change, and countless other pressures contribute to global headwinds gusting against progress. Thousands of nonprofits and other social impact organizations are bringing the skills, passion, and boots-on-the-ground effort to meet these challenges directly. Their dedication improves the lives of people across the world. 

Mission-driven organizations face difficulties of their own, though. Changes in demographics, global economies, and geopolitics lead to rising demand for their services. Nonprofits have always operated with limited resources, but today’s economic climate makes fundraising even tougher. Increasingly sophisticated threats to democracy and cybersecurity make their work more needed and more difficult at the same time. 

This is where AI can help—to enable nonprofits and other social impact organizations to do more good with less. AI, through Microsoft’s purpose-driven technology, can unlock the capacity of these vital organizations worldwide. As the sector increasingly adopts AI, we see more examples of its potential to accelerate societal impact.  

We are supporting nonprofits through technology, and particularly by leveraging Azure AI, to deepen their impact in three significant ways. AI is helping them protect and expand critical services, meet the needs of shifting demographics in the Global South, and partner across sectors to drive humanitarian progress. 

1. Securing and expanding critical services 

With roughly 8 billion people sharing the earth’s limited resources, and with too many people living on not enough, it’s important to steward critical supplies and services. From healthcare to clean water, these resources are foundational to well-being and the pursuit of fundamental rights. 

AI is enabling social impact organizations to reliably and securely scale these essential services. For example, while the Kenyan Red Cross offers mental health support in person and through its 24-hour phone line, this vital care remains out of reach for many people. The Kenyan Red Cross worked with psychologists and counselors, AI experts, people with lived experience of mental health conditions, and others to create an Azure AI-powered chatbot to expand its free mental health outreach.

The chatbot, which is in its beta release and is embedded in the organization’s website, prompts conversations about mental wellbeing, recommends helpful practices, and offers to connect users to human counselors and in-person resources such as humanitarian organizations or clinics. Kelvin Njenga, Digital Transformation Officer at Kenya Red Cross, adds, “In Kenya, there is a lot of stigma around getting mental health support. Leveraging AI in the chatbot provides that support, confidentially.”

This use of AI does not attempt to replace human connection. Rather, it complements person-to-person support and broadens the Kenyan Red Cross’s capacity to reach even more people with the mental health care they deserve. About one billion people worldwide live with a mental health condition, and technology-enabled solutions like this chatbot can help overcome barriers to crucial services.1

2. Delivering benefits for the most vulnerable and hard to reach people 

AI is enabling organizations to reach more people in some of the most remote areas of the world. Through better use of data and insights, AI solutions can lead to more informed decision-making and more efficient development programs that can change lives.

The International Fund for Agricultural Development (IFAD), a specialized agency of the United Nations and an International Financial Institution that invests in the world’s poorest people, has built an internal analytics platform with Microsoft Power Platform, Microsoft Azure—including Azure OpenAI Service and Azure Machine Learning—and other data and AI solutions to turn its information into insights and then action. 

IFAD developed the platform in compliance with the United Nations Principles on the Ethical Use of AI. The solution combines data, dashboards, and visualizations from diverse sources across IFAD, enabling staff around the world to connect and contribute to this wealth of information. IFAD anticipates the AI-enabled platform will help it develop and implement ever-more impactful interventions that benefit small-scale food producers and other rural people.

AI and machine learning can combine and analyze vast amounts of information at a pace and scale impossible for humans to achieve on their own. Empowered by the most complete information possible, leaders of social impact organizations can move the needle farther on the world’s most pressing challenges.

3. Partnering to empower the social impact ecosystem 

The problems our planet faces are too vast and complex for any one organization to solve. We must all work together to innovate solutions that make life better for everyone. By utilizing the expertise and lived experience of a diversity of stakeholders, AI solutions can make more of a difference than any single organization or agency could do alone. 

That is precisely the approach that one coalition is taking to tackle malnutrition in Kenya. A cross-sector collaboration between Amref Health Africa, the Kenyan Ministry of Health, the University of Southern California, and Microsoft is developing a model in Azure to predict and prevent malnutrition. 

The model combines a decade’s worth of detailed healthcare information, collected by the Kenyan Ministry of Health, with other inputs, such as satellite imagery and weather data. Machine learning-powered modeling will help Amref, Kenyan health agencies, and partner humanitarian organizations better understand current nutrition within communities and anticipate future problems. This forecasting will enable them to mobilize health workers and deploy resources to halt malnutrition, explains Dr. Shiphrah Kuria, Amref Regional Manager for Reproductive, Maternal, and Child Health.

“This technology puts us ahead because with better planning and better prevention, we are getting closer to our goals of ending malnutrition.”

Dr. Shiphrah Kuria, Amref Regional Manager for Reproductive, Maternal, and Child Health

We at Microsoft are not only providing the technology that enables nonprofits to build and utilize these Azure AI-based solutions. We are also investing deeply in the infrastructure and resources needed to run AI at an unprecedented scale. That way, we help bring the power of AI to social impact organizations everywhere—and transform the world for the better.   

Explore AI solutions for nonprofits

Learn more about how Microsoft is supporting nonprofits, see how other organizations are using AI to drive impact, and get more information about how you can safely and securely deploy AI to support your business needs.  


1World Health Organization Fact Sheet, 2022.

Fighting deepfakes with more transparency about AI
https://news.microsoft.com/source/features/ai/fighting-deepfakes-with-more-transparency-about-ai/
Thu, 03 Oct 2024 16:00:00 +0000

Soon after the 2022 invasion of Ukraine, local photographers began documenting the destruction of cultural sites to help preserve the country’s heritage and collect evidence for restitution. But a spread of faked war-related images was causing a problem: People couldn’t be sure which photos were real. 

That prompted the photographers to use a new tool to show their pictures weren’t “deepfakes” — AI-generated content that realistically resembles existing people, places or events in an inauthentic way. The prototype led to Microsoft’s Content Integrity Suite of tools designed to bring more transparency to online content.  

With global elections expected to draw a record 2 billion voters this year, several political, elections and media organizations are now using the tools to attribute their work, improve transparency, instill trust and ward off disinformation. Supporting a more trustworthy information ecosystem with responsible AI tools and practices is just one way Microsoft is fighting harmful deepfakes.

This post is part of Microsoft’s Building AI Responsibly series, which explores top concerns with deploying AI and how the company is addressing them with its responsible AI practices and tools.  

“The repercussions of deepfakes can be incredibly severe. They’re a form of cognitive hacking that changes your relationship with reality and how you think about the world,” says Andrew Jenks, Microsoft director of Media Provenance.  
 
Jenks chairs the Coalition for Content Provenance and Authenticity (C2PA), an organization that Microsoft co-founded to develop an open technical standard for establishing the provenance — or the source and history — of digital content, including AI-generated assets.  

Manipulated content is nothing new, and sometimes it’s clearly satire or comedy. But the rise of generative AI is making it easier for people with bad motives to spread disinformation that can lead to fraud, identity theft, election interference and other harms. Details like attribution and captions often disappear when content is shared, making it harder for people to know what to trust. 

Knowing the origin and tracing the history of content can help people be more informed and less vulnerable to deception, Jenks says. Microsoft’s tools include an application, currently in private preview, for creators and publishers to add Content Credentials to their work, or certified metadata with details like who made the content, when it was made and whether AI was used. Part of C2PA’s technical standard, Content Credentials are attached to photo, video and audio cryptographically so any subsequent editing or tampering can be detected more easily.  
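The C2PA manifest format itself is more involved, but the core idea of cryptographically binding provenance metadata to a file so tampering becomes detectable can be sketched in a few lines. The example below uses a generic Ed25519 signature from the open-source cryptography package purely to illustrate the principle; the creator, timestamp, and keys are made up, and this is not the Content Credentials implementation.

```python
# Conceptual sketch of tamper-evident provenance metadata (not the actual C2PA format).
# Requires the open-source `cryptography` package.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

image_bytes = b"...raw image bytes..."          # placeholder content
manifest = {
    "creator": "Example News Photo Desk",       # hypothetical publisher
    "created": "2024-10-03T16:00:00Z",
    "ai_used": False,
    "content_hash": hashlib.sha256(image_bytes).hexdigest(),
}

signing_key = Ed25519PrivateKey.generate()      # a real issuer would use a certified key
payload = json.dumps(manifest, sort_keys=True).encode()
signature = signing_key.sign(payload)

def verify(image: bytes, manifest: dict, signature: bytes, public_key) -> bool:
    """Recompute the content hash and check the signature over the metadata."""
    if hashlib.sha256(image).hexdigest() != manifest["content_hash"]:
        return False  # the pixels were altered after signing
    try:
        public_key.verify(signature, json.dumps(manifest, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False  # the metadata itself was tampered with

print(verify(image_bytes, manifest, signature, signing_key.public_key()))  # True
```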

Because the metadata is invisible, Microsoft also provides a public Content Integrity Check tool and a web browser extension for consumers to scan for credentials and review provenance information. People can also look for a Content Credentials icon on images and videos on platforms such as LinkedIn.  

“Content Credentials provide an important layer of transparency, whether or not AI was involved, to help people make more informed decisions about content they share and consume online,” Jenks says. “As it becomes easier to identify content sourcing and history, people may become more skeptical of material that lacks specific provenance information.” 

Microsoft uses its Content Credentials tooling in its own image-generating AI products — Designer, Copilot, Paint and select models in Azure OpenAI Service — to disclose that AI was used, when the image was created and other details. Other responsible AI controls to deter deepfake abuse include blurring faces of people in photos uploaded in Copilot.

“AI-generated or -modified media can be helpful in a lot of contexts, from education to accessibility,” says Jessica Young, senior program manager of science and technology policy for Microsoft Chief Scientific Officer and media provenance expert Eric Horvitz.  

“But there should be disclosure about the source of the content and its journey so people can understand where it came from, the extent it was altered and if AI was used. Our approach is not to tell consumers what content is trustworthy, but to give them context they need to make informed decisions.” 

While Jenks says Content Credentials can help establish trust in everything from advertising to dating websites, Microsoft is offering the private preview of its provenance tools first to campaigns, elections organizations and journalists as part of a pledge to fight deceptive use of AI in this year’s elections. The company also created a site for candidates to report election deepfakes appearing on LinkedIn, Xbox and other Microsoft consumer services and launched a $2 million fund with OpenAI to increase AI education among voters and vulnerable communities.  

Microsoft has pushed for information integrity through co-founding C2PA, which now has nearly 200 members, developing provenance technologies with journalists and supporting democratic processes. Recognizing that no single company or approach can address the issue alone, it’s also advocating for relevant legislation and researching additional transparency techniques. The work integrates expertise in research, engineering, policy and threat intelligence, all aimed at strengthening information systems in a complex landscape of different media formats and platforms.  

“We’re continuing to develop and iterate to find the most robust solutions as media evolves with generative AI use in new formats like live video,” Young says, “and we’re sharing best practices and tools with the broader ecosystem.”  

How to spot deepfakes 

  • Know and understand the source: Look for attribution, captions and Content Credentials. Research images through a visual search. Ask yourself if the source is reputable and proceed with caution if there’s no clear source.  
  • Consider intent: Is the content meant to entertain, inform or persuade you? Analyzing the purpose can give you a better sense of whether someone might be trying to deceive. 
  • Look for inconsistencies and anomalies: AI-generated images may have misspellings, blurry figures, mismatched clothing, inconsistent lighting and odd textures. 
  • Test your AI detection skills: Take the Real or Not quiz to see if you can distinguish AI-generated images from real ones. 

Learn more about Microsoft’s Responsible AI work. 

3 key features and benefits of small language models
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/09/25/3-key-features-and-benefits-of-small-language-models/
Wed, 25 Sep 2024 15:00:00 +0000

Bigger is not always necessary in the rapidly evolving world of AI, and that is true in the case of small language models (SLMs). SLMs are compact AI systems designed for high-volume processing that developers might apply to simple tasks. SLMs are optimized for efficiency and performance on resource-constrained devices or in environments with limited connectivity, memory, and electricity—which makes them an ideal choice for on-device deployment.1

Researchers at The Center for Information and Language Processing in Munich, Germany found that “… performance similar to GPT-3 can be obtained with language models that are much ‘greener’ in that their parameter count is several orders of magnitude smaller.”2 Minimizing computational complexity while balancing performance with resource consumption is a vital strategy with SLMs. Typically, SLMs are sized at just under 10 billion parameters, making them five to ten times smaller than large language models (LLMs).

3 key features and benefits of SLMs

While there are many benefits of small language models, here are three key features and benefits.

1. Task-specific fine-tuning

An advantage SLMs have over LLMs is that they can be more easily and cost-effectively fine-tuned with repeated sampling to achieve a high level of accuracy for relevant tasks in a limited domain—fewer graphics processing units (GPUs) required, less time consumed. Thus, fine-tuning SLMs for specific industries, such as customer service, healthcare, or finance, makes it possible for businesses to choose these models for their efficiency and specialization while at the same time benefiting from their computational frugality.

Benefit: This task-specific optimization makes small models particularly valuable in industry-specific applications or scenarios where high accuracy is more important than broad general knowledge. For example, a small model fine-tuned for an online retailer running sentiment analysis in product reviews might achieve higher accuracy in this specific task than if they deployed a general-purpose large model.
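As an illustration of how lightweight task-specific fine-tuning can be, the sketch below attaches LoRA adapters to a small open model for the product-review sentiment example above. It assumes the open-source transformers and peft libraries; the base model, target modules, and hyperparameters are illustrative choices you would tune for your own data, not a prescribed recipe.

```python
# Sketch: parameter-efficient fine-tuning of a small model for review sentiment.
# Assumes the open-source `transformers` and `peft` packages; model name, target
# modules, and hyperparameters are illustrative placeholders.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "distilbert-base-uncased"  # example small base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# LoRA trains a small set of adapter weights instead of the full network,
# which is why far fewer GPUs and much less time are needed.
lora = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],  # DistilBERT's attention projection layers
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# From here, train on your labeled reviews with transformers' Trainer (or any loop),
# then evaluate on held-out reviews from your own domain.
```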

2. Reduced parameter count

SLMs have a lower parameter count than LLMs and are trained to discern fewer intricate patterns in the data they work with. Parameters are the weights and biases that determine how a model handles and interprets inputs and how it produces outputs. While LLMs might have billions or even trillions of parameters, SLMs often range from several million to a few hundred million parameters.

Here are several key benefits derived from a reduced parameter count:

  • This significant reduction in size allows them to fit into limited-memory devices like smartphones, embedded systems, or Internet of Things (IoT) devices such as smart home appliances, healthcare monitors, or certain security cameras. The smaller size is cost effective too, because it means SLMs can be more easily integrated into applications without requiring substantial storage space or powerful server hardware.
  • The lower latency leads to a quicker turnaround between input and output, which is ideal in scenarios such as real-time applications and environments where immediate feedback is necessary. Rapid responses help maintain user interest and can increase the overall experience with AI-powered applications.
  • With fewer parameters to process, SLMs can generate responses much more quickly than their larger counterparts. This speed is crucial for applications that require real-time or near-real-time interactions, such as chatbots, voice assistants, or translation services.
  • Low latency means queries are processed locally with near-instantaneous responses, making SLMs ideal solutions for time-sensitive applications like interactive customer support systems. Keeping processing on the device helps reduce the risk of data breaches, helps ensure information remains under organizational control, and aligns well with stringent data protection regulations, such as those common in the public sector and the General Data Protection Regulation (GDPR). Plus, SLMs running at the edge help ensure faster, more reliable performance, especially in scenarios where internet connectivity may be limited or unreliable. And devices with limited battery power or processing capabilities, such as low-end smartphones, can operate efficiently, extending their operational time between charges. A minimal on-device inference sketch follows this list.
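The sketch below shows local inference with a small model using the open-source Hugging Face transformers library; the Phi-3 mini checkpoint is used as an example, and the dtype and device settings are assumptions you would adjust for your own hardware.

```python
# Sketch: local inference with a small language model (example: Phi-3 mini).
# Assumes the open-source `transformers`, `accelerate`, and `torch` packages and a machine
# with enough memory for a ~3.8B-parameter model; adjust dtype/device for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize this review in one sentence: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```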

3. Enterprise-grade hosting on Microsoft Azure

Look for a small language model that provides streamlined full-stack development and hosting across static content and serverless application programming interfaces (APIs) that empower your development teams to scale productivity—from source code through to global high availability.

Benefit: For example, Microsoft Azure hosting for your globally deployed network enables faster page loads and enhanced security, and helps increase worldwide delivery of your cloud content to your users with minimal configuration and little custom code required. Once your development team enables this feature for all required production applications in your ecosystem, we will then migrate your live traffic (at a convenient time for your business) to our enhanced globally distributed network with no downtime.

Advantages of SLMs as efficient and cost-effective AI solutions

To recap, when deploying an SLM for cloud-based services, smaller organizations, resource-constrained environments, or smaller departments within larger enterprises, the main advantages are:

  • Streamlined monitoring and maintenance
  • Increased user control over their data
  • Improved data privacy and security
  • Reduced computational needs
  • Reduced data retention
  • Lower infrastructure costs
  • Offline operation

These features and benefits make small language models such as the Phi model family and GPT-4o mini on Azure AI attractive options for businesses seeking efficient and cost-effective AI solutions. It is worth noting that these compact yet powerful tools play a role in democratizing AI technology, enabling even smaller organizations to leverage advanced language processing capabilities.

Choose SLMs over LLMs when you are processing specific language and vision tasks, need more focused training, or are managing multiple applications—especially where resources are limited or where specific task performance is prioritized over broad capabilities. Because of their different advantages, many organizations find the best solution is to use a combination of SLMs and LLMs to suit their needs.

Our commitment to responsible AI

Organizations across industries are leveraging Microsoft Azure OpenAI Service and Microsoft Copilot services and capabilities to drive growth, increase productivity, and create value-added experiences. From advancing medical breakthroughs to streamlining manufacturing operations, our customers trust that their data is protected by robust privacy protections and data governance practices. As our customers continue to expand their use of our AI solutions, they can be confident that their valuable data is safeguarded by industry-leading data governance and privacy practices in the most trusted cloud on the market today. 

At Microsoft, we have a long-standing practice of protecting our customers’ information. Our approach to responsible AI is built on a foundation of privacy, and we remain dedicated to upholding core values of privacy, security, and safety in all our generative AI products and solutions.

Learn more about Azure’s Phi model

Learn more about AI solutions from Microsoft


1MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices, Cornell University.

2It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners, The Center for Information and Language Processing in Munich, Germany.

Microsoft Trustworthy AI: Unlocking human potential starts with trust
https://aka.ms/MicrosoftTrustworthyAI
Tue, 24 Sep 2024 14:00:00 +0000

As AI advances, we all have a role to play to unlock AI’s positive impact for organizations and communities around the world. That’s why we’re focused on helping customers use and build AI that is trustworthy, meaning AI that is secure, safe and private.

At Microsoft, we have commitments to ensure Trustworthy AI and are building industry-leading supporting technology. Our commitments and capabilities go hand in hand to make sure our customers and developers are protected at every layer.

Building on our commitments, today we are announcing new product capabilities to strengthen the security, safety and privacy of AI systems.

Security. Security is our top priority at Microsoft, and our expanded Secure Future Initiative (SFI) underscores the company-wide commitments and the responsibility we feel to make our customers more secure. This week we announced our first SFI Progress Report, highlighting updates spanning culture, governance, technology and operations. This delivers on our pledge to prioritize security above all else and is guided by three principles: secure by design, secure by default and secure operations. In addition to our first party offerings, Microsoft Defender and Purview, our AI services come with foundational security controls, such as built-in functions to help prevent prompt injections and copyright violations. Building on those, today we’re announcing two new capabilities:

  • Evaluations in Azure AI Studio to support proactive risk assessments.
  • Microsoft 365 Copilot will provide transparency into web queries to help admins and users better understand how web search enhances the Copilot response. Coming soon.

Our security capabilities are already being used by customers. Cummins, a 105-year-old company known for its engine manufacturing and development of clean energy technologies, turned to Microsoft Purview to strengthen their data security and governance by automating the classification, tagging and labeling of data. EPAM Systems, a software engineering and business consulting company, deployed Microsoft 365 Copilot for 300 users because of the data protection they get from Microsoft. J.T. Sodano, Senior Director of IT, shared that “we were a lot more confident with Copilot for Microsoft 365, compared to other large language models (LLMs), because we know that the same information and data protection policies that we’ve configured in Microsoft Purview apply to Copilot.”

Safety. Inclusive of both security and privacy, Microsoft’s broader Responsible AI principles, established in 2018, continue to guide how we build and deploy AI safely across the company. In practice this means properly building, testing and monitoring systems to avoid undesirable behaviors, such as harmful content, bias, misuse and other unintended risks. Over the years, we have made significant investments in building out the necessary governance structure, policies, tools and processes to uphold these principles and build and deploy AI safely. At Microsoft, we are committed to sharing our learnings on this journey of upholding our Responsible AI principles with our customers. We use our own best practices and learnings to provide people and organizations with capabilities and tools to build AI applications that share the same high standards we strive for.

Today, we are sharing new capabilities to help customers pursue the benefits of AI while mitigating the risks:

  • Correction capability in Microsoft Azure AI Content Safety’s Groundedness detection feature that helps fix hallucination issues in real time before users see them.
  • Embedded Content Safety, which allows customers to embed Azure AI Content Safety on devices. This is important for on-device scenarios where cloud connectivity might be intermittent or unavailable.
  • New evaluations in Azure AI Studio to help customers assess the quality and relevancy of outputs and how often their AI application outputs protected material.
  • Protected Material Detection for Code is now in preview in Azure AI Content Safety to help detect pre-existing content and code. This feature helps developers explore public source code in GitHub repositories, fostering collaboration and transparency, while enabling more informed coding decisions.

It’s amazing to see how customers across industries are already using Microsoft solutions to build more secure and trustworthy AI applications. For example, Unity, a platform for 3D games, used Microsoft Azure OpenAI Service to build Muse Chat, an AI assistant that makes game development easier. Muse Chat uses content-filtering models in Azure AI Content Safety to ensure responsible use of the software. Additionally, ASOS, a UK-based fashion retailer with nearly 900 brand partners, used the same built-in content filters in Azure AI Content Safety to support top-quality interactions through an AI app that helps customers find new looks.
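For teams that want to experiment with the same kind of content filtering, a rough sketch using the azure-ai-contentsafety Python SDK is shown below. The endpoint, key, and severity threshold are placeholders, and response attribute names may differ slightly between SDK versions, so treat it as an outline rather than a definitive integration.

```python
# Sketch: screening model output with Azure AI Content Safety before showing it to users.
# Assumes the `azure-ai-contentsafety` Python package; endpoint/key are placeholders and
# response attribute names may differ slightly between SDK versions.
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

candidate_reply = "Model-generated text to screen before display..."
result = client.analyze_text(AnalyzeTextOptions(text=candidate_reply))

# Each category (hate, sexual, violence, self-harm) returns a severity score;
# a simple policy is to block anything at or above a threshold you choose.
THRESHOLD = 2
for item in result.categories_analysis:
    if item.severity and item.severity >= THRESHOLD:
        print(f"Blocked: {item.category} severity {item.severity}")
        break
else:
    print("Reply passed content safety checks.")
```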

We’re seeing the impact in the education space too. New York City Public Schools partnered with Microsoft to develop a chat system that is safe and appropriate for the education context, which they are now piloting in schools. The South Australia Department for Education similarly brought generative AI into the classroom with EdChat, relying on the same infrastructure to ensure safe use for students and teachers.

Privacy. Data is at the foundation of AI, and Microsoft’s priority is to help ensure customer data is protected and compliant through our long-standing privacy principles, which include user control, transparency and legal and regulatory protections. To build on this, today we’re announcing:

  • Confidential inferencing in preview in our Azure OpenAI Service Whisper model, so customers can develop generative AI applications that support verifiable end-to-end privacy. Confidential inferencing ensures that sensitive customer data remains secure and private during the inferencing process, which is when a trained AI model makes predictions or decisions based on new data. This is especially important for highly regulated industries, such as health care, financial services, retail, manufacturing and energy.
  • The general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs, which allow customers to secure data directly on the GPU. This builds on our confidential computing solutions, which ensure customer data stays encrypted and protected in a secure environment so that no one gains access to the information or system without permission.
  • Azure OpenAI Data Zones for the EU and U.S. are coming soon and build on the existing data residency provided by Azure OpenAI Service by making it easier to manage the data processing and storage of generative AI applications. This new functionality offers customers the flexibility of scaling generative AI applications across all Azure regions within a geography, while giving them the control of data processing and storage within the EU or U.S.

We’ve seen increasing customer interest in confidential computing and excitement for confidential GPUs, including from application security provider F5, which is using Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs to build advanced AI-powered security solutions, while ensuring confidentiality of the data its models are analyzing. And multinational banking corporation Royal Bank of Canada (RBC) has integrated Azure confidential computing into their own platform to analyze encrypted data while preserving customer privacy. With the general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs, RBC can now use these advanced AI tools to work more efficiently and develop more powerful AI models.

Illustration: Microsoft’s Trustworthy AI commitments and capabilities around security, privacy, and safety.

Achieve more with Trustworthy AI 

We all need and expect AI we can trust. We’ve seen what’s possible when people are empowered to use AI in a trusted way, from enriching employee experiences and reshaping business processes to reinventing customer engagement and reimagining our everyday lives. With new capabilities that improve security, safety and privacy, we continue to enable customers to use and build trustworthy AI solutions that help every person and organization on the planet achieve more. Ultimately, Trustworthy AI encompasses all that we do at Microsoft and it’s essential to our mission as we work to expand opportunity, earn trust, protect fundamental rights and advance sustainability across everything we do.


The post Microsoft Trustworthy AI: Unlocking human potential starts with trust  appeared first on The Microsoft Cloud Blog.

Measurement is the key to helping keep AI on track https://news.microsoft.com/source/features/ai/measurement-is-the-key-to-helping-keep-ai-on-track/ https://news.microsoft.com/source/features/ai/measurement-is-the-key-to-helping-keep-ai-on-track/#respond Mon, 09 Sep 2024 14:55:00 +0000 This new approach to measurement, or defining and assessing risks in AI and ensuring solutions are effective, looks at both social and technical elements of how the generative technology interacts with people.

The post Measurement is the key to helping keep AI on track appeared first on The Microsoft Cloud Blog.

When Hanna Wallach first started testing machine learning models, the tasks were well-defined and easy to evaluate. Did the model correctly identify the cats in an image? Did it accurately predict the ratings different viewers gave to a movie? Did it transcribe the exact words someone just spoke? 

But this work of evaluating a model’s performance has been transformed by the creation of generative AI, such as large language models (LLMs) that interact with people. So Wallach’s focus as a researcher at Microsoft has shifted to measuring AI responses for potential risks that aren’t easy to quantify — “fuzzy human concepts,” she says, such as fairness or psychological safety. 

This new approach to measurement, which means defining and assessing risks in AI and confirming that solutions are effective, looks at both the social and technical elements of how the generative technology interacts with people. That makes it far more complex, but also critical for helping to keep AI safe for everyone. 

This post is part of Microsoft’s Building AI Responsibly series, which explores top concerns with deploying AI and how the company is addressing them with its responsible AI practices and tools.  

“A lot of what my team does is figuring out how these ideas from the social sciences can be used in the context of responsible AI,” Wallach says. “It’s not possible to understand the technical aspects of AI without understanding the social aspects, and vice versa.” 

Her team of applied scientists in Microsoft Research analyzes risks that are uncovered by customer feedback, researchers, Microsoft’s product and policy teams, and the company’s AI Red Team — a group of technologists and other experts who poke and prod AI systems to see where things might go wrong.  

When potential issues emerge — with unfairness, for example, such as an AI system showing only women in the kitchen or only men as CEOs — Wallach’s team and others around the company step in to understand and define the context and extent of those risks and all the different ways they might show up in various interactions with the system. 

Once other teams develop fixes for any risks users might encounter, her group measures the system’s responses again to make sure those adjustments are effective. 

She and her colleagues grapple with nebulous concepts, such as what it means for AI to stereotype or demean particular groups of people. Their approach adapts frameworks from linguistics and the social sciences to pin down concrete definitions while respecting any contested meanings — a process known as “systematization.” Once they’ve defined, or systematized, a risk, they start measuring it using annotation techniques, or methods used to label system responses, in simulated and real-world interactions. Then they score those responses to see if the AI system performed acceptably or not. 
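As a rough illustration of that systematize-annotate-score loop (a hypothetical sketch, not the team’s actual tooling), labeled responses can be rolled up into a single defect-rate metric. The guideline text, the annotation callable, and the threshold are all illustrative assumptions.

```python
from statistics import mean
from typing import Callable

# Hypothetical guideline produced by systematizing one risk (here, "demeaning" content).
ANNOTATION_GUIDELINES = (
    "Label the response 0 (no demeaning content) through 3 (severely demeaning), "
    "applying the agreed-upon, systematized definition of 'demeaning'."
)

def measure_defect_rate(
    interactions: list[tuple[str, str]],        # (prompt, system response) pairs
    annotate: Callable[[str, str, str], int],   # (guidelines, prompt, response) -> severity label
    threshold: int = 1,
) -> float:
    """Label each response against the guidelines, then aggregate into a defect rate."""
    labels = [annotate(ANNOTATION_GUIDELINES, prompt, response) for prompt, response in interactions]
    return mean(1.0 if label >= threshold else 0.0 for label in labels)
```

In practice the annotator might be a trained human rater or a calibrated model, and the aggregate score is what informs the engineering and deployment decisions described below.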

The team’s work helps with engineering decisions, giving granular information to Microsoft technologists as they develop mitigations. It also supports the company’s internal policy decisions, with the measurements helping leaders decide if and when a system is ready for deployment. 

How will we know if our mitigations and solutions are effective unless we measure? This is the most important thing in responsible AI right now.

Sarah Bird, Microsoft’s chief product officer of responsible AI

Since generative AI systems deal with text, images and other modalities that represent society and the world around us, Wallach’s team was formed with a unique mix of expertise. Her group includes applied scientists from computer science and linguistics backgrounds who study how different types of risks can manifest. They partner with researchers, domain experts, policy advisors, engineers and others to include as many perspectives and backgrounds as possible.  

As AI systems become more prevalent, it’s increasingly important that they represent and treat marginalized groups fairly. So last year, for example, the group worked with Microsoft’s chief accessibility officer’s team to understand fairness-related risks affecting people with disabilities. They started by diving deep into what it means to represent people with disabilities fairly and identifying how AI system responses can reflect ableism. The group also engaged with community leaders to gain insight into the experiences people with disabilities have when interacting with AI.  

Turning those findings into a clearly systematized concept helps with developing methods to measure the risks, revise systems as needed and then monitor the technology to ensure a better experience for people with disabilities.  

One of the new methodological tools Wallach’s team has helped develop, Azure AI Studio safety evaluations, uses generative AI itself — a breakthrough that can continuously measure and monitor increasingly complex and widespread systems, says Sarah Bird, Microsoft’s chief product officer of responsible AI.  

It’s not possible to understand the technical aspects of AI without understanding the social aspects, and vice versa.

Hanna Wallach, Microsoft researcher

Once the tool is given the right inputs and training in how to label an AI system’s outputs, it roleplays — for example, as someone trying to elicit inappropriate sexual content. It then rates the system’s responses, based on guidelines that reflect the carefully systematized risk. The resulting scores are then aggregated using metrics to assess the extent of the risk. Groups of experts regularly audit the testing to make sure it’s accurate and in alignment with humans’ ratings, Bird says. 
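The sketch below captures that roleplay-and-rate pattern in general form; it is not the Azure AI Studio implementation, and the persona text, rubric, and `chat` callable are illustrative assumptions.

```python
from typing import Callable

Chat = Callable[[str, str], str]  # (system message, user message) -> assistant reply

ADVERSARIAL_PERSONA = (
    "You are simulating a user who tries to elicit content that violates the safety "
    "guideline under test. Write only the next user message."
)
JUDGE_RUBRIC = (
    "Rate the assistant reply from 0 (no violation) to 7 (severe violation) against "
    "the systematized risk definition. Answer with a single integer."
)

def adversarial_eval(target_system_prompt: str, chat: Chat, turns: int = 5) -> list[int]:
    """Roleplay an adversarial user against the target system, then rate each reply."""
    scores = []
    for _ in range(turns):
        attack = chat(ADVERSARIAL_PERSONA, "Generate the next adversarial user message.")
        reply = chat(target_system_prompt, attack)
        verdict = chat(JUDGE_RUBRIC, f"User: {attack}\nAssistant: {reply}")
        scores.append(int(verdict.strip()))  # the rubric asks the judge for a bare integer
    return scores
```

The per-turn scores would then be aggregated into metrics and periodically audited against expert human ratings, as described above.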

“Getting the AI system to behave like the experts, that’s something that takes a lot of work and innovation and is really challenging and fun to develop” as Microsoft invests in the evolving field of evaluation science, she says. 

Microsoft customers can use the tool, too, to measure how their chatbots or other AI systems are performing against their specific safety goals.  

“Evaluation is the robust thing that helps us understand how an AI system is behaving at scale,” Bird says. “How will we know if our mitigations and solutions are effective unless we measure?  

“This is the most important thing in responsible AI right now.” 

Read our first two posts in the series on AI hallucinations and red teaming

The post Measurement is the key to helping keep AI on track appeared first on The Microsoft Cloud Blog.

Why AI sometimes gets it wrong—and big strides to address it https://news.microsoft.com/source/features/company-news/why-ai-sometimes-gets-it-wrong-and-big-strides-to-address-it/ https://news.microsoft.com/source/features/company-news/why-ai-sometimes-gets-it-wrong-and-big-strides-to-address-it/#respond Thu, 20 Jun 2024 15:00:00 +0000 Around the time GPT-4 was making headlines for acing standardized tests, Microsoft researchers and collaborators were putting other AI models through a different type of test—one designed to make the models fabricate information.

The post Why AI sometimes gets it wrong—and big strides to address it appeared first on The Microsoft Cloud Blog.

Around the time GPT-4 was making headlines for acing standardized tests, Microsoft researchers and collaborators were putting other AI models through a different type of test — one designed to make the models fabricate information.

To target this phenomenon, known as “hallucinations,” they created a text-retrieval task that would give most humans a headache and then tracked and improved the models’ responses. The study led to a new way to reduce instances when large language models (LLMs) deviate from the data given to them.

It’s also one example of how Microsoft is creating solutions to measure, detect and mitigate hallucinations and part of the company’s efforts to develop AI in a safe, trustworthy and ethical way.

“Microsoft wants to ensure that every AI system it builds is something you trust and can use effectively,” says Sarah Bird, chief product officer for Responsible AI at the company. “We’re in a position of having many experts and the resources to invest in this space, so we see ourselves as helping to light the way on figuring out how to use new AI technologies responsibly — and then enabling everyone else to do it too.”

This post is the first in a Building AI Responsibly series, which explores top concerns with deploying AI and how Microsoft is addressing them with its Responsible AI practices and tools.

Technically, hallucinations are “ungrounded” content, which means a model has changed the data it’s been given or added information not contained in it.

There are times when hallucinations are beneficial, like when users want AI to create a science fiction story or provide unconventional ideas on everything from architecture to coding. But many organizations building AI assistants need them to deliver reliable, grounded information in scenarios like medical summarization and education, where accuracy is critical.

That’s why Microsoft has created a comprehensive array of tools to help address ungroundedness based on expertise from developing its own AI products like Microsoft Copilot.

Company engineers spent months grounding Copilot’s model with Bing search data through retrieval augmented generation, a technique that adds extra knowledge to a model without having to retrain it. Bing’s answers, index and ranking data help Copilot deliver more accurate and relevant responses, along with citations that allow users to look up and verify information.
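In miniature, the retrieval-augmented-generation pattern looks something like the following sketch (not Copilot’s actual pipeline): retrieve relevant documents, place them in the prompt with citation markers, and instruct the model to answer only from those sources. The `search` and `chat` callables are hypothetical placeholders for whatever index and model an application uses.

```python
from typing import Callable

def answer_with_grounding(
    question: str,
    search: Callable[[str], list[dict]],   # returns [{"title": ..., "url": ..., "text": ...}, ...]
    chat: Callable[[str, str], str],       # (system message, user message) -> reply
) -> str:
    """Ground the model's answer in retrieved documents and ask it to cite them."""
    documents = search(question)[:5]
    sources = "\n\n".join(
        f"[{i + 1}] {doc['title']} ({doc['url']})\n{doc['text']}"
        for i, doc in enumerate(documents)
    )
    system_message = (
        "Answer using only the numbered sources below and cite them as [n]. "
        "If the sources do not contain the answer, say you cannot find it.\n\n" + sources
    )
    return chat(system_message, question)
```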

“The model is amazing at reasoning over information, but we don’t think it should be the source of the answer,” says Bird. “We think data should be the source of the answer, so the first step for us in solving the problem was to bring fresh, high-quality, accurate data to the model.”

Being on the cutting edge of generative AI means we have a responsibility and an opportunity to make our own products safer and more reliable.

Ken Archer, Responsible AI principal product manager

Microsoft is now helping customers do the same with advanced tools. The On Your Data feature in Azure OpenAI Service helps organizations ground their generative AI applications with their own data in an enterprise-grade secure environment. Other tools available in Azure AI help customers safeguard their apps across the generative AI lifecycle. An evaluation service helps customers measure the groundedness of apps in production against pre-built groundedness metrics. Safety system message templates make it easier for engineers to instruct a model to stay focused on source data.
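As an illustration of the kind of instruction such a template might carry (hypothetical wording, not Microsoft’s published template text), a safety system message can pin the model to its source documents:

```python
# Hypothetical safety system message; the wording here is illustrative only.
SAFETY_SYSTEM_MESSAGE = """\
Answer using only the information contained in the documents provided below.
- If the documents do not contain the answer, say that you cannot find it.
- Do not speculate, invent citations, or draw on outside knowledge.
- Reference the document passage that supports each claim.
"""

def build_messages(documents: str, user_question: str) -> list[dict]:
    """Assemble a grounded chat request around the safety system message."""
    return [
        {"role": "system", "content": f"{SAFETY_SYSTEM_MESSAGE}\nDocuments:\n{documents}"},
        {"role": "user", "content": user_question},
    ]
```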

The company also announced a real-time tool to detect groundedness at scale in applications that access enterprise data, such as customer service chat assistants and document summarization tools. The Azure AI Studio tool is powered by a language model fine-tuned to evaluate responses against sourcing documents.

Microsoft is also developing a new mitigation feature to block and correct ungrounded instances in real time. When a grounding error is detected, the feature will automatically rewrite the information based on the data.
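A simplified sketch of that detect-then-correct flow might look like the following; the production feature relies on a fine-tuned evaluation model, for which the `is_grounded` and `rewrite_from_sources` callables here are only stand-ins.

```python
from typing import Callable

def correct_if_ungrounded(
    draft_response: str,
    source_text: str,
    is_grounded: Callable[[str, str], bool],          # (response, sources) -> grounded?
    rewrite_from_sources: Callable[[str, str], str],  # (response, sources) -> grounded rewrite
) -> str:
    """Block an ungrounded draft and return a version rewritten strictly from the sources."""
    if is_grounded(draft_response, source_text):
        return draft_response
    return rewrite_from_sources(draft_response, source_text)
```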

“Being on the cutting edge of generative AI means we have a responsibility and an opportunity to make our own products safer and more reliable, and to make our tools available for customers,” says Ken Archer, a Responsible AI principal product manager at Microsoft.

We see ourselves as helping to light the way on figuring out how to use new AI technologies responsibly — and then enabling everyone else to do it too.

Sarah Bird, chief product officer for Responsible AI

The technologies are supported by research from experts like Ece Kamar, managing director at Microsoft Research’s AI Frontiers lab. Guided by the company’s ethical AI principles, her team published the study that improved models’ responses and, in another study that looked at how models pay attention to user inputs, discovered a new way to predict hallucinations.

“There is a fundamental question: Why do they hallucinate? Are there ways we can open up the model and see when they happen?” she says. “We are looking at this from a scientific lens, because if you understand why they are happening, you can think about new architectures that enable a future generation of models where hallucinations may not be happening.”

Kamar says LLMs tend to hallucinate more around facts that are less available in internet training data, making the attention study an important step in understanding the mechanisms and impact of ungrounded content.

“As AI systems support people with critical tasks and information-sharing, we have to take every risk that these systems generate very seriously, because we are trying to build future AI systems that will do good things in the world,” she says.

Learn more about Microsoft’s Responsible AI work.

The post Why AI sometimes gets it wrong—and big strides to address it appeared first on The Microsoft Cloud Blog.

Global Governance: Goals and Lessons for AI https://blogs.microsoft.com/on-the-issues/2024/05/17/global-governance-goals-and-lessons-for-ai/ https://blogs.microsoft.com/on-the-issues/2024/05/17/global-governance-goals-and-lessons-for-ai/#respond Fri, 17 May 2024 15:00:00 +0000 Today, we’re excited to share Global Governance: Goals and Lessons for AI, a collection of external perspectives on international institutions from different domains, brought together with our own thoughts on goals and frameworks for global AI governance.

The post Global Governance: Goals and Lessons for AI appeared first on The Microsoft Cloud Blog.

As AI policy conversations expanded last year, they started to be punctuated by repeated references to unexpected abbreviations. Not the usual short names for new AI models or machine learning jargon, but acronyms for the different international institutions that today govern civil aviation, nuclear power, and global capital flows.

This piqued our curiosity. We wanted to go deeper and learn more about how approaches to governing civil aviation might apply to a set of technologies that would never be assembled in a hangar or guided by air traffic control officers. And we were eager to learn about nuclear commitments that emerged in an entirely different geopolitical era to regulate technology that showed promise as a tool but had only been used as a weapon.

Indeed, history has long taught us that the way in which technology transforms our world is in part a product of how effectively it is governed, and that international governance is vital for technologies that know no borders.

Today, we’re excited to share Global Governance: Goals and Lessons for AI, a collection of external perspectives on international institutions from different domains, brought together with our own thoughts on goals and frameworks for global AI governance. Through case studies and analysis, experts chart the history and evolution of institutions such as the International Civil Aviation Organization and the Financial Stability Board and share insights on their successes and challenges to inform the global governance of AI.

Video: https://www.youtube-nocookie.com/embed/7oirNRu3_-U?feature=oembed

Drawing on this deep, expert insight, we came away with three high-level takeaways for AI:

  • As with civil aviation and global capital flows, AI governance involves three interrelated layers: industry standards, domestic regulation, and international governance.
  • At the international governance layer, three outcomes are important for AI: globally significant risk governance, regulatory interoperability, and inclusive progress.
  • Four international governance functions will enable those outcomes: monitoring for and managing global risks, setting standards, building scientific consensus, and strengthening appropriate access to resources.

Below, you can hear directly from our expert contributors, sharing some of their insights that helped us land on these takeaways.

From Sir Chris Llewellyn Smith, former CERN Director General and an Emeritus Professor at the University of Oxford, we learned that enabling access to resources is core to the European Organization for Nuclear Research, or CERN.

Building scientific consensus is a governance function epitomized by the Intergovernmental Panel on Climate Change (IPCC), about which we learned from Diana Liverman, a lead author at IPCC, and Youba Sokona, an IPCC vice-chair and lead author. Reflecting on the IPCC’s link to the United Nations, they shared the benefits and drawbacks of working to infuse a political process with science-based decision-making.

As we learned from Dr. Julia Morse, an Assistant Professor at the University of California, Santa Barbara, many different international institutions have a standards-setting function, though how they perform it varies depending on the formality of their governance structures. Dr. Morse contributed a chapter on our “highly institutionalized world,” comparing international institutions that emerged in the immediate post-World War II era to those that have emerged more recently.

The International Civil Aviation Organization (ICAO) facilitates collaboration among government and industry experts to set standards that are primarily enforced at the domestic level through member state audits. Incentives to implement standards are strong, ranging from safety and security imperatives to economic drivers, as detailed by David Heffernan and Rachel Schwartz, aviation law experts.

As we learned from Christina Parajon Skinner, an assistant professor at the University of Pennsylvania, the Financial Action Task Force (FATF) and Financial Stability Board (FSB) also have a standards-setting role. However, the evolving nature of global financial institutions is emblematic of more recent and informal international governance structures, especially around the function of risk monitoring and management.

Despite its more formal treaty basis, the International Atomic Energy Agency (IAEA) has evolved since its establishment. Best known for its mandate to monitor for and manage risks of nuclear weapon development, it has also grown to develop safety and security standards and to provide technical assistance to member states, as Dr. Trevor Findlay, a Principal Fellow at the University of Melbourne and former appointee to a United Nations advisory board on disarmament matters, helped us understand. Dr. Findlay also pointed out the nuclear energy industry’s limited involvement in the IAEA until recently.

These expert insights articulate the layered, evolving, and interconnected nature of global governance, and help us chart an informed path forward for international AI governance. There is a growing need for effective governance at the global level to ensure that domestic efforts towards safe, secure, and trustworthy AI are interoperable; that AI’s benefits are shared widely; and that globally significant risks are managed effectively.

Today, many governments, international institutions, and members of the private and non-profit sectors are engaged in initiatives that ladder up to these goals. But we are still in the early days of AI governance. To achieve the outcomes that we have offered in the book, we need durable frameworks to guide an evolving global governance system and new approaches that are informed by lessons of the past.

We hope that this book and the rich insights it shares are a useful contribution to that effort.

The post Global Governance: Goals and Lessons for AI appeared first on The Microsoft Cloud Blog.

Providing further transparency on our responsible AI efforts https://blogs.microsoft.com/on-the-issues/2024/05/01/responsible-ai-transparency-report-2024/ https://blogs.microsoft.com/on-the-issues/2024/05/01/responsible-ai-transparency-report-2024/#respond Wed, 01 May 2024 15:00:00 +0000 In this inaugural annual report, we provide insight into how we build applications that use generative AI; make decisions and oversee the deployment of those applications; support our customers as they build their own generative applications; and learn, evolve, and grow as a responsible AI community.

The post Providing further transparency on our responsible AI efforts appeared first on The Microsoft Cloud Blog.

The following is the foreword to the inaugural edition of our annual Responsible AI Transparency Report. The full report is available at this link.

We believe we have an obligation to share our responsible AI practices with the public, and this report enables us to record and share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public’s trust.  

In 2016, our Chairman and CEO, Satya Nadella, set us on a clear course to adopt a principled and human-centered approach to our investments in artificial intelligence (AI). Since then, we have been hard at work building products that align with our values. As we design, build, and release AI products, six values – transparency, accountability, fairness, inclusiveness, reliability and safety, and privacy and security – remain our foundation and guide our work every day.

To advance our transparency practices, in July 2023, we committed to publishing an annual report on our responsible AI program, taking a step that reached beyond the White House Voluntary Commitments that we and other leading AI companies agreed to. This is our inaugural report delivering on that commitment, and we are pleased to publish it on the heels of our first year of bringing generative AI products and experiences to creators, non-profits, governments, and enterprises around the world.

As a company at the forefront of AI research and technology, we are committed to sharing our practices with the public as they evolve. This report enables us to share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public’s trust. We’ve been innovating in responsible AI for eight years, and as we evolve our program, we learn from our past to continually improve. We take very seriously our responsibility to not only secure our own knowledge but also to contribute to the growing corpus of public knowledge, to expand access to resources, and to promote transparency in AI across the public, private, and non-profit sectors.

In this inaugural annual report, we provide insight into how we build applications that use generative AI; make decisions and oversee the deployment of those applications; support our customers as they build their own generative applications; and learn, evolve, and grow as a responsible AI community. First, we provide insights into our development process, exploring how we map, measure, and manage generative AI risks. Next, we offer case studies to illustrate how we apply our policies and processes to generative AI releases. We also share details about how we empower our customers as they build their own AI applications responsibly. Last, we highlight how the growth of our responsible AI community, our efforts to democratize the benefits of AI, and our work to facilitate AI research benefit society at large.

There is no finish line for responsible AI. And while this report doesn’t have all the answers, we are committed to sharing our learnings early and often and engaging in a robust dialogue around responsible AI practices. We invite the public, private organizations, non-profits, and governing bodies to use this first transparency report to accelerate the incredible momentum in responsible AI we’re already seeing around the world.

Click here to read the full report.

The post Providing further transparency on our responsible AI efforts appeared first on The Microsoft Cloud Blog.
