Insights for Security Professionals | The Microsoft Cloud Blog http://approjects.co.za/?big=en-us/microsoft-cloud/blog/job-function/security/ Thu, 14 Nov 2024 23:11:23 +0000 en-US hourly 1 https://wordpress.org/?v=6.6.2 More value, less risk: How to implement generative AI across the organization securely and responsibly http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/#respond Mon, 04 Nov 2024 16:00:00 +0000 The technology landscape is undergoing a massive transformation, and AI is at the center of this change.

The post More value, less risk: How to implement generative AI across the organization securely and responsibly appeared first on The Microsoft Cloud Blog.

]]>
The technology landscape is undergoing a massive transformation, and AI is at the center of this change—posing both new opportunities as well as new threats.  While AI can be used by adversaries to execute malicious activities, it also has the potential to be a game changer for organizations to help defeat cyberattacks at machine speed. Already today generative AI stands out as a transformative technology that can help boost innovation and efficiency. To maximize the advantages of generative AI, we need to strike a balance between addressing the potential risks and embracing innovation. In our recent strategy paper, “Minimize Risk and Reap the Benefits of AI,” we provide a comprehensive guide to navigating the challenges and opportunities of using generative AI.

background pattern

Minimize Risk and Reap the Benefits of AI

Addressing security concerns and implementing safeguards

According to a recent survey conducted by ISMG, the top concerns for both business executives and security leaders on using generative AI in their organization range, from data security and governance, transparency and accountability to regulatory compliance.1 In this paper, the first in a series on AI compliance, governance, and safety from the Microsoft Security team, we provide business and technical leaders with an overview of potential security risks when deploying generative AI, along with insights into recommended safeguards and approaches to adopt the technology responsibly and effectively.

Learn how to deploy generative AI securely and responsibly

In the paper, we explore five critical areas to help ensure the responsible and effective deployment of generative AI: data security, managing hallucinations and overreliance, addressing biases, legal and regulatory compliance, and defending against threat actors. Each section provides essential insights and practical strategies for navigating these challenges. 

An infographic displaying the top 5 security and business leader concerns: data security, hallucinations, threat actors, biases, and legal and regulatory

Data security

build a foundation for AI success

Explore governance

Data security is a top concern for business and cybersecurity leaders. Specific worries include data leakage, over-permissioned data, and improper internal sharing. Traditional methods like applying data permissions and lifecycle management can enhance security. 

Managing hallucinations and overreliance

Generative AI hallucinations can lead to inaccurate data and flawed decisions. We explore techniques to help ensure AI output accuracy and minimize overreliance risks, including grounding data on trusted sources and using AI red teaming. 

Defending against threat actors

Threat actors use AI for cyberattacks, making safeguards essential. We cover protecting against malicious model instructions, AI system jailbreaks, and AI-driven attacks, emphasizing authentication measures and insider risk programs. 

Grow Your Business with AI You Can Trust

Addressing biases

Reducing bias is crucial to help ensure fair AI use. We discuss methods to identify and mitigate biases from training data and generative systems, emphasizing the role of ethics committees and diversity practices.

Microsoft’s journey to redefine legal support with AI

All in on AI

Navigating AI regulations is challenging due to unclear guidelines and global disparities. We offer best practices for aligning AI initiatives with legal and ethical standards, including establishing ethics committees and leveraging frameworks like the NIST AI Risk Management Framework.

Explore concrete actions for the future

As your organization adopts generative AI, it’s critical to implement responsible AI principles—including fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability. In this paper, we provide an effective approach that uses the “map, measure, and manage” framework as a guide; as well as explore the importance of experimentation, efficiency, and continuous improvement in your AI deployment.

I’m excited to launch this series on AI compliance, governance, and safety with a strategy paper on minimizing risk and enabling your organization to reap the benefits of generative AI. We hope this series serves as a guide to unlock the full potential of generative AI while ensuring security, compliance, and ethical use—and trust the guidance will empower your organization with the knowledge and tools needed to thrive in this new era for business.

Additional resources

Get more insights from Bret Arsenault on emerging security challenges from his Microsoft Security blogs covering topics like next generation built-in security, insider risk management, managing hybrid work, and more.


1, 2 ISMG’s First annual generative AI study – Business rewards vs. security risks: Research report, ISMG.

The post More value, less risk: How to implement generative AI across the organization securely and responsibly appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/feed/ 0
AI safety first: Protecting your business and empowering your people http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/10/31/ai-safety-first-protecting-your-business-and-empowering-your-people/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/10/31/ai-safety-first-protecting-your-business-and-empowering-your-people/#respond Thu, 31 Oct 2024 15:00:00 +0000 Microsoft has created some resources like the Be Cybersmart Kit to help organizations learn how to protect themselves.

The post AI safety first: Protecting your business and empowering your people appeared first on The Microsoft Cloud Blog.

]]>

Every technology can be used for good or bad. This was as true for fire and for writing as it is for search engines and for social networks, and it is very much true for AI. You can probably think of many ways that these latter two have helped and harmed in your own life—and you can probably think of the ways they’ve harmed more easily, because those stick out in our minds, while the countless ways they helped (finding your doctor, navigating to their office, the friends you made, the jobs you got) fade into the background of life. You’re not wrong to think this: when a technology is new it’s unfamiliar, and every aspect of it attracts our attention—how often do you get astounded by the existence of writing nowadays?—and when it doesn’t work, or gets misused, it attracts our attention a lot.

The job of the people who build technologies is to make them as good as possible at helping, and as bad as possible at harming. That’s what my job is: as CVP and Deputy CISO of AI Safety and Security at Microsoft, I have the rare privilege of leading a team whose job is to look at every aspect of every AI system we build, and figure out ways to make them safer and more effective. We use the word “safety” very intentionally, because our work isn’t just about security, or privacy, or abuse; our scope is simply “if it involves AI, and someone or something could get hurt.”

But the thing about tools is that no matter how safe you make them, they can go wrong and they can be misused, and if AI is going to be a major part of our lives—which it almost certainly is—then we all need to learn how to understand it, how to think about it, and how to keep ourselves safe both with and from it. So as part of Cybersecurity Awareness Month, we’ve created some resources like the Be Cybersmart Kit to help individuals and organizations learn about some of the most important risks and how to protect themselves.

Cybersecurity awareness

Explore cybersecurity awareness resources and training

I’d like to focus on the three risks that are most likely to affect you directly as individuals and organizations in the near future: overreliance, deepfakes, and manipulation. The most important lesson is that AI safety is about a lot more than how it’s built—it’s about the ways we use it.

Overreliance on AI

Because my job has “security” in the title, when people ask me about the number one risk from AI they often expect me to talk about sophisticated cyberattacks. But the reality is that the number one way in which people get hurt by AI is by not knowing when (not) to trust it. If you were around in the late 1990s or early 2000s, you might remember a similar problem with search engines: people were worried that if people saw something on the Internet, all nicely written and formatted, they would assume whatever they read was true—and unfortunately, this worry was well-founded. This might seem ridiculous to us with twenty years of additional experience with the Internet; didn’t people know that the Internet was written by people? Had they ever met people? But at the time, very few people ever encountered professionally-formatted text with clean layouts that wasn’t the result of a lengthy editorial process; our instincts for what “looked reputable” were wrong. Today’s AI has a similar concern because it communicates with you, and we aren’t used to things that speak to us in natural language not understanding basic things about our lives.

We call this problem “overreliance,” and it comes in four basic shapes:

  • Naive overreliance happens when users simply don’t realize that just because responses from AI sound intelligent and well-reasoned, that doesn’t mean the responses actually are smart. They treat the AI like an expert instead of like a helpful, but sometimes naive, assistant.
  • Rushed overreliance happens when people know they need to check, but they just don’t have time to—maybe they’re in a fast-paced environment, or they have too many things to check one by one, or they’ve just gotten used to clicking “accept.”
  • Forced overreliance is what happens when users can’t check, even if they want to; think of an AI helping a non-programmer write a complex website (are you going to check the code for bugs?) or vision augmentation for the blind.
  • Motivated overreliance is maybe the sneakiest: it happens when users have an answer they want to get, and keep asking around (or rephrasing the question, or looking at different information) until they get it.

In each case, the problem with overreliance is that it undermines the human role in oversight, validation, and judgment, which is crucial in preventing AI mistakes from leading to negative outcomes.

How to stay safe

The most important thing you can do to protect yourself is to understand that AI systems aren’t the infallible computers of science fiction. The best way to think of them is as earnest, smart, junior colleagues—excited to help and sometimes really smart but sometimes also really dumb. In fact, this rule applies to a lot more than just overreliance: we’ve found that asking “how would I make this safe if it were a person instead of an AI?” is one of the most reliable ways to secure an AI system against a huge range of risks.

  1. Treat AI as a tool, not a decision-maker: Always verify the AI’s output, especially in critical areas. You wouldn’t hand a key task to a new hire and assume what they did is perfect; treat AI the same way. Whether it’s generating code or producing a report, review it carefully before relying on it.
  2. Maintain human oversight: Think of this as building a business process. If you’re going to be using an AI to help make decisions, who is going to cross-check that? Will someone be overseeing the results for compliance, maybe, or doing a final editorial pass? This is especially true in high-stakes or regulated environments where errors could have serious consequences.
  3. Use AI for brainstorming: AI is at its best when you ask it to lean into its creativity. It’s especially good at helping come up with ideas and interactively brainstorming. Don’t ask AI to do the job for you; ask AI to come up with an idea for your next step, think about it and maybe tweak it a bit, then ask it about its thoughts for what to do next. This way its creativity is boosting yours, while your eye is still on whether the result is what you want.

Train your team to know that AI can make mistakes. When people understand AI’s limitations, they’re less likely to trust it blindly.

Impersonation using AI

Fighting deepfakes with more transparency

Read more

Deepfakes are highly realistic images, recordings, and videos created by AI. They’re called “fakes” when they’re used for deceptive purposes—and both this threat and the next one are about deception. Impersonation is when someone uses a deepfake to convince you that you’re talking to someone that you aren’t. This threat can have serious implications for businesses, as bad actors can use deepfake technology to deceive others into making decisions based on fraudulent information.

Imagine someone creates a deepfake of your chief finance officer’s voice and uses it to convince an employee to authorize a fraudulent transfer. This isn’t hypothetical—it already happened. A company in Hong Kong was taken for $25.6 million with the use of this exact technique.1

The real danger lies in how convincingly these AI-generated voices and videos can mimic trusted individuals, making it hard to know who you’re talking to. Traditional methods of identifying people—like hearing their voice on the phone or seeing them on a video call—are no longer reliable.

How to stay safe

As deepfakes become more compelling, the best defense is to communicate with people in ways where recognizing their face or voice isn’t the only thing you’re relying on. That means using authenticated communication channels like Microsoft Teams or email rather than phone calls or SMS, which are trivial to fake. Within those channels, you need to check that you’re talking to the person you think you’re talking to, and that software (if built right) can help you do that.

In the Hong Kong example above, the bad actor sent an email from a fake but realistic-looking email address inviting the victim to a Zoom meeting on an attacker-controlled but realistically-named server, where they had a conversation with “coworkers” who were actually all deepfakes. Email services such as Outlook can prevent situations like this by vividly highlighting that this is a message from an unfamiliar email address and one that isn’t part of your company; enterprise video conferencing (VC) systems like Teams can identify that you’re connecting to a system outside your own company as a guest. Use tools that provide indicators like these and pay attention to them.

If you find that you need to talk over an unauthenticated channel—say, you get a phone call from a family member in a bad situation and desperately needing you to send them money, or you get a WhatsApp message from an unfamiliar number—consider pre-arranging some secret code words with people you know so you can identify that they’re really who they say they are.

All of these are examples of a familiar technique that we use in security called multi-factor authentication (MFA), which is about using multiple means to verify someone is who they say they are. If you communicate over an authenticated channel, an attacker has to both compromise an account on your service (which itself should be protected by multiple factors) and create a convincing deepfake of that particular person. Forcing attackers to simultaneously do multiple different attacks against the same target at once makes the job exponentially harder for them. Most important services you use (email, social networks, and so on) allow you to set up MFA, and you should always do this when you can—preferably using “strong” MFA methods like physical keys or mobile apps, rather than weak methods like SMS, which are easily faked. According to our latest Microsoft Digital Defense Report, implementing modern day MFA reduces the likelihood of account compromise by 99.2%, significantly strengthening security and making unauthorized access more difficult for attackers to gain access. Although MFA techniques reduce the risk of identity compromise, many organization have been slow to adopt them. So, in January 2020, Microsoft introduced “security defaults” that turn on MFA while turning off basic and legacy authentication for new tenants and those with simple environments. The impact is clear: tenants that use security defaults experience 80% fewer compromises than tenants that don’t.

Scams, phishing, and social manipulation

What is phishing?

Learn more

Beyond impersonating someone you know, AI can be used to power a whole range of attacks against people. The most expensive part of running a scam is taking the victim from the moment they first pick up the bait—answering an email message, perhaps—to the moment the scammers get what they want, be it your password or your money. Phishing campaigns often require work to create cloned websites to steal your credentials. Spear-phishing requires crafting a targeted set of lures for each potential victim. All of these are things that bad actors can do much more quickly and easily with AI tools to help them; they are, after all, the same tools that good actors use to automate customer service, website building, or document creation.

On top of scams, an increasingly important use of AI is in social manipulation, especially by actors with political goals—whether they be real advocacy organizations or foreign intelligence services. Since the mid-2010s, a key goal of many governments has been to sow confusion in the information world in order to sway political outcomes. This can include:

  • Convincing you that something is true when it isn’t—maybe that some kind of crime is rampant and you need to be protected from it, or that your political enemies have been doing something awful.
  • Convincing you that something isn’t true when it is—maybe that the bad things they were caught doing are actually deepfakes and frauds.
  • Simply convincing you that you can’t know what’s true, and you can’t do anything about it anyway, so you should just give up and stay home and not try to affect things.

There are a lot of tricks to doing this, but the most important ones are to make it feel like “everybody feels” something (by making sure you see just enough comments saying something that you figure it must be right, and you start repeating them, making other people believe it even more) and by telling you what you want to hear—creating false stories that line up with what you’re already expecting to believe. (Remember motivated overreliance? This is the same thing!)

AI is supercharging this space as well; it used to be that if you wanted to make sure that every hot conversation about a subject had people voicing your opinion, you needed either very non-human-sounding scripts, or you needed to hire a room full of operators. Today, all you need is a computer.

You can learn more about these attacks in on our threat intelligence website called Microsoft Security Insider.

How to stay safe

Take your current habits for being aware of potential scams or phishing attempts, and turn them up a notch. Just because something showed up at the top of search results doesn’t mean it’s legitimate. Look at things like URLs and source email addresses carefully, and see if you’re looking at something genuine or not.

To detect sophisticated phishing attempts, always verify both the source and the information with trusted channels. Cybercriminals often create a false sense of urgency, use amplification tactics, and mimic trustworthy sources to make their emails or content appear legitimate. Stay especially cautious when approached by unfamiliar individuals online, as most fraud or influence operations begin with a simple social media reply or a seemingly innocent “wrong number” message. (More sophisticated attacks will send friend requests to people, and once you get one person to say yes, your further requests to their friends will look more legitimate, since they now have mutual “friends” with the attacker.)

Social manipulation can affect you both directly (you see messages created by a threat actor) or indirectly (your friends saw those messages and unwittingly repeated them). This means that just because you hear something from someone you trust, you can’t be sure they didn’t get fooled too. If you’re forming your opinion about something, or if you need to make an important decision about whether you believe something or not, do some research, and figure out where a story came from. (And don’t forget that “they won’t tell you about this!” is a common thing to add to frauds, just to make you believe that the lack of news coverage makes it more true.)

But on the other hand, don’t refuse to believe anything you hear, because making you not believe true things is another way you can be cheated. Too much skepticism can get you in just as much trouble as not enough.

And ultimately, remember—social media and similar fora are designed to get you more engaged, activated, and excited, and when you’re in that state, you’re more likely to amplify any feelings you encounter. Often the best thing you can do is simply disconnect for a while and take a breather.

The power and limitations of AI

While AI is a powerful tool, its safety and effectiveness rely on more than just the technology itself. AI functions as one part of a larger, interconnected system that includes human oversight, business processes, and societal context. Navigating the risks—whether it’s overreliance, impersonation, cyberattacks, and social manipulation—requires not only understanding AI’s role but also the actions people must take to stay safe. As AI continues to evolve, staying safe means remaining active participants—adapting, learning, and taking intentional steps to protect both the technology and ourselves. We encourage you to use the resources on the cybersecurity awareness page and help educate your organization so as to create a security-first culture and secure our world—together.

Learn more about AI safety and security


1Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’, CNN, 2024.

The post AI safety first: Protecting your business and empowering your people appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/10/31/ai-safety-first-protecting-your-business-and-empowering-your-people/feed/ 0
Microsoft Trustworthy AI: Unlocking human potential starts with trust  https://aka.ms/MicrosoftTrustworthyAI https://aka.ms/MicrosoftTrustworthyAI#respond Tue, 24 Sep 2024 14:00:00 +0000 At Microsoft, we have commitments to ensuring Trustworthy AI and are building industry-leading supporting technology. Our commitments and capabilities go hand in hand to make sure our customers and developers are protected at every layer. Building on our commitments, today we are announcing new product capabilities to strengthen the security, safety and privacy of AI systems.

The post Microsoft Trustworthy AI: Unlocking human potential starts with trust  appeared first on The Microsoft Cloud Blog.

]]>
As AI advances, we all have a role to play to unlock AI’s positive impact for organizations and communities around the world. That’s why we’re focused on helping customers use and build AI that is trustworthy, meaning AI that is securesafe and private.

At Microsoft, we have commitments to ensure Trustworthy AI and are building industry-leading supporting technology. Our commitments and capabilities go hand in hand to make sure our customers and developers are protected at every layer.

Building on our commitments, today we are announcing new product capabilities to strengthen the security, safety and privacy of AI systems.

Security. Security is our top priority at Microsoft, and our expanded Secure Future Initiative (SFI) underscores the company-wide commitments and the responsibility we feel to make our customers more secure. This week we announced our first SFI Progress Report, highlighting updates spanning culture, governance, technology and operations. This delivers on our pledge to prioritize security above all else and is guided by three principles: secure by design, secure by default and secure operations. In addition to our first party offerings, Microsoft Defender and Purview, our AI services come with foundational security controls, such as built-in functions to help prevent prompt injections and copyright violations. Building on those, today we’re announcing two new capabilities:

  • Evaluations in Azure AI Studio to support proactive risk assessments.
  • Microsoft 365 Copilot will provide transparency into web queries to help admins and users better understand how web search enhances the Copilot response. Coming soon.

Our security capabilities are already being used by customers. Cummins, a 105-year-old company known for its engine manufacturing and development of clean energy technologies, turned to Microsoft Purview to strengthen their data security and governance by automating the classification, tagging and labeling of data. EPAM Systems, a software engineering and business consulting company, deployed Microsoft 365 Copilot for 300 users because of the data protection they get from Microsoft. J.T. Sodano, Senior Director of IT, shared that “we were a lot more confident with Copilot for Microsoft 365, compared to other large language models (LLMs), because we know that the same information and data protection policies that we’ve configured in Microsoft Purview apply to Copilot.”

Safety. Inclusive of both security and privacy, Microsoft’s broader Responsible AI principles, established in 2018, continue to guide how we build and deploy AI safely across the company. In practice this means properly building, testing and monitoring systems to avoid undesirable behaviors, such as harmful content, bias, misuse and other unintended risks. Over the years, we have made significant investments in building out the necessary governance structure, policies, tools and processes to uphold these principles and build and deploy AI safely. At Microsoft, we are committed to sharing our learnings on this journey of upholding our Responsible AI principles with our customers. We use our own best practices and learnings to provide people and organizations with capabilities and tools to build AI applications that share the same high standards we strive for.

Today, we are sharing new capabilities to help customers pursue the benefits of AI while mitigating the risks:

  • Correction capability in Microsoft Azure AI Content Safety’s Groundedness detection feature that helps fix hallucination issues in real time before users see them.
  • Embedded Content Safety, which allows customers to embed Azure AI Content Safety on devices. This is important for on-device scenarios where cloud connectivity might be intermittent or unavailable.
  • New evaluations in Azure AI Studio to help customers assess the quality and relevancy of outputs and how often their AI application outputs protected material.
  • Protected Material Detection for Code is now in preview in Azure AI Content Safety to help detect pre-existing content and code. This feature helps developers explore public source code in GitHub repositories, fostering collaboration and transparency, while enabling more informed coding decisions.

It’s amazing to see how customers across industries are already using Microsoft solutions to build more secure and trustworthy AI applications. For example, Unity, a platform for 3D games, used Microsoft Azure OpenAI Service to build Muse Chat, an AI assistant that makes game development easier. Muse Chat uses content-filtering models in Azure AI Content Safety to ensure responsible use of the software. Additionally, ASOS, a UK-based fashion retailer with nearly 900 brand partners, used the same built-in content filters in Azure AI Content Safety to support top-quality interactions through an AI app that helps customers find new looks.

We’re seeing the impact in the education space too. New York City Public Schools partnered with Microsoft to develop a chat system that is safe and appropriate for the education context, which they are now piloting in schools. The South Australia Department for Education similarly brought generative AI into the classroom with EdChat, relying on the same infrastructure to ensure safe use for students and teachers.

Privacy. Data is at the foundation of AI, and Microsoft’s priority is to help ensure customer data is protected and compliant through our long-standing privacy principles, which include user control, transparency and legal and regulatory protections. To build on this, today we’re announcing:

  • Confidential inferencing in preview in our Azure OpenAI Service Whisper model, so customers can develop generative AI applications that support verifiable end-to-end privacy. Confidential inferencing ensures that sensitive customer data remains secure and private during the inferencing process, which is when a trained AI model makes predictions or decisions based on new data. This is especially important for highly regulated industries, such as health care, financial services, retail, manufacturing and energy.
  • The general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs, which allow customers to secure data directly on the GPU. This builds on our confidential computing solutions, which ensure customer data stays encrypted and protected in a secure environment so that no one gains access to the information or system without permission.
  • Azure OpenAI Data Zones for the EU and U.S. are coming soon and build on the existing data residency provided by Azure OpenAI Service by making it easier to manage the data processing and storage of generative AI applications. This new functionality offers customers the flexibility of scaling generative AI applications across all Azure regions within a geography, while giving them the control of data processing and storage within the EU or U.S.

We’ve seen increasing customer interest in confidential computing and excitement for confidential GPUs, including from application security provider F5, which is using Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs to build advanced AI-powered security solutions, while ensuring confidentiality of the data its models are analyzing. And multinational banking corporation Royal Bank of Canada (RBC) has integrated Azure confidential computing into their own platform to analyze encrypted data while preserving customer privacy. With the general availability of Azure Confidential VMs with NVIDIA H100 Tensor Core GPUs, RBC can now use these advanced AI tools to work more efficiently and develop more powerful AI models.

An illustration of circles with icons depicting Microsoft’s Trustworthy AI commitments and capabilities around Security, Privacy, and Safety against a white background.

Achieve more with Trustworthy AI 

We all need and expect AI we can trust. We’ve seen what’s possible when people are empowered to use AI in a trusted way, from enriching employee experiences and reshaping business processes to reinventing customer engagement and reimagining our everyday lives. With new capabilities that improve security, safety and privacy, we continue to enable customers to use and build trustworthy AI solutions that help every person and organization on the planet achieve more. Ultimately, Trustworthy AI encompasses all that we do at Microsoft and it’s essential to our mission as we work to expand opportunity, earn trust, protect fundamental rights and advance sustainability across everything we do.

Commitments

Capabilities

The post Microsoft Trustworthy AI: Unlocking human potential starts with trust  appeared first on The Microsoft Cloud Blog.

]]>
https://aka.ms/MicrosoftTrustworthyAI/feed/ 0
Red teams think like hackers to help keep AI safe https://news.microsoft.com/source/features/ai/red-teams-think-like-hackers-to-help-keep-ai-safe/ https://news.microsoft.com/source/features/ai/red-teams-think-like-hackers-to-help-keep-ai-safe/#respond Thu, 01 Aug 2024 15:00:00 +0000 Just as AI tools such as ChatGPT and Copilot have transformed the way people work in all sorts of roles around the globe, they’ve also reshaped so-called red teams—groups of cybersecurity experts whose job is to think like hackers to help keep technology safe and secure.  

The post Red teams think like hackers to help keep AI safe appeared first on The Microsoft Cloud Blog.

]]>
Just as AI tools such as ChatGPT and Copilot have transformed the way people work in all sorts of roles around the globe, they’ve also reshaped so-called red teams — groups of cybersecurity experts whose job is to think like hackers to help keep technology safe and secure.  

Generative AI’s abilities to communicate conversationally in multiple languages, write stories and even create photorealistic images hold new potential hazards, from providing biased or inaccurate results to giving people with ill intent new ways to stir up discord. These risks spurred a novel and broad approach to how Microsoft’s AI Red Team is working to identify and reduce potential harm. 

“We think security, responsible AI and the broader notion of AI safety are different facets of the same coin,” says Ram Shankar Siva Kumar, who leads Microsoft’s AI Red Team. “It’s important to get a universal, one-stop-shop look at all the risks of an AI system before it reaches the hands of a customer. Because this is an area that is going to have massive sociotechnical implications.” 

This post is part of Microsoft’s Building AI Responsibly series, which explores top concerns with deploying AI and how the company is addressing them with its responsible AI practices and tools. 

The term “red teaming” was coined during the Cold War, when the U.S. Defense Department conducted simulation exercises with red teams acting as the Soviets and blue teams acting as the U.S. and its allies. The cybersecurity community adopted the language a few decades ago, creating red teams to act as adversaries trying to break, corrupt or misuse technology — with the goal of finding and fixing potential harms before any problems emerged. 

When Siva Kumar formed Microsoft’s AI Red Team in 2018, he followed the traditional model of pulling together cybersecurity experts to proactively probe for weaknesses, just as the company does with all its products and services.  

At the same time, Forough Poursabzi was leading researchers from around the company in studies with a new and different angle from a responsible AI lens, looking at whether the generative technology could be harmful — either intentionally or due to systemic issues in models that were overlooked during training and evaluation. That’s not an element red teams have had to contend with before. 

The different groups quickly realized they’d be stronger together and joined forces to create a broader red team that assesses both security and societal-harm risks alongside each other, adding a neuroscientist, a linguist, a national security specialist and numerous other experts with diverse backgrounds.  

It’s important to get a universal, one-stop-shop look at all the risks of an AI system before it reaches the hands of a customer.

Ram Shankar Siva Kumar, head of Microsoft’s AI Red Team

“We need a wide range of perspectives to get responsible AI red teaming done right,” says Poursabzi, a senior program manager on Microsoft’s AI Ethics and Effects in Engineering and Research (Aether) team, which taps into a whole ecosystem of responsible AI at Microsoft and looks into emergent risks and longer-term considerations with generative AI technologies.  

The dedicated AI Red Team is separate from those who build the technology, and its expanded scope includes adversaries who may try to compel a system to generate hallucinations, as well as harmful, offensive or biased outputs due to inadequate or inaccurate data.  

Team members assume various personas, from a creative teenager pulling a prank to a known adversary trying to steal data, to reveal blind spots and uncover risks. Team members live around the world and collectively speak 17 languages, from Flemish to Mongolian to Telugu, to help with nuanced cultural contexts and region-specific threats.  

And they don’t only try to compromise systems alone; they also use large language models (LLMs) for automated attacks on other LLMs. 

“We need a wide range of perspectives to get responsible AI red teaming done right,” says Poursabzi, a senior program manager on Microsoft’s AI Ethics and Effects in Engineering and Research (Aether) team, which taps into a whole ecosystem of responsible AI at Microsoft and looks into emergent risks and longer-term considerations with generative AI technologies.  

The dedicated AI Red Team is separate from those who build the technology, and its expanded scope includes adversaries who may try to compel a system to generate hallucinations, as well as harmful, offensive or biased outputs due to inadequate or inaccurate data.  

Team members assume various personas, from a creative teenager pulling a prank to a known adversary trying to steal data, to reveal blind spots and uncover risks. Team members live around the world and collectively speak 17 languages, from Flemish to Mongolian to Telugu, to help with nuanced cultural contexts and region-specific threats.  

And they don’t only try to compromise systems alone; they also use large language models (LLMs) for automated attacks on other LLMs. 

The post Red teams think like hackers to help keep AI safe appeared first on The Microsoft Cloud Blog.

]]>
https://news.microsoft.com/source/features/ai/red-teams-think-like-hackers-to-help-keep-ai-safe/feed/ 0
Security above all else—expanding Microsoft’s Secure Future Initiative http://approjects.co.za/?big=en-us/security/blog/2024/05/03/security-above-all-else-expanding-microsofts-secure-future-initiative/ http://approjects.co.za/?big=en-us/security/blog/2024/05/03/security-above-all-else-expanding-microsofts-secure-future-initiative/#respond Fri, 03 May 2024 14:55:00 +0000 We are making security our top priority at Microsoft, above all else—over all other features.

The post Security above all else—expanding Microsoft’s Secure Future Initiative appeared first on The Microsoft Cloud Blog.

]]>
Last November, we launched the Secure Future Initiative (SFI) to prepare for the increasing scale and high stakes of cyberattacks. SFI brings together every part of Microsoft to advance cybersecurity protection across our company and products.

Since then, the threat landscape has continued to rapidly evolve, and we have learned a lot. The recent findings by the Department of Homeland Security’s Cyber Safety Review Board (CSRB) regarding the Storm-0558 cyberattack from last July, and the Midnight Blizzard attack we reported in January, underscore the severity of the threats facing our company and our customers.

Microsoft plays a central role in the world’s digital ecosystem, and this comes with a critical responsibility to earn and maintain trust. We must and will do more.

We are making security our top priority at Microsoft, above all else—over all other features. We’re expanding the scope of SFI, integrating the recent recommendations from the CSRB as well as our learnings from Midnight Blizzard to ensure that our cybersecurity approach remains robust and adaptive to the evolving threat landscape.

We will mobilize the expanded SFI pillars and goals across Microsoft and this will be a dimension in our hiring decisions. In addition, we will instill accountability by basing part of the compensation of the company’s Senior Leadership Team on our progress in meeting our security plans and milestones.

Below are details to demonstrate the seriousness of our work and commitment.

Diagram illustrating the six pillars of the Microsoft Secure Future Initiative.
Expansion of SFI approach and scope
We have evolved our security approach, and going forward our work will be guided by the following three security principles:

Secure by design: Security comes first when designing any product or service.
Secure by default: Security protections are enabled and enforced by default, require no extra effort, and are not optional.
Secure operations: Security controls and monitoring will continuously be improved to meet current and future threats.
We are further expanding our goals and actions aligned to six prioritized security pillars and providing visibility into the details of our execution:

  1. Protect identities and secrets
    Reduce the risk of unauthorized access by implementing and enforcing best-in-class standards across all identity and secrets infrastructure, and user and application authentication and authorization. As part of this, we are taking the following actions:

Protect identity infrastructure signing and platform keys with rapid and automatic rotation with hardware storage and protection (for example, hardware security module (HSM) and confidential compute).
Strengthen identity standards and drive their adoption through use of standard SDKs across 100% of applications.
Ensure 100% of user accounts are protected with securely managed, phishing-resistant multifactor authentication.
Ensure 100% of applications are protected with system-managed credentials (for example, Managed Identity and Managed Certificates).
Ensure 100% of identity tokens are protected with stateful and durable validation.
Adopt more fine-grained partitioning of identity signing keys and platform keys.
Ensure identity and public key infrastructure (PKI) systems are ready for a post-quantum cryptography world.

  1. Protect tenants and isolate production systems
    Protect all Microsoft tenants and production environments using consistent, best-in-class security practices and strict isolation to minimize breadth of impact. As part of this, we are taking the following actions:

Maintain the security posture and commercial relationships of tenants by removing all unused, aged, or legacy systems.
Protect 100% of Microsoft, acquired, and employee-created tenants, commerce accounts, and tenant resources to the security best practice baselines.
Manage 100% of Microsoft Entra ID applications to a high, consistent security bar.
Eliminate 100% of identity lateral movement pivots between tenants, environments, and clouds.
100% of applications and users have continuous least-privilege access enforcement.
Ensure only secure, managed, healthy devices will be granted access to Microsoft tenants.

  1. Protect networks
    Protect Microsoft production networks and implement network isolation of Microsoft and customer resources. As part of this, we are taking the following actions:

Secure 100% of Microsoft production networks and systems connected to the networks by improving isolation, monitoring, inventory, and secure operations.
Apply network isolation and microsegmentation to 100% of the Microsoft production environments, creating additional layers of defense against attackers.
Enable customers to easily secure their networks and network isolate resources in the cloud.

  1. Protect engineering systems
    Protect software assets and continuously improve code security through governance of the software supply chain and engineering systems infrastructure. As part of this, we are taking the following actions:

Build and maintain inventory for 100% of the software assets used to deploy and operate Microsoft products and services.
100% of access to source code and engineering systems infrastructure is secured through Zero Trust and least-privilege access policies.
100% of source code that deploys to Microsoft production environments is protected through security best practices.
Secure development, build, test, and release environments with 100% standardized, governed pipelines and infrastructure isolation.
Secure the software supply chain to protect Microsoft production environments.

  1. Monitor and detect threats
    Comprehensive coverage and automatic detection of threats to Microsoft production infrastructure and services. As part of this, we are taking the following actions:

Maintain a current inventory across 100% of Microsoft production infrastructure and services.
Retain 100% of security logs for at least two years and make six months of appropriate logs available to customers.
100% of security logs are accessible from a central data lake to enable efficient and effective security investigation and threat hunting.
Automatically detect and respond rapidly to anomalous access, behaviors, and configurations across 100% of Microsoft production infrastructure and services.

  1. Accelerate response and remediation
    Prevent exploitation of vulnerabilities discovered by external and internal entities, through comprehensive and timely remediation. As part of this, we are taking the following actions:

Reduce the Time to Mitigate for high-severity cloud security vulnerabilities with accelerated response.
Increase transparency of mitigated cloud vulnerabilities through the adoption and release of Common Weakness Enumeration™ (CWE™), and Common Platform Enumeration™ (CPE™) industry standards for released high severity Common Vulnerabilities and Exposures (CVE) affecting the cloud.
Improve the accuracy, effectiveness, transparency, and velocity of public messaging and customer engagement.
These goals directly align to our learnings from the Midnight Blizzard incident as well as all four CSRB recommendations to Microsoft and all 12 recommendations to cloud service providers (CSPs), across the areas of security culture, cybersecurity best practices, auditing logging norms, digital identity standards and guidance, and transparency.

We are delivering on these goals through a new level of coordination with a new operating model that aligns leaders and teams to the six SFI pillars, in order to drive security holistically and break down traditional silos. The pillar leaders are working across engineering Executive Vice Presidents (EVPs) to drive integrated, cross-company engineering execution, doing this work in waves. These engineering waves involve teams across Microsoft Azure, Windows, Microsoft 365, and Security, with additional product teams integrating into the process weekly.

While there is much more to do, we’ve made progress in executing against SFI priorities. For example, we’ve implemented automatic enforcement of multifactor authentication by default across more than one million Microsoft Entra ID tenants within Microsoft, including tenants for development, testing, demos, and production. We have eliminated or reduced application targets by removing 730,000 apps to date across production and corporate tenants that were out-of-lifecycle or not meeting current SFI standards. We have expanded our logging to give customers deeper visibility. And we recently announced a significant shift on our response process: We are now publishing root cause data for Microsoft CVEs using the CWE™ industry standard.

Adhering to standards with paved paths systems
Paved paths are best practices from our learned experiences, drawing upon lessons such as how to optimize productivity of our software development and operations, how to achieve compliance (such as Software Bill of Materials, Sarbanes-Oxley Act, General Data Protection Regulation, and others), and how to eliminate entire categories of vulnerabilities and mitigate related risks. A paved path becomes a standard when adoption significantly improves the developer or operations experience or security, quality, or compliance.

With SFI, we are explicitly defining standards for each of the six security pillars, and adherence to these standards will be measured as objectives and key results (OKRs).

Driving continuous improvement
The Secure Future Initiative empowers all of Microsoft to implement the needed changes to deliver security first. Our company culture is based on a growth mindset that fosters an ethos of continuous improvement. We continually seek feedback and new perspectives to tune our approach and progress. We will take our learnings from security incidents, feed them back into our security standards, and operationalize these learnings as paved paths that can enable secure design and operations at scale.

Instituting new governance
We are also taking major steps to elevate security governance, including several organizational changes and additional oversight, controls, and reporting.

Microsoft is implementing a new security governance framework spearheaded by the Chief Information Security Officer (CISO). This framework introduces a partnership between engineering teams and newly formed Deputy CISOs, collectively responsible for overseeing SFI, managing risks, and reporting progress directly to the Senior Leadership Team. Progress will be reviewed weekly with this executive forum and quarterly with our Board of Directors.

Finally, given the importance of threat intelligence, we are bringing the full breadth of nation-state actor and threat hunting capabilities into the CISO organization.

Instilling a security-first culture
Culture can only be reinforced through our daily behaviors. Security is a team sport and is best realized when organizational boundaries are overcome. The engineering EVPs, in close coordination with SFI pillar leaders, are holding broadscale weekly and monthly operational meetings that include all levels of management and senior individual contributors. These meetings work on detailed execution and continuous improvement of security in context with what we collectively deliver to customers. Through this process of bottom-to-top and end-to-end problem solving, security thinking is ingrained in our daily behaviors.

Ultimately, Microsoft runs on trust and this trust must be earned and maintained. As a global provider of software, infrastructure, and cloud services, we feel a deep responsibility to do our part to keep the world safe and secure. Our promise is to continually improve and adapt to the evolving needs of cybersecurity. This is job number one for us.

Get started with Microsoft Security

The post Security above all else—expanding Microsoft’s Secure Future Initiative appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/security/blog/2024/05/03/security-above-all-else-expanding-microsofts-secure-future-initiative/feed/ 0
Groundbreaking AI innovation is transforming industries across France http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/04/30/groundbreaking-ai-innovation-is-transforming-industries-across-france/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/04/30/groundbreaking-ai-innovation-is-transforming-industries-across-france/#respond Tue, 30 Apr 2024 15:00:00 +0000 Reflecting on the transformative journey of AI within France alone, it becomes evident that we're venturing into an unprecedented era of technological advancement.

The post Groundbreaking AI innovation is transforming industries across France appeared first on The Microsoft Cloud Blog.

]]>
This blog is part of the AI worldwide series, which highlights customer stories from around the globe. Read more stories from India, Australia and New Zealand, Brazil, and Japan.

AI is currently at the forefront of global technological advancement, permeating various sectors from insurance to energy, driving efficiency, innovation, and transformative changes in society. With ongoing developments in machine learning and natural language processing, AI continues to reshape industries, offering a glimpse into a future where technology and human ingenuity intersect in exciting new ways. The expanding footprint of AI promises both unprecedented opportunities and considerations for responsible implementation.  

For me personally, one of the most exciting aspects is seeing the revolution of industries set into motion as sector-specific use cases begin to emerge. The number of Azure AI customers continues to grow with more than 65% of the Fortune 500 companies now using Microsoft Azure OpenAI Service, which underscores the critical role of partnerships and industry innovation to scale AI solutions to full potential across sectors.

Microsoft AI

You dream it. AI helps build it.

A decorative GIF with abstract swirling animations

This willingness of industry leaders to be pioneers of AI was on bold display during the Microsoft AI Tour stop in Paris, part of the global event series designed to help decision makers and developers discover new opportunities with AI and advance their knowledge. Organizations such as Schneider Electric, The Groupama Group, Amadeus, Onepoint, AXA, and TotalEnergies are not just adopting AI; they’re redefining its potential. These groundbreaking use cases are shedding light on a future where AI is not just a tool, but a catalyst for a richer, more efficient, and more sustainable world.

Groupama’s virtual assistant optimizes policyholder service management

The Groupama Group, a premier mutual insurance group in France, has introduced a cutting-edge virtual assistant within its Employee Savings unit, harnessing the power of Azure OpenAI Service, Azure AI Search, and the Microsoft Bot Framework, to streamline customer managers’ interactions with policyholders. First ideated during an AI hackathon, the assistant has been embraced by the unit’s entire staff and boasts an impressive 80% success rate in providing accurate, dependable, and verifiable information.

“Our managers save a considerable amount of time and are able to carry out their work in much better conditions,” shared François-Xavier Enderlé, Head of Digital Transformation, “and this clearly enhances the quality of the relationship we maintain with our customers.”

Groupama has also stood up an interdepartmental AI-centered think tank, AI Factory, which is currently exploring more than 25 AI use cases aimed at revolutionizing claims processing, enhancing customer service through tools like a new FAQ chatbot, and streamlining the underwriting process for efficiency. Further, Groupama aims to democratize AI technology through training on AI prompting, empowering employees to innovate and improve operational efficiency and customer engagement.

Amadeus enhances employee efficiency with Copilot for Microsoft 365

Amadeus, a global technology provider for the travel industry, has deployed Copilot for Microsoft 365 to streamline work and free employees to focus on value-added tasks. Seamlessly integrated into Microsoft Teams, Word, PowerPoint, and Outlook, Amadeus’ Copilot solution has significantly improved operational efficiency.

“One of the challenges for large-scale, global organizations is the collaboration and the data management,” explains Marco Ruiz González, Product Manager and Solution Architect who supervised the deployment. “We are generating a large amount of data from different countries, and it’s very useful to have quick access to all this data.”

Copilot for Microsoft 365

Adoption strategies

Early results are promising: pilot users reported substantial time savings in communication drafting, enhanced efficiency in email and meeting management, and improved information gathering and content translation capabilities.

With a 90% adoption rate among the initial 300 pilot users, half of whom engage with Copilot weekly, Amadeus plans to extend Copilot to 3,000 employees over the next six months, prioritizing adoption training tailored to diverse user profiles.

Schneider Electric leads the charge in sustainable energy management with AI

Schneider Electric is tackling the complex issue of optimizing energy use and performance with its EcoStruxure platform. By combining Azure OpenAI Service and Internet of Things (IoT), EcoStruxures merges Schneider Electric’s industry knowledge with Microsoft AI technology, enabling sustainable energy solutions and efficient energy management on a global scale. This includes dynamic control of energy performance, decision-making on the use of renewable energy sources, and overall energy optimization.

“People are using sustainable energy solutions to both produce and consume energy, and they can optimize how to produce or store that energy on the grid as it makes sense,” says Yoann Bersihand, Vice President of AI Technology at Schneider Electric. “Without AI, there is no way that we could address a problem as complex as this.”

The platform is designed with a layered architecture, planning future enhancements to integrate AI directly into hardware for efficient and sustainable energy management, especially beneficial for customers with limitations on using cloud services.

Schneider Electric has expanded its partnership with Microsoft, integrating Azure OpenAI Service into its operations to enhance efficiency and innovation across various processes. The integration enables the creation of solutions such as the Resource Advisor Copilot, leveraging large language model technology for data analysis and decision support, and Jo-Chat GPT, an internal tool enhancing employee productivity through generative AI. Further innovations include a programmable logic controller (PLC) code generation assistant that helps engineers quickly create high-quality, tested, and validated code, the Finance Advisor and Knowledge Bot, aimed at improving financial decision-making and customer service, respectively, as well as plans to incorporate GitHub Copilot to boost offer creation and Microsoft Copilot for Sales to support frontline staff. These advancements signify Schneider Electric’s commitment to leveraging generative AI for operational excellence and innovation.

“We didn’t want AI just to be an extra layer on top of the data teams. We decided to really go all-in on AI and not simply create proofs of concepts,” explained Yoann Bersihand, Schneider Electric’s Vice President of AI Technology, capturing Schneider Electric’s commitment to fully integrating AI into its operations to lead innovation in the energy sector.

Onepoint unlocks productivity company-wide with generative AI

An early adopter of AI, technology and consulting firm, Onepoint, is infusing AI at every level of the company with Microsoft’s turnkey generative AI solutions: Azure OpenAI Service, GitHub Copilot, and Copilot for Microsoft 365.

Neo, Onepoint’s secure conversational agent built on Azure OpenAI Service, leverages a library of prompts and business-oriented solutions to quickly generate reports and analyses, increasing productivity for 3,300 employees across the company. Onepoint has also piloted GitHub Copilot, resulting in higher-quality code, better documentation, and productivity gains around 40% on code production.

“The pilot showed us that if developers were properly acculturated to the product, it was really possible to make a quantum leap in productivity,” asserts François Binder, Partner Data & AI for Onepoint.

Addressing the challenge of acculturating employees to AI technology and practices, Onepoint has instituted an “AI Office” to ensure both technical and non-technical staff understand and adopt AI effectively. By providing structured training, fostering an AI community, and overseeing the deployment of AI solutions, the unit seeks to bridge knowledge gaps and biases related to AI among both technical and non-technical employees.

“We’re doing everything we can to fully embark our team in the generative AI adventure and give each of our consultants the means to become augmented consultants,” insists Binder.

What’s more, Onepoint is strategically integrating generative AI solutions into its offerings, along with personalized training to ensure customers are well-versed in AI’s capabilities and best practices, extending the benefits of AI to their customers as well.

AXA’s secure generative AI platform boost productivity of global employees

AXA, a global leader in insurance and asset management, is embracing the digital future through generative AI with the launch of AXA Secure GPT. Developed in collaboration with Microsoft and powered by Azure OpenAI Service, this AXA Secure GPT is designed to equip AXA’s 140,000 employees with cutting-edge AI tools in a secure and efficient manner.

Addressing the challenge of safely integrating public AI advancements within the corporate environment, AXA Secure GPT ensures the utmost privacy and control over data by employing robust filtering and classification, alongside secure cloud tenancy to keep all data and interactions within a controlled environment. Stringent authentication protocols and comprehensive security controls monitor and protect against potential threats. By leveraging Microsoft’s content filtering and adding an extra layer of security, AXA Secure GPT exceeds the current standards for data privacy and security, ensuring a reliable and secure tool for its employees.

With the goal to scale from 1,000 current users to all 140,000 global employees by mid-2024, AXA provides comprehensive AI support and training which includes leveraging Microsoft consulting services for optimal technological use and architecture design, alongside a dedicated change management program in each country to ensure smooth integration.

“As an employer, it is our responsibility to provide our employees with the best tools to enhance their comfort and enable them to focus on high-value activities,” said Vincent De Ponthaud, Head of Software & AI Engineering at AXA. Tailored training sessions and a specially curated prompt library, aimed at enhancing productivity across various departments, empower AXA employees to focus on high-value activities.

TotalEnergies supports operational transformation with AI and low-code solutions

Multi-energy company, TotalEnergies, has implemented Copilot for Microsoft 365 to support operational transformation. In the initial testing phase involving 300 employees, the company observed enhanced operational efficiency and improved user experience. Concurrently, TotalEnergies is empowering its workforce with Microsoft Power Platform, enabling them to develop low-code/no-code solutions integrated with other company applications and databases, thereby streamlining the resolution of various day-to-day challenges.

“In line with our pioneering spirit, TotalEnergies is committed to digital transformation and supports its employees so that they can make the most of it,” said Patrick Pouyanné, CEO of TotalEnergies. “The new technologies of generative artificial intelligence and of ‘low code no code’ will provide them with the simplification and autonomy they need to put their skills and creativity even further at the service of our company’s transition strategy.” In pursuit of this objective, TotalEnergies employees will receive training dedicated to the understanding and utilization of the new AI tools effectively.

AI for everyone

The Microsoft AI Tour provided a compelling opportunity to witness firsthand the pinnacle of regional innovation and to glimpse the far-reaching global impact poised to shape our future. Reflecting on the transformative journey of AI within France alone, it becomes evident that we’re venturing into an unprecedented era of technological advancement. The success stories of companies like Schneider Electric, Groupama, Amadeus, Onepoint, and TotalEnergies illustrate how the synergy between AI and human ingenuity propels progress and transformation across diverse sectors, transformation that will doubtlessly reach beyond borders to the benefit of organizations across the globe.

A showroom from the Microsoft AI Tour in Paris, France where business leaders are testing Microsoft AI solutions
The Microsoft AI Tour in Paris, France.

It’s also important to recognize that our exploration with AI is in its infancy and the horizon for transformative impact is limitless. It’s critical that business leaders scaffold AI innovation within an architecture of responsible AI, which include the development of ethical guidelines that address transparency, equity, accountability, and privacy; building diverse teams; investing in employee education around responsible AI use; and collaboration with industry bodies and policymakers to establish regulatory frameworks that can guide responsible deployment. When innovation and responsibility are aligned, we draw closer to ensuring that the potential for transformational impact of AI will be harnessed for the benefit of society as a whole.

Take the next step in your AI journey by exploring Microsoft AI solutions, diving into The AI Strategy Roadmap, and getting skilled up with Microsoft Learn’s AI learning hub to ensure you’re ready to leverage Microsoft AI to its fullest potential.

The post Groundbreaking AI innovation is transforming industries across France appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/04/30/groundbreaking-ai-innovation-is-transforming-industries-across-france/feed/ 0
Building a foundation for AI success: Governance http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/03/28/building-a-foundation-for-ai-success-governance/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/03/28/building-a-foundation-for-ai-success-governance/#respond Thu, 28 Mar 2024 15:00:00 +0000 We have collected a set of resources that encompass best practices for AI governance, focusing on security, privacy and data governance, and responsible AI. 

The post Building a foundation for AI success: Governance appeared first on The Microsoft Cloud Blog.

]]>
This is the last post in our six-part blog series. See part one, part two, part three, part four, part five, and download the white paper.

To date, this series has explored four of the five drivers of AI readiness: business strategy, technology and data strategy, AI strategy and experience, and organization and culture. Each is critical to an organization’s ability to use AI to deliver value to the business, whether it’s related to productivity enhancements, customer experience, revenue generation, or net-new innovation. But nothing is ultimately more important than AI governance, which includes the processes, controls, and accountability structures needed to govern data privacy, data governance, security, and responsible development and use of AI in an organization.   

“We recognize that trust is not a given but earned through action,” said Microsoft Vice Chair and President Brad Smith. “That’s precisely why we are so focused on implementing our Microsoft responsible AI principles and practices—not just for ourselves, but also to equip our customers and partners to do the same.” 

In that spirit, we have collected a set of resources that encompass best practices for AI governance, focusing on security, privacy and data governance, and responsible AI. 

Building a Foundation for AI Success

A leader’s guide to accelerate your company’s success with AI

a close up of a purple wall

Security

Just as AI enables new opportunities, it also introduces new imperatives to manage risk, whether related specifically to AI usage, app and data protection, compliance with organizational and legal policies, or threat detection. The Microsoft Security Blog includes a set of resources to help you modernize security operations, empower security professionals, and learn best practices to mitigate and manage risk more effectively.  

One of the first steps you can take is to understand how AI is being used in the organization so you can make informed decisions and implement the appropriate controls. This post lays out the primary concerns leaders have about implementing AI, as well as a set of recommendations on how to discover, protect, and govern AI usage. 

For example, you may have heard of (or already be implementing) red teaming. Red teaming, according to this post by the Microsoft AI Red Team, “broadly refers to the practice of emulating real-world adversaries and their tools, tactics, and procedures to identify risks, uncover blind spots, validate assumptions, and improve the overall security posture of systems.” The post shares additional education, guidance, and resources to help your organization apply this best practice to your AI systems. 

Microsoft’s holistic approach to generative AI security considers the technology, its users, and society at large across four areas of protection: data privacy and ownership, transparency and accountability, user guidance and policy, and secure by design. For more on how Microsoft secures generative AI, download Securing AI guidance.  

Privacy and data governance

Building trust in AI requires a strong privacy and data governance foundation. As our Chief Privacy Officer Julie Brill has said, “At Microsoft we want to empower our customers to harness the full potential of new technologies like artificial intelligence, while meeting their privacy needs and expectations.” Enhancing trust and protecting privacy in the AI era, originally posted on the Microsoft on the Issues Blog, describes our approach to data privacy, focusing on topics such as data security, transparency, and data protection user controls. It also includes a set of resources to help you dig deeper into our approaches to privacy issues and share what we are learning. 
 

Data governance refers to the processes, policies, roles, metrics, and standards that enable secure, private, accurate, and usable data throughout its life cycle. It’s vital to your organization’s ability to manage risk, build trust, and promote successful business outcomes. It is also the foundation for data management practices that reduce the risk of data leakage or misuse of confidential or sensitive information such as business plans, financial records, trade secrets, and other business-critical assets. This post shares Microsoft’s approach to data security and compliance so you can learn more about how to safely and confidently adopt AI technologies and keep your most important asset—your data—safe. 

Responsible AI

“Don’t ask what computers can do, ask what they should do.” That is the title of the chapter on AI and ethics in a book Brad Smith coauthored in 2019, and they are also the first words in Governing AI: A Blueprint for the Future, which details Microsoft’s five-point approach to help governance advance more quickly, as well as our “Responsible by Design” approach to building AI systems that benefit society. 

The Microsoft on the Issues Blog includes a wealth of perspectives on responsible AI topics, including the Microsoft AI Access Principles, which detail our commitments to promote innovation and competition in the new AI economy and approaches to combating deepfakes in elections announced as part of the new Tech Accord announced in February in Munich. 

The Responsible AI Standard is the product of a multi-year effort to define product development requirements for responsible AI. It captures the essence of the work Microsoft has done to operationalize its responsible AI principles and offers valuable guidance to leaders and practitioners looking to apply similar approaches in their own organizations.

You may also have heard about our AI customer commitments, which include:  

  • Sharing what we are learning about developing and deploying AI responsibly and assist you in learning how to do the same. 
  • Creating an AI assurance program.
  • Supporting you as you implement your own AI systems responsibly. 

The Empowering responsible AI practices website brings together a range of policy, research, and engineering resources relevant to a spectrum of roles within your organization. Here you can find out more about our commitments to advance safe, secure, and trustworthy AI, learn about the most recent research advancements and collaborations, and explore responsible AI tools to help your organization define and implement best practices for human-AI interaction, fairness, transparency and accountability, and other critical objectives. 

Next steps

As Brad Smith concluded in Governing AI: A Blueprint for the Future, “We’re on a collective journey to forge a responsible future for artificial intelligence. We can all learn from each other. And no matter how good we may think something is today, we will all need to keep getting better.” 

Download our e-book, “The AI Strategy Roadmap: Navigating the Stages of AI Value Creation,” in which we share the emerging best practices that global leaders are using to accelerate time to value with AI. It is based on a research study including more than 1,300 business and technology decision makers across multiple regions and industries.

The post Building a foundation for AI success: Governance appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/03/28/building-a-foundation-for-ai-success-governance/feed/ 0
Protecting the data of our commercial and public sector customers in the AI era https://blogs.microsoft.com/on-the-issues/2024/03/28/data-protection-responsible-ai-azure-copilot/ https://blogs.microsoft.com/on-the-issues/2024/03/28/data-protection-responsible-ai-azure-copilot/#respond Thu, 28 Mar 2024 14:00:00 +0000 Our approach to Responsible AI is built on a foundation of privacy, and we remain dedicated to upholding core values of privacy, security, and safety in all our generative AI products and solutions.  

The post Protecting the data of our commercial and public sector customers in the AI era appeared first on The Microsoft Cloud Blog.

]]>
Organizations across industries are leveraging Microsoft Azure OpenAI Service and Copilot services and capabilities to drive growth, increase productivity, and create value-added experiences. From advancing medical breakthroughs to streamlining manufacturing operations, our customers trust that their data is protected by robust privacy protections and data governance practices. As our customers continue to expand their use of our AI solutions, they can be confident that their valuable data is safeguarded by industry-leading data governance and privacy practices in the most trusted cloud on the market today. 

At Microsoft, we have a long-standing practice of protecting our customers’ information. Our approach to Responsible AI is built on a foundation of privacy, and we remain dedicated to upholding core values of privacy, security, and safety in all our generative AI products and solutions.  

Microsoft's privacy commitments

Microsoft’s existing privacy commitments extend to our AI commercial products 

 Commercial and public sector customers can rest assured that the privacy commitments they have long relied on for our enterprise cloud products also apply to our enterprise generative AI solutions, including Azure OpenAI Service and our Copilots.  

  • We will keep your organization’s data private. Your data remains private when using Azure OpenAI Service and Copilots and is governed by our applicable privacy and contractual commitments, including the commitments we make in the Microsoft’s Data Protection AddendumMicrosoft’s Product Terms, and the Microsoft Privacy Statement.  
  • You are in control of your organization’s data. Your data is not used in undisclosed ways or without your permission. You may choose to customize your use of Azure OpenAI Service by opting to use your data to fine tune models for your organization’s own use. If you do use your organization’s data to fine tune, any fine-tuned AI solutions created with your data will be available only to you. 
  • Your access control and enterprise policies are maintained. To protect privacy within your organization when using enterprise products with generative AI capabilities, your existing permissions and access controls will continue to apply to ensure that your organization’s data is displayed only to those users to whom you have given appropriate permissions.   
  • Your organization’s data is not shared. Microsoft does not share your data with third parties without your permission. Your data, including the data generated through your organization’s use of Azure OpenAI Service or Copilots – such as prompts and responses – are kept private and are not disclosed to third parties. 
  • Your organization’s data privacy and security are protected by design. Security and privacy are incorporated through all phases of design and implementation of Azure OpenAI Service and Copilots. As with all our products, we provide a strong privacy and security baseline and make available additional protections that you can choose to enable. As external threats evolve, we will continue to advance our solutions and offerings to ensure world-class privacy and security in Azure OpenAI Service and Copilots, and we will continue to be transparent about our approach. 
  • Your organization’s data is not used to train foundation models. Microsoft’s generative AI solutions, including Azure OpenAI Service and Copilot services and capabilities, do not use your organization’s data to train foundation models without your permission. Your data is not available to OpenAI or used to train OpenAI models.
  • Our products and solutions continue to comply with global data protection regulations. The Microsoft AI products and solutions you deploy continue to be compliant with today’s global data protection and privacy regulations. As we continue to navigate the future of AI together, including the implementation of the EU AI Act and other laws globally, organizations can be certain that Microsoft will be transparent about our privacy, safety, and security practices. We will comply with laws globally that govern AI, and back up our promises with clear contractual commitments.  

You can find additional details about how Microsoft’s privacy commitments apply to Azure OpenAI and Copilots here

We provide programs, transparency documentation, and tools to assist your AI deployment  

To support our customers and empower their use of AI, Microsoft offers a range of solutions, tooling, and resources to assist in their AI deployment, from comprehensive transparency documentation to a suite of tools for data governance, risk, and compliance. Dedicated programs such as our industry-leading AI Assurance program and Customer Copyright Commitment further broaden the support we offer commercial customers in addressing their needs.  

Microsoft’s AI Assurance Program helps customers ensure that the AI applications they deploy on our platforms meet the legal and regulatory requirements for responsible AI. The program includes support for regulatory engagement and advocacy, risk framework implementation and the creation of a customer council. 

For decades we’ve defended our customers against intellectual property claims relating to our products. Building on our previous AI customer commitments, Microsoft announced our Customer Copyright Commitment, which extends our intellectual property indemnity support to both our commercial Copilot services and our Azure OpenAI Service. Now, if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or Azure OpenAI Service, or for the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer has used the guardrails and content filters we have built into our products. 

Our comprehensive transparency documentation about Azure OpenAI Service and Copilot and the customer tools we provide help organizations understand how our AI products work and provide choices our customers can use to influence system performance and behavior.  

Copilot and your privacy

Azure’s enterprise-grade protections provide a strong foundation upon which customers can build their data privacy, security, and compliance systems to confidently scale AI while managing risk and ensuring compliance. With a range of solutions in the Microsoft Purview family of products, organizations can further discover, protect, and govern their data when using Copilot for Microsoft 365 within their organizations.  

With Microsoft Purview, customers can discover risks associated with data and users, such as which prompts include sensitive data. They can protect that sensitive data with sensitivity labels and classifications, which means Copilot will only summarize content for users when they have the right permissions to the content. And when sensitive data is included in a Copilot prompt, the Copilot generated output automatically inherits the label from the reference file. Similarly, if a user asks Copilot to create new content based on a labeled document, the Copilot generated output automatically inherits the sensitivity label along with all its protection, like data loss prevention policies.  

Copilot conversation inherits sensitivity label

 Copilot conversation inherits sensitivity label 

Finally, our customers can govern their Copilot usage to comply with regulatory and code of conduct policies through audit logging, eDiscovery, data lifecycle management, and machine-learning based detection of policy violations.  

As we continue to innovate and provide new kinds of AI solutions, Microsoft will continue to offer industry-leading tools, transparency resources, and support for our customers in their AI journey, and remain steadfast in protecting our customers’ data. 

The post Protecting the data of our commercial and public sector customers in the AI era appeared first on The Microsoft Cloud Blog.

]]>
https://blogs.microsoft.com/on-the-issues/2024/03/28/data-protection-responsible-ai-azure-copilot/feed/ 0
Embracing AI Transformation: How customers and partners are driving pragmatic innovation to achieve business outcomes with the Microsoft Cloud https://blogs.microsoft.com/blog/2024/01/29/embracing-ai-transformation-how-customers-and-partners-are-driving-pragmatic-innovation-to-achieve-business-outcomes-with-the-microsoft-cloud/ https://blogs.microsoft.com/blog/2024/01/29/embracing-ai-transformation-how-customers-and-partners-are-driving-pragmatic-innovation-to-achieve-business-outcomes-with-the-microsoft-cloud/#respond Mon, 29 Jan 2024 15:55:00 +0000 This past year was one of technology’s most exciting with the emergence of generative AI, as leaders everywhere considered the possibilities it represented for their organizations.

The post Embracing AI Transformation: How customers and partners are driving pragmatic innovation to achieve business outcomes with the Microsoft Cloud appeared first on The Microsoft Cloud Blog.

]]>
This past year was one of technology’s most exciting with the emergence of generative AI, as leaders everywhere considered the possibilities it represented for their organizations. Many recognized its value and are eager to continue innovating, while others are inspired by what it has unlocked and are seeking ways to adopt it. At Microsoft, we are focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI Transformation for our customers. As I talk to customers and partners about the outcomes they are seeing — and rationalize those against Microsoft’s generative AI capabilities — we have identified four areas of opportunity for organizations to empower their AI Transformation: enriching employee experiences, reinventing customer engagement, reshaping business processes and bending the curve on innovation. With these as a foundation, it becomes easier to see how to bring pragmatic AI innovation to life, and I am proud of the impact we have made with customers and partners around the world. From developing customer-focused AI and cloud services for millions across Europe and Africa with Vodafone, to empowering customers and employees with generative AI capabilities with Walmart, I look forward to what we will help you achieve in the year ahead.

Coworkers reviewing photographs
Dentsu drives creativity and growth for brands, supported by Microsoft Copilot.
Enriching employee experiences and shaping the future of work with copilot technology

Bayer employees are collaborating better on worldwide research projects and saving time on daily tasks with Copilot for Microsoft 365, while Finnish company Elisa is helping knowledge workers across finance, sales and customer service streamline routine tasks. Banreservas is driving employee productivity and enhancing decision-making, and Hong Kong’s largest transportation companies — Cathay and MTR — are streamlining workflows, improving communications, and reducing time-consuming administrative tasks. Across professional services, KPMG has seen a 50% jump in employee productivity, Dentsu is saving hundreds of employees up to 30 minutes per day on creative visualization processes, and EY is making it easier to generate reports and access insights in near real-time with Copilot for Microsoft 365. In Malaysia, financial services organization PNB is saving employees time searching through documents and emails and AmBank employees are enhancing the quality and impact of their work. At Hargreaves Lansdown, financial advisers are using Copilot for Microsoft 365 and Teams to drive productivity and make meetings more inclusive. Avanade is helping sellers save time updating contact records and summarizing email threads with Copilot for Dynamics 365, while HSO Group, Vixxo, and 9altitudes are streamlining work for field and service teams.

Employee and customer in store
Organizations are creating their own Generative AI assistants to help employees improve customer service.
Reinventing customer engagement with generative AI to deliver greater value and increased satisfaction

MECOMS is making it possible for utility customers to ask questions and get suggestions about how to reduce power consumption using Microsoft Fabric and copilot on their Power Pages portal. Schneider Electric has built a Resource Advisor copilot to equip customers with enhanced data analysis, visualization, decision support and performance optimization. California State University San Marcos is finding ways to better understand and personalize the student journey while driving engagement with parents and alumni using Dynamics 365 Customer Insights and Copilot for Dynamics 365. With Azure OpenAI Service, Adecco Group is bolstering its services and solutions to enable worker preparedness as generative AI reshapes the workforce, UiPath has already helped one of its insurance customers save over 90,000 hours through more efficient operations, and Providence has developed a solution for clinicians to respond to patient messages up to 35% faster. Organizations are building generative AI assistants to help employees save time, improve customer service and focus on more complex work, including Domino’s, LAQO and OCBC. Within a few weeks of introducing its copilot to personalize customer service, Atento has increased customer satisfaction by 30% while reducing operational errors by nearly 20%, and Turkey-based Setur is personalizing travel planning with a chatbot to customize responses in multiple languages for its 60,000 daily users. In the fashion industry, Coats Digital launched an AI assistant in six weeks to make customer onboarding easier. Greece-based ERGO Insurance partnered with EBO to provide 24/7 personalized assistance with its virtual agent, and H&R Block introduced AI Tax Assist to help individuals and small business owners file and manage their taxes confidently while saving costs.

Man and woman working in lab
Novo Nordisk is building out GitHub Copilot integration to decrease repetitive research and engineering tasks.
Reshaping business processes to uncover efficiencies, improve developer creativity and spur AI innovation

Siemens built its own industrial copilot to simplify virtual collaboration of design engineers and front-line workers, accelerate simulation times and reduce tasks from weeks to minutes. With help from Neudesic, Hanover Research designed a custom AI-powered research tool to streamline workflows and identify insights up to 10 times faster. With Microsoft Fabric, organizations like the London Stock Exchange Group and Milliman are reshaping how teams create more value from data insights, while Zeiss is streamlining analytics workflows to help teams make more customer-centric decisions. Volvo Group has saved more than 10,000 manual hours by launching a custom solution built with Azure AI to simplify document processing. By integrating GitHub Copilot, Carlsberg has significantly enhanced productivity across its development team; and Hover, SPH Media, Doctolib and CloudZero have improved their workflows within an agile and secure environment. Mastery Logistics Systems and Novo Nordisk are using GitHub Copilot to automate repetitive coding tasks for developers, while Intertech is pairing it with Azure OpenAI Service to enhance coding accuracy and reduce daily emails by 50%. Swiss AI-driven company Unique AG is helping financial industry clients reduce administrative work, speed up existing processes and improve IT support; and PwC is simplifying its audit process and increasing transparency for clients with Azure OpenAI Service. By leveraging Power Platform, including AI and Copilot features, Epiq has automated employee processes, saving over $500,000 in annual costs and 2,000 hours of work each month, PG&E is addressing up to 40% of help desk demands to save more than $1 million annually, and Nsure is building automations that reduce manual processing times by over 60% and costs by 50%. With security top of mind, WTW is using Microsoft Copilot for Security to accelerate its threat-hunting capabilities by making it possible for cyber teams to ask questions in natural language, while LTIMindtree is planning on using it to reduce training time and strengthen security analyst expertise.

Man working at multiple screens
VinBrain is harnessing Microsoft’s cutting-edge AI technologies to transform healthcare in Vietnam.
Bending the curve on innovation across industries with differentiated AI offerings

To make disaster response more efficient, nonprofit Team Rubicon is quickly identifying and engaging the right volunteers in the right locations with the help of Copilot for Dynamics 365. Netherlands-based TomTom is bringing the benefits of generative AI to the global automotive industry by developing an advanced AI-powered voice assistant to help drivers with tasks like navigation and temperature control. In Vietnam, VinBrain has developed one of the country’s first comprehensive AI-powered copilots to support medical professionals with enhanced screening and detection processes and encourage more meaningful doctor-patient interactions. Rockwell Automation is delivering industry-first capabilities with Azure OpenAI Service to accelerate time-to-market for customers building industrial automation systems. With a vision to democratize AI and reach millions of users, Perplexity.AI has brought its conversational answer engine to market in six months using Azure AI Studio. India’s biggest online fashion retailer, Myntra, is solving the open-ended search problem facing the industry by using generative AI to help shoppers figure out what they should wear based on occasion. In Japan, Aisin Corp has developed a generative AI app to empower people who are deaf or hard of hearing with tasks like navigation, communication and translation; and Canada-based startup Natural Reader is making education more accessible on-the-go for students with learning differences by improving AI voice quality with Azure AI. To solve one of the most complex engineering challenges — the design process for semiconductors — Synopsys is bringing in the power of generative AI to help engineering teams accelerate time-to-market.

As organizations continue to embrace AI Transformation, it is critical they develop clarity on how best to apply AI to meet their most pressing business needs. Microsoft is committed to helping our customers and partners accelerate pragmatic AI innovation and I am excited by the opportunities before us to enrich employee experiences, reinvent customer engagement, reshape business processes and bend the curve on innovation. As a technology partner of choice — from our differentiated copilot capabilities to our unparalleled partner ecosystem and unique co-innovation efforts with customers — we remain in service to your successful outcomes. We are also dedicated to preserving the trust we have built through our partnership approach, responsible AI solutions and commitments to protecting your data, privacy and IP. We believe this era of AI innovation allows us to live truer to our mission than ever before, and I look forward to continuing on this journey with you to help you achieve more.

The post Embracing AI Transformation: How customers and partners are driving pragmatic innovation to achieve business outcomes with the Microsoft Cloud appeared first on The Microsoft Cloud Blog.

]]>
https://blogs.microsoft.com/blog/2024/01/29/embracing-ai-transformation-how-customers-and-partners-are-driving-pragmatic-innovation-to-achieve-business-outcomes-with-the-microsoft-cloud/feed/ 0
Microsoft unveils expansion of AI for security and security for AI at Microsoft Ignite http://approjects.co.za/?big=en-us/security/blog/2023/11/15/microsoft-unveils-expansion-of-ai-for-security-and-security-for-ai-at-microsoft-ignite/ http://approjects.co.za/?big=en-us/security/blog/2023/11/15/microsoft-unveils-expansion-of-ai-for-security-and-security-for-ai-at-microsoft-ignite/#respond Wed, 15 Nov 2023 16:08:46 +0000 The increasing speed, scale, and sophistication of recent cyberattacks demand a new approach to security.

The post Microsoft unveils expansion of AI for security and security for AI at Microsoft Ignite appeared first on The Microsoft Cloud Blog.

]]>
The future of security with AI

The increasing speed, scale, and sophistication of recent cyberattacks demand a new approach to security. Traditional tools are no longer enough to keep pace with the threats posed by cybercriminals. In just two years, the number of password attacks detected by Microsoft has risen from 579 per second to more than 4,000 per second.1 On average, organizations use 80 security tools to manage their environment, resulting in security teams facing data deluge, alert fatigue, and limited visibility across security solutions. Plus, the global cost of cybercrime is expected to reach $10.5 trillion by 2025, up from $3 trillion in 2015. Security teams face an asymmetric challenge: they must protect everything, while cyberattackers only need to find one weak point. And security teams must do this while facing regulatory complexity, a global talent shortage, and rampant fragmentation.

One of the advantages for security teams is their view of the data field—they know how the infrastructure, user posture, and applications, are set up before a cyberattack begins. To further tip the scale in favor of cyberdefenders, Microsoft Security offers a very large-scale data advantage—65 trillion daily signals, expertise of global threat intelligence, monitoring more than 300 cyberthreat groups, and insights on cyberattacker behaviors from more than 1 million customers and more than 15,000 partners.1

Our new generative AI solution—Microsoft Security Copilot—combined with our massive data advantage and end-to-end security, all built on the principles of Zero Trust, creates a flywheel of protection to change the asymmetry of the digital threat landscape and favor security teams in this new era of security.

To learn more about Microsoft Security’s vision for the future and the latest generative AI announcements and demos, watch the Microsoft Ignite keynote “The Future of Security with AI” presented by Charlie Bell, Executive Vice President, Microsoft Security, and I on Thursday, November 16, 2023, at 10:15 AM PT.  

Changing the paradigm with Microsoft Security Copilot

One of the biggest challenges in security is the lack of cybersecurity professionals. This is an urgent need given the three million unfilled positions in the field, with cyberthreats increasing in frequency and severity.2 

Graphic explaining how preview participants in Microsoft Security Copilot demonstrated 44% more accurate responses across tasks.

In a recent study to measure the productivity impact for “new in career” analysts, participants using Security Copilot demonstrated 44 percent more accurate responses and were 26 percent faster across all tasks.3 

According to the same study:

  • 86 percent reported that Security Copilot helped them improve the quality of their work. 
  • 83 percent stated that Security Copilot reduced the effort needed to complete the task. 
  • 86 percent said that Security Copilot made them more productive. 
  • 90 percent expressed their desire to use Security Copilot next time they do the same task. 

Check out the Security Copilot Early Access Program—with Microsoft Defender Threat Intelligence included at no additional charge—that adds speed and scale for scenarios like security posture management, incident investigation and response, security reporting, and more—now available to interested and qualified customers. For example, one early adopter from Willis Towers Watson (WTW) said “I envision Microsoft Security Copilot as a change accelerator. The ability to do threat hunting at pace will mean that I’m able to reduce my mean time to investigate, and the faster I can do that, the better my security posture will become.”  Keep reading for a full list of capabilities.

Graphic showing the ways in which operational complexity is increasing for security teams.

Introducing the industry’s first generative AI-powered unified security operations platform with built-in Copilot

Security operations teams struggle to manage disparate security toolsets from siloed technologies and apps. This challenge is only exacerbated given the scarcity of skilled security talent. And while organizations have been investing in traditional AI and machine learning to improve threat intelligence, deploying AI and machine learning comes with its unique challenges and its own shortage of data science talent. It’s time for a step-change in our industry, and thanks to generative AI, we can now close the talent gap for both security and data professionals. Securing an organization today requires an innovative approach that prevents, detects, and disrupts cyberattacks at machine speed, while delivering simplicity and and approachable, conversational experiences to help security operations center (SOC) teams move faster, and bringing together all the security signals and threat intelligence currently stuck in disconnected tools. Today, we are thrilled to announce the next major step in this industry-defining vision: combining the power of leading solutions in security information and event management (SIEM), extended detection and response (XDR), and generative AI for security into the first unified security operations platform.

By bringing together Microsoft Sentinel, Microsoft Defender XDR (previously Microsoft 365 Defender), and Microsoft Security Copilot, security analysts now have a unified incident experience that streamlines triage and provides a complete, end-to-end view of threats across the digital estate. With a single set of automation rules and playbooks enriched with generative AI, coordinating response is now easier and quicker for analysts of every level. In addition, unified hunting now gives analysts the ability to query all SIEM and XDR data in one place to uncover cyberthreats and take appropriate remediation action. Customers interested in joining the preview of the unified security operations platform should contact their account team.

Screenshot of the Microsoft Defender dashboard.

Further, Microsoft Security Copilot is natively embedded into the analyst experience supporting both SIEM and XDR and equipping analysts with step-by-step guidance and automation for investigating and resolving incidents, without the reliance of data analysts. Complex tasks, such as analyzing malicious scripts or crafting Kusto Query Language (KQL) queries to hunt across data in Microsoft Sentinel and Defender XDR, can be accomplished simply by asking a question in natural language or accepting a suggestion from Security Copilot. If you need to update your chief information security officer (CISO) on an incident, you can now instantly generate a polished report that summarizes the investigation and the remediation actions that were taken to resolve it.

To keep up with the speed of cyberattackers, the unified security operations platform catches cyberthreats at machine speed and protects your organization by automatically disrupting advanced attacks. We are extending this capability to act on third-party signals, for example with SAP signals and alerts. For SIEM customers who have SAP connected, attack disruption will automatically detect financial fraud techniques and disable the native SAP and connected Microsoft Entra account to prevent the cyberattacker from transferring any funds—with no SOC intervention. The attack disruption capabilities will be further strengthened by new deception capabilities in Microsoft Defender for Endpoint—which can now automatically generate authentic-looking decoys and lures, so you can entice cyberattackers with fake, valuable assets that will deliver high-confidence, early stage signal to the SOC and trigger automatic attack disruption even faster.

Lastly, we are building on the native XDR experience by including cloud workload signals and alerts from Microsoft Defender for Cloud—a leading cloud-native application protection platform (CNAPP)—so analysts can conduct investigations that span across their multicloud infrastructure (Microsoft Azure, Amazon Web Services, and Google Cloud Platform environments) and identities, email and collaboration tools, software as a service (SaaS) apps, and multiplatform endpoints—making Microsoft Defender XDR one of the most comprehensive native XDR platforms in the industry.

Customers who operate both SIEM and XDR can add Microsoft Sentinel into their Microsoft Defender portal experience easily, with no migration required. Existing Microsoft Sentinel customers can continue using the Azure portal. The unified security operations platform is now available in private preview and will move to public preview in 2024.

Expanding Copilot for data security, identity, device management, and more 

Security is a shared responsibility across teams, yet many don’t share the same tools or data—and they often don’t collaborate with one another. We are adding new capabilities and embedded experiences of Security Copilot across the Microsoft Security portfolio as part of the Early Access Program to empower all security and IT roles to detect and address cyberthreats at machine speed. And to enable all roles to protect against top security risks and drive operational efficiency, Microsoft Security Copilot now brings together signals across Microsoft Defender, Microsoft Defender for Cloud, Microsoft Sentinel, Microsoft Intune, Microsoft Entra, and Microsoft Purview into a single pane of glass.

New capabilities in Security Copilot creating a forced multiplier for security and IT teams

SECURE AND GOVERN YOUR DATA IN THE ERA OF AIView session 

Microsoft Purview: Data security and compliance teams review a multitude of complex and diverse alerts spread across multiple security tools, each alert containing a wealth of rich insights. To make data protection faster, more effective, and easier, Security Copilot is now embedded in Microsoft Purview, offering summarization capabilities directly within Microsoft Purview Data Loss PreventionMicrosoft Purview Insider Risk ManagementMicrosoft Purview eDiscovery, and Microsoft Purview Communication Compliance workflows, making sense of profuse and diverse data, accelerating investigation and response times, and enabling analysts at all levels to complete complex tasks with AI-powered intelligence at their fingertips. Additionally, with AI translator capabilities in eDiscovery, you can use natural language to define search queries, resulting in faster and more accurate search iterations and eliminating the need to use keyword query language. These new data security capabilities are also available now in the Microsoft Security Copilot standalone experience.

SECURE ACCESS IN THE AI ERA: WHAT’S NEW IN MICROSOFT ENTRAView session 

Microsoft Entra: Password-based attacks have increased dramatically in the last year, and new attack techniques are now trying to circumvent multifactor authentication. To strengthen your defenses against identity compromise, Security Copilot embedded in Microsoft Entra can assist in investigating identity risks and help with troubleshooting daily identity tasks, such as why a sign-in required multifactor authentication or why a user’s risk level increased. IT administrators can instantly get a risk summary, steps to remediate, and recommended guidance for each identity at risk, in natural language. Quickly get to the root of an issue for a sign-in with a summarized report of the most relevant information and context. Additionally, in Microsoft Entra ID Governance, admins can use Security Copilot to guide in the creation of a lifecycle workflow to streamline the process of creating and issuing user credentials and access rights. These new capabilities to summarize users and groups, sign-in logs, and high-risk users are also available now in the Microsoft Security Copilot standalone experience.

FORTIFIED SECURITY AND SIMPLICITY COME TOGETHER WITH MICROSOFT INTUNEView session 

Microsoft Intune: The evolving device landscape is driving IT complexity and risk of endpoint vulnerabilities—and IT administrators play a critical security role in managing these devices and protecting organizational data. We are introducing Security Copilot embedded in Microsoft Intune in the coming weeks for select customers of the Early Access Program, marking a meaningful advancement in endpoint management and security. This experience offers unprecedented visibility across security data with full device context, provides real-time guidance when creating policies, and empowers security and IT teams to discover and remediate the root cause of device issues faster and easier. Now IT administrators and security analysts are empowered to drive better and informed outcomes with pre-deployment, AI-based guard rails to help them understand the impact of policy changes in their environment before applying them. With Copilot, they can save time and reduce complexity of gathering near real-time device, user, and app data and receive AI-driven recommendations to respond to threats, incidents, and vulnerabilities, fortifying endpoint security. 

BOOST MULTICLOUD SECURITY WITH A COMPREHENSIVE CODE TO CLOUD STRATEGYView session 

Microsoft Defender for Cloud: Maintaining a strong cloud security posture is a challenge for cybersecurity teams, as they face siloed visibility into risks and vulnerabilities across the application lifecycle, due to the rise of cloud-native development and multicloud environments. With Security Copilot now embedded in Microsoft Defender for Cloud, security admins are empowered to identify critical concerns to resources faster with guided risk exploration that summarizes risks, enriched with contextual insights such as critical vulnerabilities, sensitive data, and lateral movement. To address the uncovered critical risks more efficiently, admins can use Security Copilot in Microsoft Defender for Cloud to guide remediation efforts and streamline the implementation of recommendations by generating recommendation summaries, step-by-step remediation actions, and scripts in a preferred language, and directly delegate remediation actions to key resource users. These new cloud security capabilities are also available now in the Microsoft Security Copilot standalone experience. 

Microsoft Defender for External Attack Surface Management (EASM): Keeping up with tracking assets and their vulnerabilities can be overwhelming for security teams, as it requires time, coordination, and research to understand which assets pose a risk to the organization. New Defender for EASM capabilities are available in the Security Copilot standalone experience and enable security teams to quickly gain insights into their external attack surface, regardless of where the assets are hosted, and feel confident in the outcomes. These capabilities provide security operations teams with a snapshot view of their external attack surface, help vulnerability managers understand if their external attack surface is impacted by a particular common vulnerability and exposure (CVE), and provide visibility into vulnerable critical and high priority CVEs to help teams know how pervasive they are to their assets, so they can prioritize remediation efforts.

Custom plugins to trusted third-party tools: Security Copilot provides more robust, enriched insight and guidance when it is integrated with a broader set of security and IT teams’ tools. To do so, Security Copilot must embrace a vast ecosystem of security partners. As part of this effort, we are excited to announce the latest integration now available to Security Copilot customers with ServiceNow. For customers who want to bring onboard their trusted security tools and integrate their own organizational data and applications, we’re also introducing a new set of custom plugins that will enable them to expand the reach of Security Copilot to new data and new capabilities.

Securing the use of generative AI for safeguarding your organization

As organizations quickly adopt generative AI, it is vital to have robust security measures in place to ensure safe and responsible use. This involves understanding how generative AI is being used, protecting the data that is being used or created by generative AI, and governing the use of AI. As generative AI apps become more popular, security teams need tools that secure both the AI applications and the data they interact with. In fact, 43 percent of organizations said lack of controls to detect and mitigate risk in AI is a top concern.4 Different AI applications pose various levels of risk, and organizations need the ability to monitor and control these generative AI apps with varying levels of protection.

ADVANCED CLOUD-NATIVE SECURITY WITH MICROSOFT DEFENDER FOR CLOUDView session 

Microsoft Defender: Microsoft Defender for Cloud Apps is expanding its discovery capabilities to help organizations gain visibility into the generative AI apps in use, provide extensive protection and control to block risky generative AI apps, and apply ready-to-use customizable policies to prevent data loss in AI prompts and AI responses. This new feature supports more than 400 generative AI apps, and offers an easy way to sift through low- versus high-risk apps. 

HOW MICROSOFT PURVIEW HELPS YOU PROTECT YOUR DATAView session 

Microsoft Purview: New capabilities in Microsoft Purview help comprehensively secure and govern data in AI, including Microsoft Copilot and non-Microsoft generative AI applications. Customers can gain visibility into AI activity, including sensitive data usage in AI prompts, comprehensive protection with ready-to-use policies to protect data in AI prompts and responses, and compliance controls to help easily meet business and regulatory requirements. Microsoft Purview capabilities are integrated with Microsoft Copilot, starting with Copilot for Microsoft 365, strengthening the data security and compliance for Copilot for Microsoft 365.

Further, to enable customers to gain a better understanding of which AI applications are being used and how, we are announcing the preview of AI hub in Microsoft Purview. Microsoft Purview can provide organizations with an aggregated view of total prompts being sent to Copilot and the sensitive information included in those prompts. Organizations can also see an aggregated view of the number of users interacting with Copilot. And we are extending these capabilities to provide insights for more than 100 of the most commonly used consumer generative AI applications, such as ChatGPT, Bard, DALL-E, and more.

New AI hub in Microsoft Purview portal.

Expanding end-to-end security for comprehensive protection everywhere

Keeping up with daily protection requirements is a security challenge that can’t be ignored—and the struggle to stay ahead of cyberattackers and safeguard your organization’s data is why we’ve designed our security features to evolve with the digital threat landscape and provide comprehensive protection against cyberthreats.

Strengthen your code-to-cloud defenses with Microsoft Defender for Cloud. To cope with the complexity of multicloud environments and cloud-native applications, security teams need a comprehensive strategy that enables code-to-cloud defenses on all cloud deployments. For posture management, the preview of Defender for Cloud’s integration with Microsoft Entra Permissions Management helps you apply the least privilege principle for cloud resources and shows the link between access permissions and potential vulnerabilities across Azure, AWS, and Google Cloud. Defender for Cloud also has an improved attack path analysis experience, which helps you predict and prevent complex cloud attacks—and provides more insights into your Kubernetes deployments across Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) clusters and APIs insights to prioritize cloud risk remediation.

To strengthen security throughout the application lifecycle, preview of the GitLab Ultimate integration gives you a clear view of your application security posture and simplifies code-to-cloud remediation workflows across all major developer platforms—GitHub, Azure DevOps, and GitLab within Defender for Cloud. Additionally, general availability of Defender for APIs, which offers machine learning-driven protection against API threats and agentless vulnerability assessments for container images in Microsoft Azure Container Registries. Defender for Cloud now offers a unified vulnerability assessment engine spanning all cloud workloads, powered by the strong capabilities of Microsoft Defender Vulnerability Management.

MDTI: NOW ANYONE CAN TAP INTO GAME-CHANGING THREAT INTELLIGENCEView session 

Leverage Microsoft Defender Threat Intelligence for elevating your threat intelligence. Available in Microsoft Defender XDR, Microsoft Defender Threat Intelligence offers valuable open-source intelligence and internet data sets found nowhere else. These capabilities now enhance Microsoft Defender products with crucial context around threat actors, tooling, and infrastructure at no additional cost to customers. Available in the Threat Intelligence blade of Defender XDR, Detonation Intelligence enables users to search, look up, and contextualize cyberthreats as well as detonate URLs and view results to quickly understand a malicious file or URL. Defender XDR customers can quickly submit an indicator of compromise (IoC) to immediately view the results. Vulnerability Profiles put intelligence collected from the Microsoft Threat Intelligence team about vulnerabilities all in one place. Profiles are updated when new information is discovered and contains a description, Common Vulnerability Scoring System scores (CVSS), a priority score, exploits, and deep and dark web chatter observations.

Use Microsoft Purview to extend data protection capabilities across structured and unstructured data types. In the past, securing and governing sensitive data across these diverse elements of your digital estate would have required multiple providers, adding a heavy integration tax. But today, with Microsoft Purview, you can gain visibility across your entire data estate, secure your structured and unstructured data, and detect risks across clouds. Microsoft Purview’s labeling and classification capabilities are expanding beyond Microsoft 365, offering access controls for both structured and unstructured data types. Users will have the ability to discover, classify, and safeguard sensitive information hosted in structured databases such as Microsoft Azure SQL and Azure Data Lake Storage (ADLS)—also extending these capabilities into Amazon Simple Storage Service (S3) buckets.

Detect insider risk with Microsoft Purview Insider Risk Management, which offers ready-to-use risk indicators to detect critical insider risks in Azure, AWS, and SaaS applications, including Box, Dropbox, Google Drive, and GitHub. Admins with appropriate permissions will no longer need to manually cross-reference signals in these environments. They can now utilize the curated and preprocessed indicators to obtain a more holistic view of a potential insider incident.

Simplify access security with Microsoft Entra. Securing access points is critical and can be complex when using multiple providers for identity management, network security, and cloud security. With Microsoft Entra, you can centralize all your access controls together to more fully secure and protect your environment. Microsoft’s Security Service Edge solution is expanding with several new features.

  • By the end of 2023, Microsoft Entra Internet Access preview will include context-aware secure web gateway (SWG) capabilities for all internet apps and resources with web content filtering, Conditional Access controls, compliant network check, and source IP restoration.
  • Microsoft Entra Private Access for private apps and resources has extended protocol support so you can seamlessly transition from your traditional VPN to a modern Zero Trust Network Access (ZTNA) solution, and the ability to add multifactor authentication to all private apps for remote and on-premises users.
  • Now with auto-enrollment into Microsoft Entra Conditional Access policies you can enhance security posture and reduce complexity for securing access. Easily create and manage a passkey, a free phishing-resistant credential based on open standards, in the Microsoft Authenticator app for signing into Microsoft Entra ID-managed apps.
  • Promote enforcement of least-privilege access for cloud resources with new integrations for Microsoft Entra Permissions Management. Permissions Management has a new integration with ServiceNow that enables organizations to incorporate time-bound access permission requests to existing approval workflows in ServiceNow.

Unify, simplify, and delight users by the Microsoft Intune Suite. We’re adding three new solutions to the Intune Suite, available in February 2024. These solutions further unify critical endpoint management workloads in Intune to fortify device security posture, power better experiences, and simplify IT and security operations end-to-end. We will also be able to offer these solutions coupled with the existing Intune Suite capabilities to agencies and organizations of the Government Community Cloud (GCC) in March 2024.

  • Microsoft Cloud PKI offers a comprehensive, cloud-based public key infrastructure and certificate management solution to simply create, deploy, and manage certificates for authentication, Wi-Fi, and VPN endpoint scenarios.
  • Microsoft Intune Enterprise Application Management streamlines third-party app discovery, packaging, deployment, and updates via a secure enterprise catalog to help all workers stay current.
  • Microsoft Intune Advanced Analytics extends the Intune Suite anomaly detection capabilities and provides deep device data insights as well as battery health scoring for administrators to proactively power better, more secure user experiences and productivity improvements.

Partner opportunities and news

There are several partners participating in our engineer-led Security Copilot Partner Private Preview to validate usage scenarios and provide feedback on functionality, operations, and APIs to assist with extensibility. If you are joining us in person at Microsoft Ignite, watch the demos at the Customer Meet-up Hub, presented by Microsoft Intelligent Security Association (MISA) members sponsoring at Microsoft Ignite. And if you’re a partner interested in staying current, join the Security Copilot Partner Interest Community.

MISA featured member presenting at Microsoft Expert Meetup Hub.

Join us in creating a more secure future

Embracing innovation has never been more important for an organization, not only with respect to today’s cyberthreats but also in anticipation of those to come. Recently, to create a more secure future, we launched the Secure Future Initiative—a new initiative to pursue our next generation of cybersecurity protection.

Microsoft Ignite 2023

Join Vasu Jakkal and Charlie Bell at Microsoft Ignite to watch “the Future of Security and AI” on November 16, 2023, at 10:15 AM PT.

Watch the keynote 

AI is changing our world forever. It is empowering us to achieve the impossible and it will usher in a new era of security that favors security teams. Microsoft is privileged to be a leader in this effort and committed to a vision of security for all.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (formerly known as Twitter) (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2023.

2Cybersecurity Workforce Study, ISC2. 2022.

3Microsoft Security Copilot randomized controlled trial conducted by Microsoft Office of the Chief Economist, November 2023.

4Data Security Index: Trends, insights, and strategies to secure data, Microsoft.

The post Microsoft unveils expansion of AI for security and security for AI at Microsoft Ignite appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/security/blog/2023/11/15/microsoft-unveils-expansion-of-ai-for-security-and-security-for-ai-at-microsoft-ignite/feed/ 0