AI Archives | Microsoft AI Blogs
http://approjects.co.za/?big=en-us/ai/blog/topic/ai/
Wed, 19 Feb 2025 18:29:25 +0000

Maximizing AI’s potential: Insights from Microsoft leaders on how to get the most from generative AI
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/18/maximizing-ais-potential-insights-from-microsoft-leaders-on-how-to-get-the-most-from-generative-ai/
Tue, 18 Feb 2025 16:00:00 +0000

Get an overview of the 2025 AI Decision Brief, a Microsoft report on how generative AI is impacting businesses and how to maximize AI at your organization.

The post Maximizing AI’s potential: Insights from Microsoft leaders on how to get the most from generative AI appeared first on Microsoft AI Blogs.

Generative AI has been on a phenomenal growth trajectory over the past few years. We’re seeing businesses across industries using AI to increase productivity, streamline processes, and accelerate innovation. As generative AI applications continue to become more powerful, the question isn’t whether organizations will take advantage of AI, but how they can use it most effectively.

At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. In this age of generative AI, we’re committed to sharing what we’ve learned to help further this mission. That’s why we wrote the 2025 AI Decision Brief: Insights from Microsoft and AI leaders on navigating the generative AI platform shift.

This report is packed with perspectives from top Microsoft leaders and insights from AI innovators, along with stories of companies across industries that have transformed their businesses using generative AI. It’s also full of pragmatic tips to help your company with its own AI efforts. 

Here’s a more detailed look at what you’ll find in the report.

The state of generative AI today 

The world has embraced generative AI with unprecedented speed. While it took seven years for the internet to reach 100 million users, ChatGPT reached those numbers in just two months.1 And although generative AI is relatively new to the market, adoption is rapidly expanding. In fact, current and planned usage among enterprises jumped to 75% in 2024 from 55% in 2023, according to an IDC study.2  

Put another way, AI is rapidly evolving into what economists call a general-purpose technology. But getting to the point where everyone on the planet has AI access and takes advantage of that access will require some effort, including: 

  • Committing to responsible, trustworthy AI.
    For all people, organizations, and nations to embrace AI, it must be responsible, ethical, fair, and safe. As Microsoft Vice Chair and President Brad Smith says in this report, “Broad social acceptance for AI will depend on ensuring that AI creates new opportunities for workers, respects enduring values of individuals, and addresses the impact of AI on local resources such as land, energy, and water.” 
  • Overcoming adoption challenges.
    Organizations face several challenges in adopting generative AI, such as skill shortages, security concerns, and regulation and compliance issues. Training employees to use AI and building data privacy, security, and compliance into your AI adoption plan are essential.
  • Understanding the winning formula.
    There’s a striking difference between customers in the AI exploration stage and those who have fully embraced it. The highest-performing organizations gain almost four times as much value from their AI investments as those just getting started. Plus, those high performers are implementing generative AI projects in a fraction of the time.2

Where generative AI is headed

AI capabilities are doubling at a rate four times that of historical progress.2 This exponential growth tells us that the effects of AI-powered automation, scientific discovery, and innovation will also accelerate. We expect generative AI to revolutionize operations, enable new and disruptive business models, and reshape the competitive landscape in many ways, including:

  • The future of work.
    As the use of generative AI in companies continues to grow, employees are starting to collaborate with AI rather than just treating it as a tool. This means learning to work with AI iteratively and conversationally. “Effective collaboration involves setting expectations, reviewing work, and providing feedback—similar to managing an employee,” explains Jared Spataro, Microsoft Chief Marketing Officer, AI at Work. 
  • The organizations leading innovation.
    Startups, software development companies, research organizations, and co-innovation labs where startups and software giants collaborate on solutions will all continue to shape AI innovation.  
  • Sustainable AI.
    Generative AI is helping build a more sustainable future thanks to tools that integrate renewable energy into grids, reduce food waste, and support socially and environmentally beneficial actions.

How to advance generative AI in your organization 

As we help companies move from talking about AI to translating it into lasting results, we’ve gained a unique perspective on the generative AI strategies that drive business impact. You’ll find many of them in this report, including:

  • Best practices for using generative AI at scale.
    Get tips for developing a scalable AI strategy that best suits your organization, implementing your AI adoption plan, and managing your AI efforts over time. 
  • Ways to accelerate your AI readiness.
    Get checklists for creating your organization’s AI business strategy, technology and data strategy, implementation strategy, cultural and mindset shift, and governance plan. 
  • Customer success stories.
    See how businesses across industries—including healthcare, energy, transportation, and finance—are demonstrating what’s possible with AI now, and in the future. Plus, explore which Microsoft and AI tools they’re using to succeed.

Maximize generative AI with insights from Microsoft leaders

We couldn’t be more excited about the promise of generative AI. Whether you’ve already begun using AI at your organization or are just getting started, we’re here to help you ease the journey and maximize your results.

Get The 2025 AI Decision Brief now for Microsoft AI leadership perspectives on: 

  • Empowering the future: AI access for us all—Brad Smith, Vice Chair and President.
  • How AI is revolutionizing IT at Microsoft—Nathalie D’Hers, CVP Microsoft Digital (IT).
  • Learnings on the business value of AI from IDC—Alysa Taylor, Chief Marketing Officer, Commercial Cloud and AI.
  • The future of work is AI-powered—Jared Spataro, Chief Marketing Officer, AI at Work.
  • Microsoft’s commitment to supporting customers on their AI transformation journey—Judson Althoff, Executive Vice President and Chief Commercial Officer.
  • How software development companies are paving the way for AI transformation—Jason Graefe, Corporate Vice President, ISV and Digital Natives.
  • How to stay ahead of emerging challenges and cyberthreats—Vasu Jakkal, Corporate Vice President, Microsoft Security Business.

2025 AI Decision Brief

Empower your organization and learn how AI is reshaping businesses through insights shared by Microsoft leaders


1 Benj Edwards, “ChatGPT sets record for fastest-growing user base in history, report says: Intense demand for AI chatbot breaks records and inspires new $20/mo subscription plan,” Ars Technica, February 1, 2023.

2 IDC InfoBrief, sponsored by Microsoft, 2024 Business Opportunity of AI, IDC# US52699124, November 2024.

Personalization at scale: How cloud and AI are redefining customer engagement
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/10/personalization-at-scale-how-cloud-and-ai-are-redefining-customer-engagement/
Mon, 10 Feb 2025 16:00:00 +0000

For organizations ready to embrace the future, building the right infrastructure is the first step toward achieving personalization at scale.

The post Personalization at scale: How cloud and AI are redefining customer engagement appeared first on Microsoft AI Blogs.

In today’s digital-first world, personalization has become a business imperative. According to a study by McKinsey & Company, 71% of consumers expect companies to deliver personalized interactions, and 76% become frustrated when this doesn’t happen. Businesses that get personalization right, however, see revenue increases of 10% to 15%, with company-specific gains ranging from 5% to 25%—highlighting the clear link between personalization and business growth.1 From curated entertainment recommendations to seamless healthcare solutions, personalization drives loyalty, boosts revenue, and sets industry leaders apart. 

But achieving personalization at scale requires more than AI and data analytics—it demands a powerful, secure, and adaptive infrastructure that enables you to deploy AI. Without a scalable, high-performing cloud foundation, businesses face challenges like latency issues, fragmented data, and high operational costs—all while grappling with the growing importance of data security and compliance. For organizations ready to embrace the future, building the right infrastructure foundation is the first step toward achieving personalization at scale—empowering them to innovate faster, respond in real time, and deliver transformative, trustworthy customer experiences. 


Customer Stories

Learn how organizations are achieving more with Microsoft

Overcoming barriers to personalization at scale 

Achieving personalization at scale comes with its share of challenges. Businesses often contend with fragmented data systems, privacy and compliance concerns, and the complexity of acting on data in real time. While these hurdles can seem daunting, understanding them is the first step toward finding solutions. 

Fragmented data is one of the most common obstacles to personalization. Customer information is often scattered across systems, departments, or even physical locations, making it difficult to gain a unified view. For example, PointClickCare found that siloed healthcare data across providers delayed critical care decisions, highlighting the importance of breaking down these barriers to enable better insights.

It’s common for people to work with multiple healthcare professionals for different treatments and prescriptions. For the best care, everyone needs to access, use, and trust the most current, accurate information.

Andrew Datars, Senior Vice President of Engineering at PointClickCare

Real-time data processing adds another layer of complexity. Personalization requires immediate insights and responses, but many businesses struggle with legacy systems that can’t handle fluctuating demands. MediaKind, for example, encountered difficulties delivering real-time media experiences during peak events—putting customer satisfaction at risk. With competition in the industry increasing and the pace of video-driven customer engagement accelerating, they needed a way to meet both current demand and their innovation goals. “People get really upset when their entertainment is offline. Seconds of downtime costs broadcasters and streamers millions of dollars in advertising and brand revenue,” notes Allen Broome, MediaKind’s Chief Executive Officer. 

Privacy and compliance can create significant challenges for businesses aiming to deliver personalized experiences. Analyzing sensitive customer data requires navigating a maze of strict regulations, such as ensuring data residency, meeting regional compliance requirements, and safeguarding user trust. These challenges are particularly pronounced in industries like legal, where the sensitivity of data and the complexity of workflows add additional layers of difficulty. Harvey, a platform designed for the legal sector, faces these exact hurdles. Security is paramount for Harvey due to the need to comply with varied regional security requirements and ensure that data never crosses regional boundaries. “The reason it’s been so hard to build technology for industries like legal is the workflows are so varied and complex, and no two days are the same,” explains Gabe Pereyra, Co-Founder and President at Harvey. By prioritizing security and compliance from the ground up, Harvey provides a trusted solution tailored to one of the most demanding industries. 

While these challenges are real, they are manageable with the right strategies. Recognizing and addressing these barriers allows businesses to take their first steps toward achieving personalization at scale, turning these obstacles into opportunities for growth. 

Redefining customer engagement with cloud and AI technologies 

Scaling personalization to meet modern customer expectations is a complex challenge, but cloud and AI technologies make it practical. Together, they empower organizations to process vast amounts of data, generate actionable insights in real time, and deliver tailored experiences at scale. 

For many organizations, data is scattered across disconnected systems, creating silos that prevent a unified view of customer behaviors and needs. Overcoming this barrier requires modernizing infrastructure to centralize data, enable seamless integration, and provide real-time access to actionable insights. Cloud platforms like Microsoft Azure make this possible by offering secure and scalable solutions that unify fragmented data sources into a single, comprehensive view. For example, PointClickCare leveraged Azure to consolidate siloed healthcare data from multiple systems into a unified network. The company modernized its infrastructure by deploying a cloud-based solution with key Azure products like Windows Server, Azure SQL Managed Instance, and Azure OpenAI Service to securely integrate data, streamline workflows, and enable real-time access to critical patient information. This transformation provided healthcare providers with actionable insights, improved operational efficiency, and enhanced patient care.  

Personalization hinges on immediacy, and AI-powered cloud platforms enable businesses to process massive streams of data in real time, offering insights and actions when they matter most. Overcoming this challenge requires infrastructure that can handle both the scale and speed of data processing without delays. LALIGA achieves this by leveraging cloud-based AI and machine learning to analyze over 3 million data points per match, all processed in real time. Operating within a hybrid environment, they ensure consistent performance by distributing workloads intelligently across on-premises and cloud systems using Microsoft Azure Arc. This allows LALIGA to deliver engaging digital and in-stadium experiences, from detailed match statistics to personalized player insights, enhancing how fans connect with the game. 

To ensure real-time data provision, cloud infrastructure must be capable of adapting to variable demands. Cloud solutions provide elastic scalability, ensuring organizations can handle varying workloads without compromising performance. With 30 teams, more than 500 players, and each team playing 82 games per season (not including playoffs), the NBA has an enormous amount of player data to collect and analyze. Processing live body-movement data for every on-court player simultaneously, covering things like speed, dunk height, number of passes and dribbles, and even injury risk, creates exactly the kind of demand that calls for elastic scalability. The NBA used a Microsoft Azure solution, based on Azure Kubernetes Service (AKS), that can manage and process up to 16 gigabytes of raw data per game, not including RGB video signals, and sometimes more if the game goes into overtime. The new solution is deployed and operational, and the data being collected is already helping the NBA better understand players’ strengths and weaknesses and improve their performance.  

Lastly, trust and security are fundamental to achieving personalization at scale. In today’s environment, businesses must be able to navigate strict regulatory requirements, safeguard sensitive customer data, and maintain user trust while delivering tailored experiences. Overcoming these challenges requires implementing robust security measures, such as end-to-end encryption, role-based access controls, and compliance monitoring, all of which can be enabled and streamlined through cloud platforms. Azure provides a unified environment where businesses can securely integrate data, enforce regulatory compliance across regions, and monitor potential risks in real time, ensuring sensitive information is protected at every stage. Harvey, for example, leveraged advanced encryption, access management, and compliance tools to meet the stringent security requirements of its clients. This solution enables law firms to confidently protect sensitive client data while delivering innovative, AI-powered legal services. As Harvey’s Chief Executive Officer explained, “Law firms trust Azure because it allows them to deliver cutting-edge, AI-driven legal services without compromising on security or compliance.” This commitment to security enables Harvey to focus on innovation while maintaining trust with its clients. 
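At its core, the role-based access control mentioned above reduces to a small permission lookup. The sketch below is purely illustrative: the roles and permission strings are invented for this example and are not taken from any Azure API.

```python
# Hypothetical role-to-permission mapping for a personalization pipeline.
# Roles and permission strings are invented for illustration.
ROLE_PERMISSIONS = {
    "analyst":  {"read:aggregates"},
    "engineer": {"read:aggregates", "read:events", "write:models"},
    "admin":    {"read:aggregates", "read:events", "write:models", "manage:keys"},
}

def can(role, permission):
    """Grant access only when the role explicitly lists the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# An analyst can read aggregate statistics but not raw customer events.
print(can("analyst", "read:aggregates"))  # True
print(can("analyst", "read:events"))      # False
```

The deny-by-default lookup (an unknown role gets an empty permission set) is the property that matters: sensitive customer data is reachable only through an explicit grant.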

Transform your business with scalable personalization 

Personalization at scale is essential for businesses striving to stay competitive in today’s rapidly evolving market. Customers increasingly expect experiences that feel tailored, anticipate their needs, and build trust. As cloud and AI technologies continue to advance, the opportunities for deeper, more impactful personalization will only expand. 

You can stay ahead of the competition by delivering personalized experiences that resonate with your customers. Here are some essential steps you can take to get started today:  

  • Audit your data landscape to identify silos and unify disparate systems into a centralized platform for streamlined insights.
  • Establish robust data governance policies to ensure compliance, security, and transparency, earning and maintaining customer trust.
  • Invest in scalable, elastic cloud infrastructure that grows with your needs, so you can handle the demands of real-time personalization.
  • Empower your teams with the training and tools needed to effectively leverage AI and cloud technologies, making personalization a reality. 

By acting now, businesses can not only meet today’s customer expectations but also pave the way to lead in a future driven by secure, scalable, and transformative personalization.


Microsoft Customer Stories

See how organizations are working smarter


1 McKinsey & Company, The value of getting personalization right—or wrong—is multiplying, November 2021.

More value, less risk: How to implement generative AI across the organization securely and responsibly
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/
Mon, 04 Nov 2024 16:00:00 +0000

The technology landscape is undergoing a massive transformation, and AI is at the center of this change.

The post More value, less risk: How to implement generative AI across the organization securely and responsibly appeared first on Microsoft AI Blogs.

The technology landscape is undergoing a massive transformation, and AI is at the center of this change—posing both new opportunities and new threats. While AI can be used by adversaries to execute malicious activities, it also has the potential to be a game changer for organizations, helping them defeat cyberattacks at machine speed. Already, generative AI stands out as a transformative technology that can help boost innovation and efficiency. To maximize the advantages of generative AI, we need to strike a balance between addressing the potential risks and embracing innovation. In our recent strategy paper, “Minimize Risk and Reap the Benefits of AI,” we provide a comprehensive guide to navigating the challenges and opportunities of using generative AI.


Minimize Risk and Reap the Benefits of AI

Addressing security concerns and implementing safeguards

According to a recent survey conducted by ISMG, the top concerns for both business executives and security leaders on using generative AI in their organization range from data security and governance to transparency, accountability, and regulatory compliance.1 In this paper, the first in a series on AI compliance, governance, and safety from the Microsoft Security team, we provide business and technical leaders with an overview of potential security risks when deploying generative AI, along with insights into recommended safeguards and approaches to adopt the technology responsibly and effectively.

Learn how to deploy generative AI securely and responsibly

In the paper, we explore five critical areas to help ensure the responsible and effective deployment of generative AI: data security, managing hallucinations and overreliance, addressing biases, legal and regulatory compliance, and defending against threat actors. Each section provides essential insights and practical strategies for navigating these challenges. 

[Infographic: the top five security and business leader concerns are data security, hallucinations, threat actors, biases, and legal and regulatory compliance]

Data security

Data security is a top concern for business and cybersecurity leaders. Specific worries include data leakage, over-permissioned data, and improper internal sharing. Traditional methods like applying data permissions and lifecycle management can enhance security. 

Managing hallucinations and overreliance

Generative AI hallucinations can lead to inaccurate data and flawed decisions. We explore techniques to help ensure AI output accuracy and minimize overreliance risks, including grounding data on trusted sources and using AI red teaming. 
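To make the grounding technique concrete, here is a minimal Python sketch of one common pattern: assembling a prompt that restricts the model to supplied source passages, then post-checking that the reply cites at least one of them. The function names and prompt wording are illustrative assumptions, not a specific Microsoft API.

```python
def build_grounded_prompt(question, sources):
    """Assemble a prompt that tells the model to answer only from the
    numbered source passages and to cite them as [n]."""
    numbered = "\n".join(f"[{i + 1}] {text}" for i, text in enumerate(sources))
    return (
        "Answer the question using ONLY the sources below, citing them as [n]. "
        "If the sources do not contain the answer, reply: I don't know.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

def cites_a_source(answer, source_count):
    """Post-check: flag answers that cite no source at all."""
    return any(f"[{i + 1}]" in answer for i in range(source_count))

prompt = build_grounded_prompt(
    "When was the retention policy last updated?",
    ["The data retention policy was last updated in March 2024."],
)
# The model's reply would then be screened with cites_a_source() before use.
```

Grounding narrows what the model may draw on; the citation post-check gives reviewers a cheap signal for spotting ungrounded output before anyone relies on it.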

Defending against threat actors

Threat actors use AI for cyberattacks, making safeguards essential. We cover protecting against malicious model instructions, AI system jailbreaks, and AI-driven attacks, emphasizing authentication measures and insider risk programs. 


Addressing biases

Reducing bias is crucial to help ensure fair AI use. We discuss methods to identify and mitigate biases from training data and generative systems, emphasizing the role of ethics committees and diversity practices.

Legal and regulatory compliance

Navigating AI regulations is challenging due to unclear guidelines and global disparities. We offer best practices for aligning AI initiatives with legal and ethical standards, including establishing ethics committees and leveraging frameworks like the NIST AI Risk Management Framework.

Explore concrete actions for the future


As your organization adopts generative AI, it’s critical to implement responsible AI principles—including fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability. In this paper, we provide an effective approach that uses the “map, measure, and manage” framework as a guide, and we explore the importance of experimentation, efficiency, and continuous improvement in your AI deployment.

I’m excited to launch this series on AI compliance, governance, and safety with a strategy paper on minimizing risk and enabling your organization to reap the benefits of generative AI. We hope this series serves as a guide to unlock the full potential of generative AI while ensuring security, compliance, and ethical use—and trust the guidance will empower your organization with the knowledge and tools needed to thrive in this new era for business.

Additional resources

Get more insights on emerging security challenges from Bret Arsenault’s Microsoft Security blogs, covering topics like next-generation built-in security, insider risk management, managing hybrid work, and more.


1, 2 ISMG’s First annual generative AI study – Business rewards vs. security risks: Research report, ISMG.

AI safety first: Protecting your business and empowering your people
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/10/31/ai-safety-first-protecting-your-business-and-empowering-your-people/
Thu, 31 Oct 2024 15:00:00 +0000

Microsoft has created some resources like the Be Cybersmart Kit to help organizations learn how to protect themselves.

The post AI safety first: Protecting your business and empowering your people appeared first on Microsoft AI Blogs.



Every technology can be used for good or bad. This was as true for fire and for writing as it is for search engines and for social networks, and it is very much true for AI. You can probably think of many ways that these latter two have helped and harmed in your own life—and you can probably think of the ways they’ve harmed more easily, because those stick out in our minds, while the countless ways they helped (finding your doctor, navigating to their office, the friends you made, the jobs you got) fade into the background of life. You’re not wrong to think this: when a technology is new it’s unfamiliar, and every aspect of it attracts our attention—how often do you get astounded by the existence of writing nowadays?—and when it doesn’t work, or gets misused, it attracts our attention a lot.

The job of the people who build technologies is to make them as good as possible at helping, and as bad as possible at harming. That’s what my job is: as CVP and Deputy CISO of AI Safety and Security at Microsoft, I have the rare privilege of leading a team whose job is to look at every aspect of every AI system we build, and figure out ways to make them safer and more effective. We use the word “safety” very intentionally, because our work isn’t just about security, or privacy, or abuse; our scope is simply “if it involves AI, and someone or something could get hurt.”

But the thing about tools is that no matter how safe you make them, they can go wrong and they can be misused, and if AI is going to be a major part of our lives—which it almost certainly is—then we all need to learn how to understand it, how to think about it, and how to keep ourselves safe both with and from it. So as part of Cybersecurity Awareness Month, we’ve created some resources like the Be Cybersmart Kit to help individuals and organizations learn about some of the most important risks and how to protect themselves.

Cybersecurity awareness

Explore cybersecurity awareness resources and training

I’d like to focus on the three risks that are most likely to affect you directly as individuals and organizations in the near future: overreliance, deepfakes, and manipulation. The most important lesson is that AI safety is about a lot more than how it’s built—it’s about the ways we use it.

Overreliance on AI


Because my job has “security” in the title, when people ask me about the number one risk from AI they often expect me to talk about sophisticated cyberattacks. But the reality is that the number one way in which people get hurt by AI is by not knowing when (not) to trust it. If you were around in the late 1990s or early 2000s, you might remember a similar problem with search engines: the worry was that anyone who saw something on the Internet, all nicely written and formatted, would assume whatever they read was true—and unfortunately, this worry was well-founded. This might seem ridiculous to us with twenty years of additional experience with the Internet; didn’t people know that the Internet was written by people? Had they ever met people? But at the time, very few people ever encountered professionally-formatted text with clean layouts that wasn’t the result of a lengthy editorial process; our instincts for what “looked reputable” were wrong. Today’s AI has a similar concern because it communicates with you, and we aren’t used to things that speak to us in natural language not understanding basic things about our lives.

We call this problem “overreliance,” and it comes in four basic shapes:

  • Naive overreliance happens when users simply don’t realize that just because responses from AI sound intelligent and well-reasoned, that doesn’t mean the responses actually are smart. They treat the AI like an expert instead of like a helpful, but sometimes naive, assistant.
  • Rushed overreliance happens when people know they need to check, but they just don’t have time to—maybe they’re in a fast-paced environment, or they have too many things to check one by one, or they’ve just gotten used to clicking “accept.”
  • Forced overreliance is what happens when users can’t check, even if they want to; think of an AI helping a non-programmer write a complex website (are you going to check the code for bugs?) or vision augmentation for the blind.
  • Motivated overreliance is maybe the sneakiest: it happens when users have an answer they want to get, and keep asking around (or rephrasing the question, or looking at different information) until they get it.

In each case, the problem with overreliance is that it undermines the human role in oversight, validation, and judgment, which is crucial in preventing AI mistakes from leading to negative outcomes.

How to stay safe

The most important thing you can do to protect yourself is to understand that AI systems aren’t the infallible computers of science fiction. The best way to think of them is as earnest, smart, junior colleagues—excited to help and sometimes really smart but sometimes also really dumb. In fact, this rule applies to a lot more than just overreliance: we’ve found that asking “how would I make this safe if it were a person instead of an AI?” is one of the most reliable ways to secure an AI system against a huge range of risks.

  1. Treat AI as a tool, not a decision-maker: Always verify the AI’s output, especially in critical areas. You wouldn’t hand a key task to a new hire and assume what they did is perfect; treat AI the same way. Whether it’s generating code or producing a report, review it carefully before relying on it.
  2. Maintain human oversight: Think of this as building a business process. If you’re going to be using an AI to help make decisions, who is going to cross-check that? Will someone be overseeing the results for compliance, maybe, or doing a final editorial pass? This is especially true in high-stakes or regulated environments where errors could have serious consequences.
  3. Use AI for brainstorming: AI is at its best when you ask it to lean into its creativity. It’s especially good at helping come up with ideas and interactively brainstorming. Don’t ask AI to do the job for you; ask AI to come up with an idea for your next step, think about it and maybe tweak it a bit, then ask it about its thoughts for what to do next. This way its creativity is boosting yours, while your eye is still on whether the result is what you want.

Train your team to know that AI can make mistakes. When people understand AI’s limitations, they’re less likely to trust it blindly.
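One way to build the human oversight described above into a workflow is a simple approval gate: AI output is staged, and nothing is released until a named person signs off. The sketch below is illustrative only; the class and method names are invented for the example.

```python
class ReviewGate:
    """Stage AI-generated drafts and release them only after human sign-off."""

    def __init__(self):
        self._drafts = {}
        self._next_id = 0

    def submit(self, content):
        """Stage a draft produced by the AI; returns its id."""
        self._next_id += 1
        self._drafts[self._next_id] = {"content": content, "approved_by": None}
        return self._next_id

    def approve(self, draft_id, reviewer):
        """Record a named human reviewer's sign-off."""
        self._drafts[draft_id]["approved_by"] = reviewer

    def release(self, draft_id):
        """Hand back the content only if a human has approved it."""
        draft = self._drafts[draft_id]
        if draft["approved_by"] is None:
            raise PermissionError("Draft has not been reviewed by a human yet.")
        return draft["content"]

gate = ReviewGate()
draft_id = gate.submit("AI-generated quarterly summary ...")
gate.approve(draft_id, reviewer="j.doe")
summary = gate.release(draft_id)  # succeeds only after approval
```

The point of the design is that the unapproved path fails loudly: skipping the human review step raises an error instead of quietly shipping unchecked AI output.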

Impersonation using AI


Deepfakes are highly realistic images, recordings, and videos created by AI. They’re called “fakes” when they’re used for deceptive purposes—and both this threat and the next one are about deception. Impersonation is when someone uses a deepfake to convince you that you’re talking to someone that you aren’t. This threat can have serious implications for businesses, as bad actors can use deepfake technology to deceive others into making decisions based on fraudulent information.

Imagine someone creates a deepfake of your chief financial officer's voice and uses it to convince an employee to authorize a fraudulent transfer. This isn't hypothetical; it has already happened. A company in Hong Kong was taken for $25.6 million with this exact technique.1

The real danger lies in how convincingly these AI-generated voices and videos can mimic trusted individuals, making it hard to know who you’re talking to. Traditional methods of identifying people—like hearing their voice on the phone or seeing them on a video call—are no longer reliable.

How to stay safe

As deepfakes become more convincing, the best defense is to communicate with people in ways where recognizing their face or voice isn't the only thing you're relying on. That means using authenticated communication channels like Microsoft Teams or email rather than phone calls or SMS, which are trivial to fake. Within those channels, you still need to check that you're talking to the person you think you're talking to, and well-built software can help you do that.

In the Hong Kong example above, the bad actor sent an email from a fake but realistic-looking address inviting the victim to a Zoom meeting on an attacker-controlled but realistically named server, where the victim had a conversation with "coworkers" who were actually all deepfakes. Email services such as Outlook can help prevent situations like this by clearly flagging messages that come from unfamiliar addresses outside your company; enterprise video conferencing (VC) systems like Teams can indicate that you're connecting to a system outside your own company as a guest. Use tools that provide indicators like these, and pay attention to them.

If you find that you need to talk over an unauthenticated channel (say, you get a phone call from a family member who is in trouble and desperately needs you to send money, or a WhatsApp message from an unfamiliar number), consider pre-arranging secret code words with the people you know so you can confirm they really are who they say they are.

All of these are examples of a familiar security technique called multi-factor authentication (MFA): using multiple independent means to verify that someone is who they say they are. If you communicate over an authenticated channel, an attacker has to both compromise an account on your service (which should itself be protected by multiple factors) and create a convincing deepfake of that particular person. Forcing attackers to pull off several different attacks against the same target at once makes their job exponentially harder. Most important services you use (email, social networks, and so on) let you set up MFA, and you should do so whenever you can, preferably with "strong" MFA methods like physical keys or mobile apps rather than weak methods like SMS, which are easy to fake.

According to our latest Microsoft Digital Defense Report, implementing modern MFA reduces the likelihood of account compromise by 99.2%, making unauthorized access far more difficult for attackers. Although MFA reduces the risk of identity compromise, many organizations have been slow to adopt it. So, in January 2020, Microsoft introduced "security defaults" that turn on MFA while turning off basic and legacy authentication for new tenants and those with simple environments. The impact is clear: tenants that use security defaults experience 80% fewer compromises than tenants that don't.
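To make the "second factor" concrete, here is a minimal sketch of the time-based one-time password (TOTP) algorithm that authenticator apps implement, per RFC 6238. The secret shown is a placeholder for illustration only; a real deployment would enroll a per-user secret and use a vetted library rather than hand-rolled code.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# The login service and the authenticator app each compute the code
# independently from the same enrolled secret; a sign-in succeeds only if
# the submitted code matches in addition to the password being correct.
code = totp("JBSWY3DPEHPK3PXP")  # placeholder secret
```

Because the code depends on both the shared secret and the current time window, a stolen password alone is not enough, which is exactly the "multiple independent factors" property described above.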

Scams, phishing, and social manipulation


Beyond impersonating someone you know, AI can be used to power a whole range of attacks against people. The most expensive part of running a scam is taking the victim from the moment they first pick up the bait—answering an email message, perhaps—to the moment the scammers get what they want, be it your password or your money. Phishing campaigns often require work to create cloned websites to steal your credentials. Spear-phishing requires crafting a targeted set of lures for each potential victim. All of these are things that bad actors can do much more quickly and easily with AI tools to help them; they are, after all, the same tools that good actors use to automate customer service, website building, or document creation.

On top of scams, an increasingly important use of AI is in social manipulation, especially by actors with political goals—whether they be real advocacy organizations or foreign intelligence services. Since the mid-2010s, a key goal of many governments has been to sow confusion in the information world in order to sway political outcomes. This can include:

  • Convincing you that something is true when it isn’t—maybe that some kind of crime is rampant and you need to be protected from it, or that your political enemies have been doing something awful.
  • Convincing you that something isn’t true when it is—maybe that the bad things they were caught doing are actually deepfakes and frauds.
  • Simply convincing you that you can’t know what’s true, and you can’t do anything about it anyway, so you should just give up and stay home and not try to affect things.

There are a lot of tricks to doing this, but the two most important are making it feel like "everybody thinks" something (by making sure you see just enough comments to that effect that you assume it must be right and start repeating it, making other people believe it even more) and telling you what you want to hear: creating false stories that line up with what you already expect to believe. (Remember motivated overreliance? This is the same dynamic!)

AI is supercharging this space as well. It used to be that if you wanted every hot conversation about a subject to include people voicing your opinion, you needed either very robotic-sounding scripts or a room full of hired operators. Today, all you need is a computer.

You can learn more about these attacks on our threat intelligence website, Microsoft Security Insider.

How to stay safe

Take your current habits for spotting potential scams or phishing attempts and turn them up a notch. Just because something showed up at the top of search results doesn't mean it's legitimate. Look carefully at things like URLs and source email addresses to judge whether what you're looking at is genuine.

To detect sophisticated phishing attempts, always verify both the source and the information through trusted channels. Cybercriminals often create a false sense of urgency, use amplification tactics, and mimic trustworthy sources to make their emails or content appear legitimate. Stay especially cautious when approached by unfamiliar individuals online, as most fraud or influence operations begin with a simple social media reply or a seemingly innocent "wrong number" message. (More sophisticated attacks will send friend requests to people, and once one person says yes, further requests to their friends look more legitimate, since those friends now have a mutual "friend" with the attacker.)
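One of the checks described above, scrutinizing a sender's address for lookalike domains, can be partly automated. The sketch below is a hypothetical helper, not a Microsoft product feature: the trusted-domain list and the 0.8 similarity threshold are illustrative assumptions, and real mail filters use far more signals than string similarity.

```python
from difflib import SequenceMatcher

# Assumed example allowlist; a real organization would maintain its own.
TRUSTED_DOMAINS = {"contoso.com", "fabrikam.com"}

def classify_sender(address):
    """Label a sender as trusted, a suspicious lookalike, or unknown."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    for known in TRUSTED_DOMAINS:
        # Near-identical domains (e.g. "c0ntoso.com") score high here,
        # which is the signature of a typosquatting impersonation attempt.
        if SequenceMatcher(None, domain, known).ratio() > 0.8:
            return f"suspicious lookalike of {known}"
    return "unknown"
```

For example, `classify_sender("ceo@c0ntoso.com")` would flag the address as a lookalike of `contoso.com` rather than letting the zero-for-"o" swap slip past a quick visual scan.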

Social manipulation can affect you either directly (you see messages created by a threat actor) or indirectly (your friends saw those messages and unwittingly repeated them). This means that just because you hear something from someone you trust, you can't be sure they weren't fooled too. If you're forming your opinion about something, or if you need to make an important decision about whether to believe something, do some research and figure out where the story came from. (And don't forget that "they won't tell you about this!" is a common addition to frauds, designed to make the lack of news coverage seem like evidence of truth.)

But on the other hand, don’t refuse to believe anything you hear, because making you not believe true things is another way you can be cheated. Too much skepticism can get you in just as much trouble as not enough.

And ultimately, remember—social media and similar fora are designed to get you more engaged, activated, and excited, and when you’re in that state, you’re more likely to amplify any feelings you encounter. Often the best thing you can do is simply disconnect for a while and take a breather.

The power and limitations of AI

While AI is a powerful tool, its safety and effectiveness rely on more than just the technology itself. AI functions as one part of a larger, interconnected system that includes human oversight, business processes, and societal context. Navigating the risks, whether overreliance, impersonation, cyberattacks, or social manipulation, requires understanding not only AI's role but also the actions people must take to stay safe. As AI continues to evolve, staying safe means remaining an active participant: adapting, learning, and taking intentional steps to protect both the technology and ourselves. We encourage you to use the resources on the cybersecurity awareness page and help educate your organization to create a security-first culture and secure our world, together.

Learn more about AI safety and security


1Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’, CNN, 2024.
