Digital Security Industry trends | Microsoft Security Blog
http://approjects.co.za/?big=en-us/security/blog/content-type/industry-trends/
Expert coverage of cybersecurity topics

Threat actor abuse of AI accelerates from tool to cyberattack surface
http://approjects.co.za/?big=en-us/security/blog/2026/04/02/threat-actor-abuse-of-ai-accelerates-from-tool-to-cyberattack-surface/
Thu, 02 Apr 2026 16:00:00 +0000

Generative AI is upgrading cyberattacks, from a 4.5-fold jump in phishing click-through rates to industrialized MFA bypass.

The post Threat actor abuse of AI accelerates from tool to cyberattack surface appeared first on Microsoft Security Blog.

For the last year, one word has dominated the conversation at the intersection of AI and cybersecurity: speed. Speed matters, but it's not the most important shift we are observing across the threat landscape today. Threat actors from nation-states to cybercrime groups are now embedding AI into how they plan, refine, and sustain cyberattacks. The objectives haven't changed, but the tempo, iteration, and scale of generative AI-enabled attacks are certainly upgrading them.

As with defenders, though, there is typically still a human in the loop powering these attacks; fully autonomous or agentic AI is not yet running campaigns. AI is reducing friction across the attack lifecycle: helping threat actors research faster, write better lures, vibe-code malware, and triage stolen data. The security leaders I spoke with at RSAC™ 2026 Conference this week are prioritizing resources and strategy shifts to get ahead of this critical progression across the threat landscape.

The operational reality: Embedded, not emerging

The scale of what we are tracking is impossible to dismiss. Threat activity spans every region. The United States alone represents nearly 25% of observed activity, followed by the United Kingdom, Israel, and Germany. That volume reflects economic and geopolitical realities.1

But the bigger shift is not geographic; it's operational. Threat actors are embedding AI into how they work across reconnaissance, malware development, and post-compromise operations. Objectives like credential theft, financial gain, and espionage might look familiar, but the precision, persistence, and scale behind them have changed.

Email is still the fastest inroad

Email remains the fastest and cheapest path to initial access. What has changed is the level of refinement that AI enables in crafting the message that gets someone to click.

When AI is embedded into phishing operations, we are seeing click-through rates reach 54%, compared with roughly 12% for more traditional campaigns. That is 4.5 times the effectiveness, and it is the result of improved precision, not increased volume. AI is helping threat actors localize content and adapt messaging to specific roles, reducing the friction in crafting a lure that converts into access. When you combine that improved effectiveness with infrastructure designed to bypass multifactor authentication (MFA), the result is phishing operations that are more resilient, more targeted, and significantly harder to defend against at scale.

A 4.5-fold jump in click-through rates changes the risk calculus for every organization. It also signals that AI is not just being used to do more of the same; it is being used to do it better.
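To make the arithmetic behind those figures explicit, here is a small illustrative calculation. The click-through rates come from the comparison above; the 10,000-email campaign size is a hypothetical assumption used only to show the absolute difference:

```python
# Compare AI-assisted vs. traditional phishing click-through rates
# using the figures cited above (54% vs. roughly 12%).

def uplift(new_rate: float, baseline_rate: float) -> float:
    """Return the multiplicative uplift of new_rate over baseline_rate."""
    return new_rate / baseline_rate

ai_ctr = 0.54        # click-through rate with AI-embedded phishing
baseline_ctr = 0.12  # click-through rate for traditional campaigns

print(f"{uplift(ai_ctr, baseline_ctr):.1f}x the effectiveness")  # 4.5x

# For a hypothetical 10,000-email campaign, that is the difference
# between roughly 1,200 clicks and 5,400 clicks.
campaign_size = 10_000
print(int(campaign_size * baseline_ctr), int(campaign_size * ai_ctr))  # 1200 5400
```

The point is not the exact numbers but the shape of the curve: the same sending volume yields several times as many footholds.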

Tycoon2FA: What industrial-scale cybercrime looks like

Tycoon2FA is an example of how the actor we track as Storm-1747 shifted toward refinement and resilience. Understanding how it operated shows where threats might be headed, and it fueled conversations in the briefing rooms at RSAC 2026 this week that focused on the ecosystem instead of individual actors.

Tycoon2FA was not just a phishing kit; it was a subscription platform that generated tens of millions of phishing emails per month and was linked to nearly 100,000 compromised organizations since 2023. At its peak, it accounted for roughly 62% of all phishing attempts that Microsoft was blocking every month. The operation specialized in adversary-in-the-middle attacks designed to defeat MFA: it intercepted credentials and session tokens in real time, allowing attackers to authenticate as legitimate users without triggering alerts, even after passwords were reset.

But the technical capability is only part of the story. The bigger shift is structural. Storm-1747 was not operating alone. This was modular cybercrime: one service handled phishing templates, another provided infrastructure, another managed email distribution, another monetized access. It was effectively an assembly line for identity theft. The services were composable, scalable, and available by subscription.

This is the model that has changed the conversations this week: it is not about a single sophisticated actor, but about an ecosystem that has industrialized access and lowered the barrier to entry for every actor that plugs into it. That is exactly what AI is doing across the broader threat landscape: making the capabilities of sophisticated actors available to everyone.

Disruption: Closing the threat intelligence loop

Our Digital Crimes Unit disrupted Tycoon2FA earlier this month, seizing 330 domains in coordination with Europol and industry partners. But the goal was not simply to take down websites. The goal was to apply pressure to a supply chain. Cybercrime today is about scalable service models that lower the barrier to entry. Identity is the primary target and MFA bypass is now packaged as a feature. Disrupting one service forces the market to adapt. Sustained pressure fragments the ecosystem. By targeting the economic engine behind attacks, we can reshape the risk environment.

Every time we disrupt an attack, it generates signal. The signal feeds intelligence. The intelligence strengthens detection. Detection is what drives response. That is how we turn threat actor actions into durable defenses, and how the work of disruption compounds over time. Microsoft's ability to observe, act, and share intelligence at scale is the differentiation that matters, because of how we put it into practice.

AI across the full attack lifecycle

When we step back from any single campaign and look for a broader pattern, AI doesn’t show up in just one phase of an attack; it appears across the entire lifecycle. At RSAC 2026 this week, I offered a frame to help defenders prioritize their response:

  • In reconnaissance: AI accelerates infrastructure discovery and persona development, compressing the time between target selection and first contact. 
  • In resource development: AI generates forged documents and polished social engineering narratives, and supports attack infrastructure at scale.
  • In initial access: AI refines voice overlays, deepfakes, and message customization using scraped data, producing lures that are increasingly difficult to distinguish from legitimate communications.
  • In persistence and evasion: AI scales fake identities and automates communication that maintains attacker presence while blending with normal activity. 
  • In weaponization: AI enables malware development, payload regeneration, and real-time debugging, producing tooling that adapts to the victim environment rather than relying on static signatures. 
  • In post-compromise operations: AI adapts tooling to the specific victim environment and, in some cases, automates ransom negotiation itself. 

The objective has not changed: credential theft, financial gain, and espionage. What has changed is the tempo, the iteration speed, and the ability to test and refine at scale. AI is not just accelerating cyberattacks; it's upgrading them.

What comes next

In my sessions at RSAC 2026 this week, I shared a set of themes that help define the AI-powered shift in the threat landscape.

The first is the agentic threat model. The scenarios we prepare for have changed. The barrier to launching sophisticated attacks has collapsed. What once required the resources of a nation-state or well-organized criminal enterprise is now accessible to a motivated individual with the right tools and the patience to use them. The techniques have not fundamentally changed; the precision, velocity, and volume have.

The second is the software supply chain. Knowing what software and agents you have deployed and being able to account for their behavior is not a compliance exercise. The agent ecosystem will become the most attacked surface in the enterprise. Organizations that cannot answer basic inventory questions about their agent environment will not be able to defend it.

The third is understanding the value of human talent in a security operation that uses agentic systems to scale. The security analyst as practitioner is giving way to the security analyst as orchestrator, and the talent models organizations are hiring against today are already outdated. Technology can help protect humans who make mistakes, but it also means auditability of agent decisions is a governance requirement today, not eventually. The SOC of the future demands a fundamentally different kind of defender.

The moment to lead with strategic clarity, ranked priorities, and a hardened posture for agentic accountability is now.

If AI is embedded across the attack lifecycle, intelligence and defense must be embedded across the lifecycle too. Microsoft Threat Intelligence will continue to track, publish, and act on what we are observing in real time. The patterns are visible. The intelligence is there.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2025.

Women’s History Month: Encouraging women in cybersecurity at every career stage
http://approjects.co.za/?big=en-us/security/blog/2026/03/05/womens-history-month-encouraging-women-in-cybersecurity-at-every-career-stage/
Thu, 05 Mar 2026 17:00:00 +0000

This Women’s History Month, we explore ways to support the next generation of female defenders at every career stage.

Women’s History Month—and International Women’s Day on March 8, 2026—always gives me pause for reflection. It’s a moment to consider how far we’ve come and who we choose to uplift as we look ahead.

Throughout my career, I’ve been inspired by extraordinary women leaders—trailblazers who broke barriers, opened doors, and reshaped what leadership in technology looks like. But today, I want to shine a light on another group that inspires me just as deeply: women early in their careers—the builders, learners, and question-askers who are defining the future of cybersecurity and developing their skills in the era of AI.

These women are entering the field at a moment of unprecedented complexity. Cyberthreats are accelerating. AI is reshaping how we defend, detect, and respond. And the stakes—for trust, safety, and resilience—have never been higher.

That’s exactly why it has never been more critical to have a wide range of experiences and perspectives in our defender community.

Be Cybersmart

Help educate everyone in your organization with cybersecurity awareness resources and training curated by the security experts at Microsoft.

Get the Be Cybersmart Kit.

Why diversity of perspectives is not optional in cybersecurity

Cybersecurity is fundamentally about understanding people—how they behave, how they make decisions, how systems can be misused, and where harm can occur. That’s why diversity of perspectives, backgrounds, experiences, and people is a security imperative.

The ISACA paper titled “The Value of Diversity and Inclusion in Cybersecurity” concludes that cybersecurity teams lacking diversity are at greater risk of engaging in limited threat modeling, exhibiting reduced innovation, and making less robust decisions in complex security environments. At Microsoft Security, we recognize that the cyberthreats we encounter are as varied and multifaceted as humanity itself.

To stay ahead, our teams must reflect that diversity across gender, background, culture, discipline, and lived experience.

When teams bring different perspectives to the table:

  • They ask better questions.
  • They surface risks earlier.
  • They design systems that work for more people.
  • They build security that is resilient by design.

The power of women early in career and beyond

Women early in their career bring something incredibly powerful to cybersecurity and AI: fresh perspective paired with fearless curiosity. Women bring empathy, clarity, systems thinking, and collaborative leadership that directly strengthen our ability to detect cyberthreats, understand human behavior, and build secure products that work for everyone.

This makes me think of my valued friend and colleague, Lauren Buitta, founder and chief executive officer (CEO) of Girl Security. Lauren has been a tireless advocate for providing women early in career—especially those from underrepresented backgrounds—with the skills and confidence needed to enter security careers. She often says, “Security isn’t just a discipline—it’s empowerment through knowledge.” That philosophy extends to Girl Security’s work preparing the next generation to navigate and lead in an AI-powered world. Her efforts show that nurturing curiosity early on can have lasting effects throughout a career.

Women at this career stage challenge assumptions that may no longer hold. They ask “why” before accepting “how.” They are often the first to notice gaps—in data, in design, in who is represented and who is missing. Supporting them isn’t just about equity; it’s about strengthening the future of security itself and building a stronger, more resilient security ecosystem.

Building and cultivating pathways for the next generation

Investing in women early in their cybersecurity and AI security careers is essential. Early access to education, opportunity, and confidence-building experiences helps more women see themselves in this field—and choose to stay.

But if we stop there, we shouldn’t be surprised when the numbers don’t move. In fact, independent global analyses from the Global Cybersecurity Forum and Boston Consulting Group show that women represent just 24% of the cybersecurity workforce worldwide—a figure reinforced by LinkedIn’s real-time labor market data. What I’ve realized is this: to change outcomes, we have to cultivate women throughout their careers—from first exposure to technical mastery, from early roles to leadership, and from individual contributor to decision-maker. Otherwise, we’ll continue to bring women into the field without creating the conditions that allow them to grow, advance, and remain.

That means pairing early career investment with sustained support, inclusive cultures, and everyday actions that reinforce belonging and opportunity over time.

Here are meaningful steps we can all take—not just to widen the pipeline, but to strengthen it end to end:

1. Share stories from a diverse set of role models at every career stage.
Representation fuels imagination. When women early in career see themselves reflected in cybersecurity, they’re more likely to enter the field. When women midcareer and in senior roles see paths forward, they’re more likely to stay and lead.

2. Reevaluate job descriptions at entry and beyond.
Rigid expectations or narrow definitions of technical expertise discourage qualified candidates from applying, and can also limit progression into advanced or leadership roles.

3. Invest in inclusive training and early career programs and sustain learning over time.
Accessible, hands-on learning builds confidence early. Continued upskilling, reskilling, and leadership development ensure women can evolve alongside rapidly changing security and AI technologies.

4. Volunteer with organizations driving cybersecurity and AI education.
Groups like Girl Security and Women in CyberSecurity (WiCyS) are changing outcomes for thousands of girls and women. Your time, mentorship, or sponsorship helps build momentum early—and reinforces pathways later. I welcome you to join Nicole Ford, Vice President and Customer Security Officer at Microsoft, who will be hosting a leadership lunch at the WiCyS conference to discuss cultivating leaders for the future through advocacy and sponsorship.

5. Partner with community groups offering mentorship and sponsorship opportunities.
Mentorship is one of the strongest predictors of early career success. Sponsorship—advocacy that opens doors to stretch roles, visibility, and advancement—is critical for long-term progression.

6. Be an ally every day across the full career journey.
Introduce emerging talent to your networks. Encourage them to speak up. Create space for them to lead. Advocate for their ideas in rooms they aren’t in yet—especially as stakes and visibility increase.

Our commitment—and our opportunity

At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. That starts by ensuring the next generation of cybersecurity and AI security professionals has equitable access to opportunity, education, and belonging.

This Women’s History Month, let’s celebrate not only the women who have led the way, but also the women who are just getting started.

They’re actively shaping security today, not just influencing its future. Security is a team sport, and we need everyone on the team. Together, we can build a safer, more inclusive digital future for all.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

Unify now or pay later: New research exposes the operational cost of a fragmented SOC
http://approjects.co.za/?big=en-us/security/blog/2026/02/17/unify-now-or-pay-later-new-research-exposes-the-operational-cost-of-a-fragmented-soc/
Tue, 17 Feb 2026 17:00:00 +0000

New research from Microsoft and Omdia reveals how fragmented tools, manual workflows, and alert overload are pushing SOCs to a breaking point.

Security operations are entering a pivotal moment: the operating model that grew up around network logs and phishing emails is buckling under tool sprawl, manual triage, and threat actors that outpace defender capacity. New research from Microsoft and Omdia shows just how heavy the burden can be—security operations centers (SOCs) juggle double-digit console counts, teams manually ingest data several times a week, and nearly half of all alerts go uninvestigated. The result is a growing gap between cyberattacker speed and defender capacity. Read State of the SOC—Unify Now or Pay Later to learn how hidden operational pressures impact resilience, and why unification, automation, and AI-powered workflows are quickly becoming non-negotiable for modern SOC performance.

The forces pushing modern SOC operations to a breaking point

The report surfaces five specific operational pressures shaping the modern SOC—spanning fragmentation, manual toil, signal overload, business-level risk exposure, and detection bias. Separately, each data point is striking. But taken together, they reveal a more consequential reality: analysts spend their time stitching context across consoles and working through endless queues, while real cyberattacks move in parallel. When investigations stall and alerts go untriaged, missed signals don’t just hurt metrics—they create the conditions for preventable compromises. Let’s take a closer look at each of the five issues:

1. Fragmentation

Fragmented tools and disconnected data force analysts to pivot across an average of 10.9 consoles1 and manually reconstruct context, slowing investigations and increasing the likelihood of missed signals. These gaps compound when only about 59% of tools push data to the security information and event management (SIEM), leaving most SOCs manually ingesting data and operating with incomplete visibility.

2. Manual toil

Manual, repetitive data work consumes an outsized share of analyst capacity, with 66% of SOCs losing 20% of their week to aggregation and correlation—an operational drain that delays investigations, suppresses threat hunting, and weakens the SOC’s ability to reduce real risk.

3. Security signal overload

Surging alert volumes bury analysts in noise, with an estimated 46% of alerts proving to be false positives and 42% going uninvestigated, overwhelming capacity, driving fatigue, and increasing the likelihood that real cyberthreats slip through unnoticed.
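To see what those percentages imply in practice, here is a hedged back-of-the-envelope model. The false-positive and uninvestigated rates come from the report figures above; the daily alert volume and the even distribution of false positives across the queue are illustrative assumptions, not findings from the study:

```python
# Back-of-the-envelope model of the alert figures above. The daily
# alert volume is a hypothetical input; the 46% false-positive and
# 42% uninvestigated rates come from the report.

def triage_exposure(daily_alerts: int,
                    false_positive_rate: float = 0.46,
                    uninvestigated_rate: float = 0.42) -> dict:
    uninvestigated = daily_alerts * uninvestigated_rate
    # Assumption: false positives are spread evenly across the queue,
    # so the uninvestigated slice holds the same share of real alerts.
    real_uninvestigated = uninvestigated * (1 - false_positive_rate)
    return {
        "uninvestigated": round(uninvestigated),
        "real_alerts_never_reviewed": round(real_uninvestigated),
    }

print(triage_exposure(5_000))
# At a hypothetical 5,000 alerts/day: 2,100 uninvestigated,
# roughly 1,134 of which would be real alerts.
```

Even under generous assumptions, an unreviewed queue of this size leaves a meaningful number of genuine signals untouched every day.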

4. Operational gaps

Operational gaps are translating directly into business-disrupting incidents, with 91% of security leaders reporting serious events and more than half experiencing five or more in the past year—exposing organizations to financial loss, downtime, and reputational damage.

5. Detection bias

Detection bias keeps SOCs focused on tuning alerts for familiar cyberthreats—52% of positive alerts map to known vulnerabilities—leaving dangerous blind spots for emerging tactics, techniques, and procedures (TTPs). This reactive posture slows proactive threat hunting and weakens readiness for novel attacks, even as 75% of security leaders worry the SOC is failing to keep pace with new cyberthreats.

Read the full report for the deeper story, including chief information security officer (CISO)-level takeaways, expanded data, and the complete analysis behind each operational pressure, as well as insights that can help security professionals strengthen their strategy and improve real-world SOC outcomes.

What CISOs can do now to strengthen resilience

Security leaders have a clear path to easing today’s operational strain: unify the environment, automate what slows teams down, and elevate identity and endpoint as a single control plane. The shift is already underway as forward-leaning organizations focus on high-impact wins—automating routine lookups, reducing noise, streamlining triage, and eliminating the fragmentation and manual toil that drain analyst capacity. Identity remains the most critical failure point, and leaders increasingly view unified identity to endpoint protection as foundational to reducing exposure and restoring defender agility. And as environments unify, the strength of the underlying graph and data lake becomes essential for connecting signals at scale and accelerating every defender workflow.

As AI matures, leaders are also looking for governable, customizable approaches—not black box automation. They want AI agents they can shape to their environment, integrate deeply with their SIEM, and extend across cloud, identity, and on-premises signals. This mindset reflects a broader operational shift: modern key performance indicators (KPIs) will improve only when tools, workflows, and investigations are unified, and automation frees analysts for higher value work.

The report details a roadmap for CISOs that emphasizes unifying signals, embedding AI into core workflows, and strengthening identity as the primary control point for reducing risk. It shows how leaders can turn operational friction into strategic momentum by consolidating tools, automating routine investigation steps, elevating analysts to higher value work, and preparing their SOCs for a future defined by integrated visibility, adaptive defenses, and AI-assisted decision making.

Chart your path forward

The pressures facing today’s SOCs are real, but the path forward is increasingly clear. As this report shows, organizations that take these steps aren’t just reducing operational friction—they’re building a stronger foundation for rapid detection, decisive response, and long-term readiness. Read State of the SOC—Unify Now or Pay Later for deeper guidance, expanded findings, and a phased roadmap that can help security professionals chart the next era of their SOC evolution.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1The study, commissioned by Microsoft, was conducted by Omdia from June 25, 2025, to July 23, 2025. Survey respondents (N=300) included security professionals responsible for SOC operations at mid-market and enterprise organizations (more than 750 employees) across the United States, United Kingdom, and Australia and New Zealand. All statistics included in this post are from the study.

80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier
http://approjects.co.za/?big=en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/
Tue, 10 Feb 2026 16:00:00 +0000

Read Microsoft's new Cyber Pulse report for straightforward, practical insights and guidance on new cybersecurity risks.

Today, Microsoft is releasing the new Cyber Pulse report to provide leaders with straightforward, practical insights and guidance on new cybersecurity risks. One of today’s most pressing concerns is the governance of AI and autonomous agents. AI agents are scaling faster than some companies can see them—and that visibility gap is a business risk.1 Like people, AI agents require protection through strong observability, governance, and security using Zero Trust principles. As the report highlights, organizations that succeed in the next phase of AI adoption will be those that move with speed and bring business, IT, security, and developer teams together to observe, govern, and secure their AI transformation.

Agent building isn’t limited to technical roles; today, employees in many positions create and use agents in their daily work. More than 80% of Fortune 500 companies today use active AI agents built with low-code/no-code tools.2 AI is ubiquitous in many operations, and generative AI-powered agents are embedded in workflows across sales, finance, security, customer service, and product innovation.

With agent use expanding and transformation opportunities multiplying, now is the time to get foundational controls in place. AI agents should be held to the same standards as employees or service accounts. That means applying long‑standing Zero Trust security principles consistently:

  • Least privilege access: Give every user, AI agent, or system only what they need—no more.
  • Explicit verification: Always confirm who or what is requesting access using identity, device health, location, risk level.
  • Assume compromise can occur: Design systems expecting that cyberattackers will get inside.

These principles are not new, and many security teams have already implemented Zero Trust in their organizations. What’s new is their application to non-human users operating at scale and speed. Organizations that embed these controls in their AI agent deployments from the beginning will be able to move faster and build trust in AI.
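As a rough illustration, the three Zero Trust principles above can be expressed as a single access-decision check. This is a hypothetical sketch, not any particular product’s API; the field names, policy shape, and risk threshold are illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical access-decision sketch applying the three Zero Trust
# principles above to an agent's request. Field names and the risk
# threshold are illustrative assumptions, not a real product API.

@dataclass
class AgentRequest:
    agent_id: str
    resource: str
    identity_verified: bool   # explicit verification: who is asking?
    device_healthy: bool      # explicit verification: from what context?
    risk_score: float         # 0.0 (benign) to 1.0 (high risk)

@dataclass
class AgentPolicy:
    allowed_resources: set    # least privilege: only what the agent needs
    max_risk: float = 0.3     # assume compromise: deny on elevated risk

def authorize(req: AgentRequest, policy: AgentPolicy) -> bool:
    if not (req.identity_verified and req.device_healthy):
        return False                          # verify explicitly
    if req.resource not in policy.allowed_resources:
        return False                          # least privilege
    return req.risk_score <= policy.max_risk  # assume breach

policy = AgentPolicy(allowed_resources={"crm:read"})
ok = authorize(AgentRequest("invoice-bot", "crm:read", True, True, 0.1), policy)
denied = authorize(AgentRequest("invoice-bot", "finance:write", True, True, 0.1), policy)
print(ok, denied)  # True False
```

The design point is that the same gate evaluates every request, human or agent, rather than granting standing access.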

The rise of human-led AI agents

The use of AI agents is growing across regions around the world, from the Americas to Europe, the Middle East, and Africa (EMEA), and Asia.

A graph showing the percentages of the regions around the world using AI agents.

According to Cyber Pulse, leading industries such as software and technology (16%), manufacturing (13%), financial institutions (11%), and retail (9%) are using agents to support increasingly complex tasks—drafting proposals, analyzing financial data, triaging security alerts, automating repetitive processes, and surfacing insights at machine speed.3 These agents can operate in assistive modes, responding to user prompts, or autonomously, executing tasks with minimal human intervention.

A graphic showing the percentage of industries using agents to support complex tasks.
Source: Industry Agent Metrics were created using Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

And unlike traditional software, agents are dynamic. They act. They decide. They access data. And increasingly, they interact with other agents.

That changes the risk profile fundamentally.

The blind spot: Agent growth without observability, governance, and security

Despite the rapid adoption of AI agents, many organizations struggle to answer some basic questions:

  • How many agents are running across the enterprise?
  • Who owns them?
  • What data do they touch?
  • Which agents are sanctioned—and which are not?

This is not a hypothetical concern. Shadow IT has existed for decades, but shadow AI introduces new dimensions of risk. Agents can inherit permissions, access sensitive information, and generate outputs at scale—sometimes outside the visibility of IT and security teams. Bad actors might exploit agents’ access and privileges, turning them into unintended double agents. Like human employees, an agent with too much access—or the wrong instructions—can become a vulnerability. When leaders lack observability in their AI ecosystem, risk accumulates silently.

According to the Cyber Pulse report, 29% of employees have already turned to unsanctioned AI agents for work tasks.4 This gap is noteworthy: it indicates that many organizations are deploying AI capabilities and agents before establishing appropriate controls for access management, data protection, compliance, and accountability. In regulated sectors such as financial services, healthcare, and the public sector, the consequences can be particularly significant.

Why observability comes first

You can’t protect what you can’t see, and you can’t manage what you don’t understand. Observability means having a control plane across all layers of the organization (IT, security, developers, and AI teams) to understand:

  • What agents exist 
  • Who owns them 
  • What systems and data they touch 
  • How they behave 

In the Cyber Pulse report, we outline five core capabilities that organizations need to establish for true observability and governance of AI agents:

  • Registry: A centralized registry acts as a single source of truth for all agents across the organization—sanctioned, third‑party, and emerging shadow agents. This inventory helps prevent agent sprawl, enables accountability, and supports discovery while allowing unsanctioned agents to be restricted or quarantined when necessary.
  • Access control: Each agent is governed using the same identity‑ and policy‑driven access controls applied to human users and applications. Least‑privilege permissions, enforced consistently, help ensure agents can access only the data, systems, and workflows required to fulfill their purpose—no more, no less.
  • Visualization: Real‑time dashboards and telemetry provide insight into how agents interact with people, data, and systems. Leaders can see where agents are operating, understand dependencies, and monitor behavior and impact—supporting faster detection of misuse, drift, or emerging risk.
  • Interoperability: Agents operate across Microsoft platforms, open‑source frameworks, and third‑party ecosystems under a consistent governance model. This interoperability allows agents to collaborate with people and other agents across workflows while remaining managed within the same enterprise controls.
  • Security: Built‑in protections safeguard agents from internal misuse and external cyberthreats. Security signals, policy enforcement, and integrated tooling help organizations detect compromised or misaligned agents early and respond quickly—before issues escalate into business, regulatory, or reputational harm.
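To make the registry and access-control capabilities above concrete, here is a minimal sketch of a centralized agent inventory that enforces least privilege and surfaces shadow agents. All names, fields, and scope strings are hypothetical illustrations for this post, not any actual Microsoft API or schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a centralized agent registry acting as a single
# source of truth, with least-privilege access checks and shadow-agent
# discovery. Illustrative only -- not a real product interface.

@dataclass
class Agent:
    agent_id: str
    owner: str                       # accountable human owner
    sanctioned: bool                 # False => shadow agent, candidate for quarantine
    allowed_scopes: set[str] = field(default_factory=set)  # explicit least-privilege grants

class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        self._agents[agent.agent_id] = agent

    def can_access(self, agent_id: str, scope: str) -> bool:
        # An unregistered or unsanctioned agent gets nothing; a sanctioned
        # agent gets only the scopes it was explicitly granted.
        agent = self._agents.get(agent_id)
        if agent is None or not agent.sanctioned:
            return False
        return scope in agent.allowed_scopes

    def shadow_agents(self) -> list[str]:
        # Discovery: surface unsanctioned agents for review or quarantine.
        return [a.agent_id for a in self._agents.values() if not a.sanctioned]

registry = AgentRegistry()
registry.register(Agent("hr-bot", owner="alice", sanctioned=True,
                        allowed_scopes={"hr:read"}))
registry.register(Agent("unknown-scraper", owner="?", sanctioned=False))

print(registry.can_access("hr-bot", "hr:read"))       # True
print(registry.can_access("hr-bot", "finance:read"))  # False: least privilege
print(registry.shadow_agents())                       # ['unknown-scraper']
```

The design point is that the registry and the access decision share one data model: an agent that never made it into the inventory cannot be granted anything, which is exactly how an inventory prevents silent risk accumulation.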

Governance and security are not the same—and both matter

One important clarification emerging from Cyber Pulse is this: governance and security are related, but not interchangeable.

  • Governance defines ownership, accountability, policy, and oversight.
  • Security enforces controls, protects access, and detects cyberthreats.

Both are required. And neither can succeed in isolation.

AI governance cannot live solely within IT, and AI security cannot be delegated only to chief information security officers (CISOs). This is a cross-functional responsibility, spanning legal, compliance, human resources, data science, business leadership, and the board.

When AI risk is treated as a core enterprise risk—alongside financial, operational, and regulatory risk—organizations are better positioned to move quickly and safely.

Strong security and governance do more than reduce risk—they enable transparency. And transparency is fast becoming a competitive advantage.

From risk management to competitive advantage

This is an exciting time for leading Frontier Firms. Many organizations are already using this moment to modernize governance, reduce overshared data, and establish security controls that allow safe use. They are proving that security and innovation are not opposing forces; they are reinforcing ones. Security is a catalyst for innovation.

According to the Cyber Pulse report, the leaders who act now will mitigate risk, unlock faster innovation, protect customer trust, and build resilience into the very fabric of their AI-powered enterprises. The future belongs to organizations that innovate at machine speed and observe, govern, and secure with the same precision. If we get this right, and I know we will, AI becomes more than a breakthrough in technology—it becomes a breakthrough in human ambition.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Data Security Index 2026: Unifying Data Protection and AI Innovation, Microsoft Security, 2026.

2Based on Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

3Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

4July 2025 multi-national survey of more than 1,700 data security professionals commissioned by Microsoft from Hypothesis Group.

Methodology:

Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the past 28 days of November 2025. 

2026 Data Security Index: 

A 25-minute multinational online survey was conducted from July 16 to August 11, 2025, among 1,725 data security leaders. 

Questions centered around the data security landscape, data security incidents, securing employee use of generative AI, and the use of generative AI in data security programs to highlight comparisons to 2024. 

One-hour in-depth interviews were conducted with 10 data security leaders in the United States and United Kingdom to garner stories about how they are approaching data security in their organizations. 

Definitions: 

Active Agents are 1) deployed to production and 2) have some “real activity” associated with them in the past 28 days.  

“Real activity” is defined as 1+ engagement with a user (assistive agents) OR 1+ autonomous runs (autonomous agents).  
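The two-part definition above can be expressed directly as code. This is a sketch of the stated criteria only; the field names are hypothetical, not the actual telemetry schema:

```python
# Sketch of the "Active Agent" definition: (1) deployed to production AND
# (2) real activity in the past 28 days -- 1+ user engagement for assistive
# agents, or 1+ autonomous run for autonomous agents. Field names are
# illustrative assumptions.

def is_active_agent(agent: dict) -> bool:
    deployed = agent.get("deployed_to_production", False)
    if agent.get("kind") == "assistive":
        real_activity = agent.get("user_engagements_28d", 0) >= 1
    else:  # autonomous
        real_activity = agent.get("autonomous_runs_28d", 0) >= 1
    return deployed and real_activity

print(is_active_agent({"kind": "assistive", "deployed_to_production": True,
                       "user_engagements_28d": 3}))   # True
print(is_active_agent({"kind": "autonomous", "deployed_to_production": True,
                       "autonomous_runs_28d": 0}))    # False: no real activity
```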

The post 80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier appeared first on Microsoft Security Blog.

]]>
The CISO imperative: Building resilience in an era of accelerated cyberthreats http://approjects.co.za/?big=en-us/security/blog/2025/10/22/the-ciso-imperative-building-resilience-in-an-era-of-accelerated-cyberthreats/ Wed, 22 Oct 2025 16:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=143198 The latest Microsoft Digital Defense Report 2025 paints a vivid picture of a cyberthreat landscape in flux. The surge in financially motivated cyberattacks and the persistent risk of nation-state actors demand urgent attention. But for those of us in the Office of the CISO, the real challenge, and opportunity, lies in how organizations respond, adapt, and build resilience for what comes next.

The post The CISO imperative: Building resilience in an era of accelerated cyberthreats appeared first on Microsoft Security Blog.

]]>
The latest Microsoft Digital Defense Report 2025 paints a vivid picture of a cyberthreat landscape in flux. The surge in financially motivated cyberattacks and the persistent risk of nation-state actors demand urgent attention. But for those of us in the Office of the Chief Information Security Officer (CISO), the real challenge and opportunity lie in how organizations respond, adapt, and build resilience for what comes next.

This year’s findings reveal something we have all been sensing: the threat landscape is not just evolving—it is accelerating. AI has fundamentally changed the equation, impacting the speed, scale, and sophistication of cyberattacks in ways that render many traditional defensive assumptions obsolete. Yet AI also represents our most powerful tool for adaptation.

Understanding the acceleration

The metrics tell a stark story, but the operational implications matter more. We’re observing cyberattacks that execute in the time it takes a user to click—ClickFix techniques that bypass layered defenses through social engineering at machine speed. In cloud environments, the window between deployment and compromise has collapsed to 48 hours for containers, fundamentally challenging our assumptions about hardening timelines.

The economics have shifted as well. AI-powered phishing campaigns now achieve 50-fold improvements in profitability by automating personalization at scale. We’re tracking North Korean operations that have embedded tens of thousands of workers globally, turning the remote workforce into a persistent cyberthreat vector. This is not opportunistic; it is industrial-scale infiltration.

The sophistication curve continues its steep climb. Our telemetry shows an 87% increase in disruptive campaigns targeting Microsoft Azure environments. Credential theft attempts are up 23%, data exfiltration up 58%. We are now tracking early indicators of autonomous malware capable of lateral movement and adaptive behavior without human direction.

What strikes me most is the operational coordination. Through Microsoft Threat Intelligence, we are observing campaigns spanning more than 130 countries where nation-states, criminal syndicates, and commercial mercenaries share infrastructure and tactics. Access brokers have created marketplaces that blur lines between espionage and crime. The model is scalable, resilient, and disturbingly efficient.

From threat awareness to strategic action

Here is the paradox every CISO faces: threats are accelerating, yet our defensive capabilities have never been stronger. The gap is not technology. The gap is in how we think about and operationalize security. Legacy approaches that separate security from business strategy, that prioritize prevention over resilience, that treat security incidents as failures rather than inevitable events—these mindsets are now liabilities.

The path forward requires fundamental shifts:

Security as a business enabler, not a control point. We must embed security into every business process, from product development to supply chain management. When security becomes integral to how organizations operate, rather than a gate they must pass through, we move faster while managing risk more effectively. This is not about lowering standards. This is about building security into the foundation rather than adding it as a façade.

Resilience as the primary objective. The question isn’t if an incident will occur, but how quickly we can detect, contain, and recover from it. When cyberattacks execute in seconds and compromises happen within 48 hours, our response capabilities must match that velocity. This means tested playbooks, empowered teams, and automated response mechanisms that operate at machine speed.

Intelligence and automation as force multipliers. The same AI technologies that let cyberattackers scale operations can amplify our defense capabilities—if we deploy them strategically. Automation is not about replacing security teams. It is about letting them operate at the speed and scale that modern threats demand.

The evolved CISO mandate

The role of the CISO has fundamentally expanded. We are no longer purely technologists. We are risk managers, strategic advisors, and organizational change agents. The board needs us to translate technical cyberthreats into business risks and resilience strategies into competitive advantages.

This evolution demands new capabilities:

Cross-functional leadership that transcends IT. When a social engineering attack can compromise an organization in seconds, response requires coordinated actions across IT, legal, human resources, communications, and executive leadership. We must build these partnerships before the crisis, not during it.

Continuous adaptation as operational discipline. The 48-hour container compromise window and the instant infection vectors we are seeing mean that continuous monitoring, regular testing, and rapid iteration are not best practices. They are survival requirements. Our defenses, policies, and response capabilities must evolve as quickly as threats.

Governance that anticipates regulatory evolution. As governments increase transparency requirements and impose consequences for malicious activity, we must ensure our organizations can meet both the letter and the spirit of emerging regulations. This includes understanding third-party risks, from access brokers to embedded cyberthreats in our workforce and supply chains.

Proven strategies for operationalizing security resilience

From our work with customers, our own operational experience, and implementation of the Secure Future Initiative (SFI), three priorities rise to the top:

Modern identity controls are non-negotiable. With 97% of identity attacks targeting passwords, phishing-resistant MFA fundamentally alters the risk equation. This isn’t about adding layers—it’s about eliminating entire attack vectors. Organizations that deploy phishing-resistant authentication see dramatic reductions in successful compromises.

Incident response readiness determines outcome. When attacks move at machine speed, response time becomes the critical variable. This means regular simulations, tested playbooks, and teams empowered to act decisively. We must practice for the scenarios we’ll face, not the ones we hope to avoid. The organizations that recover fastest are those that have failed in simulation and learned before the real event.

Collective defense is no longer optional. Against campaigns spanning more than 130 countries and cyberattacker ecosystems sharing infrastructure, isolated defense is ineffective. Intelligence sharing, collaborative best practices, and sector-wide coordination are force multipliers that benefit everyone. The cyberthreats we face are too sophisticated and too coordinated for any organization to defend alone.

We’ve been applying these same principles internally through our Secure Future Initiative. Rather than keep our implementation lessons internal, we’re publishing the actual patterns and practices we’ve used—the specific approaches that worked, the trade-offs we encountered, and the practical steps other organizations can adapt. The SFI patterns and practices library includes detailed guidance on challenges like securing multi-tenant environments, protecting software supply chains, and implementing Zero Trust for source code access.

What I appreciate about these patterns is that they are written by practitioners who have actually implemented them. Each one outlines the problem, explains how we solved it internally at Microsoft, and provides recommendations that you can evaluate for your own environment. No glossy overviews—just the operating details of what worked and what did not.

Steps to strengthen resilience and response across your organization 

The acceleration we are witnessing—cyberattack speed, operational scale, and technical sophistication—demands an equivalent acceleration in our response. This is not about working harder; it’s about working differently. It means treating AI and automation as operational imperatives, not future projects. It means building identity security as foundational infrastructure, not a compliance checkbox. It means developing incident response capabilities that match the velocity of modern cyberattacks.

Most fundamentally, it means embracing our evolved role as CISOs. We are architects of organizational resilience in an era where cyberthreats move at machine speed and span continents. This requires equal parts technical depth, strategic vision, and collaborative leadership.

The cyberthreat landscape will continue to evolve. Our mandate is to evolve faster, to build organizations that are not just secure but resilient, adaptive, and prepared for whatever comes next. That is the challenge facing every CISO today. It is also the opportunity to build something stronger than what came before.

For a detailed and comprehensive analysis, explore the full Microsoft Digital Defense Report 2025.

Microsoft Deputy CISOs

To hear more from Microsoft Deputy CISOs, check out the OCISO blog series.

To stay on top of important security industry updates, explore resources specifically designed for CISOs, and learn best practices for improving your organization’s security posture, join the Microsoft CISO Digest distribution list.


Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post The CISO imperative: Building resilience in an era of accelerated cyberthreats appeared first on Microsoft Security Blog.

]]>
Extortion and ransomware drive over half of cyberattacks https://blogs.microsoft.com/on-the-issues/2025/10/16/mddr-2025/ Thu, 16 Oct 2025 14:05:00 +0000 In 80% of the cyber incidents Microsoft’s security teams investigated last year, attackers sought to steal data—a trend driven more by financial gain than intelligence gathering.

The post Extortion and ransomware drive over half of cyberattacks appeared first on Microsoft Security Blog.

]]>
In 80% of the cyber incidents Microsoft’s security teams investigated last year, attackers sought to steal data—a trend driven more by financial gain than intelligence gathering. According to the latest Microsoft Digital Defense Report, written with our Chief Information Security Officer Igor Tsyganskiy, over half of cyberattacks with known motives were driven by extortion or ransomware. That’s at least 52% of incidents fueled by financial gain, while attacks focused solely on espionage made up just 4%. Nation-state actors remain a serious and persistent threat, but most of the immediate attacks organizations face today come from opportunistic criminals looking to make a profit.

Every day, Microsoft processes more than 100 trillion signals, blocks approximately 4.5 million new malware attempts, analyzes 38 million identity risk detections, and screens 5 billion emails for malware and phishing. Advances in automation and readily available off-the-shelf tools have enabled cybercriminals—even those with limited technical expertise—to expand their operations significantly. The use of AI has further added to this trend with cybercriminals accelerating malware development and creating more realistic synthetic content, enhancing the efficiency of activities such as phishing and ransomware attacks. As a result, opportunistic malicious actors now target everyone — big or small — making cybercrime a universal, ever-present threat that spills into our daily lives.

In this environment, organizational leaders must treat cybersecurity as a core strategic priority—not just an IT issue—and build resilience into their technology and operations from the ground up. In our sixth annual Microsoft Digital Defense Report, which covers trends from July 2024 through June 2025, we highlight that legacy security measures are no longer enough; we need modern defenses leveraging AI and strong collaboration across industries and governments to keep pace with the threat. For individuals, simple steps like using strong security tools—especially phishing-resistant multifactor authentication (MFA)—make a big difference, as MFA can block over 99% of identity-based attacks. Below are some of the key findings.


Critical services are prime targets with a real-world impact.

Malicious actors remain focused on attacking critical public services—targets that, when compromised, can have a direct and immediate impact on people’s lives. Hospitals and local governments, for example, are targets because they store sensitive data or have tight cybersecurity budgets with limited incident response capabilities, often resulting in outdated software. In the past year, cyberattacks on these sectors had real-world consequences, including delayed emergency medical care, disrupted emergency services, canceled school classes, and halted transportation systems.

Ransomware actors in particular focus on these critical sectors because of the targets’ limited options. For example, a hospital must quickly resolve its encrypted systems, or patients could die, potentially leaving no other recourse but to pay. Additionally, governments, hospitals, and research institutions store sensitive data that criminals can steal and monetize through illicit marketplaces on the dark web, fueling downstream criminal activity. Government and industry can collaborate to strengthen cybersecurity in these sectors—particularly for the most vulnerable. These efforts are critical to protecting communities and ensuring continuity of care, education, and emergency response.

Nation-state actors are expanding operations.

While cybercriminals are the biggest cyber threat by volume, nation-state actors still target key industries and regions, expanding their focus on espionage and, in some cases, on financial gain. Geopolitical objectives continue to drive a surge in state-sponsored cyber activity, with a notable expansion in targeting communications, research, and academia.

Key insights:

China is continuing its broad push across industries to conduct espionage and steal sensitive data. State-affiliated actors are increasingly attacking non-governmental organizations (NGOs) to expand their insights and are using covert networks and vulnerable internet-facing devices to gain entry and avoid detection. They have also become faster at operationalizing newly disclosed vulnerabilities.

Iran is going after a wider range of targets than ever before, from the Middle East to North America, as part of broadening espionage operations. Recently, three Iranian state-affiliated actors attacked shipping and logistics firms in Europe and the Persian Gulf to gain ongoing access to sensitive commercial data, raising the possibility that Iran may be pre-positioning to have the ability to interfere with commercial shipping operations.

Russia, while still focused on the war in Ukraine, has expanded its targets. For example, Microsoft has observed Russian state-affiliated actors targeting small businesses in countries supporting Ukraine. In fact, outside of Ukraine, the top ten countries most affected by Russian cyber activity all belong to the North Atlantic Treaty Organization (NATO)—a 25% increase compared to last year. Russian actors may view these smaller companies as possibly less resource-intensive pivot points they can use to access larger organizations. These actors are also increasingly leveraging the cybercriminal ecosystem for their attacks.

North Korea remains focused on revenue generation and espionage. In a trend that has gained significant attention, thousands of state-affiliated North Korean remote IT workers have applied for jobs with companies around the world, sending their salaries back to the government as remittances. When discovered, some of these workers have turned to extortion as another approach to bringing in money for the regime.

The cyber threats posed by nation-states are becoming more expansive and unpredictable. In addition, the shift by at least some nation-state actors to further leveraging the cybercriminal ecosystem will make attribution even more complicated. This underscores the need for organizations to stay abreast of the threats to their industries and work with both industry peers and governments to confront the threats posed by nation-state actors.

2025 saw an escalation in the use of AI by both attackers and defenders.

Over the past year, both attackers and defenders harnessed the power of generative AI. Threat actors are using AI to boost their attacks by automating phishing, scaling social engineering, creating synthetic media, finding vulnerabilities faster, and creating malware that can adapt itself. Nation-state actors, too, have continued to incorporate AI into their cyber influence operations. This activity has picked up in the past six months as actors use the technology to make their efforts more advanced, scalable, and targeted.


For defenders, AI is also proving to be a valuable tool. Microsoft, for example, uses AI to spot threats, close detection gaps, catch phishing attempts, and protect vulnerable users. As both the risks and opportunities of AI rapidly evolve, organizations must prioritize securing their AI tools and training their teams. Everyone—from industry to government—must be proactive to keep pace with increasingly sophisticated attackers and to ensure that defenders keep ahead of adversaries.

Adversaries aren’t breaking in; they’re signing in.

Amid the growing sophistication of cyber threats, one statistic stands out: more than 97% of identity attacks are password attacks. In the first half of 2025 alone, identity-based attacks surged by 32%. That means the vast majority of malicious sign-in attempts an organization might receive are large-scale password guessing attempts. Attackers obtain the usernames and passwords (“credentials”) for these bulk attacks largely from credential leaks.
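One reason bulk password guessing is so visible to defenders is its statistical signature: a single source probing many distinct accounts. The sketch below illustrates that idea only; the event shape, field names, and threshold are hypothetical assumptions, not Microsoft’s actual detection logic:

```python
from collections import defaultdict

# Simplified illustration of password-spray detection: one source IP
# generating failed sign-ins across many distinct accounts is a classic
# bulk-guessing signature. Threshold and event fields are assumptions.

def flag_spray_sources(events, min_accounts=10):
    attempts = defaultdict(set)            # source_ip -> accounts targeted
    for e in events:
        if not e["success"]:
            attempts[e["source_ip"]].add(e["username"])
    return {ip for ip, users in attempts.items() if len(users) >= min_accounts}

events = (
    [{"source_ip": "203.0.113.9", "username": f"user{i}", "success": False}
     for i in range(25)]                   # one IP, 25 accounts: spray pattern
    + [{"source_ip": "198.51.100.4", "username": "alice", "success": False}]
)
print(flag_spray_sources(events))  # {'203.0.113.9'}
```

Note that detection like this only narrows the problem; as the surrounding text argues, phishing-resistant MFA is what actually neutralizes a correctly guessed password.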

However, credential leaks aren’t the only place where attackers can obtain credentials. This year, we saw a surge in the use of infostealer malware by cybercriminals. Infostealers can secretly gather credentials and information about your online accounts, like browser session tokens, at scale. Cybercriminals can then buy this stolen information on cybercrime forums, making it easy for anyone to access accounts for purposes such as the delivery of ransomware.

Luckily, the solution to identity compromise is simple. The implementation of phishing-resistant multifactor authentication (MFA) can stop over 99% of this type of attack even if the attacker has the correct username and password combination. To target the malicious supply chain, Microsoft’s Digital Crimes Unit (DCU) is fighting back against the cybercriminal use of infostealers. In May, the DCU disrupted the most popular infostealer, Lumma Stealer, alongside the US Department of Justice and Europol.

Moving forward: Cybersecurity is a shared defensive priority.

As threat actors grow more sophisticated, persistent, and opportunistic, organizations must stay vigilant, continually update their defenses, and share intelligence. Microsoft remains committed to doing its part to strengthen our products and services via our Secure Future Initiative. We also continue to collaborate with others to track threats, alert targeted customers, and share insights with the broader public when appropriate.

However, security is not only a technical challenge, but a governance imperative. Defensive measures alone are not enough to deter nation-state adversaries. Governments must build frameworks that signal credible and proportionate consequences for malicious activity that violates international rules. Encouragingly, governments are increasingly attributing cyberattacks to foreign actors and imposing consequences such as indictments and sanctions. This growing transparency and accountability are important steps toward building collective deterrence. As digital transformation accelerates—amplified by the rise of AI—cyber threats pose risks to economic stability, governance, and personal safety. Addressing these challenges requires not only technical innovation but coordinated societal action.

The post Extortion and ransomware drive over half of cyberattacks appeared first on Microsoft Security Blog.

]]>
Cyber Signals Issue 9 | AI-powered deception: Emerging fraud threats and countermeasures http://approjects.co.za/?big=en-us/security/blog/2025/04/16/cyber-signals-issue-9-ai-powered-deception-emerging-fraud-threats-and-countermeasures/ Wed, 16 Apr 2025 11:00:00 +0000 Microsoft maintains a continuous effort to protect its platforms and customers from fraud and abuse. This edition of Cyber Signals takes you inside the work underway and important milestones achieved that protect customers.

The post Cyber Signals Issue 9 | AI-powered deception: Emerging fraud threats and countermeasures appeared first on Microsoft Security Blog.

]]>

Microsoft maintains a continuous effort to protect its platforms and customers from fraud and abuse. From blocking imposters on Microsoft Azure and adding anti-scam features to Microsoft Edge, to fighting tech support fraud with new features in Windows Quick Assist, this edition of Cyber Signals takes you inside the work underway and important milestones achieved that protect customers.

We are all defenders. 


Between April 2024 and April 2025, Microsoft:

  • Thwarted $4 billion in fraud attempts.
  • Rejected 49,000 fraudulent partnership enrollments.
  • Blocked about 1.6 million bot signup attempts per hour.

The evolution of AI-enhanced cyber scams

AI has started to lower the technical bar for fraud and cybercrime actors looking for their own productivity tools, making it easier and cheaper to generate believable content for cyberattacks at an increasingly rapid rate. AI software used in fraud attempts runs the gamut, from legitimate apps misused for malicious purposes to more fraud-oriented tools used by bad actors in the cybercrime underground.

AI tools can scan and scrape the web for company information, helping cyberattackers build detailed profiles of employees or other targets to create highly convincing social engineering lures. In some cases, bad actors are luring victims into increasingly complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts, where scammers create entire websites and e-commerce brands, complete with fake business histories and customer testimonials. By using deepfakes, voice cloning, phishing emails, and authentic-looking fake websites, threat actors seek to appear legitimate at wider scale.

According to the Microsoft Anti-Fraud Team, AI-powered fraud attacks are happening globally, with much of the activity coming from China and Europe—specifically Germany, due in part to its status as one of the largest e-commerce and online services markets in the European Union (EU). The larger a digital marketplace in any region, the more likely a proportional degree of attempted fraud will take place.

E-commerce fraud


Fraudulent e-commerce websites can be set up in minutes using AI and other tools requiring minimal technical knowledge. Previously, it would take threat actors days or weeks to stand up convincing websites. These fraudulent websites often mimic legitimate sites, making it challenging for consumers to identify them as fake. 

Using AI-generated product descriptions, images, and customer reviews, scammers dupe customers into believing they are interacting with a genuine merchant, exploiting consumer trust in familiar brands.

AI-powered customer service chatbots add another layer of deception by convincingly interacting with customers. These bots can delay chargebacks by stalling customers with scripted excuses and manipulating complaints with AI-generated responses that make scam sites appear professional.

In a multipronged approach, Microsoft has implemented robust defenses across our products and services to protect customers from AI-powered fraud. Microsoft Defender for Cloud provides comprehensive threat protection for Azure resources, including vulnerability assessments and threat detection for virtual machines, container images, and endpoints.

Microsoft Edge features website typo protection and domain impersonation protection using deep learning technology to help users avoid fraudulent websites. Edge has also implemented a machine learning-based Scareware Blocker to identify and block potential scam pages and deceptive pop-up screens with alarming warnings claiming a computer has been compromised. These attacks try to frighten users into calling fraudulent support numbers or downloading harmful software.

Job and employment fraud


The rapid advancement of generative AI has made it easier for scammers to create fake listings on various job platforms. They generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers. AI-powered interviews and automated emails enhance the credibility of job scams, making it harder for job seekers to identify fraudulent offers.

To prevent this, job platforms should introduce multifactor authentication for employer accounts to make it harder for bad actors to take over legitimate hirers’ listings and use available fraud-detection technologies to catch suspicious content.

Fraudsters often ask for personal information, such as resumes or even bank account details, under the guise of verifying the applicant’s information. Common indicators of fraud include unsolicited text or email messages promising high pay for minimal qualifications, offers that seem too good to be true, requests for payment, interview requests over text message, and a lack of formal communication platforms.

Tech support scams

Tech support scams are a type of fraud in which scammers trick victims into paying for technical support services to fix device or software problems that don’t exist. The scammers may then gain remote access to a computer, which lets them reach all information stored on it and on any connected network, or install malware that gives them ongoing access to the computer and sensitive data.

Tech support scams are a case where elevated fraud risks exist, even if AI does not play a role. For example, in mid-April 2024, Microsoft Threat Intelligence observed the financially motivated and ransomware-focused cybercriminal group Storm-1811 abusing Windows Quick Assist software by posing as IT support. Microsoft did not observe AI used in these attacks; Storm-1811 instead impersonated legitimate organizations through voice phishing (vishing) as a form of social engineering, convincing victims to grant them device access through Quick Assist. 

Quick Assist is a tool that enables users to share their Windows or macOS device with another person over a remote connection. Tech support scammers often pretend to be legitimate IT support from well-known companies and use social engineering tactics to gain the trust of their targets. They then attempt to employ tools like Quick Assist to connect to the target’s device. 

Quick Assist and Microsoft are not compromised in these cyberattack scenarios; however, the abuse of legitimate software presents risk Microsoft is focused on mitigating. Informed by Microsoft’s understanding of evolving cyberattack techniques, the company’s anti-fraud and product teams work closely together to improve transparency for users and enhance fraud detection techniques. 

The Storm-1811 cyberattacks highlight the capability of social engineering to circumvent security defenses. Social engineering involves collecting relevant information about targeted victims and arranging it into credible lures delivered through phone, email, text, or other mediums. Various AI tools can quickly find, organize, and generate information, thus acting as productivity tools for cyberattackers. Although AI is a new development, enduring measures to counter social engineering attacks remain highly effective. These include increasing employee awareness of legitimate helpdesk contact and support procedures, and applying Zero Trust principles to enforce least privilege across employee accounts and devices, thereby limiting the impact of any compromised assets while they are being addressed. 

Microsoft has taken action to mitigate attacks by Storm-1811 and other groups by suspending identified accounts and tenants associated with inauthentic behavior. If you receive an unsolicited tech support offer, it is likely a scam. Always reach out to trusted sources for tech support. If scammers claim to be from Microsoft, we encourage you to report it directly to us at http://approjects.co.za/?big=reportascam

Building on the Secure Future Initiative (SFI), Microsoft is taking a proactive approach to ensuring our products and services are “Fraud-resistant by Design.” In January 2025, a new fraud prevention policy was introduced: Microsoft product teams must now perform fraud prevention assessments and implement fraud controls as part of their design process. 

Recommendations

  • Strengthen employer authentication: Fraudsters often hijack legitimate company profiles or create fake recruiters to deceive job seekers. To prevent this, job platforms should introduce multifactor authentication and Verified ID as part of Microsoft Entra ID for employer accounts, making it harder for unauthorized users to gain control.
  • Monitor for AI-based recruitment scams: Companies should deploy deepfake detection algorithms to identify AI-generated interviews where facial expressions and speech patterns may not align naturally.
  • Be cautious of websites and job listings that seem too good to be true: Verify the legitimacy of websites by checking for secure connections (https) and using tools like Microsoft Edge’s typo protection.
  • Avoid providing personal information or payment details to unverified sources: Look for red flags in job listings, such as requests for payment or communication through informal platforms like text messages, WhatsApp, nonbusiness Gmail accounts, or requests to contact someone on a personal device for more information.

Using Microsoft’s security signal to combat fraud

Microsoft is actively working to stop fraud attempts by evolving large-scale, AI-based detection models, such as machine learning systems that play defense by learning from and mitigating fraud attempts. Machine learning helps a computer learn without direct instruction, using algorithms to discover patterns in large datasets; those patterns then feed a comprehensive AI model that can make predictions with high accuracy.
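
As a rough illustration of the pattern-learning described above, the sketch below trains a tiny logistic-regression classifier in pure Python on toy data to score events for fraud risk. The feature names, values, and thresholds are invented for illustration and bear no relation to Microsoft’s actual detection models.

```python
import math

# Toy, hypothetical training data (not a real feature set):
# [order_velocity, mismatched_geolocation, account_age] -> label (1 = fraud)
examples = [
    ([0.9, 1.0, 0.01], 1),
    ([0.8, 1.0, 0.05], 1),
    ([0.7, 0.0, 0.02], 1),
    ([0.1, 0.0, 0.90], 0),
    ([0.2, 0.0, 0.75], 0),
    ([0.1, 1.0, 0.95], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Fit logistic-regression weights by stochastic gradient descent."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def fraud_probability(w, b, x):
    """Score a new event with the learned pattern."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(examples)
print(fraud_probability(w, b, [0.85, 1.0, 0.03]))  # resembles fraud examples: high score
print(fraud_probability(w, b, [0.10, 0.0, 0.80]))  # resembles legitimate examples: low score
```

Production systems train far richer models on vastly larger datasets, but the principle is the same: labeled history in, a scoring function out.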

We have developed in-product safety controls that warn users about potential malicious activity and integrate rapid detection and prevention of new types of attacks.

Our fraud team has developed domain impersonation protection using deep-learning technology at the domain creation stage, to help protect against fraudulent e-commerce websites and fake job listings. Microsoft Edge has incorporated website typo protection, and we have developed AI-powered fake job detection systems for LinkedIn.

Microsoft Defender SmartScreen is a cloud-based security feature that helps prevent unsafe browsing by analyzing websites, files, and applications based on their reputation and behavior. It is integrated into Windows and the Edge browser to help protect users from phishing attacks, malicious websites, and potentially harmful downloads.

Furthermore, Microsoft’s Digital Crimes Unit (DCU) partners with others in the private and public sector to disrupt the malicious infrastructure used by criminals perpetuating cyber-enabled fraud. The team’s longstanding collaboration with law enforcement around the world to respond to tech support fraud has resulted in hundreds of arrests and increasingly severe prison sentences worldwide. The DCU is applying key learnings from past actions to disrupt those who seek to abuse generative AI technology for malicious or fraudulent purposes. 

Quick Assist features and remote help combat tech support fraud

To help combat tech support fraud, we have incorporated warning messages in Quick Assist to alert users to possible tech support scams before they grant access to anyone claiming to be an authorized IT department or other support resource.

Windows users must read and click the box to acknowledge the security risk of granting remote access to the device.


Microsoft has significantly enhanced Quick Assist protection for Windows users by leveraging its security signal. In response to tech support scams and other threats, Microsoft now blocks an average of 4,415 suspicious Quick Assist connection attempts daily, accounting for approximately 5.46% of global connection attempts. These blocks target connections exhibiting suspicious attributes, such as associations with malicious actors or unverified connections.
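
As a back-of-envelope check on these published figures: if roughly 4,415 daily blocks represent about 5.46% of global connection attempts, that implies on the order of 80,000 Quick Assist connection attempts per day.

```python
blocked_daily = 4415      # average suspicious connections blocked per day
blocked_share = 0.0546    # those blocks are ~5.46% of all connection attempts

# Implied global daily connection volume (rough estimate only)
implied_daily_total = blocked_daily / blocked_share
print(f"{implied_daily_total:,.0f} connection attempts/day")
```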

Microsoft continues to advance Quick Assist safeguards to counter adaptive cybercriminals. Where these actors previously targeted individuals opportunistically with fraudulent connection attempts, they have more recently mounted organized cybercrime campaigns against enterprises, campaigns that Microsoft’s actions have helped disrupt.

Our Digital Fingerprinting capability, which leverages AI and machine learning, drives these safeguards by collecting a range of fraud and risk signals to detect fraudulent activity. If those signals indicate a possible scam, the Quick Assist session is automatically ended.

For enterprises combating tech support fraud, Remote Help is another valuable resource for employees. Remote Help is designed for internal use within an organization and includes features that make it ideal for enterprises.

By reducing scams and fraud, Microsoft aims to enhance the overall security of its products and protect its users from malicious activities.

Consumer protection tips

Fraudsters exploit psychological triggers such as urgency, scarcity, and trust in social proof. Consumers should be cautious of:

  • Impulse buying—Scammers create a sense of urgency with “limited-time” deals and countdown timers.
  • Trusting fake social proof—AI generates fake reviews, influencer endorsements, and testimonials to appear legitimate.
  • Clicking on ads without verification—Many scam sites spread through AI-optimized social media ads. Consumers should cross-check domain names and reviews before purchasing.
  • Ignoring payment security—Avoid direct bank transfers or cryptocurrency payments, which lack fraud protections.

Job seekers should verify employer legitimacy, be on the lookout for common job scam red flags, and avoid sharing personal or financial information with unverified employers.

  • Verify employer legitimacy—Cross-check company details on LinkedIn, Glassdoor, and official websites to verify legitimacy.
  • Notice common job scam red flags—If a job requires upfront payments for training materials, certifications, or background checks, it is likely a scam. Unrealistic salaries or no-experience-required remote positions should be approached with skepticism. Emails from free domains (such as johndoehr@gmail.com instead of hr@company.com) are also typically indicators of fraudulent activity.
  • Be cautious of AI-generated interviews and communications—If a video interview seems unnatural, with lip-syncing delays, robotic speech, or odd facial expressions, it could be deepfake technology at work. Job seekers should always verify recruiter credentials through the company’s official website before engaging in any further discussions.
  • Avoid sharing personal or financial information—Under no circumstances should you provide a Social Security number, banking details, or passwords to an unverified employer.
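
The red flags above lend themselves to simple automated screening. The sketch below is a hypothetical heuristic, not a product feature: it flags recruiter emails sent from free consumer domains or containing upfront-payment language.

```python
# Hypothetical red-flag screen for a recruiting email, based on the
# indicators above (free email domains, upfront-payment requests).
FREE_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}
PAYMENT_PHRASES = ("upfront payment", "training fee", "background check fee")

def job_offer_red_flags(sender: str, body: str) -> list[str]:
    """Return a list of human-readable red flags found in an offer email."""
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in FREE_DOMAINS:
        flags.append(f"recruiter uses free email domain: {domain}")
    lowered = body.lower()
    for phrase in PAYMENT_PHRASES:
        if phrase in lowered:
            flags.append(f"requests payment: '{phrase}'")
    return flags

print(job_offer_red_flags(
    "recruiter123@gmail.com",
    "Congrats! To start, send the training fee via wire transfer.",
))
```

A real screening pipeline would combine many more signals (domain age, posting history, deepfake detection), but even a checklist like this catches the most common lures.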

Microsoft is also a member of the Global Anti-Scam Alliance (GASA), which aims to bring governments, law enforcement, consumer protection organizations, financial authorities and providers, brand protection agencies, social media, internet service providers, and cybersecurity companies together to share knowledge and protect consumers from getting scammed.

Recommendations

  • Remote Help: Microsoft recommends using Remote Help instead of Quick Assist for internal tech support. Remote Help is designed for internal use within an organization and incorporates several features that enhance security and minimize the risk of tech support scams. It is engineered to be used only within an organization’s tenant, providing a safer alternative to Quick Assist.
  • Digital Fingerprinting: This identifies malicious behaviors and ties them back to specific individuals. This helps in monitoring and preventing unauthorized access.
  • Blocking full control requests: Quick Assist now includes warnings and requires users to check a box acknowledging the security implications of sharing their screen. This adds a layer of helpful “security friction” by prompting users who may be multitasking or preoccupied to pause to complete an authorization step.

Kelly Bissell: A cybersecurity pioneer combating fraud in the new era of AI

Kelly Bissell’s journey into cybersecurity began unexpectedly in 1990. Initially working in computer science, Kelly was involved in building software for healthcare patient accounting and operating systems at Medaphis and BellSouth, now part of AT&T.

His interest in cybersecurity was sparked when he noticed someone logged into a phone switch attempting to get free long-distance calls and traced the intruder back to Romania. This incident marked the beginning of Kelly’s career in cybersecurity.

“I stayed in cybersecurity hunting for bad actors, integrating security controls for hundreds of companies, and helping shape the NIST security frameworks and regulations such as FFIEC, PCI, NERC-CIP,” he explains.

Currently, Kelly is Corporate Vice President of Anti-Fraud and Product Abuse within Microsoft Security. Microsoft’s fraud team employs machine learning and AI to build better detection code and understand fraud operations. They use AI-powered solutions to detect and prevent cyberthreats, leveraging advanced fraud detection frameworks that continuously learn and evolve.

“Cybercrime is a trillion-dollar problem, and it’s been going up every year for the past 30 years. I think we have an opportunity today to adopt AI faster so we can detect and close the gap of exposure quickly. Now we have AI that can make a difference at scale and help us build security and fraud protections into our products much faster.”

Previously Kelly managed the Microsoft Detection and Response Team (DART) and created the Global Hunting, Oversight, and Strategic Triage (GHOST) team that detected and responded to attackers such as Storm-0558 and Midnight Blizzard.

Prior to Microsoft, during his time at Accenture and Deloitte, Kelly collaborated with companies and worked extensively with government agencies like the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation, where he helped build security systems inside their operations.

His time as Chief Information Security Officer (CISO) at a bank exposed him to addressing both cybersecurity and fraud, leading to his involvement in shaping regulatory guidelines to protect banks and eventually Microsoft.

Kelly has also played a significant role in shaping regulations around the National Institute of Standards and Technology (NIST) and Payment Card Industry (PCI) compliance, which helps ensure the security of businesses’ credit card transactions, among others.

Internationally, Kelly played a crucial role in helping establish agencies and improve cybersecurity measures. As a consultant in London, he helped stand up the United Kingdom’s National Cyber Security Centre (NCSC), which is part of the Government Communications Headquarters (GCHQ), the equivalent of CISA. Kelly’s efforts in content moderation with several social media companies, including YouTube, were instrumental in removing harmful content.

That’s why he’s excited about Microsoft’s partnership with GASA. GASA brings together governments, law enforcement, consumer protection organizations, financial authorities, internet service providers, cybersecurity companies, and others to share knowledge and define joint actions to protect consumers from getting scammed.

“If I protect Microsoft, that’s good, but it’s not sufficient. In the same way, if Apple does their thing, and Google does their thing, but if we’re not working together, we’ve all missed the bigger opportunity. We must share cybercrime information with each other and educate the public. If we can have a three-pronged approach of tech companies building security and fraud protection into their products, public awareness, and sharing cybercrime and fraudster information with law enforcement, I think we can make a big difference,” he says.


Next steps with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


Methodology: Microsoft platforms and services, including Azure, Microsoft Defender for Office 365, Microsoft Threat Intelligence, and the Microsoft Digital Crimes Unit (DCU), provided anonymized data on threat actor activity and trends. Additionally, Microsoft Entra ID provided anonymized data on threat activity, such as malicious email accounts, phishing emails, and attacker movement within networks. Additional insights come from the daily security signals gained across Microsoft, including the cloud, endpoints, the intelligent edge, and telemetry from Microsoft platforms and services. The $4 billion figure represents an aggregated total of fraud and scam attempts against Microsoft and our customers in consumer and enterprise segments over 12 months.

The post Cyber Signals Issue 9 | AI-powered deception: Emerging fraud threats and countermeasures appeared first on Microsoft Security Blog.

How cyberattackers exploit domain controllers using ransomware http://approjects.co.za/?big=en-us/security/blog/2025/04/09/how-cyberattackers-exploit-domain-controllers-using-ransomware/ Wed, 09 Apr 2025 16:00:00 +0000 Read how cyberattackers exploit domain controllers to gain privileged system access where they deploy ransomware that causes widespread damage and operational disruption.

The post How cyberattackers exploit domain controllers using ransomware appeared first on Microsoft Security Blog.

In recent years, human-operated cyberattacks have undergone a dramatic transformation. Once sporadic and opportunistic, these attacks have evolved into highly sophisticated, targeted campaigns aimed at causing maximum damage to organizations, with the average cost of a ransomware attack reaching $9.36 million in 2024.1 A key catalyst of this evolution is the rise of ransomware as a primary tool for financial extortion—an approach that hinges on crippling an organization’s operations by encrypting critical data and demanding a ransom for its release. Microsoft Defender for Endpoint disrupts ransomware attacks in an average of three minutes, only kicking in when more than 99.99% confident in the presence of a cyberattack.

The evolution of ransomware attacks

Modern ransomware campaigns are meticulously planned. Cyberattackers understand that their chances of securing a ransom increase significantly if they can inflict widespread damage across a victim’s environment. The rationale is simple: paying the ransom becomes the most viable option when the alternative—restoring the environment and recovering data—is technically unfeasible, time-consuming, and costly.

This level of damage can unfold in minutes or even seconds once bad actors have embedded themselves within an organization’s environment and laid the groundwork for a coordinated cyberattack that can encrypt dozens, hundreds, or even thousands of devices. To execute such a campaign, threat actors must overcome several challenges, such as evading protection, mapping the network, maintaining their ability to execute code, and preserving persistence in the environment, on the way to securing two major prerequisites for executing ransomware on multiple devices simultaneously:

  • High-privilege accounts: Whether cyberattackers choose to drop files and encrypt the devices locally or perform remote operations over the network, they must obtain the ability to authenticate to a device. In an on-premises environment, cyberattackers usually target domain admin accounts or other high-privilege accounts, as those can authenticate to the most critical resources in the environment.
  • Access to central network assets: To execute the ransomware attack as fast and as wide as possible, threat actors aim to achieve access to a central asset in the network that is exposed to many endpoints. Thus, they can leverage the possession of high-privilege accounts and connect to all devices visible in their line of sight.

The role of domain controllers in ransomware campaigns

Domain controllers are the backbone of any on-premises environment, managing identity and access through Active Directory (AD). They play a pivotal role in enabling cyberattackers to achieve their goals by fulfilling two critical requirements:

1. Compromising highly privileged accounts

Domain controllers house the AD database, which contains sensitive information about all user accounts, including highly privileged accounts like domain admins. By compromising a domain controller, threat actors can:

  • Extract password hashes: Dumping the NTDS.dit file allows cyberattackers to obtain password hashes for every user account.
  • Create and elevate privileged accounts: Cyberattackers can generate new accounts or manipulate existing ones, assigning them elevated permissions, ensuring continued control over the environment.

With these capabilities, cyberattackers can authenticate as highly privileged users, facilitating lateral movement across the network. This level of access enables them to deploy ransomware at scale, maximizing the impact of their attack.

2. Exploiting centralized network access

Domain controllers handle crucial tasks like authenticating users and devices, managing user accounts and policies, and keeping the AD database consistent across the network. Because of these important roles, many devices need to interact with domain controllers regularly to ensure security, efficient resource management, and operational continuity. That’s why domain controllers need to be central in the network and accessible to many endpoints, making them a prime target for cyberattackers looking to cause maximum damage with ransomware attacks.

Given these factors, it’s no surprise that domain controllers are frequently at the center of ransomware operations. Cyberattackers consistently target them to gain privileged access, move laterally, and rapidly deploy ransomware across an environment. We’ve seen that in more than 78% of human-operated cyberattacks, threat actors successfully breach a domain controller. Additionally, in more than 35% of cases, the primary spreader device—the system responsible for distributing ransomware at scale—is a domain controller, highlighting its crucial role in enabling widespread encryption and operational disruption.

Case study: Ransomware attack using a compromised domain controller

In one notable case, a small-to-medium manufacturer fell victim to a well-known, highly skilled threat actor attempting to execute a widespread Akira ransomware attack:

How Microsoft Defender for Endpoint's automatic attack disruption helped contain a widespread ransomware attack.

Pre domain-compromise activity

After gaining initial access, presumably by leveraging the customer’s VPN infrastructure, and prior to obtaining domain admin privileges, the cyberattackers initiated a series of actions focused on mapping potential assets and escalating privileges. A widespread, remote secrets dump is detected on Microsoft Defender for Endpoint-onboarded devices, and User 1 (a domain user) is contained by attack disruption.

Post domain-compromise activity

After securing domain admin (User 2) credentials, potentially by leveraging the victim’s non-onboarded estate, the cyberattacker immediately attempts to connect to the victim’s domain controller (DC1) using Remote Desktop Protocol (RDP) from an attacker-controlled device. After gaining access to DC1, the cyberattacker leverages the device to perform the following set of actions:

  • Reconnaissance—The cyberattacker leverages the domain controller’s wide network visibility and high privileges to map the network using different tools, focusing on servers and network shares.
  • Defense evasion—Leveraging the domain controller’s native group policy functionality, the cyberattacker attempts to tamper with the victim’s antivirus by modifying security-related group policy settings.
  • Persistence—The cyberattacker leverages the direct access to Active Directory, creating new domain users (User 3 and User 4) and adding them to the domain admin group, thus establishing a set of highly privileged users that would later on be used to execute the ransomware attack.
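
Defenders can watch for the persistence step above in Windows security event logs. The sketch below is an illustrative (not production) detector that flags additions to highly privileged groups; event IDs 4728 and 4732 correspond to a member being added to a security-enabled global and local group respectively, and the event records here are simplified stand-ins for real log entries.

```python
# Illustrative detector: flag security events where an account is added
# to a highly privileged Active Directory group, as in the persistence
# step of the case study. Event dicts are simplified stand-ins.
PRIVILEGED_GROUPS = {"Domain Admins", "Enterprise Admins", "Administrators"}

def flag_privileged_additions(events):
    """Return alert strings for additions to privileged groups."""
    alerts = []
    for e in events:
        # 4728 = member added to security-enabled global group (e.g. Domain Admins)
        # 4732 = member added to security-enabled local group (e.g. Administrators)
        if e.get("event_id") in (4728, 4732) and e.get("group") in PRIVILEGED_GROUPS:
            alerts.append(f"{e['subject']} added {e['member']} to {e['group']}")
    return alerts

events = [
    {"event_id": 4728, "group": "Domain Admins", "subject": "CONTOSO\\User2", "member": "CONTOSO\\User3"},
    {"event_id": 4728, "group": "Domain Admins", "subject": "CONTOSO\\User2", "member": "CONTOSO\\User4"},
    {"event_id": 4624, "group": None, "subject": "CONTOSO\\svc", "member": None},  # ordinary logon, ignored
]
print(flag_privileged_additions(events))
```

In practice this signal would feed a SIEM or an automated response pipeline rather than a print statement, but the pattern, a small set of high-value event IDs joined to a watchlist of groups, is the same.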

Encryption over the network

Once the cyberattacker controls a set of highly privileged users, they gain access to any domain-joined resource, including comprehensive network access and visibility, and can stage the tools needed for the encryption phase of the cyberattack.

Presumably to validate the payload’s effectiveness on a domain controller, the cyberattacker begins by running it locally on the domain controller. Attack disruption detects the attempt to run the payload and contains User 2, User 3, and the attacker-controlled device used to RDP to the domain controller.

After Users 2 and 3 were successfully contained, the cyberattacker proceeded to log in to the domain controller using User 4, an account that had not yet been used. After logging into the device, the cyberattacker attempted to encrypt numerous devices over the network from the domain controller, leveraging the access provided by User 4.

Attack disruption detects the initiation of encryption over the network and automatically applies granular containment to device DC1 and User 4, blocking the attempted remote encryption on all targeted devices onboarded to Microsoft Defender for Endpoint.

Protecting your domain controllers

Given the central role of domain controllers in ransomware attacks, protecting them is critical to preventing large-scale damage. However, securing domain controllers is particularly challenging due to their fundamental role in network operations. Unlike other endpoints, domain controllers must remain highly accessible to authenticate users, enforce policies, and manage resources across the environment. This level of accessibility makes it difficult to apply traditional security measures without disrupting business continuity. Hence, security teams constantly face the complex challenge of striking the right balance between security and operational functionality.

To address this challenge, Defender for Endpoint introduced contain high value assets (HVA), an expansion of our contain device capability designed to automatically contain HVAs like domain controllers in a granular manner. This feature builds on Defender for Endpoint’s ability to classify device roles and criticality levels to deliver a custom, role-based containment policy. If a sensitive device, such as a domain controller, is compromised, it is contained in less than three minutes, preventing the cyberattacker from moving laterally and deploying ransomware while maintaining the operational functionality of the device. Because the containment policy can distinguish between malicious and benign behavior, essential authentication and directory services stay up and running. This approach provides rapid, automated cyberattack containment without sacrificing business continuity, allowing organizations to stay resilient against sophisticated human-operated cyberthreats.
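
A drastically simplified sketch of what role-based containment means in practice: when a domain controller is contained, block the channels commonly used for lateral movement and remote encryption while leaving inbound directory traffic reachable. The port numbers below are standard service defaults; the real attack disruption logic is far more sophisticated than a static port list.

```python
# Simplified, illustrative role-based containment policy for a domain
# controller. Real attack disruption classifies behavior, not just ports.
INBOUND_DIRECTORY_PORTS = {53, 88, 389, 445, 636}   # DNS, Kerberos, LDAP(S), SMB file shares
OUTBOUND_BLOCKED_PORTS = {445, 3389, 5985, 5986}    # SMB spread, RDP, WinRM

def allow_connection(contained: bool, direction: str, port: int) -> bool:
    """Decide whether a connection involving the DC should be permitted."""
    if not contained:
        return True
    if direction == "inbound":
        # Keep essential authentication and directory services reachable.
        return port in INBOUND_DIRECTORY_PORTS
    # A contained DC must not initiate remote-execution or encryption traffic.
    return port not in OUTBOUND_BLOCKED_PORTS

print(allow_connection(True, "outbound", 3389))  # RDP from a contained DC: False (blocked)
print(allow_connection(True, "inbound", 389))    # inbound LDAP: True (still served)
```

The design point is the asymmetry: containment is granular per role, so the device keeps doing its job while losing its usefulness as a ransomware spreader.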

Now your organization’s domain controllers can leverage automatic attack disruption as an extra line of defense against malicious actors trying to take over high value assets and execute costly ransomware attacks.

Learn more

Explore these resources to stay updated on the latest automatic attack disruption capabilities:

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Average cost per data breach in the United States 2006-2024, Ani Petrosyan. October 10, 2024.

Rethinking remote assistance security in a Zero Trust world http://approjects.co.za/?big=en-us/security/blog/2025/02/26/rethinking-remote-assistance-security-in-a-zero-trust-world/ Wed, 26 Feb 2025 17:00:00 +0000 The rise in sophisticated cyberthreats demands a fundamental shift in our approach. Organizations must rethink remote assistance security through the lens of Zero Trust, using the three key principles of Verify Explicitly, Use Least Privilege, and Assume Breach as a guide and ensuring that every session, user, and device is verified, compliant, and monitored before access is granted.

The post Rethinking remote assistance security in a Zero Trust world appeared first on Microsoft Security Blog.

The recent breach of the United States Treasury underscores a stark reality: cyber adversaries are no longer just looking for gaps in traditional network security—they are actively exploiting the tools organizations rely on for daily operations. Remote assistance technologies, essential for IT support and business continuity, have become prime targets for credential theft, lateral movement, and system exploitation. The message is clear: securing remote assistance is no longer optional; it is a fundamental requirement for maintaining operational resilience.

A multi-pronged approach to securing remote assistance with Zero Trust

For too long, remote assistance security has been presumed rather than intentionally designed into its architecture. The rise in sophisticated cyberthreats demands a fundamental shift in our approach. Organizations must rethink remote assistance security through the lens of Zero Trust, using the three key principles of verify explicitly, use least privilege, and assume breach as a guide and ensuring that every session, user, and device is verified, compliant, and monitored before access is granted. 

Discover how implementing Zero Trust can fortify your remote assistance security by visiting our Zero Trust Workshop, where you’ll find an interactive guide to embedding security into your IT operations.  

This requires a structured approach with a foundation of: 

  1. Identity and access control—ensuring that only authenticated, compliant users and devices can initiate or receive remote assistance. 
  2. Endpoint security and compliance—enforcing security baselines and conditional access across all managed devices. 
  3. Embedded security in remote assistance—building security into the very foundation of remote assistance tools, eliminating gaps that cyberattackers can exploit. 

Identity and access control: The first line of cybersecurity defense

Identity security is the cornerstone of any secure remote assistance strategy. A compromised identity is often the first step in a cyberattack, making it critical to ensure only verified users and devices can initiate or receive remote assistance sessions. Organizations must enforce:

  • Explicit identity verification—using multi-factor authentication (MFA) and risk-based conditional access to ensure only authorized users gain access.
  • Least privilege access—ensuring remote assistance is granted only for the necessary duration and with minimal privileges to reduce the risk of exploitation.
  • Real-time risk assessment—continuously evaluating access requests for anomalies or suspicious activity to prevent unauthorized access.

By shifting the security perimeter to identity, organizations create an environment where trust is earned dynamically, not assumed.
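
These identity controls can be sketched as a single policy gate evaluated before any remote assistance session starts. Everything below, the field names, risk threshold, and session time-box, is hypothetical and illustrates the three enforcement points rather than any actual Microsoft API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical request shape for a remote assistance session.
@dataclass
class SessionRequest:
    user_mfa_verified: bool    # explicit identity verification
    device_compliant: bool     # endpoint meets the compliance baseline
    risk_score: float          # real-time risk, 0.0 (safe) .. 1.0 (high)
    requested_minutes: int

MAX_SESSION = timedelta(minutes=60)   # least privilege: time-boxed access
RISK_THRESHOLD = 0.7                  # illustrative cutoff

def evaluate(req: SessionRequest):
    """Return (allowed, reason) for a session request."""
    if not req.user_mfa_verified:
        return (False, "deny: identity not explicitly verified")
    if not req.device_compliant:
        return (False, "deny: device fails compliance baseline")
    if req.risk_score >= RISK_THRESHOLD:
        return (False, "deny: real-time risk too high")
    granted = min(timedelta(minutes=req.requested_minutes), MAX_SESSION)
    expires = datetime.now(timezone.utc) + granted
    return (True, f"allow until {expires:%H:%M} UTC")

print(evaluate(SessionRequest(True, True, 0.2, 240)))
```

Note that the 240-minute request is silently trimmed to the 60-minute cap: least privilege applies even to approved sessions.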

      Closing the gaps with endpoint security and compliance with Microsoft Intune

      Cyberattackers frequently exploit outdated, misconfigured, or non-compliant endpoints to gain a foothold in enterprise environments. IT and security leaders must ensure that remote assistance is built on a strong endpoint security foundation, where every device connecting to corporate resources meets strict compliance standards. This highlights the need for organizations to establish consistent security policies across all devices, ensuring they are up to date and compliant before being granted remote access.  

      Microsoft Intune provides the necessary tools to: 

      • Enforce compliance policies—restrict remote assistance to managed, up-to-date, and policy-compliant devices. 
      • Apply security baselines—standardize configurations across endpoints to minimize security gaps. 
      • Integrate with Microsoft’s security ecosystem—connecting remote assistance workflows with Microsoft Entra, Microsoft Defender product family, and other security tools for real-time monitoring and cyberthreat mitigation.  
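      The compliance requirements above can be sketched as an Intune device compliance policy body for Windows, as accepted by Microsoft Graph (POST /deviceManagement/deviceCompliancePolicies). This is an assumption-laden illustration, not official configuration: the property names follow the windows10CompliancePolicy resource type, but treat the exact values as placeholders to verify against your tenant.

```python
# Illustrative sketch: a minimal Intune compliance policy that gates remote
# assistance on an up-to-date, encrypted, password-protected device.
# Values are placeholders; verify against your organization's baseline.

def build_windows_compliance_policy(min_os_version="10.0.19045"):
    """Return a device compliance policy body for Microsoft Graph."""
    return {
        "@odata.type": "#microsoft.graph.windows10CompliancePolicy",
        "displayName": "Remote assistance baseline",
        "passwordRequired": True,            # device must have a lock credential
        "osMinimumVersion": min_os_version,  # block out-of-date OS builds
        "bitLockerEnabled": True,            # require disk encryption
        "secureBootEnabled": True,           # require a verified boot chain
    }

policy = build_windows_compliance_policy()
```

      A device failing any of these checks would be reported non-compliant, and the conditional access controls described earlier would then deny it a remote assistance session.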

      Remote Help: Secure remote assistance built for Zero Trust 

      As organizations work toward a Zero Trust model, secure remote assistance must align with core security principles. This means moving beyond reactive security measures and embedding proactive, policy-driven controls into every remote session. Microsoft Intune Remote Help was designed with these imperatives in mind, providing a robust solution that enhances IT support while minimizing security risks. 

      While legacy remote assistance tools can lack enterprise-grade security controls, Remote Help is built to align with Zero Trust principles. Unlike traditional solutions, Remote Help: 

      • Integrates directly with Microsoft Entra ID—ensuring that authentication and access controls are applied consistently across every session. 
      • Provides session transparency—IT teams can track and monitor remote assistance activity in real time. 
      • Enforces compliance requirements—only compliant, managed devices can participate in remote assistance sessions.  

      For highly regulated industries, Remote Help offers an alternative to third-party tools that may introduce security blind spots. By embedding security directly into remote assistance workflows, organizations can significantly reduce the risk of unauthorized access.  
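      The decision logic behind these principles can be shown in miniature. The toy gate below is not the Remote Help implementation—just an illustration of how explicit verification, device compliance, and least-privilege duration combine so that every condition must hold before a session is authorized.

```python
# A toy Zero Trust gate for remote assistance sessions: trust is never
# assumed, and every check must pass. Illustration only.
from dataclasses import dataclass

@dataclass
class SessionRequest:
    user_mfa_verified: bool
    device_managed: bool
    device_compliant: bool
    requested_minutes: int

MAX_SESSION_MINUTES = 30  # least privilege: keep sessions short-lived

def authorize_session(req: SessionRequest) -> tuple[bool, str]:
    """Return (allowed, reason); denial names the failed control."""
    if not req.user_mfa_verified:
        return False, "identity not explicitly verified (MFA required)"
    if not (req.device_managed and req.device_compliant):
        return False, "device is unmanaged or out of compliance"
    if req.requested_minutes > MAX_SESSION_MINUTES:
        return False, "session duration exceeds least-privilege limit"
    return True, "session authorized and logged for audit"
```

      The deny-by-default shape is the point: a legacy tool that checks only one of these conditions (or none) leaves exactly the blind spots that cyberattackers exploit.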

      Engaging customers and partners to strengthen cyber resilience 

      Cybersecurity is a team sport. As cyberthreat actors grow more sophisticated, collaboration across industries is essential. Microsoft is committed to engaging with customers and partners to drive security innovation and resilience. Initiatives such as the Windows Resiliency Initiative (WRI) focus on: 

      • Reducing the need for admin privileges—helping organizations adopt a least privilege approach at scale.
      • Enhancing identity protection—strengthening defenses against phishing and identity-based attacks.
      • Quick machine recovery—empowering IT teams with tools to rapidly restore compromised devices remotely.

      By fostering collaboration and continuously evolving security measures, Microsoft is helping organizations stay ahead of emerging cyberthreats. These ongoing conversations with our customers and partners are crucial in shaping resilient security strategies that adapt to an ever-changing cyberthreat landscape.   

      A security-first approach for the future 

      The increasing reliance on remote assistance demands a security-first mindset. Organizations must recognize that every remote access session presents an opportunity for exploitation by an ever-evolving cast of cyberattackers. Rather than treating security as an afterthought, it must be deeply integrated into the architecture of remote assistance solutions. A modern approach requires proactive risk mitigation, continuous verification, and seamless security controls that support productivity without compromising protection.  

      Now is the time for IT and security leaders to: 

      • Evaluate your current remote assistance tools—identifying gaps and areas for improvement. 
      • Adopt Zero Trust principles—ensuring access is explicitly verified and continuously monitored. 
      • Leverage solutions like Microsoft Intune and Remote Help—deploying secure, enterprise-grade remote assistance capabilities. 

      By taking these steps, you can strengthen your security posture, minimize risk, and ensure that remote assistance remains a tool for operational efficiency rather than a gateway for cyberthreats.  

      To explore how Zero Trust can enhance your remote assistance security, visit the Zero Trust Workshop, an interactive, step-by-step guide to embedding security into every layer of IT operations, ensuring a comprehensive and measurable approach to security transformation. 

      Learn more with Microsoft Security

      To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

      The post Rethinking remote assistance security in a Zero Trust world appeared first on Microsoft Security Blog.

      ]]>
      Agile Business, agile security: How AI and Zero Trust work together http://approjects.co.za/?big=en-us/security/blog/2024/12/16/agile-business-agile-security-how-ai-and-zero-trust-work-together/ Mon, 16 Dec 2024 17:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=136844 We recently published a new whitepaper that examines the security challenges and opportunities from generative AI.

      The post Agile Business, agile security: How AI and Zero Trust work together appeared first on Microsoft Security Blog.

      ]]>
      Traditional security approaches don’t work for AI. Generative AI technology is already transforming our world and has immense positive potential for cybersecurity and business processes, but traditional security models and controls aren’t enough to manage the security risks associated with this new technology.   

      We recently published a new whitepaper that examines the security challenges and opportunities from generative AI, what security must do to adapt to manage risk related to it, how a Zero Trust approach is essential to effectively secure this AI technology (and underlying data), and how different roles across your organization must work together for effective AI security.  

      AI security and Zero Trust

      Agile security for agile businesses.

      AI presents new types of problems that require different thinking and different solutions.

      Generative AI is dynamic

      At the most fundamental level, generative AI is non-deterministic computing, which means that it doesn’t provide the exact same output each time you run it. For example, asking an image generation model to “draw a picture of a kitten in a security guard uniform” repeatedly is unlikely to generate the exact same picture twice (though they will all be similar). Static security controls that assume vulnerabilities (in the broader definition) and their exploitation will look exactly the same each time will not be particularly effective at detecting and blocking attacks on AI. You need controls made for AI. 
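      To make the non-determinism concrete, the toy sketch below (not from the original post) samples a “next word” from a weighted vocabulary, the basic move behind generative text models: the same prompt can yield different outputs across runs, while lowering the temperature pushes the model toward a single, near-deterministic answer.

```python
import random

def sample_next_word(weights, temperature=1.0, rng=None):
    """Sample one word from a toy next-word distribution.

    Higher temperature flattens the distribution (more varied output);
    temperature near 0 approaches greedy, deterministic selection.
    """
    rng = rng or random.Random()
    words = list(weights)
    # Temperature scaling: raise each weight to the power 1/temperature.
    scaled = [w ** (1.0 / temperature) for w in weights.values()]
    total = sum(scaled)
    probs = [s / total for s in scaled]
    return rng.choices(words, weights=probs, k=1)[0]

vocab = {"kitten": 0.6, "puppy": 0.3, "security guard": 0.1}
# Two runs with different random states can yield different words — the
# same prompt does not guarantee the same output.
print(sample_next_word(vocab, rng=random.Random(1)))
print(sample_next_word(vocab, rng=random.Random(7)))
```

      A signature that matches one sampled output exactly will miss the next one, which is why static, exact-match controls struggle against AI-generated attacks and outputs.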

      Generative AI is data-centric

      Generative AI is fundamentally a data analysis and data generation technology, making the security and governance of your data incredibly important to the security of your AI applications and the reliability and trustworthiness of their outputs.  

      You need to have an asset-centric and data-centric security approach that can handle dynamic changes to secure AI and the data it relies on. This means you need a Zero Trust approach to effectively secure AI.  

      Zero Trust is simply modern security without the false assumption that a network security perimeter is enough to secure the assets inside it (including data). This drives a mindset shift that changes how you look at security strategy, architecture, controls, and more. Zero Trust focuses security on protecting business assets inside and outside the classic network perimeter across the ‘hybrid of everything’ environments (including multiplatform, multicloud, on-premises, operational technology, Internet of Things, and more). 

      Cyberattackers are using generative AI against you

      Another complication is that AI relies on vast amounts of data to train models, making your data a prime target for cyberattackers and elevating the importance of protecting your data. Cybercriminals are also using AI now to refine attack techniques and process the data they steal from organizations. Organizations must recognize that these threats are already happening and urgently adapt their security strategies to effectively protect their data, AI applications, business assets, and people.  

      By applying Zero Trust principles, organizations can reduce the risk related to AI while rapidly embracing the opportunities that this technology offers.

      Key strategies to help manage AI security risks  

      These strategies from the whitepaper illustrate how to manage the risks associated with AI.  

      • Provide guidance to users. Cyberattackers are using AI to improve the quality and volume of scam emails and phone calls (sometimes called phishing or business email compromise) that will be experienced by nearly anyone in the organization. Organizations must urgently start educating everyone (starting with financial roles and other high-business-impact roles) so that they understand that they are likely to see these highly convincing fake communications and what to do about it. People will need to understand the basics of how AI works, the risks that it poses, and what they can do about it (such as how to spot it, how to report it to security teams, or how to enhance business processes to independently verify important transactions). 
      • Protect AI applications and data. Cybercriminals are actively targeting AI systems. Early integration of security in AI development is crucial to avoid costly fixes later.  
      • Adopt AI security capabilities. While AI is not a magical silver bullet that can replace talented human experts and existing tools, AI technology can significantly enhance security operations (SecOps) by empowering people to get more out of their data and tools (quickly writing up reports, analyzing business impact of attacks, guiding newer analysts through investigation, and more).  
      • Establish policy and standards. Organizations need written security standards and processes to guide their team’s decisions and demonstrate they are following due diligence to regulators. These standards should cover security, privacy, and ethical considerations—you can use Microsoft’s Responsible AI Standard as a reference to guide this work.  
      Diagram showing multiple dimensions of AI security risk.

       Zero Trust and AI: A symbiotic relationship 

      We have found that there is a symbiotic relationship between Zero Trust and generative AI where: 

      • AI requires a Zero Trust approach to effectively protect data and AI applications.  
      • AI-powered capabilities can help accelerate Zero Trust by analyzing vast data signals, extracting key insights, guiding humans through key processes, and automating repetitive manual tasks. This allows your teams to cut through the noise, respond to threats faster, and continuously learn and grow their expertise.

      The Zero Trust approach to security helps you keep up with continuously changing threats as well as the rapid evolution of technology that AI represents. I will wrap this blog with a quote from the new whitepaper:

      “By integrating security early and embracing Zero Trust principles, organizations can take advantage of AI while mitigating risks, much like brakes on a car enable people to safely travel faster.”

      Learn more about the Zero Trust approach

      To learn more about how Zero Trust can guide this approach, visit the Zero Trust Model webpage and explore additional resources at the Zero Trust Guidance Center. Check out Mark’s List for additional resources.

      For more security resources and links, you can visit our LinkedIn. You can also bookmark the Security blog to keep up with security news and follow Microsoft Security on LinkedIn and X (@MSFTSecurity).

      The post Agile Business, agile security: How AI and Zero Trust work together appeared first on Microsoft Security Blog.

      ]]>