Network security Insights | Microsoft Security Blog
http://approjects.co.za/?big=en-us/security/blog/topic/network-security/
Expert coverage of cybersecurity topics

New e-book: Establishing a proactive defense with Microsoft Security Exposure Management
http://approjects.co.za/?big=en-us/security/blog/2026/02/19/new-e-book-establishing-a-proactive-defense-with-microsoft-security-exposure-management/
Thu, 19 Feb 2026 17:00:00 +0000

Read the new maturity-based guide that helps organizations move from fragmented, reactive security practices to a unified exposure management approach that enables proactive defense.

The post New e-book: Establishing a proactive defense with Microsoft Security Exposure Management appeared first on Microsoft Security Blog.

Effective exposure management begins by illuminating and hardening risks across the entire attack surface. Some of the most meaningful shifts in security happen quietly—when teams take a clear look at their exposure landscape and acknowledge the gap between where they stand today and where they need to be. Today, we’re sharing a new guide designed to support that moment of clarity. It offers a practical, maturity-based path for moving from fragmented visibility and reactive fixes to a more unified, risk-driven approach that strengthens resilience one step at a time. Read “Establishing proactive defense—A maturity-based guide for adopting a dynamic, risk-based approach to exposure management” to learn more now. 

Five levels of exposure management maturity 

In the guide, you’ll learn how organizations progress through five levels of exposure management maturity to strengthen how they identify, prioritize, and act on risk. Early-stage teams operate reactively with limited visibility and compliance-driven fixes. As capabilities mature, processes become consistent, prioritization incorporates business context, and decisions shift from reactive to proactive. This progression reflects a move away from isolated security actions toward repeatable, measurable practices that scale with organizational complexity.

At higher maturity, organizations validate controls, consolidate asset and risk data into a single source of truth, and confirm that mitigations work. Rather than assuming security improvements are effective, teams test and verify outcomes to ensure effort translates into real risk reduction. At the most advanced stage, exposure management is fully aligned to business objectives, supported by clear risk metrics, and used to guide remediation, resource allocation, and strategic outcomes.

The maturity model helps security leaders assess where their organization stands and identify practical next steps toward a full-fledged exposure management program. Each level in the guide includes details on the realities organizations face, the key characteristics of that maturity level, common pain points, and suggestions for moving up in maturity. Importantly, the model emphasizes that maturity is not static or final. The last stage of the maturity model, level five, isn’t a finish line—it’s the point where exposure management becomes a continuously evolving capability, fueled by real-time telemetry and adaptive risk modeling. At this stage, exposure management shifts from a program to a strategic discipline—one that informs long-term resilience decisions rather than discrete remediation cycles.

The path to proactive defense  

Organizations build a unified path to proactive defense when they move beyond fragmented tools and adopt an integrated exposure management approach. By bringing assets, identities, cloud posture, and attack paths into one coherent view, security teams gain the clarity needed to focus effort where it matters most. This alignment enables more consistent action, stronger prioritization, and security decisions that reflect real business risk instead of isolated signals. It also helps teams move from chasing individual findings to managing exposure systematically, with shared context across security, IT, and risk stakeholders. Over time, this shift turns exposure management into a repeatable operating model rather than a collection of disconnected responses. 

Take the next step toward proactive defense 

Designed to help security leaders translate strategy into practical next steps, regardless of where they are starting, the maturity levels outlined in the e-book support organizations as they shift from reacting to cyberthreats to proactively reducing risk and strengthening security across every layer of the environment. To go deeper into the practices, maturity levels, and actions that matter most, read the new e-book: Establishing a proactive defense—A maturity-based guide for adopting a dynamic, risk-based approach to exposure management.

Join us at RSAC™ 2026

RSAC™ 2026 is more than a conference. It’s a chance to shape the future of security. By engaging with Microsoft Security, you’ll gain:  

  • Actionable insights from industry leaders and researchers.  
  • Hands-on experience with cutting-edge security tools.  
  • Connections that help you navigate the evolving cyberthreat landscape.  

Together, we can make the world safer for all. Join us in San Francisco March 22-26, 2026, and be part of the conversation that defines the next era of cybersecurity.  

Learn more

Learn more about Microsoft Security Exposure Management.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

Harden your identity defense with improved protection, deeper correlation, and richer context
http://approjects.co.za/?big=en-us/security/blog/2025/10/23/harden-your-identity-defense-with-improved-protection-deeper-correlation-and-richer-context/
Thu, 23 Oct 2025 16:00:00 +0000

Expanded ITDR features—including the new Microsoft Defender for Identity sensor, now generally available—bring improved protection, correlation, and context to help customers modernize their identity defense.

The post Harden your identity defense with improved protection, deeper correlation, and richer context appeared first on Microsoft Security Blog.

In today’s digital-first enterprise, identities have become the new corporate security perimeter. Hybrid work and cloud-first strategies have dissolved traditional network boundaries and dramatically increased the complexity of identity fabrics. Security teams are left managing a constellation of users, infrastructure, and tools scattered across hybrid environments or even multivendor ecosystems. To put the threat into perspective, we saw more than 7,000 password attacks every second in 2024, and on average 66% of attack paths involve some type of identity compromise.1 AI is further amplifying this challenge by introducing a surge of non-human identities that require their own unique protections and capabilities.

This evolution demands a fundamental shift in Identity Threat Detection and Response (ITDR). It’s no longer simply about protecting users; it requires consistent, comprehensive protection for every piece of the identity fabric, whether human or non-human, on-premises or in the cloud, from Microsoft or another vendor.

ITDR for the modern enterprise

Successful identity security practices recognize that seams in protection are the real enemy of identity security. A unified approach between identity and security teams is a necessity, and our unique perspective as both a leading identity and security provider allows us to further streamline the flow of contextual insights, actions, and workflows across these groups, minimizing the potential for gaps or oversight.


While both identity and security teams play critical roles in ITDR, it is just one piece of their overall charter. For security operations center (SOC) professionals, the core mission remains to prevent, detect, and respond to cyberthreats that could impact their organization’s security and business continuity. On a day-to-day basis, identity and security teams proactively harden their security posture, triage and investigate incoming alerts, and, when a true cyberthreat is confirmed, coordinate a rapid and effective response. Within this broader mission, ITDR represents a critical but focused subset. For instance, identity security posture recommendations are essential but only one piece of broader security hardening.

Similarly, identity alerts offer invaluable insights needed to detect anomalous identity activity, but they must be understood in the context of the overall cyberattack. And while identity response actions such as revoking sessions or enforcing multifactor authentication are critical to stop attacks, they must be coordinated with other response actions across endpoints and other domains to block lateral movement.
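To make one of those identity response actions concrete: session revocation can be scripted against the Microsoft Graph API, which invalidates a user’s refresh tokens and forces re-authentication (and thus re-evaluation of Conditional Access). This is a minimal sketch, with the user ID invented and token acquisition omitted as assumptions:

```python
# Sketch: building the Microsoft Graph call that revokes a user's sessions.
# The user ID and bearer-token placeholder are illustrative assumptions.
GRAPH = "https://graph.microsoft.com/v1.0"

def build_revoke_request(user_id: str) -> dict:
    """Describe the POST /users/{id}/revokeSignInSessions call, which
    invalidates all refresh tokens issued to the user."""
    return {
        "method": "POST",
        "url": f"{GRAPH}/users/{user_id}/revokeSignInSessions",
        "headers": {"Authorization": "Bearer <token>"},  # acquisition omitted
    }

req = build_revoke_request("compromised.user@contoso.example")
print(req["method"], req["url"])
```

In a real response playbook this call would be issued alongside endpoint containment actions, not in isolation, for exactly the cross-domain reasons described above.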

True defense requires enriching identity signals and delivering them in context as part of a unified threat picture, enabling coordinated response across domains, and continuously improving posture to stay ahead of evolving cyberthreats.

This blog explores how Microsoft is reimagining identity security to meet these challenges head-on—empowering defenders with the clarity, context, and control they need to stay ahead of identity-based threats.

Enriched and insightful: Building the foundation for identity security

Identity security starts with ensuring your environment is protected as a foundation. Visibility across your organization’s unique fabric of interconnected identities, infrastructure, and applications is what enables SOC teams to detect cyberthreats earlier, respond faster, and reduce risk across the board. Because in today’s identity-driven cyberthreat landscape, partial visibility is no longer an option. To meet this challenge, organizations need sensors for on-premises infrastructure and integrations with cloud-based identity solutions to pull in insights from the entirety of their identity fabric.

Understanding this, Microsoft is proud to offer one of the widest sets of dedicated sensors for on-premises identity infrastructure. Domain controllers, Active Directory Federation Services (AD FS), Active Directory Certificate Services (AD CS), and Microsoft Entra Connect each serve a distinct purpose within the on-premises identity footprint, and our dedicated sensors are purpose-built to monitor and detect anomalies within their specific activity and configurations.

Additionally, I am excited to announce the general availability of the unified identity and endpoint sensors we unveiled at Microsoft Ignite in 2024. This amazing milestone makes it even easier for new Microsoft Defender for Identity customers to activate identity protections on qualifying domain controllers and start benefiting from identity-specific visibility, posture recommendations, alerts, and automatic attack disruption capabilities within the Defender experience.

Our protections don’t end on-premises, however. Defender’s native integration with Microsoft Entra ID empowers the SOC with real-time visibility into Entra identity activity and risk levels, plus seamless integration into Zero Trust policies through Conditional Access and user containment. And because identity fabrics are rarely homogenous, Microsoft also supports other cloud identity providers like Okta, offering unified visibility, posture insights, and ITDR capabilities across platforms.

Raw data about cloud and on-premises accounts is important, but to be truly insightful it needs to be enriched. To do this, we are shifting the paradigm from account-centric to identity-centric. This means correlating information across accounts, platforms, and environments to reveal an identity’s true footprint. With an understanding of how multiple accounts map back to a single identity, the SOC can more accurately investigate and respond to cyberthreats.
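The account-to-identity mapping can be illustrated with a tiny sketch. The matching key here (an HR employee ID) and the sample records are invented assumptions; real correlation draws on far richer signals:

```python
# Sketch: grouping accounts from different directories under one identity.
from collections import defaultdict

# Invented account records; employee_id is the assumed correlation key.
accounts = [
    {"account": "CONTOSO\\jdoe",        "source": "Active Directory",  "employee_id": "E100"},
    {"account": "jdoe@contoso.example", "source": "Microsoft Entra ID", "employee_id": "E100"},
    {"account": "john.doe",             "source": "Okta",               "employee_id": "E100"},
]

def correlate(records: list) -> dict:
    """Group account records by employee ID to reveal each identity's footprint."""
    identities = defaultdict(list)
    for rec in records:
        identities[rec["employee_id"]].append(rec["account"])
    return dict(identities)

print(correlate(accounts))
```

Once three seemingly unrelated accounts resolve to one identity, a single risky sign-in can be weighed against that identity’s entire footprint rather than one account in isolation.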


This enriched view is especially critical when dealing with privileged identities. Integrations with Privileged Access Management (PAM) solutions further empower security organizations to monitor and protect high-value identities.   

All of this is in addition to the native extended detection and response (XDR) correlation done by Microsoft Defender, which automatically links identity signals with insights from other security domains, giving security teams a unified threat picture, breaking down silos, and improving response efficiency. From the Identity page in the Defender portal, SOC analysts can see related devices, applications, and alerts—creating a connected view of the threat landscape. These relationships are also exposed in Advanced Hunting, allowing defenders to query across domains and uncover patterns that would otherwise remain hidden. And because Microsoft extends protections to AI agents, service accounts, third-party identities, and more, it can use behavioral signals to detect drift and enforce policy—an area many competitors simply can’t match.
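Advanced Hunting queries can also be submitted programmatically through the Microsoft Graph security API. A minimal sketch follows; the KQL itself (tables, columns, and the failed-logon threshold) is an illustrative assumption to adapt to your tenant:

```python
# Sketch: submitting a cross-domain Advanced Hunting query via Microsoft Graph.
import json

HUNTING_ENDPOINT = "https://graph.microsoft.com/v1.0/security/runHuntingQuery"

# Illustrative KQL: accounts with many failed identity logons, joined with
# endpoint logon telemetry for the same devices.
KQL = """IdentityLogonEvents
| where Timestamp > ago(1d) and ActionType == "LogonFailed"
| summarize FailedLogons = count() by AccountUpn, DeviceName
| where FailedLogons > 20
| join kind=inner (DeviceLogonEvents | where Timestamp > ago(1d)) on DeviceName
| project AccountUpn, DeviceName, FailedLogons, LogonType"""

def build_hunting_request(query: str) -> dict:
    # Graph expects the KQL in a JSON body under the "Query" key.
    return {"method": "POST", "url": HUNTING_ENDPOINT,
            "body": json.dumps({"Query": query})}

req = build_hunting_request(KQL)
print(req["url"])
```

The same query runs unchanged in the Defender portal’s Advanced Hunting page, which is usually the easier place to iterate before automating it.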

Context is everything

Microsoft Defender delivers deep, enriched visibility into your unique identity fabric. But the true magic lies in how this intelligence is operationalized within the SOC experience. Defender and Microsoft Entra work together to generate identity alerts, which get correlated into broader security incidents within Microsoft Defender XDR, giving analysts a unified view of threat activity across endpoints, identities, and cloud resources. Similarly, identity posture recommendations are part of Microsoft’s exposure management strategy, where they are surfaced alongside other risk signals to help teams proactively reduce their attack surface. And when a threat is confirmed, automatic attack disruption can dynamically contain not only the compromised user but also the devices and sessions associated with the attack. This contextualization turns powerful insights into decisive action. And in today’s threat landscape, it’s not just about seeing more—it’s about responding smarter and faster.


Getting started

New Defender for Identity customers interested in activating the unified sensor can learn more, including how to deploy it, in our documentation. Existing customers that have already deployed the Defender for Identity sensors do not need to do anything at this time; stay tuned for migration guidance in the coming months.

Learn more about Microsoft ITDR solutions.

1. State of Multicloud Security Risk, Microsoft, 2024.

How cyberattackers exploit domain controllers using ransomware
http://approjects.co.za/?big=en-us/security/blog/2025/04/09/how-cyberattackers-exploit-domain-controllers-using-ransomware/
Wed, 09 Apr 2025 16:00:00 +0000

Read how cyberattackers exploit domain controllers to gain privileged system access where they deploy ransomware that causes widespread damage and operational disruption.

The post How cyberattackers exploit domain controllers using ransomware appeared first on Microsoft Security Blog.

In recent years, human-operated cyberattacks have undergone a dramatic transformation. Once sporadic and opportunistic, these attacks have evolved into highly sophisticated, targeted campaigns aimed at causing maximum damage to organizations, with the average cost of a ransomware attack reaching $9.36 million in 2024.1 A key catalyst of this evolution is the rise of ransomware as a primary tool for financial extortion—an approach that hinges on crippling an organization’s operations by encrypting critical data and demanding a ransom for its release. Microsoft Defender for Endpoint disrupts ransomware attacks in an average of three minutes, only kicking in when more than 99.99% confident in the presence of a cyberattack.

The evolution of ransomware attacks


Modern ransomware campaigns are meticulously planned. Cyberattackers understand that their chances of securing a ransom increase significantly if they can inflict widespread damage across a victim’s environment. The rationale is simple: paying the ransom becomes the most viable option when the alternative—restoring the environment and recovering data—is technically unfeasible, time-consuming, and costly.

This level of damage doesn’t materialize out of nowhere. Bad actors first embed themselves within an organization’s environment, laying the groundwork for a coordinated cyberattack that can then encrypt dozens, hundreds, or even thousands of devices within minutes. To execute such a campaign, threat actors must overcome several challenges, such as evading protection, mapping the network, maintaining their code execution ability, and preserving persistency in the environment, building their way to securing two major prerequisites necessary to execute ransomware on multiple devices simultaneously:

  • High-privilege accounts: Whether cyberattackers choose to drop files and encrypt the devices locally or perform remote operations over the network, they must obtain the ability to authenticate to a device. In an on-premises environment, cyberattackers usually target domain admin accounts or other high-privilege accounts, as those can authenticate to the most critical resources in the environment.
  • Access to central network assets: To execute the ransomware attack as fast and as wide as possible, threat actors aim to achieve access to a central asset in the network that is exposed to many endpoints. Thus, they can leverage the possession of high-privilege accounts and connect to all devices visible in their line of sight.

The role of domain controllers in ransomware campaigns

Domain controllers are the backbone of any on-premises environment, managing identity and access through Active Directory (AD). They play a pivotal role in enabling cyberattackers to achieve their goals by fulfilling two critical requirements:

1. Compromising highly privileged accounts

Domain controllers house the AD database, which contains sensitive information about all user accounts, including highly privileged accounts like domain admins. By compromising a domain controller, threat actors can:

  • Extract password hashes: Dumping the NTDS.dit file allows cyberattackers to obtain password hashes for every user account.
  • Create and elevate privileged accounts: Cyberattackers can generate new accounts or manipulate existing ones, assigning them elevated permissions, ensuring continued control over the environment.

With these capabilities, cyberattackers can authenticate as highly privileged users, facilitating lateral movement across the network. This level of access enables them to deploy ransomware at scale, maximizing the impact of their attack.

2. Exploiting centralized network access

Domain controllers handle crucial tasks like authenticating users and devices, managing user accounts and policies, and keeping the AD database consistent across the network. Because of these important roles, many devices need to interact with domain controllers regularly to ensure security, efficient resource management, and operational continuity. That’s why domain controllers need to be central in the network and accessible to many endpoints, making them a prime target for cyberattackers looking to cause maximum damage with ransomware attacks.

Given these factors, it’s no surprise that domain controllers are frequently at the center of ransomware operations. Cyberattackers consistently target them to gain privileged access, move laterally, and rapidly deploy ransomware across an environment. We’ve seen that in more than 78% of human-operated cyberattacks, threat actors successfully breach a domain controller. Additionally, in more than 35% of cases, the primary spreader device—the system responsible for distributing ransomware at scale—is a domain controller, highlighting its crucial role in enabling widespread encryption and operational disruption.

Case study: Ransomware attack using a compromised domain controller

In one notable case, a small-to-midsize manufacturer fell victim to a well-known, highly skilled threat actor attempting to execute a widespread Akira ransomware attack:

How Microsoft Defender for Endpoint's automatic attack disruption helped contain a widespread ransomware attack.

Pre domain-compromise activity

After gaining initial access, presumably by leveraging the customer’s VPN infrastructure, and prior to obtaining domain admin privileges, the cyberattackers initiated a series of actions focused on mapping potential assets and escalating privileges. A wide, remote secrets-dump execution is detected on Microsoft Defender for Endpoint-onboarded devices, and User 1 (a domain user) is contained by attack disruption.

Post domain-compromise activity

After securing domain admin (User 2) credentials, potentially by leveraging the victim’s non-onboarded estate, the attacker immediately attempts to connect to the victim’s domain controller (DC1) using Remote Desktop Protocol (RDP) from a cyberattacker-controlled device. After gaining access to DC1, the cyberattacker leverages the device to perform the following set of actions:

  • Reconnaissance—The cyberattacker leverages the domain controller’s wide network visibility and high privileges to map the network using different tools, focusing on servers and network shares.
  • Defense evasion—Leveraging the domain controller’s native group policy functionality, the cyberattacker attempts to tamper with the victim’s antivirus by modifying security-related group policy settings.
  • Persistence—The cyberattacker leverages the direct access to Active Directory, creating new domain users (User 3 and User 4) and adding them to the domain admin group, thus establishing a set of highly privileged users that would later on be used to execute the ransomware attack.
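That persistence step, quietly adding new accounts to the domain admin group, is also one of the more monitorable ones. A minimal Python sketch, assuming you periodically snapshot privileged-group membership (exported via PowerShell’s Get-ADGroupMember or an LDAP query; the account names are invented):

```python
# Sketch: diffing snapshots of a privileged group to catch new members.
def diff_membership(baseline: set, current: set) -> dict:
    """Return accounts added to or removed from a privileged group."""
    return {"added": sorted(current - baseline),
            "removed": sorted(baseline - current)}

# Invented snapshots; in the case above, User 3 and User 4 would surface here.
baseline = {"CONTOSO\\admin1", "CONTOSO\\admin2"}
current = baseline | {"CONTOSO\\user3", "CONTOSO\\user4"}

delta = diff_membership(baseline, current)
if delta["added"]:
    # In practice, raise an alert in your SIEM instead of printing.
    print("ALERT: new Domain Admins members:", delta["added"])
```

A simple diff like this is no substitute for sensor-based detection, but it gives defenders an independent, low-cost tripwire on the exact technique used in this attack.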

Encryption over the network

Once the cyberattacker takes control of a set of highly privileged users, they gain access to any domain-joined resource, including comprehensive network access and visibility. This also allows them to set up tools for the encryption phase of the cyberattack.

Presumably to validate the payload’s effectiveness, the cyberattackers begin by running it locally on the domain controller. Attack disruption detects the threat actor’s attempt to run the payload and contains User 2, User 3, and the cyberattacker-controlled device used to RDP to the domain controller.

After Users 2 and 3 are successfully contained, the cyberattacker proceeds to log in to the domain controller using User 4, who had not yet been utilized. After logging in to the device, the cyberattacker attempts to encrypt numerous devices over the network from the domain controller, leveraging the access provided by User 4.

Attack disruption detects the initiation of encryption over the network and automatically and granularly contains device DC1 and User 4, blocking the attempted remote encryption on all Microsoft Defender for Endpoint-onboarded and targeted devices.

Protecting your domain controllers

Given the central role of domain controllers in ransomware attacks, protecting them is critical to preventing large-scale damage. However, securing domain controllers is particularly challenging due to their fundamental role in network operations. Unlike other endpoints, domain controllers must remain highly accessible to authenticate users, enforce policies, and manage resources across the environment. This level of accessibility makes it difficult to apply traditional security measures without disrupting business continuity. Hence, security teams constantly face the complex challenge of striking the right balance between security and operational functionality.

To address this challenge, Defender for Endpoint introduced contain high value assets (HVA), an expansion of our contain device capability designed to automatically contain HVAs like domain controllers in a granular manner. This feature builds on Defender for Endpoint’s ability to classify device roles and criticality levels to deliver a custom, role-based containment policy. If a sensitive device, such as a domain controller, is compromised, it is contained in less than three minutes, preventing the cyberattacker from moving laterally and deploying ransomware while maintaining the operational functionality of the device. Because containment distinguishes between malicious and benign behavior on the domain controller, essential authentication and directory services stay up and running. This approach provides rapid, automated cyberattack containment without sacrificing business continuity, allowing organizations to stay resilient against sophisticated human-operated cyberthreats.

Now your organization’s domain controllers can leverage automatic attack disruption as an extra line of defense against malicious actors trying to take over high-value assets and execute costly ransomware attacks.

Learn more

Explore these resources to stay updated on the latest automatic attack disruption capabilities:

1. Average cost per data breach in the United States 2006-2024, Ani Petrosyan, October 10, 2024.

The four stages of creating a trust fabric with identity and network security
http://approjects.co.za/?big=en-us/security/blog/2024/06/04/the-four-stages-of-creating-a-trust-fabric-with-identity-and-network-security/
Tue, 04 Jun 2024 16:00:00 +0000

The trust fabric journey has four stages of maturity for organizations working to evaluate, improve, and evolve their identity and network access security posture.

The post The four stages of creating a trust fabric with identity and network security appeared first on Microsoft Security Blog.



At Microsoft, we’re continually evolving our solutions for protecting identities and access to meet the ever-changing security demands our customers face. In a recent post, we introduced the concept of the trust fabric. It’s a real-time approach to securing access that is adaptive and comprehensive. In this blog post, we’ll explore how any organization—large or small—can chart its own path toward establishing a digital trust fabric. We’ll share how customers can secure access for any trustworthy identity, signing in from anywhere, to any app or resource on-premises and in any cloud. While every organization is at a different stage in its security journey, with different priorities, we’ll break down the trust fabric journey into distinct maturity stages and provide guidance to help customers prioritize their own identity and network access improvements.

Graphic showing the four stages for creating a trust fabric.

Stage 1: Establish Zero Trust access controls

“Microsoft enabled secure access to data from any device and from any location. The Zero Trust model has been pivotal to achieve the desired configuration for users, and Conditional Access has helped enable it.”

Arshaad Smile, Head of Cloud Security, Standard Bank of South Africa 

This first stage is all about your core identity and access management solutions and practices. It’s about securing identities, preventing external attacks, and verifying explicitly with strong authentication and authorization controls. Today, identity is the first line of defense and the most attacked surface area. In 2022, Microsoft tracked 1,287 password attacks every second. In 2023 we saw a dramatic increase, with an average of more than 4,000 password attacks per second.1

To prevent identity attacks, Microsoft recommends a Zero Trust security strategy, grounded in the following three principles—verify explicitly, ensure least-privilege access, and assume breach. Most organizations start with identity as the foundational pillar of their Zero Trust strategies, establishing essential defenses and granular access policies. Those essential identity defenses include:

  • Single sign-on for all applications to unify access policies and controls.
  • Phishing-resistant multifactor authentication or passwordless authentication to verify every identity and access request.
  • Granular Conditional Access policies to check user context and enforce appropriate controls before granting access.

In fact, Conditional Access is the core component of an effective Zero Trust strategy. Serving as a unified Zero Trust access policy engine, it reasons over all available user context signals, like device health or risk, and decides whether to grant access, require multifactor authentication, monitor, or block access.
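Conditional Access policies can also be managed as code through Microsoft Graph (POST /identity/conditionalAccess/policies). The sketch below builds the JSON body for a report-only policy; the scoping choices (all users, all apps, require MFA) are illustrative assumptions, not a recommendation for every tenant:

```python
# Sketch: a report-only Conditional Access policy body for Microsoft Graph.
# Field names follow the Graph conditionalAccessPolicy schema; scoping is
# an invented example.
def build_mfa_policy(display_name: str) -> dict:
    return {
        "displayName": display_name,
        # Report-only state lets you observe impact before enforcing.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }

policy = build_mfa_policy("Require MFA for all users (report-only)")
print(policy["state"])
```

Starting in report-only mode mirrors the guidance later in this post: validate the policy’s impact on real sign-ins before flipping it to enforced.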

Recommended resources—Stage 1

For organizations in this stage of their journey, we’re detailing a few recommendations to make it easier to adopt and advance Zero Trust security fundamentals:

  1. Implement phishing-resistant multifactor authentication for your organization to protect identities from compromise.
  2. Deploy the recommended Conditional Access policies, customize Microsoft-managed policies, and add your own. Test in report-only mode. Mandate strong, phishing-resistant authentication for any scenario.
  3. Check your Microsoft Entra recommendations and Identity Secure Score to measure your organization’s identity security posture and plan your next steps. 
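As a small example of working with that score, secure score snapshots are exposed through the Microsoft Graph secureScores API (Microsoft Secure Score, which includes identity controls). The sketch below just normalizes one snapshot into a percentage; the sample values are invented, and a real call needs an access token with the appropriate permissions:

```python
# Sketch: turning a secureScore snapshot (currentScore / maxScore fields
# from GET /v1.0/security/secureScores) into a percentage. Values invented.
def score_percentage(secure_score: dict) -> float:
    """Express a daily secureScore snapshot as a percentage of the maximum."""
    return round(100 * secure_score["currentScore"] / secure_score["maxScore"], 1)

sample = {"currentScore": 52.0, "maxScore": 80.0}  # illustrative snapshot
print(score_percentage(sample))  # → 65.0
```

Tracking this percentage over time gives a simple trend line for the posture improvements the recommendations above drive.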

Stage 2: Secure access for your hybrid workforce

Once your organization has established foundational defenses, the next priority is expanding your Zero Trust strategy by securing access for your hybrid workforce. Flexible work models are now mainstream, and they pose new security challenges as the boundaries between corporate networks and the open internet blur. At the same time, many organizations increasingly have a mix of modern cloud applications and legacy on-premises resources, leading to inconsistent user experiences and security controls.

The key concept for this stage is Zero Trust user access. It’s about advanced protection that extends Zero Trust principles to any resource, while making it possible to securely access any application or service from anywhere. At the second stage of the trust fabric journey, organizations need to:                          

  1. Unify Conditional Access across identity, endpoint, and network, and extend it to on-premises apps and internet traffic so that every access point is equally protected.
  2. Enforce least-privilege access to any app or resource—including AI—so that only the right users can access the right resources at the right time.
  3. Minimize dependency on legacy on-premises security tools, like traditional VPNs, firewalls, or governance solutions, that don’t scale to the demands of cloud-first environments and lack protections against sophisticated cyberattacks.

A great outcome of those strategies is a much-improved user experience, as now any application can be made available from anywhere, with a familiar, consistent sign-in experience.

Recommended resources—Stage 2

Here are key recommendations to secure access for your employees:

  1. Converge identity and network access controls and extend Zero Trust access controls to on-premises resources and the open internet.
  2. Automate lifecycle workflows to simplify access reviews and ensure least privilege access.
  3. Replace legacy solutions such as basic secure web gateways (SWGs), firewalls, and legacy VPNs.

Stage 3: Secure access for customers and partners

With Zero Trust user access in place, organizations also need to secure access for external users, including customers, partners, business guests, and more. Modern customer identity and access management (CIAM) solutions can help create user-centric experiences that make it easier to securely engage with customers and collaborate with anyone outside organizational boundaries—ultimately driving positive business outcomes.

In this third stage of the journey towards an identity trust fabric, it’s essential to:

  1. Protect external identities with granular Conditional Access policies, fraud protection, and identity verification to make sure security teams know who those external users are.
  2. Govern external identities and their access to ensure that they only access resources that they need, and don’t keep access when it’s no longer needed.
  3. Create user-centric, frictionless experiences to make it easier for external users to follow your security policies.
  4. Simplify developer experiences so that any new application has strong identity controls built-in from the start.

Recommended resources—Stage 3

  1. Learn how to extend your Zero Trust foundation to external identities. Protect your customers and partners against identity compromise.
  2. Set up your governance for external users. Implement strong access governance including lifecycle workflows for partners, contractors, and other external users.
  3. Protect customer-facing apps. Customize and control how customers sign up and sign in when using your applications.

Stage 4: Secure access to resources in any cloud

The journey towards an organization’s trust fabric is not complete without securing access to resources in multicloud environments. Cloud-native services depend on their ability to access other digital workloads, which means billions of applications and services connect to each other every second. Workload identities already outnumber human identities 10 to 1, and their number will only grow.2 Moreover, 50% of total identities are super identities, which have access to all permissions and all resources, and 70% of those super identities are workload identities.3

Managing access across clouds is complex, and challenges like fragmented role-based access control (RBAC) systems, limited scalability of on-premises Privileged Access Management (PAM) solutions, and compliance breaches are common. These issues are exacerbated by the growing adoption of cloud services from multiple providers. Organizations typically use seven to eight different products to address these challenges. But many still struggle to attain complete visibility into their cloud access.

Graphic that shows the progression of steps for how to discover, detect, enforce, and automate with Microsoft Entra.

We’re envisioning the future of cloud access management as a unified platform that will deliver comprehensive visibility into permissions and risk for all identities—human and workload—and will secure access to any resource in any cloud. In the meantime, the recommended resources below outline the key actions for the fourth stage of the journey towards the trust fabric.

Read our recent blog titled “Securing access to any resource, anywhere” to learn more about our vision for Cloud Access Management.

Recommended resources—Stage 4

As we work towards making this vision a reality, customers today can get started on their stage four trust fabric journey by learning more about multicloud risk, getting visibility, and remediating over-provisioned permissions across clouds. Check out the following resources to learn more.

  1. Understand multicloud security risks from the 2024 State of Multicloud Security Risk Report.
  2. Get visibility into cloud permissions assigned to all identities and permissions assigned and used across multiple clouds and remediate risky permissions.
  3. Protect workload-to-workload interactions by securing workload identities and their access to cloud resources.

Accelerate your trust fabric with Generative AI capabilities and skills

To increase efficiency, speed, and scale, many organizations are looking to AI to help augment existing security workflows. Microsoft Entra and Microsoft Copilot for Security work together at machine speed, integrating with an admin’s daily workflow to prioritize and automate, understand cyberthreats in real time, and process large volumes of data.

Copilot skills and capabilities embedded in Microsoft Entra help admins to:

  • Discover high-risk users, overprivileged access, and suspicious sign-ins.
  • Investigate identity risks and help troubleshoot daily identity tasks.
  • Get instant risk summaries, steps to remediate, and recommended guidance for each identity at risk.
  • Create lifecycle workflows to streamline the process of provisioning user access and eliminating configuration gaps.

Copilot is informed by large-scale data and threat intelligence, including the more than 78 trillion security signals processed by Microsoft each day, and coupled with large language models to deliver tailored insights and guide next steps. Learn more about how Microsoft Copilot for Security can help support your trust fabric maturity journey.

Microsoft Entra

Protect any identity and secure access to any resource with a family of multicloud identity and network access solutions.


Microsoft is here to help

No matter where you are on your trust fabric journey, Microsoft can help you with the experience, resources, and expertise at every stage. The Microsoft Entra family of identity and network access solutions can help you create a trust fabric for securing access for any identity, from anywhere, to any app or resource across on-premises and clouds. The products listed below work together to prevent identity attacks, enforce least privilege access, unify access controls, and improve the experience for users, admins, and developers.

Graph showing the functions of Microsoft Entra and which product is key to each function.

Learn more about securing access across identity, endpoint, and network to accelerate your organization’s trust fabric implementation on our new identity and network access solution page.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2023.

2How do cloud permission risks impact your organization?, Microsoft.

32024 State of Multicloud Security Risk Report, Microsoft.

The post The four stages of creating a trust fabric with identity and network security appeared first on Microsoft Security Blog.

]]>
New Windows 11 features strengthen security to address evolving cyberthreat landscape http://approjects.co.za/?big=en-us/security/blog/2024/05/20/new-windows-11-features-strengthen-security-to-address-evolving-cyberthreat-landscape/ Mon, 20 May 2024 18:00:00 +0000 Today, ahead of the Microsoft Build 2024 conference, we announced a new class of Windows computers, Copilot+ PC. Alongside this exciting new class of computers, we are introducing important security features and updates that make Windows 11 more secure for users and organizations, and give developers the tools to prioritize security.

The post New Windows 11 features strengthen security to address evolving cyberthreat landscape appeared first on Microsoft Security Blog.

]]>
Ahead of the Microsoft Build 2024 conference, we announced a new class of Windows computers, Copilot+ PC. Alongside this exciting new class of PCs, we are introducing important security features and updates that make Windows 11 more secure for users and organizations and give developers the tools to prioritize security.

Today’s threat landscape is unlike any we’ve seen before. Attacks are growing in speed, scale, and sophistication. In 2015, our identity systems were detecting around 115 password attacks per second. Less than a decade later, that number has surged 3,378% to more than 4,000 password attacks per second.1 This landscape requires stronger and more comprehensive security approaches than ever before, across all devices and technologies we use in our lives both at home and at work.

Cybersecurity at the forefront of all we do

We have a longstanding commitment to security in Windows. Several years back, when we saw cyberattackers increasingly exploiting hardware, we introduced the Secured-core PC to help secure that critical layer of computing, from chip to cloud.

As we’ve seen identity-based cyberattacks increase at an alarming rate over the years, we’ve expanded our passwordless offerings quickly and broadly. In September 2023, we announced expanded passkey support with cross-device authentication, and have continued to build on that momentum. Earlier this month we announced passkey support for Microsoft consumer accounts and for device-bound passkeys in the Microsoft Authenticator app for iOS and Android users, expanding our support of this industry initiative backed by the FIDO Alliance. Passkeys on Windows are protected by Windows Hello technology that encompasses both Windows Hello and Windows Hello for Business. This latest step builds on nearly a decade of critical work strengthening Windows Hello to give users easier and more secure sign-in options and eliminate points of vulnerability.

Earlier this month we expanded our Secure Future Initiative (SFI), making it clear that we are prioritizing security above all else. SFI, a commitment we shared first in November 2023, prioritizes designing, building, testing, and operating our technology in a way that helps to ensure secure and trustworthy product and service delivery. With these commitments in mind, we’ve not only built new security features into Windows 11, but we’ve also doubled down on security features that will be turned on by default. Our goal remains simple: make it easy to stay safe with Windows. 

Today we are sharing exciting updates that make Windows more secure out of the box, by design and by default.

Windows 11

Create, collaborate, and keep your stuff protected.

Modern, secure hardware

We believe security is a team sport. We are working in close partnership with our Original Equipment Manufacturer (OEM) partners to complement OEM security features and deliver more secure devices out of the box.

While Secured-core PCs were once considered specialized devices for those handling sensitive data, now Windows users can benefit from enhanced security and AI on one device. We announced that all Copilot+ PCs will be Secured-core PCs, bringing advanced security to both commercial and consumer devices. In addition to the layers of protection in Windows 11, Secured-core PCs provide advanced firmware safeguards and dynamic root-of-trust measurement to help protect from chip to cloud. 

Microsoft Pluton security processor


Microsoft Pluton security processor will be enabled by default on all Copilot+ PCs. Pluton is a chip-to-cloud security technology—designed by Microsoft and built by silicon partners—with Zero Trust principles at the core. It helps protect credentials, identities, personal data, and encryption keys, making it significantly harder to remove, even if a cyberattacker installs malware or has physical possession of the PC.

All Copilot+ PCs will also ship with Windows Hello Enhanced Sign-in Security (ESS), which provides more secure biometric sign-ins and eliminates the need for a password. ESS adds a layer of security to biometric data by leveraging specialized hardware and software components, such as virtualization-based security (VBS) and Trusted Platform Module 2.0, to help isolate and protect authentication data and secure the channel over which it is communicated. ESS is also available on other compatible Windows 11 devices.

Stay ahead of evolving threats with Windows

To enhance user security from the start, we’re continuously updating security measures and enabling new defaults within Windows.

Windows 11 is designed with layers of security enabled by default, so you can focus on your work, not your security settings. Out-of-the-box features such as credential safeguards, malware shields, and application protection led to a reported 58% drop in security incidents, including a 3.1 times reduction in firmware attacks. In Windows 11, hardware and software work together to help shrink the attack surface, protect system integrity, and shield valuable data.2 

Windows Hello for Business


Credential and identity theft is a prime focus of cyberattackers. Windows Hello, Windows Hello for Business, and passkeys are effective multifactor authentication solutions. But as more people enable multifactor authentication, cyberattackers are moving away from simple password-based attacks and focusing their energy on other types of credential theft. We have been working to make this more difficult with our latest updates:

  • Local Security Authority protection: Windows has several critical processes to verify a user’s identity, including the Local Security Authority (LSA). LSA authenticates users and verifies Windows sign-ins, handling tokens and credentials, such as passwords, that are used for single sign-on to Microsoft accounts and Microsoft Azure services. LSA protection, previously on by default for all new commercial devices, is now also enabled by default for new consumer devices. For devices upgrading where it has not previously been enabled, LSA protection will enter a grace period. LSA protection prevents LSA from loading untrusted code and prevents untrusted processes from accessing LSA memory, offering significant protection against credential theft.3 
  • NT LAN Manager (NTLM) deprecation: Ending the use of NTLM has been a huge ask from our security community as it will strengthen authentication. NTLM is being deprecated, meaning that, while supported, it is no longer under active feature development. We are introducing new features and tools to ease customers’ transitions to stronger authentication protocols.
  • Advancing key protection in Windows using VBS: Now available in public preview for Windows Insiders, this feature helps to offer a higher security bar than software isolation, with stronger performance compared to hardware-based solutions, since it is powered by the device’s CPU. While hardware-backed keys offer strong levels of protection, VBS is helpful for services with high security, reliability, and performance requirements.
  • Windows Hello hardening: With Windows Hello technology being extended to protect passkeys, if you are using a device without built-in biometrics, Windows Hello has been further hardened by default to use VBS to isolate credentials, protecting from admin-level attacks.
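To complement these protections, Defender for Endpoint customers can hunt for the behavior they are designed to stop. The following Advanced Hunting sketch is based on a common credential-dumping hunting pattern; results will also include legitimate security tools, so treat it as a starting point for tuning rather than a finished detection:

// Sketch: processes opening a handle to LSASS (a common credential-theft step).
DeviceEvents
| where ActionType == "OpenProcessApiCall"
| where FileName =~ "lsass.exe"
| summarize AttemptCount = count() by DeviceName, InitiatingProcessFileName
| order by AttemptCount desc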

We have also prioritized helping users know what apps and drivers can be trusted to better protect people from phishing attacks and malware. Windows is both creating new inbox capabilities as well as providing more features for the Windows app developer community to help strengthen app security.

  • Smart App Control: Now available and on by default on select new systems where it can provide an optimal experience, Smart App Control has been enhanced with AI learning. Using an AI model based on the 78 trillion security signals Microsoft collects each day, this feature can predict if an app is safe. The policy keeps common, known-to-be-safe apps running while unknown, malware-connected apps are blocked. This is incredibly effective protection against malware.
  • Trusted Signing: Unsigned apps pose significant risks. In fact, Microsoft research has revealed that much of today’s malware comes in the form of unsigned apps. The best way to ensure seamless compatibility with Smart App Control is to sign your app. Signing contributes to an app’s trustworthiness and helps ensure that an existing “good reputation” will be inherited by future app updates, making it less likely to be blocked inadvertently by threat detection systems. Recently moved into public preview, Trusted Signing makes this process simpler by managing every aspect of the certificate lifecycle, and it integrates with popular development tooling like Azure DevOps and GitHub.
  • Win32 app isolation: A new security feature, currently in preview, Win32 app isolation makes it easier for Windows app developers to contain damage and safeguard user privacy choices in the event of an application compromise. Win32 app isolation is built on the foundation of AppContainers, which offer a security boundary, and components that virtualize resources and provide brokered access to other resources—like printer, registry, and file access. Win32 app isolation is close to general availability thanks to feedback from our developer community. App developers can now use Win32 app isolation with seamless Visual Studio integration.
  • Making admin users more secure: Most people run as full admins on their devices, which means apps and services have the same access to the kernel and other critical services as users do. The problem is that these apps and services can access critical resources without the user knowing. This is why Windows is being updated to require just-in-time administrative access to the kernel and other critical services: as needed, not all the time, and certainly not by default. This makes it harder for an app to unexpectedly abuse admin privileges and secretly put malware or malicious code on Windows. When this feature is enabled and an app needs special permissions like admin rights, you’ll be asked for approval. Windows Hello provides a secure and easy way to approve or deny these requests, giving you, and only you, full control over your device. Currently in private preview, this will be available in public preview soon. 
  • VBS enclaves: Previously available to Windows security features only, VBS enclaves are now available to third-party application developers. This software-based trusted execution environment within a host application’s address space offers deep operating system protection of sensitive workloads, like data decryption. Try the VBS enclave APIs to experience how the enclave is shielded from both other system processes and the host application itself. The result is more security for your sensitive workloads.

As we see cyberattackers come up with new strategies and targets, we continue to harden Windows code to address where bad actors are spending their time and energy.

  • Windows Protected Print: In late 2023, we launched Windows Protected Print Mode to build a more modern and secure print system that maximizes compatibility and puts users first. This will be the default print mode in the future.
  • Tool tips: In the past, tool tips have been exploited, leading to unauthorized access to memory. In older Windows versions, tool tips were managed as a single window for each desktop, established by the kernel and recycled for displaying any tool tip. We are revamping how tool tips work to be more secure for users. With the updated approach, the responsibility for managing the lifecycle of tool tips has been transferred to the respective application that is being used. Now, the kernel monitors cursor activity and initiates countdowns for the display and concealment of tool tip windows. When these countdowns conclude, the kernel notifies the user-level environment to either generate or eliminate a tool tip window.
  • TLS server authentication: TLS (transport layer security) server authentication certificates verify the server’s identity to a client and ensure secure connections. While 1024-bit RSA encryption keys were previously supported, advancements in computing power and cryptanalysis require that Windows no longer trust these weak key lengths by default. As a result, TLS certificates with RSA keys less than 2048 bits chaining to roots in the Microsoft Trusted Root Program will not be trusted.
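Commercial customers can get ahead of this change by inventorying weak keys today. As one hedged example, if the Microsoft Defender Vulnerability Management certificate inventory (the DeviceTvmCertificateInfo table) is available in your tenant, a sketch like the following could surface RSA certificates below 2048 bits; the column names here are assumptions to verify against your schema:

// Sketch: certificates with RSA keys shorter than 2048 bits (verify column names).
DeviceTvmCertificateInfo
| where SignatureAlgorithm contains "RSA"
| where toint(KeySize) < 2048
| project DeviceId, Thumbprint, IssuedBy, KeySize, ExpirationDate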

Lastly, with each Windows release we add more levers for commercial customers to lock down Windows within their environment.

  • Config Refresh: Config Refresh allows administrators to set a schedule for devices to reapply policy settings without needing to check in to Microsoft Intune or other mobile device management vendors, helping to ensure settings remain as configured by the IT admin. It can be set to refresh every 90 minutes by default or as frequently as every 30 minutes. There is also an option to pause Config Refresh for a configurable period, useful for troubleshooting or maintenance, after which it will automatically resume or can be manually reactivated by an administrator.
  • Firewall: The Firewall Configuration Service Provider (CSP) in Windows now enforces an all-or-nothing application of firewall rules from each atomic block of rules. Previously, if the CSP encountered an issue applying any rule from a block, it would not only stop that rule but also cease to process subsequent rules, leaving a potential security gap with partially deployed rule blocks. Now, if any rule in the block cannot be applied successfully to the device, the CSP will stop processing subsequent rules and all rules from that same atomic block will be rolled back, eliminating the ambiguity of partially deployed rule blocks.
  • Personal Data Encryption (PDE): PDE enhances security by encrypting data and only decrypting it when the user unlocks their PC using Windows Hello for Business. PDE enables two levels of data protection. Level 1, where data remains encrypted until the PC is first unlocked; or Level 2, where files are encrypted whenever the PC is locked. PDE complements BitLocker’s volume level protection and provides dual-layer encryption for personal or app data when paired with BitLocker. PDE is in preview now and developers can leverage the PDE API to protect their app content, enabling IT admins to manage protection using their mobile device management solution. 
  • Zero Trust DNS: Now in private preview, this feature will natively restrict Windows devices to connecting only to approved network destinations by domain name. Outbound IPv4 and IPv6 traffic is blocked and won’t reach the intended destination unless a trusted, protected DNS server resolves it or an IT admin configures an exception. Plan now to avoid blocking issues by configuring apps and services to use the system DNS resolver.

Explore the new Windows 11 security features

We truly believe that security is a team sport. By partnering with OEMs, app developers and others in the ecosystem—along with helping people to be better at protecting themselves—we are delivering a Windows that is more secure by design and secure by default. The Windows Security Book is available to help you learn more about what makes it easy for users to stay secure with Windows.

Learn more about Windows 11.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Password Guidance, Microsoft Identity Protection Team. 2016.

2Windows 11 Survey Report, Techaisle. February 2022.

3Users can manage their LSA protection state in the Windows Security Application under Device Security -> Core Isolation -> Local Security Authority.

The post New Windows 11 features strengthen security to address evolving cyberthreat landscape appeared first on Microsoft Security Blog.

]]>
How to prevent lateral movement attacks using Microsoft 365 Defender http://approjects.co.za/?big=en-us/security/blog/2022/10/26/how-to-prevent-lateral-movement-attacks-using-microsoft-365-defender/ Wed, 26 Oct 2022 16:00:00 +0000 Learn how Microsoft 365 Defender can enhance mitigations against lateral movement paths in your environment, stopping attackers from gaining access to privileged and sensitive accounts.

The post How to prevent lateral movement attacks using Microsoft 365 Defender appeared first on Microsoft Security Blog.

]]>
Microsoft 365 Defender is becoming Microsoft Defender XDR. Learn more.

It’s been 10 years since the first version of the Mitigating Pass-the-Hash Attacks and Other Credential Theft whitepaper was made available, but the techniques are still relevant today, because they help prevent attackers from gaining a network foothold and using credential-dumping tools to extract password hashes, user credentials, or Kerberos tickets from local memory.1 With those tools in hand, an attacker could move laterally in the network to obtain the credentials of more privileged accounts. All this leads to their ultimate goal—access to your sensitive business data, the Active Directory (AD) database, crucial business applications, and more.

In this blog post, we’ll look at the three fundamental mitigations for preventing lateral movement and how Microsoft 365 Defender can help your team achieve maximum effectiveness from each mitigation:

  1. Restricting privileged domain accounts.
  2. Restricting and protecting local accounts with administrator privileges.
  3. Restricting inbound traffic using Windows Defender Firewall.

1. Restricting privileged domain accounts

Segmenting privileged domain accounts can be achieved by implementing the tier model. The tier model helps mitigate credential theft by segregating your AD environment into three tiers of varying privilege and access. Creating separate tiers cuts off lateral movement from a standard user workstation to an application server or domain controller. This means that if a standard user’s machine is compromised and its password hashes are obtained by an attacker, there is no movement path toward more sensitive accounts and servers. The three tiers are numbered 0 to 2, with 0 being the most restricted:

  • Tier 0: All accounts and servers in this tier are either domain administrators or have a direct path to domain administrator privileges. Examples of servers include domain controllers, AD servers, and any management server for applications and agents running on Tier 0 servers. For an account to be considered Tier 0, it does not have to be a member of domain administrators; having privileged access to any Tier 0 server or application (through things like access control lists and User Right Assignments) will also classify an account as Tier 0. 
  • Tier 1: In most cases, Tier 1 will contain the most business-critical applications. All accounts and servers in this tier are either running enterprise applications or have permissions on servers running applications. Examples include file shares, application servers, and database servers.
  • Tier 2: This tier can be thought of as any account or machine that does not fall into either of the other tiers. This is where normal user workstations will reside, as well as standard user accounts. 
A Simplified schematic IT environment is split into three zones, Tier 0 with Domain Controllers, Tier 1 with servers and applications and Tier 2 with users and workstation systems. Zones are separated by red dotted line.

Figure 1: Tier model for Active Directory.

For the tier model to function as intended, the different tiers must be completely segregated from each other. This can be accomplished by creating Group Policy Objects (GPOs) that deny signing in across tiers. No account can be allowed to cross the tier boundaries. For example, an administrator on Tier 0 should be denied access to a Tier 1 or Tier 2 machine. If credentials are exposed to another tier, the password must be reset for that account.

Using Privileged Access Workstations (PAW) also mitigates against lateral movement. Because an account in one tier can only sign in to computers in the same tier, users with more than one account in the domain must use separate computers. A Tier 0 user should use a PAW to access only Tier 0 assets. But the person who owns the Tier 0 account should not use the same machine for checking their email or productivity applications (a Tier 2 activity).

Note: Read-level access to higher tiers is still allowed for all users because this is crucial for AD authentication and for users to access applications.
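A practical way to audit tier separation is with an Advanced Hunting query in Microsoft 365 Defender. The sketch below is a hypothetical example rather than a ready-made detection: the Tier0Accounts list and the "wks-" device-name prefix are placeholders you would replace with your own tiering data.

// Hypothetical sketch: flag Tier 0 accounts signing in to Tier 2 workstations.
// Tier0Accounts and the "wks-" prefix are placeholders for your environment.
let Tier0Accounts = dynamic(["da-admin1", "da-admin2"]);
DeviceLogonEvents
| where Timestamp > ago(30d)
| where AccountName in~ (Tier0Accounts)
| where DeviceName startswith "wks-"
| project Timestamp, AccountName, DeviceName, LogonType

Any results indicate a tier-boundary violation worth investigating, and the affected account’s password should be reset.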

As explained earlier, if an attacker can harvest the credentials of any account in a lateral movement path, they can use it to gain the credentials of the sensitive account at the end of that path. One way to spot lateral movement paths in your environment is to use Microsoft Defender for Identity. By correlating data from account sessions, local admins on machines, and group memberships, Defender for Identity can quickly identify the lateral movement paths to each sensitive account. 

Simple graph with two nodes representing two users and an arrow link between them. First node represents User 4 and second node represents admin user. Computer icon above the link states that User 4 is an admin on machine client 5, where admin user is logged into.

Figure 2: Lateral movement path view from Microsoft Defender for Identity portal.

By default, Defender for Identity classifies certain groups and their members as sensitive, while providing functionality to add more accounts and groups to the classification if needed. The goal is to break the possible attack paths (see Figure 2) by removing local administrators, denying access, or by separating accounts.

2. Restricting and protecting local accounts with administrator privileges

Local admin access opens up vast credential harvesting and lateral movement possibilities, making local admins a prime target for attackers. To make matters worse, local admin management and monitoring are sometimes overlooked. Often the local administrator password is set once for all machines in the organization during the operating system deployment, including machines used by administrators. When local admin passwords are not randomized across client machines, an attacker can compromise a local account password on one machine and automatically obtain administrator-level access to all client machines in the network.

Fortunately, Microsoft Local Administrator Password Solution (LAPS) is an easy-to-deploy tool that fully automates password management for local accounts. Once installed on the machine, LAPS will set the local admin account password to a random string and write it to a confidential attribute of the corresponding computer account in AD. During deployment, your team can specify computers to be managed and which users will be able to retrieve passwords from AD—for example, the helpdesk team accessing a client computer’s credentials.

Microsoft Defender for Endpoint tracks LAPS configuration on endpoints and can be found in Vulnerability management > Security recommendations.

This screenshot shows a security recommendation on Microsoft Defender for Endpoint called Enable Local Admin password management is active. This reveals that 8,000 devices out of 50,000 devices are exposed.

Figure 3: LAPS security recommendations page in the Microsoft 365 Defender portal.

For a detailed report on your devices, run the following query in Advanced Hunting:

DeviceTvmSecureConfigurationAssessment  
| where ConfigurationId == "scid-84" 
| where OSPlatform == "Windows10" 
| where IsCompliant == 0 
| project DeviceName, OSPlatform

A similar report can be found in Microsoft Defender for Cloud Apps with Defender for Identity integration. It also tracks LAPS deployment from an AD perspective by highlighting computer objects that did not have their LAPS password updated in the last 60 days. Although both reports provide similar information, they draw on different sources, so the two reports can be used to cross-check LAPS deployment status.
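Because the two reports draw on different sources, cross-checking them can be as simple as a set comparison over exported device lists. A minimal Python sketch, with hypothetical device names standing in for the two exports:

```python
# Hypothetical exports: device names each report flags as missing LAPS.
defender_endpoint_flagged = {"client1", "client3", "client7"}
defender_identity_flagged = {"client3", "client7", "client9"}

# Devices only one source flags deserve a closer look: they may indicate
# stale data in one report or a partial LAPS deployment.
only_endpoint = defender_endpoint_flagged - defender_identity_flagged
only_identity = defender_identity_flagged - defender_endpoint_flagged

# Devices both sources agree are exposed are the highest-confidence gaps.
confirmed_gaps = defender_endpoint_flagged & defender_identity_flagged

print(sorted(confirmed_gaps))
```

Devices in `confirmed_gaps` are the clearest remediation candidates; the one-sided sets point at reporting or deployment inconsistencies worth investigating.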

Defender for Endpoint customers can view all activities being monitored and configure custom detections for suspicious local administrator account behavior. For example, the following query detects local admin usage over the network: 

DeviceLogonEvents 
| where AccountSid endswith '-500' and parse_json(AdditionalFields).IsLocalLogon != true 
| join kind=leftanti IdentityLogonEvents on AccountSid // Remove the domain's built-in admin account 

Your team can also block local admin accounts’ access over the network by adding the Local account and member of Administrators group (S-1-5-114) entity to Deny access to this computer from the network GPO setting. This will further complicate an attacker’s lateral movement, as well as cover any possible extra local admin accounts available on the machine, since LAPS can only cover one account per device.

3. Restricting inbound traffic with Windows Defender Firewall

Our experience has shown that this last mitigation is often overlooked. By simply removing the ability to connect from one computer to another, this mitigation provides a simple and robust way to make lateral movement more difficult for an attacker.

Host-based firewalls may have a reputation for being difficult to manage, but blocking inbound traffic on Windows clients using Windows Defender Firewall is not a tedious task. Most client-server applications initiate network communication from the client side and don’t expect any inbound connections initiated from the servers. For this mitigation to work, Windows Defender Firewall must be set to block all inbound connections (unless specifically allowed by one of the rules). It is key to disable local firewall rule merging, since failure to do so will negate the effect of this mitigation. For details on Windows Defender Firewall configuration, please check the Pass-the-Hash Mitigations whitepaper1 for a GPO approach or the Microsoft Intune documentation.

Screenshot of Windows Defender Firewall interface with the firewall enabled for the Domain, Private, and Public profiles with the same settings across all profiles. All inbound connections are blocked unless specifically allowed by one of the rules; all outbound connections are allowed unless specifically blocked by one of the rules.

Figure 4: Windows Defender Firewall settings for mitigating lateral movement.

Once initial configuration is done, it’s crucial to identify any applications that were overlooked and did not receive exceptions to accept inbound connections. This is where Defender for Endpoint can help by significantly expanding firewall monitoring and reporting capabilities. Once Windows Defender Firewall is set to block inbound connections on a test group of devices, your team can easily start analyzing firewall logs for any misconfigurations.  

The Reports section in the Microsoft 365 Defender portal has a built-in firewall report with all the information needed. Each report section contains an Advanced hunting button that shows the relevant query and allows you to dive deeper into the data. 

Sample report from the Defender for Endpoint portal Reports section showing statistics of connections blocked by Windows Firewall. The page contains a graph of the number of blocked inbound connections, a chart of the top local ports among blocked inbound connections, and tables of the top processes initiating blocked connections, blocked connections per computer, and remote IPs with the most connection attempts.

Figure 5: Remote IPs targeting multiple computers report in Microsoft 365 Defender portal’s Reports page.

In this example, the most relevant report is Remote IPs targeting multiple computers. The existing query can easily be adjusted to only include test devices: 

DeviceEvents 
| where DeviceName in ("testdevice1.contoso.com", "testdevice2.contoso.com") 
| where ActionType == "FirewallInboundConnectionBlocked" 
| summarize ConnectionsBlocked = count() by RemoteIP 
| sort by ConnectionsBlocked 

Once IP addresses returned by the query are verified as legitimate applications requiring inbound access to client computers (such as remote management software or any peer-to-peer applications), then the firewall configuration can be adjusted to include these IP addresses as exclusions. For extra reporting flexibility, a Power BI firewall report can be connected to Defender for Endpoint.
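The same summarize-and-sort logic can be reproduced outside the portal over exported firewall events, for example when building a custom report. Here is a Python sketch with a made-up event format (the field names mirror the Advanced Hunting schema, but the export itself and the allowlist are hypothetical):

```python
from collections import Counter

# Hypothetical exported events: (DeviceName, ActionType, RemoteIP).
events = [
    ("testdevice1.contoso.com", "FirewallInboundConnectionBlocked", "10.0.0.5"),
    ("testdevice1.contoso.com", "FirewallInboundConnectionBlocked", "10.0.0.5"),
    ("testdevice2.contoso.com", "FirewallInboundConnectionBlocked", "10.0.0.8"),
    ("testdevice2.contoso.com", "ConnectionSuccess", "10.0.0.9"),
]

# IPs already verified as legitimate (e.g., a remote management server).
allowlist = {"10.0.0.8"}

# Count blocked inbound connections per remote IP, skipping allowlisted IPs.
blocked = Counter(
    remote_ip
    for _, action, remote_ip in events
    if action == "FirewallInboundConnectionBlocked" and remote_ip not in allowlist
)

# Most-blocked IPs first, mirroring `sort by ConnectionsBlocked` in the query.
for ip, count in blocked.most_common():
    print(ip, count)
```

IPs that surface repeatedly and are not on the allowlist are the candidates to investigate before adjusting firewall exclusions.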

Learn more

At Microsoft, we believe that the mitigations outlined in this article can significantly improve your security posture and reduce the threat of lateral movement in your environment. Using Microsoft 365 Defender can help you in the process.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


1Mitigating Pass-the-Hash Attacks and Other Credential Theft, Microsoft. July 7, 2014.

The post How to prevent lateral movement attacks using Microsoft 365 Defender appeared first on Microsoft Security Blog.

]]>
Discover the anatomy of an external cyberattack surface with new RiskIQ report http://approjects.co.za/?big=en-us/security/blog/2022/04/21/discover-the-anatomy-of-an-external-cyberattack-surface-with-new-riskiq-report/ Thu, 21 Apr 2022 16:00:00 +0000 Learn how supply chains, shadow IT, and other factors are growing the external attack surface—and where you need to defend your enterprise.

The post Discover the anatomy of an external cyberattack surface with new RiskIQ report appeared first on Microsoft Security Blog.

]]>
The internet is now part of the network. That might sound like hyperbole, but the massive shift to hybrid and remote work and a multicloud environment means security teams must now defend their entire online ecosystem. Recent ransomware attacks against internet-facing systems have served as a wake-up call. Now that Zero Trust has become the gold standard for enterprise security, it’s critical that organizations gain a complete picture of their attack surface—both external and internal.

Microsoft acquired RiskIQ in 2021 to help organizations assess the security of their entire digital enterprise.1 Powered by the RiskIQ Internet Intelligence Graph, organizations can discover and investigate threats across the components, connections, services, IP-connected devices, and infrastructure that make up their attack surface to create a resilient, scalable defense.2 For security teams, such a task might seem like trying to boil the ocean. So, in this post, I’ll help you put things in perspective with five things to remember when managing external attack surfaces. Learn more in the full RiskIQ report.

Your attack surface grows with the internet

In 2020, the amount of data on the internet hit 40 zettabytes, or 40 trillion gigabytes.3 RiskIQ found that every minute, 117,298 hosts and 613 domains are added.4 Each of these web properties contains underlying operating systems, frameworks, third-party applications, plugins, tracking codes, and more, so the potential attack surface increases exponentially.

Some of these threats never traverse the internal network. In the first quarter of 2021, 611,877 unique phishing sites were detected,5 with 32 domain-infringement events and 375 total new threats emerging per minute.4 These types of threats target employees and customers alike with rogue assets and malicious links, all while phishing for sensitive data that can erode brand confidence and harm consumer trust.

Every minute, RiskIQ detects:4

  • 15 expired services (susceptible to subdomain takeover)
  • 143 open ports

A remote workforce brings new vulnerabilities

The COVID-19 pandemic accelerated digital growth. Almost every organization has expanded its digital footprint to accommodate a remote or hybrid workforce. The result: attackers now have more access points to exploit. The use of remote-access technologies like Remote Desktop Protocol (RDP) and VPN skyrocketed by 41 percent and 33 percent, respectively, as the pandemic pushed organizations to adopt work-from-home policies.6

Along with the dramatic rise in RDP and VPN usage came dozens of new vulnerabilities giving attackers new footholds. RiskIQ has surfaced thousands of vulnerable instances of the most popular remote access and perimeter devices, and the torrential pace shows no sign of slowing. Overall, the National Institute of Standards and Technology (NIST) reported 18,378 such vulnerabilities in 2021.7

Attack surfaces hide in plain sight

With the rise of human-operated ransomware, security teams have learned to look for smarter, more insidious threats coming from outside the firewall. Headline-grabbing cyberattacks such as the 2020 NOBELIUM attack have shown that the supply chain is especially vulnerable. But threats can also sneak in from third parties, such as business partners or controlled and uncontrolled apps. Most organizations lack a complete view of their internet assets and how they connect to the global attack surface. Contributing to this lack of visibility are three vulnerability factors:

  • Shadow IT: Unmanaged and orphaned assets form an Achilles heel in today’s enterprise security. This aptly named shadow IT leaves your security team in the dark. New RiskIQ customers typically find approximately 30 percent more assets than they thought they had, and RiskIQ detects 15 expired services and 143 open ports every minute.4
  • Mergers and acquisitions (M&A): Ordinary business operations and critical initiatives such as M&A, strategic partnerships, and outsourcing all create and expand external attack surfaces. Today, less than 10 percent of M&A deals contain cybersecurity due diligence.8
  • Supply chains: Modern supply chains create a complicated web of third-party relationships. Many of these are beyond the purview of security and risk teams. As a result, identifying vulnerable digital assets can be a challenge.

A lack of visibility into these hidden dependencies has made third-party attacks one of the most effective vectors for threat actors. In fact, 53 percent of organizations have experienced at least one data breach caused by a third party.9

Ordinary apps can target organizations and their customers

Americans now spend more time on mobile devices than watching live TV.10 With this demand has come a massive proliferation of mobile apps: global app store downloads grew to 230 billion in 2021.11 These apps act as a double-edged sword—helping to drive business outcomes while creating a significant attack surface beyond the reach of security teams.

Threat actors have been quick to catch on. Seeing an opening, they began to produce rogue apps that mimic well-known brands or pretend to be something they’re not. The massive popularity of rogue flashlight apps is one noteworthy example.12 Once an unsuspecting user downloads the malicious app, threat actors can use it to deploy phishing scams or upload malware to users’ devices. RiskIQ blocklists a malicious mobile app every five minutes.

Adversaries are part of an organization’s attack surface, too

Today’s internet attack surface forms an entwined ecosystem that we’re all part of—good guys and bad guys alike. Threat groups now recycle and share infrastructure (IPs, domains, and certificates) and borrow each other’s tools, such as malware, phish kits, and command and control (C2) components. The rise of crimeware as a service (CaaS) makes it particularly difficult to attribute a crime to a particular individual or group because the means and infrastructure are shared among multiple bad actors.13

More than 560,000 new pieces of malware are detected every day.14 In 2020 alone, the number of detected malware variants rose by 74 percent.15 RiskIQ now detects a Cobalt Strike C2 server every 49 minutes.3 For all these reasons, tracking external threat infrastructure is just as important as tracking your own.

The way forward

The traditional security strategy has been a defense-in-depth approach, starting at the perimeter and layering back to protect internal assets. But in today’s world of ubiquitous connectivity, users—and an increasing amount of digital assets—often reside outside the perimeter. Accordingly, a Zero Trust approach to security is proving to be the most effective strategy for defending today’s decentralized enterprise.

To learn more, read Anatomy of an external attack surface: Five elements organizations should monitor. Stay on top of evolving security issues by visiting Microsoft’s Security Insider for insightful articles, threat reports, and much more.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


1Microsoft acquired RiskIQ to strengthen cybersecurity of digital transformation and hybrid work, Eric Doerr. July 12, 2021.

2Episode 37, “Uncovering the threat landscape,” Steve Ginty, Director Threat Intelligence at RiskIQ, Ben Ben-Aderet, GRSEE. November 29, 2021.

3How big is the internet, and how do we measure it? HealthIT.

4The 2021 Evil Internet Minute, RiskIQ.

5Number of unique phishing sites detected worldwide from 3rd quarter 2013 to 1st Quarter 2021, Joe Johnson. July 20, 2021.

6RDP and VPN use skyrocketed since coronavirus onset, Catalin Cimpanu. March 29, 2020.

7With 18,378 vulnerabilities reported in 2021, NIST records fifth straight year of record numbers, Jonathan Greig. December 8, 2021.

8Top Five Cyber Risks in Mergers & Acquisitions, Ian McCaw.

9Mitigating Third-Party Cyber Risk with Secure Halo, Secure Halo.

10Americans Now Spend More Time Using Apps Than Watching Live TV, Tyler Lee. January 13, 2021.

11App Annie: Global app stores’ consumer spend up 19% to $170B in 2021, downloads grew 5% to 230B, Sarah Perez. January 12, 2022.

12The Top Ten Mobile Flashlight Applications Are Spying On You. Did You Know? Gary S. Miliefsky. October 1, 2014.

13The Crimeware-as-a-Service model is sweeping over the cybercrime world. Here’s why, Pierluigi Paganini. October 16, 2020.

14Malware Statistics & Trends Report, AV-TEST. April 12, 2022.

15Malware statistics and facts for 2022, Sam Cook. February 18, 2022.

The post Discover the anatomy of an external cyberattack surface with new RiskIQ report appeared first on Microsoft Security Blog.

]]>
Align your security and network teams to Zero Trust security demands http://approjects.co.za/?big=en-us/security/blog/2022/01/10/align-your-security-and-network-teams-to-zero-trust-security-demands/ Mon, 10 Jan 2022 18:00:00 +0000 Get expert advice on how to bridge gaps between your SOC and NOC and enable Zero Trust security in today’s rapidly evolving threat landscape.

The post Align your security and network teams to Zero Trust security demands appeared first on Microsoft Security Blog.

]]>
The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Security Product Marketing Manager Natalia Godyla talks with Jennifer Minella, Founder and Principal Advisor on Network Security at Viszen Security, about strategies for aligning the security operations center (SOC) and network operations center (NOC) to meet the demands of Zero Trust and protect your enterprise.

Natalia: In your experience, why are there challenges bringing together networking and security teams?

Jennifer: Ultimately, it’s about trust. As someone who’s worked on complex network-based security projects, I’ve had plenty of experience sitting between those two teams. Often the security teams have an objective, which gets translated into specific technical mandates, or even a specific product. As in, we need to achieve X, Y, and Z level security; therefore, the networking team should just go make this product work. That causes friction because sometimes the networking team didn’t get a voice in that.

Sometimes it’s not even the right product or technology for what the actual goal was, but it’s too late at that point because the money is spent. Then it’s the networking team that looks bad when they don’t get it working right. It’s much better to bring people together to collaborate, instead of one team picking a solution.

Natalia: How does misalignment between the SOC and NOC impact the business?

Jennifer: When there’s an erosion of trust and greater friction, it makes everything harder. Projects take longer. Decisions take longer. That lack of collaboration can also introduce security gaps. I have several examples, but I’m going to pick healthcare here. Say the Chief Information Security Officer’s (CISO) team believes that their bio-medical devices are secured a certain way from a network perspective, but that’s not how they’re secured. Meaning, they’re secured at a lower level that would not be sufficient based on how the CISO and the compliance teams were tracking it. So, there’s this misalignment, miscommunication. Not that it’s malicious; nobody is doing it on purpose, but requirements aren’t communicated well. Sometimes there’s a lack of clarity about whose responsibility it is, and what those requirements are. Even within larger organizations, it might not be clear what the actual standards and processes are that support that policy from the perspective of governance, risk, and compliance (GRC).

Natalia: So, what are a few effective ways to align the SOC and NOC?

Jennifer: If you can find somebody that can be a third party—somebody that’s going to come in and help the teams collaborate and build trust—it’s invaluable. It can be someone who specializes in organizational health or a technical third party; somebody like me sitting in the middle who says, “I understand what the networking team is saying. I hear you. And I understand what the security requirements are. I get it.” Then you can figure out how to bridge that gap and get both teams collaborating with bi-directional communication, instead of security just mandating that this thing gets done.

It’s also about the culture—the interpersonal relationships involved. It can be a problem if one team is picked (to be in charge) instead of another. Maybe it’s the SOC team versus the NOC team, and the SOC team is put in charge; therefore, the NOC team just gives up. It might be better to go with a neutral internal person instead, like a program manager or a digital-transformation leader—somebody who owns a program or a project but isn’t tied to the specifics of security or network architecture. Building that kind of cross-functional team between departments is a good way to solve problems.

There isn’t a wrong way to do it if everybody is being heard. Emails are not a great way to accomplish communication among teams. But getting people together, outlining what the goal is, and working towards it, that’s preferable to just having discrete decision points and mandates. Here’s the big goal—what are some ideas to get from point A to point B? That’s something we must do moving into Zero Trust strategies.

Natalia: Speaking of Zero Trust, how does Zero Trust figure into an overarching strategy for a business?

Jennifer: I describe Zero Trust as a concept. It’s more of a mindset, like “defense in depth,” “layered defense,” or “concepts of least privilege.” Trying to put it into a fixed model or framework is what’s leading to a lot of the misconceptions around the Zero Trust strategy. For me, getting from point A to point B with organizations means taking baby steps—identifying gaps, use cases, and then finding the right solutions.

A lot of people assume Zero Trust is this granular one-to-one relationship of every element on the network. Meaning, every user, every endpoint, every service, and application data set is going to have a granular “allow or deny” policy. That’s not what we’re doing right now. Zero Trust is just a mindset of removing inherent trust. That could mean different things, for example, it could be remote access for employees on a virtual private network (VPN), or it could be dealing with employees with bring your own device (BYOD). It could mean giving contractors or people with elevated privileges access to certain data sets or applications, or we could apply Zero Trust principles to secure workloads from each other.

Natalia: And how does Secure Access Service Edge (SASE) differ from Zero Trust?

Jennifer: Zero Trust is not a product. SASE, on the other hand, is a suite of products and services put together to help meet Zero Trust architecture objectives. SASE is a service-based product offering that has a feature set. It varies depending on the manufacturer, meaning, some will give you these three features and some will give you another five or eight. Some are based on endpoint technology, some are based on software-defined wide area network (SD-WAN) solutions, while some are cloud routed.

Natalia: How does the Zero Trust approach fit with the network access control (NAC) strategy?

Jennifer: I jokingly refer to Zero Trust as “NAC 4.0.” I’ve worked in the NAC space for over 15 years, and it’s just a few new variables. But they’re significant variables. Working with cloud-hosted resources in cloud-routed data paths is fundamentally different than what we’ve been doing in local area network (LAN) based systems. But if you abstract that—the concepts of privilege, authentication, authorization, and data paths—it’s all the same. I lump the vendors and types of solutions into two different categories: cloud-routed versus traditional on-premises (for a campus environment). The technologies are drastically different between those two use cases. For that reason, the enforcement models are different and will vary with the products.

Natalia: How do you approach securing remote access with a Zero Trust mindset? Do you have any guidelines or best practices?

Jennifer: It’s alarming how many organizations set up VPN remote access so that users are added onto the network as if they were sitting in their office. For a long time that was accepted because, before the pandemic, there was a limited number of remote users. Now, remote access, in addition to the cloud, is more prevalent. There are many people with personal devices or some type of blended, corporate-managed device. It’s a recipe for disaster.

The threat surface has increased exponentially, so you need to be able to go back in and use a Zero Trust product in a kind of enclave model, which works a lot like a VPN. You set up access at a point (wherever the VPN is) and the users come into that. That’s a great way to start and you can tweak it from there. Your users access an agent or a platform that will stay with them through that process of tweaking and tuning. It’s impactful because users are switching from a VPN client to a kind of a Zero Trust agent. But they don’t know the difference because, on the back end, the access is going to be restricted. They’re not going to miss anything. And there’s lots of modeling engines and discovery that products do to map out who’s accessing what, and what’s anomalous. So, that’s a good starting point for organizations.

Natalia: How should businesses think about telemetry? How can security and networking teams best use it to continue to keep the network secure?

Jennifer: You need to consider the capabilities of visibility, telemetry, and discovery on endpoints. You’re not just looking at what’s on the endpoint—we’ve been doing that—but what the endpoint is talking to on the internet when it’s not behind the traditional perimeter. Things like secure web gateways, or solutions like a cloud access security broker (CASB), which further extends that from an authentication standpoint, data pathing with SD-WAN routing—all of that plays in.

Natalia: What is a common misconception about Zero Trust?

Jennifer: You don’t have to boil the ocean with this. We know from industry reports, analysts, and the National Institute of Standards and Technology (NIST) that there’s not one product that’s going to meet all the Zero Trust requirements. So, it makes sense to chunk things into discrete programs and projects that have boundaries, then find a solution that works for each. Zero Trust is not about rip and replace.

The first step is overcoming that mental hurdle of feeling like you must pick one product that will do everything. If you can aggregate that a bit and find a product that works for two or three, that’s awesome, but it’s not a requirement. A lot of organizations are trying to research everything ad nauseum before they commit to anything. But this is a volatile industry, and it’s likely that with any product’s features, the implementation is going to change drastically over the next 18 months. So, if you’re spending nine months researching something, you’re not going to get the full benefit in longevity. Just start with something small that’s palatable from a resource and cost standpoint.

Natalia: What types of products work best in helping companies take a Zero Trust approach?

Jennifer: A lot of requirements stem from the organization’s technological culture. Meaning, is it on-premises or a cloud environment? I have a friend that was a CISO at a large hospital system, which required having everything on-premises. He’s now a CISO at an organization that has zero on-premises infrastructure; they’re completely in the cloud. It’s a night-and-day change for security. So, you’ve got that, combined with trying to integrate with what’s in the environment currently. Because typically these systems are not greenfield, they’re brownfield—we’ve got users and a little bit of infrastructure and applications, and it’s a matter of upfitting those things. So, it just depends on the organization. One may have a set of requirements and applications that are newer and based on microservices. Another organization might have more on-premises legacy infrastructure architectures, and those aren’t supported in a lot of cloud-native and cloud-routed platforms.

Natalia: So, what do you see as the future for the SOC and NOC?

Jennifer: I think the message moving forward is—we must come together. And it’s not just networking and security; there are application teams to consider as well. It’s the same with IoT. These are transformative technologies. Whether it’s the combination of operational technology (OT) and IT, or the prevalence of IoT in the environment, or Zero Trust initiatives, all of these demand cross-functional teams for trust building and collaboration. That’s the big message.

Learn more

Get key resources from Microsoft Zero Trust strategy decision makers and deployment teams. To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Align your security and network teams to Zero Trust security demands appeared first on Microsoft Security Blog.

]]>
Adopting a Zero Trust approach throughout the lifecycle of data http://approjects.co.za/?big=en-us/security/blog/2021/11/17/adopting-a-zero-trust-approach-throughout-the-lifecycle-of-data/ Wed, 17 Nov 2021 17:00:13 +0000 Encrypting data—at rest, in transit, and in use—is critical in preparation for a potential breach of your data center.

The post Adopting a Zero Trust approach throughout the lifecycle of data appeared first on Microsoft Security Blog.

]]>
Instead of believing everything behind the corporate firewall is safe, the Zero Trust model assumes breach and verifies each request as though it originates from an uncontrolled network. Regardless of where the request originates or what resource it accesses, Zero Trust teaches us to “never trust, always verify.”

At Microsoft, we consider Zero Trust an essential component of any organization’s security plan based on these three principles:

  1. Verify explicitly: Always authenticate and authorize based on all available data points, including user identity, location, device health, service or workload, data classification, and anomalies.
  2. Use least privileged access: Limit user access with just-in-time (JIT) and just-enough-access (JEA), risk-based adaptive policies, and data protection to protect both data and productivity.
  3. Assume breach: Minimize blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defenses.

In this article, we will focus on the third principle (assume breach) and how encryption and data protection play a significant role in getting prepared for a potential breach in your data center.

Protect data with end-to-end encryption

As part of a comprehensive security posture, data should always be encrypted so that if an attacker is able to intercept customer data, they are unable to decipher usable information.

End-to-end encryption is applied throughout the following three stages: at rest, in transit, and in use.

Three icons representing data at rest, in transit, and in use.

Data protection is critical across all three of these stages, so let’s dive a little deeper into how each stage works and how it can be implemented.

Protect data at rest

Encryption at rest provides data protection for stored data. Attacks against data at rest include attempts to obtain physical access to the hardware on which the data is stored and then compromise the contained data. In such an attack, a server’s hard drive may be mishandled during maintenance, allowing an attacker to remove it. Later, the attacker would put the hard drive into a computer under their control to attempt to access the data.

Encryption at rest is designed to prevent the attacker from accessing the unencrypted data by ensuring the data is encrypted when on disk. If an attacker obtains a hard drive with encrypted data but not the encryption keys, the attacker must defeat the encryption to read the data. This attack is much more complex and resource-consuming than accessing unencrypted data on a hard drive. For this reason, encryption at rest is highly recommended and is a high priority requirement for many organizations.
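The idea can be illustrated with a deliberately simplified Python sketch: without the key, the bytes on disk are unreadable. This is a toy construction for illustration only, not a real cipher; production encryption at rest relies on vetted algorithms such as AES.

```python
import hashlib

def toy_keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' for illustration only; NOT cryptographically sound.
    Real encryption at rest uses vetted algorithms such as AES."""
    stream = b""
    counter = 0
    # Derive a keystream from the key; XOR it with the data.
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

plaintext = b"customer record: account 42"
key = b"secret-key-held-in-a-vault"

ciphertext = toy_keystream_xor(plaintext, key)
assert ciphertext != plaintext                           # unreadable on disk
assert toy_keystream_xor(ciphertext, key) == plaintext   # key recovers it
assert toy_keystream_xor(ciphertext, b"wrong-key") != plaintext
```

An attacker who obtains only the ciphertext (the stolen disk) gains nothing useful; the security of the scheme rests entirely on keeping the key out of their hands, which is why key management is covered below.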

Flow chart of Microsoft Azure Key Vault encryption process.

At rest, it is important that your data is protected through disk encryption, which enables IT administrators to encrypt entire virtual machine (VM) or operating system (OS) disks.

One of the concerns we hear from customers is how to reduce the chances that certificates, passwords, and other secrets accidentally get leaked. A best practice is to store application secrets centrally in a secured vault to have full control of their distribution. When using a secured vault, application developers no longer need to store security information in their applications, which reduces risk by eliminating the need to make this information part of the code.

Data encryption at rest is a mandatory step toward data privacy, compliance, and data sovereignty. These Microsoft Azure security services are recommended for this purpose:

  • Azure Storage Service Encryption: Microsoft Azure Storage uses server-side encryption (SSE) to automatically encrypt your data when it is persisted to the cloud. Azure Storage encryption protects your data to help you to meet your organizational security and compliance commitments.
  • SQL Server Transparent Data Encryption (TDE): Transparent Data Encryption encrypts database files at the page level. The pages in an encrypted database are encrypted before they’re written to disk and decrypted when read into memory.
  • Secrets management: Microsoft Azure Key Vault can be used to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets.
  • Key management: Azure Key Vault can also be used as a key management solution. Azure Key Vault makes it easy to create and control the encryption keys used to encrypt your data.
  • Certificate management: Azure Key Vault lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with Azure and your internal connected resources.
  • Hardware security modules (HSM): Store and protect your secrets and keys either in software or in FIPS 140-2 Level 2 validated HSMs.

Protect data in transit

Data is “in transit” when it is being transferred between different network elements within a data center or between data centers.

Organizations that fail to protect data in transit are more susceptible to man-in-the-middle attacks, eavesdropping, and session hijacking. These attacks can be the first step attackers use to gain access to confidential data.

For example, the recent NOBELIUM cyberattacks show that no one can be 100 percent protected against a breach. During this attack, 18,000 SolarWinds customers were vulnerable, including Fortune 500 companies and multiple agencies in the US government.

Protection for data in transit should cover two independent encryption mechanisms:

  1. Application layer—the HTTPS and TLS encryption that takes place between the client and server nodes.
  2. Data link layer—the encryption applied to frames transferred over the Ethernet protocol, just above the physical connections.

We recommend that customers not only encrypt data at the application layer, but also gain visibility into their data in transit by using TLS inspection capabilities.
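As a small illustration of the application-layer mechanism, the Python standard library's `ssl` module can build a client-side context that verifies server certificates and enforces a TLS 1.2 floor. This sketch makes no network call and is not specific to any Azure service; it simply shows the kind of policy the HTTPS/TLS layer enforces.

```python
# Client-side TLS policy: verify certificates, reject legacy protocol versions.
import ssl

ctx = ssl.create_default_context()            # verifies server certificates
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything below TLS 1.2

assert ctx.verify_mode == ssl.CERT_REQUIRED   # unverified peers are rejected
assert ctx.check_hostname is True             # certificate must match hostname
```

To use the context, wrap an outbound socket with `ctx.wrap_socket(sock, server_hostname=...)`; any server offering only SSLv3 or TLS 1.0/1.1 would fail the handshake.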

As part of TLS inspection, Azure network security services perform full decryption and encryption of the traffic, make it possible to use intrusion detection and prevention systems (IDPS), and provide customers with visibility into the data itself.

To give customers double encryption when sending data between regions, Azure applies data link layer encryption using Media Access Control Security (MACsec).

MACsec is a vendor-independent IEEE standard (802.1AE) that provides data link layer, point-to-point encryption of traffic between network devices. The packets are encrypted and decrypted on the hardware before being sent, which is designed to prevent even a physical man-in-the-middle attack. Because MACsec uses line-rate encryption, it can secure data without the performance overhead and complexity of IP encryption technologies such as IPsec/GRE.

Data in transit is encrypted on the wire to block physical man-in-the-middle attacks.

Whenever Azure customer traffic moves between Azure datacenters—outside physical boundaries not controlled by Microsoft (or on behalf of Microsoft)—data link layer encryption using the IEEE 802.1AE MAC Security standard is applied point to point across the underlying network hardware. The packets are encrypted and decrypted on the devices before being sent, and this encryption is applied by default to all Azure traffic traveling within a region or between regions.

Protect data in use

We often hear from customers that they are concerned about moving extremely sensitive IP and data to the cloud. To effectively protect assets, not only must data be secured at rest and in transit, but data must also be protected from threats while in use.

To protect data in use for services across your software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS) cloud models, we offer two important capabilities: Azure confidential computing and centralized storage of application secrets.

Azure confidential computing encrypts data in memory in hardware-based trusted execution environments (TEEs) and only processes it once the cloud environment is verified, preventing data access from cloud operators, malicious admins, and privileged software such as the hypervisor. By protecting data in use, organizations can achieve the highest levels of data privacy and enable secure multi-party data analytics, without giving access to their data.

These Azure services are recommended to be used for data in use protection:

  1. Application Enclaves: You can optimize for confidentiality at the application level by customizing your app to run in confidential virtual machines with Intel SGX application enclaves, or lift and shift existing applications using an independent software vendor (ISV) partner.
  2. Confidential Virtual Machines: You can optimize for ease of use by moving your existing workloads to Azure and making them confidential without changing any code by leveraging encryption across the entire virtual machine with AMD SEV-SNP or Intel SGX with total memory encryption (TME) technologies.
  3. Trusted Launch: Trusted Launch with secure boot and vTPM ensures your virtual machines boot with legitimate code, helping you protect against advanced and persistent attack techniques such as rootkits and bootkits.
  4. Confidential Containers: Azure Kubernetes Service (AKS) worker nodes are available on confidential computing virtual machines, allowing you to secure your containers with encrypted memory.
  5. Confidential Services: We are continuing to onboard Azure confidential services for use within your solutions, now supporting Azure confidential ledger (in preview), Azure SQL Always Encrypted, Azure Key Vault Managed hardware security modules (HSM), and Microsoft Azure Attestation, all running on Azure confidential computing.

Strengthening your organization’s data protection posture

Protecting your data throughout its lifecycle and wherever it resides or travels is the most critical step to safeguard your business data.

To learn more about the end-to-end implementation of data protection as a critical part of your Zero Trust strategy, visit our Deployment Center.

To see how your organization’s data security posture stacks up against the Zero Trust maturity model, take this interactive quiz.

For more information about a Zero Trust security posture, visit the Microsoft Zero Trust website.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Adopting a Zero Trust approach throughout the lifecycle of data appeared first on Microsoft Security Blog.

]]>
Azure network security helps reduce cost and risk according to Forrester TEI study http://approjects.co.za/?big=en-us/security/blog/2021/10/12/azure-network-security-helps-reduce-cost-and-risk-according-to-forrester-tei-study/ Tue, 12 Oct 2021 16:00:33 +0000 As organizations move their computing from on-premises to the cloud, they realize that leveraging cloud-native security tools can provide additional cost savings and business benefits to their security infrastructure. Azure network security offers a suite of cloud-native security tools to protect Azure workloads while automating network management, implementing developer security operations (DevSecOps) practices, and reducing the risk of a material security breach.

The post Azure network security helps reduce cost and risk according to Forrester TEI study appeared first on Microsoft Security Blog.

]]>
As organizations move their computing from on-premises to the cloud, they realize that leveraging cloud-native security tools can provide additional cost savings and business benefits to their security infrastructure. Microsoft Azure network security offers a suite of cloud-native security tools to protect Azure workloads while automating network management, implementing developer security operations (DevSecOps) practices, and reducing the risk of a material security breach.

We are excited to share that Forrester Consulting has conducted a commissioned Total Economic Impact™ (TEI) study on behalf of Microsoft, interviewing existing customers who have deployed Azure network security. The study also provides a framework organizations can use to evaluate the potential financial impact of Azure network security on their own environments.

The Forrester study concluded that a composite organization experienced benefits of $2.23 million over three years versus costs of $840.3 thousand, adding up to a net present value (NPV) of $1.39 million and a return on investment (ROI) of 165 percent. The study shows that Azure network security delivers:

  • Increased speed of delivering development projects by one month or 67 percent.
  • Reduced total cost of on-premises security tools by 25 percent.
  • Reduced risk of a security breach by 30 percent.
  • Improved efficiency of network-related IT work by 73 percent.

The study concluded that the composite organization reduced their total cost of ownership related to security infrastructure, established DevSecOps processes, reduced their risk of material security breaches, and reduced the burden on IT to manage networks and upgrades, allowing these teams to focus on more strategic workstreams.

Productivity gains with Azure network security

Azure network security enabled organizations to implement infrastructure-as-code practices, incorporating security directly into application development workflows and speeding development and time-to-market of applications. With the adoption of DevSecOps workflows, security became an enabler of development speed rather than a gate.

Graphic depicting development speed acceleration at 3 times.

“We’re seeing tremendous speed spinning stuff up in cloud. We have given the application team more reach, where in our on-premises data centers, it was difficult to get access to security appliances with different teams doing different workstreams. With Azure, we’re able to use Azure Resource Manager (ARM) templates.”–Chief solutions architect, technology.

Cost savings

Organizations reduced their total cost of ownership of on-premises security tools by 25 percent when protecting 20 percent of their organization’s total computing with Azure network security. Interviewees saved costs directly tied to decommissioned on-premises security tools, as well as the time previously spent maintaining that infrastructure and managing vendors.

Graphic depicting 25 percent reduced in total cost.

“We were able to more cost-effectively use Azure security to manage our workloads in the cloud and reduced the footprint of additional agents or services for our cloud, which is clearly different than on-premises data centers.”–Chief solutions architect, technology.

Risk reduction

Azure network security provides automated network security upgrades and improved visibility of the environment. This improves the overall security environment of Azure workloads and reduces the likelihood of experiencing external and internal costs associated with a breach.

Graphic depicting reduced risk of security breach by 30 percent.

“There is no doubt Azure network security improved our security posture. I feel far more comfortable and sleep much better at night having our Azure estate protected by Azure network security as opposed to the combination of what we had on-premises.”–Vice President of applications and infrastructure, education.

Efficiency gains

Azure network security improved the efficiency of IT teams delivering network-related work, reducing firewall management by 80 percent, security policy management by 15 percent, and the security audit process by 96 percent.

Graphic depicting time for security audit reduced by 96 percent.

“Before Azure network security, we had an outage where we were managing calls with three vendors: three different IT teams, three systems’ support, three sets of account managers. There was a lot of finger-pointing and, in the end, the issue was never even resolved. Now, everything is resolved in a matter of hours.”–Chief solutions architect, technology.

Read the study and get started today

Read the full Forrester TEI study for Azure network security online, or download the full study as a PDF.

To learn more about the Azure network security portfolio of cloud-native services, visit the Azure network security website.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Azure network security helps reduce cost and risk according to Forrester TEI study appeared first on Microsoft Security Blog.

]]>