Network security Insights | Microsoft Security Blog http://approjects.co.za/?big=en-us/security/blog/topic/network-security/ Expert coverage of cybersecurity topics. Thu, 12 Sep 2024 21:14:54 +0000

The four stages of creating a trust fabric with identity and network security http://approjects.co.za/?big=en-us/security/blog/2024/06/04/the-four-stages-of-creating-a-trust-fabric-with-identity-and-network-security/ Tue, 04 Jun 2024 16:00:00 +0000

The trust fabric journey has four stages of maturity for organizations working to evaluate, improve, and evolve their identity and network access security posture.

The post The four stages of creating a trust fabric with identity and network security appeared first on Microsoft Security Blog.



At Microsoft, we’re continually evolving our solutions for protecting identities and access to meet the ever-changing security demands our customers face. In a recent post, we introduced the concept of the trust fabric: a real-time approach to securing access that is adaptive and comprehensive. In this blog post, we’ll explore how any organization, large or small, can chart its own path toward establishing its own digital trust fabric. We’ll share how customers can secure access for any trustworthy identity, signing in from anywhere, to any app or resource on-premises and in any cloud. While every organization is at a different stage in its security journey, with different priorities, we’ll break down the trust fabric journey into distinct maturity stages and provide guidance to help customers prioritize their own identity and network access improvements.

Graphic showing the four stages for creating a trust fabric.

Stage 1: Establish Zero Trust access controls

“Microsoft enabled secure access to data from any device and from any location. The Zero Trust model has been pivotal to achieve the desired configuration for users, and Conditional Access has helped enable it.”

Arshaad Smile, Head of Cloud Security, Standard Bank of South Africa 

This first stage is all about your core identity and access management solutions and practices. It’s about securing identities, preventing external attacks, and verifying explicitly with strong authentication and authorization controls. Today, identity is the first line of defense and the most attacked surface area. In 2022, Microsoft tracked 1,287 password attacks every second. In 2023 we saw a dramatic increase, with an average of more than 4,000 password attacks per second.1

To prevent identity attacks, Microsoft recommends a Zero Trust security strategy, grounded in the following three principles—verify explicitly, ensure least-privilege access, and assume breach. Most organizations start with identity as the foundational pillar of their Zero Trust strategies, establishing essential defenses and granular access policies. Those essential identity defenses include:

  • Single sign-on for all applications to unify access policies and controls.
  • Phishing-resistant multifactor authentication or passwordless authentication to verify every identity and access request.
  • Granular Conditional Access policies to check user context and enforce appropriate controls before granting access.

In fact, Conditional Access is the core component of an effective Zero Trust strategy. Serving as a unified Zero Trust access policy engine, it reasons over all available user context signals, such as device health or risk, and decides whether to grant access, require multifactor authentication, monitor the session, or block access.
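As an illustration of the decision flow described above, here is a minimal policy-engine sketch. This is not the Conditional Access implementation; the signal names, types, and decision values are simplified assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_risk: str          # hypothetical signal: "low" | "medium" | "high"
    device_compliant: bool  # hypothetical signal from device management
    mfa_satisfied: bool     # whether strong authentication was already performed

def evaluate(request: AccessRequest) -> str:
    """Reason over available signals and return an access decision."""
    if request.user_risk == "high":
        return "block"            # assume breach: deny high-risk sign-ins
    if not request.device_compliant:
        return "block"            # verify explicitly: require a healthy device
    if not request.mfa_satisfied:
        return "require_mfa"      # step up authentication before granting access
    return "grant"

# A low-risk sign-in from a compliant device with MFA completed is granted.
decision = evaluate(AccessRequest("low", True, True))
```

A real policy engine evaluates many more signals (location, session risk, app sensitivity), but the shape of the decision is the same.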

Recommended resources—Stage 1

For organizations in this stage of their journey, we’re detailing a few recommendations to make it easier to adopt and advance Zero Trust security fundamentals:

  1. Implement phishing-resistant multifactor authentication for your organization to protect identities from compromise.
  2. Deploy the recommended Conditional Access policies, customize Microsoft-managed policies, and add your own. Test in report-only mode. Mandate strong, phishing-resistant authentication for any scenario.
  3. Check your Microsoft Entra recommendations and Identity Secure Score to measure your organization’s identity security posture and plan your next steps. 

Stage 2: Secure access for your hybrid workforce

Once your organization has established foundational defenses, the next priority is expanding Zero Trust strategy by securing access for your hybrid workforce. Flexible work models are now mainstream, and they pose new security challenges as boundaries between corporate networks and open internet are blurred. At the same time, many organizations increasingly have a mix of modern cloud applications and legacy on-premises resources, leading to inconsistent user experiences and security controls.

The key concept for this stage is Zero Trust user access. It’s about advanced protection that extends Zero Trust principles to any resource, while making it possible to securely access any application or service from anywhere. At the second stage of the trust fabric journey, organizations need to:                          

  1. Unify Conditional Access across identity, endpoint, and network, and extend it to on-premises apps and internet traffic so that every access point is equally protected.
  2. Enforce least-privilege access to any app or resource—including AI—so that only the right users can access the right resources at the right time.
  3. Minimize dependency on legacy on-premises security tools, such as traditional VPNs, firewalls, and governance tools, that don’t scale to the demands of cloud-first environments and lack protections against sophisticated cyberattacks.

A great outcome of those strategies is a much improved user experience: any application can now be made available from anywhere, with a familiar, consistent sign-in experience.

Recommended resources—Stage 2

Here are key recommendations to secure access for your employees:

  1. Converge identity and network access controls and extend Zero Trust access controls to on-premises resources and the open internet.
  2. Automate lifecycle workflows to simplify access reviews and ensure least privilege access.
  3. Replace legacy solutions such as basic secure web gateways (SWGs), firewalls, and legacy VPNs.

Stage 3: Secure access for customers and partners

With Zero Trust user access in place, organizations need to also secure access for external users including customers, partners, business guests, and more. Modern customer identity and access management (CIAM) solutions can help create user-centric experiences that make it easier to securely engage with customers and collaborate with anyone outside organizational boundaries—ultimately driving positive business outcomes.

In this third stage of the journey towards an identity trust fabric, it’s essential to:

  1. Protect external identities with granular Conditional Access policies, fraud protection, and identity verification to make sure security teams know who those external users are.
  2. Govern external identities and their access to ensure that they only access resources that they need, and don’t keep access when it’s no longer needed.
  3. Create user-centric, frictionless experiences to make it easier for external users to follow your security policies.
  4. Simplify developer experiences so that any new application has strong identity controls built-in from the start.
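The governance steps above can be sketched as a simple access-review check: flag any external user whose access has expired or who has gone inactive. This is an illustrative sketch, not a Microsoft Entra API; the guest records, field names, and 90-day inactivity threshold are all assumptions.

```python
from datetime import date, timedelta

# Hypothetical guest records: (name, last_sign_in, access_expires)
guests = [
    ("partner-a", date(2024, 5, 1), date(2024, 12, 31)),
    ("contractor-b", date(2023, 11, 2), date(2024, 6, 30)),
]

def needs_review(last_sign_in: date, expires: date, today: date,
                 inactivity_limit: timedelta = timedelta(days=90)) -> bool:
    """Flag a guest whose access has expired or who has gone inactive."""
    return today > expires or (today - last_sign_in) > inactivity_limit

today = date(2024, 7, 1)
flagged = [name for name, seen, exp in guests if needs_review(seen, exp, today)]
```

In practice, a lifecycle workflow would run checks like this on a schedule and automatically remove or re-certify the flagged access.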

Recommended resources—Stage 3

  1. Learn how to extend your Zero Trust foundation to external identities. Protect your customers and partners against identity compromise.
  2. Set up your governance for external users. Implement strong access governance including lifecycle workflows for partners, contractors, and other external users.
  3. Protect customer-facing apps. Customize and control how customers sign up and sign in when using your applications.

Stage 4: Secure access to resources in any cloud

The journey towards an organization’s trust fabric is not complete without securing access to resources in multicloud environments. Cloud-native services depend on their ability to access other digital workloads, which means billions of applications and services connect to each other every second. Workload identities already outnumber human identities by 10 to 1, and the number of workload identities will only grow.2 Plus, 50% of total identities are super identities, which have access to all permissions and all resources, and 70% of those super identities are workload identities.3

Managing access across clouds is complex, and challenges like fragmented role-based access control (RBAC) systems, limited scalability of on-premises Privileged Access Management (PAM) solutions, and compliance breaches are common. These issues are exacerbated by the growing adoption of cloud services from multiple providers. Organizations typically use seven to eight different products to address these challenges. But many still struggle to attain complete visibility into their cloud access.
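One way to picture the visibility gap described above is a permissions gap analysis: compare what each identity is granted across clouds with what it actually uses. The inventory below is invented for illustration; real tooling would aggregate this data from each cloud provider’s access logs.

```python
# Hypothetical permission inventory aggregated from multiple clouds.
granted = {
    "svc-backup": {"storage.read", "storage.write", "vm.delete", "iam.admin"},
    "alice":      {"storage.read"},
}
used = {
    "svc-backup": {"storage.read", "storage.write"},
    "alice":      {"storage.read"},
}

def unused_permissions(identity: str) -> set:
    """Permissions granted but never exercised: candidates for removal."""
    return granted[identity] - used.get(identity, set())

# Identities with unused permissions are over-provisioned and worth remediating.
risky = {identity for identity in granted if unused_permissions(identity)}
```

Here the workload identity `svc-backup` holds destructive and administrative permissions it never uses, which is exactly the over-provisioning pattern the multicloud reports describe.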

Graphic that shows the progression of steps for how to discover, detect, enforce, and automate with Microsoft Entra.

We’re envisioning the future of cloud access management as a unified platform that will deliver comprehensive visibility into permissions and risk for all identities, human and workload, and will secure access to any resource in any cloud. In the meantime, we recommend the key actions below for organizations in the fourth stage of their journey towards the trust fabric.

Read our recent blog titled “Securing access to any resource, anywhere” to learn more about our vision for Cloud Access Management.

Recommended resources—Stage 4

As we work towards making this vision a reality, customers today can get started on their stage four trust fabric journey by learning more about multicloud risk, getting visibility, and remediating over-provisioned permissions across clouds. Check out the following resources to learn more.

  1. Understand multicloud security risks from the 2024 State of Multicloud Security Risk Report.
  2. Get visibility into the cloud permissions assigned to and used by all identities across multiple clouds, and remediate risky permissions.
  3. Protect workload-to-workload interactions by securing workload identities and their access to cloud resources.

Accelerate your trust fabric with Generative AI capabilities and skills

To increase efficiency, speed, and scale, many organizations are looking to AI to help augment existing security workflows. Microsoft Entra and Microsoft Copilot for Security work together at machine speed, integrating with an admin’s daily workflow to prioritize and automate, understand cyberthreats in real time, and process large volumes of data.

Copilot skills and capabilities embedded in Microsoft Entra help admins to:

  • Discover high-risk users, overprivileged access, and suspicious sign-ins.
  • Investigate identity risks and help troubleshoot daily identity tasks.
  • Get instant risk summaries, steps to remediate, and recommended guidance for each identity at risk.
  • Create lifecycle workflows to streamline the process of provisioning user access and eliminating configuration gaps.

Copilot is informed by large-scale data and threat intelligence, including the more than 78 trillion security signals processed by Microsoft each day, and coupled with large language models to deliver tailored insights and guide next steps. Learn more about how Microsoft Copilot for Security can help support your trust fabric maturity journey.


Microsoft is here to help

No matter where you are on your trust fabric journey, Microsoft can help you with the experience, resources, and expertise at every stage. The Microsoft Entra family of identity and network access solutions can help you create a trust fabric for securing access for any identity, from anywhere, to any app or resource across on-premises and clouds. The products listed below work together to prevent identity attacks, enforce least privilege access, unify access controls, and improve the experience for users, admins, and developers.

Graph showing the functions of Microsoft Entra and which product is key to each function.

Learn more about securing access across identity, endpoint, and network to accelerate your organization’s trust fabric implementation on our new identity and network access solution page.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2023.

2How do cloud permission risks impact your organization?, Microsoft.

32024 State of Multicloud Security Risk Report, Microsoft.

New Windows 11 features strengthen security to address evolving cyberthreat landscape http://approjects.co.za/?big=en-us/security/blog/2024/05/20/new-windows-11-features-strengthen-security-to-address-evolving-cyberthreat-landscape/ Mon, 20 May 2024 18:00:00 +0000 Today, ahead of the Microsoft Build 2024 conference, we announced a new class of Windows computers, Copilot+ PC. Alongside this exciting new class of computers, we are introducing important security features and updates that make Windows 11 more secure for users and organizations, and give developers the tools to prioritize security.

The post New Windows 11 features strengthen security to address evolving cyberthreat landscape appeared first on Microsoft Security Blog.

Ahead of the Microsoft Build 2024 conference, we announced a new class of Windows computers, Copilot+ PC. Alongside this exciting new class of PCs, we are introducing important security features and updates that make Windows 11 more secure for users and organizations and give developers the tools to prioritize security.

Today’s threat landscape is unlike any we’ve seen before. Attacks are growing in speed, scale, and sophistication. In 2015, our identity systems were detecting around 115 password attacks per second. Less than a decade later, that number has surged 3,378% to more than 4,000 password attacks per second.1 This landscape requires stronger and more comprehensive security approaches than ever before, across all devices and technologies we use in our lives both at home and at work.

Cybersecurity at the forefront of all we do

We’ve had a longstanding commitment to security in Windows. Several years ago, when we saw cyberattackers increasingly exploiting hardware, we introduced the Secured-core PC to help secure that critical layer of computing, from chip to cloud.

As we’ve seen identity-based cyberattacks increase at an alarming rate over the years, we’ve expanded our passwordless offerings quickly and broadly. In September 2023, we announced expanded passkey support with cross-device authentication, and have continued to build on that momentum. Earlier this month we announced passkey support for Microsoft consumer accounts and for device-bound passkeys in the Microsoft Authenticator app for iOS and Android users, expanding our support of this industry initiative backed by the FIDO Alliance. Passkeys on Windows are protected by Windows Hello technology that encompasses both Windows Hello and Windows Hello for Business. This latest step builds on nearly a decade of critical work strengthening Windows Hello to give users easier and more secure sign-in options and eliminate points of vulnerability.

Earlier this month we expanded our Secure Future Initiative (SFI), making it clear that we are prioritizing security above all else. SFI, a commitment we shared first in November 2023, prioritizes designing, building, testing, and operating our technology in a way that helps to ensure secure and trustworthy product and service delivery. With these commitments in mind, we’ve not only built new security features into Windows 11, but we’ve also doubled down on security features that will be turned on by default. Our goal remains simple: make it easy to stay safe with Windows. 

Today we are sharing exciting updates that make Windows more secure out of the box, by design and by default.


Modern, secure hardware

We believe security is a team sport. We are working in close partnership with our Original Equipment Manufacturer (OEM) partners to complement OEM security features and deliver more secure devices out of the box.

While Secured-core PCs were once considered specialized devices for those handling sensitive data, now Windows users can benefit from enhanced security and AI on one device. We announced that all Copilot+ PCs will be Secured-core PCs, bringing advanced security to both commercial and consumer devices. In addition to the layers of protection in Windows 11, Secured-core PCs provide advanced firmware safeguards and dynamic root-of-trust measurement to help protect from chip to cloud. 


The Microsoft Pluton security processor will be enabled by default on all Copilot+ PCs. Pluton is a chip-to-cloud security technology, designed by Microsoft and built by silicon partners, with Zero Trust principles at the core. It helps protect credentials, identities, personal data, and encryption keys, making them significantly harder to extract, even if a cyberattacker installs malware or has physical possession of the PC.

All Copilot+ PCs will also ship with Windows Hello Enhanced Sign-in Security (ESS). This provides more secure biometric sign ins and eliminates the need for a password. ESS provides an additional level of security to biometric data by leveraging specialized hardware and software components, such as virtualization-based security (VBS) and Trusted Platform Module 2.0 to help isolate and protect authentication data and secure the channel on which it is communicated. ESS is also available on other compatible Windows 11 devices.

Stay ahead of evolving threats with Windows

To enhance user security from the start, we’re continuously updating security measures and enabling new defaults within Windows.

Windows 11 is designed with layers of security enabled by default, so you can focus on your work, not your security settings. Out-of-the-box features such as credential safeguards, malware shields, and application protection led to a reported 58% drop in security incidents, including a 3.1 times reduction in firmware attacks. In Windows 11, hardware and software work together to help shrink the attack surface, protect system integrity, and shield valuable data.2 


Credential and identity theft is a prime focus of cyberattackers. Windows Hello, Windows Hello for Business, and passkeys are effective multifactor authentication solutions. But as more people enable multifactor authentication, cyberattackers are moving away from simple password-based attacks and focusing their energy on other types of credential theft. We have been working to make this more difficult with our latest updates:

  • Local Security Authority protection: Windows has several critical processes to verify a user’s identity, including the Local Security Authority (LSA). LSA authenticates users and verifies Windows sign-ins, handling tokens and credentials, such as passwords, that are used for single sign-on to Microsoft accounts and Microsoft Azure services. LSA protection, previously on by default for all new commercial devices, is now also enabled by default for new consumer devices. For new consumer devices and for users upgrading where it has not previously been enabled, LSA protection will enter a grace period. LSA protection prevents LSA from loading untrusted code and prevents untrusted processes from accessing LSA memory, offering significant protection against credential theft.3
  • NT LAN Manager (NTLM) deprecation: Ending the use of NTLM has been a huge ask from our security community as it will strengthen authentication. NTLM is being deprecated, meaning that, while supported, it is no longer under active feature development. We are introducing new features and tools to ease customers’ transitions to stronger authentication protocols.
  • Advancing key protection in Windows using VBS: Now available in public preview for Windows Insiders, this feature helps to offer a higher security bar than software isolation, with stronger performance compared to hardware-based solutions, since it is powered by the device’s CPU. While hardware-backed keys offer strong levels of protection, VBS is helpful for services with high security, reliability, and performance requirements.
  • Windows Hello hardening: With Windows Hello technology being extended to protect passkeys, if you are using a device without built-in biometrics, Windows Hello has been further hardened by default to use VBS to isolate credentials, protecting from admin-level attacks.

We have also prioritized helping users know which apps and drivers can be trusted, to better protect people from phishing attacks and malware. Windows is both creating new inbox capabilities and providing more features for the Windows app developer community to help strengthen app security.

  • Smart App Control: Now available and on by default on select new systems where it can provide an optimal experience, Smart App Control has been enhanced with AI learning. Using an AI model based on the 78 trillion security signals Microsoft collects each day, this feature can predict if an app is safe. The policy keeps common, known-to-be-safe apps running while unknown, malware-connected apps are blocked. This is incredibly effective protection against malware.
  • Trusted Signing: Unsigned apps pose significant risks. In fact, Microsoft research has revealed that a lot of malware comes in the form of unsigned apps. The best way to ensure seamless compatibility with Smart App Control is to sign your app. Signing contributes to its trustworthiness and helps ensure that an existing “good reputation” will be inherited by future app updates, making it less likely to be blocked inadvertently by threat detection systems. Recently moved into public preview, Trusted Signing makes this process simpler by managing every aspect of the certificate lifecycle. And it integrates with popular development tooling like Azure DevOps and GitHub.
  • Win32 app isolation: A new security feature, currently in preview, Win32 app isolation makes it easier for Windows app developers to contain damage and safeguard user privacy choices in the event of an application compromise. Win32 app isolation is built on the foundation of AppContainers, which offer a security boundary, and components that virtualize resources and provide brokered access to other resources—like printer, registry, and file access. Win32 app isolation is close to general availability thanks to feedback from our developer community. App developers can now use Win32 app isolation with seamless Visual Studio integration.
  • Making admin users more secure: Most people run as full admins on their devices, which means apps and services have the same access to the kernel and other critical services as users. And the problem is that these apps and services can access critical resources without the user knowing. This is why Windows is being updated to require just-in-time administrative access to the kernel and other critical services as needed, not all the time, and certainly not by default. This makes it harder for an app to unexpectedly abuse admin privileges and secretly put malware or malicious code on Windows. When this feature is enabled and an app needs special permissions like admin rights, you’ll be asked for approval. When an approval is needed, Windows Hello provides a secure and easy way to approve or deny these requests, giving you, and only you, full control over your device. Currently in private preview, this will be available in public preview soon.
  • VBS enclaves: Previously available to Windows security features only, VBS enclaves are now available to third-party application developers. This software-based trusted execution environment within a host application’s address space offers deep operating system protection of sensitive workloads, like data decryption. Try the VBS enclave APIs to experience how the enclave is shielded from both other system processes and the host application itself. This results in more security for your sensitive workloads.
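Win32 app isolation and AppContainers are Windows platform features with their own native APIs; as a language-neutral illustration of the brokered-access idea behind them, the sketch below routes an isolated app’s file requests through a broker that enforces the user’s choices. All class names, paths, and behaviors here are hypothetical.

```python
class Broker:
    """Mediates an isolated app's access to resources it cannot touch directly."""

    def __init__(self, approved_paths: set):
        self.approved = approved_paths  # access the user explicitly granted

    def read(self, path: str) -> str:
        if path not in self.approved:
            # Even compromised app code cannot escalate past the broker.
            raise PermissionError(f"access to {path} was not granted")
        return f"<contents of {path}>"  # placeholder for the real file read

# The isolated app only ever sees resources the broker approves.
broker = Broker(approved_paths={"C:/Users/me/Documents/report.docx"})
ok = broker.read("C:/Users/me/Documents/report.docx")
```

The security boundary comes from the fact that the app process holds no direct capability to open files; every request crosses the broker, which is where policy is enforced.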

As we see cyberattackers come up with new strategies and targets, we continue to harden Windows code to address where bad actors are spending their time and energy.

  • Windows Protected Print: In late 2023, we launched Windows Protected Print Mode to build a more modern and secure print system that maximizes compatibility and puts users first. This will be the default print mode in the future.
  • Tool tips: In the past, tool tips have been exploited, leading to unauthorized access to memory. In older Windows versions, tool tips were managed as a single window for each desktop, established by the kernel and recycled for displaying any tool tip. We are revamping how tool tips work to be more secure for users. With the updated approach, the responsibility for managing the lifecycle of tool tips has been transferred to the respective application that is being used. Now, the kernel monitors cursor activity and initiates countdowns for the display and concealment of tool tip windows. When these countdowns conclude, the kernel notifies the user-level environment to either generate or eliminate a tool tip window.
  • TLS server authentication: TLS (Transport Layer Security) server authentication certificates verify the server’s identity to a client and ensure secure connections. While 1024-bit RSA encryption keys were previously supported, advancements in computing power and cryptanalysis require that Windows no longer trust these weak key lengths by default. As a result, TLS certificates with RSA keys less than 2048 bits chaining to roots in the Microsoft Trusted Root Program will not be trusted.
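The new default can be expressed as a one-line policy check on the RSA modulus length. This sketch only illustrates the threshold; in reality the check happens during certificate chain validation, not as a standalone function.

```python
MIN_RSA_BITS = 2048  # RSA keys below this length are no longer trusted by default

def trust_rsa_key(modulus_bits: int, min_bits: int = MIN_RSA_BITS) -> bool:
    """Mirror the policy: distrust RSA certificate keys shorter than 2048 bits."""
    return modulus_bits >= min_bits

# A legacy 1024-bit certificate key would no longer be trusted by default.
legacy_ok = trust_rsa_key(1024)
modern_ok = trust_rsa_key(2048)
```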

Lastly, with each Windows release we add more levers for commercial customers to lock down Windows within their environment.

  • Config Refresh: Config Refresh allows administrators to set a schedule for devices to reapply policy settings without needing to check in to Microsoft Intune or other mobile device management vendors, helping to ensure settings remain as configured by the IT admin. It can be set to refresh every 90 minutes by default or as frequently as every 30 minutes. There is also an option to pause Config Refresh for a configurable period, useful for troubleshooting or maintenance, after which it will automatically resume or can be manually reactivated by an administrator.
  • Firewall: The Firewall Configuration Service Provider (CSP) in Windows now enforces an all-or-nothing application of firewall rules from each atomic block of rules. Previously, if the CSP encountered an issue applying any rule from a block, it would not only stop that rule but also cease to process subsequent rules, leaving a potential security gap with partially deployed rule blocks. Now, if any rule in the block cannot be applied successfully to the device, the CSP will stop processing subsequent rules and all rules from that same atomic block will be rolled back, eliminating the ambiguity of partially deployed rule blocks.
  • Personal Data Encryption (PDE): PDE enhances security by encrypting data and only decrypting it when the user unlocks their PC using Windows Hello for Business. PDE enables two levels of data protection: Level 1, where data remains encrypted until the PC is first unlocked, and Level 2, where files are encrypted whenever the PC is locked. PDE complements BitLocker’s volume-level protection and provides dual-layer encryption for personal or app data when paired with BitLocker. PDE is in preview now, and developers can leverage the PDE API to protect their app content, enabling IT admins to manage protection using their mobile device management solution.
  • Zero Trust DNS: Now in private preview, this feature will natively restrict Windows devices to connecting only to approved network destinations by domain name. Outbound IPv4 and IPv6 traffic is blocked and won’t reach the intended destination unless a trusted, protected DNS server resolves it or an IT admin configures an exception. Plan now to avoid blocking issues by configuring apps and services to use the system DNS resolver.
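The Firewall CSP’s all-or-nothing behavior described above can be sketched as a transactional apply with rollback. The rule strings and the failure condition are invented for illustration; the real CSP works on its own rule schema.

```python
def apply_rule_block(active_rules: list, block: list, apply_one) -> bool:
    """Apply an atomic block of firewall rules: either all succeed or none remain."""
    applied = []
    for rule in block:
        try:
            apply_one(rule)  # may reject a malformed or conflicting rule
        except ValueError:
            # Roll back everything from this block: no partial deployment.
            for done in reversed(applied):
                active_rules.remove(done)
            return False
        active_rules.append(rule)
        applied.append(rule)
    return True

# Hypothetical applier that rejects malformed rules.
def apply_one(rule: str) -> None:
    if "bad" in rule:
        raise ValueError(rule)

rules = []
ok = apply_rule_block(rules, ["allow 443", "bad rule", "allow 80"], apply_one)
```

After the failed block, `rules` is empty again: the earlier "allow 443" was rolled back rather than left half-deployed, which is the ambiguity the CSP change eliminates.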

Explore the new Windows 11 security features

We truly believe that security is a team sport. By partnering with OEMs, app developers and others in the ecosystem—along with helping people to be better at protecting themselves—we are delivering a Windows that is more secure by design and secure by default. The Windows Security Book is available to help you learn more about what makes it easy for users to stay secure with Windows.

Learn more about Windows 11.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Password Guidance, Microsoft Identity Protection Team. 2016.

2Windows 11 Survey Report, Techaisle. February 2022.

3Users can manage their LSA protection state in the Windows Security Application under Device Security -> Core Isolation -> Local Security Authority.

How to prevent lateral movement attacks using Microsoft 365 Defender http://approjects.co.za/?big=en-us/security/blog/2022/10/26/how-to-prevent-lateral-movement-attacks-using-microsoft-365-defender/ Wed, 26 Oct 2022 16:00:00 +0000 Learn how Microsoft 365 Defender can enhance mitigations against lateral movement paths in your environment, stopping attackers from gaining access to privileged and sensitive accounts.

The post How to prevent lateral movement attacks using Microsoft 365 Defender appeared first on Microsoft Security Blog.

It’s been 10 years since the first version of the Mitigating Pass-the-Hash Attacks and Other Credential Theft whitepaper was made available, but the techniques are still relevant today, because they help prevent attackers from gaining a network foothold and using credential-dumping tools to extract password hashes, user credentials, or Kerberos tickets from local memory.1 With those tools in hand, an attacker could move laterally in the network to obtain the credentials of more privileged accounts. All this leads to their ultimate goal—access to your sensitive business data, the Active Directory (AD) database, crucial business applications, and more.

In this blog post, we’ll look at the three fundamental mitigations for preventing lateral movement and how Microsoft 365 Defender can help your team achieve maximum effectiveness from each mitigation:

  1. Restricting privileged domain accounts.
  2. Restricting and protecting local accounts with administrator privileges.
  3. Restricting inbound traffic using Windows Defender Firewall.

1. Restricting privileged domain accounts

Segmenting privileged domain accounts can be achieved by implementing the tier model. The tier model helps mitigate credential theft by segregating your AD environment into three tiers of varying privilege and access. Creating separate tiers cuts off lateral movement from a standard user workstation to an application server or domain controller: if a standard user account’s machine is compromised and its password hashes are obtained by an attacker, there is no movement path toward more sensitive accounts and servers. The three tiers are numbered 0 to 2, with 0 being the most restricted:

  • Tier 0: All accounts and servers in this tier are either domain administrators or have a direct path to domain administrator privileges. Examples of servers include domain controllers, AD servers, and any management server for applications and agents running on Tier 0 servers. For an account to be considered Tier 0, it does not have to be a member of domain administrators; having privileged access to any Tier 0 server or application (through things like access control lists and User Right Assignments) will also classify an account as Tier 0. 
  • Tier 1: In most cases, Tier 1 will contain the most business-critical applications. All accounts and servers in this tier are either running enterprise applications or have permissions on servers running applications. Examples include file shares, application servers, and database servers.
  • Tier 2: This tier can be thought of as any account or machine that does not fall into either of the other tiers. This is where normal user workstations will reside, as well as standard user accounts. 
A simplified schematic of an IT environment split into three zones: Tier 0 with domain controllers, Tier 1 with servers and applications, and Tier 2 with users and workstation systems. The zones are separated by red dotted lines.

Figure 1: Tier model for Active Directory.

For the tier model to function as intended, the different tiers must be completely segregated from each other. This can be accomplished by creating Group Policy Objects (GPOs) that deny signing in across tiers. No account can be allowed to cross the tier boundaries. For example, an administrator on Tier 0 should be denied access to a Tier 1 or Tier 2 machine. If credentials are exposed to another tier, the password must be reset for that account.
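The cross-tier sign-in rule described above can be sketched in a few lines. This is an illustrative model only, with hypothetical account and machine tier assignments; it is not how GPO logon-rights evaluation actually works:

```python
# Illustrative sketch of the tier model's sign-in rule: an account may only
# sign in to machines in its own tier. Tier assignments here are hypothetical.

ACCOUNT_TIER = {"da-admin": 0, "app-svc": 1, "jsmith": 2}
MACHINE_TIER = {"dc01": 0, "appsrv07": 1, "client5": 2}

def sign_in_allowed(account: str, machine: str) -> bool:
    """Deny any sign-in that crosses a tier boundary."""
    return ACCOUNT_TIER[account] == MACHINE_TIER[machine]

assert sign_in_allowed("da-admin", "dc01")        # Tier 0 admin on a DC: allowed
assert not sign_in_allowed("da-admin", "client5") # Tier 0 admin on a workstation: denied
assert not sign_in_allowed("jsmith", "appsrv07")  # standard user on a Tier 1 server: denied
```

Because the check is symmetric, a compromised Tier 2 workstation never yields a session that is valid on a Tier 0 or Tier 1 machine, which is exactly the movement path the model is designed to cut.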

Using Privileged Access Workstations (PAWs) also helps mitigate lateral movement. Because an account in one tier can only sign in to computers in the same tier, users with more than one account in the domain must use separate computers. A Tier 0 user should use a PAW to access only Tier 0 assets. But the person who owns the Tier 0 account should not use the same machine for checking their email or running productivity applications (a Tier 2 activity).

Note: Read-level access to higher tiers is still allowed for all users because this is crucial for AD authentication and for users to access applications.

As explained earlier, if an attacker can harvest the credentials of any of the accounts in a lateral movement path, they will be able to move laterally to gain the credentials of the sensitive account at its end. One way to spot such paths in your environment is to use Microsoft Defender for Identity. By correlating data from account sessions, local admins on machines, and group memberships, Defender for Identity can quickly identify the lateral movement paths for each sensitive account so your team can break them.

Simple graph with two nodes representing two users and an arrow linking them. The first node represents User 4 and the second an admin user. A computer icon above the link indicates that User 4 is an admin on machine client 5, where the admin user is signed in.

Figure 2: Lateral movement path view from Microsoft Defender for Identity portal.

By default, Defender for Identity classifies certain groups and their members as sensitive, while providing functionality to add more accounts and groups to the classification if needed. The goal is to break the possible attack paths (see Figure 2) by removing local administrators, denying access, or separating accounts.

2. Restricting and protecting local accounts with administrator privileges

Local admin access opens up vast credential harvesting and lateral movement possibilities, making local admins a prime target for attackers. To make matters worse, local admin management and monitoring are sometimes overlooked. Often the local administrator password is set once for all machines in the organization during the operating system deployment, including machines used by administrators. When local admin passwords are not randomized across client machines, an attacker can compromise a local account password on one machine and automatically obtain administrator-level access to all client machines in the network.

Fortunately, Microsoft Local Administrator Password Solution (LAPS) is an easy-to-deploy tool that fully automates password management for local accounts. Once installed on a machine, LAPS sets the local admin account password to a random string and writes it to a confidential attribute of the corresponding computer account in AD. During deployment, your team can specify which computers are to be managed and which users will be able to retrieve passwords from AD—for example, the helpdesk team accessing a client computer’s credentials.
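The reason per-machine randomization works is easy to see in miniature: when every machine’s local admin password is independently random, a hash stolen from one machine is useless against the others. The sketch below illustrates the concept only; LAPS itself generates the password on the managed machine, enforces expiry, and stores it in a confidential AD attribute, and the host names here are hypothetical:

```python
import secrets
import string

# Conceptual sketch only: LAPS, not this code, manages real passwords.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def rotate_local_admin_password(length: int = 24) -> str:
    """Generate an independent, cryptographically random password for one machine."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Each machine gets its own value, so a hash dumped from client1 cannot be
# replayed against client2 or client3.
passwords = {host: rotate_local_admin_password() for host in ("client1", "client2", "client3")}
assert len(set(passwords.values())) == len(passwords)
```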

Microsoft Defender for Endpoint tracks LAPS configuration on endpoints; the corresponding recommendation can be found under Vulnerability management > Security recommendations.

Screenshot of a Microsoft Defender for Endpoint security recommendation, Enable Local Admin password management, showing that 8,000 of 50,000 devices are exposed.

Figure 3: LAPS security recommendations page in the Microsoft 365 Defender portal.

For a detailed report on your devices, run the following query in Advanced Hunting:

DeviceTvmSecureConfigurationAssessment  
| where ConfigurationId == "scid-84" 
| where OSPlatform == "Windows10" 
| where IsCompliant == 0 
| project DeviceName, OSPlatform

A similar report can be found in Microsoft Defender for Cloud Apps with Defender for Identity integration. It tracks LAPS deployment from an AD perspective by highlighting computer objects that have not had their LAPS password updated in the last 60 days. Although both reports provide similar information, it is obtained from different sources, so the two can be used to cross-check LAPS deployment status.

Defender for Endpoint customers can view all activities being monitored and configure custom detections for suspicious local administrator account behavior. For example, the following query detects local admin usage over the network: 

DeviceLogonEvents 
| where AccountSid endswith '-500' and parse_json(AdditionalFields).IsLocalLogon != true 
| join kind=leftanti IdentityLogonEvents on AccountSid // Remove the domain's built-in admin account 

Your team can also block local admin accounts’ access over the network by adding the Local account and member of Administrators group (S-1-5-114) entity to Deny access to this computer from the network GPO setting. This will further complicate an attacker’s lateral movement, as well as cover any possible extra local admin accounts available on the machine, since LAPS can only cover one account per device.

3. Restricting inbound traffic with Windows Defender Firewall

Our experience has shown that this last mitigation is often overlooked. By removing the ability to connect from one computer to another, it provides a simple and robust way to make lateral movement more difficult for an attacker.

Host-based firewalls may have a reputation for being difficult to manage, but blocking inbound traffic on Windows clients using Windows Defender Firewall is not a tedious task. Most client-server applications initiate network communication from the client side and don’t expect any inbound connections initiated from the servers. But for this mitigation to work, Windows Defender Firewall must be set to block all inbound connections unless specifically allowed by one of the rules. It is key to disable local firewall rule merging, since failure to do so will negate the effect of this mitigation. For details on Windows Defender Firewall configuration, please check the Pass-the-Hash Mitigations whitepaper1 for a GPO approach or the Microsoft Intune documentation.
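For reference, the same default-deny inbound posture can be expressed from an elevated command prompt. The commands below are a sketch of the equivalent netsh configuration; in practice you would deploy these settings through GPO or Intune as described above, so verify the exact syntax against your Windows version:

```
rem Block all inbound connections unless allowed by a rule; allow outbound by default
netsh advfirewall set allprofiles firewallpolicy blockinbound,allowoutbound

rem Disable merging of locally defined firewall rules so only centrally managed rules apply
netsh advfirewall set allprofiles settings localfirewallrules disable
```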

Screenshot of the Windows Defender Firewall interface with the firewall enabled for the Domain, Private, and Public profiles, with the same settings across all profiles: all inbound connections are blocked unless specifically allowed by one of the rules, and all outbound connections are allowed unless specifically blocked by one of the rules.

Figure 4: Windows Defender Firewall settings for mitigating lateral movement.

Once initial configuration is done, it’s crucial to identify any applications that were overlooked and did not receive exceptions to accept inbound connections. This is where Defender for Endpoint can help by significantly expanding firewall monitoring and reporting capabilities. Once Windows Defender Firewall is set to block inbound connections on a test group of devices, your team can easily start analyzing firewall logs for any misconfigurations.  

The Reports section in the Microsoft 365 Defender portal has a built-in firewall report with all the information needed. Each report section contains an Advanced hunting button that shows the relevant query and allows you to dive deeper into the data. 

Sample report from the Defender for Endpoint portal’s Reports section showing statistics of connections blocked by Windows Defender Firewall: a graph of blocked inbound connections over time, the top local ports among blocked inbound connections, and tables of the top processes initiating blocked connections, blocked connections per computer, and the remote IPs with the most connection attempts.

Figure 5: Remote IPs targeting multiple computers report in Microsoft 365 Defender portal’s Reports page.

In this example, the most relevant report is Remote IPs targeting multiple computers. The existing query can easily be adjusted to only include test devices: 

DeviceEvents 
| where DeviceName in ("testdevice1.contoso.com", "testdevice2.contoso.com") 
| where ActionType == "FirewallInboundConnectionBlocked" 
| summarize ConnectionsBlocked = count() by RemoteIP 
| sort by ConnectionsBlocked  

Once the IP addresses returned by the query are verified as belonging to legitimate applications that require inbound access to client computers (such as remote management software or peer-to-peer applications), the firewall configuration can be adjusted to include these IP addresses as exclusions. For extra reporting flexibility, a Power BI firewall report can be connected to Defender for Endpoint.
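This triage step, counting blocked inbound connections per remote IP and reviewing the noisiest sources first, mirrors the KQL summarize above and can be prototyped over exported events. The sketch below is illustrative Python over made-up sample data; the field names follow the DeviceEvents schema used in the query:

```python
from collections import Counter

# Hypothetical export of DeviceEvents rows from the test devices.
events = [
    {"DeviceName": "testdevice1.contoso.com", "ActionType": "FirewallInboundConnectionBlocked", "RemoteIP": "10.0.0.8"},
    {"DeviceName": "testdevice1.contoso.com", "ActionType": "FirewallInboundConnectionBlocked", "RemoteIP": "10.0.0.8"},
    {"DeviceName": "testdevice2.contoso.com", "ActionType": "FirewallInboundConnectionBlocked", "RemoteIP": "192.0.2.77"},
    {"DeviceName": "testdevice2.contoso.com", "ActionType": "ConnectionSuccess", "RemoteIP": "10.0.0.9"},
]

def blocked_by_remote_ip(events):
    """Count blocked inbound connections per remote IP, most frequent first."""
    counts = Counter(
        e["RemoteIP"] for e in events
        if e["ActionType"] == "FirewallInboundConnectionBlocked"
    )
    return counts.most_common()

# IPs at the top of the list are the candidates for firewall exceptions,
# once verified as legitimate (e.g., remote management software).
print(blocked_by_remote_ip(events))  # → [('10.0.0.8', 2), ('192.0.2.77', 1)]
```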

Learn more

At Microsoft, we believe that the mitigations outlined in this article can significantly improve your security posture and reduce the threat of lateral movement in your environment. Using Microsoft 365 Defender can help you in the process.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


1Mitigating Pass-the-Hash Attacks and Other Credential Theft, Microsoft. July 7, 2014.

The post How to prevent lateral movement attacks using Microsoft 365 Defender appeared first on Microsoft Security Blog.

]]>
Discover the anatomy of an external cyberattack surface with new RiskIQ report http://approjects.co.za/?big=en-us/security/blog/2022/04/21/discover-the-anatomy-of-an-external-cyberattack-surface-with-new-riskiq-report/ Thu, 21 Apr 2022 16:00:00 +0000 Learn how supply chains, shadow IT, and other factors are growing the external attack surface—and where you need to defend your enterprise.

The post Discover the anatomy of an external cyberattack surface with new RiskIQ report appeared first on Microsoft Security Blog.

]]>
The internet is now part of the network. That might sound like hyperbole, but the massive shift to hybrid and remote work and a multicloud environment means security teams must now defend their entire online ecosystem. Recent ransomware attacks against internet-facing systems have served as a wake-up call. Now that Zero Trust has become the gold standard for enterprise security, it’s critical that organizations gain a complete picture of their attack surface—both external and internal.

Microsoft acquired RiskIQ in 2021 to help organizations assess the security of their entire digital enterprise.1 With the RiskIQ Internet Intelligence Graph, organizations can discover and investigate threats across the components, connections, services, IP-connected devices, and infrastructure that make up their attack surface to create a resilient, scalable defense.2 For security teams, such a task might seem like trying to boil the ocean. So, in this post, I’ll help you put things in perspective with five things to remember when managing external attack surfaces. Learn more in the full RiskIQ report.

Your attack surface grows with the internet

In 2020, the amount of data on the internet hit 40 zettabytes, or 40 trillion gigabytes.3 RiskIQ found that every minute, 117,298 hosts and 613 domains are added.4 Each of these web properties contains underlying operating systems, frameworks, third-party applications, plugins, tracking codes, and more, so the potential attack surface increases exponentially.

Some of these threats never traverse the internal network. In the first quarter of 2021, 611,877 unique phishing sites were detected,5 with 32 domain-infringement events and 375 total new threats emerging per minute.4 These types of threats target employees and customers alike with rogue assets and malicious links, all while phishing for sensitive data that can erode brand confidence and harm consumer trust.

Every minute, RiskIQ detects:4

  • 15 expired services (susceptible to subdomain takeover)
  • 143 open ports

A remote workforce brings new vulnerabilities

The COVID-19 pandemic accelerated digital growth. Almost every organization has expanded its digital footprint to accommodate a remote or hybrid workforce. The result: attackers now have more access points to exploit. The use of remote-access technologies like Remote Desktop Protocol (RDP) and VPN has skyrocketed by 41 percent and 33 percent, respectively, as the pandemic pushed organizations to adopt work-from-home policies.6

Along with the dramatic rise in RDP and VPN usage came dozens of new vulnerabilities giving attackers new footholds. RiskIQ has surfaced thousands of vulnerable instances of the most popular remote access and perimeter devices, and the torrential pace shows no sign of slowing. Overall, the National Institute of Standards and Technology (NIST) reported 18,378 such vulnerabilities in 2021.7

Attack surfaces hide in plain sight

With the rise of human-operated ransomware, security teams have learned to look for smarter, more insidious threats coming from outside the firewall. Headline-grabbing cyberattacks such as the 2020 NOBELIUM attack have shown that the supply chain is especially vulnerable. But threats can also sneak in from third parties, such as business partners or controlled and uncontrolled apps. Most organizations lack a complete view of their internet assets and how they connect to the global attack surface. Contributing to this lack of visibility are three vulnerability factors:

  • Shadow IT: Unmanaged and orphaned assets form an Achilles heel in today’s enterprise security. This aptly named shadow IT leaves your security team in the dark. New RiskIQ customers typically find approximately 30 percent more assets than they thought they had, and RiskIQ detects 15 expired services and 143 open ports every minute.4
  • Mergers and acquisitions (M&A): Ordinary business operations and critical initiatives such as M&A, strategic partnerships, and outsourcing—all of it creates and expands external attack surfaces. Today, less than 10 percent of M&A deals contain cybersecurity due diligence.8
  • Supply chains: Modern supply chains create a complicated web of third-party relationships. Many of these are beyond the purview of security and risk teams. As a result, identifying vulnerable digital assets can be a challenge.

A lack of visibility into these hidden dependencies has made third-party attacks one of the most effective vectors for threat actors. In fact, 53 percent of organizations have experienced at least one data breach caused by a third party.9

Ordinary apps can target organizations and their customers

Americans now spend more time on mobile devices than watching live TV.10 With this demand has come a massive proliferation of mobile apps. Global app store downloads rose to 230 billion worldwide in 2021.11 These apps act as a double-edged sword—helping to drive business outcomes while creating a significant attack surface beyond the reach of security teams.

Threat actors have been quick to catch on. Seeing an opening, they began to produce rogue apps that mimic well-known brands or pretend to be something they’re not. The massive popularity of rogue flashlight apps is one noteworthy example.12 Once an unsuspecting user downloads the malicious app, threat actors can use it to deploy phishing scams or upload malware to users’ devices. RiskIQ blocklists a malicious mobile app every five minutes.

Adversaries are part of an organization’s attack surface, too

Today’s internet attack surface forms an entwined ecosystem that we’re all part of—good guys and bad guys alike. Threat groups now recycle and share infrastructure (IPs, domains, and certificates) and borrow each other’s tools, such as malware, phish kits, and command and control (C2) components. The rise of crimeware as a service (CaaS) makes it particularly difficult to attribute a crime to a particular individual or group because the means and infrastructure are shared among multiple bad actors.13

More than 560,000 new pieces of malware are detected every day.14 In 2020 alone, the number of detected malware variants rose by 74 percent.15 RiskIQ now detects a Cobalt Strike C2 server every 49 minutes.3 For all these reasons, tracking external threat infrastructure is just as important as tracking your own.

The way forward

The traditional security strategy has been a defense-in-depth approach, starting at the perimeter and layering back to protect internal assets. But in today’s world of ubiquitous connectivity, users—and an increasing amount of digital assets—often reside outside the perimeter. Accordingly, a Zero Trust approach to security is proving to be the most effective strategy for defending today’s decentralized enterprise.

To learn more, read Anatomy of an external attack surface: Five elements organizations should monitor. Stay on top of evolving security issues by visiting Microsoft’s Security Insider for insightful articles, threat reports, and much more.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


1Microsoft acquired RiskIQ to strengthen cybersecurity of digital transformation and hybrid work, Eric Doerr. July 12, 2021.

2Episode 37, “Uncovering the threat landscape,” Steve Ginty, Director Threat Intelligence at RiskIQ, Ben Ben-Aderet, GRSEE. November 29, 2021.

3How big is the internet, and how do we measure it? HealthIT.

4The 2021 Evil Internet Minute, RiskIQ.

5Number of unique phishing sites detected worldwide from 3rd quarter 2013 to 1st Quarter 2021, Joe Johnson. July 20, 2021.

6RDP and VPN use skyrocketed since coronavirus onset, Catalin Cimpanu. March 29, 2020.

7With 18,378 vulnerabilities reported in 2021, NIST records fifth straight year of record numbers, Jonathan Greig. December 8, 2021.

8Top Five Cyber Risks in Mergers & Acquisitions, Ian McCaw.

9Mitigating Third-Party Cyber Risk with Secure Halo, Secure Halo.

10Americans Now Spend More Time Using Apps Than Watching Live TV, Tyler Lee. January 13, 2021.

11App Annie: Global app stores’ consumer spend up 19% to $170B in 2021, downloads grew 5% to 230B, Sarah Perez. January 12, 2022.

12The Top Ten Mobile Flashlight Applications Are Spying On You. Did You Know? Gary S. Miliefsky. October 1, 2014.

13The Crimeware-as-a-Service model is sweeping over the cybercrime world. Here’s why, Pierluigi Paganini. October 16, 2020.

14Malware Statistics & Trends Report, AV-TEST. April 12, 2022.

15Malware statistics and facts for 2022, Sam Cook. February 18, 2022.

The post Discover the anatomy of an external cyberattack surface with new RiskIQ report appeared first on Microsoft Security Blog.

]]>
Align your security and network teams to Zero Trust security demands http://approjects.co.za/?big=en-us/security/blog/2022/01/10/align-your-security-and-network-teams-to-zero-trust-security-demands/ Mon, 10 Jan 2022 18:00:00 +0000 Get expert advice on how to bridge gaps between your SOC and NOC and enable Zero Trust security in today’s rapidly evolving threat landscape.

The post Align your security and network teams to Zero Trust security demands appeared first on Microsoft Security Blog.

]]>
The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Security Product Marketing Manager Natalia Godyla talks with Jennifer Minella, Founder and Principal Advisor on Network Security at Viszen Security, about strategies for aligning the security operations center (SOC) and network operations center (NOC) to meet the demands of Zero Trust and protect your enterprise.

Natalia: In your experience, why are there challenges bringing together networking and security teams?

Jennifer: Ultimately, it’s about trust. As someone who’s worked on complex network-based security projects, I’ve had plenty of experience sitting between those two teams. Often the security teams have an objective, which gets translated into specific technical mandates, or even a specific product. As in, we need to achieve X, Y, and Z level security; therefore, the networking team should just go make this product work. That causes friction because sometimes the networking team didn’t get a voice in that.

Sometimes it’s not even the right product or technology for what the actual goal was, but it’s too late at that point because the money is spent. Then it’s the networking team that looks bad when they don’t get it working right. It’s much better to bring people together to collaborate, instead of one team picking a solution.

Natalia: How does misalignment between the SOC and NOC impact the business?

Jennifer: When there’s an erosion of trust and greater friction, it makes everything harder. Projects take longer. Decisions take longer. That lack of collaboration can also introduce security gaps. I have several examples, but I’m going to pick healthcare here. Say the Chief Information Security Officer’s (CISO) team believes that their bio-medical devices are secured a certain way from a network perspective, but that’s not how they’re secured. Meaning, they’re secured at a lower level that would not be sufficient based on how the CISO and the compliance teams were tracking it. So, there’s this misalignment, miscommunication. Not that it’s malicious; nobody is doing it on purpose, but requirements aren’t communicated well. Sometimes there’s a lack of clarity about whose responsibility it is, and what those requirements are. Even within larger organizations, it might not be clear what the actual standards and processes are that support that policy from the perspective of governance, risk, and compliance (GRC).

Natalia: So, what are a few effective ways to align the SOC and NOC?

Jennifer: If you can find somebody that can be a third party—somebody that’s going to come in and help the teams collaborate and build trust—it’s invaluable. It can be someone who specializes in organizational health or a technical third party; somebody like me sitting in the middle who says, “I understand what the networking team is saying. I hear you. And I understand what the security requirements are. I get it.” Then you can figure out how to bridge that gap and get both teams collaborating with bi-directional communication, instead of security just mandating that this thing gets done.

It’s also about the culture—the interpersonal relationships involved. It can be a problem if one team is picked (to be in charge) instead of another. Maybe it’s the SOC team versus the NOC team, and the SOC team is put in charge; therefore, the NOC team just gives up. It might be better to go with a neutral internal person instead, like a program manager or a digital-transformation leader—somebody who owns a program or a project but isn’t tied to the specifics of security or network architecture. Building that kind of cross-functional team between departments is a good way to solve problems.

There isn’t a wrong way to do it if everybody is being heard. Emails are not a great way to accomplish communication among teams. But getting people together, outlining what the goal is, and working towards it, that’s preferable to just having discrete decision points and mandates. Here’s the big goal—what are some ideas to get from point A to point B? That’s something we must do moving into Zero Trust strategies.

Natalia: Speaking of Zero Trust, how does Zero Trust figure into an overarching strategy for a business?

Jennifer: I describe Zero Trust as a concept. It’s more of a mindset, like “defense in depth,” “layered defense,” or “concepts of least privilege.” Trying to put it into a fixed model or framework is what’s leading to a lot of the misconceptions around the Zero Trust strategy. For me, getting from point A to point B with organizations means taking baby steps—identifying gaps, use cases, and then finding the right solutions.

A lot of people assume Zero Trust is this granular one-to-one relationship of every element on the network. Meaning, every user, every endpoint, every service, and application data set is going to have a granular “allow or deny” policy. That’s not what we’re doing right now. Zero Trust is just a mindset of removing inherent trust. That could mean different things, for example, it could be remote access for employees on a virtual private network (VPN), or it could be dealing with employees with bring your own device (BYOD). It could mean giving contractors or people with elevated privileges access to certain data sets or applications, or we could apply Zero Trust principles to secure workloads from each other.

Natalia: And how does Secure Access Service Edge (SASE) differ from Zero Trust?

Jennifer: Zero Trust is not a product. SASE, on the other hand, is a suite of products and services put together to help meet Zero Trust architecture objectives. SASE is a service-based product offering that has a feature set. It varies depending on the manufacturer, meaning, some will give you these three features and some will give you another five or eight. Some are based on endpoint technology, some are based on software-defined wide area network (SD-WAN) solutions, while some are cloud routed.

Natalia: How does the Zero Trust approach fit with the network access control (NAC) strategy?

Jennifer: I jokingly refer to Zero Trust as “NAC 4.0.” I’ve worked in the NAC space for over 15 years, and it’s just a few new variables. But they’re significant variables. Working with cloud-hosted resources in cloud-routed data paths is fundamentally different than what we’ve been doing in local area network (LAN) based systems. But if you abstract that—the concepts of privilege, authentication, authorization, and data paths—it’s all the same. I lump the vendors and types of solutions into two different categories: cloud-routed versus traditional on-premises (for a campus environment). The technologies are drastically different between those two use cases. For that reason, the enforcement models are different and will vary with the products.

Natalia: How do you approach securing remote access with a Zero Trust mindset? Do you have any guidelines or best practices?

Jennifer: It’s alarming how many organizations set up VPN remote access so that users are added onto the network as if they were sitting in their office. For a long time that was accepted because, before the pandemic, there was a limited number of remote users. Now, remote access, in addition to the cloud, is more prevalent. There are many people with personal devices or some type of blended, corporate-managed device. It’s a recipe for disaster.

The threat surface has increased exponentially, so you need to be able to go back in and use a Zero Trust product in a kind of enclave model, which works a lot like a VPN. You set up access at a point (wherever the VPN is) and the users come into that. That’s a great way to start and you can tweak it from there. Your users access an agent or a platform that will stay with them through that process of tweaking and tuning. It’s impactful because users are switching from a VPN client to a kind of a Zero Trust agent. But they don’t know the difference because, on the back end, the access is going to be restricted. They’re not going to miss anything. And there’s lots of modeling engines and discovery that products do to map out who’s accessing what, and what’s anomalous. So, that’s a good starting point for organizations.

Natalia: How should businesses think about telemetry? How can security and networking teams best use it to continue to keep the network secure?

Jennifer: You need to consider the capabilities of visibility, telemetry, and discovery on endpoints. You’re not just looking at what’s on the endpoint—we’ve been doing that—but what the endpoint is talking to on the internet when it’s not behind the traditional perimeter. Things like secure web gateways, or solutions like a cloud access security broker (CASB), which further extends that from an authentication standpoint, data pathing with SD-WAN routing—all of that plays in.

Natalia: What is a common misconception about Zero Trust?

Jennifer: You don’t have to boil the ocean with this. We know from industry reports, analysts, and the National Institute of Standards and Technology (NIST) that there’s not one product that’s going to meet all the Zero Trust requirements. So, it makes sense to chunk things into discrete programs and projects that have boundaries, then find a solution that works for each. Zero Trust is not about rip and replace.

The first step is overcoming that mental hurdle of feeling like you must pick one product that will do everything. If you can aggregate that a bit and find a product that works for two or three, that’s awesome, but it’s not a requirement. A lot of organizations are trying to research everything ad nauseum before they commit to anything. But this is a volatile industry, and it’s likely that with any product’s features, the implementation is going to change drastically over the next 18 months. So, if you’re spending nine months researching something, you’re not going to get the full benefit in longevity. Just start with something small that’s palatable from a resource and cost standpoint.

Natalia: What types of products work best in helping companies take a Zero Trust approach?

Jennifer: A lot of requirements stem from the organization’s technological culture. Meaning, is it on-premises or a cloud environment? I have a friend that was a CISO at a large hospital system, which required having everything on-premises. He’s now a CISO at an organization that has zero on-premises infrastructure; they’re completely in the cloud. It’s a night-and-day change for security. So, you’ve got that, combined with trying to integrate with what’s in the environment currently. Because typically these systems are not greenfield, they’re brownfield—we’ve got users and a little bit of infrastructure and applications, and it’s a matter of upfitting those things. So, it just depends on the organization. One may have a set of requirements and applications that are newer and based on microservices. Another organization might have more on-premises legacy infrastructure architectures, and those aren’t supported in a lot of cloud-native and cloud-routed platforms.

Natalia: So, what do you see as the future for the SOC and NOC?

Jennifer: I think the message moving forward is—we must come together. And it’s not just networking and security; there are application teams to consider as well. It’s the same with IoT. These are transformative technologies. Whether it’s the combination of operational technology (OT) and IT, or the prevalence of IoT in the environment, or Zero Trust initiatives, all of these demand cross-functional teams for trust building and collaboration. That’s the big message.

Learn more

Get key resources from Microsoft Zero Trust strategy decision makers and deployment teams. To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Align your security and network teams to Zero Trust security demands appeared first on Microsoft Security Blog.

]]>
Adopting a Zero Trust approach throughout the lifecycle of data http://approjects.co.za/?big=en-us/security/blog/2021/11/17/adopting-a-zero-trust-approach-throughout-the-lifecycle-of-data/ Wed, 17 Nov 2021 17:00:13 +0000 Encrypting data—at rest, in transit, and in use—is critical in preparation for a potential breach of your data center.

The post Adopting a Zero Trust approach throughout the lifecycle of data appeared first on Microsoft Security Blog.

]]>
Instead of believing everything behind the corporate firewall is safe, the Zero Trust model assumes breach and verifies each request as though it originates from an uncontrolled network. Regardless of where the request originates or what resource it accesses, Zero Trust teaches us to “never trust, always verify.”

At Microsoft, we consider Zero Trust an essential component of any organization’s security plan based on these three principles:

  1. Verify explicitly: Always authenticate and authorize based on all available data points, including user identity, location, device health, service or workload, data classification, and anomalies.
  2. Use least privileged access: Limit user access with just-in-time (JIT) and just-enough-access (JEA), risk-based adaptive policies, and data protection to protect both data and productivity.
  3. Assume breach: Minimize blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defenses.

In this article, we will focus on the third principle (assume breach) and how encryption and data protection play a significant role in getting prepared for a potential breach in your data center.

Protect data with end-to-end encryption

As part of a comprehensive security posture, data should always be encrypted so that if an attacker is able to intercept customer data, they cannot decipher any usable information.

End-to-end encryption is applied throughout the following three stages: at rest, in transit, and in use.

Three icons representing data at rest, in transit, and in use.

Data protection is critical across all three of these stages, so let’s dive a little deeper into how each stage works and how it can be implemented.

Protect data at rest

Encryption at rest provides data protection for stored data. Attacks against data at rest include attempts to gain physical access to the hardware on which the data is stored and then compromise the contained data. In such an attack, a server's hard drive may be mishandled during maintenance, allowing an attacker to remove it. The attacker can then put the hard drive into a computer under their control to attempt to access the data.

Encryption at rest is designed to prevent the attacker from accessing the unencrypted data by ensuring the data is encrypted when on disk. If an attacker obtains a hard drive with encrypted data but not the encryption keys, the attacker must defeat the encryption to read the data. This attack is much more complex and resource-consuming than accessing unencrypted data on a hard drive. For this reason, encryption at rest is highly recommended and is a high priority requirement for many organizations.
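One common way encryption at rest is structured (a sketch of the general pattern, not of any specific Azure service) is envelope encryption: data is encrypted with a data encryption key (DEK), and the DEK is itself "wrapped" with a key encryption key (KEK) held in a vault. The toy sketch below uses XOR as a stand-in for a real cipher such as AES, purely to show the key hierarchy:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Stand-in for a real cipher such as AES; never use XOR in practice.
    return bytes(x ^ y for x, y in zip(a, b))

# Key hierarchy: the KEK stays in the vault; only the wrapped DEK is
# persisted alongside the encrypted data.
kek = secrets.token_bytes(32)       # key encryption key, held in the vault
dek = secrets.token_bytes(32)       # data encryption key, generated per disk
wrapped_dek = xor_bytes(dek, kek)   # safe to store next to the data

# An attacker who steals the disk sees ciphertext plus the wrapped DEK;
# recovering the DEK still requires the KEK from the vault.
unwrapped = xor_bytes(wrapped_dek, kek)
print(unwrapped == dek)  # True
```

This is why stealing an encrypted drive yields nothing on its own: without the vault-held KEK, the wrapped DEK is just more ciphertext.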

Flow chart of Microsoft Azure Key Vault encryption process.

At rest, it is important that your data is protected through disk encryption, which enables IT administrators to encrypt entire virtual machine (VM) or operating system (OS) disks.

One concern we hear from customers is how to reduce the chances that certificates, passwords, and other secrets are accidentally leaked. A best practice is to store application secrets centrally in a secured vault so you retain full control of their distribution. When using a secured vault, application developers no longer need to store security information in their applications, which reduces risk by eliminating the need to make this information part of the code.
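The pattern can be sketched as follows; `SecretVault` here is a hypothetical stand-in for a managed service such as Azure Key Vault, shown only to illustrate fetching secrets at runtime instead of embedding them in code:

```python
import secrets

class SecretVault:
    """Toy stand-in for a managed vault such as Azure Key Vault."""

    def __init__(self):
        self._store = {}

    def set_secret(self, name: str, value: str) -> None:
        self._store[name] = value

    def get_secret(self, name: str) -> str:
        # A real vault authenticates the caller and audits this access.
        return self._store[name]

# An operations team provisions the secret once, centrally.
vault = SecretVault()
vault.set_secret("db-password", secrets.token_urlsafe(32))

# Application code fetches the secret at runtime; nothing sensitive
# ever appears in source control or configuration files.
db_password = vault.get_secret("db-password")
```

Because the application only ever holds a name ("db-password"), the secret can be rotated centrally without redeploying code.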

Data encryption at rest is a mandatory step toward data privacy, compliance, and data sovereignty. These Microsoft Azure security services are recommended for this purpose:

  • Azure Storage Service Encryption: Microsoft Azure Storage uses server-side encryption (SSE) to automatically encrypt your data when it is persisted to the cloud. Azure Storage encryption protects your data to help you to meet your organizational security and compliance commitments.
  • SQL Server Transparent Database Encryption (TDE): Encryption of a database file is done at the page level with Transparent Data Encryption. The pages in an encrypted database are encrypted before they’re written to disk and are decrypted when read into memory.
  • Secrets management: Microsoft Azure Key Vault can be used to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets.
  • Key management: Azure Key Vault can also be used as a key management solution. Azure Key Vault makes it easy to create and control the encryption keys used to encrypt your data.
  • Certificate management: Azure Key Vault lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with Azure and your internal connected resources.
  • Hardware security modules (HSM): Store and protect your secrets and keys in software or in FIPS 140-2 Level 2 validated HSMs.

Protect data in transit

Data is “in transit” when it is transferred between different network elements within a data center or between data centers.

Organizations that fail to protect data in transit are more susceptible to man-in-the-middle attacks, eavesdropping, and session hijacking. These attacks can be the first step attackers use to gain access to confidential data.

For example, the recent NOBELIUM cyberattacks show that no one can be 100 percent protected against a breach. During this attack, 18,000 SolarWinds customers were vulnerable, including Fortune 500 companies and multiple agencies in the US government.

Data in transit should cover two independent encryption mechanisms:

  1. Application layer—the HTTPS and TLS encryption that takes place between the client and server node.
  2. Data link layer—encryption that takes place on the frames transferred over the Ethernet protocol, just above the physical connections.
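At the application layer, a minimal illustration using Python's standard `ssl` module (an example of general client-side TLS hygiene, not specific Azure guidance) is to build a context that verifies server certificates and refuses legacy protocol versions:

```python
import ssl

# Client-side TLS context that verifies server certificates and
# refuses protocol versions older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() enables certificate and hostname checks.
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

A context like this would then be passed to the HTTP client or socket wrapper, so every connection the application makes inherits the same verification and minimum-version policy.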

It is recommended that customers not only encrypt data at the application layer, but also maintain visibility into their data in transit by using TLS inspection capabilities.

Microsoft Azure network security services that support TLS inspection are recommended for this purpose.

As part of TLS inspection, these services perform full decryption and re-encryption of the traffic, provide the ability to apply intrusion detection and prevention systems (IDPS), and give customers visibility into the data itself.

To provide customers with double encryption when sending data between regions, Azure provides data link layer encryption using Media Access Control Security (MACsec).

MACsec is a vendor-independent IEEE standard (802.1AE) that provides data link layer, point-to-point encryption of traffic between network devices. The packets are encrypted and decrypted on the hardware before being sent, and the design prevents even a physical man-in-the-middle attack. Because MACsec uses line-rate encryption, it can secure data without the performance overhead and complexity of IP encryption technologies such as IPsec/GRE.

Data in transit is encrypted on the wire to block physical man-in-the-middle attacks.

Whenever Azure customer traffic moves between Azure datacenters—outside physical boundaries not controlled by Microsoft (or on behalf of Microsoft)—a data link layer encryption method using the IEEE 802.1AE MAC Security standard is applied point-to-point across the underlying network hardware. The packets are encrypted and decrypted on the devices before being sent, and this protection is applied by default for all Azure traffic traveling within a region or between regions.

Protect data in use

We often hear from customers that they are concerned about moving extremely sensitive IP and data to the cloud. To effectively protect assets, not only must data be secured at rest and in transit, but data must also be protected from threats while in use.

To protect data in use for services across your software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS) cloud models, we offer two important capabilities: Azure confidential computing and centralized storage of application secrets.

Azure confidential computing encrypts data in memory in hardware-based trusted execution environments (TEEs) and only processes it once the cloud environment is verified, preventing data access from cloud operators, malicious admins, and privileged software such as the hypervisor. By protecting data in use, organizations can achieve the highest levels of data privacy and enable secure multi-party data analytics, without giving access to their data.
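The "verify the environment, then process" idea behind confidential computing can be sketched as a simple gate. Real TEEs rely on hardware-signed attestation reports rather than the illustrative string comparison below, and all names here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical sketch: a data encryption key is released only when the
# reported environment measurement matches the expected one. Real TEEs
# use hardware-signed attestation reports, not a bare hash comparison.
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-vm-image-v1").hexdigest()

def release_key(reported_measurement: str, key_store: dict) -> bytes:
    """Release the data encryption key only to an attested environment."""
    if hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT):
        return key_store["dek"]
    raise PermissionError("environment not attested; key withheld")

key_store = {"dek": b"\x01" * 32}
trusted = hashlib.sha256(b"trusted-vm-image-v1").hexdigest()
print(release_key(trusted, key_store) == key_store["dek"])  # True
```

The point of the gate is that a hypervisor admin or malicious operator who cannot produce a valid measurement never receives the key, so the data stays opaque even on compromised infrastructure.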

These Azure services are recommended to be used for data in use protection:

  1. Application Enclaves: You can optimize for confidentiality at the application level by customizing your app to run in confidential virtual machines with Intel SGX application enclaves, or lift and shift existing applications using an independent software vendor (ISV) partner.
  2. Confidential Virtual Machines: You can optimize for ease of use by moving your existing workloads to Azure and making them confidential without changing any code by leveraging encryption across the entire virtual machine with AMD SEV-SNP or Intel SGX with total memory encryption (TME) technologies.
  3. Trusted Launch: Trusted Launch with Secure Boot and virtual Trusted Platform Modules (vTPMs) ensures your virtual machines boot with legitimate code, helping you protect against advanced and persistent attack techniques such as rootkits and bootkits.
  4. Confidential Containers: Azure Kubernetes Service (AKS) worker nodes are available on confidential computing virtual machines, allowing you to secure your containers with encrypted memory.
  5. Confidential Services: We are continuing to onboard Azure confidential services to leverage within your solutions, now supporting Azure confidential ledger (in preview), Azure SQL Always Encrypted, Azure Key Vault Managed HSM (hardware security module), and Microsoft Azure Attestation, all running on Azure confidential computing.

Strengthening your organization’s data protection posture

Protecting your data throughout its lifecycle and wherever it resides or travels is the most critical step to safeguard your business data.

To learn more about the end-to-end implementation of data protection as a critical part of your Zero Trust strategy, visit our Deployment Center.

To see how your organization’s data security posture stacks up against the Zero Trust maturity model, take this interactive quiz.

For more information about a Zero Trust security posture, visit the Microsoft Zero Trust website.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Adopting a Zero Trust approach throughout the lifecycle of data appeared first on Microsoft Security Blog.

]]>
Azure network security helps reduce cost and risk according to Forrester TEI study http://approjects.co.za/?big=en-us/security/blog/2021/10/12/azure-network-security-helps-reduce-cost-and-risk-according-to-forrester-tei-study/ Tue, 12 Oct 2021 16:00:33 +0000 As organizations move their computing from on-premises to the cloud, they realize that leveraging cloud-native security tools can provide additional cost savings and business benefits to their security infrastructure. Azure network security offers a suite of cloud-native security tools to protect Azure workloads while automating network management, implementing developer security operations (DevSecOps) practices, and reducing the risk of a material security breach.

The post Azure network security helps reduce cost and risk according to Forrester TEI study appeared first on Microsoft Security Blog.

]]>
As organizations move their computing from on-premises to the cloud, they realize that leveraging cloud-native security tools can provide additional cost savings and business benefits to their security infrastructure. Microsoft Azure network security offers a suite of cloud-native security tools to protect Azure workloads while automating network management, implementing developer security operations (DevSecOps) practices, and reducing the risk of a material security breach.

We are excited to share that Forrester Consulting has just conducted a commissioned Total Economic Impact™ (TEI) study on behalf of Microsoft, which involved interviewing existing customers who have deployed Azure network security. This study also provides organizations with a framework for evaluating the financial impact on their organizations.

The Forrester study concluded that a composite organization experienced benefits of $2.23 million over three years versus costs of $840.3 thousand, adding up to a net present value (NPV) of $1.39 million and a return on investment (ROI) of 165 percent. The study shows that Azure network security delivers:

  • Increased speed of delivering development projects by one month or 67 percent.
  • Reduced total cost of on-premises security tools by 25 percent.
  • Reduced risk of a security breach by 30 percent.
  • Improved efficiency of network-related IT work by 73 percent.
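The headline figures are internally consistent; assuming ROI is defined as NPV divided by present-value costs, a quick check:

```python
# Figures from the study, in thousands of dollars (three-year present values).
pv_benefits = 2_230.0
pv_costs = 840.3

npv = pv_benefits - pv_costs   # net present value
roi = npv / pv_costs * 100     # return on investment, percent

print(round(npv / 1000, 2))  # 1.39  (millions, matching the study)
print(round(roi))            # 165   (percent, matching the study)
```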

The study concluded that the composite organization reduced their total cost of ownership related to security infrastructure, established DevSecOps processes, reduced their risk of material security breaches, and reduced the burden on IT to manage networks and upgrades, allowing these teams to focus on more strategic workstreams.

Productivity gains with Azure network security

Azure network security enabled organizations to implement infrastructure-as-code practices, incorporating security directly into application development workflows and speeding development and time-to-market of applications. With the adoption of DevSecOps workflows, security became an enabler of development speed rather than a gate.

Graphic depicting development speed acceleration at 3 times.

“We’re seeing tremendous speed spinning stuff up in cloud. We have given the application team more reach, where in our on-premises data centers, it was difficult to get access to security appliances with different teams doing different workstreams. With Azure, we’re able to use Azure Resource Manager (ARM) templates.”–Chief solutions architect, technology.

Cost savings

Organizations reduced their total cost of ownership of on-premises security tools by 25 percent when protecting 20 percent of their organization's total computing with Azure network security. Interviewees saved the direct costs of decommissioned on-premises security tools, as well as the time costs of maintaining that infrastructure and managing vendors.

Graphic depicting 25 percent reduced in total cost.

“We were able to more cost-effectively use Azure security to manage our workloads in the cloud and reduced the footprint of additional agents or services for our cloud, which is clearly different than on-premises data centers.”–Chief solutions architect, technology.

Risk reduction

Azure network security provides automated network security upgrades and improved visibility of the environment. This improves the overall security environment of Azure workloads and reduces the likelihood of experiencing external and internal costs associated with a breach.

Graphic depicting reduced risk of security breach by 30 percent.

“There is no doubt Azure network security improved our security posture. I feel far more comfortable and sleep much better at night having our Azure estate protected by Azure network security as opposed to the combination of what we had on-premises.”–Vice President of applications and infrastructure, education.

Efficiency gains

Azure network security improved the efficiency of IT teams delivering network-related work. It reduced firewall management by 80 percent, security policy management by 15 percent, and security audit process by 96 percent.

Graphic depicting time for security audit reduced by 30 percent.

“Before Azure network security, we had an outage where we were managing calls with three vendors: three different IT teams, three systems’ support, three sets of account managers. There was a lot of finger-pointing and, in the end, the issue was never even resolved. Now, everything is resolved in a matter of hours.”–Chief solutions architect, technology.

Read the study and get started today

Read the full Forrester TEI study on the Azure network security website, or download the full study as a PDF.

To learn more about the Azure network security portfolio of cloud-native services, visit the Azure network security website.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Azure network security helps reduce cost and risk according to Forrester TEI study appeared first on Microsoft Security Blog.

]]>
Securing our approach to domain fronting within Azure http://approjects.co.za/?big=en-us/security/blog/2021/03/26/securing-our-approach-to-domain-fronting-within-azure/ Fri, 26 Mar 2021 22:00:55 +0000 Changes Microsoft is making in Azure to address challenges with domain fronting.

The post Securing our approach to domain fronting within Azure appeared first on Microsoft Security Blog.

]]>
Every single day our teams analyze the trillions of signals we see to understand attack vectors, and then take those learnings and apply them to our products and solutions. Having that understanding of the threat landscape is key to ensuring our customers are kept safe every day. However, being a security provider in a complex world sometimes requires deeper thinking and reflection on how to address emerging issues, especially when the answer is not always immediately clear. Our approach to domain fronting within Azure is a great example of how the ever-changing dynamics of our world have prompted us to re-examine an important and complicated issue—and ultimately make a change.

Let’s start with some background. Domain fronting is a networking technique that enables a backend domain to utilize the security credentials of a fronting domain. For example, if you have two domains under the same content delivery network (CDN), domain #1 may have certain restrictions placed on it (regional access limitations, etc.) that domain #2 does not. By taking the valid domain #2 and placing it into the SNI header, and then using domain #1 in the HTTP header, it’s possible to circumvent those restrictions. To the outside observer, all subsequent traffic appears to be headed to the fronting domain, with no ability to discern the intended destination for particular user requests within that traffic. It is possible that the fronting domain and the backend domain do not belong to the same owner.
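Conceptually, the mismatch can be sketched like this (domain names are illustrative only): a network observer sees the fronting domain in the TLS SNI field, while the CDN routes on the backend domain carried in the encrypted HTTP Host header:

```python
# Conceptual sketch of domain fronting with hypothetical domain names.
tls_client_hello = {"sni": "allowed.example.com"}   # visible on the wire
http_request = {"Host": "restricted.example.net"}   # hidden inside TLS

def observer_view(sni: str) -> str:
    """All an on-path observer can attribute the traffic to."""
    return sni

def cdn_routing(host: str) -> str:
    """Where the CDN actually forwards the request."""
    return host

print(observer_view(tls_client_hello["sni"]))  # allowed.example.com
print(cdn_routing(http_request["Host"]))       # restricted.example.net
```

The two values naming different domains is the whole technique: filtering decisions made on the SNI never see the true destination inside the encrypted request.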

As a company committed to delivering technology for good, supporting use cases that enable free and open communication is an important consideration when weighing the potential impacts of a technique like domain fronting. However, we know that domain fronting is also abused by threat actors engaging in illegal activities, and we've become aware that in some cases bad actors configure their Azure services to enable this.

When it comes to situations like this, Microsoft—as a security company—leads from a place of providing greater simplicity for our customers when they face increased complexity. Our mission is to give our customers peace of mind and help them adapt quickly to a rapidly shifting threat landscape. Therefore, we’re making a change to our policy to ensure that domain fronting will be stopped and prevented within Azure.

Changes like this one are not made lightly, and we understand that there will be impacts across a number of areas:

  • Our engineering teams are already working to ensure the platform will block anyone from practicing the domain fronting technique on Azure, while also continuing to ensure our products and services provide the highest levels of protection against domain fronting based threats.
  • We’re continuing to provide clear guidance for penetration testing on our Azure properties, and working closely with security researchers around the world to make sure they have a clear understanding of these changes.

These changes are just another example of the broad impact that security has on our ever-changing world, and we'll continue to put the security of our customers and their users at the forefront of everything we do. I'd like to thank my colleagues Nick Carr and Christopher Glyer for their tireless research on domain fronting, which helped us to make these policy changes to Azure.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Securing our approach to domain fronting within Azure appeared first on Microsoft Security Blog.

]]>
Zero Trust—Part 1: Networking http://approjects.co.za/?big=en-us/security/blog/2020/06/15/zero-trust-part-1-networking/ Mon, 15 Jun 2020 20:45:23 +0000 Taking a Zero Trust approach can help to ensure optimal security without compromising end user application experiences.

The post Zero Trust—Part 1: Networking appeared first on Microsoft Security Blog.

]]>
Enterprises used to be able to secure their corporate perimeters with traditional network controls and feel confident that they were keeping hackers out. However, in a mobile- and cloud-first world, in which the rate and the sophistication level of security attacks are increasing, they can no longer rely on this approach. Taking a Zero Trust approach can help to ensure optimal security without compromising end user application experiences.

Microsoft has a long history of working with customers on how to protect against a broad range of security attacks and we are one of the largest producers of threat intelligence built on the variety of data that flows through our network.

Today, I’d like to share how you can be successful implementing the Zero Trust model by rethinking your network strategy. Here’s a video that will give you a quick overview:

Over a series of three blogs (of which this is the first), we will take a deeper dive into aspects of the Networking pillar in the Microsoft Zero Trust security model. We will go through each of the dimensions listed (network segmentation, threat protection, and encryption) and show design patterns and helpful guidance on using Microsoft Azure services to reach the optimal stage.

As mentioned in our Maturity Model paper, all data is ultimately accessed over network infrastructure. Networking controls can provide critical “in pipe” controls to enhance visibility and help prevent attackers from moving laterally across the network. Networks should be segmented (including deep in network micro-segmentation) and real-time threat protection, end-to-end encryption, monitoring, and analytics should be employed.

Maturity model

Maturity model.

We will go over the first one, network segmentation, in this blog. One thing to keep in mind is that while moving straight from the traditional stage to optimal is ideal, most organizations will need to take a phased approach that generally follows along the maturity model journey.

The need for network segmentation

If you refer to the three core principles (Verify Explicitly, Use Least Privilege Access, and Assume Breach), a Zero Trust approach encourages you to think that a security incident can happen anytime and you are always under attack. One of the things you want to be ready with is a setup that minimizes the blast radius of such an incident—this is where segmenting your network while you design its layout becomes important. In addition, by implementing these software-defined perimeters with increasingly granular controls, you will increase the “cost” to attackers to propagate through your network and thereby dramatically reduce the lateral movement of threats.
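As a sketch of why granular segmentation raises the attacker's "cost": with default-deny rules between tiers, a compromised host in one segment simply has no permitted path to most of the network. The rule table and addresses below are hypothetical, modeled loosely on how network security group rules are evaluated:

```python
import ipaddress

ALLOW_RULES = [
    # (source subnet, destination subnet, destination port)
    ("10.0.1.0/24", "10.0.2.0/24", 443),   # web tier -> app tier, HTTPS only
    ("10.0.2.0/24", "10.0.3.0/24", 1433),  # app tier -> data tier, SQL only
]

def is_allowed(src_ip: str, dst_ip: str, port: int) -> bool:
    """Default-deny: traffic passes only if some rule explicitly matches."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    return any(
        src in ipaddress.ip_network(s)
        and dst in ipaddress.ip_network(d)
        and port == p
        for s, d, p in ALLOW_RULES
    )

print(is_allowed("10.0.1.5", "10.0.2.9", 443))   # True: permitted path
print(is_allowed("10.0.1.5", "10.0.3.7", 1433))  # False: web tier cannot reach data tier
```

A compromised web-tier host here can only reach the app tier on one port; every other lateral move is denied by default, which is exactly the blast-radius reduction the text describes.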

Network segmentation in Azure

When you operate on Azure, you have a wide and diverse set of segmentation controls available to help create isolated environments. Here are the five basic controls that you can use to perform network segmentation in Azure:

Network segmentation in Azure

Segmentation patterns

There are three common segmentation patterns when it comes to organizing your workload in Azure:

  1. Single Virtual Network
  2. Multiple Virtual Networks with peering
  3. Multiple Virtual Networks in hub-and-spoke model

Each of these provides a different type of isolation and connectivity. Which one works best is a planning decision based on your organization's needs. Here's where you can read about Segmenting Virtual Networks in more detail and learn how each of these models can be implemented using Azure Networking services.

The internet boundary

Whether you are building a modern application in the cloud or you just migrated a set of applications to Azure, most applications require some ability to send and receive data to/from the public internet. Any time you expose a resource to a network you increase threat risk, and with internet exposure this is further compounded by a large set of possible threats.

The recommended approach in Azure is to use Azure DDoS Protection Service, Azure Firewall, and Azure Web Application Firewall to provide comprehensive threat protection. This setup of having an internet boundary using these services is important in a segmentation architecture since it essentially segments your application stack away from the internet while providing carefully inspected traffic to/from it.

The datacenter or on-premises network boundary

In addition to internet connectivity, your application stack on Azure might need connectivity back to your IT footprint in your on-premises datacenter(s) and/or other public clouds. You have multiple options to achieve that: you can choose direct connectivity using ExpressRoute, use our VPN Gateway, or have a more unified distributed connectivity experience using Azure Virtual WAN. The same concept of segmenting away your application stack applies here, so that any threats that might affect your datacenter or on-premises network will have a harder time propagating to your cloud platform (and vice versa).

The PaaS services boundary

As with most modern applications, chances are that your application will be using one of the many platform-as-a-service (PaaS) offerings available on Azure. Some examples of PaaS services you may want your application to call into include Azure Storage, Azure SQL Database, and Azure Key Vault. These are segmented away from your workload in an Azure virtual network since they run as separate services built and operated by Azure.

On top of this built-in segmentation of PaaS services, Azure also makes it possible for you to do all your interactions with these services in the private address space using Azure Private Link. This connectivity capability ensures that all your interactions with Private Link-enabled PaaS services are done securely and all data exchanged remains on the Microsoft network.

The PaaS services boundary.

In closing

Networking represents a great opportunity to make meaningful headway in your Zero Trust journey. Your Zero Trust efforts will not only help your security posture, but most efforts will also help you modernize your environment and improve organizational productivity. In this blog, we discussed how you can use networking services from Azure to build three types of segmentation patterns. In future blogs, we will dive deeper into how you can do the same for threat protection and encryption, the other two dimensions in the networking pillar described in our Zero Trust vision paper. In the meantime, we also invite you to watch our Ignite session to get additional information about network security offerings from Azure.

Make sure to check out the other deployment guides in the series by following the Microsoft Security blog. For more information on Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Zero Trust—Part 1: Networking appeared first on Microsoft Security Blog.

]]>
Success in security: reining in entropy http://approjects.co.za/?big=en-us/security/blog/2020/05/20/success-security-reining-entropy/ Wed, 20 May 2020 18:00:12 +0000 Your network is unique. It’s a living, breathing system evolving over time. The applications and users performing these actions are all unique parts of the system, adding degrees of disorder and entropy to your operating environment.

The post Success in security: reining in entropy appeared first on Microsoft Security Blog.

]]>
Your network is unique. It’s a living, breathing system evolving over time. Data is created. Data is processed. Data is accessed. Data is manipulated. Data can be forgotten. The applications and users performing these actions are all unique parts of the system, adding degrees of disorder and entropy to your operating environment. No two networks on the planet are exactly the same, even if they operate within the same industry, utilize the exact same applications, and even hire workers from one another. In fact, the only attribute your network may share with another network is simply how unique they are from one another.

If we follow the analogy of an organization or network as a living being, it’s logical to drill down deeper, into the individual computers, applications, and users that function as cells within our organism. Each cell is unique in how it’s configured, how it operates, the knowledge or data it brings to the network, and even the vulnerabilities it carries with it. It’s important to note that cancer begins at the cellular level and can ultimately bring down the entire system. When it comes to incident response and recovery, the greater the level of entropy and chaos across a system, the more difficult it becomes to locate potentially harmful entities. Incident response is about locating the source of the cancer in a system in an effort to remove it and make the system healthy once more.

Let’s take the human body for example. A body that remains at rest 8-10 hours a day, working from a chair in front of a computer, and with very little physical activity, will start to develop health issues. The longer the body remains in this state, the further it drifts from an ideal state, and small problems begin to manifest. Perhaps it’s diabetes. Maybe it’s high blood pressure. Or it could be weight gain creating fatigue within the joints and muscles of the body. Your network is similar to the body. The longer we leave the network unattended, the more it will drift from an ideal state to a state where small problems begin to manifest, putting the entire system at risk.

Why is this important? Let’s consider an incident response process where a network has been compromised. As a responder and investigator, we want to discover what has happened, what the cause was, what the damage is, and determine how best we can fix the issue and get back on the road to a healthy state. This entails looking for clues or anomalies; things that stand out from the normal background noise of an operating network. In essence, let’s identify what’s truly unique in the system, and drill down on those items. Are we able to identify cancerous cells because they look and act so differently from the vast majority of the other healthy cells?

Consider a medium-size organization with 5,000 computer systems. Last week, the organization was notified by a law enforcement agency that customer data was discovered on the dark web, dated from two weeks ago. We start our investigation on the date we know the data likely left the network. What computer systems hold that data? What users have access to those systems? What windows of time are normal for those users to interact with the system? What processes or services are running on those systems? Forensically, we want to know what system was impacted, who was logging in to the system around the timeframe in question, what actions were performed, where those logins came from, and whether there are any unique indicators. Unique indicators are items that stand out from the normal operating environment: unique users, system interaction times, protocols, binary files, data files, services, and configurations (such as rogue registry keys).

Our investigation reveals a unique service running on a member server with SQL Server. Analysis shows the service has an autostart entry in the registry and launches from a file in the c:\windows\perflogs directory every time the system is rebooted, which is an unusual location for an autostart. We haven’t seen this service before, so we search all the systems on the network for other instances of the registry startup key or the binary files we’ve identified. Out of 5,000 systems, we locate these pieces of evidence on only three, one of which is a Domain Controller.
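The technique at work here is often called stacking: count how many hosts share a given autostart entry, and flag the entries seen on only a handful of machines, because rarity is the signal. A minimal sketch is below; it assumes a fleet-wide inventory of autostart entries has already been collected by some agent, and the hostnames, value names, and image paths are purely illustrative, not from a real investigation.

```python
from collections import defaultdict

def rare_autostarts(inventory, max_hosts=3):
    """Group autostart entries by (value name, image path) and return
    the entries present on max_hosts or fewer machines, with the hosts
    where each one was found. Paths are lowercased to dedupe casing."""
    hosts_by_entry = defaultdict(set)
    for host, value_name, image_path in inventory:
        hosts_by_entry[(value_name, image_path.lower())].add(host)
    return {
        entry: sorted(hosts)
        for entry, hosts in hosts_by_entry.items()
        if len(hosts) <= max_hosts
    }

# Hypothetical inventory records: (hostname, autostart value name, image path).
inventory = [
    ("ws-0001", "OneDrive", r"c:\program files\microsoft onedrive\onedrive.exe"),
    ("ws-0002", "OneDrive", r"c:\program files\microsoft onedrive\onedrive.exe"),
    ("ws-0003", "OneDrive", r"c:\program files\microsoft onedrive\onedrive.exe"),
    ("ws-0004", "OneDrive", r"c:\program files\microsoft onedrive\onedrive.exe"),
    ("sql-01",  "updsvc",   r"c:\windows\perflogs\updsvc.exe"),
    ("dc-01",   "updsvc",   r"c:\windows\perflogs\updsvc.exe"),
]

# The common OneDrive entry disappears into the background noise;
# the perflogs binary on two servers stands out immediately.
suspects = rare_autostarts(inventory, max_hosts=3)
```

On a quiet, standardized fleet the result set stays small enough to triage by hand; on an entropic one, nearly every entry looks rare and the technique loses its power, which is exactly the point of the rest of this post.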

This process of identifying what is unique allows our investigative team to highlight the systems, users, and data at risk during a compromise. It also helps us identify the potential source of the attack, what data may have been pilfered, and the external systems calling the shots and enabling access to the environment. Any recovery effort will also require this information to be successful.

This all sounds like common sense, so why cover it here? Remember we discussed how unique your network is, and how there are no other systems exactly like it elsewhere in the world? That means every investigative process into a network compromise is also unique, even if the same attack vector is being used to attack multiple organizational entities. We want to provide the best foundation for a secure environment and the investigative process, now, while we’re not in the middle of an active investigation.

The unique nature of a system isn’t inherently a bad thing. Your network can be unique from other networks. In many cases, it may even provide a strategic advantage over your competitors. Where we run afoul of security best practice is when we allow too much entropy to build up on the network, losing the ability to differentiate “normal” from “abnormal.” In short, will we be able to easily locate the evidence of a compromise because it stands out from the rest of the network, or are we hunting for the proverbial needle in a haystack? Clues related to a system compromise don’t stand out if everything we look at appears abnormal. This can exacerbate an already tense response situation, extending the timeframe for investigation and dramatically increasing the cost of returning to a trusted operating state.

To tie this back to our human body analogy, when a breathing problem appears, we need to be able to understand whether this is new, or whether it’s something we already know about, such as asthma. It’s much more difficult to correctly identify and recover from a problem if it blends in with the background noise, such as difficulty breathing because of air quality, lack of exercise, smoking, or allergies. You can’t know what’s unique if you don’t already know what’s normal or healthy.

To counter this problem, we pre-emptively bring the background noise on the network to a manageable level. All systems move towards entropy unless acted upon. We must put energy into the security process to counter the growth of entropy, which would otherwise exponentially complicate our security problem set. Standardization and control are the keys here. If we limit what users can install on their systems, we quickly notice when an untrusted application is being installed. If it’s against policy for a Domain Administrator to log in to Tier 2 workstations, then any attempts to do this will stand out. If it’s unusual for Domain Controllers to create outgoing web traffic, then it stands out when this occurs or is attempted.
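Policies like the ones above can be expressed as simple predicates over event records, so that any match is worth a look. The sketch below is illustrative only: the event schema, field names, and rules are assumptions for the example, not the API of any real monitoring product.

```python
def violations(events, rules):
    """Return every event that breaks at least one policy rule.
    Each rule is a predicate taking an event dict."""
    return [e for e in events if any(rule(e) for rule in rules)]

# Rule: Domain Admins must not log on to Tier 2 workstations.
def admin_on_workstation(event):
    return (event["type"] == "logon"
            and "Domain Admins" in event.get("groups", [])
            and event.get("host_tier") == 2)

# Rule: domain controllers should not originate outbound web traffic.
def dc_outbound_web(event):
    return (event["type"] == "netflow"
            and event.get("role") == "domain-controller"
            and event.get("dst_port") in (80, 443)
            and event.get("direction") == "outbound")

# Hypothetical event stream: one normal logon, two policy breaks.
events = [
    {"type": "logon", "user": "jsmith", "groups": ["Users"], "host_tier": 2},
    {"type": "logon", "user": "da-admin", "groups": ["Domain Admins"],
     "host_tier": 2},
    {"type": "netflow", "role": "domain-controller", "dst_port": 443,
     "direction": "outbound"},
]

flagged = violations(events, [admin_on_workstation, dc_outbound_web])
```

The design choice here mirrors the prose: the rules only have teeth because the baseline is tight. If Domain Admin logons to workstations were routine, the same predicate would drown responders in matches instead of surfacing the one logon that matters.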

Centralize the security process. Enable that process. Standardize security configuration, monitoring, and expectations across the organization. Enforce those standards. Enforce the tenet of least privilege across all user levels. Understand your ingress and egress network traffic patterns, and when those are allowed or blocked.

In the end, your success in investigating and responding to inevitable security incidents depends on what your organization does on the network today, not during an active investigation. By reducing entropy on your network and defining what “normal” looks like, you’ll be better prepared to quickly identify questionable activity and respond appropriately. Bear in mind that security is a continuous process; it should never stop. The longer we ignore the security problem, the further the state of the network will drift from “standardized and controlled” back into disorder and entropy. And the further we sit from that state of normal, the more difficult and time-consuming it will be to bring the network back to a trusted operating environment in the event of an incident or compromise.
