Microsoft Offensive Research & Security Engineering (MORSE), Microsoft Security Blog

Introducing kernel sanitizers on Microsoft platforms
http://approjects.co.za/?big=en-us/security/blog/2023/01/26/introducing-kernel-sanitizers-on-microsoft-platforms/
Thu, 26 Jan 2023 17:00:00 +0000

We share technical details of our work on the AddressSanitizer (ASAN) and how it contributes to durably improving software quality and security at Microsoft.

As part of Microsoft’s commitment to continuously raise security baselines, we have been introducing innovations to the foundation of the chip-to-cloud security outlined in the Windows 11 Security Book. Strong foundational security enables us to build defenses from the ground up and develop secure-by-design products that are hardened against current and future threats. 

These innovations enable Microsoft to further improve the security embedded into Windows and other Microsoft products before they are delivered to our customers. For example, in the past few years, we have been architecting and developing support for kernel sanitizers—powerful detection features that can uncover bugs in kernel-mode components—on different Microsoft platforms. With reach and precision that exceed the capabilities of other similar features, kernel sanitizers enable Microsoft engineering teams to identify and fix vulnerabilities earlier in the software development cycle than ever before possible.

Various teams at Microsoft use these kernel sanitizers for fuzzing, stress-testing, and other development tasks. Kernel sanitizers have shown that they have the potential to eliminate whole classes of memory bugs, and we continue to expand implementations of these features from Windows to other platforms, including Xbox and Hyper-V. This work leads to lasting improvements in software quality and security across Microsoft products and services, and ultimately contributes to better and more secure user experiences for customers.

In this blog post, we share technical details of the work by Microsoft Offensive Research & Security Engineering (MORSE) on kernel sanitizers, the impact they have on Windows and other platforms, and the opportunities they provide to continuously advance built-in security.

User-mode AddressSanitizer and why we took security even further

AddressSanitizer (ASAN) is a compiler and runtime technology initially developed by Google that detects several classes of memory bugs in C/C++ programs, including critical security bugs like buffer overflows. The support for ASAN on Windows user-mode applications was introduced in 2019, and it was extended in 2021 to cover Xbox user-mode applications.

Within Microsoft, ASAN has been leveraged to identify and fix bugs in user-mode software components and is now used routinely on user-mode components during development.

However, user mode is only one layer of Windows. The Windows operating system is a modern and complex piece of software that involves several components operating in different privilege domains and interacting with each other:

Diagram showing user mode, kernel mode, and hypervisor components of the Windows partition and Secure partition in the Windows OS
Fig. 1: Privilege domains in the Windows OS

While ASAN is effective in catching bugs in Windows user-mode components, we needed a similar feature to equally detect bugs in the other layers of the operating system. We began with the Windows kernel attack surface.

Introducing the Kernel AddressSanitizer

The Kernel AddressSanitizer (KASAN) is a variation of the user-mode ASAN that has been architected to work specifically for the Windows kernel and its drivers.

Implementation details

Let’s dive into the technical details of the implementation, focusing on the use case where KASAN is enabled on a driver. It should be noted that the same principle applies when KASAN is enabled on the Windows kernel itself.

Tracking logic: The shadow

KASAN works by first tracking the validity of each byte of memory in the kernel virtual address space using a new 16TB virtual memory region called the shadow, which acts similarly to a large bitmap that indicates whether each byte of the kernel virtual address space is valid or not.

The shadow is 1/8 the size of the kernel virtual address space and linearly backs all of it:

Visual diagram showing the Windows kernel virtual address space with the KASAN shadow that is 1/8 the size
Fig. 2: The KASAN shadow

Each set of 8 bytes in the kernel address space is backed by one byte in the shadow. As the KASAN shadow is a region of kernel memory, it resides within the kernel virtual address space and therefore implicitly backs itself too:

Visual diagram showing that the KASAN shadow resides within the kernel virtual address space
Fig. 3: The KASAN shadow within the address space
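The 8-to-1 mapping can be sketched numerically. The base addresses below are hypothetical placeholders chosen purely for illustration (the real values are selected by the kernel and are not documented); only the arithmetic matters:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder bases for illustration only; the real addresses are chosen
 * by the kernel and are not part of any public documentation. */
#define KERNEL_SPACE_BASE 0xFFFF800000000000ULL
#define SHADOW_BASE       0xFFFF9E0000000000ULL /* hypothetical */

/* One shadow byte tracks eight bytes of kernel virtual address space,
 * which is why a 128TB kernel address space needs a 16TB shadow. */
static uint64_t shadow_of(uint64_t kernel_address)
{
    uint64_t offset = kernel_address - KERNEL_SPACE_BASE;
    return SHADOW_BASE + offset / 8;
}
```

Eight consecutive kernel bytes map to the same shadow byte, so walking the kernel address space advances through the shadow at 1/8 the rate.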

Initially, the shadow is just a large 16TB read-only region that maps into a single 4KB physical page full of zeroes. Therefore, even though 16TB of virtual memory is reserved, only a single page of physical memory is used:

Visual diagram showing the virtual memory and physical memory used by the KASAN shadow
Fig. 4: The initial physical layout of the shadow

Later, at run time, when the kernel performs memory allocations, for example via ExAllocatePool2(), it makes the shadow that backs these allocations writable. It does this by dynamically allocating new physical pages and remapping portions of the shadow to these pages, while keeping the other portions untouched and still pointing to the initial read-only zero page:

Visual diagram showing the dynamic allocation of new physical pages and remapping of portions of the shadow to these pages
Fig. 5: How physical memory is split for used and unused portions of the shadow

This process of splitting the shadow takes place during each backend memory allocation. Overall, the memory consumption of the shadow increases as portions of it are progressively made writable to back the kernel memory allocations.

The KASAN runtime then dynamically initializes the shadow values of each memory allocation, depending on its current state. During a memory allocation, once the shadow has been made writable for the allocated buffer, the KASAN runtime marks the allocated buffer as valid in the shadow, and the paddings below and above it as invalid:

Visual diagram showing the allocated buffer marked as valid by KASAN runtime with paddings marked as invalid
Fig. 6: Shadow state following an allocation

Later, when this buffer gets freed, the KASAN runtime marks it as entirely invalid in the shadow:

Visual diagram showing the buffer marked as invalid by the KASAN runtime
Fig. 7: Shadow state following a deallocation

By updating the shadow contents this way, the KASAN runtime maintains a consistent view of which bytes of memory are valid and which bytes are invalid. From there, the ASAN instrumentation, which we describe below, then gets used as part of the verification logic to enforce validity checks on memory accesses using the information provided by the shadow.
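The allocation and deallocation transitions described above can be modeled with a toy shadow over a small arena. This is a deliberately simplified sketch (8-byte-aligned buffers, one validity byte per granule, no partial-granule encoding), not the real runtime:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy shadow for a 64-byte arena: one byte per 8-byte granule,
 * 0x00 = valid, 0xFF = invalid. Real KASAN also encodes granules
 * that are only partially valid; that detail is omitted here. */
#define ARENA_SIZE 64
static uint8_t shadow[ARENA_SIZE / 8];

static void mark_range(size_t start, size_t len, uint8_t value)
{
    for (size_t i = start / 8; i < (start + len) / 8; i++)
        shadow[i] = value;
}

/* Allocation: the buffer becomes valid, its redzones stay invalid. */
static void on_alloc(size_t start, size_t len)
{
    memset(shadow, 0xFF, sizeof(shadow)); /* redzones everywhere else */
    mark_range(start, len, 0x00);         /* the buffer itself is valid */
}

/* Deallocation: the whole buffer becomes invalid again. */
static void on_free(size_t start, size_t len)
{
    mark_range(start, len, 0xFF);
}

static int is_valid(size_t addr)
{
    return shadow[addr / 8] == 0x00;
}
```

After an allocation, only the buffer's own granules read as valid; after the free, even those are poisoned again, which is exactly what lets use-after-free accesses be flagged.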

Verification logic: The ASAN instrumentation

As part of enabling KASAN on a target kernel-mode component such as a kernel driver, the component must be recompiled with a specific set of compiler flags that cause the compiler to insert the ASAN instrumentation into the compiled binary directly.

The compiler chooses one of two verification methods:

  1. Function calls to __asan_*(): Before each access that a program makes to memory, the compiler inserts a call to one of the __asan_{load,store}{#n}(Address) functions, chosen depending on the specifics of the access, and passes as argument the memory address about to be accessed. For example, if the access is a write of two bytes, then __asan_store2() is chosen and is therefore called by the compiled program before the access is performed. We won’t discuss the details of the __asan_*() functions as public documentation is available, but overall, these functions calculate the shadow address of the memory about to be accessed and verify whether the shadow says that #n bytes starting from the address given as argument are valid. If not, these functions are expected to halt program execution.
  2. Verification bytecodes: To improve performance, in certain cases instead of inserting a function call, the compiler directly inlines a bytecode that implements the same logic—the bytecode reads the value of a global variable called __asan_shadow_memory_dynamic_address (which must exist in the program), calculates from it the shadow address of the memory about to be accessed, and verifies whether the shadow says that the memory is valid. If not, the bytecode calls an __asan_report_*() function that is expected to halt program execution.
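The check behind an __asan_{load,store}#n()-style function can be sketched as follows. The toy shadow and the report counter are stand-ins: real KASAN consults the shadow region described earlier and halts with a bug check rather than returning:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the verification performed before a memory access of n bytes:
 * look up the shadow of every byte about to be accessed and report if any
 * is invalid. An 8-granule toy shadow stands in for the real one. */
static uint8_t toy_shadow[8]; /* zero-initialized: everything starts valid */
static int reports;           /* stands in for halting with a bug check */

static void asan_check(size_t address, size_t n)
{
    for (size_t a = address; a < address + n; a++) {
        if (toy_shadow[a / 8] != 0x00) {
            reports++;   /* real KASAN would bug-check here */
            return;
        }
    }
}

static void poison_granule(size_t i) { toy_shadow[i] = 0xFF; }
```

A two-byte store at a granule boundary shows why every byte of the access must be checked: the first byte can be valid while the second lands in poisoned shadow.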

While this ASAN instrumentation was initially developed for user-mode components, it is reused as-is in KASAN, such that:

  • The NTOS kernel has a KASAN runtime that exports the __asan_*() functions. These functions can be imported by drivers compiled with KASAN, and are also used when NTOS itself is compiled with KASAN.
  • Within NTOS, an __asan_shadow_memory_dynamic_address global variable is declared, and is initialized and used when NTOS is compiled with KASAN, but is not exported to drivers. For drivers, a different mechanism based on KasanLib is used and is described below.
  • The halting of program execution is implemented in the form of a kernel bug check: when an access is made to memory bytes marked as invalid in the shadow, KASAN triggers a KASAN_ILLEGAL_ACCESS (0x1F2) bug check. The parameters of this bug check include useful information for debugging, such as the type of memory being accessed (heap, stack, and so forth), the number of bytes being accessed, whether the access is a read or a write, along with additional metadata.

The ASAN instrumentation therefore constitutes the KASAN verification logic: it verifies whether each memory byte that the component accesses at run time is marked as valid in the KASAN shadow, and triggers a bug check if not, to report any illegal memory access.

Challenges with the instrumentation

There were several challenges in getting the ASAN instrumentation to work in KASAN, especially in cases where the compiler inserts a bytecode instead of a function call.

Using the correct calculation

The expected calculation to get the shadow address of a regular address is the following:

Shadow(Address) = ShadowBaseAddress + OffsetWithinAddressSpace(Address) / 8

In the context of the user-mode AddressSanitizer, the user-mode address space starts at address 0x0. Therefore, any user-mode address is equal to the offset of that address within the user-mode address space:

OffsetWithinUserAddressSpace(UserAddress) = UserAddress – 0
	                                  = UserAddress

For this reason, the verification bytecodes inserted as part of the ASAN instrumentation use the following formula, where __asan_shadow_memory_dynamic_address contains the base address of the shadow:

Shadow(Address) = __asan_shadow_memory_dynamic_address + (Address / 8)

The following is an example of a generated bytecode assembly:

mov     rcx, cs:__asan_shadow_memory_dynamic_address
shr     rax, 3
add     rcx, rax

Here the bytecode calculates the shadow address of RAX by reading __asan_shadow_memory_dynamic_address and adding to it RAX right-shifted by 3 (meaning divided by 8). This implements the aforementioned formula to get the shadow of an address.

In the context of the kernel AddressSanitizer however, the kernel-mode address space starts at address 0xFFFF800000000000:

OffsetWithinKernelAddressSpace(KernelAddress) = KernelAddress –
                                                    0xFFFF800000000000
	                                     != KernelAddress

Therefore, reusing the simplified formula as-is in KASAN would not be correct: it would result in the bytecodes using the wrong shadow addresses when verifying memory accesses. We solved that issue in KASAN by initializing __asan_shadow_memory_dynamic_address to a different value that is not exactly the KASAN shadow base address:

__asan_shadow_memory_dynamic_address = KasanShadowBaseAddress –
	                                           (0xFFFF800000000000 / 8)

Developing the formula using this value gives the following:

Shadow(KernelAddress) = __asan_shadow_memory_dynamic_address + (KernelAddress / 8)
     = KasanShadowBaseAddress – (0xFFFF800000000000 / 8) + (KernelAddress / 8)
     = KasanShadowBaseAddress + (KernelAddress - 0xFFFF800000000000) / 8
     = KasanShadowBaseAddress + OffsetWithinKernelAddressSpace(KernelAddress) / 8

The formula therefore falls back to the expected calculation with KASAN: the bytecodes take the base address of the KASAN shadow, add to it the offset of the address within the kernel address space divided by 8, and this results in the correct shadow address for the given kernel address.

Using this trick, we avoided the need to make a compiler change to modify the bytecode generation for KASAN.
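The derivation can be checked numerically. KASAN_SHADOW_BASE below is a made-up placeholder; the point is only that the bytecode's simplified formula and the full offset-based formula agree once the adjusted value is stored in __asan_shadow_memory_dynamic_address:

```c
#include <assert.h>
#include <stdint.h>

#define KERNEL_BASE       0xFFFF800000000000ULL
#define KASAN_SHADOW_BASE 0xFFFF9E0000000000ULL /* hypothetical placeholder */

/* The adjusted value stored in __asan_shadow_memory_dynamic_address: */
static const uint64_t dynamic_address = KASAN_SHADOW_BASE - KERNEL_BASE / 8;

/* What the compiler-inserted bytecode computes: */
static uint64_t bytecode_shadow(uint64_t address)
{
    return dynamic_address + address / 8;
}

/* The expected calculation, using the offset within the kernel space: */
static uint64_t expected_shadow(uint64_t address)
{
    return KASAN_SHADOW_BASE + (address - KERNEL_BASE) / 8;
}
```

The two functions agree for every kernel address because KERNEL_BASE is a multiple of 8, so the integer division distributes over the subtraction.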

Dealing with non-tracked memory

When we described how the KASAN shadow is mapped, we did not explain why we were using a splitting mechanism with a zeroed page. The reason is simple: the verification bytecodes always want to read the shadow of the buffers they verify, and they have no knowledge of whether a buffer is backed by a shadow or not. A shadow must therefore always be mapped for every byte of kernel virtual address memory. The splitting mechanism achieves exactly that: it guarantees that a shadow always exists, while minimizing the memory consumption of the non-tracked regions by having their shadow point to a single zeroed physical page.

The fact that the physical page used in the splitting mechanism is full of zeroes causes KASAN to always consider non-tracked memory as valid.

Dealing with user-mode pointers

The Windows kernel and its drivers are allowed to directly access user-mode memory, for example to fetch the user-mode arguments passed to a syscall. This creates an issue with the verification bytecodes, because they need to get the shadow of an address that is outside of the kernel address space and that therefore does not have a shadow.

To deal with this case, we pass a compiler flag as part of KASAN that instructs the compiler to never use bytecodes and always prefer function calls to __asan_*(), except when the compiler is certain that the accesses are to stack memory.

This means in practice that in order to verify the accesses to local variables on the stack, the compiler generates verification bytecodes, but for any other access the compiler uses function calls to __asan_*(). Given that these __asan_*() functions are implemented in the KASAN runtime, we have full control over their verification logic and can make sure to exclude user-mode pointers from the verification via a simple if condition.

Using this trick, we again avoided the need to make a compiler change to have the instrumentation deal with user-mode pointers.
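The "simple if condition" can be sketched like this; KERNEL_BASE matches the kernel-space start quoted above, and the shadow lookup itself is elided:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of how a runtime __asan_*() function can exclude user-mode
 * pointers: addresses below the kernel address space have no shadow,
 * so the check is simply skipped for them. */
#define KERNEL_BASE 0xFFFF800000000000ULL

static int shadow_lookups; /* counts how often the shadow would be consulted */

static void asan_runtime_check(uint64_t address, size_t n)
{
    (void)n;
    if (address < KERNEL_BASE)
        return;          /* user-mode pointer: nothing to verify */
    shadow_lookups++;    /* kernel pointer: consult the shadow (elided) */
}
```

Because this filter lives in the runtime functions rather than in the generated bytecodes, no compiler change is needed: only stack accesses, which are always kernel addresses, bypass the runtime.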

Telling the kernel to export KASAN support

By default, the kernel does not create the KASAN shadow, and does not export the KASAN runtime. In other words, it does not make KASAN available to drivers by default. For this to be done, the user must explicitly set the following registry key:

HKLM\System\CurrentControlSet\Control\Session Manager\Kernel\KasanEnabled

The bootloader reads this key at boot time and decides based on its value whether or not to instruct the kernel to make KASAN support available to drivers.

With this established, the following sections contain details of how the kernel loads drivers compiled with KASAN.

Loading kernel drivers with KASAN

For drivers compiled with KASAN, a small code-only library called KasanLib is linked into the final driver binary and does two things:

  1. It declares an __asan_shadow_memory_dynamic_address global variable that remains local to the driver itself and is not exported to the kernel namespace. The verification bytecodes described earlier that get inserted into the driver use this global variable as part of their calculation of the KASAN shadow.
  2. It publishes a section called “KASAN” in the resulting PE binary of the driver. This section contains information and metadata, the format of which may change in the future and is not relevant to discuss here.

Upon loading a driver, the kernel verifies whether the driver has a “KASAN” section, and can take two paths:

  1. If the driver has a “KASAN” section and the KasanEnabled registry key is not set, then the kernel will refuse to load the driver. This is to prevent the system from malfunctioning; there is, after all, no way the driver is going to work, since it will try to use a shadow that wasn’t created by the kernel and call a runtime that the kernel does not export.
  2. If the driver has a “KASAN” section and the KasanEnabled registry key is set, then the kernel parses this section in order to initialize KASAN on the driver. Part of this initialization includes setting the value described earlier in the driver’s __asan_shadow_memory_dynamic_address global variable. After this initialization is complete, the “KASAN” section is no longer used and is discarded to save kernel memory.

From then on, the driver can start executing.
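The loader's two paths can be summarized as a small decision function. The names below are illustrative only, not real kernel interfaces:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the loader's decision for an incoming driver. */
enum load_result {
    LOAD_PLAIN,            /* no "KASAN" section: load normally */
    LOAD_REFUSED,          /* "KASAN" section but KASAN not enabled */
    LOAD_KASAN_INITIALIZED /* section parsed, shadow address set, section discarded */
};

static enum load_result load_driver(bool has_kasan_section, bool kasan_enabled)
{
    if (!has_kasan_section)
        return LOAD_PLAIN;
    if (!kasan_enabled)
        return LOAD_REFUSED; /* the driver would call a runtime the kernel
                                does not export, so refuse to load it */
    /* Parse the "KASAN" section, initialize the driver's
     * __asan_shadow_memory_dynamic_address, then discard the section. */
    return LOAD_KASAN_INITIALIZED;
}
```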

How it all falls together: example of a buggy driver

We have now exposed all the ingredients required for KASAN to work on drivers: how the shadow is created, how the instrumentation operates, how the kernel exports KASAN support, and how KASAN gets initialized on drivers when they are loaded. To give an example of how it all falls together, let’s consider a hypothetical driver that we compiled with KASAN.

We have set the KasanEnabled registry key in our system, and the kernel has therefore created a KASAN shadow and is exporting the KASAN runtime. We proceed to load the driver in the system. The kernel sees that the PE of the driver has a “KASAN” section, parses it, initializes KASAN on the driver, and discards the section. The driver finally starts executing.

Let’s assume that the driver contains this buggy code:

PCHAR buffer;
buffer = ExAllocatePool2(POOL_FLAG_NON_PAGED, 18, WHATEVER_TAG);
buffer[18] = 'a';

Here, a heap buffer of size 18 bytes is allocated. During this allocation, the KASAN runtime initialized two redzones below and above the buffer, as described earlier. Then an ‘a’ is written into the 19th byte of the buffer. This is, of course, an out-of-bounds write access, which is incorrect and poses a serious security risk.

Given that our driver was compiled with KASAN, it was subject to the ASAN instrumentation, meaning that the actual compiled code looks like the following:

PCHAR buffer;
buffer = ExAllocatePool2(POOL_FLAG_NON_PAGED, 18, WHATEVER_TAG);
__asan_store1(&buffer[18]);
buffer[18] = 'a';

Here the compiler inserted a function call to __asan_store1() and did not choose a verification bytecode because it couldn’t conclude that “buffer” was a pointer to stack memory (which it is not).

__asan_store1() is part of the KASAN runtime that the kernel exported and that the driver imported. This function looks at the shadow of &buffer[18], sees that it is marked as invalid (because the byte at this address is part of the right redzone of the buffer), and proceeds to issue a KASAN_ILLEGAL_ACCESS bug check to halt system execution.

As the owners of the system, we can then collect the crash dump and investigate the memory safety bug that KASAN detected, using the actionable information provided alongside the KASAN bug check.

Without KASAN, this bug would not be easily observed. With KASAN, however, it is immediately detected before the bug triggers and turns into a real security risk. As such, KASAN is able to detect whole classes of memory bugs that could otherwise remain undiscovered.

Granularity, and memory regions covered

As can be deduced from the details we provided thus far, KASAN operates at the byte granularity. KASAN is currently able to detect illegal memory accesses on several types of memory regions:

  • The global variables
  • The kernel stacks
  • The pool allocators (ExAllocatePool*())
  • The lookaside list allocators (ExAllocateFromLookasideListEx(), etc.)
  • The IO/contiguous allocators (MmMapIoSpaceEx(), MmAllocateContiguousNodeMemory(), etc.)

Internally, the support for these regions is implemented using a KASAN API that is also exported by the NTOS kernel. Microsoft will continue to improve this API to expand its implementation to other scenarios.

Thanks to the ability to detect bugs at the byte granularity and the large number of memory regions covered, KASAN exceeds the capabilities of existing bug-detection technologies such as the Special Pool, which typically operate at a coarser granularity and do not cover the kernel stacks and other regions.

Performance cost

Naturally, the KASAN shadow consumes memory, and the validity checks inserted by the ASAN instrumentation consume CPU time and increase the size of the compiled binaries.

Some effort has gone into micro-optimizing KASAN by limiting the number of instructions that the KASAN runtime emits, by making the KASAN shadow NUMA-aware, by compressing the KASAN metadata in order to reduce the binary sizes, and so forth.

Overall, KASAN currently introduces a ~2x slowdown, which we measured using widely available benchmarking tools. As such, KASAN cannot be seen as a production feature since its performance cost is not negligible. This cost, however, is acceptable for debug, development, stress-testing, or security-related setups.

It should be noted that this cost is higher than that of existing technologies, such as the Special Pool, and that KASAN does not have a performance impact when not explicitly enabled. In other words, KASAN does not affect the performance of Windows 11 by default.

Immediate impact

Microsoft generates special builds of Windows, called MegaAsan builds, that produce fully bootable Windows disks that have KASAN enabled on the Windows kernel and on more than 95% of all kernel drivers shipped by Microsoft in Windows 11.

By using these builds in testing, fuzzing, and even simple desktop setups, MORSE has been able to identify and fix more than 35 memory safety bugs in various drivers and in the Windows kernel that were not previously detectable by existing technologies.

We also implemented KASAN support for the Xbox kernels and the drivers they load, and similarly generate builds of Xbox systems with KASAN enabled. As such, KASAN also contributes to the quality and security of the Xbox product line.

Extending ASAN to the other ring0 domains

We have so far discussed KASAN on Windows kernel drivers and on the Windows kernel:

Diagram showing KASAN on the drivers and the Windows kernel in the kernel mode of the Windows OS
Fig. 8: KASAN in the operating system

Having KASAN is a considerable step forward, because it provides precise detection of memory errors on large and critical parts of the system in a way that wasn’t achievable before. Following up on our work on KASAN, we developed similar detection capabilities on the remaining parts of the system.

Introducing SKASAN…

The Secure kernel is a different kernel, completely separated from the Windows kernel, that executes in a more privileged domain and is in charge of a number of security operations in the system. It is part of virtualization-based security on Windows.

We developed the Secure Kernel AddressSanitizer (SKASAN), which covers the Secure kernel and a few of the modules it loads dynamically.

SKASAN has a number of similarities with KASAN. For example, the SKASAN support for Secure Kernel modules is implemented using a “SKASAN” section, comparable to the “KASAN” section used in Windows kernel drivers. Overall, SKASAN works similarly to KASAN, but simply applied to the Secure kernel domain.

…and HASAN

Finally, Hyper-V is the Microsoft hypervisor that plays a central role on Windows and in Azure, and it too could benefit from the capabilities that ASAN provides; we therefore developed the Hyper-V AddressSanitizer (HASAN) which is yet another ASAN implementation but tied to the Hyper-V kernel.

Same, but different… but still same

KASAN, SKASAN, and HASAN are built on the same logic, which is having a shadow and a compiler instrumentation, and overall have similar costs in terms of memory consumption and slowdown.

Some inherent differences do exist, however. First, the Windows kernel, Secure kernel, and Hyper-V kernel have different allocators, and the *ASAN support for them differs accordingly. Second, the memory layout of these kernels is not the same, and this leads to drastic implementation differences; for instance, HASAN actually uses two different shadows concatenated together.

We leave the rest of the technical differences as a reverse engineering exercise to interested readers.

The final picture, and results

As of November 2022, we have developed and stabilized KASAN, SKASAN, and HASAN. Combined together, these deliver precise detection of memory errors on all the kernel-mode components that execute on Windows 11:

Diagram showing HASAN on Hyper-V, SKASAN on the Secure kernel and its modules, and KASAN on the Windows kernel and its drivers in the Windows OS
Fig. 9: All *ASAN implementations in the operating system

We produce internal MegaAsan builds with all of these *ASAN implementations enabled, and internal teams are using them in a number of fuzzing and stress-testing scenarios. As a result, we have been able to identify and fix dozens of memory bugs of various severity:

Pie chart showing types of bugs found by kernel sanitizers: 73% out-of-bounds access, 21% type confusion, and 6% use-after-free.
Fig. 10: Types of bugs found by *ASAN

Finally, as part of our *ASAN work we have also applied numerous improvements and cleanups to various areas, such as the Windows and Hyper-V kernels, but also to the Microsoft Visual C++ (MSVC) compiler to improve the *ASAN experience on Microsoft platforms.

Overall, these *ASAN features have the potential to eliminate whole classes of memory bugs and, going forward, will significantly contribute to ensuring the quality and security of Microsoft products.

This concludes our first blog post on kernel sanitizers. Beyond *ASAN, we have implemented several other sanitizers that specialize in uncovering other classes of bugs. We will communicate about them in future posts.

Maxime Villard
Principal Security Engineer, Microsoft Offensive Research & Security Engineering (MORSE)

Secure your healthcare devices with Microsoft Defender for IoT and HCL’s CARE
http://approjects.co.za/?big=en-us/security/blog/2022/03/14/secure-your-healthcare-devices-with-microsoft-defender-for-iot-and-hcls-care/
Mon, 14 Mar 2022 16:00:00 +0000

It wasn’t long ago that medical devices were isolated and unconnected, but the rise of IoT has brought real computing power to the network edge. Today, medical devices are transforming into interconnected, smart assistants with decision-making capabilities.

Any device in a medical setting must be designed with one core priority in mind: delivering patient care. Medical professionals need instant access to data from devices with minimal friction so they can focus on what they do best. But at the same time, any device holding sensitive medical records must be secure.

To balance these needs, security software for medical devices must be lightweight enough to maximize the performance of the device without overloading the processor, taxing battery life, or putting the user through cumbersome processes. It must be high-performing and reliable with great battery life, so the device is always ready and works every time it’s needed.  

Recently, Microsoft and global technology services firm HCL Technologies teamed up to help solve the security challenge with a high-performance solution for medical devices. The result is a new reference architecture and platform for building secure medical devices and services based on HCL’s Connected Assets in Regulated Environment (CARE), Microsoft Defender for IoT, and Azure IoT.

By freeing medical device manufacturers from the need to build security solutions and cloud services, this new platform will enable them to focus on their core mission and strengths, healthcare innovation and patient care, even as they build new, better, and more secure medical devices.

Combining HCL’s CARE and Microsoft Defender for IoT

As a long-time Microsoft partner, HCL brings deep expertise in applications, systems integration, network engineering, and managed services.

Built on Microsoft Azure, HCL’s CARE Platform has been designed and developed with security best practices and standards in mind. It provides the foundation that medical device manufacturers need to develop innovative, high-performance healthcare services and devices while ensuring an integrated security approach from the cloud to the network edge.

By including Microsoft Defender for IoT in the device itself, device builders are able to create secure-by-design, managed IoT devices. Defender for IoT offers continuous asset discovery, vulnerability management, and threat detection—continually reducing risk with real-time security posture monitoring across the device’s operating system and applications.

Partner Director of Enterprise and OS Security for Azure Edge and Platform at Microsoft, David Weston, highlighted the value of this collaboration saying, “By partnering with HCL to incorporate Defender for IoT into HCL’s CARE, we see a bright future for medical device manufacturers to build secured medical devices, with minimal effort.” Sunil Aggarwal, Senior Vice President at HCL and Client Partner for Microsoft, added, “HCL’s CARE enables medical original design manufacturers (ODMs) and original equipment manufacturers (OEMs) to quickly develop new devices and solutions focused on patients’ needs. By including Defender for IoT, those devices benefit from Microsoft’s deep security expertise, thousands of security professionals, and trillions of security signals captured each day.”

The combined Microsoft and HCL solution for healthcare IoT provides the high-performance security needed to protect the sensitive data on the medical device—in transit and in the cloud. By using a combination of endpoint and network security signals, the system can monitor what’s happening on the network, in the operating system, and at the application layer while keeping a pulse on the integrity of the device. This combination of external and internal security signals yields advanced security not often found on medical devices, which are typically monitored using only network data.   

Advanced threat detection with Defender for IoT

CARE’s use of Defender for IoT offers the best possible security using Defender’s agent-based monitoring. This means security is built directly into IoT devices with the Microsoft Defender for IoT security agent, which supports a wide range of operating systems, including popular Linux distributions. With an agent, richer asset inventory, vulnerability management, and threat detection and response are possible.


Figure 1. Devices are monitored and assessed for vulnerabilities and security recommendations. The combination of network and endpoint signals enables a deeper assessment and a broader range of detections.

Defender for IoT security monitors the security of the device and enables the following scenarios for medical device manufacturers using HCL’s CARE with Defender for IoT:

  • Asset inventory: Gain visibility into all your IoT devices so operators can manage a complete inventory of their entire healthcare IoT fleet.
  • Posture management: Identify and prioritize misconfigurations based on industry benchmarks and software vulnerabilities or anomalies in the software bill of materials (SBOM) that may arise from supply chain attacks and use integrated workflows to bring devices into a more secure state.
  • Threat detection and response: Leverage behavioral analytics, machine learning, and threat intelligence based on trillions of signals to detect attacks through anomalous or unauthorized activity.  
  • Microsoft Security integration: Defender for IoT is part of the Microsoft security information and event management (SIEM) and extended detection and response (XDR) offering, enabling quick detection and response capabilities for multistage attacks that may move across network boundaries.
  • Third-party integration: Integrates with third-party tools you’re already using, including SIEM, ticketing, configuration management database (CMDB), firewall, and other tools.

Powerful automated services for detection and response

HCL’s CARE Gateway and CARE Device Agent complement Defender for IoT’s security and can help capture application-level security events and send them to Defender for IoT analytics services. Such events include attempts to connect an unknown device, use of invalid provisioning credentials, attempts to run unauthorized commands remotely, unusually short or long remote access sessions, data transfer rate anomalies, event sequence anomalies, and more.


Figure 2. Medical devices send security and other types of events to HCL’s CARE Gateway which forwards data to the Azure IoT hub. Security events are forwarded to the Defender for IoT cloud services while non-security-related events are sent to HCL’s CARE Core and business app.

Integrating HCL’s CARE with Defender for IoT can protect and monitor connected medical devices and gateways too. The CARE Platform integrated with Defender for IoT provides a powerful solution to secure healthcare devices:

  • CARE Cloud runs in Azure, utilizing Azure cloud security services to ensure that customers’ health data is secure and accessible only to authorized persons.
  • CARE Device Gateway keeps devices isolated from the public internet.
  • The Defender for IoT micro agent can help capture events at the system level and push them to Defender for IoT analytics services, along with the service-level events captured by the gateway itself.
  • The CARE Device Agent connects to the Device Gateway to send events out. It can also capture device software-level events and push them to Defender for IoT analytics services through the Device Gateway.
  • CARE Cloud can make critical events captured at Defender for IoT analytics services actionable, such as gracefully isolating medical devices from the network and alerting device owners.
  • CARE Reusable Modules and design guidelines make the application and connected device secure by enabling secure design, development, and deployment. This includes static and dynamic application security testing and software composition analysis.
  • CARE can also act on critical events by alerting the device owners’ IT security, and sending commands to devices for network isolation, graceful shutdown, and other preconfigured actions.

Find out more

Both Microsoft and HCL are excited to bring this new platform and security technologies to the medical device industry, and we invite you to learn more about how HCL’s CARE and Defender for IoT deliver the security that medical device manufacturers need. Using these technologies, manufacturers can focus more on medical and patient innovation and the quicker delivery of new solutions to the marketplace.

These new security capabilities are available today. Medical device manufacturers and OEMs should check out HCL’s CARE, Microsoft Defender for IoT, and Microsoft’s recently announced Edge Secured-core preview.  

If you are an IoT solution builder, reach out to the Azure Certified Device team. We are ready to work with you!

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Secure your healthcare devices with Microsoft Defender for IoT and HCL’s CARE appeared first on Microsoft Security Blog.

]]>
Improve kernel security with the new Microsoft Vulnerable and Malicious Driver Reporting Center http://approjects.co.za/?big=en-us/security/blog/2021/12/08/improve-kernel-security-with-the-new-microsoft-vulnerable-and-malicious-driver-reporting-center/ Wed, 08 Dec 2021 17:00:58 +0000 Windows 10 and Windows 11 have continued to raise the security bar for drivers running in the kernel. Kernel-mode driver publishers must pass the Hardware Lab Kit (HLK) compatibility tests, malware scanning, and prove their identity through extended validation (EV) certificates.

The post Improve kernel security with the new Microsoft Vulnerable and Malicious Driver Reporting Center appeared first on Microsoft Security Blog.

]]>
Windows 10 and Windows 11 have continued to raise the security bar for drivers running in the kernel. Kernel-mode driver publishers must pass the Hardware Lab Kit (HLK) compatibility tests, malware scanning, and prove their identity through extended validation (EV) certificates. This has significantly reduced the ability for malicious actors to run nefarious kernel code on Windows 10 and Windows 11 devices.

Vulnerable driver attacks

Increasingly, adversaries are leveraging legitimate drivers in the ecosystem and their security vulnerabilities to run malware. Multiple malware attacks, including RobinHood, Uroburos, Derusbi, GrayFish, and Sauron, have leveraged driver vulnerabilities (for example CVE-2008-3431,1 CVE-2013-3956,2 CVE-2009-0824,3 and CVE-2010-1592).4

Vulnerable driver attack campaigns target security vulnerabilities in well-intentioned drivers from trusted original equipment manufacturers (OEMs) and hardware vendors to gain kernel privileges, modify kernel signing policies, and load their malicious unsigned driver into the kernel. In some cases, these unsigned drivers will disable antivirus products to avoid detection. From there, ransomware, spyware, and other types of malware can be executed.

Microsoft Defender for Endpoint and Windows Security teams work diligently with driver publishers to detect security vulnerabilities before they can be exploited by malicious software. We also build automated mechanisms to help block vulnerable versions of drivers and help protect customers against vulnerability exploits based on the ecosystem and partner engagement.

Reporting vulnerabilities: Vulnerable and Malicious Driver Reporting Center

To help protect users against these types of attacks, Microsoft has created the new Vulnerable and Malicious Driver Reporting Center. The Reporting Center is designed to be easy to use and requires only the driver file and a few details to open a driver analysis case. Simply provide the driver binary for our analysis, details about the vulnerability or malicious behavior of the driver, and an email address for follow-up.


Figure 1: The Vulnerable and Malicious Driver Reporting Center.

The Reporting Center backend automatically analyzes the potentially vulnerable or malicious driver binary and identifies dangerous behaviors and security vulnerabilities, including:

  • Drivers with the ability to map arbitrary kernel, physical, or device memory to user mode.
  • Drivers with the ability to read or write arbitrary kernel, physical, or device memory, including Port I/O and central processing unit (CPU) registers from user mode.
  • Drivers that provide access to storage in a way that bypasses Windows access control.

The Reporting Center can scan and analyze Windows drivers built for x86 and x64 architectures. Scanned drivers found to be vulnerable or malicious are flagged for analysis and investigation by Microsoft’s Vulnerable Driver team. This program is currently not eligible for the Microsoft Security Response Center’s Bug Bounty program.

Report a driver for analysis now.

Feedback loop: Vulnerable drivers are automatically blocked in the ecosystem

Our security teams work closely with the driver publisher to help analyze and patch the vulnerability and update in-market affected devices. Once the driver publisher patches the vulnerability, updates to all affected drivers are distributed by the driver publisher, typically through Windows Update (WU). Once affected devices receive the latest security patches, drivers with confirmed security vulnerabilities are blocked on Windows 10 devices across the ecosystem using Microsoft Defender for Endpoint attack surface reduction (ASR) and Windows Defender Application Control (WDAC) technologies, protecting devices against exploits that use vulnerable drivers to gain access to the kernel.

Microsoft Defender for Endpoint attack surface reduction rules

Vulnerable drivers ASR rule

E3 and E5 enterprise customers gain the benefit of using Microsoft Defender for Endpoint’s ASR rules to block malicious and vulnerable drivers. ASR rules target and block entry points and code behaviors used by malware and abused by attackers, preventing attacks before they begin. The vulnerable signed driver ASR rule prevents an application from writing a vulnerable signed driver to the system.

Vulnerable and malicious drivers are added to the vulnerable driver ASR rule to protect Microsoft Defender for Endpoint users against driver malware campaigns without any user intervention. ASR rules are supported in the following versions:

  • Windows 10 Pro or Enterprise, version 1709 or later.
  • Windows Server 1803 or later.
  • Windows Server 2019.

Configuring the vulnerable driver ASR rule

The vulnerable driver ASR rule can be enabled and configured using Intune, mobile device management (MDM), Microsoft Endpoint Configuration Manager, Group Policy, and PowerShell. To enable the vulnerable driver ASR rule by each method, please refer to the Microsoft documentation Use attack surface reduction to prevent malware infection.

ASR rules offer the following four settings:

  1. Not configured: Disable the ASR rule.
  2. Block: Enable the ASR rule.
  3. Audit: Evaluate how the ASR rule would impact your organization if enabled.
  4. Warn: Enable the ASR rule but allow the user to bypass the block.

The vulnerable driver ASR GUID is 56a863a9-875e-4185-98a7-b882c64b5ce5. The Intune name is Block abuse of exploited vulnerable signed drivers.
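For example, the rule can be turned on with the built-in Microsoft Defender cmdlets. The following is a sketch of the documented Add-MpPreference approach (it assumes an elevated PowerShell host; substitute AuditMode or Warn for Enabled to use the other settings listed above):

```powershell
# Enable the "Block abuse of exploited vulnerable signed drivers" ASR rule
# by its GUID. Valid actions include Enabled, AuditMode, Warn, and Disabled.
Add-MpPreference -AttackSurfaceReductionRules_Ids 56a863a9-875e-4185-98a7-b882c64b5ce5 `
                 -AttackSurfaceReductionRules_Actions Enabled

# Verify the configured rules and their actions.
Get-MpPreference | Select-Object AttackSurfaceReductionRules_Ids, AttackSurfaceReductionRules_Actions
```

Add-MpPreference appends to the existing rule list, so other configured ASR rules are left untouched.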

For the full list of ASR rule’s feature differences between E3 and E5 licenses, please refer to the Microsoft documentation Attack surface reduction features across Windows versions.

Windows Defender Application Control

Microsoft driver blocklist

Driver vulnerabilities confirmed by Microsoft Defender for Endpoint and Windows Security teams, including those reported by our security community through the Vulnerable Driver Reporting Center, are blocked by the Microsoft-supplied policy. This policy is automatically updated and pushed through WU by default to Secured-core devices, devices with Hypervisor-Protected Code Integrity (HVCI) enabled, and Windows 10 in S mode devices. These classes of devices use WDAC and HVCI technology to block vulnerable and malicious drivers from running on devices before they are loaded into the kernel. The vulnerable driver blocklist policy is regularly updated and pushed out through WU to help protect against the latest kernel exploits.

To learn how to turn on HVCI in Windows 10 to opt into the automated Microsoft driver blocklist, or to verify if HVCI is enabled, visit Enable virtualization-based protection of code integrity.
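As a sketch of what that documentation covers, HVCI status can be queried through the documented Win32_DeviceGuard WMI class, and the opt-in itself is a registry value (the Windows Security app, under Core isolation > Memory integrity, is the supported UI alternative):

```powershell
# Check whether HVCI is running: SecurityServicesRunning contains 2 when
# hypervisor-protected code integrity is active (1 indicates Credential Guard).
$dg = Get-CimInstance -ClassName Win32_DeviceGuard `
        -Namespace root\Microsoft\Windows\DeviceGuard
$dg.SecurityServicesRunning

# Opt in to HVCI via the registry (a reboot is required for it to take effect).
$key = "HKLM:\SYSTEM\CurrentControlSet\Control\DeviceGuard\Scenarios\HypervisorEnforcedCodeIntegrity"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "Enabled" -Value 1 -Type DWord
```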

Defending your devices against vulnerable and malicious drivers

Creating custom WDAC block policies

Windows users can create and apply custom driver block policies to gain security parity with the Microsoft-supplied driver block policy. Microsoft publishes the block policy and recommends all customers apply kernel block rules to help prevent drivers with vulnerabilities from running on their devices or being exploited. By default, the policy is in audit mode. In this mode, drivers are not blocked from executing but will generate audit logging events. We recommend placing new policies in audit mode before enforcing them to determine the impact and scope of the blocked binaries using the audit logging events. For more information about interpreting log events, please refer to the Microsoft documentation Use audit events to create WDAC policy rules.
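Those audit events land in the Code Integrity operational log. As an illustrative sketch (event IDs 3076 and 3077 are the documented WDAC audit-mode and enforce-mode block events), one way to scope a policy’s impact is:

```powershell
# List recent WDAC block events: ID 3076 = would-be block while in audit
# mode, ID 3077 = actual block while in enforce mode.
Get-WinEvent -LogName "Microsoft-Windows-CodeIntegrity/Operational" |
    Where-Object { $_.Id -in 3076, 3077 } |
    Select-Object TimeCreated, Id, Message -First 20
```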

WDAC driver block policies are easy to create and deploy. Microsoft supplies both built-in PowerShell cmdlets and the WDAC Wizard desktop application to create, edit, and merge WDAC policies. Below is an example of the steps to deploy the driver block policy in enforcement mode.

Step 1. Initialize the variables to be used in the script.

$PolicyXML = "$env:windir\schemas\CodeIntegrity\ExamplePolicies\RecommendedDriverBlock_Enforced.xml"
$DestinationBinary = $env:windir + "\System32\CodeIntegrity\SiPolicy.p7b"

Step 2. Run the following to convert the XML file to binary in an elevated PowerShell host.

ConvertFrom-CIPolicy -XmlFilePath $PolicyXML -BinaryFilePath $DestinationBinary

Step 3. Deploy and activate the driver control policy using Windows Management Instrumentation (WMI).

Invoke-CimMethod -Namespace root\Microsoft\Windows\CI -ClassName PS_UpdateAndCompareCIPolicy -MethodName Update -Arguments @{FilePath = $DestinationBinary}

Learn more

For more information about deploying WDAC policies, see the Microsoft documentation Deploy WDAC policies using script.

In addition to kernel-mode block and allow rules, rules can also be created for user-mode software. See our Microsoft recommended block rules for more information. For general information about WDAC technology and policies, please see the Windows Defender Application Control official documentation.

If you are a driver developer, follow the driver security checklist and the development best practices to reduce the risk of security vulnerabilities. You can also open a driver analysis case through the new Vulnerable and Malicious Driver Reporting Center.

If you have questions about the program or suspect a driver is vulnerable or malicious, please contact vulnerabledrivers@microsoft.com.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.


1CVE-2008-3431, CVE Details. 11 October 2018.

2CVE-2013-3956, CVE Details. 22 August 2013.

3CVE-2009-0824, CVE Details. 10 October 2018.

4CVE-2010-1592, CVE Details. 29 April 2010.

The post Improve kernel security with the new Microsoft Vulnerable and Malicious Driver Reporting Center appeared first on Microsoft Security Blog.

]]>
Windows 11 offers chip to cloud protection to meet the new security challenges of hybrid work http://approjects.co.za/?big=en-us/security/blog/2021/10/04/windows-11-offers-chip-to-cloud-protection-to-meet-the-new-security-challenges-of-hybrid-work/ Mon, 04 Oct 2021 20:00:53 +0000 As the world has changed over the past 18 months, companies have been wrestling with ways to keep employees and data protected as they support new ways of hybrid working. We built Windows 11 to be the most secure Windows yet with built-in chip to cloud protection that ensures company assets stay secure no matter where work happens.

The post Windows 11 offers chip to cloud protection to meet the new security challenges of hybrid work appeared first on Microsoft Security Blog.

]]>
As the world has changed over the past 18 months, companies have been wrestling with ways to keep employees and data protected as they support new ways of hybrid working. We built Windows 11 to be the most secure Windows yet with built-in chip to cloud protection that ensures company assets stay secure no matter where work happens.

Seventy-five percent of software decision-makers feel that the move to hybrid work leaves their organization more vulnerable to security threats.

The threat intelligence journey to build in protection

The expansion of both remote and hybrid workplaces brings new opportunities to organizations. But the expansion of access, increased number of endpoints, and desire for employees to work from anywhere on any device has also introduced new threats and risks. In 2020, Microsoft protected customers from 30 billion email threats, 6 billion threats to endpoint devices, and processed more than 30 billion authentications. Yet most employees still struggle to avoid clicking phishing links in email, spoofed websites, and more. The National Institute of Standards and Technology (NIST) shows a more than five-fold increase in hardware attacks over three years, and Microsoft’s initial Security Signals report found that more than 80 percent of Vice Presidents and above admitted to experiencing a hardware attack in the last two years.

We designed Windows 11 for today’s hybrid workplace. With Windows 11, hardware and software work together for protection from the central processing unit (CPU) all the way to the cloud so our customers can enable hybrid productivity and high-quality employee experiences without compromising security.

“In this new hybrid work environment, more information is being handled outside the confines of the traditional office and outside the control of IT departments. This creates new, acute security challenges and makes it more important than ever to add as many layers of protection as possible to keep devices secure. Hardware protections are a key component to instilling a higher degree of confidence that devices haven’t been compromised.”—Michael Mattioli, Vice President, Goldman Sachs

Windows 11: Security by default

To address the increasing sophistication and number of attacks against firmware and hardware, we partnered with manufacturers to create a new class of Secured-core PCs in 2019 and, in 2020, a new security-specific processor, the Microsoft Pluton, that redefines Windows security at the CPU. In Secured-core PCs, hardware-backed security features are enabled by default without any action required by the user or IT. Secured-core PCs were initially designed for highly targeted industries like financial services and healthcare with mission-critical roles that handle company IP, customer personally identifiable information (PII), sensitive government data, financial information, or patient history. But as the move to hybrid work becomes the new normal and the threat landscape becomes more complex, the need to apply better security features from chip to cloud becomes a high priority.

Eighty percent of security decision-makers believe software alone is not enough protection from emerging threats.

We leveraged our learnings from Secured-core PCs and brought them to Windows 11. The new hardware security requirements that come with Windows 11 are designed to build a foundation that is even stronger and more resilient to attacks. Windows 11 isolates software from hardware. This isolation helps protect access—from encryption keys and user credentials to other sensitive data—behind a hardware barrier, so malware and attackers can’t access or tamper with that data during the boot process. And Windows 11 requires hardware that can enable even more protections like Windows Hello, Device Encryption, virtualization-based security (VBS), hypervisor-protected code integrity (HVCI), and Secure Boot. The combination of these features has been shown to reduce malware by 60 percent on tested devices. All Windows 11 supported CPUs have an embedded Trusted Platform Module (TPM) chip, support Secure Boot, and support VBS and specific VBS capabilities, fully turned on out of the box.
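As an illustrative aside (not part of the original post), the hardware features described above can be checked on a given device with built-in Windows cmdlets, run from an elevated PowerShell host on UEFI firmware:

```powershell
# Check for a present and initialized TPM.
Get-Tpm | Select-Object TpmPresent, TpmReady

# Returns True when Secure Boot is enabled (requires UEFI firmware).
Confirm-SecureBootUEFI
```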

Windows 11: Powerful security from chip to cloud. For a comprehensive view of the Windows 11 security investments, see the Windows 11 Security book.

Enhanced hardware and operating system security

With hardware-based isolation security that begins at the chip, Windows 11 stores sensitive data behind additional security barriers, separated from the operating system. As a result, information including encryption keys and user credentials are protected from unauthorized access and tampering. In Windows 11, hardware and software work together to protect the operating system, with VBS and Secure Boot built-in and enabled by default on new CPUs. Even if bad actors get in, they don’t get far.

Robust application security and privacy controls

To help keep personal and business information protected and private, Windows 11 has multiple layers of application security to safeguard critical data and code integrity. Application isolation and controls, code integrity, privacy controls, and least-privilege principles enable developers to build in security and privacy from the ground up. This integrated security protects against breaches and malware, helps keep data private, and gives IT administrators the controls they need.

Secured identities

Passwords are inconvenient to use and prime targets for cybercriminals—and they’ve been an important part of digital security for years. That changes with the passwordless protection available with Windows 11. After a secure authorization process, credentials are protected behind layers of hardware and software security, giving users secure, passwordless access to their applications and cloud services.

Connecting to cloud services

Windows 11 security enables policies, controls, procedures, and technologies that work together to protect your devices, data, applications, and identities from anywhere. Microsoft offers comprehensive cloud services for identity, storage, and access management in addition to the tools to attest that any Windows device connecting to your network is trustworthy. You can also enforce compliance and conditional access with a mobile device management (MDM) service such as Microsoft Intune that works with Microsoft Azure Active Directory to control access to applications and data through the cloud.

Learn more

Windows 11 rises to the challenge of modern threats of hybrid computing and enables customers to get ultimate productivity and intuitive experiences without compromising security.

For customers who aren’t ready to transition to new devices, the baseline security features in Windows 11 are also available on Windows 10, which will remain supported through October 14, 2025. We are committed to supporting Windows 10 customers and offering choices in their computing journey.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Windows 11 offers chip to cloud protection to meet the new security challenges of hybrid work appeared first on Microsoft Security Blog.

]]>
Defend against zero-day exploits with Microsoft Defender Application Guard http://approjects.co.za/?big=en-us/security/blog/2021/09/29/defend-against-zero-day-exploits-with-microsoft-defender-application-guard/ Wed, 29 Sep 2021 16:00:15 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=97800 Zero-day security vulnerabilities are like gold to attackers. With zero-days, or even zero-hours, developers have no time to patch the code, giving hackers enough access and time to explore and map internal networks, exfiltrate valuable data, and find other attack vectors.

The post Defend against zero-day exploits with Microsoft Defender Application Guard appeared first on Microsoft Security Blog.

]]>
Zero-day security vulnerabilities—known to hackers, but unknown to software creators, security researchers, and the public—are like gold to attackers. With zero-days, or even zero-hours, developers have no time to patch the code, giving hackers enough access and time to explore and map internal networks, exfiltrate valuable data, and find other attack vectors.

Zero-days have become a great profit engine for hackers due to the peril they pose to the public, organizations, and governments. These vulnerabilities are often sold on the dark web for thousands of dollars, fueling nation-state and ransomware attacks and making the cybercrime business even more appealing and profitable to attackers.

Social engineering unlocks doors to zero-day attacks

With zero-days being the new constant, organizations must defend and protect themselves, paying special attention to user applications, as most zero-day vulnerabilities fall within this environment.

Attackers leverage social engineering tactics to gain users’ trust, deceive them, and influence their actions—from opening a malicious link attached to an email to visiting a compromised website. The malicious code executes when the application opens the weaponized content, exploiting vulnerabilities and downloading malware on the endpoint.

This combination of sophisticated social engineering and human-operated ransomware is a lethal weapon that leverages “the art of deception,” allowing attackers to stay undercover while exploiting a system’s vulnerabilities. It creates the perfect scenario for a zero-day attack, allowing attackers to expertly spread and compromise more devices than ever before.

App isolation helps defend against zero-day exploits

In such a challenging environment, where application and web browser scans and filters on their own may not stop attackers from tricking users or prevent malicious code from executing, isolation technology is the way forward to defend against zero-day exploits.

Based on the Zero Trust principles of explicit verification, least privilege access, and assume breach, isolation treats any application and browsing session as untrustworthy by default, adding multiple roadblocks for attackers attempting to get into users’ environments.

Isolation is fully embedded into Microsoft Windows’ chip-to-cloud security posture, enabling applications to run in state-of-the-art virtualization technology, such as Microsoft Defender Application Guard (Application Guard), to significantly reduce the blast radius of compatible applications that become compromised.

With Application Guard, websites and Office files run in an isolated, hypervisor-based (Hyper-V) container, ensuring that anything that happens within the container remains isolated from the desktop operating system. This means that if malicious code originates from a document or website running inside the container, the desktop remains intact and the blast radius of the infection stays confined within the container.

This is the same virtualization-based security (VBS) technology that also powers other Windows security features like Credential Guard and Hypervisor Code Integrity (HVCI).

Figure: Hardware isolation of Microsoft Edge and Microsoft Office products. Device hardware flows through the kernel into the Windows platform before reaching Microsoft Office, Microsoft Edge, and apps.

Today, the power of Application Guard local isolation is natively built into Microsoft Edge and Microsoft Office, providing seamless protection against malicious Word, PowerPoint, and Excel files and also malicious websites. We have extended this protection to Google Chrome and Mozilla Firefox via the Application Guard plugin, which allows untrusted websites to be opened in isolation using Microsoft Edge.

Application Guard delivers a great first line of defense for organizations—when users run an app, open email attachments, or click on a link, any malware is contained in the sandbox environment and can’t access the desktop, its systems, or data. Additionally, every malicious attack contained by Application Guard helps inform and improve global threat intelligence, enhancing overall detection capabilities and protecting not only your organization but also millions of other Microsoft customers across the world.

Application Guard for Zero Trust

Isolation is an important part of any organization’s strategy in deploying Zero Trust and defending your system from being compromised without jeopardizing performance and productivity.

Based on the following principles of Zero Trust, isolation technology in Windows forms the backbone of Application Guard providing stronger protection and greater assurance to your users while empowering them to click anywhere.

  • Verify explicitly: Admins can configure device health attestation policies in their organization using Microsoft Intune. Together with conditional access, these policies ensure and attest that Windows boots with Secure Boot enabled—confirming that the hypervisor booted correctly and that the Application Guard container is secure.
  • Least privilege: The hardware isolated container used by Application Guard implements a secure kernel and user space and does not allow any access to the user’s desktop or other trusted resources in an enterprise.
  • Assume breach: For all purposes, this container is considered non-trustworthy and is used to run untrusted content. There is no user data or any identity present inside the container. It is assumed that the untrusted content may contain malicious code.
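On supported Windows editions, Application Guard can be enabled as an optional Windows feature. The following is a minimal sketch using the built-in deployment cmdlets (it assumes an elevated PowerShell host, hardware virtualization support, and a reboot to finish):

```powershell
# Enable Microsoft Defender Application Guard as an optional feature.
Enable-WindowsOptionalFeature -Online -FeatureName Windows-Defender-ApplicationGuard

# Confirm the feature state afterwards.
Get-WindowsOptionalFeature -Online -FeatureName Windows-Defender-ApplicationGuard |
    Select-Object FeatureName, State
```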

Learn more

For more information, check out:

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post Defend against zero-day exploits with Microsoft Defender Application Guard appeared first on Microsoft Security Blog.

]]>
A deep-dive into the SolarWinds Serv-U SSH vulnerability http://approjects.co.za/?big=en-us/security/blog/2021/09/02/a-deep-dive-into-the-solarwinds-serv-u-ssh-vulnerability/ Thu, 02 Sep 2021 16:00:56 +0000 We're sharing technical information about the vulnerability tracked as CVE-2021-35211, which was used to attack the SolarWinds Serv-U FTP software in limited and targeted attacks.

The post A deep-dive into the SolarWinds Serv-U SSH vulnerability appeared first on Microsoft Security Blog.

]]>
Several weeks ago, Microsoft detected a 0-day remote code execution exploit being used to attack the SolarWinds Serv-U FTP software in limited and targeted attacks. The Microsoft Threat Intelligence Center (MSTIC) attributed the attack with high confidence to DEV-0322, a group operating out of China, based on observed victimology, tactics, and procedures. In this blog, we share technical information about the vulnerability, tracked as CVE-2021-35211, that we shared with SolarWinds, who promptly released security updates to fix the vulnerability and mitigate the attacks.

This analysis was conducted by the Microsoft Offensive Research & Security Engineering team, a focused group tasked with supporting teams like MSTIC with exploit development expertise. Our team’s remit is to make computing safer. We do this by leveraging our knowledge of attacker techniques and processes to build and improve protections in Windows and Azure through reverse engineering, attack creation and replication, vulnerability research, and intelligence sharing.

In early July, MSTIC provided our team with data that seemed to indicate exploit behavior against a newly discovered vulnerability in the SolarWinds Serv-U FTP server's SSH component. Although the intel contained useful indicators, it lacked the exploit in question, so our team set out to reconstruct the exploit, which required us to first find and understand the new vulnerability in the Serv-U SSH-related code.

As we knew this was a remote, pre-auth vulnerability, we quickly constructed a fuzzer focused on the pre-auth portions of the SSH handshake and noticed that the service captured and passed all access violations without terminating the process. It immediately became evident that the Serv-U process would make stealthy, reliable exploitation attempts simple to accomplish. We concluded that the exploited vulnerability was caused by the way Serv-U initially created an OpenSSL AES128-CTR context. This, in turn, could allow the use of uninitialized data as a function pointer during the decryption of successive SSH messages. Therefore, an attacker could exploit this vulnerability by connecting to the open SSH port and sending a malformed pre-auth connection request. We also discovered that the attackers were likely using DLLs compiled without address space layout randomization (ASLR) loaded by the Serv-U process to facilitate exploitation.

We shared these findings, as well as the fuzzer we created, with SolarWinds through Coordinated Vulnerability Disclosure (CVD) via Microsoft Security Vulnerability Research (MSVR), and worked with them to fix the issue. This is an example of how intelligence sharing and industry collaboration result in comprehensive protection for the broader community: detecting attacks through products and fixing vulnerabilities through security updates.

Vulnerability in Serv-U’s implementation of SSH

Secure Shell (SSH) is a widely adopted protocol for secure communications over an untrusted network. The protocol behavior is defined in multiple requests for comment (RFCs), and existing implementations are available in open-source code; we primarily used RFC 4253, RFC 4252, and libssh as references for this analysis.

The implementation of SSH in Serv-U was found by enumerating references to the “SSH-“ string, which must be present in the first data sent to the server. The most likely instance of such code was the following:

Figure 1. Promising instance of “SSH-” string

Putting a breakpoint on the above code and attempting to connect to Serv-U with an SSH client confirmed our hypothesis and resulted in the breakpoint being hit with the following call stack:

Figure 2. The call stack resulting from a breakpoint set on the code in Figure 1

At this point, we noticed that Serv-U.dll and RhinoNET.dll both have ASLR support disabled, making them prime locations for ROP gadgets, as any addresses within them will be constant across any server instances running on the internet for a given Serv-U version.

After reversing related code in the RhinoNET and Serv-U DLLs, we could track SSH messages’ paths as Serv-U processes them. To handle an incoming SSH connection, Serv-U.dll creates a CSUSSHSocket object, which is derived from the RhinoNET!CRhinoSocket class. The CSUSSHSocket object lifetime is the length of the TCP connection—it persists across possibly many individual TCP packets. The underlying CRhinoSocket provides a buffered interface to the socket such that a single TCP packet may contain any number of bytes. This implies a single packet may include any number of SSH messages (provided they fit in the maximum buffer size), as well as partial SSH messages. The CSUSSHSocket::ProcessRecvBuffer function is then responsible for parsing the SSH messages from the buffered socket data.

CSUSSHSocket::ProcessRecvBuffer begins by checking for the SSH version with ParseBanner. If ParseBanner successfully parses the SSH version from the banner, ProcessRecvBuffer then loops over ParseMessage, which obtains a pointer to the current message in the socket data and extracts the msg_id and length fields from the message (more on the ParseMessage function later).

Figure 3. Selection of code from CSUSSHSocket::ProcessRecvBuffer processing loop

The socket data being iterated over is conceptually an array of the pseudo-C structure ssh_msg_t, as seen below. The message data is contained within the payload buffer, the first byte of which is considered the msg_id:

[Code listing: ssh_msg_t pseudo-structure]

ProcessRecvBuffer then dispatches handling of the message based on the msg_id. Some messages are handled directly from the message parsing loop, while others get passed to ssh_pkt_others, which posts the message to a queue for another thread to pick up and process.

Figure 4. Pre-auth reachable handlers in CSUSSHSocket::ProcessRecvBuffer

If the msg_id is deferred to the alternate thread, CSSHSession::OnSSHMessage processes it. This function mainly deals with messages that need to interact with Serv-U managed user profile data (e.g., authentication against per-user credentials) and UI updates. CSSHSession::OnSSHMessage turned out to be uninteresting in terms of vulnerability hunting as most message handlers within it require successful user authentication (initial telemetry indicated this was a pre-authentication vulnerability), and no vulnerabilities were found in the remaining handlers.

When initially running fuzzers against Serv-U with a debugger attached, it was evident that the application was catching exceptions which would normally crash a process (such as access violations), logging the error, modifying state just enough to avoid termination of the process, and then continuing as if there had been no problem. This behavior improves uptime of the file server application but also results in possible memory corruption lingering around in the process and building up over time. As an attacker, this grants opportunities like brute-forcing addresses of code or data with dynamic addresses.

This squashing of access violations assists with exploitation, but for fuzzing, we filtered out “uninteresting” exceptions generated by read/write access violations and let the fuzzer run until hitting a fault wherein RIP had been corrupted. This quickly resulted in the following crashing context:

Figure 5. WinDbg showing crashing context from fuzzer-generated SSH messages

As seen above, CRYPTO_ctr128_encrypt in libeay32.dll (part of OpenSSL) attempted to call an invalid address. The version of OpenSSL used is 1.0.2u, so we obtained the sources to peruse. The following shows the relevant OpenSSL function:

[Code listing: CRYPTO_ctr128_encrypt from OpenSSL 1.0.2]

Meanwhile, the following shows the structure that is passed:

[Code listing: structure passed to CRYPTO_ctr128_encrypt]

The crashing function was reached from the OpenSSL API boundary via the following path: EVP_EncryptUpdate -> evp_EncryptDecryptUpdate -> aes_ctr_cipher -> CRYPTO_ctr128_encrypt.

Looking further up the call stack, it is evident that Serv-U calls EVP_EncryptUpdate from CSUSSHSocket::ParseMessage, as seen below:

Figure 6. Location of call into OpenSSL, wherein attacker-controlled function pointer may be invoked

At this point, we manually minimized the TCP packet buffer produced by the fuzzer until only the SSH messages required to trigger the crash remained. In notation like that used in the RFCs, the required SSH messages were:

[Listing: minimized SSH message sequence, in RFC-style notation]

Note that the following description references “encrypt” functions being called when the crashing code path is clearly attempting to decrypt a buffer. This is not an error: Serv-U uses the encrypt OpenSSL API and, while not optimal for code clarity, it is behaviorally correct since Advanced Encryption Standard (AES) is operating in counter (CTR) mode.

After taking a Time Travel Debugging trace and debugging through the message processing sequence, we found that the root cause of the issue was that Serv-U initially creates the OpenSSL AES128-CTR context with code like the following:

[Code listing: Serv-U's initial AES128-CTR context creation]

Calling EVP_EncryptInit_ex with a NULL key and/or IV is valid, and Serv-U does so in this case because the context is created while handling the KEXINIT message, which arrives before key material is ready. However, AES key expansion is not performed until the key is set, and the data in the ctx->cipher_data structure remains uninitialized until the key expansion is performed. We can (correctly) surmise that our sequence of messages has caused enc_algo_client_to_server->decrypt to be called before the key material is initialized. The Serv-U KEXINIT handler creates objects for all parameters given in the message; however, the objects currently active for the connection are not replaced with the newly created ones until the following NEWKEYS message is processed. In a normal SSH connection, the client always completes the key exchange before issuing a NEWKEYS message, but Serv-U processed NEWKEYS (thus setting the m_bCipherActive flag and replacing the cipher objects) regardless of the connection's key exchange state. From this, we can see that the last message type in our fuzzed sequence does not matter—there only needs to be some data remaining to be processed in the socket buffer to trigger decryption after the partially initialized AES CTR cipher object has been activated.

Exploitation

As the vulnerability allows loading RIP from uninitialized memory, and as there are some modules without ASLR in the process, exploitation is not so complicated: we can find a way to control the content of the uninitialized cipher_data structure, point the cipher_data->block function pointer at some initial ROP gadget, and start a ROP chain. Because the exception handler causes any fault to be ignored, we do not necessarily need to attain reliable code execution on the first packet. It is possible to retry exploitation until code execution succeeds; however, this leaves traces in log files, so it may be worthwhile to invest more effort in a different technique that avoids logging.

The first step is to find the size of the cipher_data allocation, as the most direct avenue to prefill the buffer is to spray allocations of the target allocation size and free them before attempting to reclaim the address as cipher_data. ctx->cipher_data is allocated and assigned in EVP_CipherInit_ex with the following line:

[Code listing: ctx->cipher_data allocation in EVP_CipherInit_ex]

With a debugger, we can see the ctx_size in our case is 0x108, and that this allocator winds up calling ucrtbase!_malloc_base. From previous reversing, we know that both CRhinoSocket and CSUSSHSocket levels of packet parsing call operator new[] to allocate space to hold the packets we send. Luckily, that also winds up in ucrtbase!_malloc_base, using the same heap. Therefore, prefilling the target allocation is as simple as sending a properly sized TCP packet or SSH message and then closing the connection to ensure it is freed. Using this path to spray does not trigger other allocations of the same size, so we don’t have to worry about polluting the heap.

Another important value to pull out of the debugger/disassembly is offsetof(EVP_AES_KEY, block), as that offset in the sprayed data needs to be set to the initial ROP gadget. This value is 0xf8. Conveniently, most of the rest of the EVP_AES_KEY structure can be used for the ROP chain contents itself, and a pointer to the base of this structure exists in registers rbx, r8, and r10 at the time of the controlled function pointer call.

As a simple proof of concept, consider the following Python code:

[Code listing: Python proof-of-concept]

The above results in the following context in the debugger:

Figure 7. Machine context showing rcx, rdx, and rip controlled by attacker

Conclusion: Responsible disclosure and industry collaboration improves security for all

Our research shows that the Serv-U SSH server is subject to a pre-auth remote code execution vulnerability that can be easily and reliably exploited in the default configuration. An attacker can exploit this vulnerability by connecting to the open SSH port and sending a malformed pre-auth connection request. When successfully exploited, the vulnerability could then allow the attacker to install or run programs, such as in the case of the targeted attack we previously reported.

We shared our findings and the fuzzer we created with SolarWinds through Coordinated Vulnerability Disclosure (CVD). SolarWinds released an advisory and security patch, which we strongly encourage customers to apply. If you are not sure whether your system is affected, open a support case in the SolarWinds Customer Portal.

In addition to sharing vulnerability details and fuzzing tooling with SolarWinds, we also recommended enabling ASLR compatibility for all binaries loaded in the Serv-U process. ASLR compatibility is controlled by a simple compile-time flag that has been available since Windows Vista and is enabled by default in modern toolchains. ASLR is a critical security mitigation for services exposed to untrusted remote inputs, and it requires that all binaries in the process are compatible in order to be effective at preventing attackers from using hardcoded addresses in their exploits, as was possible in Serv-U.

We would like to thank SolarWinds for their prompt response. This case further underscores the need for constant collaboration among software vendors, security researchers, and other players to ensure the safety and security of users’ computing experience.

 

Microsoft Offensive Research & Security Engineering team

The post A deep-dive into the SolarWinds Serv-U SSH vulnerability appeared first on Microsoft Security Blog.

]]>
Building Zero Trust networks with Microsoft 365 http://approjects.co.za/?big=en-us/security/blog/2018/06/14/building-zero-trust-networks-with-microsoft-365/ Thu, 14 Jun 2018 15:00:35 +0000 Zero Trust networks eliminate the concept of trust based on network location within a perimeter.

The post Building Zero Trust networks with Microsoft 365 appeared first on Microsoft Security Blog.

]]>
The traditional perimeter-based network defense is obsolete. Perimeter-based networks operate on the assumption that all systems within a network can be trusted. However, today's increasingly mobile workforce, the migration towards public cloud services, and the adoption of the Bring Your Own Device (BYOD) model make perimeter security controls irrelevant. Networks that fail to evolve from traditional defenses are vulnerable to breaches: an attacker can compromise a single endpoint within the trusted boundary and then quickly expand its foothold across the entire network.

Zero Trust networks eliminate the concept of trust based on network location within a perimeter. Instead, Zero Trust architectures leverage device and user trust claims to gate access to organizational data and resources. A general Zero Trust network model (Figure 1) typically comprises the following:

  • Identity provider to keep track of users and user-related information
  • Device directory to maintain a list of devices that have access to corporate resources, along with their corresponding device information (e.g., type of device, integrity, etc.)
  • Policy evaluation service to determine if a user or device conforms to the policy set forth by security admins
  • Access proxy that utilizes the above signals to grant or deny access to an organizational resource
Figure 1. Basic components of a general Zero Trust network model

Gating access to resources using dynamic trust decisions allows an enterprise to enable access to certain assets from any device while restricting access to high-value assets on enterprise-managed and compliant devices. In targeted and data breach attacks, attackers can compromise a single device within an organization, and then use the “hopping” method to move laterally across the network using stolen credentials. A solution based on Zero Trust network, configured with the right policies around user and device trust, can help prevent stolen network credentials from being used to gain access to a network.

Zero Trust is the next evolution in network security. The state of cyberattacks drives organizations to take the “assume breach” mindset, but this approach should not be limiting. Zero Trust networks protect corporate data and resources while ensuring that organizations can build a modern workplace using technologies that empower employees to be productive anytime, anywhere, any which way.

Zero Trust networking based on Azure AD conditional access

Today, employees access their organization's resources from anywhere using a variety of devices and apps. Access control policies that focus only on who can access a resource are not sufficient. To master the balance between security and productivity, security admins also need to factor in how a resource is being accessed.

Microsoft has a story and strategy around Zero Trust networking. Azure Active Directory conditional access is the foundational building block of how customers can implement a Zero Trust network approach. Conditional access and Azure Active Directory Identity Protection make dynamic access control decisions based on user, device, location, and session risk for every resource request. They combine (1) attested runtime signals about the security state of a Windows device and (2) the trustworthiness of the user session and identity to arrive at the strongest possible security posture.

Conditional access provides a set of policies that can be configured to control the circumstances in which users can access corporate resources. Considerations for access include user role, group membership, device health and compliance, mobile applications, location, and sign-in risk. These considerations are used to decide whether to (1) allow access, (2) deny access, or (3) control access with additional authentication challenges (e.g., multi-factor authentication), Terms of Use, or access restrictions. Conditional access works robustly with any application configured for access with Azure Active Directory.

Figure 2. Microsoft’s high-level approach to realizing Zero Trust networks using conditional access.

To accomplish the Zero Trust model, Microsoft integrates several components and capabilities in Microsoft 365: Windows Defender Advanced Threat Protection, Azure Active Directory, Windows Defender System Guard, and Microsoft Intune.

Windows Defender Advanced Threat Protection

Windows Defender Advanced Threat Protection (ATP) is an endpoint protection platform (EPP) and endpoint detection response (EDR) technology that provides intelligence-driven protection, post-breach detection, investigation, and automatic response capabilities. It combines built-in behavioral sensors, machine learning, and security analytics to continuously monitor the state of devices and take remedial actions if necessary. One of the unique ways Windows Defender ATP mitigates breaches is by automatically isolating compromised machines and users from further cloud resource access.

For example, attackers use the Pass-the-Hash (PtH) and the “Pass the ticket for Kerberos” techniques to directly extract hashed user credentials from a compromised device. The hashed credentials can then be used to make lateral movement, allowing attackers to leapfrog from one system to another, or even escalate privileges. While Windows Defender Credential Guard prevents these attacks by protecting NTLM hashes and domain credentials, security admins still want to know that such an attack occurred.

Windows Defender ATP exposes attacks like these and generates a risk level for compromised devices. In the context of conditional access, Windows Defender ATP assigns a machine risk level, which is later used to determine whether the client device should get a token required to access corporate resources. Windows Defender ATP uses a broad range of security capabilities and signals, including Windows Defender System Guard runtime attestation.

Windows Defender System Guard runtime attestation

Windows Defender System Guard protects and maintains the integrity of a system as it boots up and continues running. In the “assume breach” mentality, it’s important for security admins to have the ability to remotely attest the security state of a device. With the Windows 10 April 2018 Update, Windows Defender System Guard runtime attestation contributes to establishing device integrity. It makes hardware-rooted boot-time and runtime assertions about the health of the device. These measurements are consumed by Windows Defender ATP and contribute to the machine risk level assigned to the device.

The single most important goal of Windows Defender System Guard is to validate that the system integrity has not been violated. This hardware-backed high-integrity trusted framework enables customers to request a signed report that can attest (within guarantees specified by the security promises) that no tampering of the device’s security state has taken place. Windows Defender ATP customers can view the security state of all their devices using the Windows Defender ATP portal, allowing detection and remediation of any security violation.

Windows Defender System Guard runtime attestation leverages the hardware-rooted security technologies in virtualization-based security (VBS) to detect attacks. On virtual secure mode-enabled devices, Windows Defender System Guard runtime attestation runs in an isolated environment, making it resistant to even a kernel-level adversary.

Windows Defender System Guard runtime attestation continually asserts system security posture at runtime. These assertions are directed at capturing violations of Windows security promises, such as disabling process protection.

Azure Active Directory

Azure Active Directory is a cloud identity and access management solution that businesses use to manage access to applications and protect user identities both in the cloud and on-premises. In addition to its directory and identity management capabilities, as an access control engine Azure AD delivers:

  • Single sign-on experience: Every user has a single identity to access resources across the enterprise to ensure higher productivity. Users can use the same work or school account for single sign-on to cloud services and on-premises web applications. Multi-factor authentication helps provide an additional level of validation of the user.
  • Automatic provisioning of application access: Users’ access to applications can be automatically provisioned or de-provisioned based on their group memberships, geo-location, and employment status.

As an access management engine, Azure AD makes a well-informed decision about granting access to organizational resources using information about:

  • Group and user permissions
  • App being accessed
  • Device used to sign in (e.g., device compliance info from Intune)
  • Operating system of the device being used to sign in
  • Location or IP ranges of sign-in
  • Client app used to sign in
  • Time of sign-in
  • Sign-in risk, which represents the probability that a given sign-in isn’t authorized by the identity owner (calculated by Azure AD Identity Protection’s multiple machine learning or heuristic detections)
  • User risk, which represents the probability that a bad actor has compromised a given user (calculated by Azure AD Identity Protection’s advanced machine learning that leverages numerous internal and external sources for label data to continually improve)
  • More factors that we will continually add to this list

Conditional access policies are evaluated in real-time and enforced when a user attempts to access any Azure AD-connected application, for example, SaaS apps, custom apps running in the cloud, or on-premises web apps. When suspicious activity is discovered, Azure AD helps take remediation actions, such as block high-risk users, reset user passwords if credentials are compromised, enforce Terms of Use, and others.

The decision to grant access to a corporate application is given to client devices in the form of an access token. This decision is centered around compliance with the Azure AD conditional access policy. If a request meets the requirements, a token is granted to a client. The policy may require that the request provides limited access (e.g., no download allowed) or even be passed through Microsoft Cloud App Security for in-session monitoring.

Microsoft Intune

Microsoft Intune is used to manage mobile devices, PCs, and applications in an organization. Microsoft Intune and Azure have management and visibility of assets and data valuable to the organization, and have the capability to automatically infer trust requirements based on constructs such as Azure Information Protection, Asset Tagging, or Microsoft Cloud App Security.

Microsoft Intune is responsible for the enrollment, registration, and management of client devices. It supports a wide array of device types: mobile devices (Android and iOS), laptops (Windows and macOS), and employees’ BYOD devices. Intune combines the machine risk level provided by Windows Defender ATP with other compliance signals to determine the compliance status (“isCompliant”) of the device. Azure AD leverages this compliance status to block or allow access to corporate resources. Conditional access policies can be configured in Intune in two ways:

  • App-based: Only managed applications can access corporate resources
  • Device-based: Only managed and compliant devices can access corporate resources

Learn more about how to configure risk-based conditional access compliance checks in Intune.

Conditional access at work

The value of conditional access can be best demonstrated with an example. (Note: The names used in this section are fictitious, but the example illustrates how conditional access can protect corporate data and resources in different scenarios.)

SurelyMoney is one of the most prestigious financial institutions in the world, helping over a million customers carry out their business transactions seamlessly. The company uses Microsoft 365 E5 suite, and their security enterprise admins have enforced conditional access.

An attacker seeks to steal information about the company’s customers and the details of their business transactions. The attacker sends seemingly innocuous e-mails with malware attachments to employees. One employee unwittingly opens the attachment on a corporate device, compromising the device. The attacker can now harvest the employee’s user credentials and try to access a corporate application.

Windows Defender ATP, which continuously monitors the state of the device, detects the breach and flags the device as compromised. This device information is relayed to Azure AD and Intune, which then denies the access to the application from that device. The compromised device and user credentials are blocked from further access to corporate resources. Once the device is auto-remediated by Windows Defender ATP, access is re-granted for the user on the remediated device.

This illustrates how conditional access and Windows Defender ATP work together to help prevent the lateral movement of malware, provide attack isolation, and ensure protection of corporate resources.

Azure AD applications such as Office 365, Exchange Online, SharePoint Online, and others

The executives at SurelyMoney store a lot of high-value confidential documents in Microsoft SharePoint, an Office 365 application. Using a compromised device, the attacker tries to steal these documents. However, conditional access’ tight coupling with O365 applications prevents this from taking place.

Office 365 applications like Microsoft Word, Microsoft PowerPoint, and Microsoft Excel allow an organization’s employees to collaborate and get work done. Different users can have different permissions, depending on the sensitivity or nature of their work, the group they belong to, and other factors. Conditional access facilitates access management in these applications as they are deeply integrated with the conditional access evaluation. Through conditional access, security admins can implement custom policies, enabling the applications to grant partial or full access to requested resources.

Figure 3. Zero Trust network model for Azure AD applications

Line of business applications

SurelyMoney has a custom transaction-tracking application connected to Azure AD. This application keeps records of all transactions carried out by customers. The attacker tries to gain access to this application using the harvested user credentials. However, conditional access prevents this breach from happening.

Every organization has mission-critical and business-specific applications that are tied directly to the success and efficiency of employees. These typically include custom applications related to e-commerce systems, knowledge tracking systems, document management systems, etc. Azure AD will not grant an access token for these applications if they fail to meet the required compliance and risk policy, relying on a binary decision on whether access to resources should be granted or denied.

Figure 4. Zero Trust network model expanded for line of business apps

On-premises web applications

Employees today want to be productive anywhere, any time, and from any device. They want to work on their own devices, whether they be tablets, phones, or laptops. And they expect to be able to access their corporate on-premises applications. Azure AD Application Proxy allows remote access to external applications as a service, enabling conditional access from managed or unmanaged devices.

SurelyMoney has built their own version of a code-signing application, which is a legacy tenant application. It turns out that the user of the compromised device belongs to the code-signing team. The requests to the on-premises legacy application are routed through the Azure AD Application Proxy. The attacker tries to make use of the compromised user credentials to access this application, but conditional access foils this attempt.

Without conditional access, the attacker would be able to create any malicious application they want, code-sign it, and deploy it through Intune. These apps would then be pushed to every device enrolled in Intune, giving the attacker access to an unprecedented amount of sensitive information. Attacks like these have been observed before, and it is in an enterprise's best interests to prevent them from happening.

Figure 5. Zero Trust network model for on-premises web applications

Continuous innovation

At present, conditional access works seamlessly with web applications. Zero Trust, in the strictest sense, requires that all network requests flow through the access control proxy and that all evaluations be based on the device and user trust model. These network requests can include various legacy communication protocols and access methods like FTP, RDP, SMB, and others.

By leveraging device and user trust claims to gate access to organizational resources, conditional access provides comprehensive but flexible policies that secure corporate data while ensuring user productivity. We will continue to innovate to protect the modern workplace, where user productivity continues to expand beyond the perimeters of the corporate network.

Sumesh Kumar, Ashwin Baliga, Himanshu Soni, Jairo Cadena
Enterprise & Security


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.

The post Building Zero Trust networks with Microsoft 365 appeared first on Microsoft Security Blog.

]]>