Windows Server 2016 | Microsoft Windows Server Blog http://approjects.co.za/?big=en-us/windows-server/blog/product/windows-server-2016/ Your Guide to the Latest Windows Server Product Information

Planning ahead for Windows Server 2016 end of support http://approjects.co.za/?big=en-us/windows-server/blog/2026/02/25/planning-ahead-for-windows-server-2016-end-of-support/ Wed, 25 Feb 2026 16:00:00 +0000 In accordance with the Microsoft Lifecycle Policy, extended support for Windows Server 2016 will end on January 12, 2027.

The post Planning ahead for Windows Server 2016 end of support appeared first on Microsoft Windows Server Blog.

Customers rely on Windows Server to power their mission-critical workloads. Guided by customer feedback, we continue to deliver new innovations for Windows Server across Azure, on-premises environments, and the edge.

As we continue to innovate, support for older Windows Server versions—including security updates—eventually comes to an end. In accordance with the Microsoft Lifecycle Policy, extended support for Windows Server 2016 will end on January 12, 2027.

Many customers are already upgrading to the latest version of Windows Server to take advantage of the newest innovations and modernize their IT environment. Windows Server 2025 stands as the most secure and cloud‑connected version ever—bringing cloud‑grade security, hotpatching, and centralized hybrid management to on‑premises environments.

However, we recognize that Windows Server often supports complex, business-critical applications, and some customers may need additional time to complete their modernization journey. To help protect these workloads during the transition, we are pleased to offer flexible options and benefits through Azure and the latest Windows Server releases.

Today, we are announcing Extended Security Updates for Windows Server 2016.

Extended Security Updates for Windows Server 2016 deliver an enhanced cloud experience through Azure Arc. Security updates are available through the Azure portal, providing a streamlined, customer-centric way to protect on-premises and multi-cloud environments. For customers who need to keep workloads on premises, Extended Security Updates enabled by Azure Arc provide additional Azure benefits, including licensing flexibility, Azure management capabilities, and advanced security features, while also unlocking flexible subscription billing for Windows Server 2016 workloads.

While this provides an option to continue running Windows Server 2016 to avoid any disruption to business-critical applications, this period also presents the opportunity to upgrade to Windows Server 2025 or consider migrating to Azure.

To get started with planning Windows Server 2016 end of support, please refer to the Extended Security Updates frequently asked questions for more information, and learn about the latest in Azure Migration and Modernization Program. For more information on the Microsoft Lifecycle Policy, see the Windows Server 2016 lifecycle page.

25 reasons to choose Azure Stack HCI http://approjects.co.za/?big=en-us/windows-server/blog/2019/06/06/25-reasons-to-choose-azure-stack-hci/ http://approjects.co.za/?big=en-us/windows-server/blog/2019/06/06/25-reasons-to-choose-azure-stack-hci/#comments Thu, 06 Jun 2019 18:00:43 +0000 On May 22, 2019 we had an incredible session on hyperconverged infrastructure (HCI) with Windows Server 2019 at the Windows Server Summit. If you haven’t had a chance to watch the event, check out the recording of the live stream and deep dive sessions by registering online.

The post 25 reasons to choose Azure Stack HCI appeared first on Microsoft Windows Server Blog.

This blog post was authored by Dianna Marks, Product Marketing Manager, Windows Server Marketing. 

At the Windows Server Summit in May, Cosmos Darwin and Greg Cusanza from the Windows Server team presented a lightning round all about hyperconverged infrastructure (HCI) powered by Windows Server. If you haven’t had a chance to watch the event, check out the recording of the live stream and deep dive sessions by registering online. It’s quick and free.

Here are the 25 things they presented in the lightning round:

1. Azure Stack HCI Catalog

Available for purchase right now, there are over 75 Azure Stack HCI solutions from over 15 partners. Check out the Azure Stack HCI Catalog to find solutions from your preferred hardware vendor and get started today.

2. Networking and SDN coexisting side-by-side

Now all HCI solutions include what is required for software-defined networking (SDN). You no longer need to devote your entire infrastructure to SDN. Instead, you can mix and match per virtual machine (VM), using traditional VLAN-based networking alongside SDN. Try it out yourself in the latest Windows Admin Center release.

3. Deploy with SDN Express

Deploying SDN is easier than ever with SDN Express. Download the scripts and run SDN Express to get a helpful wizard that guides you through all the steps necessary for deployment–all in under 30 minutes. Learn more by reading the documentation for SDN deployment.

4. Windows Admin Center for HCI

Windows Admin Center is the future of Windows Server in-box management, and that extends to HCI as well. Add your HCI cluster to Windows Admin Center to get purpose-built tools for managing and monitoring Storage Spaces Direct and SDN, including capabilities like provisioning volumes, managing Hyper-V virtual machines, troubleshooting configuration or hardware problems, and much more.

5. Deduplication and compression for ReFS

Deduplication and compression are now available for ReFS, Microsoft’s recommended file system for HCI. Deduplication and compression increase usable capacity by identifying duplicate portions of files and only storing them once. Savings vary depending on the type of file but can range up to 90 percent for highly repetitive storage like ISO or VHDX backups. Check out the demo “Deduplication and compression for Storage Spaces Direct” from Microsoft Ignite 2018, and read the documentation for Data Deduplication and ReFS.
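
The core idea is simple enough to sketch in a few lines of Python: split data into chunks, key each chunk by a hash of its contents, and store duplicates only once. This is an illustrative sketch of the concept only, not how ReFS or Data Deduplication is actually implemented; the chunk size and hash choice here are arbitrary assumptions:

```python
import hashlib

def dedup_store(blobs, chunk_size=64 * 1024):
    """Chunk each blob, key chunks by content hash, store duplicates once."""
    store = {}      # sha256 digest -> chunk bytes (each unique chunk kept once)
    logical = 0     # bytes before deduplication
    for data in blobs:
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            logical += len(chunk)
            store.setdefault(hashlib.sha256(chunk).digest(), chunk)
    physical = sum(len(c) for c in store.values())
    return logical, physical

# Two copies of the same 256 KB blob (four distinct 64 KB chunks each):
# the second copy adds no physical storage, so savings are 50 percent,
# mirroring why repetitive data like VHDX backups deduplicates so well.
blob = b"".join(bytes([i]) * (64 * 1024) for i in range(4))
logical, physical = dedup_store([blob, blob])
savings = 1 - physical / logical    # 0.5
```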

6. Larger maximum scale

Even with deduplication and compression, it’s still possible to run out of capacity, so in Windows Server 2019 the maximum total raw storage capacity per cluster is increased from 1 PB in Windows Server 2016 to 4 PB. That’s enough space to store all of Wikipedia, in every language, with complete edit history, uncompressed! Watch the demo “Scale to over 3.5 PB with Windows Server 2019 and QCT QxStor” from Microsoft Ignite 2018 for an example.

7. Cluster sets

Now in Windows Server 2019, you can group multiple clusters into a cluster set and add additional clusters to it over time. The great thing about this is that a virtual machine (VM) can seamlessly live migrate from one cluster to a host in a different cluster and continue to access its storage. To learn more, read the documentation on cluster sets.

8. Span sites with SDN

In Windows Server 2019 we’ve improved SDN gateway performance, increasing throughput from 4 Gbps to 18 Gbps for a single SDN gateway. We also have generic routing encapsulation (GRE) tunneling that connects two network controllers, allowing different workloads to talk to each other as if they’re on one network. To learn more about high-performance gateways in Windows Server 2019, read the blog post “Top 10 Networking Features in Windows Server 2019: #6 High Performance SDN Gateways” on the Windows Server Networking Blog.

9. Native support for persistent memory

Windows Server has become more scalable over time with regard to both capacity and performance. It is on the leading edge of x86 hardware innovation and is consistently one of the first operating systems and hypervisors to support new hardware technology, such as Intel Xeon processors and Intel Optane persistent memory. Watch the demo at Microsoft Ignite 2018, and read the documentation “Understand and deploy persistent memory.”

10. Faster networking with fewer cycles/byte

In addition to hardware improvements, we’ve also been investing in our networking stack. Feature improvements include nearly double the throughput on the send and receive paths, lower CPU utilization, better handling of high-bandwidth, high-latency links, and a Data Plane Development Kit (DPDK) for Windows that bypasses the host networking stack to speed up packet processing. You can read more about all of these features on our Windows Server Networking Blog.

11. Mirror-accelerated parity is 2X faster

The storage team has also been focused on optimizations with mirror-accelerated parity, a technology that lets you create a volume that uses mirror resiliency for part of its capacity and parity (erasure coding) resiliency for the rest. This provides faster writes while opening up capacity.
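
To see why mixing the two tiers opens up capacity, here is a back-of-the-envelope Python sketch. The numbers are illustrative assumptions only (a three-way mirror storing three full copies, 50 percent parity efficiency), not the exact encodings Storage Spaces Direct uses:

```python
def raw_footprint_gb(usable_gb, mirror_fraction,
                     mirror_copies=3, parity_efficiency=0.5):
    """Raw capacity consumed by a volume that keeps `mirror_fraction` of its
    usable space on the mirror tier (fast writes, several full copies) and
    the rest on the parity/erasure-coded tier (space efficient)."""
    mirror_raw = usable_gb * mirror_fraction * mirror_copies
    parity_raw = usable_gb * (1 - mirror_fraction) / parity_efficiency
    return mirror_raw + parity_raw

# An all-mirror 1000 GB volume vs. a 20% mirror / 80% parity split:
# the mixed layout consumes noticeably less raw capacity for the
# same usable space, while writes still land in the mirror tier first.
all_mirror = raw_footprint_gb(1000, mirror_fraction=1.0)   # 3000 GB raw
mixed = raw_footprint_gb(1000, mirror_fraction=0.2)        # 600 + 1600 = 2200 GB raw
```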

12. Built-in performance history

HCI now has built-in performance history. Historical data is collected automatically and aggregated across more than 50 performance counters. There’s nothing that you have to install, set up, or configure. Explore more in the documentation for performance history.

13. Shielded virtual machines

Shielded virtual machines are part of the core hypervisor and have been improved so that even if you don’t have network access, you can still connect to them through VMConnect and PowerShell Direct. We’ve also added the ability to run Linux inside your shielded VMs. Watch the five-minute overview video of shielded VMs and check out the documentation for VMConnect and PowerShell Direct to shielded VMs, as well as deploying Linux inside a shielded VM.

14. Core scheduler

It’s also important to protect your hypervisor host. In Windows Server 2016 we had the classic scheduler, which offered fair-share, preemptive round-robin scheduling for guest virtual processors. In Windows Server 2019, we have a new hypervisor scheduler called the core scheduler, which constrains virtual processors to physical core boundaries, further isolating virtual machines. For further details, read the documentation “Managing Hyper-V hypervisor scheduler types.”
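
The isolation property is easy to illustrate: under a core scheduler, the SMT sibling threads of one physical core only ever run virtual processors from the same VM. The following is a hypothetical Python sketch of that placement rule, not Hyper-V’s actual scheduler:

```python
def core_schedule(vms, num_cores, threads_per_core=2):
    """Assign each VM's virtual processors (VPs) to whole SMT cores, so
    sibling hyperthreads never mix VPs from different VMs. A classic
    scheduler, by contrast, could interleave them freely on one core."""
    assignment, core = {}, 0
    for vm, vp_count in vms.items():
        vps = list(range(vp_count))
        while vps:
            if core >= num_cores:
                raise RuntimeError("not enough physical cores")
            take = min(threads_per_core, len(vps))
            # Fill this core's threads from one VM only, even if that
            # leaves an SMT sibling idle (the isolation trade-off).
            assignment[core] = [(vm, vps.pop()) for _ in range(take)]
            core += 1
    return assignment

# Two VMs with 3 VPs each on four 2-thread cores: VM "a" gets cores 0-1
# (one thread left idle rather than shared), VM "b" gets cores 2-3.
plan = core_schedule({"a": 3, "b": 3}, num_cores=4)
```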

15. HTTP/2

In Windows Server 2019 we’ve made HTTP/2 better with connection coalescing, which allows two websites with a common domain name to share a certificate and a single TCP connection. It also brings improved cipher suite selection, which reduces connection failures while continuing to enforce the blocked-cipher list.
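
The coalescing rule itself is straightforward to express: a second hostname can ride an existing connection when it resolves to the same address and the connection’s certificate also covers that name. A simplified Python sketch with hypothetical hostnames; real implementations add more checks than this:

```python
def can_coalesce(conn, host, host_ip):
    """Reuse an existing HTTP/2 connection for another hostname only when
    that hostname resolves to the same address AND the connection's
    certificate (its subject alternative names) covers the name too."""
    if host_ip != conn["ip"]:
        return False
    for name in conn["cert_sans"]:
        if name == host:
            return True
        # Basic wildcard handling: "*.contoso.com" covers "img.contoso.com".
        if name.startswith("*.") and host.endswith(name[1:]):
            return True
    return False

conn = {"ip": "203.0.113.7",
        "cert_sans": ["www.contoso.com", "img.contoso.com"]}
shared = can_coalesce(conn, "img.contoso.com", "203.0.113.7")    # True: one TCP connection serves both sites
foreign = can_coalesce(conn, "cdn.fabrikam.com", "203.0.113.7")  # False: certificate doesn't cover it
```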

16. More secure clustering

Core failover clustering is now more secure: the dependency on NTLM has been removed completely in favor of Kerberos and certificate-based authentication between nodes, with no changes required from the user or deployment tools. Check out the documentation “What’s new in Failover Clustering” to learn more.

17. Cluster-aware updating for HCI

Cluster-aware updating for HCI now allows you to easily keep Windows Server fully patched with the latest updates. It is a technology that orchestrates the rollout of updates across your server nodes. More information is included in the documentation “What’s new in Failover Clustering,” as well as in the demo “Be an IT hero with Storage Spaces Direct in Windows Server 2019” from Microsoft Ignite 2018.

18. USB witness

Now in Windows Server 2019, in addition to the file share witness, which requires an on-premises connection, and the cloud witness, which requires a connection to the cloud, we also offer a third option, the “USB witness,” which uses a USB drive inserted into a compatible router or switch. More information can be found in the documentation “What’s new in Failover Clustering,” as well as in the example steps to configure a USB witness with the NETGEAR Nighthawk X4S.

19. Nested resiliency

Nested resiliency keeps you up and running even in the event of a drive failure and a server failure at the same time. It combines parity (RAID 5-style) resiliency within each server with mirroring across to the other server (RAID 5+1). This allows you to survive multiple failures even with a two-node cluster. To learn more, refer to the documentation “Nested resiliency for Storage Spaces Direct.”
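
The availability logic can be sketched as a simple predicate: data survives as long as at least one copy avoids both the failed server and the failed drive. The Python sketch below compares a nested two-way mirror (four copies on a two-node cluster) with a classic two-way mirror; it illustrates only the failure math, not S2D’s actual slab placement, and the server/drive names are hypothetical:

```python
def survives(copies, failed_server, failed_drive):
    """Data stays readable if at least one copy avoids both the failed
    server and the failed drive. Copies are (server, drive) pairs."""
    return any(s != failed_server and d != failed_drive for s, d in copies)

# Nested two-way mirror: mirrored across servers AND again within each
# server, so a slab has four copies spread over four drives.
nested = [("s1", "s1-d1"), ("s1", "s1-d2"), ("s2", "s2-d1"), ("s2", "s2-d2")]
# Classic two-way mirror keeps only one copy per server.
classic = [("s1", "s1-d1"), ("s2", "s2-d1")]

# Server s1 goes down AND drive s2-d1 fails on the surviving server:
nested_ok = survives(nested, "s1", "s2-d1")    # True, via ("s2", "s2-d2")
classic_ok = survives(classic, "s1", "s2-d1")  # False: both copies are gone
```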

20. Protection with Azure Site Recovery

For smaller sites and branch offices, Azure Site Recovery allows you to back up your virtual machines to Azure and is integrated into Windows Admin Center. To learn more, refer to the documentation “Protect your Hyper-V Virtual Machines with Azure Site Recovery and Windows Admin Center.”

21. Azure Monitor and Health Service

Health Service on Windows Admin Center is now integrated with Azure Monitor and provides email and SMS notifications when something goes wrong. Learn how to configure Azure Monitor for HCI.

22. Integration with Azure Network Adapter

Azure Network Adapter is a Windows Admin Center feature that connects a single server to an Azure virtual network gateway, so that server can reach your Azure file shares and VMs running in Azure. Watch the Microsoft Mechanics video “Windows Server 2019 + Microsoft Azure = hybrid management updates” for a demo.

23. LEDBAT or PacketMon

LEDBAT backs off lower-priority workloads to let high-priority traffic take over; when the higher-priority traffic slows down, the lower-priority traffic picks back up again within a second or two. Read more about LEDBAT on the Networking Blog.
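
LEDBAT achieves this by using queuing delay, rather than packet loss, as its congestion signal. Below is a simplified per-update sketch in the spirit of the LEDBAT controller; the 60 ms target and unit gain are illustrative constants, not Windows’ actual tuning:

```python
def ledbat_update(cwnd, queuing_delay_ms, target_ms=60, gain=1.0, min_cwnd=1.0):
    """One LEDBAT-style congestion window update: grow while measured
    queuing delay stays below the target, shrink once foreground traffic
    pushes delay past it. Background transfers therefore yield within
    seconds and resume once the link drains."""
    off_target = (target_ms - queuing_delay_ms) / target_ms
    return max(min_cwnd, cwnd + gain * off_target)

cwnd = 10.0
grown = ledbat_update(cwnd, queuing_delay_ms=12)         # link idle: window grows
backed_off = ledbat_update(grown, queuing_delay_ms=120)  # congested: window shrinks
```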

24. High accuracy time

By implementing features such as Precision Time Protocol, Traceability, and Leap Seconds support, we’ve ensured improved time accuracy, especially for those of you in regulated industries. Learn more about high accuracy time features in the Windows Server Summit session and in the Windows Time Service documentation.

25. Over 25,000 clusters worldwide!

Last year, we had 10,000 clusters running around the world, and this year we have over 25,000 clusters running Storage Spaces Direct!

That’s a wrap!

We just gave you 25 reasons why you should consider HCI with Windows Server. And again, register online to watch the session from Windows Server Summit if you’ve missed it. From security to scalability and enhanced management, we are continuously improving our products to meet your data center needs. And if you stay tuned, I have no doubt you’ll be seeing 25 more reasons soon!

Express updates for Windows Server 2016 re-enabled for November 2018 update http://approjects.co.za/?big=en-us/windows-server/blog/2018/11/12/express-updates-for-windows-server-2016-re-enabled-for-november-2018-update/ http://approjects.co.za/?big=en-us/windows-server/blog/2018/11/12/express-updates-for-windows-server-2016-re-enabled-for-november-2018-update/#comments Tue, 13 Nov 2018 00:00:30 +0000 Starting with the November 13, 2018 update on Tuesday, Windows will again publish Express updates for Windows Server 2016. Express updates for Windows Server 2016 stopped in mid-2017 after a significant issue was found that kept the updates from installing correctly.

The post Express updates for Windows Server 2016 re-enabled for November 2018 update appeared first on Microsoft Windows Server Blog.

This blog post was authored by Joel Frauenheim, Principal Program Manager, Windows Servicing and Delivery. 

Starting with the November 13, 2018 Update Tuesday, Windows will again publish Express updates for Windows Server 2016. Express updates for Windows Server 2016 stopped in mid-2017 after a significant issue was found that kept the updates from installing correctly. While the issue was fixed in November 2017, the update team took a conservative approach to publishing the Express packages to ensure most customers would have the November 14, 2017 update (KB 4048953) installed on their server environments and not be impacted by the issue.

System administrators for WSUS and System Center Configuration Manager (SCCM) need to be aware that in November 2018 they will once again see two packages for the Windows Server 2016 update: a Full update and an Express update. System administrators who want to use Express for their server environments need to confirm that the device has taken a full update since the November 14, 2017 update (KB 4048953) to ensure the Express update installs correctly. Any device which has not been updated since November 14, 2017 (KB 4048953) will see repeated failures that consume bandwidth and CPU resources in an infinite loop if the Express update is attempted. Remediation for that state is for the system administrator to stop pushing the Express update and push a recent Full update to stop the failure loop.
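
The decision rule for administrators reduces to a simple predicate. Here is a hypothetical Python sketch of it; the function name and date handling are illustrative only, and you would check the device’s actual update history through your management tooling:

```python
from datetime import date

# KB4048953 (November 14, 2017): the update that fixed Express installs.
BASELINE = date(2017, 11, 14)

def can_push_express(last_full_update):
    """Push the Express package only to machines that have taken a full
    update on or after KB4048953; otherwise push a Full update first to
    avoid the repeated-failure loop described above."""
    return last_full_update is not None and last_full_update >= BASELINE

ok = can_push_express(date(2018, 10, 9))     # True: Express delta will apply
stale = can_push_express(date(2017, 6, 13))  # False: needs a Full update first
```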

With the November 13, 2018 Express update, customers will see an immediate reduction in the package size transferred between their management system and their Windows Server 2016 endpoints.

The technical value of WSSD validated HCI solutions, part 2 http://approjects.co.za/?big=en-us/windows-server/blog/2018/02/21/the-technical-value-of-validated-hci-solutions-part-2/ Wed, 21 Feb 2018 20:00:08 +0000 In the previous blog post I discussed the high-level ideas behind our solution validation program, and the technical merits it accrues for people who buy and use those solutions...

The post The technical value of WSSD validated HCI solutions, part 2 appeared first on Microsoft Windows Server Blog.

This post is authored by Carmen Crincoli, Senior Program Manager, Windows Server, and is the second of a 2-part series aimed at explaining the value provided by the Windows Server-Software Defined (WSSD) program.

In the previous blog post I discussed the high-level ideas behind our solution validation program, and the technical merits it accrues for people who buy and use those solutions. Building on those concepts, I’m going to dive into one particularly thorny integration challenge partners face when creating these solutions. I’ve been working with Windows and hardware at Microsoft for over 20 years. I know many of you have similar experience in the industry. You’re probably pretty certain you know how to make these systems sing together. I’m here to tell you it’s not as straightforward as your past experiences might lead you to believe.

Standalone vs distributed systems

The way most servers and storage have been designed and validated in the PC ecosystem (until very recently) has been as standalone systems. You buy a server, you get the parts and sizes you need to support your workload, connect it to external networks and storage, and off you go. The integration work is all done by the OEM before they turn around and sell the system to you. External dependencies aren’t necessarily guaranteed to work out of the box, and often need to be tuned and configured in the customer environment for the best experience. Those external systems often undergo their OWN integration testing. SANs, networks, and other dependent infrastructure are tested and sold in their own silos to work under specific conditions.

The world of HCI blurs those lines dramatically. All of those different parts now converge into the same set of systems and need to work quickly and flawlessly with each other. The server is now the storage and the network and the compute, all in one. All of those separate integration steps need to be considered as part of the system design, not just the ones for the server itself. This is where “whole-solution validation” comes in. I’m going to dive into one of the thorniest and most problematic areas, the storage subsystem.

Off-the-shelf vs vendor supplied

One of the simplest areas to overlook is the differences between retail or channel supplied parts, and their vendor-tuned equivalents. One of the lesser-discussed features of the storage world is how specialized different versions of otherwise identical devices can become, as requested by different vendors. This isn’t anything specific to an OS or an OEM, it’s industry-wide.

Let’s say vendor A has a very popular model of disk. They’ll take that one disk and sell it into the retail channel under their enterprise label. The firmware will be tuned for maximum compatibility, since they don’t know what it will be attached to, and what features are needed or supported. It won’t use aggressive timings. It won’t implement special commands. It won’t do anything fancy that might break in a general configuration. Now, they take that same disk, and sell it to one of their major OEM customers. That OEM has LOTS of requirements. They sell their own SANs. They sell servers. They sell hardened systems that get installed on oil rigs. They sell all kind of things that one disk might need to be plugged into. All of those uses might have their own special needs. They have specialized diagnostics. They have aggressive timings, which allows them to have better performance and latencies than their competitors. They only have 1 or 2 types of HBA they support, so their testing matrix is small. The disk vendor will turn around and sell that device to them with a specialized firmware load that meets all their needs. Now repeat that procedure for a totally different OEM with very different requirements. You now have 3 versions of the same disk, all slightly different. Repeat another few times…you see where this is going, right?

Now, for all intents and purposes those disks look and act like any other disk 99% of the time. After all, there are industry standards around these things, and you’re not going to get many sales for a SATA or SAS disk that won’t ACT like a SATA or SAS disk when addressed by the system. HOWEVER, the 1% of the time it’s slightly different is often when it matters most. When the system is under stress, or when certain commands are issued in a certain order, it can result in behaviors that no one ever explicitly tested for before. OEM A tested for feature X in this scenario. OEM B didn’t test for it at all because they don’t use it that way. Put it into System C, and maybe the firmware on the disk just checks out and never comes back. Now you have a “dead” disk that’s simply an interop bug that no one but YOU would have found, because you’re building from a parts list instead of from a solution list.

You can take that very real example above and apply it across the entire storage chain. You’ll quickly discover that a real compatibility matrix for a small set of physically identical devices multiplies into HUNDREDS of unique configurations. That unfortunate reality is why, even though there is a healthy list of parts with the required Software-Defined Data Center AQs, we want customers invested in solutions which were designed and tested from end to end by our partners and OEMs, rather than just a list of certified devices.

Architectural differences matter too

Now, your first thought after reading that might be, “Fine, I’ll simply use SDDC parts from my preferred OEM, knowing that they’ve been designed to work together.” While that could eliminate certain types of interop bugs, it still leaves the chance for architectural changes that were never tested together as part of an integrated solution design. As an example, mixing SAS and SATA devices on the same storage bus could result in unexpected issues during failures or certain types of I/O. While technically supported by most HBAs and vendors, unless you’ve actually tested the solution that way, you have an incomplete picture of how all the pieces will work together. Another example is that not all SSDs are created equal. NVMe devices offer a tremendous speed and latency benefit over more traditional SAS and SATA SSDs. With that boost can come a much higher performance ceiling for the entire system, which can result in heavy and unusual I/O patterns for the other storage devices on the system. One configuration using SATA SSDs and HDDs may behave very differently the second you swap the SATA SSDs for NVMe ones, despite the fact that all of them may be SDDC certified by one vendor.

This isn’t exclusively a storage problem, either. There are often multiple NICs in an OEM’s catalog that can support high-performance RDMA networks. Some of them will use RoCE, some will use iWARP, likely none of them will be supported by the vendor in a mixed environment, and often they require very specific firmware revisions for different whole solution configurations to ensure maximum reliability and performance. If no one ever tested the whole system as a solution with all the pieces well-defined, from end-to-end, the final reliability of the solution can only be speculated on.

Conclusion

These posts aren’t meant to make blanket statements about the supportability of do-it-yourself HCI and S2D configurations. Assuming your systems have all the proper logos and certifications, and your configuration passes cluster validation and other supportability checks, Microsoft will support you. However, it’s very easy to get caught with a slightly different version of the hardware, firmware, tested drivers, and other components. Building and tracking this in-house is not a trivial task! That’s why we wanted to make it clear why we’re running the Windows Server-Software Defined program, and what benefits you can expect by purchasing one of these configurations for your critical workloads. We feel confident that you will have a better HCI experience with Windows Server over the lifetime of the solution via this program than by building it on your own.

The technical value of WSSD validated HCI solutions, part 1 http://approjects.co.za/?big=en-us/windows-server/blog/2018/02/20/the-technical-value-of-wssd-validated-hci-solutions-part-1/ Tue, 20 Feb 2018 20:00:58 +0000 As many of you are aware, one of the most important scenarios enabled by modern versions of Windows is the creation of a hyper-converged infrastructure (HCI) through new technologies like Storage Spaces Direct (S2D)...

The post The technical value of WSSD validated HCI solutions, part 1 appeared first on Microsoft Windows Server Blog.

This post is authored by Carmen Crincoli, Senior Program Manager, Windows Server, and is the first of a 2-part series aimed at explaining the value provided by the Windows Server-Software Defined (WSSD) program.

As many of you are aware, one of the most important scenarios enabled by modern versions of Windows is the creation of a hyper-converged infrastructure (HCI) through new technologies like Storage Spaces Direct (S2D). Right now, all the technology you need to create this infrastructure is simply baked into Windows Server Datacenter, waiting for someone to enable and configure it for their use. However, in order to receive support in a production environment, there are a number of quality hurdles that have to be cleared, particularly around the hardware used to create them. The work that goes into designing and testing these solutions is intense. As with any other Windows feature, you can do much of this work yourself. However, most IT organizations don’t have the resources to perform the same level of integration testing as our partner ecosystem.

To ease your path to a high quality experience, we created the WSSD program. WSSD enables our partners to easily list and sell fully tested and supported configurations. We recommend our customers use these pre-certified solutions instead of trying to build their own. Rather than just asking you to take our word for it, I’m going to attempt to provide an understanding of the kind of technical work these vendors do as integrators, and why building off the shelf is a bad idea for critical production workloads.

Device and system certification

The first step in this process is certifying that critical devices in the system can perform the work that’s needed. To that end, we use our existing Windows Server Catalog and logo program to test and enforce some baseline functionality. All devices must have the Windows Server 2016 logo as a baseline requirement. Additionally, in order to be supported with S2D and HCI, some devices need one of the Software-Defined Data Center (SDDC) additional qualifiers (AQs). These AQs indicate that a device has undergone additional testing specifically meant to ensure it will work as expected in an HCI environment. Currently, there are 4 classes of devices that need this additional testing:

  • Systems (Servers)
  • NICs
  • Storage Adapters (SAS/SATA HBAs)
  • Mass Storage Devices (NVMe/SSD/HDD)

While you could assemble a group of different certified devices and build a supported configuration, you’d be missing one critically important step: integrated testing. All of those parts underwent some level of additional testing, but not together as a group. Your disk from Company A was probably tested in a system from OEM B using an HBA from HBA Vendor C. None of that tells you how that same disk will perform in YOUR system from OEM Y that uses an HBA from HBA Vendor Z. It might work fine 99% of the time, but then run into an unexpected interop issue during your heaviest (and usually most important) usage times, and now your business is being impacted.

Solutions testing is at the core of WSSD

Dealing with that integration problem is exactly why we created the WSSD program. WSSD validated solutions have an additional (and critical!) layer of testing beyond what is done for the SDDC AQs I mentioned earlier: whole-solution testing. That means before one of our partners can list a solution in the WSSD Catalog, they must perform additional levels of testing to prove the configuration they’ve designed and built can handle the demands of HCI. That testing involves building one each of their smallest and largest configurations. Those configurations then go through a 4-day, fully-configured stress test that was designed specifically to work these kinds of HCI systems in areas where they might fail. If the solution doesn’t meet the bar at the end of the 4 days, they’ll need to troubleshoot it and start from scratch again. On top of that, major configuration changes in their catalog or the solution structure can trigger a requirement to re-run the tests, so they can be sure major changes do not introduce serious problems that testing would have easily discovered.

The end result of all this additional testing is a solution that both Microsoft and our partners have high confidence in. Taking the results of all the work and putting it into the WSSD catalog is how we ensure customers get the highest quality experience using Microsoft HCI solutions. This post has dealt with the high-level view of why we created the solutions program and how integrated testing helps. I didn’t dive into technical specifics, so some of you may still be skeptical of the value that really gets generated with this additional testing. For those of you that want to see some of what we deal with in terms of integration struggles, please check out part 2.

Windows Server 2016’s Storage Spaces Direct wins CRN Product of the Year http://approjects.co.za/?big=en-us/windows-server/blog/2017/12/19/windows-server-2016s-storage-spaces-direct-wins-crn-product-of-the-year/ Tue, 19 Dec 2017 17:00:49 +0000 This month Windows Server 2016 and its Storage Spaces Direct technology won Product of the Year from Computer Reseller News for the Software-defined Storage category. Winners were chosen through a combination of editorial selection and a survey sent to solution providers to capture real-world satisfaction among partners and their customers.

The post Windows Server 2016’s Storage Spaces Direct wins CRN Product of the Year appeared first on Microsoft Windows Server Blog.

]]>
This month Windows Server 2016 and its Storage Spaces Direct technology won Product of the Year from Computer Reseller News for the Software-defined Storage category.

Winners were chosen through a combination of editorial selection and a survey sent to solution providers to capture real-world satisfaction among partners and their customers. The top five finalists in the 20 Products of the Year technology categories were selected by CRN editors, while solution providers rated those finalists to determine the winner. Solution providers considered a number of factors, including product quality and reliability, technical innovation, compatibility and ease of integration, ability to drive revenue, and fulfillment of market and customer demands.

“Each year CRN calls on the solution provider community to help identify the IT channel’s most useful, well-crafted and innovative offerings,” said Robert Faletra, CEO of The Channel Company. “The resulting list singles out top-performing products that stand at the intersection of technical excellence, outstanding profit potential and high customer demand. Our 2017 Products of the Year list serves as a beacon of much-deserved recognition for stand-out vendors and as an important guide for channel partners to the latest and most rewarding technologies and services on the market.”

Save IT time and resources with one-stop partner solutions

The easiest way for organizations to implement Storage Spaces Direct is via partners that sell Microsoft-validated systems running Windows Server 2016, optimized for hyper-converged compute, storage, and networking. Each partner will sell, deploy, and support the entire solution, saving IT staff time and resources. This month, we are pleased to welcome Inspur to our growing list of partners, which includes DataON, Dell EMC, Fujitsu, HPE, Lenovo, QCT, and Supermicro.

Read more about the CRN award and find a partner solution that fits your needs at www.microsoft.com/wssd.

The post Windows Server 2016’s Storage Spaces Direct wins CRN Product of the Year appeared first on Microsoft Windows Server Blog.

]]>
Webcast: How to leverage Azure for your Windows Server environment http://approjects.co.za/?big=en-us/windows-server/blog/2017/12/05/webcast-how-to-leverage-azure-for-your-windows-server-environment/ http://approjects.co.za/?big=en-us/windows-server/blog/2017/12/05/webcast-how-to-leverage-azure-for-your-windows-server-environment/#comments Tue, 05 Dec 2017 17:00:35 +0000 Hello Windows Server Nation! I spend a lot of time traveling and talking to customers, and I love to hear about all the innovative ways you use Windows Server to run mission-critical workloads...

The post Webcast: How to leverage Azure for your Windows Server environment appeared first on Microsoft Windows Server Blog.

]]>
This blog post was authored by Jeff Woolsey, Principal Program Manager, Microsoft.

Hello Windows Server Nation!

I spend a lot of time traveling and talking to customers, and I love to hear about all the innovative ways you use Windows Server to run mission-critical workloads. Of course, one topic that always comes up is THE CLOUD. Early on, questions about cloud adoption were tentative… “Can I trust the cloud for security? For performance? Is it really cost-effective?”

In the last year, I’ve noticed a major shift.

The questions have changed. Now I regularly hear: “How do I get started? What do I run in the cloud vs. what should I keep on-premises? How do I set up a hybrid environment so I get the best of both worlds? How do I get the right training to evolve my skillset?”

The cloud is real, and organizations around the world are moving forward with a cloud-first strategy. Now’s the time to embrace this change and build your skillset. To answer these questions and more, I’d like to invite you to my webcast on Tuesday, December 12, 2017 at 1 pm Pacific Time: Ways to leverage Azure for your Windows Server Workloads.

In this webinar you’ll learn how to:

  • Integrate on-premises Windows Server environments with hybrid capabilities such as backup and storage.
  • Easily migrate apps and workloads to Windows Server on Azure to gain cloud agility and efficiency.
  • Secure and manage both Windows and Linux applications running on Windows Server in Azure and across your hybrid environment.
  • Use existing Windows Server licenses to save on Azure.

Of course, I’ll show in-depth demos, plus share some great resources to get started with Azure. I hope you’ll join me for this exciting webcast and get ready to add “cloud admin” to your list of IT superpowers.

Register now for the December 12th webcast.

The post Webcast: How to leverage Azure for your Windows Server environment appeared first on Microsoft Windows Server Blog.

]]>
In practice: How customers are using Shielded Virtual Machines to secure data http://approjects.co.za/?big=en-us/windows-server/blog/2017/12/04/in-practice-how-customers-are-using-shielded-virtual-machines-to-secure-data/ Mon, 04 Dec 2017 17:00:46 +0000 You’ve read and heard a lot from Microsoft about the unprecedented security provided by Shielded Virtual Machines in Windows Server 2016, but how is this feature being used by real customers? We decided to round up a few customer stories for you, to illustrate the various real-world benefits being reported by users of Shielded VMs.

The post In practice: How customers are using Shielded Virtual Machines to secure data appeared first on Microsoft Windows Server Blog.

]]>
You’ve read and heard a lot from Microsoft about the unprecedented security provided by Shielded Virtual Machines in Windows Server 2016, but how is this feature being used by real customers? We decided to round up a few customer stories for you, to illustrate the various real-world benefits being reported by users of Shielded VMs in Windows Server 2016.

  • Managed hosting you can trust: The most security-conscious organizations often resist hosted solutions for fear that the hoster will have access to their data. For Rackspace, one of the biggest names in managed hosting, this perception was a sales blocker… until it wasn’t. Using Shielded Virtual Machines in Windows Server 2016, augmented by Microsoft System Center 2016 and Microsoft Operations Management Suite for better security monitoring, Rackspace can move customers into a private cloud with the highest level of security assurance.
  • More security, less cost: Convergent Computing (CCO), a boutique IT consulting company based in San Francisco, likes to use the technologies it recommends to customers. An early adopter of Windows Server 2016, CCO has been pleased with the results. “With Shielded VMs, Host Guardian Service, and software-defined networking, we can cost-effectively give customers the most secure network possible,” says Rand Morimoto, the company’s president. “With previous versions of Windows Server, we could create isolated networks but at a much higher cost, because we had to double every component. With Windows Server 2016, we deliver the same tight security at half the cost.”
  • Stopping the enemy at the gate: While most VM security involves protecting virtual machines from unauthorized access and malicious code, up to now there has been little to prevent a bad actor from copying the VM and running it in an unsecured environment where all its data can be privately exfiltrated. “No one else has an answer to the problem of how to protect your virtual machines from compromised fabric credentials or, heaven forbid, compromised admins,” says Kenny Lowe, head of emerging technologies at Brightsolid, one of the leading datacenter hosting companies in Scotland. The Host Guardian Service (HGS) in Windows Server 2016 protects against this through an attestation service which ensures that only trusted Hyper-V hosts can run your Shielded VMs. This closes the door on security exploits that can occur via the storage system, the network, or even while your VM is being backed up.
  • Reduced regulatory costs: ModusLink Global Solutions helps companies across many industries manage supply chains and logistics. For many customers, ModusLink handles their end-customer credit card data and must comply with ever-changing payment information regulatory requirements. “With Shielded VMs, we’re able to reduce the scope of what needs to be reviewed by PCI auditors, because Shielded VMs encrypt the data,” says Andrew Hamlin, Manager of IT Infrastructure at ModusLink. “The use of Shielded VMs reduces our regulatory compliance costs. We can eliminate outside monitoring services, which delivers a significant savings, and our own lean staff can manage a larger datacenter footprint. By reducing our costs, we can put out more competitive bids, which helps us win more deals.”

The post In practice: How customers are using Shielded Virtual Machines to secure data appeared first on Microsoft Windows Server Blog.

]]>
Announcing support for SATADOM boot drives in Windows Server 2016 http://approjects.co.za/?big=en-us/windows-server/blog/2017/08/30/announcing-support-for-satadom-boot-drives-in-windows-server-2016/ http://approjects.co.za/?big=en-us/windows-server/blog/2017/08/30/announcing-support-for-satadom-boot-drives-in-windows-server-2016/#comments Wed, 30 Aug 2017 16:00:02 +0000 As you are probably aware, Hyper-V was launched way back in Windows Server 2008. It’s been almost a decade of evolution based on customer feedback and most recently our learnings in running Azure. The entire operating system changed because of the hypervisor and many other features were added to support the new norm – applications run on virtual machines today.

The post Announcing support for SATADOM boot drives in Windows Server 2016 appeared first on Microsoft Windows Server Blog.

]]>
This blog post was authored by Scott M. Johnson, Senior Program Manager, Windows Server.

As you are probably aware, Hyper-V was launched way back in Windows Server 2008. It’s been almost a decade of evolution based on customer feedback and most recently our learnings in running Azure. The entire operating system changed because of the hypervisor and many other features were added to support the new norm – applications run on virtual machines today. However, there’s one bit of feedback that we haven’t delivered on… yet.

In a clustered virtual environment, storage is shared between the multiple nodes, usually on a SAN device, which makes local drives on the virtualization host almost unnecessary. You still need a local drive for the host operating system itself. For some customers, not having a local drive at all would be ideal, but they still need a drive to boot from, and the feedback was that SD cards could serve that purpose. However, Windows Server has requirements around endurance, performance, and capacity that SD cards are not able to meet.

Today we are excited to announce support for SATA-connected Disk-on-Module (SATADOM) devices as primary boot drives for Windows Server 2016 and future Long-Term Servicing Channel (LTSC) or Semi-Annual Channel releases.

“SATADOM modules show that they can operate in a high I/O environment like Windows Server and they can offer significant savings in the cost of the boot drives, density and power,” says Erin Chapple, General Manager, Windows Server. “Each server node that uses a SATADOM for the boot drive uses less power and enables higher storage density, which lowers the cost of the solution.”

Flash devices that are connected over a high-speed interface, such as SATA or PCIe, have been supported by Windows Server for many years. In our testing, we have found that not all flash storage devices are created equal. Three main factors are needed to ensure proper boot support:

  • Endurance: Based on data we collected running VM Fleet workloads, we require a minimum endurance of 0.14 drive writes per day (DWPD) for server boot devices. This provides one order of magnitude of headroom above the write rates observed in our endurance data.
  • Performance: To ensure that a boot device can handle the necessary IO load, we require that the system run the Private Cloud Simulator (PCS) test for a minimum of five days without the boot drive causing failures. Flash devices that do not meet this bar can cause large latency spikes that can take down a cluster node or cause Windows processes to slow considerably.
  • Capacity: To ensure enough room for the OS, its updates, the cluster database, and logging over the five-year lifetime of the server, we recommend a minimum capacity of 128 GB. As flash storage degrades, it rewrites memory pages and marks bad cells as unusable, which causes usable capacity to trend steadily lower over the life of the device. Overprovisioning is required to withstand the workload over time.
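The endurance requirement above can be translated into a daily write budget for a given capacity. Here is a rough back-of-the-envelope sketch in Python (the 0.14 DWPD and 128 GB thresholds come from the bullets above; the function names and example devices are illustrative assumptions, not part of any Microsoft tooling):

```python
# Back-of-the-envelope check of a candidate boot device against the
# endurance and capacity bars described above (illustrative only).

MIN_DWPD = 0.14        # minimum drive writes per day required for boot
MIN_CAPACITY_GB = 128  # recommended minimum capacity

def daily_write_budget_gb(capacity_gb: float, dwpd: float = MIN_DWPD) -> float:
    """Writes per day (in GB) implied by a DWPD rating at a given capacity."""
    return capacity_gb * dwpd

def meets_boot_requirements(capacity_gb: float, rated_dwpd: float) -> bool:
    """True if a device clears both the capacity and endurance bars."""
    return capacity_gb >= MIN_CAPACITY_GB and rated_dwpd >= MIN_DWPD

# A 128 GB module at 0.14 DWPD can absorb roughly 17.9 GB of writes per day.
print(round(daily_write_budget_gb(128), 1))   # 17.9
print(meets_boot_requirements(128, 0.3))      # True
print(meets_boot_requirements(64, 1.0))       # False: below minimum capacity
```

Note that this checks only the static ratings; the five-day PCS performance run described above still has to be performed on real hardware.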

Approved SATADOM devices will be identified by the OEM that sells and supports the solution. Many certified flash storage devices can be found on the Windows Server Catalog; however, they will need to go through additional testing and validation by the server manufacturer. OEMs should validate the quality of their SATADOM modules by running the HLK device.storage tests, and must perform a full run of the Private Cloud Simulator (PCS) test in a clustered configuration for a minimum of five days without the boot drive causing failures.

While a SATADOM device that meets the requirements outlined in this article has sufficient endurance and over-provisioning capacity to handle typical page file and cluster database usage, we recommend re-routing the system page file to an alternative location if a customer’s workloads are anticipated to result in heavy swapping, or if the server has not been configured with sufficient system DRAM. Alternatively, if the system is configured with ample system DRAM, the customer can choose to disable the system page file entirely. Further, the DOM module must be connected to an internal SATA port and be tagged as “non-removable”. At some point, we will likely adopt additional specifications, including support for high-write-endurance devices such as 3D XPoint.
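The page-file guidance above boils down to a simple decision: relocate when swapping would wear the boot module, optionally disable when DRAM is ample, otherwise leave it alone. A minimal sketch of that logic (the DRAM thresholds here are assumptions for illustration, not values from this article):

```python
# Decision sketch for page-file placement on a SATADOM boot drive,
# following the guidance above. The DRAM thresholds are assumed
# example values, not official recommendations.

AMPLE_DRAM_GB = 256   # assumed: enough DRAM to consider disabling the page file
LOW_DRAM_GB = 64      # assumed: below this, expect meaningful swapping

def page_file_placement(dram_gb: float, expects_heavy_swapping: bool) -> str:
    """Suggest where the system page file should live."""
    if expects_heavy_swapping or dram_gb < LOW_DRAM_GB:
        return "relocate to a data drive"   # protect SATADOM endurance
    if dram_gb >= AMPLE_DRAM_GB:
        return "optionally disable"         # ample DRAM, little swapping expected
    return "leave on boot drive"            # typical usage fits the write budget

print(page_file_placement(dram_gb=384, expects_heavy_swapping=False))
print(page_file_placement(dram_gb=32, expects_heavy_swapping=True))
```

In practice the relocation itself is done through the usual Windows virtual-memory settings; the sketch only captures the decision criteria.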

It’s also important to note that SATADOM devices should be used only for OS boot; data drives should be Storage Spaces Direct or traditional SAN or NAS storage. We continue to evaluate Secure Digital (SD) cards and USB-connected flash drives, and hope to include support for these types of drives in the future.

If you haven’t already, join our Windows Insiders program so you can access the new preview build for Windows Server Semi-Annual Channel releases and join the conversation in the Tech Community.

The post Announcing support for SATADOM boot drives in Windows Server 2016 appeared first on Microsoft Windows Server Blog.

]]>
Performance Tuning Guidelines for Windows Server 2016 http://approjects.co.za/?big=en-us/windows-server/blog/2017/04/26/performance-tuning-guidelines-for-windows-server-2016/ http://approjects.co.za/?big=en-us/windows-server/blog/2017/04/26/performance-tuning-guidelines-for-windows-server-2016/#comments Wed, 26 Apr 2017 16:00:35 +0000 Today, we are pleased to announce the availability of the Windows Server 2016 Performance Tuning Guide. This updated guide provides a comprehensive collection of technical articles with practical guidance for IT professionals and server administrators responsible for monitoring and tuning Windows Server 2016 across the most common server workloads and scenarios.

The post Performance Tuning Guidelines for Windows Server 2016 appeared first on Microsoft Windows Server Blog.

]]>
Today, we are pleased to announce the availability of the Windows Server 2016 Performance Tuning Guide. This updated guide provides a comprehensive collection of technical articles with practical guidance for IT professionals and server administrators responsible for monitoring and tuning Windows Server 2016 across the most common server workloads and scenarios. With this guidance, administrators can tune server settings in Windows Server 2016 and achieve incremental performance or energy efficiency gains, especially when the nature of the workload varies little over time.

It is important that your tuning changes consider the hardware, the workload, the power budgets, and the performance goals of your server. This guide describes each setting and its potential effect to help you make an informed decision about its relevance to your system, workload, performance, and energy usage goals.

Windows Server 2016 performance and tuning recommendations are split across server hardware and power, by server role and by server sub-system tuning considerations:

Server hardware tuning
  • Hardware performance considerations
  • Hardware power considerations

By server role
  • Active Directory Servers
  • Containers
  • File Servers
  • Hyper-V Servers
  • Remote Desktop Servers
  • Web Servers

By server subsystem
  • Cache and Memory Management
  • Software Defined Networking
  • Storage Spaces Direct
Guidance available for online and offline viewing

The Windows Server Performance Tuning Guide is available to view online, as well as in PDF form for offline consumption.

To download the PDF version, open your browser and navigate to the Windows Server library. Just below the left-hand table of contents, click the button labeled “Download PDF”.


Share your experience with the community

If you want to share your experience, or exchange tips and tricks with other admins, check out the new Windows Server Tech Community.

The post Performance Tuning Guidelines for Windows Server 2016 appeared first on Microsoft Windows Server Blog.

]]>