Virtualization | Microsoft Windows Server Blog

Zero to SDN in under five minutes
Thu, 04 Feb 2016

Deploy a cloud application quickly with the new Microsoft SDN stack

You might have seen this blog post recently published on common data center challenges. In that article, Ravi talked about the challenges surrounding deployment, flexibility, resiliency, and security, and how our Software Defined Networking (SDN) helps you solve those challenges.

In this blog post series, we will go deeper so you'll know how to use Microsoft SDN with Hyper-V to deploy a classic application network topology. Think about how long it takes you to deploy a three-tier web application in your current infrastructure. OK, do you have a figure for it? How long did it take, and how many other people did you need to contact?

This series focuses on a deployment for a lab or POC environment. If you decide to follow along with your own lab setup you’ll interact with the Microsoft network controller, build an overlay software defined network, define security policy, and work with the Software Load Balancer.

  • In Part 1 you’ll be introduced to the SDN stack and the three-tier workload app
  • In Part 2 you’ll learn about the front end web tier and tenant configuration
  • In Part 3 you’ll get into the application tier and the back end data tier

Here’s what you’ll need in your lab environment:

The first step in deploying the cloud application is to install and configure the servers and infrastructure. You will need to install Windows Server 2016 Technical Preview 4 on a minimum of three physical servers. Use the Planning Software Defined Networking TechNet article for guidance in configuring the underlay networking and host networking. The environment I used while writing this post and deploying the three-tier app has the following configuration:

  • Three physical servers each with Dual 2.26 GHz CPUs, 24 GB of memory, 550 GB of storage, two 10Gb Ethernet cards
  • Each host uses a Hyper-V Virtual Switch in a Switch-Embedded Team configuration
  • All hosts are connected to an Active Directory domain named “SDNCloud”
  • Each server is attached to a Management VLAN, and the default gateway is a switch virtual interface (SVI) on a switch
  • The upstream physical switch is configured with the same VLAN tags as the Hyper-V virtual switch, and uses trunk mode so that management and host network traffic can share the same switch ports

Introduction

Enterprise and hosting providers use their IT tool kits to address similar and recurring problems:

  • Deploy new services quickly with enough flexibility to accommodate incremental demand for capacity and performance
  • Maintain availability despite multiple failure modes
  • Ensure security

Windows Server 2016 helps you address these challenges in the application platform itself. The networking technology we'll cover in this blog is the same technology that services an average of more than 1.5 million network requests per second in Microsoft Azure.

The scenario for the series is for a new product that our fictitious firm “Fabrikam” is launching to meet the demands for convenience and self-service in requesting a new passport, renewing an expired passport, or updating a citizen’s personal information. The application is called “Passport Expeditor” and it removes the need for a citizen to go to the passport agency and execute a paper-based process that uses awkward government-speak.

Passport Expeditor is based on a three-tier architecture, which consists of a front-end web tier that presents the interface to the user, an application tier that validates inputs and contains the application logic, and a back-end database tier that stores passport information. The software in each tier runs in a virtual machine and is connected to one or more networks with associated security policies.

Figure 1: Passport Expeditor application architecture

External users will access Fabrikam’s Passport Expeditor cloud application through a hostname registered to an IP address that is routable on the public Internet. In order to handle the thousands of requests Fabrikam expects to see at launch, load balancing services are required and will be provided in the network fabric using Microsoft’s in-box Software Load Balancer (SLB). The SLB will distribute incoming TCP connections among the web-tier nodes, providing both performance and resiliency. To do this, the SLB will monitor the health probes installed on each VM and take any VMs that are down out of rotation until they become healthy again. The SLB can also increase the number of VMs servicing the application during periods of peak load and then scale back down when load decreases.
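To make the load-balancing idea concrete, here is a minimal sketch of how incoming connections might be spread across healthy back-end VMs. The hashing scheme, names, and addresses are purely illustrative and are not the actual SLB Multiplexer algorithm:

```python
import hashlib

def pick_dip(five_tuple, dips, healthy):
    """Map a (src_ip, src_port, dst_ip, dst_port, proto) connection to a healthy DIP.

    VMs whose health probe reports them as down are skipped, so they fall out
    of rotation until they become healthy again, as described above.
    """
    candidates = [d for d in dips if healthy.get(d, False)]
    if not candidates:
        raise RuntimeError("no healthy DIPs behind the VIP")
    # Hash the 5-tuple so a given connection always lands on the same DIP.
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return candidates[int.from_bytes(digest[:4], "big") % len(candidates)]

# Hypothetical web-tier DIPs behind one VIP; 10.1.1.5's probe is failing.
web_tier = ["10.1.1.4", "10.1.1.5", "10.1.1.6"]
health = {"10.1.1.4": True, "10.1.1.5": False, "10.1.1.6": True}

conn = ("203.0.113.9", 50211, "131.107.0.10", 443, "tcp")
dip = pick_dip(conn, web_tier, health)  # never 10.1.1.5 while its probe is down
```

The real multiplexer works at the packet level and keeps flow state, but the core idea is the same: deterministic distribution across only the healthy DIPs.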

Core concepts

Before we dive in, let’s spend a moment talking about some core concepts and technologies we will be using:

  • PowerShell scripting: We will use PowerShell scripts to create the network policy and resources and use the HTTP verbs PUT and GET to inform the Network Controller of this policy
  • Network Controller: The Microsoft Network Controller is the “brains” of the SDN Stack. Network policy is defined through a set of resources modeled using JSON objects and given to the Network Controller through a RESTful API. The Network Controller will then send this policy to the SDN Host Agent running on each Hyper-V Host (server)
  • SDN Host Agent: The Network Controller communicates directly with the SDN Host Agent running on each server. The Host Agent then programs this policy in the Hyper-V Virtual switch.
  • Hyper-V Virtual Switch: Microsoft’s vSwitch is responsible for enforcing network policy (such as VXLAN-based overlay networks, access control lists, address translation rules, etc.) provisioned by the Network Controller.
  • Software Load Balancer: The SLB consists of a multiplexer which advertises a Virtual IP (VIP) address to external clients (using BGP) and distributes connections across a set of Dynamic IP (DIP) addresses assigned to VMs attached to a network.
  • North/South and East/West traffic: These terms refer to where network traffic originates and is destined. North/South indicates the network traffic is going outside of the virtual network or data center. East/West indicates the network traffic is coming from inside the virtual network or within the data center.
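As a rough illustration of the policy model described above, the sketch below builds a JSON resource body for a virtual network and the REST URI it would be PUT to. The REST name (nc.sdncloud.local), URI path, and resource shape are simplified assumptions for illustration; the actual northbound schema has many more fields:

```python
import json

# Assumed value of NetworkControllerRestName for this lab (hypothetical).
REST_NAME = "nc.sdncloud.local"

def resource_uri(resource_type, resource_id):
    """Build an illustrative northbound REST URI for a named resource."""
    return f"https://{REST_NAME}/networking/v1/{resource_type}/{resource_id}"

# A simplified virtual-network resource: policy is modeled as JSON objects.
vnet = {
    "resourceId": "Fabrikam_VNet",
    "properties": {
        "addressSpace": {"addressPrefixes": ["10.1.0.0/16"]},
        "subnets": [
            {"resourceId": "Web-Tier",
             "properties": {"addressPrefix": "10.1.1.0/24"}},
        ],
    },
}

body = json.dumps(vnet)  # a PUT of this body to the URI would define the policy
uri = resource_uri("virtualNetworks", vnet["resourceId"])
```

The Network Controller would then push the resulting policy down to the SDN Host Agent on each Hyper-V host, which programs it into the virtual switch.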

Figure 2: Windows Server 2016 SDN stack

Perform the following steps to configure Windows Server Technical Preview 4 for the scenario:

  1. Install the operating system on the physical server
  2. Enable the Hyper-V Role on each host
  3. Create a Hyper-V Virtual Switch on each host. Be sure to use the same name for each virtual switch on each host and bind it to a network interface
  4. Ensure the virtual switch’s Management virtual network interface (vNIC) is connected to the Management VLAN and has an IP address assigned to it
  5. Verify connectivity via the Management IP address between all servers
  6. Join each host to an Active Directory domain

The system is now ready to receive the SDN stack components and software infrastructure, and to inform the Network Controller about the fabric resources. If you haven’t already retrieved the scripts from GitHub, download them now. All scripts are available on the Microsoft SDN GitHub repository and can be downloaded as a zip file from the link referenced (for more details on Git, please reference this link).

Fabric resource deployment

The Network Controller must be informed of the environment it is responsible for managing by specifying the set of servers, VLANs, and service credentials. These fabric resources will also be the endpoints on which the controller enforces network policies. The resource hierarchy and dependency graph for these fabric resources is shown in Figure 3.

Figure 3: Network controller northbound API fabric resource hierarchy

The variables in the FabricConfig.psd1 file configuration file must be populated with the correct values to match your environment. Insert the appropriate configuration parameter anywhere you see the mark “<<Replace>>”. You will do this for credentials, VLANs, Border Gateway Protocol Autonomous System Numbers (BGP ASNs) and peers, and locations for SDN service VMs.

When customizing the FabricConfig.psd1 file:

  • Ensure the directory specified by the InstallSrcDir variable is shared with Everyone and has Read/Write access.
  • The value of the NetworkControllerRestName variable must be registered in DNS with the value of the floating IP address of the Network Controller specified by the NetworkControllerRestIP parameter.
  • The value of the vSwitchName variable must be the same for all Hyper-V Virtual Switches in each server.
  • The LogicalNetworks array contains the fabric resources which correspond to specific VLANs and IP prefixes in the underlay network. In this post series, we will only be configuring and using:
    • Hyper-V Network Virtualization Provider Address (HNVPA): Used as the underlay network for hosting HNV overlay virtual networks.
    • Management: Used for communication between Network Controller and Hyper-V Hosts (and SDN Host Agent)
    • Virtual IP (VIP): Used as the public (routable) IP prefix through which external users will access the HNV overlay virtual network (e.g. Web-Tier). Routes to the VIPs will be advertised using internal BGP peering between the SLB Multiplexer and BGP Router.
    • The Transit and GREVIP networks are used by the Gateways (not covered in this post series). In the future, the SLB Multiplexer will also connect to the Transit logical network.
  • The Hyper-V host section is an array of NodeNames which must correspond to the physical hosts registered in DNS. This section determines where to place the infrastructure VMs (Network Controller, SLB Multiplexer, etc.).
    • The IP Addresses for the Network Controller VMs (e.g. NC-01) must come from the Management logical network’s IP prefix.
    • The IP Addresses for the Software Load Balancer VMs (e.g. MUX-01) must come from the HNVPA logical network’s IP prefix.
  • The Management and HNVPA logical network prefixes must be routable between each other.
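Before running the deployment script, the placement constraints above (Network Controller VMs on the Management prefix, SLB Multiplexer VMs on the HNVPA prefix) can be sanity-checked. The sketch below does this with hypothetical lab values; the prefixes and addresses are illustrative, not values from FabricConfig.psd1:

```python
import ipaddress

# Hypothetical logical-network prefixes for this lab.
logical_networks = {
    "Management": ipaddress.ip_network("10.184.108.0/24"),
    "HNVPA": ipaddress.ip_network("10.10.56.0/23"),
}

# Infrastructure VM placements: (name, IP, required logical network).
placements = [
    ("NC-01", "10.184.108.11", "Management"),  # Network Controller VM
    ("MUX-01", "10.10.56.20", "HNVPA"),        # SLB Multiplexer VM
]

def check(placements, networks):
    """Return a list of placement errors; empty means the config is consistent."""
    errors = []
    for name, ip, net in placements:
        if ipaddress.ip_address(ip) not in networks[net]:
            errors.append(f"{name}: {ip} is not in the {net} prefix {networks[net]}")
    return errors

problems = check(placements, logical_networks)  # [] when everything lines up
```

A wrong value here is caught in seconds rather than surfacing later as a failed provisioning run.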

Figure 4: Deployment environment

After customizing this file and running the SDNExpress.ps1 script documented in the TechNet article, validate your configuration by testing that the requisite fabric resources, e.g. logical networks, servers, and SLB Multiplexer, are correctly provisioned in the Network Controller by following the steps in the TechNet article. As a first step, you should be able to ping the Network Controller (NetworkControllerRestIP) from any host. You should also verify that you can query resources on the Network Controller using the REST Wrappers Get-NC<ResourceName> (e.g. PS > Get-NCServer) and validate that the output includes provisioningState = succeeded.
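The provisioningState check can be scripted in the same spirit. The sketch below walks a list of fabric resources and flags anything that has not reached Succeeded; the canned responses stand in for what the Get-NC* wrappers (or a raw REST GET) would return:

```python
def unprovisioned(resources):
    """Return the ids of resources whose provisioningState is not 'Succeeded'."""
    return [
        r["resourceId"]
        for r in resources
        if r.get("properties", {}).get("provisioningState") != "Succeeded"
    ]

# Illustrative server resources; Host-03 is still converging.
servers = [
    {"resourceId": "Host-01", "properties": {"provisioningState": "Succeeded"}},
    {"resourceId": "Host-02", "properties": {"provisioningState": "Succeeded"}},
    {"resourceId": "Host-03", "properties": {"provisioningState": "Updating"}},
]

print(unprovisioned(servers))  # prints ['Host-03']: investigate before continuing
```

An empty list for every fabric resource type means the controller has fully accepted the configuration.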

Note: The deployment script creates multi-tenant Gateway VMs. These will not be used in this blog series.

Figure 5: Network controller provisioning 

The final check is to ensure that the load balancers are successfully peering with the BGP router (either a VM with the Routing and Remote Access Service (RRAS) role installed or a Top of Rack (ToR) switch). Border Gateway Protocol (BGP) is used by the SLB to advertise the VIP addresses to external clients and then route the client requests to the correct SLB Multiplexer. In my lab I am using the BGP router in the ToR switch, and the switch validation output is shown below:

Figure 6: Successful BGP peering

Summary and validation

In this blog post, we introduced the Passport Expeditor service, which can be installed as a cloud application using the new Microsoft Software Defined Networking (SDN) stack. We walked through the host and network prerequisites and deployed the underlying SDN infrastructure, including the Network Controller and SLB. The fabric resources deployed will be used as the basis to instantiate and deploy the tenant resources in part 2 of this blog series. The Network Controller REST wrapper scripts can be used to query the fabric resources as shown in the TechNet article here.

In the next blog post: Front-end Web Tier Deployment and Tenant Resources

The SDN fabric is now ready to instantiate and deploy tenant resources. In the next part in this blog series, we will be creating the following tenant resources for the front-end web tier of the three-tier application shown in Figure 1 above:

  1. Access Control Lists
  2. Virtual Subnets
  3. VM Network Interfaces
  4. Public IP

We’d love to hear from you. Please let us know if you have any questions in the comments!

Free Microsoft Virtual Academy Courses on Hyper-V and Microsoft Virtualization for VMware Professionals
Tue, 22 Jan 2013

This is some pretty exciting stuff being brought to you by our Microsoft & VMware virtualization experts Symon Perriman, Jeff Woolsey and Matt McSpirit.  I know that it may be difficult to block out an entire day for this training, but here is what you can do if you can’t make it for the entire day.

Did I mention that the events are FREE? Both of these courses are designed for IT pros who are either new to Windows Server 2012 Hyper-V or have experience with other virtualization technologies like Citrix or VMware.

There is another Jump Start course coming in late February, Microsoft Tools for VMware Integration/Migration Jump Start; the date is TBD. We will make sure to post a link to it when we have a date and registration link.

Hope you can make it and that you find these courses helpful.

Introducing Windows Server “8” Hyper-V Network Virtualization: Enabling Rapid Migration and Workload Isolation in the Cloud
Mon, 16 Apr 2012

We’ve all heard about the agility that server virtualization delivers.  However, our conversations with people in the trenches made it clear that the full potential of virtualization remains frustratingly beyond their grasp.  In particular, the lack of agile networking limits the agility you can achieve at a reasonable cost.

Windows Server “8” is the most cloud-optimized operating system, providing choice and flexibility in configuring private, hybrid, and public cloud solutions. Bill Laing, in his blog post, Windows Server “8” Beta Available Now, outlined some of our key investments, including Hyper-V Network Virtualization. In this blog post, Sandeep Singhal (General Manager of the Windows Networking team) and Ross Ortega (Principal Program Manager from the Windows Networking team) describe some of the issues surrounding cloud adoption and how Hyper-V Network Virtualization in Windows Server “8” addresses these challenges.

We’ve spent the past couple of years talking with customers about why they haven’t yet deployed their workloads to a cloud.  We consistently heard three main issues. First, they want to gradually begin moving individual services to the cloud with a flexible hybrid cloud solution. Second, moving to the cloud is difficult. It’s tedious, time-consuming, manual, and error-prone.  Third, customers express concern about their ability to move to the cloud while preserving isolation from other tenants, be they other business units in the private cloud or competitors in a public cloud.  In the end, whether you’re building your own private clouds or considering using a public cloud provider, you want easy onboarding, flexibility to place your virtual machines anywhere—either inside or outside the cloud, and workload isolation.

Network Agility:  An Unfulfilled Promise
Underlying all of these concerns, customers want the control and flexibility to move their services to the cloud, move them to a different cloud provider, or even move them back to their enterprise datacenter. However, today this is quite labor intensive because cloud hosters require that their customers change the IP addresses of services when those services are moved to a particular cloud environment.  This seems like a minor deployment detail, but it turns out that an IP address is not just some arbitrary number assigned by the networking folks for addressing. The IP address also has real semantic meaning to an enterprise. A multitude of network, security, compliance, and performance policies incorporate and are dependent on the actual IP address of a given service. Moving to the cloud means having to rewrite all these policies. Of course you have to find them all first and then negotiate and coordinate with the different organizations that control those policies.  If you wanted to move to a different cloud provider then that new hoster would assign different IP addresses, requiring yet another policy rewrite. The current situation blocks many customers and scenarios from adopting the cloud.

Customers asked us to have Windows make it appear that their services in the cloud were similar to the services running in their internal datacenters, while adhering to their existing policies and providing isolation from other VMs running in the cloud hosting environment.  When moving to the cloud customers want their data to be as isolated and as safe as if it were running in their own datacenter.

In summary, you demanded the ability to run Any Service on Any Server in Any Cloud.

We took this feedback seriously and designed a new technology called Hyper-V Network Virtualization in Windows Server “8” to provide a scalable, secure multi-tenant solution for those building cloud datacenters and to make it easier for customers to incrementally move their network infrastructure to private, hybrid, or public clouds.  As we will describe later, Hyper-V Network Virtualization builds on existing IETF and IEEE standards, providing interoperability with existing and future network equipment, security appliances, and operational processes.

Hyper-V Network Virtualization:  Applying Server Virtualization to Entire Networks
With traditional server virtualization, each physical host is converted to a virtual machine (VM), which can now run on top of a common physical host.  Each VM has the illusion that it is running on a dedicated piece of hardware, even though all resources—memory, CPU, and hardware peripherals are actually shared.

Network virtualization extends the concept of server virtualization to apply to entire networks.  With network virtualization, each physical network is converted to a virtual network, which can now run on top of a common physical network.  Each virtual network has the illusion that it is running on a dedicated network, even though all resources—IP addresses, switching, and routing—are actually shared.

Hyper-V Network Virtualization allows customers to keep their own internal IP addresses when moving to the cloud while providing isolation from other customers’ VMs – even if those VMs happen to use the exact same IP addresses. We do this by giving each VM two IP addresses. One IP address, the IP address visible in the VM, is relevant in the context of a given tenant’s virtual subnet. Following the IEEE nomenclature, we call this the Customer Address (CA). The other IP address is relevant in the context of the physical network in the cloud datacenter. This is called the Provider Address (PA). This decoupling of tenant and datacenter IP addresses provides many benefits.

The first benefit is that you can move your VMs to the cloud without modifying the VM’s network configuration and without worrying about what else (or who else) is sitting in that datacenter. Your services will continue to just work. In the video demo referenced at the end of this article we used traceroute, a low-level network diagnostic tool, to show how on-premise services were interacting transparently with services that had been moved to the cloud. We highlighted the fact that once the services moved to the cloud, packets simply were now taking an extra hop to get to the cloud datacenter.  The virtual subnet has become a nearly transparent extension of the enterprise’s datacenter. We also created a secure encrypted tunnel to the virtual subnet. The end result is that different customers with the exact same IP address connected to the same virtual switch are isolated.

Imagine a Red VM having IP address 10.1.1.7 and a Blue VM also having 10.1.1.7, as shown above. In this example the 10.1.1.7 IP addresses are CA IP addresses. By assigning these VMs different PA IP addresses (e.g. Blue PA = 192.168.1.10 and Red PA = 192.168.1.11) there is no routing ambiguity. Via policy we restrict the Red VMs to interact only with other Red VMs, and similarly Blue VMs are isolated to the Blue virtual network. The Red VM and the Blue VM, each having a CA of 10.1.1.7, can safely coexist on the same Hyper-V virtual switch and in the same cloud datacenter.
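The Red/Blue example can be expressed as a simple lookup: the hypervisor resolves a (virtual subnet, Customer Address) pair to a Provider Address, so identical CAs never collide on the physical network. This sketch is illustrative only; the real policy store lives in the Hyper-V virtual switch:

```python
# Policy table keyed by (virtual subnet, CA); values are PAs on the underlay.
ca_to_pa = {
    ("Blue", "10.1.1.7"): "192.168.1.10",
    ("Red", "10.1.1.7"): "192.168.1.11",
}

def provider_address(virtual_subnet, ca):
    """Resolve a tenant CA to the PA used on the physical network."""
    try:
        return ca_to_pa[(virtual_subnet, ca)]
    except KeyError:
        # Policy also enforces isolation: with no mapping, traffic is dropped.
        raise PermissionError(f"{virtual_subnet}/{ca} has no policy entry")

# Same CA, different tenants, no ambiguity on the wire.
blue_pa = provider_address("Blue", "10.1.1.7")  # 192.168.1.10
red_pa = provider_address("Red", "10.1.1.7")    # 192.168.1.11
```

Because the virtual subnet is part of the key, the datacenter can host arbitrarily many tenants that all picked 10.1.1.7.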

Second, policy enforcement in the end hosts provides a scalable solution for multi-tenant isolation. We do not need to reconfigure the network infrastructure to isolate tenants from each other. Before Hyper-V Network Virtualization, the common solution was to use VLANs for isolation. However, VLANs have scalability limitations, only supporting a limited number of tenants in a shared datacenter.  In addition to having scalability limitations, VLANs are more suited for static network topologies and not the more dynamic environment in which tenants may continually join and leave the cloud datacenter or tenant workloads may continually be migrated across physical servers for load balancing or capacity management purposes.  VLANs require the reconfiguration of production switches every time a VM needs to be brought up on a new server. Typically, the VM deployment team creates a service ticket to the network operations team to reconfigure the appropriate switches with the relevant VLAN tags. By eliminating this step, Hyper-V Network Virtualization increases the overall operational efficiency of running a datacenter.

Third, by allowing you to preserve your IP addresses when moving to the cloud, Hyper-V Network Virtualization also enables cross-subnet live migration. When we talk about live migration, we mean that any client talking to a service is unaware that the VM hosting the service has moved from one physical host to a different physical host. Previously, cross-subnet live migration was impossible because, by definition, if you move a VM from one subnet to a different subnet its IP address must change. Changing the IP address causes a service interruption. However, if a VM has two IP addresses, then the IP address relevant in the context of the datacenter (Provider Address) can be changed without needing to change the IP address in the VM (Customer Address). Therefore the client talking to the VM via the CA is unaware that the VM has physically moved to a different subnet.

What’s really exciting is that cross-subnet live migration enables new scenarios. Recall our “Any Service, Any Server, Any Cloud” vision. VMs can now run and live migrate anywhere in the datacenter without a service interruption. New datacenter efficiencies can be achieved. For instance, during light-load periods (such as around 3 a.m.), hosters can consolidate active VMs onto a subset of the datacenter and power off other parts of it – all without having to reconfigure the physical network topology. Administrators no longer need to worry about a VM being trapped in one part of the datacenter because its IP address physically restricts where that IP address is valid. Similarly, VM deployment algorithms are free to assign VMs anywhere in the datacenter because the PA address, relevant in the context of the physical datacenter, can be changed independently of the CA address, which is relevant in the context of the virtual network.

With Hyper-V Network Virtualization the virtual machine is totally unaware that its IP address is being virtualized. From the VM’s perspective, all communication is occurring via the CA IP address. Because the VMs are unaware that they are part of a virtual network, any operating system running within a Hyper-V VM (e.g. Windows Server 2008 R2, Windows Server 2003, Linux,  etc.) can be a member of a virtual network. Hyper-V Network Virtualization is completely transparent to the guest OS.

Two Mechanisms for Virtualizing IP Addresses on a Subnet
Customers can deploy Hyper-V Network Virtualization in their existing datacenters using either IP virtualization mechanism without requiring any hardware upgrades or topology changes. We virtualize the CA IP address by using the PA when sending networking traffic between different end hosts.  We use two different mechanisms to virtualize the IP address:  Generic Routing Encapsulation (GRE) and IP Rewrite.  For most environments, GRE should be used for network virtualization, because it provides the most flexibility and performance.  However, IP Rewrite may be appropriate to provide performance and compatibility in some current high-capacity datacenters.

Within the source and destination Hypervisors, packets are associated with a Virtual Subnet ID.   The Virtual Subnet ID allows the hypervisor to differentiate traffic from different virtual subnets that may share the same CA IP address (e.g., differentiating Red 10.1.1.7 from Blue 10.1.1.7).  Using the Virtual Subnet ID, the Hypervisor can apply additional per-tenant policies, such as access controls.

The first IP virtualization mechanism is Generic Routing Encapsulation (GRE), an established IETF standard.  In this case we encapsulate the VM’s packet (using CA IP addresses) inside another packet (using PA IP addresses).  The header of this new packet also contains a copy of the Virtual Subnet ID.  A key advantage of GRE is that because the Virtual Subnet ID is included in the packet, network equipment can apply per-tenant policies on the packets, enabling efficient traffic metering, traffic shaping, and intrusion detection.  Another key advantage of GRE is that all the VMs residing on a given end host can share the same PA because the Virtual Subnet ID can be used to differentiate the various IP addresses from different virtual subnets.  Sharing the PA has a big impact on scalability. The number of IP and MAC addresses that need to be learned by the network infrastructure can be substantially reduced. For instance, if every end host has an average of 20 VMs then the number of IP and MAC addresses that need to be learned by the networking infrastructure is reduced by a factor of 20.  A current drawback of GRE is that the NIC offloads no longer provide the scalability benefit to the end host because the NIC offloads are operating on the outer header and not the inner header. The offloads can be important for high performance environments where a VM requires 10 gigabit bandwidth.  Similarly entropy for datacenter multi-path routing is reduced because the switches, by hashing fields in only the outer packet, will not differentiate traffic coming from different VMs residing on the same end host.
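The key mechanic of GRE encapsulation, carrying the Virtual Subnet ID in the GRE key so the receiving host can demultiplex tenants, can be sketched as follows. The layout assumed here (key-present flag, Transparent Ethernet Bridging payload type, 24-bit VSID plus 8-bit flow ID in the key) follows the NVGRE draft; this is an illustrative header packer, not a full packet builder:

```python
import struct

GRE_KEY_PRESENT = 0x2000  # GRE flags word with the Key bit set
PROTO_TEB = 0x6558        # Transparent Ethernet Bridging: payload is an Ethernet frame

def gre_header(vsid, flow_id=0):
    """Pack an 8-byte GRE header whose 32-bit key carries the Virtual Subnet ID."""
    key = (vsid << 8) | flow_id  # 24-bit VSID in the high bits, 8-bit flow ID low
    return struct.pack("!HHI", GRE_KEY_PRESENT, PROTO_TEB, key)

def vsid_from(header):
    """Recover the Virtual Subnet ID at the receiving host."""
    _, _, key = struct.unpack("!HHI", header)
    return key >> 8

# The inner (CA) frame would be appended after this header inside an outer
# (PA) IP packet; here we only show the key round-tripping the VSID.
hdr = gre_header(vsid=5001)  # e.g. the Blue tenant's virtual subnet
```

Because the VSID rides in the packet itself, NVGRE-aware network equipment can apply per-tenant policies such as metering and shaping, exactly the advantage described above.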

Never fear!  We have a solution for these limitations.

In Windows Server “8” we’ve made working with standards a high priority. Along with key industry thought leaders (Arista, Broadcom, Dell, Emulex, HP, and Intel) we published an informational draft RFC (NVGRE) discussing the use of GRE, an existing IETF standard, as an encapsulation protocol for network virtualization.  Together with server, switch, and NIC partners we have demonstrated broad ecosystem support for Hyper-V Network Virtualization. Once our partners incorporate NVGRE into their products, hosters will get the scalability benefits of GRE without performance loss.  They will also see opportunities to deploy multi-tenant-aware network equipment, including load balancers, firewalls, storage controllers, network monitoring and analysis tools, and other security and performance products.

GRE is the preferred network virtualization approach for most current and future datacenters.  However, some current datacenters may need greater scalability than can be achieved with current generation hardware.  For these environments, Windows Server “8” supports a second IP virtualization mechanism, IP Rewrite.

With IP Rewrite, we rewrite the source and destination CA IP addresses in the packet with the appropriate PA addresses as packets leave the end host. Similarly, when virtual subnet packets enter the end host the PA IP addresses are rewritten with appropriate CA addresses. A key advantage of IP Rewrite is that the packet format is not changed. Existing network hardware offload technologies such as Large Send Offload (LSO) and Virtual Machine Queue (VMQ) work as expected.  These offloads provide significant benefit for network intensive scenarios in a 10 Gigabit Ethernet environment.  In addition, IP Rewrite is fully compatible with existing network equipment, which does not see any new traffic types or formats.  Of course, the Virtual Subnet ID is not transmitted on the network, so that existing network equipment cannot perform per-tenant packet processing.
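The IP Rewrite mechanism can be sketched the same way: instead of wrapping the packet, the end host swaps CA addresses for PA addresses on egress and reverses the swap on ingress, leaving the packet format untouched. Packets are modeled here as plain dicts and the mappings reuse the Red tenant's example values; this is illustrative only:

```python
# Egress policy for the Red tenant: CA -> PA (illustrative values from above).
egress_map = {"10.1.1.7": "192.168.1.11"}
# Ingress reverses the mapping so the VM only ever sees CA addresses.
ingress_map = {pa: ca for ca, pa in egress_map.items()}

def rewrite(packet, mapping):
    """Rewrite source/destination addresses; every other field passes through."""
    out = dict(packet)
    out["src"] = mapping.get(packet["src"], packet["src"])
    out["dst"] = mapping.get(packet["dst"], packet["dst"])
    return out

pkt = {"src": "10.1.1.7", "dst": "10.2.1.9", "payload": b"hello"}
on_wire = rewrite(pkt, egress_map)    # leaves the host with the PA as source
restored = rewrite(on_wire, ingress_map)  # ingress rewrite restores the CA
```

Since only address fields change, hardware offloads like LSO and VMQ continue to operate on an ordinary-looking packet, which is exactly why this mode suits high-capacity 10 GbE datacenters.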

An Incremental Approach to Creating Hybrid Clouds
With Hyper-V Network Virtualization we’ve made it easy to move your subnets to the cloud. However, once in the cloud the next thing you need is for your virtual subnets to interact with each other. For example, the typical 3-tier architecture is composed of a front end tier, business logic tier, and a database tier. You need a way for these virtual subnets (tiers in this example) to communicate as if they were all located in your own datacenter. Hyper-V Network Virtualization allows you to route between your virtual subnets. That is, not only can you bring your virtual subnet to the cloud, you can also bring your entire network topology to the cloud.
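As a sketch of what bringing an entire topology to the cloud means, a tenant's three tiers are simply three virtual subnets in one routing domain, and the virtualization layer routes between them just as on-premises routers would. The VSIDs and address ranges below are made up for illustration:

```python
import ipaddress

# Illustrative tenant topology: one routing domain, three virtual subnets.
virtual_subnets = {
    5001: ipaddress.ip_network("10.1.0.0/24"),  # front-end tier
    5002: ipaddress.ip_network("10.2.0.0/24"),  # business-logic tier
    5003: ipaddress.ip_network("10.3.0.0/24"),  # database tier
}

def subnet_for(ca: str) -> int:
    """Find which virtual subnet (VSID) a customer address belongs to."""
    addr = ipaddress.ip_address(ca)
    for vsid, net in virtual_subnets.items():
        if addr in net:
            return vsid
    raise LookupError(f"{ca} is not in this tenant's topology")

# A front-end VM talking to the database tier crosses virtual subnets;
# the routing between VSID 5001 and 5003 happens inside the tenant's
# routing domain, exactly as it would between VLANs in your own datacenter.
```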

Windows Server “8” also provides a Cloud Cross-Premise connectivity solution that can securely connect your datacenter or private cloud with a public cloud to create a hybrid cloud.  Combining Hyper-V Network Virtualization with Cloud Cross-Premise Connectivity means we have made the cloud a seamless extension of your datacenter.

Internally at Microsoft, we use Hyper-V Network Virtualization in a private cloud deployment using GRE as the IP virtualization mechanism. Here the tenants are the various product groups in the Server and Tools Business unit (STB). We wanted to consolidate our datacenter infrastructure to realize the operational and resource efficiencies of the cloud, while providing our product groups the flexibility they require when deploying their services in a cloud environment.

Conclusion
We’re excited about Hyper-V Network Virtualization because it benefits customers moving to private, hybrid, and public clouds; provides new efficiencies for hosters and administrators running cloud datacenters; and presents new opportunities for our ecosystem partners.  Hyper-V Network Virtualization—combined with other technologies such as Storage Live Migration, simultaneous Live Migration, and Failover Replication—enables complete VM mobility with Windows Server “8.”

To learn more about Hyper-V Network Virtualization watch the demo we gave at //BUILD/. Our demo starts at 13 minutes and 52 seconds.  Our //BUILD talk:  Building secure, scalable multi-tenant clouds using Hyper-V Network Virtualization provides more technical details.  For deployment information, we encourage you to visit our Technet site.

The post Introducing Windows Server “8” Hyper-V Network Virtualization: Enabling Rapid Migration and Workload Isolation in the Cloud appeared first on Microsoft Windows Server Blog.

]]>
http://approjects.co.za/?big=en-us/windows-server/blog/2012/04/16/introducing-windows-server-8-hyper-v-network-virtualization-enabling-rapid-migration-and-workload-isolation-in-the-cloud/feed/ 5
Windows Server 8: Standards-Based Storage Management http://approjects.co.za/?big=en-us/windows-server/blog/2011/10/14/windows-server-8-standards-based-storage-management/ Fri, 14 Oct 2011 11:45:00 +0000 Take a look at a blog post by Jeffrey Snover, Distinguished Engineer and Lead Architect for Windows Server, where he discusses standards-based storage management in the next release of Windows Server, codenamed “Windows Server 8”. Read Jeffrey’s post on the Microsoft Server and Cloud Platform blog.

The post Windows Server 8: Standards-Based Storage Management appeared first on Microsoft Windows Server Blog.

]]>
Storage and Continuous Availability Enhancements in Windows Server 8 http://approjects.co.za/?big=en-us/windows-server/blog/2011/09/20/storage-and-continuous-availability-enhancements-in-windows-server-8/ Tue, 20 Sep 2011 10:10:00 +0000 Check out a blog post by Thomas Pfenning, General Manager of Server and Tools, where he discusses the storage and availability enhancements in the next release of Windows Server, codenamed “Windows Server 8”. Read Thomas’ post on the Microsoft Server and Cloud Platform blog.

The post Storage and Continuous Availability Enhancements in Windows Server 8 appeared first on Microsoft Windows Server Blog.

]]>
Windows Server 8: An Introduction http://approjects.co.za/?big=en-us/windows-server/blog/2011/09/11/windows-server-8-an-introduction/ http://approjects.co.za/?big=en-us/windows-server/blog/2011/09/11/windows-server-8-an-introduction/#comments Sun, 11 Sep 2011 10:27:00 +0000 Take a look at a blog post by Bill Laing, Corporate Vice President for Microsoft’s Server and Cloud business, to get an overview of the next release of Windows Server, codenamed Windows Server 8. Read Bill’s post on the Microsoft Server and Cloud Platform blog.

The post Windows Server 8: An Introduction appeared first on Microsoft Windows Server Blog.

]]>
http://approjects.co.za/?big=en-us/windows-server/blog/2011/09/11/windows-server-8-an-introduction/feed/ 1
The RemoteFX industry ships products today http://approjects.co.za/?big=en-us/windows-server/blog/2011/02/22/the-remotefx-industry-ships-products-today/ Tue, 22 Feb 2011 10:25:00 +0000 Hi, I’m Rob Williams, the Partner Ecosystem Program Manager for RemoteFX at Microsoft.  Today is a big day for Microsoft and our RemoteFX partners.

The post The RemoteFX industry ships products today appeared first on Microsoft Windows Server Blog.

]]>
Hi, I’m Rob Williams, the Partner Ecosystem Program Manager for RemoteFX at Microsoft.  Today is a big day for Microsoft and our RemoteFX partners. Today’s release of RemoteFX with Windows Server 2008 R2 Service Pack 1 (SP1) and Windows 7 SP1 is the culmination of two years’ close collaboration between engineers at Microsoft and engineers in great companies across our industry. This work has allowed us to build a new graphics experience for Virtual Desktop Infrastructure (VDI) customers.  The download for this release will enable millions of existing servers to run RDP 7.1 with RemoteFX and Remote Desktop Services.  Hundreds of millions of Windows 7 Client machines will be enabled to take advantage of the benefits associated with accessing RemoteFX-capable servers.
 
RemoteFX for Remote Desktop Services supports both Remote Desktop Session Hosts (RDSH) and Remote Desktop Virtualization Hosts (RDVH).  A new feature of RemoteFX for VDI that we are particularly excited about is the world’s first virtualized Graphics Processing Unit (GPU) platform for VDI. We have been working closely with Intel, Nvidia and AMD to build this feature.  Intel is blogging about their Xeon processors running RemoteFX and both Nvidia and AMD are releasing their official RemoteFX GPU boards today.  These boards are being incorporated into servers from the world’s leading server OEMs. HP, Dell, IBM and NEC are all ready to release and support their customers’ RemoteFX deployments.
 
In addition to virtualized GPUs in servers, another really exciting part of RemoteFX technology is the move to support a wider range of thin clients, including zero clients.  HP, Wyse, iGEL, and DevonIT are all announcing RemoteFX thin clients today.  Two new companies are entering into the thin client market with zero clients to support RemoteFX: Cloudium is blogging about the release today and ThinLinX is posting videos of their client in action for both RDSH and RDVH.  It is great to see the excitement around these new small but fast devices.
 
In anticipation of today’s release, HP has been designing RemoteFX-capable servers and thin clients.  Today they are releasing a RemoteFX reference architecture and announcing support for RemoteFX on their thin clients.
 
Dell is publishing a RemoteFX solution brief and blogging about running RemoteFX on Dell PowerEdge R710 and PowerEdge M610x blade servers.
 
Wyse has been working closely with us since the very early days of RemoteFX.  I’ve been using my Wyse R class desktop and X class mobile thin client running early versions of RemoteFX to access my VMs for the last year.  Today Wyse is announcing that they are delivering RemoteFX on those platforms and their new Z class.
 
iGEL is announcing a line of Linux thin clients that will all support RemoteFX along with RemoteFX support in their Universal Desktop Converter Software.  If you are going to CeBIT, be sure to stop by their booth to see a live demo.
 
DevonIT is also announcing support for RemoteFX in their new TC5Xc thin client and announcing their upcoming support in DeTOS and a new DevonIT ARM client.
 
Because the new model of remoting in RemoteFX enables very thin and zero clients, we have been working closely with Texas Instruments to build the semiconductors that will power the next generation of RemoteFX zero client devices.  TI’s DM3730 is their first entry in a family of TI processors that will target the RemoteFX zero client market.  You will see clients based on that device coming out this year.  Check out TI’s blog about their entry into this market.
 
All of these great companies are joined by our software partners in announcements. These software partners have also been working closely with us to test and support RemoteFX.
 
Last year, Microsoft and Citrix signed a collaboration agreement for RemoteFX which will enable Citrix to integrate and leverage RemoteFX technologies within its XenDesktop suite of products and HDX.  Citrix is blogging about our release today.  We have also been working with Quest, who announced support for RemoteFX and our SP1 release in Quest vWorkspace.  They are also blogging about today’s release.
 
Ericom announced PowerTerm WebConnect’s Integration with RemoteFX.
 
Riverbed has been testing RemoteFX and showing what their technology can do for RemoteFX customers.  Today they are blogging about the release and have a great video showing Steelhead Appliances accelerating RemoteFX across the WAN.
 
It has been great to work so closely with industry leaders to bring this technology to market.  I can’t wait to hear the feedback from users when you finally get your hands on a virtualized desktop that delivers a true Windows 7 experience to the thinnest of clients.
 
Be sure to read the blog from Tad Brockway, RemoteFX’s Product Unit Manager, for a bit of history of the team and details on how RemoteFX will change the industry.
 
Now that RemoteFX is available I encourage you all to download the SP1 updates for your server and clients and take a look at the great RemoteFX products that are available today.

The post The RemoteFX industry ships products today appeared first on Microsoft Windows Server Blog.

]]>
3 Reasons why Partners Choose Microsoft for Customer Journeys to the Cloud http://approjects.co.za/?big=en-us/windows-server/blog/2011/02/08/3-reasons-why-partners-choose-microsoft-for-customer-journeys-to-the-cloud/ Tue, 08 Feb 2011 15:38:00 +0000 Hello!  I’m Kevin McCuistion, Director of Partner programs in Microsoft’s Server and Cloud marketing team.  My team helps deliver programs like the Hyper-V Cloud Practice Builder, Hyper-V Cloud Service Provider program, and Hyper-V Cloud Fast Track.

The post 3 Reasons why Partners Choose Microsoft for Customer Journeys to the Cloud appeared first on Microsoft Windows Server Blog.

]]>
Hello!  I’m Kevin McCuistion, Director of Partner programs in Microsoft’s Server and Cloud marketing team.  My team helps deliver programs like the Hyper-V Cloud Practice Builder, Hyper-V Cloud Service Provider program, and Hyper-V Cloud Fast Track.  One of the most exciting parts of my job is working with partners to help them design and deploy new Microsoft virtualization and cloud solutions.  As a result, growing numbers of systems integrators, service providers, and resellers are delivering and operating Microsoft-based private cloud engagements. 

Here are 3 reasons why partners choose Microsoft for their customer journeys to the cloud:

1. We provide tools and resources to help partners build expertise – starting with the Virtualization competency, they can grow their practices as we continue to deliver cloud-focused training
2. We provide resources to help registered partners build demand – they can find exclusive resources from the Microsoft Partner Marketing Center to help generate sales leads  
3. We help partners boost revenue and increase value for customers – gold competency partners can enroll in the Management & Virtualization Solution Incentive Program

I’m really excited to report that IDC shows Hyper-V winning almost 21 points of share in the first 11 quarters since it launched (Source: IDC WW Quarterly Server Virtualization Tracker, December 2010). 

We’re also seeing some great examples of partners driving customer success at companies like Urban Lending Solutions.

Partners – I encourage you to take a look at our Hyper-V Cloud partner offerings to see how you can ramp up to help your customers with their move to the cloud.

Thanks!
Kevin

 

The post 3 Reasons why Partners Choose Microsoft for Customer Journeys to the Cloud appeared first on Microsoft Windows Server Blog.

]]>
GET PRE-VALIDATED PRIVATE CLOUD INFRASTRUCTURE WITH HYPER-V CLOUD FAST TRACK http://approjects.co.za/?big=en-us/windows-server/blog/2010/11/08/get-pre-validated-private-cloud-infrastructure-with-hyper-v-cloud-fast-track/ Mon, 08 Nov 2010 07:00:00 +0000 If you’re looking to get started implementing a Microsoft private cloud infrastructure, the Hyper-V Cloud Fast Track program can offer invaluable help by delivering pre-validated reference architectures. The Hyper-V Cloud Fast Track solutions are currently being offered by 6 Microsoft hardware partners, who cover a broad swath of the Windows Server hardware market.

The post GET PRE-VALIDATED PRIVATE CLOUD INFRASTRUCTURE WITH HYPER-V CLOUD FAST TRACK appeared first on Microsoft Windows Server Blog.

]]>
If you’re looking to get started implementing a Microsoft private cloud infrastructure, the Hyper-V Cloud Fast Track program can offer invaluable help by delivering pre-validated reference architectures. The Hyper-V Cloud Fast Track solutions are currently being offered by 6 Microsoft hardware partners, who cover a broad swath of the Windows Server hardware market.

Hit the Fast Track site and you can browse solution briefs from all these partners, including:

Each partner will be enabling their own Hyper-V Cloud Fast Track Web presence, containing information on their Fast Track configurations as well as related offerings and how to get started. Specifically, you’ll learn how each of these partners takes their compute, network, storage and management technologies and combines those with Windows Server 2008 R2 Hyper-V and System Center to provide a comprehensive stack for private cloud infrastructure.   Check back often for updates!

With the announcement of Hyper-V Cloud Fast Track at TechEd Europe today, some of the partners have also blogged about their participation. Check out these blogs from Dell, Hitachi Data Systems, HP and NEC.

If you’d like to know more about Hyper-V Cloud, Patrick O’Rourke writes about the all-up Hyper-V Cloud Program on the Virtualization blog.

The post GET PRE-VALIDATED PRIVATE CLOUD INFRASTRUCTURE WITH HYPER-V CLOUD FAST TRACK appeared first on Microsoft Windows Server Blog.

]]>
Cloud Computing a Catalyst for Server Growth? http://approjects.co.za/?big=en-us/windows-server/blog/2010/09/02/cloud-computing-a-catalyst-for-server-growth/ http://approjects.co.za/?big=en-us/windows-server/blog/2010/09/02/cloud-computing-a-catalyst-for-server-growth/#comments Thu, 02 Sep 2010 11:06:00 +0000 In an interesting piece by Ryan Nichols at Computerworld yesterday: Cloud computing by the numbers: What do all the statistics mean, Nichols summarizes some recent research findings on the market potential of cloud computing, quoting impressive market forecasts from sources such as Gartner ($150 billion by 2013), Merrill Lynch ($160 billion by 2011),and AMI Partners.

The post Cloud Computing a Catalyst for Server Growth? appeared first on Microsoft Windows Server Blog.

]]>
In an interesting piece by Ryan Nichols at Computerworld yesterday: Cloud computing by the numbers: What do all the statistics mean, Nichols summarizes some recent research findings on the market potential of cloud computing, quoting impressive market forecasts from sources such as Gartner ($150 billion by 2013), Merrill Lynch ($160 billion by 2011), and AMI Partners (SMB spend to approach $100 Billion By 2014).

As part of his analysis, he gives a nod to some of the reasons for this growth, with the need for business agility and the proliferation of mobile and social computing being front and center. At the same time, he identifies a couple of “head scratchers,” raising the question: “if virtualization is growing and cloud computing is growing, how can the market for private enterprise servers also be growing?”

It’s a great question, and one that we hear frequently given that we are the only company to provide both a server platform, Windows Server, and a cloud services platform, the Windows Azure platform. How can both markets possibly grow at the same time? And growing they are. As Ryan points out, IDC is seeing strong growth in the server market. Just last week the analyst firm issued its Worldwide Quarterly Server Tracker, which showed that “server unit shipments increased 23.8% year over year in 2Q10… representing the fastest year-over-year quarterly server shipment growth in more than five years.”

While it may seem contradictory at first blush, there are a number of reasons for this and it is actually pretty straightforward. When we talk to customers, the vast majority of them are thinking about cloud computing and looking at how to bring cloud-like capabilities and benefits (cost savings, elastic scalability, self-service, etc.) into their organization. However, they are all in different stages of the process, with very disparate infrastructure and business needs. And for many organizations, a wholesale move to a public cloud service isn’t particularly realistic in the short term, whether it’s due to regulatory requirements, geographic concerns, or the nature of the workloads and data they are hosting.

In addition, there are other organizations that will want to take advantage of the benefits of cloud computing, but also may want to preserve existing infrastructure investments and maintain a level of versatility that can’t be met by public clouds. Enter the notion of “private clouds” and, again, enter Windows Server. We continue to make enhancements to Windows Server to make it easy for customers and partners to use it to build private (and public) cloud services, such as the recent release of System Center Virtual Machine Manager Self-Service Portal 2.0.

Both of these scenarios continue to drive heavy demand for our Windows Server platform. In that same IDC report from last week, Microsoft is highlighted as the server market leader, “with hardware revenue increasing 36.6% and unit shipments increasing 28.2% year over year.” Those are big growth numbers, even with more than 10,000 customers signing up to our Windows Azure platform this year.

So is there still room for enterprise servers in a cloud computing era? Absolutely. The numbers and customers don’t lie. Offering both a server and a services platform with onramps to the cloud is at the heart of our business strategy and a reason why we are seeing such success in both areas. For those organizations that want a highly optimized, scalable environment where we prescribe the hardware and normalize the cost of operations, there’s our services platform, the Windows Azure platform. For those that want the versatility to enable environments of any scale, or need custom hardware configurations and operating models, there’s our server platform, built on Windows Server. And, of course, we have a common application development, identity and management model spanning the two platforms, which doesn’t hurt either.

Curious what others think on this topic? What do you think are the reasons for ongoing growth in the server market?

The post Cloud Computing a Catalyst for Server Growth? appeared first on Microsoft Windows Server Blog.

]]>
http://approjects.co.za/?big=en-us/windows-server/blog/2010/09/02/cloud-computing-a-catalyst-for-server-growth/feed/ 1