Server Performance Advisor (SPA) 3.0
Jeffrey Snover | Microsoft Windows Server Blog | Mon, 11 Mar 2013
http://approjects.co.za/?big=en-us/windows-server/blog/2013/03/11/server-performance-advisor-spa-3-0/

Starting with Windows Server 2008, we have published a server tuning guide with each release, designed to help system administrators and IT professionals get the best performance out of their server deployments.  For Windows Server 2012 we published the Windows Server 2012 Tuning Guide, but this time there is a twist.  This time we harnessed the performance knowledge from the tuning guide and embodied some of it in a newly redesigned Server Performance Advisor (SPA) tool. 

SPA 3.0 helps IT administrators collect metrics to diagnose performance issues on Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008 for up to 100 servers, unobtrusively and without adding software agents or reconfiguring production servers.  It generates comprehensive performance reports, as shown in Figure 1 below, and historical charts with recommendations. 

Introduction

In this post we discuss how SPA works and some of the unique features available at your fingertips once you download the tool. 

Figure 1:  A snapshot of a SPA performance report with two warnings

At a high level, SPA is composed of two parts.  The first is a management console, or dashboard, where the user picks which servers to collect data from, the corresponding role of each server, how long to collect data, and how often collections happen.  The console has a set of requirements listed on the download page. 

The second part of SPA is the Advisor Packs, or “APs”.  APs contain a set of performance rules.  An AP serves two purposes: first, it defines what data gets collected from the server when that AP is instantiated; second, its rules are used for assessing the server’s behavior.  For example, if data from a server shows more than a 10% packet retransmit rate for any network adapter, and system counters show a lot of send activity on that adapter, then a warning is logged in the report.
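To make the retransmit example concrete, here is a minimal sketch of how such an AP rule might evaluate collected counters. The 10% threshold and "busy adapter" condition come from the example above; the data model, field names, and the one-million-byte activity floor are illustrative assumptions, not SPA's actual schema (real APs are written in T-SQL).

```python
# Hypothetical AP rule evaluation, mirroring the retransmit example above.
RETRANSMIT_THRESHOLD = 0.10    # warn above a 10% retransmit rate
MIN_SEND_ACTIVITY = 1_000_000  # assumed floor: only warn for busy adapters (bytes)

def evaluate_retransmit_rule(adapters):
    """Return a warning entry for each busy adapter above the threshold."""
    warnings = []
    for nic in adapters:
        rate = nic["packets_retransmitted"] / max(nic["packets_sent"], 1)
        if rate > RETRANSMIT_THRESHOLD and nic["bytes_sent"] >= MIN_SEND_ACTIVITY:
            warnings.append(f"{nic['name']}: {rate:.0%} retransmit rate on a busy adapter")
    return warnings

counters = [
    {"name": "NIC1", "packets_sent": 10_000, "packets_retransmitted": 1_500,
     "bytes_sent": 5_000_000},   # 15% retransmits on a busy adapter -> warn
    {"name": "NIC2", "packets_sent": 10_000, "packets_retransmitted": 50,
     "bytes_sent": 8_000_000},   # 0.5% retransmits -> healthy
]
print(evaluate_retransmit_rule(counters))
```

Run against these sample counters, only NIC1 produces a warning.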

How it Works

Now let’s step back and give you a visual representation of the end-to-end process so you have a better understanding of the role each SPA component plays and how they interact with each other.  The different steps in the flow are numbered 1 through 6, with each step described in greater detail below.

Figure 2:  SPA workflow collecting data from remote servers

1. Setting up the data collection sessions

The user installs SPA and chooses which APs they want, depending on the target server role.  SPA ships with some built-in APs like the Core OS AP, the IIS AP, and the Hyper-V host AP.  The Core OS AP is a generic AP covering fundamentals like I/O and resource utilization, while the other two are role specific.  The APs are imported into SPA to define what data is collected and, later, to assess the servers’ performance and generate the report.  Users can choose and run multiple APs in a single data collection session.

2. Starting data collection on the target servers

The user defines how long they want to collect data for and chooses between a one-time collection and collecting data at regular intervals.  Keep in mind that the longer the collection, the more data there will be to process and send back to the console machine.  The console then sends a data collection request to the target servers over the Performance Logs and Alerts (PLA) service (also used by tools like Perfmon), and SPA starts collecting the data.
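The session shape described here (target servers, chosen APs, run length, optional recurrence) can be modeled in a few lines. This is purely an illustration of the configuration surface; the field names and class are assumptions, not SPA's internal format.

```python
# Toy model of a data collection session: one-time vs. recurring runs.
from dataclasses import dataclass
from datetime import timedelta
from typing import Optional

@dataclass
class CollectionSession:
    servers: list
    advisor_packs: list
    duration: timedelta                    # length of one collection run
    interval: Optional[timedelta] = None   # None means a one-time collection

    def runs_per_day(self) -> int:
        """Runs fired in 24 hours (1 for a one-time collection)."""
        if self.interval is None:
            return 1
        return int(timedelta(days=1) / self.interval)

session = CollectionSession(
    servers=["SRV01", "SRV02"],
    advisor_packs=["CoreOS", "Hyper-V"],
    duration=timedelta(minutes=5),
    interval=timedelta(hours=6),
)
print(session.runs_per_day())  # 4 runs per day at a 6-hour interval
```

A recurring 5-minute run every 6 hours yields four runs per day; dropping `interval` models the on-demand case.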

3. Data collection on servers

Each target server receives a request from the console machine to start collecting a predefined set of data.  SPA collects data from Event Tracing for Windows (ETW) events, Windows Management Instrumentation (WMI), performance counters, configuration files, and registry keys. 
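A collection run therefore yields readings from several distinct source types that must be kept attributable to their origin. The sketch below is a hypothetical merge step, with invented counter and registry names, showing one way readings could be grouped per source in a single snapshot.

```python
# Illustration only: merge readings from the source types listed above,
# keyed by the source each value came from. All names are hypothetical.
def merge_sources(*readings):
    """Each reading is a (source, values) pair; values accumulate per source."""
    snapshot = {}
    for source, values in readings:
        snapshot.setdefault(source, {}).update(values)
    return snapshot

snapshot = merge_sources(
    ("perf_counter", {r"\Processor(_Total)\% Processor Time": 42.0}),
    ("registry", {r"HKLM\Example\Tcpip\SomeSetting": 65535}),   # invented key
    ("etw", {"DiskWriteLatencyMs": 7.3}),
)
print(sorted(snapshot))  # ['etw', 'perf_counter', 'registry']
```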

4. Saving the data for post processing

Each server writes the performance data to a predefined file share.  We expect administrators to specify a file share on the console machine to avoid disk impact on the target servers, but they can also choose where to create the file share from within SPA’s menu options.

5. Storing the data for generating a report

When the console pulls the data from the file share, it stores it in a SQL database.  Because the data is stored in a database, SPA can provide historical charts that help with trending performance behaviors over a period of hours or days.  Users can also delete older reports from the database. 
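The two database operations described here, trending a metric over time and pruning older reports, can be sketched with plain SQL. The example below uses SQLite as a stand-in for the console's SQL database, and the table layout is an invented simplification, not SPA's real schema.

```python
# Stand-in for the console's report database (SQLite instead of SQL Server).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE reports (server TEXT, collected_at TEXT, "
           "metric TEXT, value REAL)")
db.executemany("INSERT INTO reports VALUES (?, ?, ?, ?)", [
    ("SRV01", "2013-02-01", "disk_write_ms", 3.0),
    ("SRV01", "2013-03-01", "disk_write_ms", 4.0),
    ("SRV01", "2013-03-02", "disk_write_ms", 9.5),
])

# Historical trend: average of one metric per day for one server.
trend = db.execute(
    "SELECT collected_at, AVG(value) FROM reports "
    "WHERE server = ? AND metric = ? "
    "GROUP BY collected_at ORDER BY collected_at",
    ("SRV01", "disk_write_ms")).fetchall()

# Retention: delete reports older than a cutoff date.
db.execute("DELETE FROM reports WHERE collected_at < ?", ("2013-03-01",))
remaining = db.execute("SELECT COUNT(*) FROM reports").fetchone()[0]
print(len(trend), remaining)  # 3 trend points computed, 2 rows kept
```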

6. Generating the performance report

SPA analyzes the results based on the AP rules.  It summarizes the findings in a report, identifies issues along with possible mitigations, and lets the administrator decide whether to make changes. NOTE: The expectation is that this tool is used by experienced administrators who understand the intent of an implementation and are knowledgeable enough to determine whether a suggested mitigation is appropriate for the target system.

SPA Features

Now that you have an idea of how the process works end to end, let’s shift our focus to what users can expect from SPA 3.0 in terms of installation, data collection, and report viewing.

  • Zero agent installation on the server: To use SPA, you just need to install it on a console machine meeting the requirements.  SPA can collect performance data locally or from remote servers.  Because SPA uses PLA for remote data collection, a user can point SPA at a remote server and start collecting performance data immediately (while the workload is running).  A user can also run data collections on multiple servers in parallel.  Of course, the console machine needs the right authentication and open ports to ensure success.  NOTE: The collection overhead is typically minimal, with no impact to most workloads.  The exceptions are extremely low-latency workloads, where the collection can alter a workload’s behavior.
  • Extensible, scriptable APs: The APs are where all the performance knowledge is embodied and captured.  The APs define what data is captured and what gets used to assess the server’s performance.  Because APs are written in T-SQL, composing one is simple. We encourage you to check out the AP development guide, which can help you write a custom AP to expedite diagnosing performance issues. 
  • Multiple data sources for a cohesive view of system performance: Windows has a very rich set of instrumentation points, and APs can take advantage of this.  The built-in APs certainly do!  SPA collects data from the different sources mentioned previously, which makes it a very powerful tool.  It correlates all of this data, draws causality between the data points, and presents the user with a rolled-up view of how their system is behaving, along with actionable recommendations addressing any reported issues.  The recommendations provide good insight into key performance metrics like latency, scalability, and throughput.
  • Side-by-side comparison of performance and server configuration data: This new feature in SPA 3.0 allows users to compare data collected at two different points in time for the same server, data collected at two different points in time for different servers, or data collected for two different servers at the same time.  The ability to compare how a server was behaving before a certain date, or before applying a certain patch, is very useful in narrowing down the point in time at which a change happened and correlating it with a drop in performance.  The report uses triangles with a yellow exclamation mark to represent warnings and check marks inside green circles to indicate no action is necessary. 

Figure 3:  Side-by-side comparison report for the same machine at two different points in time
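At its core, a comparison like the one in Figure 3 is a diff of two configuration snapshots. The sketch below is a minimal, assumed model of that idea; the setting names are invented examples, not SPA's report fields.

```python
# Toy version of the side-by-side comparison: flag settings whose values
# differ between two snapshots of the same server.
def compare_snapshots(before, after):
    """Return {setting: (old, new)} for settings present in both snapshots."""
    return {k: (before[k], after[k])
            for k in before.keys() & after.keys()
            if before[k] != after[k]}

before = {"MaxWorkerThreads": 512, "PowerPlan": "Balanced", "RSS": "Enabled"}
after  = {"MaxWorkerThreads": 512, "PowerPlan": "High performance", "RSS": "Enabled"}
print(compare_snapshots(before, after))
# {'PowerPlan': ('Balanced', 'High performance')}
```

Only the changed setting surfaces, which is exactly what makes the before/after view useful for correlating a configuration change with a performance drop.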

  • Charting and historical trending: Charting the performance characteristics and metrics of a server over a specified period of time helps users recognize patterns and anomalies associated with specific days of the week when a certain activity takes place.  For example, an administrator may have a scheduled indexing task kicking in and impacting system performance.  In this case it can increase disk latency because of the disk I/O writes incurred when the indexing service kicks in and spins the media.  The following figures show some of the capabilities built into SPA 3.0 for charting and trending data.

Figure 4:  Historic trending chart showing performance metrics over time

Figure 5:  Trend for the minimum file write throughput achieved over a period of 7 days
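The weekly-pattern analysis behind charts like Figures 4 and 5 amounts to grouping a metric by weekday. Here is a hedged sketch with invented sample values, where a Sunday indexing job inflates disk write latency:

```python
# Group a latency metric by weekday to expose a weekly indexing pattern.
from collections import defaultdict
from datetime import date

samples = [                     # (date, disk write latency in ms) — invented
    (date(2013, 3, 3), 4.0),    # Sunday: indexing night in this example
    (date(2013, 3, 4), 1.25),   # Monday
    (date(2013, 3, 10), 6.0),   # Sunday
    (date(2013, 3, 11), 1.75),  # Monday
]

by_weekday = defaultdict(list)
for d, latency in samples:
    by_weekday[d.strftime("%A")].append(latency)

averages = {day: sum(v) / len(v) for day, v in by_weekday.items()}
print(averages["Sunday"], averages["Monday"])  # 5.0 1.5
```

The Sunday average stands out against the weekday baseline, which is the anomaly a trending chart makes visible at a glance.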

  • Built-in APs for key server roles like Hyper-V and IIS: To help users get started with SPA 3.0, we provide three built-in APs as part of the download package.  The first is the Core OS AP, which focuses on basic resource characteristics like CPU utilization, network traffic, memory consumption, and storage-related events.  The second is the IIS AP, which has Web-server-specific rules, like the top 10 URLs accessed and some of the common configuration parameters in IIS.  The third is the Hyper-V host AP, which focuses on a server hosting multiple virtual machines and provides virtualization-specific diagnostics for performance issues. 
  • Configurable sampling intervals and durations for collecting data: Some users want the ability to collect data on demand, while others would like a predefined collection interval.  The latter can be helpful if you are trying to isolate a performance behavior that doesn’t happen on a regular basis and are trying to catch it by setting up frequent data collections for a specified period of time.  There are also users who want to kick off a data collection manually, especially if they just installed an update or patch, or just did a hardware or firmware upgrade in their environment, and want to quantify the performance impact before and after using the side-by-side comparison view.
  • PowerShell script support: System administrators can write scripts to invoke SPA cmdlets and schedule periodic remote data collections on target servers within certain time intervals.  They can also query the database for information about which APs were run and the target servers with associated SPA reports.  The SPA manual has more details and syntax for the supported PowerShell cmdlets. 
  • Configurable thresholds inside of APs: The built-in APs ship with a predefined set of thresholds for all the rules, but given that different workloads and different customers have different Service Level Agreements (SLAs) and different alarm points, users have the flexibility to set their own thresholds in the APs to better suit their environment and workload.

Figure 6:  Users can get more information about a rule in the Details view
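The configurable-threshold idea is simple precedence: a user override wins over the rule's shipped default. The sketch below illustrates that with invented rule names and values; it is not SPA's actual threshold store.

```python
# Threshold resolution: a user-supplied override beats the shipped default.
DEFAULT_THRESHOLDS = {"cpu_utilization_pct": 80, "disk_latency_ms": 15}

def effective_threshold(rule, overrides):
    """Return the threshold a rule should use for this environment."""
    return overrides.get(rule, DEFAULT_THRESHOLDS[rule])

overrides = {"disk_latency_ms": 25}   # this workload's SLA tolerates slower disks
print(effective_threshold("disk_latency_ms", overrides))     # 25 (overridden)
print(effective_threshold("cpu_utilization_pct", overrides)) # 80 (default)
```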

Conclusion

In this post we introduced the redesigned Server Performance Advisor 3.0.  We walked you through a high-level overview of how the different SPA components interact and what role each plays.  We also shared some of the exciting new features and capabilities available with SPA.  We hope you enjoyed this post, and we invite you to download and try out the latest bits from the SPA MSDN download page.  Please share your experiences and send us feedback at spafb@microsoft.com.

The post Server Performance Advisor (SPA) 3.0 appeared first on Microsoft Windows Server Blog.

WS-Management ISO/IEC Standard
Jeffrey Snover | Microsoft Windows Server Blog | Tue, 26 Feb 2013
http://approjects.co.za/?big=en-us/windows-server/blog/2013/02/26/ws-management-isoiec-standard/

Evolution of standards-based management in Windows

In a world where management has shifted from managing one server to managing many complex, heterogeneous servers and clouds, standards-based management—long supported by Microsoft—has become essential. We were one of the founding members of the Distributed Management Task Force (DMTF), and we shipped the first, and richest, Common Information Model Object Manager (CIMOM), known as Windows Management Instrumentation (WMI). In 2005, Microsoft, along with 12 other companies, submitted WS-Management for DMTF standardization. Since then, the specification has been improved, stabilized, and implemented widely by the industry. Today, the specification reached its highest level of maturity as it became an ISO (International Organization for Standardization)/IEC (International Electrotechnical Commission) international standard.

Windows Remote Management (WinRM), Microsoft’s implementation of WS-Management, has been included with Windows since Windows XP. Today, all versions of Windows, both client and server from XP forward, support WS-Management through WinRM. System Center uses WS-Management to remotely manage systems, including both Windows and Linux (System Center Cross Platform). Windows PowerShell uses WS-Management for remote shell access.

Standards-based management was essential to making Windows Server 2012 the best Cloud OS. WS-Management provides remote access for managing Windows resources by using CIM + WS-Management. While WMI has served our customers and partners well, the true promise of standards-based management was only realized by completing our WS-Management implementation, WinRM, and making it the default remote management protocol for Windows. In Windows today, Windows PowerShell remoting is built on WS-Management. Additionally, WMI’s default remote protocol is no longer DCOM, but WinRM.

WS-Management as a management protocol

WS-Man was developed to enable remote management of systems over a firewall-friendly protocol such as HTTP, while utilizing existing tools and investments in SOAP.  With the 1.0 and 1.1 releases, WS-Man has been the preferred protocol for desktop and mobile system management as part of the DASH initiative and a recommended protocol for server systems management as part of the SMASH initiative.  Hardware from different vendors in the market today has support for DASH and SMASH and can be managed by Windows and System Center products.

The Web Services for Management (WS-Management) specification describes a Simple Object Access Protocol (SOAP)-based protocol for managing systems such as PCs, servers, devices, and other remotely manageable entities. The WS-Management protocol identifies a core set of web service specifications and usage requirements that expose a common set of operations central to all systems management. These include the ability to do the following:

  • Get, put (update), create, and delete individual resource instances, such as settings and dynamic values
  • Enumerate the contents of containers and collections, such as large tables and logs
  • Subscribe to events emitted by managed resources
  • Execute specific management methods with strongly typed input and output parameters
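Each of the operations above travels as a SOAP envelope whose headers name the action and the target resource. As a rough illustration of that shape, the sketch below assembles a minimal WS-Management "Get" request using the standard SOAP, WS-Addressing, and WS-Management namespace URIs; real WinRM requests carry additional headers (MessageID, ReplyTo, selectors, and so on), so treat this as a sketch of the envelope's skeleton, not a complete wire message.

```python
# Skeleton of a WS-Management transfer/Get request envelope.
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"
WSA = "http://schemas.xmlsoap.org/ws/2004/08/addressing"
WSMAN = "http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd"

env = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(env, f"{{{SOAP}}}Header")
ET.SubElement(header, f"{{{WSA}}}Action").text = \
    "http://schemas.xmlsoap.org/ws/2004/09/transfer/Get"   # the "get" operation
ET.SubElement(header, f"{{{WSMAN}}}ResourceURI").text = \
    "http://schemas.microsoft.com/wbem/wsman/1/config"     # target resource
ET.SubElement(env, f"{{{SOAP}}}Body")                      # empty body for a Get

xml = ET.tostring(env, encoding="unicode")
print("transfer/Get" in xml)  # True
```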

WS-Management now an ISO/IEC standard

The International Organization for Standardization (ISO) is an international standard-setting body composed of representatives from various national standards organizations. This body ensures that products and technologies that reach ISO standardization are of the highest quality, meeting international demands and requirements. ISO standards gain governmental and broader industry support and adoption.

The International Electrotechnical Commission (IEC) is a non-profit, non-governmental international standards organization that prepares and publishes International Standards for all electrical, electronic and related technologies – collectively known as “electrotechnology”.

We are pleased to report that on January 30, 2013, Web Services for Management (WS-Management, or WS-Man) was adopted as an international ISO/IEC standard. With WS-Man now an international standard, expect to see a wider range of products that are manageable using WS-Man.  Imagine being able to manage all types of devices in your datacenter using a consistent set of tools, practices, and skills.  This helps simplify the datacenter, lowers the cost of both adoption and ongoing management of systems, and makes related skill sets more valuable in the marketplace.

As an ISO/IEC standard, WS-Management is uniquely positioned to play a key role in streamlining the IT world as more devices and solutions adopt it as the standard protocol for management.  The approval of WS-Management as an ISO/IEC standard is further evidence of the global interest in standards-based management of systems, applications, and devices.

Microsoft makes it easier for the rest of the industry to adopt standards-based management

While Windows Server 2012 is the best Cloud OS, supporting the latest ISO/IEC standards such as WS-Management, it must interoperate with many devices and technologies in a predictable and standard fashion. To address this, and to help the industry adopt and embrace standards-based management, Microsoft has designed and implemented OMI (Open Management Infrastructure), a small and scalable CIMOM that implements CIM and WS-Management. We contributed OMI as an open source project to The Open Group in August 2010.

The public availability of OMI means that you can now easily compile and implement a standards-based management service into any device or platform from a free open-source package, by using the WS-Management ISO/IEC standard protocol and CIM. Our goals are (1) to remove obstacles that stand in the way of implementing standards-based management, so that every device in the world can be managed in a clear, consistent, coherent way; and (2), to nurture a rich ecosystem of standards-based management products.

Further Reading

The following is an architectural overview of the WS-Management stack:

WS-Management Stack

The WS-Management 1.1 ISO/IEC specification can be found here.

The main WS-Management spec is composed of the following specifications available on the DMTF web site:

The post WS-Management ISO/IEC Standard appeared first on Microsoft Windows Server Blog.

Software Defined Networking, Enabled in Windows Server 2012 and System Center 2012 SP1, Virtual Machine Manager
Jeffrey Snover | Microsoft Windows Server Blog | Wed, 22 Aug 2012
http://approjects.co.za/?big=en-us/windows-server/blog/2012/08/22/software-defined-networking-enabled-in-windows-server-2012-and-system-center-2012-sp1-virtual-machine-manager/

Unlocking Network Flexibility, Efficiency, and Multi-tenancy for the Cloud 

We are very excited about the promise of Software Defined Networking (SDN) for enabling automation, flexibility, and reliability in the multi-tenant cloud.  Traditionally, the control plane of networking has been proprietary, resulting in datacenter environments that are unable to respond effectively to the dynamically changing needs of today’s cloud workloads.  By enabling network control via software, we give customers the ability to configure and reconfigure their networks to match the changing requirements of their workloads, without compromising the multi-tenant isolation and performance that would be expected from traditional networking.

Windows Server 2012 and System Center 2012 SP1, Virtual Machine Manager (VMM) enable you to take advantage of the power of SDN in your datacenter.  Our integrated solution provides unparalleled automation, flexibility, and control.  The solution supports scalability for even the most mission-critical deployments.  At the same time, we provide a standards-based and open platform that is supported by a rich partner ecosystem.  Best of all, everything you need to deploy SDN is built right into these products, so you do not need to acquire separate management tools or product licenses.

Of course, these attributes of our SDN solution did not come about by accident.  Windows Server 2012 builds on our years of experience running massive datacenters for properties such as Hotmail, Bing, and Windows Azure.  This foundation of experience is why we can confidently say that Windows Server 2012 is the first operating system specifically built for the Cloud – for enabling the public, private, and hybrid cloud.

In this post, we introduce Software-Defined Networking and talk about its origins within our own datacenters.  We then discuss how Windows Server 2012 and VMM deliver an end-to-end SDN solution and how partners are extending the solution.  We then discuss our own experience using SDN and how you can get started deploying this exciting technology today.

What Is Software-Defined Networking (SDN)?
Traditionally, networks were defined by their physical topology: how the servers, switches, and routers were cabled together.  That meant that once you built out your network, changes were costly and complex.  This type of networking is simply not compatible with the notion of a lights-out datacenter or a cloud environment that needs the flexibility to support varying workload demands.

With Software Defined Networking (SDN), software can dynamically configure the network, allowing it to adapt to changing needs.  An SDN solution can accomplish several things:

  • Create virtual networks that run on top of the physical network.  In a multi-tenant cloud, a virtual network might represent a tenant’s network topology, complete with the tenant’s own IP addresses, subnets, and even routing topology.  Through SDN, virtual networks can be created dynamically, and they can support VM mobility throughout the datacenter while preserving the logical network abstraction.
  • Control traffic flow within the datacenter.  Some classes of traffic may need forwarding to a particular appliance (or VM) for security analysis or monitoring.  You may need to create bandwidth guarantees or enforce bandwidth caps on particular workloads.   Through SDN, you can create these policies and dynamically change them according to the needs of your workloads.
  • Create integrated policies that span the physical and virtual networks.  Through SDN, you can ensure that your physical network and endpoints handle traffic similarly.  For example, you may want to deploy common security profiles, or you may want to share monitoring and metering infrastructure across both physical and virtual switches.

In summary, SDN is about being able to configure end hosts and physical network elements, dynamically adjust policies for how traffic flows through the network, and create virtual network abstractions that support real-time VM instantiation and migration throughout the datacenter.  This definition of SDN is, in fact, broader than the definition currently used by many industry players, who focus only on configuration of physical network elements.  Our broader SDN definition includes programmability of end hosts, enabling end-to-end software control in the datacenter.  Our definition also supports real-time changes in response to VM placement and migration.  As we will see below, the integration of VM management and network control is important for facilitating automation and reliability in large-scale datacenters.

Origins of Software Defined Networking
As mentioned above, we at Microsoft have years of experience running massive datacenters for properties such as Bing, Hotmail, and Windows Azure.  This experience taught us several important principles about datacenter network design:

  • Automation is critical:  We have found that the vast majority of network outages arise because of human error.  Networks need to be configured and managed in an autonomous fashion.
  • Multi-tenancy demands network flexibility:  In environments such as Windows Azure, customers expect to have easy ways to on-ramp their workloads.  They don’t want to change IP addresses or other network settings in order to move to the cloud.  The cloud needs to be able to give each tenant the illusion of a dedicated network, even though it is shared by multiple tenants.  Interestingly, we have found the need for multi-tenancy even in single-use datacenters.  For example, we often need to run a production SharePoint environment as well as a test SharePoint deployment simultaneously within the same datacenter.  As much as possible, our test deployment needs to mirror the production deployment, but it is critical for the test deployment to use its own Active Directory and DNS infrastructure.  Of course, we don’t want to deploy physically separate servers for the production and test environments—that would be unreasonably expensive!
  • Centralized control drives simplicity and reliability:  In our experience, virtual machine placement needs to be driven from a central management entity that understands workload needs, hardware capacity, and virtual networks.  This manager drives policies to the end hosts and, therefore, is also best positioned to coordinate the network changes required to support that VM placement.  This approach reduces the possibility of policy inconsistency in the network, reduces delays associated with propagating SDN policies, and simplifies configuration and management.

In fact, based on this datacenter experience, our colleagues in Microsoft Research published seminal work defining new ways to create virtual and physical networks.  This effort heavily influenced our approach to SDN in Windows Azure and Windows Server and in fact, was the foundation for much of the SDN work being done across the industry.

An End-to-End Solution in Windows Server 2012 and System Center 2012 SP1, Virtual Machine Manager
Windows Server 2012 and VMM provide an end-to-end SDN solution for public, private, and hybrid clouds.  By building all the pieces as part of a solution—the hypervisor, the SDN control surface on the end host, and the management software—we ensure a set of seamless experiences for datacenter administrators.  All of the solution components work together to provide the most scalable and flexible platform for the cloud.

Our SDN approach consists of several different capabilities.

Hyper-V Network Virtualization delivers network flexibility for the cloud by providing the ability to create multi-tenant virtual networks on a shared physical network.  Each tenant gets a complete virtual network, including multiple virtual subnets and virtual routing.  (Some network virtualization solutions out there assume the tenant only has a single subnet!)  On each host, Hyper-V uses dynamically updatable SDN policies to associate a tenant network and properly direct traffic to the destination. The SDN policy also determines which VMs these tenant VMs are allowed to communicate with, providing the requisite isolation.  As a result, Hyper-V Network Virtualization allows tenant workloads to be placed anywhere in the physical datacenter.  Tenant networks can even use private IP addresses (which might overlap with addresses used by other tenants), allowing tenants to rapidly migrate their existing workloads to the cloud by bringing their own IP addresses.  In fact, Windows Server 2012 supports interoperable cross-premise connectivity, so you can seamlessly link your subnets in the public cloud back to your local network.
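The mechanics behind "overlapping tenant IPs on a shared fabric" can be sketched as a tenant-scoped lookup table: in Hyper-V Network Virtualization's terms, each tenant's customer addresses (CAs) map to provider addresses (PAs) on physical hosts. The table below is a toy policy with invented tenants and addresses, showing why two tenants can reuse the same IP without colliding.

```python
# Toy virtualization policy: (tenant, customer address) -> provider address.
POLICY = {
    ("Contoso", "10.0.0.5"): "192.168.1.10",   # Contoso's VM on host A
    ("Fabrikam", "10.0.0.5"): "192.168.1.22",  # same CA, different tenant, host B
}

def route(tenant, customer_address):
    """Resolve a tenant-scoped CA to the PA of the host currently running it."""
    key = (tenant, customer_address)
    if key not in POLICY:
        raise LookupError(f"{tenant} has no route to {customer_address}")
    return POLICY[key]

print(route("Contoso", "10.0.0.5"))   # 192.168.1.10
print(route("Fabrikam", "10.0.0.5"))  # 192.168.1.22, isolated from Contoso
```

Because every lookup is keyed by tenant, a tenant can never resolve (or reach) another tenant's addresses, which is the isolation property described above.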

VMM plays a key role in automating configuration of SDN policies for Hyper-V Network Virtualization.  In VMM, you define and create tenant virtual networks as needed.  Note that because these networks are defined entirely in software, no reconfiguration of the physical network is needed.  VMM takes care of placing VM workloads and applying the necessary SDN policies to the hosts to create those virtual networks.  By applying VM placement decision and the SDN policy updates together, VMM provides a high degree of automation and centralized control, in keeping with our datacenter experience.  In addition, this integrated control plane speeds up policy distribution, reducing downtime and enabling more flexible VM placement and optimization.

Our SDN solution is further enabled through rich traffic control policies on the Hyper-V virtual switch.  On a per-VM basis, you can configure security policies that limit the types of traffic (and destinations).  You can reserve bandwidth for particular VMs, ensuring that mission-critical services can always access necessary network capacity.  You can even apply bandwidth caps, allowing you to avoid traffic starvation or enforce a variety of charging models.  What’s more, these network control policies are dynamic, so they can be adjusted in real time.
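The reservation-plus-cap policy described here can be sketched as a simple clamp: a VM's allocation on the link is held between a guaranteed floor and an enforced ceiling. This is an invented simplification of the idea, not how the Hyper-V switch scheduler is actually implemented.

```python
# Clamp a VM's bandwidth allocation to its [reservation, cap] policy window.
def shape(requested_mbps, policy):
    """Grant at most the cap, but never less than the reservation."""
    granted = min(requested_mbps, policy["cap_mbps"])
    return max(granted, policy["reservation_mbps"])

sql_vm = {"reservation_mbps": 500, "cap_mbps": 2000}  # invented example policy
print(shape(100, sql_vm))   # 500  — the floor guarantees capacity
print(shape(5000, sql_vm))  # 2000 — the cap prevents starving other tenants
print(shape(1000, sql_vm))  # 1000 — inside the window, demand is honored
```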

VMM allows customers to unify the individual virtual switches on each Hyper-V host in the datacenter into a distributed logical switch that is dynamically programmed with SDN traffic control policies.  For example, you can define a profile for a set of VMs.  That profile might include the security and bandwidth controls that should be applied.  As it brings VMs up, VMM automatically programs the host virtual switch with the appropriate profile.  The profile moves from host to host as the VM is migrated.  The administrator is essentially defining a single logical datacenter switch, with VMM automating deployment of per-host and per-VM policies, ensuring consistency of SDN policies, and (as we have seen before) providing central control.

With Windows Server 2012, we are excited to introduce the Hyper-V Extensible Switch.  The switch provides a platform through which our partners can extend SDN policies within the switch.  In fact, one of the most common use cases for this extensibility is to integrate the virtual switch with the rest of the physical network infrastructure. A unique aspect of this extensibility is that multiple partners can extend the switch at the same time.  For example, InMon has built an extension that allows traffic monitoring to be done on the Hyper-V switch in the same way it is done on physical switches.  Another partner, NEC, has integrated the Hyper-V switch with their OpenFlow controller.  The NEC OpenFlow controller defines exactly how traffic from the source VM to the destination VM should be routed through the network; the NEC solution is completely compatible with Hyper-V Network Virtualization, which defines the origin and destination VMs within the virtual network.  The NEC solution allows for easy configuration of virtual appliances such as load balancers, intrusion detection systems, and network monitoring solutions.

VMM handles the lifecycle and configuration of Hyper-V switch extensions.  In fact, these switch extensions essentially become part of the SDN language that VMM speaks to Hyper-V.  As VMs migrate across the datacenter, VMM and Hyper-V ensure that state information associated with the switch extension is also migrated to the new host.  VMM ensures that the destination host has the switch extensions required by the guest VM or tenant network.  This level of seamless extensibility is unique to the Hyper-V / System Center SDN solution.

Of course, our end-to-end solution recognizes that Hyper-V hosts are not the only components of a datacenter network.  VMM is able to dynamically provision key network elements such as load balancers, site-to-site VPNs, and Hyper-V Network Virtualization gateways.  At the end of the day, SDN is about end-to-end automation, flexibility, and control throughout the data center.

Built for Partners, Built with Partners
Our SDN solution is, from the ground up, designed with partners in mind.  It is open and flexible, allowing partners to offer value-added capabilities.  Moreover, the SDN solution supports a close relationship between software and hardware.  Even though it is software-driven, SDN needs to take advantage of capabilities provided by network cards, switches, and routers.

We disagree with many in the industry who say that SDN should “commoditize” the network infrastructure.   In our view, SDN should provide the automation, flexibility, and control to let you easily take advantage of the capabilities of the infrastructure.  In fact, SDN should create new innovation opportunities for network hardware.  Customers can only benefit from new innovations across their datacenter.

Within our SDN solution, we have already touched on how partners can build extensions for the Hyper-V Extensible Switch.  In fact, multiple extensions can co-exist in the hypervisor switch, and they can all work in tandem with our other SDN elements, Hyper-V Network Virtualization and rich traffic control policies.  We support our partners with certification tests, interoperability plug fests, development tools, and close engineering support.

This spirit of partner cooperation is evident throughout our SDN solution.  Hyper-V Network Virtualization builds on IETF standard protocols (Generic Routing Encapsulation, or GRE), and together with partners from a variety of network silicon and switch manufacturers, we have published guidance on how GRE enables network virtualization.  This standards-based approach means that network cards and network switches can support and accelerate tenant logical network traffic.  In fact, our design includes tenant ID information in the packet, enabling network equipment to do tenant-specific accounting, policy control, or advanced processing.
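To make the tenant-ID-in-the-packet idea concrete, here is a minimal Python sketch of the 8-byte GRE header that NVGRE defines: the Key Present bit is set, the protocol type is 0x6558 (Transparent Ethernet Bridging), and the 32-bit key field carries the 24-bit Virtual Subnet ID plus an 8-bit FlowID. The helper function name is ours; in practice the Hyper-V virtual switch performs the encapsulation, not user code.

```python
import struct

def nvgre_gre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header NVGRE places after the outer IP header.

    Per the NVGRE specification: the Key Present (K) bit is set, the
    protocol type is 0x6558 (Transparent Ethernet Bridging), and the
    32-bit key field carries the 24-bit Virtual Subnet ID (the tenant
    network ID) followed by an 8-bit FlowID.
    """
    if not 0 <= vsid < 1 << 24:
        raise ValueError("VSID must fit in 24 bits")
    flags = 0x2000                      # only the K (Key Present) bit set
    proto = 0x6558                      # Transparent Ethernet Bridging
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags, proto, key)

hdr = nvgre_gre_header(vsid=5001, flow_id=0)
```

Because the VSID sits at a fixed offset in every encapsulated frame, network gear can read it back out to do the per-tenant accounting and policy control described above.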

Our open approach has enabled several partners to announce solutions that work with Hyper-V Network Virtualization.  For example, nAppliance and IVO Networks have both announced plans for network appliances that provide Hyper-V Network Virtualization gateways.  Stay tuned for more partner announcements shortly!

In addition, VMM supports pluggable interfaces, allowing it to configure arbitrary load balancers, site-to-site VPNs, and network virtualization gateways.  VMM can therefore interoperate with other SDN solutions or network control servers.

Production Tested, Production Used
As we have discussed, our SDN solution grew out of our experience running large datacenters and cloud services.  Needless to say, we have been able to validate our solution in these environments.  Within Microsoft, we are running a large, multi-tenant private cloud used for several mission-critical workloads.  Hyper-V Network Virtualization is in active use within that cloud today, orchestrating communication for tens of thousands of VMs running on over 4000 physical hosts.  As you might expect, our SDN algorithms and protocols are in active use within the Windows Azure datacenter, supporting our Infrastructure as a Service (IaaS) offering that was announced last month.

At the same time, throughout the development of Windows Server 2012 and VMM, we have been working closely with enterprise and hoster customers to validate and deploy our SDN solution.  Many of these customers are already running production services using these cloud components.

Ready for You – and Built Right In!
Software Defined Networking (SDN) holds the promise to revolutionize cloud networks by bringing a new level of automation, flexibility, and control to the network environment.  As we have seen, our SDN approach takes an integrated, end-to-end view which brings simplicity, performance, and reliability to the solution.  At the same time, we have built our solution using open standards and pluggable interfaces.  Just as important, we have been developing a rich partner ecosystem, so you can integrate best-of-breed capabilities across the industry with Windows Server 2012 and System Center 2012 SP1, Virtual Machine Manager.

Most important, all of the tools you need to deploy Software Defined Networking are built right in to Windows Server 2012 and System Center 2012 SP1, Virtual Machine Manager.  You do not need to buy separate management tools or acquire separate product editions.  Windows Server 2012 and System Center 2012 SP1, Virtual Machine Manager deliver the best value for public, private, and hybrid clouds.

With the Release to Manufacturing (RTM) and impending launch of Windows Server 2012, our SDN solution is ready for you to deploy.  We are looking forward to hearing about your experiences building public, private, and hybrid clouds on our SDN platform.

Appendix: Some Resources for Getting Started with SDN
Windows Server® 2012 Hyper-V Network Virtualization Survival Guide helps you get started deploying SDN and network virtualization in your datacenter.

The Hyper-V Network Virtualization Overview gives you a technical overview of the feature and how it works.

The Internet RFC titled NVGRE: Network Virtualization using Generic Routing Encapsulation gives you the details behind the packet encapsulation format Hyper-V network virtualization uses for virtualizing network traffic.

The Hyper-V Extensible Switch article gives you an architectural overview about Hyper-V switch extensions.  You can also learn about Writing Hyper-V Switch Extensions.

The blog article about Cloud Datacenter Network Architecture describes how you can put everything together in order to build a cloud that uses SDN.

Sandeep K. Singhal, GM, Windows Networking

Vijay Tewari, Principal Group Program Manager, System Center Virtual Machine Manager

The post Software Defined Networking, Enabled in Windows Server 2012 and System Center 2012 SP1, Virtual Machine Manager appeared first on Microsoft Windows Server Blog.

Download the beta release of Windows Server 2012 Essentials today! http://approjects.co.za/?big=en-us/windows-server/blog/2012/07/11/download-the-beta-release-of-windows-server-2012-essentials-today/ http://approjects.co.za/?big=en-us/windows-server/blog/2012/07/11/download-the-beta-release-of-windows-server-2012-essentials-today/#comments Wed, 11 Jul 2012 08:19:00 +0000 What an exciting time to be part of the Windows Server team! Earlier this week we announced the RTM and general availability of Windows Server 2012 in conjunction with the Windows 8 team’s announcement of their dates.

The post Download the beta release of Windows Server 2012 Essentials today! appeared first on Microsoft Windows Server Blog.

What an exciting time to be part of the Windows Server team! Earlier this week we announced the RTM and general availability of Windows Server 2012 in conjunction with the Windows 8 team’s announcement of their dates. Since then we’ve seen a steady stream of exciting news coming out of Toronto, where the 2012 Worldwide Partner Conference is being held. Today I’m happy to host Joe Nalewabau from the Windows Server Essentials team to make yet another exciting announcement.  By now you should be picking up on some recurring themes that keep showing up in these blogs:

  • We spent a lot of time listening to our partners and customers. 
  • We focused on simplicity and flexibility.
  • Users are more productive – they can do what they want with fewer steps.
  • Our partners have more ways to deploy than ever before – Windows Server 2012 Essentials is a perfect example of that.
  • Our focus on partners and customers allowed us to work across groups effectively to reduce the seams and deliver a coherent and comprehensive solution.
  • We love our partners and customers and can’t wait for you to deploy Windows Server 2012 and enjoy the release that you’ve all been asking us for.

Hi, I’m Joe Nalewabau, Group Program Manager on the Windows Server Essentials team, and today I’m excited to introduce the beta for Windows Server 2012 Essentials (Essentials 2012).

The beta is a significant engineering milestone for the team. We’d obviously like to get as much feedback on the product as possible; you can try the beta and give feedback through the Windows Server 2012 Essentials Beta forum.  We are working hard to deliver Essentials 2012 this year, so your feedback on the beta will be critical to us over the next few weeks as we work toward a release candidate and an eventual RTM.

As David Fabritius mentioned in his post last week, Essentials 2012 represents a significant milestone for the product. We have made some changes to the way that we think about the first-server market (SMBs, home offices, etc.) and the products that we offer in this space, based on feedback from our customers and partners. This post provides some high-level insight into the engineering strategy we followed as we built Essentials 2012. We will follow up with additional blog posts containing deeper information about specific features in the coming weeks.

From an engineering perspective, we planned Essentials 2012 around four core principles:

  • Simplicity and flexibility for customers and partners
  • Better together with Windows Server 2012 and Windows 8
  • Increased device support
  • Continued integration with Cloud Services

Simplicity and flexibility for customers and partners

Historically, the engineering team has developed and supported a number of solution products based on Windows Server. The current in-market products developed and supported by our team include: Windows Small Business Server (SBS) 2011 Standard, Windows SBS 2011 Essentials, Windows Home Server (WHS) 2011, and Windows Storage Server 2008 R2 Essentials. We also support previous versions of SBS Standard and WHS.

These products are not targeted at traditional IT Pros. We spend a lot of time creating simple and integrated experiences that will work for non-IT Pros, with the help of our broader partner ecosystem of OEMs, value-added resellers, and the Small Business Solution Specialist Community.

We approached simplicity and flexibility for customers in Essentials 2012 in a number of ways:

  • Simplified product line-up. After considerable debate and feedback from our customers and partners we decided to simplify the overall product line-up to a single product. During this simplification process, we decided to bring together as much core functionality from our other products as possible in Essentials 2012 (e.g., media features from Home Server and Storage Server Essentials). This simplification, along with the flexibility described later, will enable partners to design and deploy the best solution for customers based on their specific business needs.
  • Simplified moving past 25 users. One of the major pieces of feedback about SBS 2011 Essentials was that once a customer grew beyond the 25-user limit, they had to migrate to Windows Server Standard. After the migration, key SBS-specific features that they had come to depend on (e.g., client backup, Remote Web Access) were no longer available. We wanted to address this issue in Essentials 2012, so we now allow customers to do an in-place upgrade to Windows Server 2012 Standard. After the upgrade, customers are running Windows Server 2012 Standard without any of the licensing limitations of Essentials 2012, and the majority of Essentials 2012 functionality continues to operate and is fully supported for up to 75 users and 75 devices. (Note that while there are no restrictions placed on the number of users/devices that can be added to a Windows Server 2012 Standard environment, there are maximum supportability limits for the Essentials 2012 features.)
  • Flexibility for customers to choose how they want to consume email (on-premises, hosted, or cloud). A major area of flexibility for Essentials 2012 was providing partners and customers with the choice of where they wanted their email service to be located. In SBS 2011 Standard, email was installed and always assumed to be on premises. In SBS 2011 Essentials, we had an add-in for Office 365 connectivity, but no integration was possible with an existing Exchange Server running locally on a second server. In Essentials 2012, you will be able to choose where email services reside from the following choices:
    1. On-Premises. Essentials 2012 will integrate with an on-premises Exchange server running on a second server, which can be either physical or virtual.
    2. Office 365. If customers have an Office 365 account they can choose to use this for their email.
    3. Hosted Exchange. Hosted Exchange providers can offer add-ins to Essentials 2012 that will allow customers to select this option. We know that there are many different types of hosted email providers. While we have focused on hosted Exchange providers, we engineered the product to be email-service agnostic, which allows non-Exchange-based email providers to be integrated through this mechanism (note that this specific feature is not available in the beta).

Better together with Windows Server 2012 and Windows 8

Windows Server 2012 enables an amazing number of scenarios and key technologies for customers. In Essentials 2012 we looked through the huge number of Windows Server features and chose specific ones to deeply integrate. I’d like to call out a few major technologies or processes from Windows Server 2012 and Windows 8 that we have integrated:

  • Storage Spaces. Storage Spaces offers a number of compelling scenarios for first-server environments, including easy capacity expansion and resiliency against physical disk failures using commodity disk hardware. The ability to simply add a disk drive and increase capacity has long been a request from customers and partners, and in Essentials 2012 we have integrated Storage Spaces through wizards and alerts to make sure it is simple and easy to use.
  • File History. File History is a new Windows 8 technology that stores changes made to files on your client machine so that you can easily find and restore previous versions. In Essentials 2012, we have made it simple to configure Windows 8 clients to turn File History on and point the File History folder to the Essentials 2012 server. This is a great experience for Windows 8 clients: the capability is turned on for them, and they get the added safety of having their File History stored on the server.
  • Application Compatibility. In the past, several SBS customers reported not being able to get support from Line of Business (LOB) application providers as SBS was not listed as a supported OS even though SBS was built on a supported Windows Server operating system.  We have worked hard to ensure that Essentials 2012 is a part of the overall Windows Server 2012 Application Logo Certification program. Applications that pass the Windows Server 2012 Application Logo Certification requirements will also meet the requirements of working on Essentials 2012.  We also significantly expanded the Essentials 2012 application compatibility testing environment.  These efforts should allow ISVs to offer much better support statements going forward for Essentials 2012.

Of course, customers also get a whole range of Windows Server 2012 technologies for free which makes the release even more compelling.

Increased device support

Another area of focus for the team was around extending our level of support for devices. We know that customers using our existing products have multiple devices and they want to access information and/or control their server from these devices. In Essentials 2012 we have expanded our device support in a number of different ways:

  • Remote Web Access (RWA). RWA is an existing feature that many of our customers love. In Essentials 2012, we made a number of improvements, one of the biggest being that RWA works well on touch-first devices, including the iPad and Windows 8-based touch devices. RWA also supports media streaming from the server, and we have improved access to files and folders on the server.
  • Native Windows 8 Metro application. We are building a Windows 8 Metro application for accessing Essentials 2012 servers. The existing client LaunchPad will continue to be available for Windows 8, but we wanted to build a native Windows 8 application that lets people quickly and easily access and control their server. We are very excited about this application because it enables some very cool scenarios, especially for people who are traveling and need to access files, folders, or media from their server. This is our first client application that supports an offline mode, another request from customers. In addition, we implemented many of the standard Windows 8 interfaces in this application, which enables a range of new scenarios natively from Windows 8, e.g., simple uploading and searching of files on Essentials 2012.
  • Updated Windows Phone application. We have updated the existing Windows Phone 7 application to work with Essentials 2012 servers – including the ability to access files and folders on the server (this functionality was not available in the previous version).
  • Web services for extensibility. This is more of a developer-facing feature, but we are very excited about the possibilities it opens up. Essentials 2012 has a set of web services that allow developers to write a new set of client applications for the server. As an implementation note, we use these services inside the Windows 8 Metro and Windows Phone applications. Developers can now write different applications, gadgets, etc., that interact with an Essentials 2012 server.

Continued integration with Cloud Services

Another major focus for us is continuing to integrate with cloud services. Based on research and feedback from our customers we know that many people are looking for ways to integrate with cloud services and we wanted to ensure that Essentials 2012 had great integration with Microsoft’s offerings:

  • Office 365 Integration. In SBS 2011 Essentials, we had deep integration with Office 365 through the Office 365 Integration Module. We have integrated this module directly into Essentials 2012 and updated the support to display more information about Office 365 as well as update our functionality, e.g., bulk importing of Office 365 accounts into Essentials 2012. Office 365 is completely optional – this is an option that people can choose as an email service when they configure their server.
  • Microsoft Online Backup Service. Essentials 2012 has integration with the Microsoft Online Backup Service which makes it simple for customers to register their server and do online backups of it. This provides an additional layer of protection above the existing Server backup mechanisms.

Essentials 2012 has a rich SDK that allows customers and partners to integrate additional services into the server. We made sure that existing add-ins for SBS 2011 Essentials and WHS 2011 continue to run in Essentials 2012.

Summary

We are excited about Essentials 2012 and thrilled to be able to get the beta in your hands. The engineering team is eagerly looking forward to hearing your feedback which will help make Essentials 2012 a great release.

The Windows Server 2012 Information Experience http://approjects.co.za/?big=en-us/windows-server/blog/2012/07/02/the-windows-server-2012-information-experience/ http://approjects.co.za/?big=en-us/windows-server/blog/2012/07/02/the-windows-server-2012-information-experience/#comments Mon, 02 Jul 2012 09:20:00 +0000 I recently had a conversation where someone was talking about the difference between people vested in their community and those that aren’t.

The post The Windows Server 2012 Information Experience appeared first on Microsoft Windows Server Blog.

I recently had a conversation where someone was talking about the difference between people vested in their community and those who aren’t.  He said that when you walk by and see some trash on the ground, someone vested in their community will stop, pick it up, and throw it in the trash, whereas someone else won’t.  He pointed out that communities with many vested people tended to be clean and crime-free, whereas communities where people didn’t contribute got worse and worse, and the people in them suffered.  The point he was making was that active participation in a community is an enlightened form of self-interest.  You invest a little, and in doing so you establish a norm for others who follow your example, and step by step things get better and better.

Each of you belongs to the Windows community.  You can choose to be vested in the community or not.  Many of you have already chosen to participate in the community and have made Windows one of the most robust communities out there.  I’m particularly excited by the Windows PowerShell Survival Guide (http://social.technet.microsoft.com/wiki/contents/articles/183.windows-powershell-survival-guide-en-us.aspx), a rich, curated offering of information about Windows PowerShell.

But the reality is that getting the benefits of that community is not as easy as it could be, and participation could be higher.  In today’s blog, Kathy Watanabe, Senior Director of the Server and Cloud Division Information Experience team, describes some of the innovative thinking and tools that we are delivering in Windows Server 2012 to help the community help itself.  You can do your job much better by leveraging the wisdom and knowledge of the community, so take some time to learn how to use these tools.  But don’t stop there: start, or increase, your participation in the community. It has never been easier.

With Windows Server 2012, the Information Experience team rethought how we deliver a great information experience to our customers. It is based on a few primary principles:

  • Content aggregation. We integrated links to our content and to content created by the community (more on that later). This provides rich, broad, and diverse product and scenario guidance, where you will find answers as you plan, deploy, and operate Windows Server 2012 in your enterprise. One of the platforms that illustrates this concept of content aggregation (or curation) is the PowerShell Script Explorer, which provides curated access to scripts.
  • Community. The idea of a community—you—is central to shaping Windows Server 2012 guidance. We created the information experience based on your feedback, and we depend on you to extend it.
  • Solutions and scenarios. We provide these in the IT Pro space (in the TechNet library) and in the developer space (on Windows Server Development). These scenarios and solutions are continually updated, to reflect your evolving network needs.

Content Aggregation (Curation)
There are several types of content aggregation offerings. We describe two in this section:

  • A tool called Microsoft Script Explorer
  • TechNet Wiki-based “survival guides”

Microsoft Script Explorer
One important piece of our content aggregation offering is a tool called Microsoft Script Explorer.  Microsoft Script Explorer for Windows PowerShell helps scripters find Windows PowerShell scripts, snippets, modules, and how-to guidance in online repositories such as the TechNet Script Center Repository and PoshCode, as well as in local or network file systems and the Bing Search Repository.

Leveraging the Concepts of a Semantic Web for Curation
A semantic web takes the information that is typically hidden inside product documentation, blog posts, support forums, and so forth, and enables that information to be defined in a predictable manner – which, of course, enables discovery of that information. A great example of this is a PowerShell script. Scripts are typically just text hidden inside web pages; search engines such as Bing, Google, and Yahoo can’t distinguish whether you are looking for a page containing the words “PowerShell script” or for pages that actually contain valid PowerShell scripts.

The illustration below shows that PowerShell scripts can be stored in a number of different places: local file systems, network shares, web sites, online forums, and script repositories. When scripts exist inside a web page, such as a blog or a threaded discussion, we believe HTML5 microdata will enable us to include additional metadata that describes the specific parts of the page that contain scripts, along with each script’s name and purpose, to better enable search. For repositories such as TechNet and PoshCode, we believe OData provides a great programmatic means of accessing them.
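As a sketch of what that markup could look like, a blog post might wrap a script in HTML5 microdata so a crawler can tell a page that merely mentions PowerShell from one that actually carries a script. The vocabulary URL and property names below are purely illustrative, not a published schema:

```html
<!-- Hypothetical microdata vocabulary; itemscope/itemtype/itemprop are standard HTML5 -->
<div itemscope itemtype="https://example.org/vocab/PowerShellScript">
  <span itemprop="name">Get-StaleComputerAccounts</span>
  <span itemprop="description">Lists AD computer accounts inactive for 90 days.</span>
  <pre itemprop="scriptBody"><code>
Get-ADComputer -Filter * -Properties LastLogonDate |
    Where-Object { $_.LastLogonDate -lt (Get-Date).AddDays(-90) }
  </code></pre>
</div>
```

A search engine that understands the vocabulary can then index the `scriptBody` property as an actual script rather than as ordinary page text.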

The PowerShell Script Explorer surfaces content from major repositories, blog sites, and forums by smartly aggregating (or curating) that content. The illustration below shows the high-level design of Script Explorer. The dotted line in the middle marks the divide between code and repositories that run inside a corporate network and those outside it. One of the most interesting aspects of Script Explorer is something we have called the Aggregation Service. This service is responsible for helping aggregate scripts from different sources based on your requirements. The service can take content from any number of repositories, regardless of what protocol they use or what format the scripts are exposed in, and then expose the aggregated data as an OData-based feed.

As you can see in the illustration, there are typically two instances of the Aggregation Service running. The first (on the right-hand side) runs on Windows Azure and is responsible for aggregating feeds from different Internet-based repositories such as TechNet or PoshCode. The second (on the left-hand side) runs alongside Script Explorer and is responsible for retrieving scripts from the external aggregation service as well as from internal resources such as your local file system and any corporate repositories you want to stand up.

The Aggregation Service includes a standard way to write new providers, enabling you to search alternative sources of scripts and have them exposed directly inside Script Explorer simply by changing the configuration file. Furthermore, this idea is extensible: you can create your own provider to use your favorite search engine, or to use a schema other than the one used by PowerShell Script Explorer. For more information on creating a new repository or a new provider, take a look at the sample posted on CodePlex.
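The provider model can be sketched as follows. This is not Script Explorer's actual interface (the class and method names are invented for illustration): each repository type implements one small search contract, and the aggregation layer merges whatever providers are configured into a single result feed, regardless of backend.

```python
from dataclasses import dataclass
from typing import Iterable, List, Protocol

@dataclass
class ScriptResult:
    name: str
    source: str          # which repository the hit came from
    body: str

class ScriptProvider(Protocol):
    """Hypothetical provider contract: one implementation per repository type."""
    def search(self, query: str) -> Iterable[ScriptResult]: ...

class InMemoryProvider:
    """Stands in for a local-folder, network-share, or OData-backed source."""
    def __init__(self, source: str, scripts: List[ScriptResult]):
        self.source, self._scripts = source, scripts
    def search(self, query: str) -> Iterable[ScriptResult]:
        q = query.lower()
        return [s for s in self._scripts if q in s.name.lower()]

def aggregate(providers: Iterable[ScriptProvider], query: str) -> List[ScriptResult]:
    """Merge every provider's hits into one feed, whatever the backend."""
    results: List[ScriptResult] = []
    for provider in providers:
        results.extend(provider.search(query))
    return results

local = InMemoryProvider("local", [ScriptResult("Restart-Service.ps1", "local", "...")])
repo = InMemoryProvider("technet", [ScriptResult("restart-vm.ps1", "technet", "...")])
hits = aggregate([local, repo], "restart")   # one feed, two sources
```

Swapping in a new repository then means writing one more provider class and listing it in the configuration, which is the extensibility point the paragraph above describes.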

Survival Guides
Another curation offering is the survival guides. Survival guides are a TechNet Wiki offering of links to information around a specific product, technology, or set of scenarios. Created and managed by the community (including Microsoft), survival guides map information by lifecycle, area, scenario, or other criteria, with links to top information. Links can point to any content and often include a mix of information from different Microsoft sites and community sites like blogs, wikis, and YouTube.

Survival guides provide community-stewarded, up-to-date information conveniently organized for users and professionals. The Windows PowerShell Survival Guide, System Center Survival Guide, Hyper-V Survival Guide, and others on the TechNet Wiki are also helpful for planning training and deployments and for finding more information from experts in different regions. Contributors benefit from increased recognition.

Join the TechNet Wiki and share your favorite links, add thoughtful comments or better yet, create a new survival guide for a product or technology that you have a passion for!

Community
Community is a big part of the information experience, as we work together with you to extend and expand the guidance offerings.  Here are the ways we are contributing, how our efforts impact your experience, and how you can get involved:

  • Events and errors. Troubleshooting guidance added by the community
  • Forum 2 Wiki. Your common forum questions with our responses are posted to the Wiki
  • Suggestion Box. Recommend areas where you’d like us to add information
  • Script Center. A host of great PowerShell scripts by all of you (and us!).

Events and Errors
Community-sourced events and errors harness the troubleshooting experience of the community. Using topics that address a single error or event, customers can find current troubleshooting information on the TechNet Wiki using search or, in the near future, forwarding from the Windows Event Viewer. When Event Viewer forwarding goes live, contributors will be able to create new topics that can be found by the forwarding system and used automatically.

This approach helps you, our customers. Since the content is on a wiki, it can be continuously updated to reflect the latest techniques, insights, and best practices. And since the Wiki shares a powerful profile system with the TechNet Forums, Blogs, Galleries, and other Microsoft places, contributors are recognized and can achieve Wiki fame by creating or revising articles that reach thousands of views.

Join the TechNet Wiki and share your troubleshooting experience. You can update Event ID 1058 – Group Policy Preprocessing (Network) or over 50 others or create your own. Contributing is easy and appreciated by everyone!

Forum 2 Wiki
This effort converts some of the highest-viewed forum threads on different TechNet forums into TechNet Wiki articles to increase clarity. In some of our customer roundtables (and in feedback from others in the community), we learned that forum answers are sometimes “lost” in the cacophony of multiple responses, tangential information, and updates to previous answers. Answers can also be difficult to find.

For an example, visit Renaming a Windows Server 2008 Active Directory Domain or try one of these.

By moving the information into an article on the TechNet Wiki, the community can easily modify it inline rather than through additional comments. Content can be tagged, stewarded, and easily shared with others. This increases discoverability and clarity for customers and can increase recognition for contributors (a common theme in community, no?).

You can make a difference. When on your favorite forum, convert a popular thread into a Wiki article. Include links from the forum to the Wiki article and the Wiki article back to the forum as a way to acknowledge sources and the work of community. Then ask your networks to review and update.

Suggestion Box
The Suggestion Box is a place where you can share ideas and suggestions, prioritize them, and help deliver community content. Our goal here is to identify information needed by the community and to work together with community members to curate, develop, or point to that information, whether it’s on member blogs, forums, TechNet Wiki articles, Microsoft.com, or other sites. The information experience can only improve with community feedback.

While the current iteration of the Suggestion Box does not share the same profile as the TechNet Wiki and other Microsoft community platforms, it is easy to suggest new ideas, vote on existing ideas, or volunteer to deliver content for an existing request. It also provides a prioritized list of ideas for times you want to contribute, but need an idea. This helps you get the content you need (and the recognition you deserve for contributing).

Script Center
For a similar experience focused exclusively on scripts, visit the Script Center. Download resources and applications for Windows 7, Windows Server 2008 R2, Windows Server 2008, SharePoint, System Center, Office, and other products. New resources are added frequently, so check often and see what’s new.  Join us!

Scripting Games. The Scripting Games are the premier learning event of the year for IT Pros, developers, and others who want to learn Windows PowerShell. Managed by Scripting Guy Ed Wilson, the Games are a great way to jump into the PowerShell scripting community (or raise your status as an expert in PowerShell) in a fun, accommodating and rewarding way. Meet other community members, write cool scripts and receive feedback from a distinguished panel of guest judges.

The Scripting Games help the community by raising awareness, fostering engagement and networking, teaching neat tricks and coding techniques, and awarding some cool prizes. Check out Ed’s top ten reasons to participate and mark your calendar for next year’s event!

Solutions and Scenarios
The integration of our products and technologies into a holistic solution to real customer problems drove our information experience.  As we shaped these scenarios, we considered the new value propositions offered by Windows Server 2012:

  • Building Your Cloud Infrastructure: Scenario Overview
    You can leverage new features around network and storage virtualization that, when combined with improved server virtualization, enable the building of your cloud infrastructure based on Windows Server 2012. This will help with your strategy in delivering Infrastructure as a Service (IaaS) or building hosted services.
  • Dynamic Access Control: Scenario Overview
    You can apply data governance across your file servers to control who can access information and to audit who has accessed information.
  • Hosting-Friendly Web Server Platform (IIS): Scenario Overview
    Rapid and efficient scaling of your web applications makes for a cloud-ready web platform. Enhanced security, application initialization, NUMA-aware scalability, and the sharing of resources across sites allows for this rapid scaling with minimal management overhead.
  • Increasing Server, Storage, and Network Availability: Scenario Overview
    New experiences in Windows Server 2012 work together to improve availability, performance, and reliability at the single-server and multiple-server (scale-up and scale-out) levels.

Thanks for reviewing just some of the exciting information experiences we’re creating for Windows Server 2012. We hope you enjoy them!

The post The Windows Server 2012 Information Experience appeared first on Microsoft Windows Server Blog.

]]>
http://approjects.co.za/?big=en-us/windows-server/blog/2012/07/02/the-windows-server-2012-information-experience/feed/ 4
Open Management Infrastructure http://approjects.co.za/?big=en-us/windows-server/blog/2012/06/28/open-management-infrastructure/ http://approjects.co.za/?big=en-us/windows-server/blog/2012/06/28/open-management-infrastructure/#comments Thu, 28 Jun 2012 10:47:00 +0000 Many years ago, Microsoft joined with other companies to define the Hardware Abstraction Layer (HAL), a set of standards to abstract the devices on a PC (and later, a server) for the OS.  The HAL is the unsung hero of the computing industry, allowing an amazing level of choice and interoperability in the x86 ecosystem.

The post Open Management Infrastructure appeared first on Microsoft Windows Server Blog.

]]>
Many years ago, Microsoft joined with other companies to define the Hardware Abstraction Layer (HAL), a set of standards to abstract the devices on a PC (and later, a server) for the OS.  The HAL is the unsung hero of the computing industry, allowing an amazing level of choice and interoperability in the x86 ecosystem.  It is one of the critical hidden technologies behind why all this stuff “just works.” 
With Windows Server 2012, Windows has shifted its focus to become a Cloud OS, so a new abstraction layer is required – a Datacenter Abstraction Layer or DAL.  Microsoft is, once again, joining with other companies to define the DAL.  Instead of starting from scratch or advancing proprietary standards, we are embracing standards-based management to accelerate the process so we can get the ecosystem and our customers to the cloud as quickly as possible. 

As we looked at the task of getting the industry to adopt standards-based management, we saw a couple of challenges. 

The first challenge was to convince the industry that standards-based management was credible and capable of complete management.  We proved that with our big investments in standards-based management in Windows Server 2012.  In this release, we are fully committed to standards-based management as the primary management path; DCOM is provided only for backwards compatibility. 

The next big challenge was to help the industry implement standards-based management.  The existing open source implementations have a number of problems that stopped the ecosystem from embracing this approach.

In today’s blog, Otto Helweg and Wassim Fayed, Program Managers in the Windows Management team, describe what we did to address that concern.  It is truly an exciting time to be working in the computer industry – as a community, we are all about to take this to the next level and our customers are going to reap huge rewards.  What could be better than that?

Microsoft and The Open Group are going big on standards-based management with a new, free, open source technology called Open Management Infrastructure, or OMI (formerly known as NanoWBEM).  We are working with Arista and Cisco to port OMI to their network switches for our Windows Azure and cloud data centers.  Jeffrey Snover did a technology demonstration at TechEd Europe in which he used a common set of standards-based tools to manage a baseboard management controller on a server, a Windows operating system, and an Arista switch running OMI.

The public availability of OMI means that you can now easily compile and implement a standards-based management service into any device or platform from a free open-source package. Our goals are to remove all obstacles that stand in the way of implementing standards-based management, so that every device in the world can be managed in a clear, consistent, coherent way, and to nurture a rich ecosystem of standards-based management products.

Today, datacenters comprise a slew of heterogeneous devices supplied by different hardware and platform vendors, each requiring different tools and management processes. Companies are forced to write their own abstraction layer or be locked into a single vendor, which limits their choice and agility. This problem can be solved only by moving the industry to adopt the right standard for datacenter device and platform abstractions.

In addition, the growth of cloud-based computing is, by definition, driving demand for more automation, which, in turn, requires a solid foundation built upon management standards. For standards-based management to satisfy today’s cloud management demands, it must be sophisticated enough to support the diverse set of devices required, and it must be easy for hardware and platform vendors alike to implement.  The DMTF CIM and WS-MAN standards are up to the task, but implementing them effectively has been a challenge.  Open Management Infrastructure (OMI) addresses this problem.

Easy and Diverse Device Support
Let’s start with a little history. Windows has long been a leader in implementing CIM, beginning with WMI (Windows Management Instrumentation). The Distributed Management Task Force (DMTF) Common Information Model (CIM) is an open standard that defines how managed elements are represented as a common set of objects and defines the relationships between them using associations.

When WMI was first introduced as an out-of-box install for Windows NT 4.0, it implemented early versions of the standards and schemas. WMI used DCOM for remote management, because no standard protocol was defined at that time. In Windows Server 2012, we invested heavily in standards and remote management, synching WMI with the latest DMTF standards and protocols.

The CIM standard is sophisticated and flexible enough to use as a management model for all devices – particularly datacenter devices. Although these DMTF standards have been around for years, they have been a challenge to implement, and existing implementations have been too large for mobile and embedded devices.  To address these challenges, Microsoft has built a highly portable, small footprint, high performance CIM Object Manager called OMI that is designed specifically to implement the DMTF standards. We then worked with The Open Group to make the source code for OMI available to everyone under an Apache 2 license.  OMI is written to be easy to implement in Linux and UNIX systems.

Partners that adopt OMI will get the following:

  • DMTF Standards Support: OMI implements its CIMOM server according to the DMTF standard.
  • Small System Support: OMI is designed to also be implemented in small systems (including embedded and mobile systems).
  • Easy Implementation: Greatly shortened path to implementing WS-Management and CIM in your devices/platforms.
  • Remote Manageability: Instant remote manageability from Windows and non-Windows clients and servers as well as other WS-Management-enabled platforms.
  • API compatibility with WMI:  Providers and management applications can be written on Linux and Windows by using the same APIs.
  • Support for CIM IDE: Tools for generating and developing CIM providers using tools, such as Visual Studio’s CIM IDE.
  • Optional PowerShell Support: If OMI providers use a set of documented conventions, Windows PowerShell will discover them and auto-generate cmdlets from them (This is how many of the 2300+ cmdlets in Windows Server 2012 are implemented).

OMI Details
For developers, OMI’s small footprint (250 KB base size with a working-set memory usage of 1 MB) and high-quality code reduce the complexity of developing a high-performance, stable, standards-based management stack. For IT pros, OMI greatly amplifies your effectiveness and productivity by increasing the number and types of devices you can manage and by unifying the management experience with standards-based management and automation tools, such as Windows PowerShell and System Center, and other management solutions.

OMI includes the following components and tools in its implementation of a CIM server.


Extensibility

OMI uses a provider model to enable developers to extend OMI to their specific device or platform. Historically, providers have been very hard to write, which made them costly and unstable. OMI leverages a greatly simplified provider model that is also being used by WMI in Windows Server 2012 and Windows 8. In short, OMI simplifies implementation for the developer by providing the following:

  • Next Generation Provider Interface
  • Compatible with the new WMI provider interface in Windows Server 2012 and Windows 8
  • Generation of provider skeletons (omigen)
  • Generation of concrete CIM class data structures and code
  • Provider registration tool (omireg)

The development model begins by specifying what needs to be managed.  From the specification, the omigen tool generates a set of C data structures and code that implements the management model.  The developer adds their code to the skeleton and registers the provider.

OMI is for Embedded and Mobile Systems
Embedded and mobile device management might be one of the most demanding tasks for a management technology, because such devices have the most significant processor and memory constraints. We figured that if we could build a management technology that meets their needs, OMI should be well suited to address the management needs of any device. Therefore, to keep OMI small and ideal for embedded systems, we implemented the following design characteristics:

  • Server object size less than 250 kilobytes
  • Server implemented entirely in C
  • Provider interface is C
  • Repository-less server
  • Concrete provider classes yield less code
  • Iterative size optimization
  • Diskless operation

Security
Security matters.  Ever since Bill Gates’s famous Trustworthy Computing memo, we have committed ourselves to the Security Development Lifecycle. Security is a primary factor in all aspects of our development and coding process. Despite its small size, OMI implements the following security capabilities:

  • HTTPS (SSL)
  • HTTP Basic Authentication
  • Local Authentication
  • Pluggable Authentication Module (PAM) support
  • Out-of-process providers
  • Run as requestor
  • Run as server
  • Run as designated user

Great! How Do I Get OMI?
Microsoft has partnered with The Open Group to create a hardware, software, and developer community to leverage, support, and enhance OMI. You can download OMI and get more details from The Open Group’s project site: http://omi.opengroup.org. In the near future, you will see this site and community grow to include more detailed documentation, contribution facilities, and OMI-focused developer conferences.

For specific questions, please reach out to ottoh@microsoft.com

The post Open Management Infrastructure appeared first on Microsoft Windows Server Blog.

]]>
http://approjects.co.za/?big=en-us/windows-server/blog/2012/06/28/open-management-infrastructure/feed/ 43
Windows Server 2012 Release Candidate available now http://approjects.co.za/?big=en-us/windows-server/blog/2012/05/31/windows-server-2012-release-candidate-available-now/ http://approjects.co.za/?big=en-us/windows-server/blog/2012/05/31/windows-server-2012-release-candidate-available-now/#comments Thu, 31 May 2012 11:21:00 +0000 Great news!  We reached another important milestone on the road to the final release of the cloud optimized OS:  Windows Server 2012 Release Candidate (RC) is available now for download and evaluation.

The post Windows Server 2012 Release Candidate available now appeared first on Microsoft Windows Server Blog.

]]>
Great news!  We reached another important milestone on the road to the final release of the cloud optimized OS:  Windows Server 2012 Release Candidate (RC) is available now for download and evaluation.

If you haven’t yet started exploring the wide range of new features and capabilities, this new pre-release version makes now an ideal time to begin.  With nearly 300,000 downloads of the beta release to date, the excitement around this new Windows Server is unprecedented.  I encourage you to join the worldwide community of IT professionals and developers who are already familiarizing themselves with Windows Server 2012 and gearing up to take advantage of all it has to offer.  Thanks in advance for participating in this last opportunity to provide us with your feedback so we can deliver the highest quality release.

I also encourage you to fully explore this blog, if you haven’t already.  It’s a great on-ramp to the new Server, providing insights directly from the people who built it, as well as links out to a rich set of blogs and content across Microsoft digital properties.  Below I’ve provided a list of posts since the March 1 beta announcement.

We look forward to showcasing Windows Server 2012 at our June TechEd events in Orlando and Amsterdam and at a Community Roadshow in a city near you.

Windows Server 2012, PowerShell 3.0 and DevOps, Part 2…

Windows Server 2012, PowerShell 3.0 and DevOps, Part 1…

Announcing the Windows Server 2012 Community Roadshow

Introduction to Windows Server 2012 Dynamic Access Control

Improved Server Manageability through Customer Feedback: How the Customer Experience Improvement Program makes Windows Server 2012 a better product for IT Professionals

Windows Server 2012 Remote Desktop Services (RDS)

Introducing the Server and Cloud Partner and Customer Solutions Team Blog

Building Cloud Infrastructure with Windows Server 2012 and System Center 2012 SP1

SMB 2.2 is now SMB 3.0

Introducing Windows Server “8” Hyper-V Network Virtualization: Enabling Rapid Migration and Workload Isolation in the Cloud

Windows Server “8” Beta: Hyper-V & Scale-up Virtual Machines Part 2…

Windows Server “8” Beta: Hyper-V & Scale-up Virtual Machines Part 1…

Standards-based Management in Windows Server “8”

Microsoft Online Backup Service

Building an Optimized Private Cloud using Windows Server “8” Server Core

Rocking the Windows Server “8” Administrative Experience

Where to Find Previous Windows Server “8” Posts

The post Windows Server 2012 Release Candidate available now appeared first on Microsoft Windows Server Blog.

]]>
http://approjects.co.za/?big=en-us/windows-server/blog/2012/05/31/windows-server-2012-release-candidate-available-now/feed/ 11
Windows Server 2012, PowerShell 3.0 and DevOps, Part 2 http://approjects.co.za/?big=en-us/windows-server/blog/2012/05/30/windows-server-2012-powershell-3-0-and-devops-part-2/ http://approjects.co.za/?big=en-us/windows-server/blog/2012/05/30/windows-server-2012-powershell-3-0-and-devops-part-2/#comments Wed, 30 May 2012 12:09:00 +0000 This concludes my two part series.  In my first post, I provided some background information about PowerShell and DevOps.  In this post, I’ll provide you a bunch of specifics.  PowerShell 3.0, like Windows Server 2012, has a ton of new features and enhancements so I’ll only scratch the surface.

The post Windows Server 2012, PowerShell 3.0 and DevOps, Part 2 appeared first on Microsoft Windows Server Blog.

]]>
This concludes my two-part series.  In my first post, I provided some background information about PowerShell and DevOps.  In this post, I’ll give you a bunch of specifics.  PowerShell 3.0, like Windows Server 2012, has a ton of new features and enhancements, so I’ll only scratch the surface. 

While PowerShell has always been focused on the goals of DevOps, PowerShell 3.0 and Windows Server 2012 take this to a new level.  With Windows Server 2012, we shifted our focus from being a great OS for a server to being a cloud OS for lots of servers and the devices that connect them, whether they are physical or virtual, on-premise or off-premise.  To achieve this, we needed major investments in:

  1. Automating everything
  2. Robust and agile automation
  3. Making it easier for operators to automate
  4. Making it easier for developers to build tools

Automating Everything
Windows Server 2008/R2 shipped with ~230 cmdlets.  Windows Server 2012 beats that by a factor of more than 10, shipping ~2,430 cmdlets.  You can now automate almost every aspect of the server.  There are cmdlets for networking, storage, clustering, RDS, DHCP, DNS, file servers, print, SMI-S, etc. – the list goes on.  If you’ve read blogs about Windows Server 2012, you’ve seen how many things can be done using PowerShell.  If you haven’t kept up to date, check out Jose Barreto’s File Server blog posts, Yigal Edery’s Private Cloud blog posts, Ben Armstrong’s Virtual PC Guy’s Blog posts, the Clustering and High-Availability blog posts or Natalia Mackevicius’ Partner and Customer blog posts and you’ll see what I mean.  Windows Server 2012 is, by far, the most automatable version of Windows ever.

There are already a large number of hardware and software partners shipping PowerShell cmdlets, and those that haven’t released them yet are working to quickly deliver them in the next versions of their products.  This was very clear at the recent MMS conference in Las Vegas, and I think you’ll see even more support at TechEd.  You should definitely make sure that any product you buy delivers a full set of PowerShell cmdlets.  If it doesn’t, you should think twice and do some due diligence to make sure you are getting a product that is current and is still being invested in.  If they didn’t do PowerShell, what else are they missing?  The good news is that a lot of products will support PowerShell by the time Windows Server 2012 ships, and the products that have delivered cmdlets found it easy to do and mention the very positive customer feedback they get.  Every product that ships PowerShell cmdlets increases its investment in PowerShell in its next release.

Robust and agile automation

Workflow
We integrated the Windows Workflow Foundation engine into PowerShell to make it simple and easy to automate things that take a long time, that operate at very large scale, or that require the coordination of multiple steps across multiple machines.  Traditionally, Windows Workflow has been a developer-only tool requiring Visual Studio and a lot of code to create a solution.  We’ve made it an in-the-box capability that operations staff can use to create solutions with their existing PowerShell scripting skills.  Workflow provides direct support for parallel execution, operation retries, and the ability to suspend and resume operations.  For example, a workflow can detect a problem that requires manual intervention, notify the operator of this condition, and then suspend operations until the operator corrects the situation and resumes the workflow.

Operators can use any of the available Workflow designers to create workflows.  However, we took it a step further and simplified authoring by extending the PowerShell language with the workflow keyword.  Any operator or developer can now easily author a workflow using the tools that ship in all Windows SKUs.  The behavior of a workflow is different from that of a function, and it has a few more rules, but if you know how to write a PowerShell function, you are 80% of the way to being able to write a workflow.  Authoring workflows in PowerShell is much easier than working with XAML, and many of us find it easier to understand than the Workflow designer tools.  You also get the benefit of being able to paste a workflow into email and have someone read and review it without installing special tools.  Below is an example workflow that collects inventory information from multiple machines in parallel.
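A minimal sketch of such a workflow (the name Get-Inventory and the specific CIM queries are illustrative assumptions, not the original example) might look like this:

```powershell
# Each query inside the parallel block runs concurrently, and the
# workflow fans out across every computer passed via the common
# -PSComputerName parameter.
workflow Get-Inventory
{
    parallel
    {
        Get-CimInstance -ClassName Win32_OperatingSystem
        Get-CimInstance -ClassName Win32_ComputerSystem
        Get-CimInstance -ClassName Win32_LogicalDisk -Filter 'DriveType = 3'
    }
}
```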

The command below will get this inventory information from a list of servers contained in servers.txt and output the results to a file.  If any of the servers is unavailable, the workflow will attempt to contact the server every 60 seconds for an hour.
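A hedged sketch of that command, using the workflow common parameters for connection retry (Get-Inventory stands in for the inventory workflow; servers.txt and the output path are placeholders):

```powershell
# Run the inventory workflow against every server in servers.txt.
# If a server is unreachable, retry every 60 seconds for up to an
# hour (60 retries) before giving up on that server.
Get-Inventory -PSComputerName (Get-Content .\servers.txt) `
              -PSConnectionRetryCount 60 `
              -PSConnectionRetryIntervalSec 60 |
    Out-File .\inventory-report.txt
```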

Workflow is exactly what DevOps practitioners need to reliably and repeatably perform operations.  One of the key techniques of DevOps is A/B testing where two versions of software are deployed and run for a period of time.  They are measured against some goodness metric (e.g. increased sales) and then the winning version is deployed to all machines.  The workflow capabilities allow PowerShell to perform operations against a large number of machines over a large period of time making it easy to automate A/B testing.

Scheduled jobs
We also seamlessly integrated Task Scheduler and PowerShell jobs to make it simple and easy to automate operations that either occur on a regular schedule or in response to an event occurring.  Below is a workflow which is meant to run forever.  It collects configuration information (disk info) and then suspends itself.  The workflow is started and given a well-known name “CONFIG”.  We’ll resume this workflow using Task Scheduler.  In the example, we register a ScheduledJob to run every Friday at 6pm and after every system startup.  When one of the triggers occurs, the scheduled job runs and resumes the workflow using its well-known name.  The workflow then collects the configuration information, putting it into a new file, and suspends itself again.
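A sketch of that pattern (the names Config, CONFIG, and ResumeConfig, and the disk query, are illustrative assumptions):

```powershell
workflow Config
{
    while ($true)
    {
        # Wait here until something resumes the workflow.
        Suspend-Workflow
        # On resume, capture disk configuration to a timestamped file.
        Get-CimInstance -ClassName Win32_LogicalDisk |
            Out-File ('disks-{0:yyyyMMdd-HHmmss}.txt' -f (Get-Date))
    }
}

# Start the workflow as a job with a well-known name.
Config -AsJob -JobName CONFIG

# Resume it every Friday at 6:00 PM and after every system startup.
$triggers = @(
    (New-JobTrigger -Weekly -DaysOfWeek Friday -At '6:00 PM'),
    (New-JobTrigger -AtStartup)
)
Register-ScheduledJob -Name ResumeConfig -Trigger $triggers `
    -ScriptBlock { Resume-Job -Name CONFIG }
```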

Robust Networking
In previous releases, PowerShell shipped with remoting disabled by default and required operators to go to each machine and issue the Enable-PSRemoting cmdlet in order to remotely manage it.  As a Cloud OS, remote management of servers via PowerShell is now the mainstream scenario, so we’ve reduced the steps required and enabled PowerShell remoting by default in all server configurations.  We did extensive security analysis and testing to ensure that this was safe.

In Wojtek Kozaczynski’s blog post on Standards-Based management, he described how we made WS-MAN our primary management protocol and kept COM and DCOM for backwards compatibility.  WS-MAN is a Web-Services protocol using HTTP and HTTPS.  While these are effectively REST protocols, PowerShell establishes a session layer on top of them to reuse a remote process for performance and to take advantage of session state.  These sessions were robust in the face of modest network interruptions but would occasionally break when operators managed servers from their laptops over Wi-Fi networks while roaming between buildings.  We’ve enhanced the session layer of WS-MAN; by default, it will survive network interruptions of up to 3 minutes.  Disconnected Sessions support was added to PowerShell sessions, which gives users the option to disconnect from an active remote session and later reconnect to the same session, without losing state or being forced to terminate task execution. You can even connect to the session from a different computer (just like a remote desktop session).
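In practice the disconnect/reconnect flow looks roughly like this (Server01 is a placeholder):

```powershell
# Start a long-running command in a remote session, then disconnect;
# the command keeps running on Server01 and its output is buffered.
$s = New-PSSession -ComputerName Server01
Invoke-Command -Session $s -AsJob { Get-ChildItem C:\ -Recurse }
Disconnect-PSSession -Session $s

# Later -- even from a different computer -- find and reconnect to
# the session with its state and running tasks intact.
Get-PSSession -ComputerName Server01 | Connect-PSSession
```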

Easier for operators to automate
We wanted to significantly lower the skill level required to successfully automate a complex solution.  Ultimately we want to create a world where operators think about what they want, type it and get it.  Every customer’s needs and scenarios are different so they need to script their own solutions.  Our goal is to make it simple and easy to author scripts gluing together high level task oriented abstractions.  The number one factor in making it simple is cmdlet coverage.  That is why having ~2,430 cmdlets makes Windows Server 2012 so much easier to automate.  A number of these cmdlets are extremely effective in dealing with the messy, real-world life of datacenters.  We have cmdlets to work with REST APIs, JSON objects and even to get, parse and post web pages from management applications if required.
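For example, ConvertFrom-Json (new in PowerShell 3.0) turns JSON text into objects you can filter like any other pipeline output, and Invoke-RestMethod does the same for a live REST endpoint (the JSON payload here is made up):

```powershell
$json = '[{"name":"web01","cpu":42},{"name":"web02","cpu":87}]'

# Parse the JSON into objects and filter them like any other objects.
$servers = $json | ConvertFrom-Json
$busy = $servers | Where-Object { $_.cpu -gt 80 }
$busy.name                                    # web02

# Against a real endpoint (URL is a placeholder):
# Invoke-RestMethod -Uri 'https://example.com/api/servers'
```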

PowerShell 3.0 simplifies the language and utility cmdlets to reduce the steps and syntax necessary to perform an operation.  Below is an example showing the old way of doing something and the new simplified syntax.
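As a representative before/after pair (filtering processes by Id is an arbitrary illustration, not the original example):

```powershell
$procs = Get-Process

# PowerShell 2.0 style: an explicit script block and $_ are required.
$old = $procs | Where-Object { $_.Id -gt 100 }

# PowerShell 3.0 simplified syntax: no braces, no $_.
$new = $procs | Where-Object Id -gt 100
```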

PowerShell 3.0 improves the authoring tools operators use to create scripts and author workflows.  PowerShell ISE now supports rich IntelliSense, snippets, third-party extensibility, and a Show-Command window that makes it easy to find exactly the right command and parameters you need to accomplish a task.

Easier for developers to build tools
Developers have always loved scripting with PowerShell because of its power, its use of C language conventions and its ability to program against .Net objects.  PowerShell 3.0 cleans up a number of seams in dealing with .NET and objects and expands to allow developers to use PowerShell in a much wider range of scenarios.

Tool building enhancements
PowerShell 3.0 now has an Abstract Syntax Tree (AST).  This allows new classes of intelligent tools to create, analyze, and manipulate PowerShell scripts.  One of the Microsoft cloud services depends upon a very large number of PowerShell scripts to run all aspects of the service.  Their development team used the AST to develop a script analysis tool to enforce a set of scripting best practices for their operators.  The public AST is the reason why IntelliSense is freakishly powerful.  It uses the AST to reason about the actual behavior of the program.

We modified a number of key areas of PowerShell to make them easier for developers to use and extend to write their own tools.  This includes access to our serializer, API improvements, and an extensibility model for PowerShell_ISE.

Scripting enhancements
PowerShell 3.0 now uses the .NET Dynamic Language Runtime (DLR) technology.  PowerShell monitors how a script is executing and will compile the script or portions of the script on the fly to optimize performance.  Performance varies but some scripts run 6 times faster in 3.0.
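One way to get a feel for this is to time a loop-heavy script block with Measure-Command (the loop is an arbitrary micro-benchmark; actual gains over 2.0 vary by script):

```powershell
# Time a loop-heavy script block; on 3.0 the engine detects the hot
# loop and compiles it on the fly rather than interpreting each pass.
$elapsed = Measure-Command {
    $sum = 0
    foreach ($i in 1..100000) { $sum += $i }
}
'{0:n0} iterations in {1:n0} ms' -f 100000, $elapsed.TotalMilliseconds
```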

Intellisense (and tab completion on the command line) now work with .NET namespaces and types.

It is able to reason about the program and use variable type-inferencing to improve the quality of the IntelliSense.

We extended our hashtable construct with two variations which make it much easier for developers to get the behavior they want:
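The two variations are [ordered], which preserves key insertion order, and [pscustomobject], which turns a hashtable literal into an object:

```powershell
# [ordered] keeps keys in insertion order instead of hash order.
$settings = [ordered]@{ Name = 'web01'; Port = 8080; Enabled = $true }

# [pscustomobject] turns a hashtable literal into a real object with
# properties, which is ideal for emitting structured pipeline output.
$server = [pscustomobject]@{ Name = 'web01'; Cpu = 42 }
$server.Cpu                                   # 42
```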

Platform building enhancements
We have streamlined the process to support delegated administration scenarios.  PowerShell 3.0 allows you to register a remoting endpoint, configure what commands it makes available, and specify what credentials those commands should run as.  This allows you to let regular users run a well-defined set of cmdlets with Admin privileges.  We’ve simplified the process of defining which cmdlets are available by using a declarative session configuration file.
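A sketch of that flow (the endpoint name and visible cmdlet are illustrative; Register-PSSessionConfiguration must be run elevated on the target server):

```powershell
# Describe a constrained endpoint in a declarative .pssc file:
# only Restart-Service is visible to connecting users.
New-PSSessionConfigurationFile -Path .\Maintenance.pssc `
    -SessionType RestrictedRemoteServer `
    -VisibleCmdlets 'Restart-Service'

# Register the endpoint. Commands invoked through it run under the
# supplied (admin) credential, not the connecting user's own.
Register-PSSessionConfiguration -Name Maintenance `
    -Path .\Maintenance.pssc `
    -RunAsCredential (Get-Credential)

# A regular user then connects to the constrained endpoint:
# Enter-PSSession -ComputerName Server01 -ConfigurationName Maintenance
```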

PowerShell 3.0 is also available as an optional component of WINPE.

Windows Server 2012 and PowerShell 3.0 are excellent DevOps tools
DevOps is a new term, and there is some disagreement about what it entails, but at its heart it is all about making change safe through automation and bridging the gap between operators and developers.  There is a lot still to do in this area, but Windows Server 2012 and PowerShell 3.0 make excellent progress toward those goals.  PowerShell won’t be the only tool in your DevOps toolbox, but it should be in every DevOps toolbox.  Download the beta today and find out for yourself.

The post Windows Server 2012, PowerShell 3.0 and DevOps, Part 2 appeared first on Microsoft Windows Server Blog.

]]>
http://approjects.co.za/?big=en-us/windows-server/blog/2012/05/30/windows-server-2012-powershell-3-0-and-devops-part-2/feed/ 9
Windows Server 2012, PowerShell 3.0 and DevOps, Part 1… http://approjects.co.za/?big=en-us/windows-server/blog/2012/05/29/windows-server-2012-powershell-3-0-and-devops-part-1/ http://approjects.co.za/?big=en-us/windows-server/blog/2012/05/29/windows-server-2012-powershell-3-0-and-devops-part-1/#comments Tue, 29 May 2012 09:55:00 +0000 In the first of a two part series, I provide some background information about PowerShell and DevOps.  In the second post, I’ll provide you a bunch of specifics.  PowerShell 3.

The post Windows Server 2012, PowerShell 3.0 and DevOps, Part 1… appeared first on Microsoft Windows Server Blog.

]]>
In the first of a two-part series, I provide some background information about PowerShell and DevOps.  In the second post, I’ll give you a bunch of specifics.  PowerShell 3.0, like Windows Server 2012, has a ton of new features and enhancements, so I’ll only scratch the surface.

The first time I heard the term DevOps was in a podcast describing the 2009 Velocity conference.  While most of the industry was struggling to deploy releases a few times a year, John Allspaw and Paul Hammond rocked the house with the talk “10+ Deploys Per Day: Dev and Ops Cooperation at Flickr”.  They made the case for delivering business results through changes in culture and tools, and gave birth to a new term: DevOps.  The problem is that developers think they are responsible for delivering features while operators are responsible for keeping the site running.  The gap between developers and operators leads to finger-pointing when things go wrong.  A successful business requires an IT culture of joint accountability and mutual respect: developers thinking about the needs and concerns of operators, and operators thinking about the needs and concerns of developers.

Their talk described how businesses required rapid change but that change is the root cause of most site-down events. Shunning the traditional “avoid change” approach, they advocated minimizing risk by making change safe through automation.  This is the job of DevOps – safe change.  This was the Taguchi quality approach applied to IT operations.  Taguchi observed that the root cause of poor quality was variation.  The solution was to first figure out how to do something repeatably.  Once you could do that, then you can make small modifications in the process to see whether they make things better or worse.  Back out the changes that make things worse. Keep doing the things that make things better.  The key is repeatability.  Repeatability allows experimentation which drives improvement.  We get repeatability in IT operations through automation.

We envisioned a distributed automation engine with a scripting language which would be used by beginner operators and sophisticated developers.   PowerShell’s design was driven by the same thinking and values that drove the birth of DevOps:

  1. Focus on the business
  2. Make change safe through automation
  3. Bridge the gap between developers and operators

Focus on the business
PowerShell has always focused on people using computers in a business context.  PowerShell needed to be consistent, safe, and productive.  Much has been made of the similarities between PowerShell and UNIX but in this regard, our ties are much closer to VMS/DCL and AS400/CL.

Consistent:  Operators and developers don’t have a lot of time to learn new things.  A consistent experience lets them invest once in a set of skills and then use those skills over and over again.  PowerShell uses a single common parser for all commands and performs common parameter validation, delivering absolute consistency in command-line syntax.  PowerShell cmdlets are designed so that ubiquitous parameters provide consistent functions to all commands (e.g. -ErrorAction, -ErrorVariable, -OutVariable, etc.)
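For instance, the same ubiquitous parameters work on any cmdlet.  A short sketch (the Spooler service is just an illustrative target):

```
# Any cmdlet accepts the common parameters the same way:
Stop-Service -Name Spooler -ErrorAction SilentlyContinue -ErrorVariable svcErr
if ($svcErr) { Write-Warning "Could not stop the spooler: $svcErr" }

# -OutVariable captures output in a variable while still emitting it:
Get-Service -Name Spooler -OutVariable svc
```

Because the parser and parameter binding are shared, learning these switches once pays off across every command.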

Safe:  An operator once told me that occasionally he was about to do something and realized that if he got it wrong, he would be fired.  In PowerShell, if you ever execute a cmdlet which has a side effect on the system, you can always add -WhatIf to test what would happen if you went through with the operation.  We also support -Confirm, -Verbose and -Debug.  Despite these safeguards, things can go wrong, and when they do, PowerShell goes to great lengths to speed up the process of diagnosing and resolving the error.
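A quick sketch of those safety switches in action (the path is hypothetical):

```
# Preview a destructive operation without performing it:
Remove-Item -Path C:\Logs\* -Recurse -WhatIf

# Or prompt for confirmation before each deletion:
Remove-Item -Path C:\Logs\* -Recurse -Confirm
```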

Productive:  Every aspect of PowerShell’s design maximizes the power of users (ergo the name).  PowerShell makes it easy to perform bulk operations across a large number of machines.  PowerShell also makes it easy to have productive engagements between your operators and developers because it allows them to speak a common language and to help each other with their scripts.
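A sketch of what a bulk operation looks like (the server names are placeholders):

```
# Run the same command on many machines at once over remoting:
Invoke-Command -ComputerName Server01, Server02, Server03 -ScriptBlock {
    Get-EventLog -LogName System -Newest 5
}
```

One command, many machines, and both operators and developers can read and extend it.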

Make change safe through automation
There has been a lot of discussion about whether PowerShell is a .NET language, a scripting language, or an interactive shell.  PowerShell is a distributed automation engine with a scripting language and interactive shell(s).   Interactive shells and a scripting language are critical components, but the focus has always been on automation through scripting.  Automation is the process of reducing and/or eliminating operations performed by a human.  A script documents what is going to happen.  People can review a script and you can modify it based upon their feedback.  You can test the script, observe the outcome, and modify the script; if the modification is good, keep it, and if it is bad, back it out.  In other words, scripting provides the repeatability required to apply the Taguchi method to IT operations.  Once you have an automated process, you can safely apply it over and over again.  These processes can now be performed reliably by lower-skilled admins.  These steps aren’t possible when you use traditional GUI admin tools.

Bridge the gap between developers and operators
Our goal has always been to deliver a single tool which could span the needs of operators doing ad hoc operations, simple scripting, formal scripting, advanced scripting and developers doing systems-level programming.
PowerShell spends a ton of effort projecting the world in terms of high-level, task-oriented abstractions with uniform syntax and semantics.  We call these cmdlets, and they are what operators want in order to efficiently and effectively manage systems.  Consider something as simple as copying a file: through the raw APIs it takes careful, low-level calls; with PowerShell it is a single cmdlet.
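As a sketch of the contrast (the paths are hypothetical), here is the raw .NET API call next to the task-oriented cmdlet:

```
# Calling the raw .NET API directly:
[System.IO.File]::Copy('C:\src\report.txt', 'C:\dst\report.txt', $true)

# The task-oriented cmdlet equivalent:
Copy-Item -Path C:\src\report.txt -Destination C:\dst\report.txt -Force
```

The cmdlet also participates in the common parameter model, so -WhatIf, -Verbose, and wildcard paths all work without extra code.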

Have you ever wondered why PowerShell uses curly braces {} (and other C constructs) instead of BEGIN/END as other scripting languages do?  We did that because we wanted to make it easier for developers of other C-based programming languages to adopt: C++, Objective-C, Java, JavaScript, Perl, PHP, etc.  We did some testing and determined that operators were able to readily adapt to this syntax.  We also wanted to provide a smooth glide path between PowerShell and C#.  This provides career mobility for operators who might want to transition to being a developer.

Most importantly, we wanted to develop a tool which could be used by BOTH operators and developers to bridge the gap between the groups and allow them to create common scripts, learn from each other and work together.

Windows Server 2012 and PowerShell 3.0 are excellent DevOps tools
DevOps is a new term and there is some disagreement about what it entails, but at its heart it is all about making change safe through automation and bridging the gap between operators and developers.  There is a lot left to do in this area, but Windows Server 2012 and PowerShell 3.0 make excellent progress towards accomplishing those goals.  PowerShell won’t be the only tool in your DevOps toolbox, but it should be in every DevOps toolbox.  Download the beta today and find out for yourself.

The post Windows Server 2012, PowerShell 3.0 and DevOps, Part 1… appeared first on Microsoft Windows Server Blog.

]]>
Improved Server Manageability through Customer Feedback: How the Customer Experience Improvement Program makes Windows Server 2012 a better product for IT Professionals http://approjects.co.za/?big=en-us/windows-server/blog/2012/05/17/improved-server-manageability-through-customer-feedback-how-the-customer-experience-improvement-program-makes-windows-server-2012-a-better-product-for-it-professionals/ Thu, 17 May 2012 11:38:00 +0000 I once talked to a doctor who told me about a recent patient that had serious medical symptoms for over a year before visiting the doctor. He said that if the patient had mentioned these symptoms when they first arose, the prognosis was very good but now the patient was in trouble.

The post Improved Server Manageability through Customer Feedback: How the Customer Experience Improvement Program makes Windows Server 2012 a better product for IT Professionals appeared first on Microsoft Windows Server Blog.

]]>
I once talked to a doctor who told me about a recent patient who had serious medical symptoms for over a year before visiting the doctor. He said that if the patient had mentioned these symptoms when they first arose, the prognosis would have been very good, but now the patient was in trouble. That reminded me of some advice I once heard: “Never hold anything back from your doctor.” Doctors have exactly one job: to help you. They can only help you with problems that they know about, so if you aren’t completely open and honest with them, you are only hurting yourself. The other thing is that by sharing your situation with a doctor, the doctor gains knowledge and skills to help other people as well. This model and thinking applies to our Customer Experience Improvement Program (CEIP) for Windows Server 2012 Beta, where we ask you to allow us to collect data about the health and usage of your servers. We frequently receive questions about CEIP: ‘What is CEIP?’ and ‘How is CEIP data used?’ In this post, Karen answers these questions along with the most important question: ‘Why should I enable CEIP?’

Karen Albrecht, a Program Manager on the Windows Server Telemetry team, authored this post.

–Cheers! Jeffrey

When we talk to the server community about the Windows Customer Experience Improvement Program (CEIP), most people say ‘Never heard of it’. Those that have heard of it sometimes don’t enable it because they ‘don’t want to share their data’. In this blog article we will explore what CEIP is and what benefits you may receive by enabling it on your deployed servers. We will also discuss several new features in Windows Server 2012 that make it easier to enable CEIP.

Let’s start by answering the question ‘What is CEIP?’ For those who have never seen CEIP before, in Windows Server 2012 Beta you can get there through Server Manager -> Local Server -> the Customer Experience Improvement Program link.

CEIP is the program by which we learn how you use Windows Server 2012, in order to improve the product based on your feedback. You can join the Windows Server 2012 CEIP program in several ways. First, for pre-release beta software, such as the Windows Server 2012 Beta, CEIP is enabled by default to help us improve the software before its final release. Alternatively, in released products such as Windows Server 2008 R2 we provide notice through the CEIP user interface so you can elect to opt in to the program.

We know that you need to get the most out of your servers, especially when it comes to server performance and network bandwidth. The CEIP report collection and transfer process is lightweight in order to meet this need. Windows records CEIP usage information using a high-speed tracing component, Event Tracing for Windows (ETW). ETW enables Windows Server 2012 to write out CEIP usage data with no noticeable impact on server performance. CEIP usage information is transferred to Microsoft in a two-part process using the Consolidator and Uploader scheduled tasks. The Consolidator exports CEIP data into a compressed binary format that is ready for transfer. The binary is typically less than 1 MB in size, so the transfer has minimal impact on network bandwidth. The Uploader scheduled task runs once every 24 hours and transfers the CEIP binary data to the Microsoft front-end servers using the Windows Telemetry Protocol.
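You can inspect these scheduled tasks yourself.  A sketch, assuming the default CEIP task folder name (the exact task names may vary by release):

```
# List the CEIP-related scheduled tasks and their current state:
Get-ScheduledTask -TaskPath '\Microsoft\Windows\Customer Experience Improvement Program\' |
    Select-Object TaskName, State
```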

Another question we are often asked is ‘What data is collected by CEIP?’ The data consists of basic information about how your server is configured and used: roles installed, features installed, settings used, and information about hardware. CEIP does not intentionally collect Personally Identifiable Information (PII), so CEIP reports do not contain your contact information, such as your name, address, or phone number. CEIP will not ask you to participate in surveys or send you junk e-mail, and you will not be contacted in any other way. The Microsoft Customer Experience Improvement Program privacy statement discusses, in detail, the data collected by CEIP and how we use it.

Moving on to the heart of the question, ‘What do I get for sending this data to Microsoft?’, you might be surprised by the ways Windows Server uses your data to improve the product. There are many examples beyond what is listed here, but we narrowed it down to the following to give you a flavor of some of the ways CEIP data is used to improve the product.

  1. Increased server reliability:  In the Windows Server 2012 Developer Preview and Windows Server 2012 Beta pre-release versions, Reliability Analysis Component (RAC) features are enabled to determine the root cause of Windows server crashes, Windows server hangs, and application crashes.  RAC combines CEIP data with Windows Error Reporting (WER) data in order to reconstruct a full view of the system state at the time of the crash or hang.  By analyzing the combined data in these two programs we can identify high occurrence issues in order to triage and fix them so that you have a more reliable platform release over release.  To learn more about the data collected by WER, see the Microsoft Error Reporting Privacy Statement.
  2. Improved programmability for server administration scripts:  For large-scale deployments, IT administration is often done using PowerShell and WMI scripts because scripting simplifies manageability at scale.  When a cmdlet or WMI interface changes or is removed, it can be painful to rewrite scripts to accommodate the platform changes.  In Windows Server 2012 we are using CEIP to address this by monitoring deprecated API usage so that APIs are not removed until removal has minimal impact on you.  As an example, in Windows Server 2012 the Win32_ServerFeature WMI interface is deprecated and is being replaced by MSFT_ServerManagerDeploymentTasks.  (For those who haven’t used it, Win32_ServerFeature detects installed roles and features.) 

    As part of the deprecation process, we added CEIP data to record interface usage, and based on the latest Windows Server 2012 Beta CEIP data, we found that 47% of customers are using Win32_ServerFeature.  Using this data, we can track migration off of Win32_ServerFeature so that it is not formally removed from the product until the move to MSFT_ServerManagerDeploymentTasks can be completed without impact to you. 


  3. Diversity of Windows Certified hardware:  One of the frequently asked questions we get is ‘What CEIP data does Microsoft share with partners?’  There are certain scenarios where a subset of CEIP data (but no PII) is shared with IxVs (independent hardware or software vendors) as part of hardware certification.  An important part of the Windows server offering is supporting high quality drivers for a diversity of devices in market.  The challenge is to understand what devices are most commonly used in market.  CEIP data is used to model hardware profiles and map diversity of different devices in order to inform certification strategy for IxVs.  Using this data, IxVs determine the breadth of drivers to certify (based on what is in market) and prioritize which devices get certified first (based on popularity).

  4. Improved product experiences: CEIP data is used on a day-by-day basis to understand a broad range of feature configurations so that we can prioritize work according to your usage patterns.  For example, in order to reduce the cost of setting up new servers, CEIP records what settings you use.  This allows us to refine default settings by tuning them to reflect the most common usage patterns, so it is faster for you to set up a new server.  Another example of internal usage is in testing.  In order to increase test coverage of real-world usage patterns, we analyze CEIP data to understand how the product is used.  This ensures that both design and testing are driven with your usage patterns in mind.  There are many, many more examples of how CEIP is used to drive customer feedback into the product but in the interest of time, let’s move on to how to configure CEIP.
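As a sketch, the deprecated WMI class mentioned in item 2 and its Server Manager cmdlet counterpart can both be queried from PowerShell (Get-WindowsFeature requires the Server Manager module on Windows Server):

```
# Enumerate installed roles and features via the deprecated WMI class:
Get-CimInstance -ClassName Win32_ServerFeature | Select-Object ID, Name

# The Server Manager cmdlet alternative:
Get-WindowsFeature | Where-Object Installed
```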

After the release of Windows Server 2008 R2, we did an assessment of CEIP adoption and found that 5-7% of servers in market were reporting CEIP. While working with customers on CEIP adoption we found that although servers were opted in, we weren’t getting data from them. We did a root-cause analysis and learned that the main reason servers weren’t reporting is that they are deployed in firewalled environments. To send CEIP data, servers need to be able to communicate over HTTPS (default port 443) and need to have proxy settings configured (if the server is in a network that uses a proxy server). In working with Technology Adoption Program (TAP) customers, we found that frequently one or more of these settings were not configured, thus preventing CEIP data from reaching Microsoft.
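A sketch of how you might verify those two prerequisites (the endpoint below is illustrative, not the actual CEIP upload address):

```
# Quick TCP check that outbound port 443 is reachable:
$tcp = New-Object System.Net.Sockets.TcpClient
$tcp.Connect('www.microsoft.com', 443)
$tcp.Connected   # True if the connection succeeded
$tcp.Close()

# Show the machine-wide WinHTTP proxy configuration:
netsh winhttp show proxy
```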

To make it easy to send CEIP data, Windows Server 2012 Beta ships several new features that allow you to get past these blocking issues so you can ‘set and forget’ CEIP. The simplest way to deliver CEIP data to us is to use a new feature called Windows Feedback Forwarder (WFF). WFF is a service that proxies CEIP data from machines in a domain to Microsoft. WFF will proxy CEIP data for Windows products including Windows 7 and Windows Server 2008 or higher. WFF will also proxy data for any Microsoft product that is enabled to ‘send customer feedback’.

The forwarder can sit within the domain or as an edge server. Machines in the domain are configured to send data to the forwarder via group policy. When an individual machine is triggered to collect data, it sends the data to the forwarder over HTTP and the forwarder relays the data to Microsoft over HTTPS.

To install Windows Feedback Forwarder

Using the User Interface (UI):

  1. On any Windows Server 2012 machine, launch Server Manager and then launch the Add Roles and Features Wizard. 
  2. In the Add Roles and Features Wizard, navigate to the Features page and select Windows Feedback Forwarder. 
  3. Specify an incoming port number (the default is 53533).  If the domain has an internet proxy, specify the proxy information.  Finish the install.
  4. In Server Manager, select ‘All Servers’ in the left hand navigation pane.  In the ‘Servers’ tile, right-click the server that you installed Windows Feedback Forwarder on and select ‘Windows Feedback Forwarder Configuration’.  Keep the dialog open for the next step.

OR, using PowerShell:

  1. Launch PowerShell and run ‘Add-WindowsFeature WFF’.
  2. In Server Manager, select ‘All Servers’ in the left hand navigation pane.  In the ‘Servers’ tile, right-click the server that you installed Windows Feedback Forwarder on and select ‘Windows Feedback Forwarder Configuration’. 
  3. Select the ‘Forwarding Settings’ tab and specify an incoming port number (the default is 53533).  If the domain has an internet proxy, specify the proxy information.  Click ‘Apply’.
  4. Keep the dialog open for the next step.

To deploy the Windows Feedback Forwarder group policy

The easiest way to configure machines in a domain to send CEIP data to your Windows Feedback Forwarder is to deploy a group policy.  There are two options: use the Windows Feedback Forwarder configuration dialog, or use the Group Policy Management Console (GPMC) to create and link the group policy object.

Using the Windows Feedback Forwarder configuration dialog:

  1. In the Windows Feedback Forwarder configuration dialog, select the group policy tab. 
  2. Enter the domain name that you want to deploy the group policy object to and click ‘Find’.  Note: you may have to enter credentials at this step depending on the settings of the current user context.
  3. After the list of organizational units is populated, select one or more organizational units.
  4. Click the ‘Apply’ button.

Manually creating a group policy object:

  1. In the Windows Feedback Forwarder configuration dialog, select the ‘Forwarding Settings’ tab.  Copy the Windows Feedback Forwarding URL and store it temporarily.
  2. In GPMC, create a new group policy object that points clients at the Windows Feedback Forwarding URL you copied in step 1, and link it to the desired organizational units.
An alternative method to enable CEIP is the Windows Automatic Feedback dialog, a new multi-machine opt-in experience that ships in Server Manager. It enables you to configure multiple machines to send CEIP data in just three clicks.

  1. Launch Server Manager and select ‘All Servers’ in the left hand navigation.
  2. In the ‘Servers’ tile, press Ctrl+A to select all servers, then right-click and select ‘Configure Windows Automatic Feedback’.
  3. Click ‘Enable’; both the Customer Experience Improvement Program and Windows Error Reporting will be enabled on all servers connected to that Server Manager console.

We would love to know what you think of this program and how we can improve it to provide the best experience for your deployments and Windows Server usage. Please give us your comments below.

Karen Albrecht
Program Manager
Windows Server Telemetry


]]>