HPC | Microsoft Windows Server Blog
http://approjects.co.za/?big=en-us/windows-server/blog/tag/hpc/
Your Guide to the Latest Windows Server Product Information

Windows HPC Server 2008 R2 Ships!
http://approjects.co.za/?big=en-us/windows-server/blog/2010/09/20/windows-hpc-server-2008-r2-ships/
Mon, 20 Sep 2010 09:25:00 +0000

The post Windows HPC Server 2008 R2 Ships! appeared first on Microsoft Windows Server Blog.

We have finished version 3 of Windows HPC Server! The verbose, official, and approved name is Windows HPC Server 2008 R2 Suite, signaling that we are leveraging Windows Server 2008 R2 as our core operating system. But make no mistake: this is the big v3 release, our most ambitious release. It’s like the third stage of a rocket firing. What makes it a big deal? We have continued to improve performance while adding new features that will increase the size of the HPC community.

Sometimes people think supercomputing is all about the Top500 List, the list of the 500 most powerful supercomputers in the world, but HPC is more than that. It’s about enabling the next generation of complex simulations in biology, chemistry, physics, finance, weather, and more. Microsoft’s ambition is not limited to the Top500; it is also to ensure the next 500K is just as capable, increasing the number of people and applications that can use the power of cluster-based supercomputing.

We’re doing a bunch of things to enable more people and organizations to use HPC. First, we’re improving the tools developers use to write multi-core applications with Visual Studio 2010. My favorite feature is the Parallel Performance Analyzer, a tool that lets developers see things like context switching for the threads of their application. We’ve also partnered with Intel to make sure their parallel performance tools run great in Visual Studio. Customers asked for a more resilient programming model for service-oriented applications, so with this release we’ve created a new asynchronous programming model that allows applications to submit millions of calls to the cluster, disconnect from the cluster, and collect the results later.
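A rough sketch of that submit-disconnect-collect pattern is below. The class and method names are invented for illustration (the real Windows HPC SOA API is .NET-based and keeps session state on the broker node); an in-process thread pool stands in for the cluster.

```python
import concurrent.futures

def service_call(x):
    # Stand-in for one call to a cluster-hosted SOA service.
    return x * x

class BrokerSession:
    """Hypothetical in-process stand-in for the HPC broker: a client submits
    calls under a session id, can 'disconnect', and a later client collects
    the results by reattaching to the same session id."""
    _sessions = {}  # session_id -> list of futures, shared across instances

    def __init__(self, session_id, workers=4):
        self.session_id = session_id
        self._sessions.setdefault(session_id, [])
        self._pool = concurrent.futures.ThreadPoolExecutor(max_workers=workers)

    def submit(self, fn, *args):
        # Fire-and-forget: the caller does not block on the result here.
        self._sessions[self.session_id].append(self._pool.submit(fn, *args))

    def collect(self):
        # A later (re)attached client gathers all results, in submit order.
        return [f.result() for f in self._sessions[self.session_id]]
```

In the real product the session state lives on the broker node rather than in the client process, which is what makes the disconnect/reconnect resilience possible; the class above only mimics the calling pattern.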

Enabling more people to use HPC also means making it easier to set up and manage. First, you have to get the cluster up and running, and to do that we’ve added support for network boot as well as the ability to dual boot with Linux. Next, we’ve improved usage of the cluster with new job scheduling policies as well as SharePoint integration. Finally, it is the nature of distributed systems to fail. To help with failures we’ve improved our diagnostics, making it easier to identify hardware failures, network failures, or failures in applications. ISVs shipping applications with the Windows HPC logo ship HPC Server diagnostics with their application, making it easier for administrators to fix their clusters.

Another way to enable more people to use HPC is ensuring commonly used applications like Mathematica or Matlab run well on the cluster. With this release we now support running Microsoft Excel 2010 on the cluster, increasing the size and complexity of models computed in Excel. There are an estimated 300 million Excel users worldwide, and Excel is often cited as a modeling and simulation tool used by engineers, scientists, and financial quants. Some of our customers run thousands of embarrassingly parallel simulations in Excel. The ability to offload to a cluster enables them to reduce their simulation runs from days to hours.
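The offload pays off because each simulation run is independent of every other. A minimal sketch of that embarrassingly parallel pattern in plain Python (the payoff model is invented for illustration; the real feature evaluates actual Excel workbooks on cluster nodes):

```python
import concurrent.futures
import random

def one_run(seed):
    # Hypothetical stand-in for a single Excel model evaluation:
    # compound twelve random monthly returns.
    rng = random.Random(seed)
    value = 1.0
    for _ in range(12):
        value *= 1.0 + rng.gauss(0.005, 0.04)
    return value

def run_batch(n_runs, workers=4):
    # No run depends on any other, so they fan out with zero coordination --
    # the same property that lets a cluster cut days of runs down to hours.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(one_run, range(n_runs)))
```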

There are two major categories of supercomputing applications: tightly coupled applications and embarrassingly parallel applications. Our performance improvements benefit both types. First, we have continued turning the performance crank for tightly coupled applications that use the Message Passing Interface (MPI) and high-speed RDMA networking. The result is performance that equals Linux, based on open-source and application-specific benchmarks. And we continue to contribute our MPI improvements to Argonne’s open source MPI project, making these contributions some of Microsoft’s largest open source contributions.
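One standard back-of-envelope way to see why the two classes behave so differently is Amdahl's law (not mentioned in the post itself, but a common model): any serial or communication-bound fraction caps achievable speedup, which is why tightly coupled MPI codes lean so hard on fast RDMA interconnects, while embarrassingly parallel codes scale almost linearly.

```python
def amdahl_speedup(parallel_fraction, n_procs):
    """Amdahl's-law speedup estimate: the serial fraction
    (1 - parallel_fraction) limits the benefit of adding processors."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

# An embarrassingly parallel job (parallel_fraction ~ 1.0) scales linearly
# with processors; a tightly coupled job that is only 90% parallel tops out
# near 10x, no matter how many nodes it gets.
```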

Second, we’ve made improvements to leverage the multi-core revolution, improving performance when running on the latest multicore chips from Intel and AMD and adding support for general purpose GPUs, the latest revolution in multi-core processing.

Finally, with this release we’ll support using Windows 7 workstations as part of the cluster environment using our new Desktop Compute Cloud (DCC) feature. With DCC administrators can specify particular hours of the day to use workstations in the cluster, for example, every night after 7PM.  Of course if users are logged in and still working, we won’t use the workstation for computation. With DCC we further expand the compute fabric for HPC.
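The DCC policy described above amounts to a simple availability check; a sketch with hypothetical names follows (the real configuration lives in the HPC cluster manager, not in user code):

```python
from datetime import datetime, time

def workstation_available(now, window_start=time(19, 0), window_end=time(7, 0),
                          user_logged_in=False):
    """Hypothetical sketch of the DCC rule: a workstation joins the cluster
    only inside its configured window (e.g. 7 PM to 7 AM) and never while
    an interactive user is logged in and working."""
    if user_logged_in:
        return False
    t = now.time()
    if window_start <= window_end:
        return window_start <= t < window_end
    # Window wraps past midnight -- the common overnight case.
    return t >= window_start or t < window_end
```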

So, wrapping up, we have finally put the finishing touches on Windows HPC Server 2008 R2, our third release, a release that will expand the HPC community. As for the future of Windows HPC Server, today at the High Performance Computing in Financial Markets conference we demonstrated integration with Azure, which will be released in a product update in the fall, allowing HPC Server users to mix traditional compute nodes in a data center, desktops, and instances in Azure.

Hurray!

Ryan Waite, general manager

Microsoft acquires the technology assets of Interactive Supercomputing (ISC)
http://approjects.co.za/?big=en-us/windows-server/blog/2009/09/21/microsoft-acquires-the-technology-assets-of-interactive-supercomputing-isc/
Mon, 21 Sep 2009 16:14:00 +0000

The post Microsoft acquires the technology assets of Interactive Supercomputing (ISC) appeared first on Microsoft Windows Server Blog.

Hello everyone,
Today, I’m very excited to announce that Microsoft has acquired the technology assets of Interactive Supercomputing (ISC), a company that specializes in bringing the power of parallel computing to the desktop and making high performance computing more accessible to end users. This move represents our ongoing commitment to parallel computing and high performance computing (HPC) and will bring together complementary technologies that help reduce the complexity and difficulty of expressing problems that can be parallelized. ISC’s products and technology enable faster prototyping, iteration, and deployment of large-scale parallel solutions, which is well aligned with our vision of making high performance computing and parallel computing easier, both on the desktop and in the cluster.
Bill Blake, CEO of ISC, is bringing over a team of industry leading experts on parallel and high performance computing that will join the Microsoft team at the New England Research & Development Center in Cambridge, MA.  He and I are both excited to start working together on the next generation of technology for researchers, analysts, and engineers, as well as those who have yet to be exposed to the benefits of parallel computing and HPC technologies or may have thought they were out of reach.
We have recently begun plans to integrate ISC technologies into future versions of Microsoft products and will provide more information over the coming months on where and how that integration will occur. Beginning immediately, Microsoft will provide support for ISC’s current Star-P customers and we are committed to continually listening to customer needs as we develop the next generation of HPC and parallel computing technologies.  I’m looking forward to the opportunities our two combined groups have to greatly improve the capability, performance, and accessibility of parallel computing and HPC technologies.
You can find more information on HPC and parallel computing at Microsoft in these links and stay up to date on integration news and updates at Microsoft Pathways, our acquisition information site.

Kyril Faenov
General Manager, High Performance & Parallel Computing Technologies

Personal Supercomputing Goes Quad Core
http://approjects.co.za/?big=en-us/windows-server/blog/2006/09/28/personal-supercomputing-goes-quad-core/
Thu, 28 Sep 2006 13:53:00 +0000

The post Personal Supercomputing Goes Quad Core appeared first on Microsoft Windows Server Blog.

“Quadrophenia” is how CNET characterized it. At Intel Developer Forum this week, Intel is showing off its forthcoming “Clovertown” quad-core processor for servers. Catch Intel’s podcast previewing its announcement of the quad-cores.

Today at IDF we participated in a personal supercomputing demo that used the “Clovertown” processors within Tyan Computer’s Typhoon system, running Windows CCS, along with Mellanox InfiniBand interconnect and Wolfram’s gridMathematica. I’m told that this is the first-ever HPC cluster demo using quad-core processors. Earlier this week Tyan Computer announced some details of the demo.

Stephen S. Pawlowski, a senior fellow and CTO at Intel, gave the demo. The slides and webcast of Pawlowski’s presentation will be posted shortly, I’m told.

Patrick O’Rourke

Windows CCS from your desktop
http://approjects.co.za/?big=en-us/windows-server/blog/2006/09/14/windows-ccs-from-your-desktop/
Thu, 14 Sep 2006 01:11:00 +0000

The post Windows CCS from your desktop appeared first on Microsoft Windows Server Blog.

I’m not talking about deskside HPC clusters, but rather learning more from your desk about the new Windows Server edition for running parallel applications on HPC clusters.

First you can start with Scientific Computing, which is hosting a webcast on high-performance computing going mainstream. Speakers include our own Tony Hey, University of Southampton’s Simon Cox and IDC analyst Earl Joseph. The show is Sept. 27, starting at 2pm Eastern.

And if that’s not enough, do check out Anand’s video on desk-side clusters.

Patrick

Supercomputers can do Windows
http://approjects.co.za/?big=en-us/windows-server/blog/2006/06/28/supercomputers-can-do-windows/
Wed, 28 Jun 2006 05:16:00 +0000

The post Supercomputers can do Windows appeared first on Microsoft Windows Server Blog.

The supercomputing world publishes a list twice a year of the 500 most powerful supercomputers. Conveniently, it’s called the Top500 project. The list is part bragging rights (among vendors) and part trend tracking (among scholars). But without a doubt, the list represents the who’s who of supercomputing and high-performance computing. The 27th edition was announced today at the 2006 International Supercomputer Conference in Dresden, Germany.

Coming in at #131 is a cluster system running Windows Compute Cluster Server 2003. The cluster, called Lincoln, is housed at the National Center for Supercomputing Applications at the University of Illinois. The Lincoln cluster was benchmarked at 4.1 Tflops on 896 Intel Xeon (x64) processors, using Dell PowerEdge 1855 blade servers, Cisco Topspin InfiniBand switches, and Force10 Gigabit Ethernet (GigE) switches. A datasheet of the configuration will be published today.

While Linux is the OS for 70% of the Top500 systems, the goal of the Windows CCS team is to ensure that there’s one Windows-based cluster on the Top500 list each year. We’re not looking to win the race to petaflops. Because of its integration with existing tools and infrastructure, Windows CCS will be most appealing to organizations wanting workgroup, departmental, and divisional HPC clusters. As one team member says, “it’s the democratization of HPC” … power to the people is another way to think of it. The Top500 list is an industry-accepted benchmark the Windows CCS team can use to demonstrate the headroom and scale of Windows CCS … and to keep the naysayers at bay (just kidding Greg).

You’ll be able to read more about NCSA’s Lincoln cluster and the Top500 result on PressPass.

Auf wiedersehen,

Patrick O’Rourke

10 Things about Windows CCS that You Won’t Read Anywhere Else
http://approjects.co.za/?big=en-us/windows-server/blog/2006/06/09/10-things-about-windows-ccs-that-you-wont-read-anywhere-else/
Fri, 09 Jun 2006 11:18:00 +0000

The post 10 Things about Windows CCS that You Won’t Read Anywhere Else appeared first on Microsoft Windows Server Blog.

Windows Compute Cluster Server has been RTM’d, and the bloggers and press are running their stories. And you can watch/listen to Zane talk about Windows CCS at the Virtual TechEd site. And of course you can go to the Windows CCS site for technical details and white papers.

Here are 10 things that you won’t read there that you may find interesting about Windows CCS.

1) Windows CCS is the first product that MSFT has shipped that was partially developed by the dev center in Shanghai, China.

2) The first benchmark (LinPack) data for Windows CCS can be seen on page 62 of this report. It’s also referenced by InformationWeek, and the benchmark was done with the folks at NCSA/University of Illinois.

3) Besides being sold by the big OEMs, Windows CCS will be distributed by the personal supercomputing vendors like Tyan Computer.

4) MS employs some well-known names in supercomputing and HPC, such as Burton Smith, Craig Mundie, Gordon Bell, Jim Gray, Tony Hey and Fabrizio Gagliardi. But the Windows CCS dev team also includes a woman who used to be on the dev team for IBM Blue Gene, a gentleman who developed one of the first Beowulf clusters, a gentleman from Argonne National Labs, and several folks from Cray.

5) Thirty years ago, at Los Alamos labs, Seymour Cray installed the first Cray-1 supercomputer, which was capable of 160 Mflops and cost $8.8 million. Fast-forward to today, where the Windows CCS LinPack benchmark shows 4.1 TFlops, and if the cluster had a price tag, it’d be in the hundreds of thousands.
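That comparison is easy to check, using only the figures from the paragraph above:

```python
cray_1_flops = 160e6  # Cray-1 (1976): 160 Mflops
ccs_flops = 4.1e12    # Windows CCS LinPack result: 4.1 TFlops

# Raw throughput grew by a factor of about 25,625 in thirty years.
speedup = ccs_flops / cray_1_flops
```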

6) In 1997, the folks from NCSA deployed the first Windows cluster on NT4.

7) The conceptual idea for developing Windows CCS dates back to an internal email titled, “the High End” on Dec. 23, 1998, that said in part:

A month ago we had a day devoted to scalability… we started by learning about the Beowulf clustering work being done on Linux and how that is the focus of University activity … we need to get a focus here so that we form some partnerships

8) In 2001, the MS Computational Clustering Preview kit and the “Beowulf Cluster Computing with Windows” book were released.

9) The folks at the National Center for Atmospheric Research ported their well-known Weather Research and Forecasting Model to Windows CCS. WRF is a next-generation numerical weather prediction system in use by the US National Weather Service, the Air Force Weather Agency, and over 3,000 users. Originally developed for UNIX systems, WRF comprises 360,000 lines of code in C++, Fortran, OpenMP, and MPI. What’s interesting is that only 750 lines required modification to produce a native Win64 binary that ran on Windows CCS/MPI.

10) Partner training will be running throughout the U.S., Asia and Europe starting in mid-July through early September. Contact your local office for more info, and keep an eye on the Windows CCS site.

Patrick
