DSM Perspective: Another Point of View
Distributed Shared Memory computers (DSMs) have arrived (Bell, 1992, 1995) to challenge mainframes. DSMs scale to 128 processors built from 2-8 processor nodes. Like shared memory multiprocessors (SMPs), DSMs provide a single system image and maintain a “shared everything” model. Large-scale UNIX servers using the SMP architecture also challenge mainframes in legacy use and applications; these have up to 64 processors and more uniform memory access.

In contrast, clusters both complement and compete with SMPs and DSMs, using a “shared nothing” model (see the sketch at the end of this section). Clusters built from commodity computers, switches, and operating systems scale to almost arbitrary sizes at lower cost, while trading away the SMP’s single system image. Clusters are required for high-availability applications. The highest performance scientific computers use the cluster (or MPP¹) approach. High-growth markets such as Internet servers, OLTP, and database systems can all use clusters.

The mainline future of DSM may be questionable because: small SMPs are not as cost-effective unless built from commodity components; large SMPs can be built without the DSM approach; and clusters are a cost-effective alternative to SMPs, including DSMs, for most applications over a wide scaling range. Nevertheless, commercial DSMs are being introduced that compete with SMPs over a broad range.

¹ We use clusters or multicomputers to mean either MPPs (for massively parallel processing) or interconnected SMP nodes with 2-64 processors. With multicomputers, an O/S manages each node. In the early 1990s the technical community defined “massive” as any system with more than 1,000 processors that did not use a single shared memory and instead passed messages among the nodes.
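To make the “shared everything” versus “shared nothing” distinction concrete, here is a minimal sketch of the cluster programming model. The text names no particular library, so MPI, the message-passing interface commonly used on such machines, is assumed here purely for illustration. Each process owns private memory; results are combined only by explicit messages.

```c
/* Hypothetical "shared nothing" sketch using MPI (an assumption; the
   article does not specify a message-passing library). Each process
   computes a private partial sum, then the partial sums are combined
   by an explicit reduction across the interconnect. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each process works on its own slice of the iteration space;
       no memory is shared among the nodes. */
    long local = 0;
    for (long i = rank; i < 1000000; i += nprocs)
        local += i;

    /* Explicit communication: partial sums travel as messages and
       are combined on rank 0. */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %ld\n", total);

    MPI_Finalize();
    return 0;
}
```

Under an SMP’s or DSM’s “shared everything” model, by contrast, threads in a single address space would update a shared accumulator directly, with no message passing; the hardware (or, for DSM, the memory system) provides the illusion of one memory.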