{"id":2843,"date":"2012-12-19T09:00:00","date_gmt":"2012-12-19T17:00:00","guid":{"rendered":"https:\/\/blogs.technet.microsoft.com\/dataplatforminsider\/2012\/12\/19\/disk-and-file-layout-for-sql-server\/"},"modified":"2024-01-22T22:49:26","modified_gmt":"2024-01-23T06:49:26","slug":"disk-and-file-layout-for-sql-server","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/sql-server\/blog\/2012\/12\/19\/disk-and-file-layout-for-sql-server\/","title":{"rendered":"Disk and File Layout for SQL Server"},"content":{"rendered":"

Guest blog post by Paul Galjan, Microsoft Applications specialist, EMC Corporation. To learn more from Paul and other EMC experts, please<\/em><\/span>\u00a0join our <\/em>Everything Microsoft Community<\/em><\/a>.<\/em><\/span><\/p><\/blockquote>\n

The RAID group is dead \u2013 long live the storage pool!\u00a0\u00a0 Pools fulfill the real promise of centralized storage \u2013 the elimination of storage silos.\u00a0 Prior to pool technology, when you deployed centralized storage you simply moved the storage silo from the host to within the array.\u00a0 You gained some efficiencies, but it wasn\u2019t complete.\u00a0 Pools are now common across the storage industry, and you can even create them within Windows Server 2012, where they are called \u201cStorage Spaces.\u201d\u00a0 This post is about how you allocate logical disks (LUNs) from pools so that you can maintain visibility into performance.\u00a0 The methods described can be used with any underlying pool technology.<\/span><\/p>\n

To give some context, here\u2019s how storage arrays were typically laid out 10 years ago. <\/span><\/p>\n

[Image: a typical storage array layout from 10 years ago \u2013 dedicated RAID groups per workload]<\/a><\/span><\/p>\n

A single array could host multiple workloads (in this case, Exchange, SQL, and Oracle), but usually it stopped there \u2013 spindles (disks) would be dedicated to a workload.\u00a0 There were all sorts of goodies like virtual LUN migration that allowed you to seamlessly move workloads between the silos (RAID groups) within the array, but those silos were still there.\u00a0 If you ran out of resources for Exchange, and had some spare resources assigned to SQL Server, then you\u2019d have to go through gyrations to move those resources.\u00a0 For contrast, this is how pool technology works:<\/span><\/p>\n

[Image: pool layout \u2013 all workloads sharing a single pool of physical disks]<\/a><\/span><\/p>\n

All the workloads share the same physical resources.\u00a0 When you run out of resources (either performance or capacity) you just add more. This approach is enabled by automatic tiering and extended caching techniques.\u00a0 So the popularity of pool technology is understandable. Increasingly I see VNX and VMAX customers happily running with just one or two pools per array.<\/span><\/p>\n

The question here is this: if you\u2019re not segregating the workload at the physical resource level, is there any need to segregate the workloads at the logical level?<\/span><\/b>\u00a0 For example, if tempdb and my user databases are in a single pool of disk on the array, should I bother having them on multiple LUNs (Logical Disks) on the host?<\/span><\/p>\n

If the database is performance sensitive, then the answer is \u201cyes.\u201d If you don\u2019t segregate them, you may have a difficult time troubleshooting problems down the road.\u00a0 Take the example of a query that results in an extraordinarily large number of IOs.\u00a0 If your tempdb is on the same LUN as your user databases, then you can\u2019t tell whether those IOs are destined for tempdb or for a user database.\u00a0 It also reduces your ability to deal with problems once you do find them.\u00a0 Pools may be the default storage option, but they\u2019re not perfect, and not all workloads are appropriate for pools.\u00a0 Segregating workloads onto separate LUNs allows you to move them between pools, or in and out of RAID groups, without interrupting the database.<\/span><\/p>\n
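To make the visibility argument concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers) of the kind of per-LUN IO attribution you get when each workload lives on its own drive letter. In SQL Server the per-file counters would come from sys.dm_io_virtual_file_stats; the file paths and counts below are invented for illustration. The point is that with tempdb on its own LUN, the drive-level totals tell you which workload is generating the IO \u2013 on a shared LUN they would collapse into one undifferentiated number.

```python
from collections import defaultdict

# Hypothetical snapshot of per-file IO counters:
# (database, file path, reads, writes)
file_stats = [
    ("tempdb",  "T:\\tempdb.mdf",  90_000, 60_000),
    ("SalesDB", "D:\\SalesDB.mdf", 12_000,  3_000),
    ("SalesDB", "L:\\SalesDB.ldf",    500,  8_000),
]

def io_by_drive(stats):
    """Aggregate read/write counts per drive letter (i.e. per LUN)."""
    totals = defaultdict(lambda: [0, 0])
    for _db, path, reads, writes in stats:
        drive = path.split(":")[0]   # drive letter = the LUN it maps to
        totals[drive][0] += reads
        totals[drive][1] += writes
    return dict(totals)

for drive, (reads, writes) in sorted(io_by_drive(file_stats).items()):
    print(f"{drive}: {reads} reads / {writes} writes")
```

With this layout the T: totals are unambiguously tempdb IO; if tempdb and SalesDB shared one LUN, the same aggregation could no longer separate them.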

So here\u2019s my default starting layout for any performance sensitive SQL Server environment:<\/span><\/p>\n