Friday, November 15, 2013

Windows 2012R2 Storage Pools

Windows 2012R2 Clustered Storage Pool

Prep

As many may be aware, Windows 2012 R2 has been released, but I had not seen any posts or blogs about clustered storage pools. Growing curious about the hows and whys of this feature, I decided to grab some hardware and try to build one.
Equipment used:
(2) R620 servers, each with:
Dual Xeon processors
64 GB of RAM
(2) 10 Gb Intel NICs
(2) 1 Gb Intel NICs
(2) 73 GB 15k RPM drives for the OS in RAID1
(4) 300 GB 10k RPM drives for a RAID5 data drive

(2) LSI SAS 9207-8e HBAs

(1) MD1200 with:
(10) 1 TB 7.2k RPM drives
(2) 400 GB SSDs

Did I mention that it is great working for a company that allows me to grab some gear and test when I get these crazy ideas? It is, I am thankful, and I get to blog about it.

Install

The hardware configuration for this is different from anything I have normally supported or endorsed, so this was a good change and challenge for me. I started by installing an LSI card in each server and direct-connecting it to the Enclosure Management Modules (EMMs) of the MD1200 using SAS cables.


Next was the installation of Windows 2012 R2, which was a breeze and nothing out of the norm. It was easy to create the cluster without a witness disk, as I did not have any shared storage yet. This is where the ease of using storage pools came in. I also installed Hyper-V for my final testing.
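For anyone wanting to script the same steps instead of clicking through the wizards, a rough PowerShell sketch follows. The node and cluster names are placeholders of mine, not from this build:

```powershell
# Install the required roles/features on each node (Hyper-V needs a reboot).
Install-WindowsFeature -Name Failover-Clustering, Hyper-V -IncludeManagementTools

# Validate the configuration, then create the cluster with no storage yet --
# the witness disk and CSVs come later, once the pool exists.
Test-Cluster -Node "Node1", "Node2"
New-Cluster -Name "DemoCluster" -Node "Node1", "Node2" -NoStorage
```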

Setup of the pool for this test was done using Server Manager. I left the drives completely RAW. The pool design included all ten 1 TB drives and the two SSDs. I dedicated one drive as a hot spare, as I am a bit paranoid and hate losing data. My final pool design was this:
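The same pool can be built from PowerShell. This is a sketch under my own naming assumptions (pool name, subsystem filter, and the disk picked as the spare are all hypothetical):

```powershell
# Find the clustered Storage Spaces subsystem on this cluster.
$ss = Get-StorageSubSystem -FriendlyName "Clustered*"

# Grab every eligible (still RAW, un-pooled) disk the MD1200 exposes.
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool from those disks.
New-StoragePool -FriendlyName "MD1200Pool" `
    -StorageSubSystemFriendlyName $ss.FriendlyName `
    -PhysicalDisks $disks

# Dedicate one of the 1 TB spindles as a hot spare.
Get-PhysicalDisk -FriendlyName "PhysicalDisk10" |
    Set-PhysicalDisk -Usage HotSpare
```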


Next was to create my Virtual Disks for the Witness disk and two Cluster Shared Volumes (CSVs). What was interesting was the new option for tiered storage.


Followed by


I was then able to allocate a portion of this new Virtual Disk to both the SSD tier and the standard tier.
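Scripted, the tier split looks roughly like this. The tier names and the resiliency setting are my assumptions (the wizard screenshots above don't show which resiliency I picked), and the sizes follow the 250 GB SSD / 2 TB total split used in this test:

```powershell
# Define one tier per media type in the pool.
$ssd = New-StorageTier -StoragePoolFriendlyName "MD1200Pool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "MD1200Pool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Create the tiered virtual disk: 250 GB of SSD plus HDD for the remainder
# of the 2 TB. Tiered spaces in 2012 R2 require fixed provisioning.
New-VirtualDisk -StoragePoolFriendlyName "MD1200Pool" `
    -FriendlyName "CSV1" -ResiliencySettingName Mirror `
    -StorageTiers $ssd, $hdd -StorageTierSizes 250GB, 1798GB
```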


Both CSVs were 2 TB in size; however, the SSD-tiered one used 250 GB of SSD drive space for this test, so about 12.5% of that CSV was SSD. Next was adding the Witness disk to the cluster via the configuration wizard. Interestingly, I created a 1 GB witness disk and ended up with 8 GB allocated.
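Bringing the new disks into the cluster can also be scripted. A sketch, assuming the cluster assigns the default "Cluster Disk N" resource names (which disk gets which number will vary):

```powershell
# Add every available disk from the pool to the cluster.
Get-ClusterAvailableDisk | Add-ClusterDisk

# Promote the two data disks to Cluster Shared Volumes.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Point the quorum at the small witness disk.
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 3"
```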

Now my cluster is officially “done,” with two CSVs and a Witness disk all stored on the MD1200 directly attached to both servers. All in all, it was nice for things to go so easily. Before I leave this step, I want to point out that I also added Cluster-Aware Updating and updated the LSI firmware (that will be another blog post coming later).

Testing

For testing I wanted to get as creative as I could, but keep it simple too. I created virtual machines, each with Windows 2012 on a VHDX plus a second 40 GB VHDX left RAW to run IOMeter testing against. So I ended up with two VMs: one on the SSD-tiered storage and the second on the non-tiered storage. All other settings were standard, with 1 CPU and 1 GB of memory allocated. My goal was to make them as similar as possible.
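The test VM build can be sketched in PowerShell as well. Paths, names, and the 60 GB OS disk size are placeholders of mine; only the 40 GB raw data disk matches the test above:

```powershell
# Create the VM on a CSV with a new OS VHDX and 1 GB of memory.
New-VM -Name "TestVM1" -MemoryStartupBytes 1GB `
    -NewVHDPath "C:\ClusterStorage\Volume1\TestVM1\os.vhdx" `
    -NewVHDSizeBytes 60GB

# Add the second 40 GB VHDX that stays RAW for the IOMeter runs.
New-VHD -Path "C:\ClusterStorage\Volume1\TestVM1\data.vhdx" -SizeBytes 40GB
Add-VMHardDiskDrive -VMName "TestVM1" `
    -Path "C:\ClusterStorage\Volume1\TestVM1\data.vhdx"
```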
IOMeter was set up as follows:
Worker 1 with 16 outstanding I/Os per target, running the 16K, 50% Read, 0% Random test.
Worker 2 with 16 outstanding I/Os per target, running the All-in-one test.
I then ran both for 5 hours.

VM1 (without SSD Tiered option)

VM2 (with SSD Tiered)

Summary

This setup and quick dive into Storage Spaces was a bit of a change and, honestly, a bit different from what I expected to see. Hopefully your testing will prove the same in values or better than mine. I have yet to apply hotfixes or any updates to the hosts; these are literally "out-of-the-box" systems, so the values will more than likely change.

I typically work with Storage Area Networks where IOPS on a Virtual Disk max out at or around 5k, so seeing 15k with some bursts above that was nice on off-the-shelf 1 TB Nearline SAS drives. Of course, I am nowhere near loading this server or running nearly enough VMs to really work it, so I will continue to test.

In contrast, the SSD-tiered storage delivered nearly twice the IOPS. This is definitely a change and something to monitor and watch for. In the coming weeks I plan to do a few other tests, such as drive failure and rebuild time; that is the most crucial for my line of work at this time. Next would be a test running 10 VMs all doing similar or identical workloads and comparing the numbers to see the change, then a test with 20 and maybe more. A test of the VM replication features to a third host will be coming as well.

2 comments:

  1. Hi. I'm new to this tech.
    Dell support says: if 2 servers are connected to one MD1220 with 2 EMMs through SAS cards, each of these servers can only see part of the HDDs. No HDD can be seen by 2 servers at the same time.
    Is that correct?
    How is it possible to build a clustered storage pool if that's true?

    Thanks in advance.

    1. Are you looking at the documentation for Storage Spaces by Dell, or at the MD12xx user's guide?

      The difference here is that the MD12xx user's guide is built on the older standard in which a hardware RAID controller connects to the JBOD. That card then "owns" the physical drives.

      In Spaces we use a SAS HBA, or a pass-through-enabled HBA, that allows the software (and thus the cluster) to "own" them.

      The official published paper is located here:
      http://www.dell.com/learn/us/en/04/shared-content~data-sheets~en/documents~deploying_storage_spaces_on_powervault_md12xx-v1.pdf

      Enjoy!
