Storage Caching vs Tiering Part 1

Recently I had the privilege of being a Tech Field Day delegate. Tech Field Day is organized by Gestalt IT; if you want more detail on Tech Field Day, visit right here. In the interest of full disclosure, the vendors we visit sponsor the event, but the delegates are under no obligation to review the sponsoring companies favorably or unfavorably.

The first stop hosting the delegates was NetApp. I have worked with several different storage vendors, but I must admit I had never really experienced NetApp before, except for Storage vMotioning virtual machines off an old NetApp (I don't even know the model) onto a new SAN.

Over the four hours of slide shows I learned a ton. One great topic was Storage Caching vs. Tiering. Some of the delegates have already blogged about the sessions here and here.

So I am going to give my super quick summary of caching as I understood it from the NetApp session, followed by a post about tiering as I learned it in a subsequent session with Avere.

1. Caching is superior to Tiering because Tiering requires too much management.
2. Caching outperforms tiering.
3. Tiering drives cost up.

The NetApp method is to use very fast flash memory to speed up the performance of the SAN. Their software attempts to predict what data will be read and keeps that data available in the cache, which "front-ends" a giant pool of SATA drives. The cache cards provide the performance, and the SATA drives provide a single large pool to manage. With a simplified management model and just one type of big disk, the cost is driven down.
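
To make the caching model concrete, here is a tiny Python sketch of the general idea: a read cache sitting in front of a slow pool. This is purely my own illustration of the concept, not NetApp's actual algorithm, and the class name and capacity are made up.

```python
from collections import OrderedDict

class ReadCache:
    """Illustrative LRU read cache in front of a slow backing pool.
    A conceptual sketch only, not how NetApp's caching really works."""

    def __init__(self, backing_pool, capacity_blocks=1024):
        self.pool = backing_pool          # e.g. a dict of block -> data on "SATA"
        self.capacity = capacity_blocks   # how many blocks fit in "flash"
        self.cache = OrderedDict()        # block -> data, ordered by recency

    def read(self, block):
        if block in self.cache:           # cache hit: fast flash read
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.pool[block]           # cache miss: slow SATA read
        self.cache[block] = data          # keep it warm for the next read
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data
```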

My Takeaway on Tierless Caching

This is a solution that has a place and would work well in many situations, but it is not the only solution. All in all the presentation was very good, but the comparisons against tiering were really set up against a straw man. A multi-device tiered solution requiring manual management of all the different storage tiers is, of course, a really hard solution to live with; it could cost more to obtain and could be more expensive to manage. I asked about fully virtualized, automated tiering solutions, the kind that manage your "tiers" as one big pool. These solutions would seem to solve the problem of managing tiers of disks while keeping the cost down. The question was somewhat deflected because these solutions move data on a schedule. "How can I know when to move my data up to the top tier?" was the question posed by NetApp. Of course this is not exactly how a fully automated tiering SAN works, but it is a valid concern.
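
For contrast, here is an equally rough sketch of what a scheduled, automated tiering pass looks like: on some interval, the hottest blocks get promoted to the fast tier and cold blocks get demoted. Again, this is my own simplification, not any vendor's implementation.

```python
def tiering_pass(fast_tier, slow_tier, access_counts, fast_capacity):
    """Scheduled tiering sketch: promote the most-read blocks to the fast tier.
    Conceptual only; real arrays move data with far smarter heuristics."""
    # Rank every block by how often it was read since the last pass.
    hottest = sorted(access_counts, key=access_counts.get, reverse=True)
    wanted_in_fast = set(hottest[:fast_capacity])

    # Demote blocks that have gone cold.
    for block in list(fast_tier):
        if block not in wanted_in_fast:
            slow_tier[block] = fast_tier.pop(block)

    # Promote blocks that have become hot.
    for block in wanted_in_fast:
        if block in slow_tier:
            fast_tier[block] = slow_tier.pop(block)
```

The difference NetApp was driving at shows up here: a tiering pass only helps after the next scheduled run, while a cache reacts on the very next read.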

My Questions for the Smart Guys:

1. How are the choices made by NetApp's caching software better or worse than the tiering decisions made by software from companies that have been doing this for several years?
2. If tiering is so bad, why does Compellent's stock continue to rise in anticipation of an acquisition by someone big?
3. Would I really want to pay NetApp-sized money to send my backups to a NetApp pool of SATA disks? Would I be better off with a more affordable SATA solution for backup to disk, even if I have to spend slightly more time managing the device?

Equallogic, VAAI and the Fear of Queues

Previously I posted on how using bigger VMFS volumes helps EqualLogic reduce their scalability issues when it comes to total iSCSI connections. A comment asked whether this means we can have a new best practice for VMFS size. I quickly said, "Yeah, make 'em big or go home." I didn't really say that, but something like it. Then the commenter followed up with a long response from EqualLogic saying VAAI only fixes SCSI locking and that all the other issues with bigger datastores still remain, "all the other issues" being queue depth.

Here is my order of potential IO problems with VMware on EqualLogic:

  1. Being spindle bound. You have an awesome virtualized array that will send IO to every disk in the pool or group, so unlike some other arrays you can take advantage of a lot of spindles. Even then, depending on the types of disks, some IO workloads are going to use up all your potential IO.
    Solution(s): More spindles is always a good solution if you have an unlimited budget, but that is not always practical. Put some planning into your deployment. Don't just buy 17 TB of SATA; get some faster disk, break your group into pools, and separate the workloads onto disks better suited to their IO needs.
  2. Connection limits. The next problem you will run into, if you are not having IO problems, is the total iSCSI connection limit. In an attempt to get all of the IO you can from your array, you have multiple vmkernel ports using MPIO, which multiplies the connections very quickly. When you reach the limit, connections drop and bad things happen.
    Solution: The new 5.02 firmware increases the maximum number of connections. Additionally, bigger datastores mean fewer connections. Do the math.
  3. Queue depth. There are queues everywhere: the SAN ports have queues, each LUN has a queue, and the HBA has a queue. I will defer to this article by Frank Denneman (a much smarter guy than myself): a balanced storage design is the best course of action.
    Solution(s): Refer to problem 1. Properly designed storage is going to give you the best protection against any potential (even though unlikely) queue problems. In your great storage design, make room for monitoring. EqualLogic gives you SAN HQ, use it! See how your front-end queues are doing on all your ports, and use esxtop or resxtop to see how the queues look on the ESX host. Most of us will find that queues are not a problem once problem 1 is properly taken care of. If you still have a queuing problem, then go ahead and make a new datastore. I would also ask EqualLogic (and others) to release a Path Selection Plugin that uses a Least Queue Depth algorithm (or something smarter); that would help a lot. A rough sketch of that idea follows this list.
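
Since I know of no Least Queue Depth path selection plugin for this setup today, here is a rough Python sketch of the behavior I am asking for: send the next IO down the path with the fewest outstanding commands. The class and path names are hypothetical.

```python
class LeastQueueDepthSelector:
    """Conceptual Least Queue Depth path selection: pick the path with the
    fewest commands currently outstanding. Illustrative sketch only."""

    def __init__(self, paths):
        # paths: list of path names, e.g. ["vmhba33:C0:T0:L1", "vmhba33:C1:T0:L1"]
        self.outstanding = {path: 0 for path in paths}

    def select_path(self):
        # Round Robin ignores load; this picks the least busy path instead.
        return min(self.outstanding, key=self.outstanding.get)

    def io_issued(self, path):
        self.outstanding[path] += 1

    def io_completed(self, path):
        self.outstanding[path] -= 1
```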

So I will repeat my earlier statement that VAAI allows you to make bigger datastores and house more VMs per datastore. I will add a caveat: if you have a particular application with a high IO workload, give it its own datastore.

The Fun Stuff at VMworld 2010

Much of my planned activities for the blog didn't work out this year. There was not much in the sessions or keynotes worth a blog post yet; expect some View 4.5 and vCloud Director posts once I can get them in the lab. Probably the most useful parts of VMworld were the discussions at the Thirsty Bear, the Bloggers Lounge, the Chieftain, and over breakfast or dinner, among many other places. There was a great turnout for the In-N-Out trip, noting that it took around 30 minutes on public transportation to get there. This post shares some of the few experiences* I had and the couple of pictures I thought to take while in San Francisco. I met a lot more people than last year. I couldn't even begin to name them all, but it was a great time hanging out with all of you, enjoying a few drinks and talking virtualization, storage, and other topics.

DSC04648.png

This is the hall in our hotel. I kept seeing these twin girls at the end of the hall. It was scary.

In-n-out.png

Here is proof of my In-N-Out takedown: Double-Double and fries, well done. Several people showed up, and I hope everyone enjoyed it. I do not think any In-N-Out vs. Five Guys battles were decided, though.

DSC04653.png

I hung off of the Cable Car all the way back to Powell and Market. Jase McCarty @jasemccarty and Josh Leibster @vmsupergenius.

DSC04657.png

The view from the top of the hill and the front of the Cable Car. The picture does not do justice to how steep the hill is.

DSC04659.png

Random shot at the Veeam party.

DSCN0396.png

A couple of VMware R&D managers I met at the CTO gathering before the VMware party. Steve Herrod hosted a party that included a great mix of vExperts and some of the thought leaders at VMware. It was a great chance to meet some people; @kendrickcoleman beat me down in Wii bowling, though. I will be practicing until next year.

JO-vmCTO.png

Proof that I at least made it to the door of the CTO party. By Wednesday I had a pretty good collection of flair on my badge; TGI Fridays made me an offer, but I didn't want to move my family back to the West Coast.

RB-RV-JO-vmCTO.png

A less fuzzy picture with Rich Brambley @rbrambley and Rick Vanover @rickvanover. I am honored to just hold the sign for these guys.

GroupPhoto_DragonCon06_1.png

The Veeam party got a bit crazy when 17 Princess Leias showed up.

atlanta-dragon-con-parade.png

The EMC vSpecialists roll up on VMworld 2010. There were at least 4,000 more people at VMworld than last year, and 3,500 of them were from EMC. I actually found out they were real guys (and girls) and were really cool, and I had really good conversations about virtualization with many of them. If you haven't seen it yet, Nick Weaver @lynxbat and other vSpecialists put together a pretty good rap video. Check it out here.

*In the event I did not have actual pictures of the event, artistic liberties were taken.

Storage IO Control and An Idea

After being out of town for almost all of July, I am finally getting to make a run at vSphere 4.1. I am throwing different features at our lab environment and seeing what they do. I don't think I would be writing anything new in saying that vMotion and Storage vMotion are faster, and clones and deploying from a template are faster (VAAI). I decided to take a peek at the Resource Allocation for IOPS per VM. Nothing you do not already know: you can now assign shares and limits to disk IO, which is useful if you need certain machines to never take so much IO that they cause storage latency. This only kicks in when the latency threshold is exceeded.

My wacky ideas usually come from resource pools: shares and limits are cool, but I don't want them used all the time. So why don't I apply the limits or shares dynamically based on a certain time or expected workload? Let's say my third-party backup software runs at 8 PM, and that software is on a VM. At 7:59 I could lower the disk shares of all the VMs and raise the disk shares of my backup server. This prevents a rogue DBA from killing your backup window with a query or stored procedure that is heavy on disk. Even better, I could return the shares to each VM as the backup software finishes backing up all the VMs on that datastore. I wonder if this will actually shorten backup windows or just make the DBAs mad. Either way you win. 🙂
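
To show what I mean, here is a rough pyVmomi sketch of the idea: drop everyone's disk shares at 7:59 and bump the backup server. The vCenter name, credentials, VM name, and share values are all made up, and the scripts linked below are what I would actually start from.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_disk_shares(vm, level, custom_shares=0):
    """Reconfigure every virtual disk on a VM with the given shares level."""
    changes = []
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        spec = vim.vm.device.VirtualDeviceSpec()
        spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
        spec.device = dev
        alloc = vim.StorageResourceManager.IOAllocationInfo()
        alloc.shares = vim.SharesInfo(level=level, shares=custom_shares)
        spec.device.storageIOAllocation = alloc
        changes.append(spec)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes))

# Hypothetical connection details; newer pyVmomi versions may also need an SSL context.
si = SmartConnect(host="vcenter.example.local", user="admin", pwd="secret")
vms = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True).view

for vm in vms:                                    # 7:59 PM: everyone goes to low shares
    set_disk_shares(vm, "low")
backup_vm = next(v for v in vms if v.name == "backup01")  # hypothetical VM name
set_disk_shares(backup_vm, "custom", custom_shares=4000)  # backup server gets priority
Disconnect(si)
```

Run the reverse as the backup job finishes and you have a poor man's version of the schedule I described.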

While clearing up my understanding of SIOC, William Lam pointed me to these two scripts (very useful):
http://www.virtuallyghetto.com/2010/07/script-configure-vm-disk-shares.html
http://www.virtuallyghetto.com/2010/07/script-automate-storage-io-control-in.html

media_1281030767190.png

Adaptive Queuing in ESX

While troubleshooting another issue a week or two ago, I came across this VMware knowledge base article. Having spent most of my time with other brands of arrays in the past, I thought this was a pretty cool solution versus just increasing the queue length of the HBA. I would recommend setting this on your 3PAR BEFORE you get QFULL problems. Additionally, NetApp has an implementation of this as well.

Be especially sure to read the note at the bottom:

If hosts running operating systems other than ESX are connected to array ports that are being accessed by ESX hosts, while the latter are configured to use the adaptive algorithm, make sure those operating systems use an adaptive queue depth algorithm as well or isolate them on different ports on the storage array.

I do need to dig deeper into how this affects performance as the queue begins to fill; I am not sure if one method is better than the other. Is this the new direction that many storage vendors will follow?

Until then, the best advice is to do what your storage vendor recommends, especially if they say it is critical.

Here is a quick run through for you.

In the vSphere Client

wpid348-media_1272214293023.png

Select the ESX host, go to the Configuration tab, and click Advanced Settings under Software.

In the Advanced Settings

wpid349-media_1272214590686.png

Select the Disk option and scroll down to QFullSampleSize and QFullThreshold.
Change the values to the 3PAR-recommended values:
QFullSampleSize = 32
QFullThreshold = 4
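
To dig a little into what those two values actually control, here is my reading of the adaptive algorithm as a short Python sketch. This is my interpretation of the KB article, not VMware's code: repeated QFULL/BUSY statuses cut the LUN queue depth, and clean completions slowly build it back up.

```python
class AdaptiveLunQueue:
    """Conceptual sketch of adaptive queue depth throttling (my reading of the
    KB article, not VMware's implementation)."""

    def __init__(self, max_depth=32, qfull_sample_size=32, qfull_threshold=4):
        self.max_depth = max_depth        # normal LUN queue depth
        self.depth = max_depth            # current (possibly throttled) depth
        self.sample_size = qfull_sample_size
        self.threshold = qfull_threshold
        self.busy_count = 0               # QFULL/BUSY statuses seen
        self.good_count = 0               # successful completions seen

    def on_completion(self, status):
        if status in ("QFULL", "BUSY"):
            self.busy_count += 1
            self.good_count = 0
            if self.busy_count >= self.sample_size:
                # The array is overwhelmed: back off by cutting the queue depth.
                self.depth = max(1, self.depth // 2)
                self.busy_count = 0
        elif self.depth < self.max_depth:
            self.good_count += 1
            if self.good_count >= self.threshold:
                # Things look healthy again: creep the queue depth back up.
                self.depth += 1
                self.good_count = 0
```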

Random Half Thoughts While Driving

So I often have epiphany teasers while driving long distances or stuck in traffic. I call them teasers because they are never fully developed ideas and often disappear into thoughts about passing cars, or yelling at the person on their cell phone going 15 MPH taking up 2 lanes.

Here are some I was able to save today (VMware related):

1. What if I DID want an HA cluster split between two different locations? Why?
2. Why must we oversubscribe iSCSI vmkernel ports to make the best use of 1 GbE physical NICs? Is it just the software iSCSI in vSphere? Is it just something that happens with IP storage? I should test that sometime…
3. If I had 10 GbE NICs, I wouldn't use them for the Service Console or vMotion; that would be a waste. No wait, vMotion ports could use them to speed up your vMotions.
4. Why do people use VLAN 1 for their production servers? Didn't their momma teach 'em?
5. People shouldn't fear using extents; they are not that bad. No, maybe they are. Nah, I bet they are fine; how often does just one LUN go down? What are the chances of it being the first LUN in your extent? OK, maybe it happens a bunch. I am too scared to try it today.

VMware View and Xsigo

*Disclaimer – I work for a Xsigo and VMware partner.

I was in the VMware View Design and Best Practices class a couple of weeks ago. Much of the class is built on the VMware View Reference Architecture; the picture below is from that PDF.

It really struck me how many IO connections (network or storage) it would take to run this POD. The minimum (in my opinion) would be 6 cables per host; with ten 8-host clusters, that is 480 cables! Let's say 160 of those are 4 Gb Fibre Channel and the other 320 are 1 Gb Ethernet. That is 640 Gb of bandwidth for storage and 320 Gb for the network.

Xsigo currently uses 20 Gb InfiniBand, and the best practice would be to use 2 cards per server. The same 80 servers in the design above would have 3,200 Gb of bandwidth available. Add in the flexibility and ease of management you get using virtual IO, plus the cost savings from the director-class Fibre Channel switches and datacenter switches you no longer need, and I would think the ROI pays for the Xsigo Directors. I don't deal with pricing, so this is pure contemplation, and I will stick with the technical benefits: being in the datacenter, I like any solution that makes provisioning servers easier, takes less cabling, and gives me unbelievable bandwidth.
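
Here is the back-of-the-napkin math from the last two paragraphs as a few lines of Python, so you can swap in your own cable counts (the 2 FC plus 4 Ethernet split per host is the assumption I made above):

```python
hosts = 10 * 8                               # ten 8-host clusters from the reference POD
fc_links, eth_links = 2 * hosts, 4 * hosts   # my assumed split of the 6 cables per host

print("Total cables:", fc_links + eth_links)      # 480
print("FC bandwidth (Gb):", fc_links * 4)         # 160 x 4 Gb = 640
print("Ethernet bandwidth (Gb):", eth_links * 1)  # 320 x 1 Gb = 320

# The same hosts with two 20 Gb InfiniBand cards each through Xsigo virtual IO.
print("Xsigo cables:", hosts * 2)                 # 160
print("Xsigo bandwidth (Gb):", hosts * 2 * 20)    # 3,200
```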

So, just as VMware changed the way we think about the datacenter, virtual IO will once again change how we deal with our deployments.

iSCSI Connections on EqualLogic PS Series

Equallogic PS Series Design Considerations

VMware vSphere introduces support for multipathing for iSCSI, and EqualLogic released a recommended configuration for using MPIO with iSCSI. I have a few observations after working with MPIO and iSCSI. The main lesson is to know the capabilities of the storage before you go trying to see how many paths you can have with active IO.

  1. EqualLogic defines a host connection as 1 iSCSI path to a volume. At VMware Partner Exchange 2010 I was told by a Dell guy, “Yeah, gotta read those release notes!”
  2. EqualLogic limits connections to 128 per pool / 256 per group on the 4000 series (see the table below for the full breakdown) and to 512 per pool / 2048 per group on the 6000 series arrays.
  3. The EqualLogic MPIO recommendation mentioned above can consume many connections with just a few vSphere hosts.

I was under the false impression that by "hosts" we were talking about physical connections to the array, especially since the datasheet says "Hosts Accessing PS Series Group." It actually means iSCSI connections to a volume. Therefore, if you have 1 host with 128 volumes each connected via a single iSCSI path, you are already at your limit (on the PS4000).

An example of how fast vSphere iSCSI MPIO (Round Robin) can consume available connections can be seen in this scenario: five vSphere hosts with 2 network cards each on the iSCSI network. If we follow the whitepaper above, we will create 4 vmkernel ports per host, and each vmkernel port creates an additional connection per volume. Therefore, if we have ten 300 GB volumes for datastores, we already have 200 iSCSI connections to our EqualLogic array. That is really no problem for the 6000 series, but the 4000 will start to drop connections, and I have not even added the connections created by the vStorage API/VCB-capable backup server. So here is a formula*:

N – number of hosts

V – number of vmkernel ports

T – number of targeted volumes

B – number of connections from the backup server

C – number of connections

(N * V * T) + B = C
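
As a quick sanity check, here is the formula in a couple of lines of Python using the scenario above; the backup-server count is whatever B is in your environment (the 10 in the second call is just a made-up number).

```python
def eql_iscsi_connections(hosts, vmk_ports, volumes, backup_connections=0):
    """C = (N * V * T) + B, per the formula above."""
    return hosts * vmk_ports * volumes + backup_connections

# The scenario from the post: 5 hosts, 4 vmkernel ports each, 10 datastore volumes.
print(eql_iscsi_connections(5, 4, 10))        # 200, already past a PS4000 pool limit of 128
print(eql_iscsi_connections(5, 4, 10, 10))    # with 10 hypothetical backup connections: 210
```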

Equallogic PS Series Array Connections (pool/group)
4000E 128/256
4000X 128/256
4000XV 128/256
6000E 512/2048
6000S 512/2048
6000X 512/2048
6000XV 512/2048
6010,6500,6510 Series 512/2048

Use multiple pools within the group in order to avoid dropped iSCSI connections and provide scalability, keeping in mind that this reduces the number of spindles you are hitting with your IO. Taking care to know the capacity of the array will help avoid big problems down the road.

*I have seen the actual connection count come in higher, and I can only figure this is because of the way EqualLogic does iSCSI redirection.

New VMware KB – zeroedthick or eagerzeroedthick

Due to the performance hit during zeroing mentioned in the Thin Provisioning Performance white paper, this article in the VMware knowledge base could be of some good use.

I would suggest using eagerzeroedthick for any high-IO, tier 1 type of virtual machine. This can be done when creating the VMDK from the GUI by selecting the "Support Clustering Features such as Fault Tolerance" check box.

So go out and check your VMDKs.

Thin Disk on vSphere My First Glance

So today I got around to putting ESXi 4 on my spare box at home. I deployed a new virtual server and decided to use the thin provisioning built into the new version. After getting everything all set up, I was surprised to still see this.

I was like, DANG! That is some awesome thin provisioning. Really, I was thinking something had to be wrong: a 42 GB drive with Windows 2008 only using 2.28 KB, that is sweet! Since I had not seen this screen in the VM's information before, I thought for sure it had already refreshed. It was too good to be true, though; I clicked Refresh Storage and it ended up like this, which made a lot more sense for a fresh and patched Windows install. So far this leads to my first question: why the manual refresh? Should this refresh automatically when the screen redraws?