VMworld 2018 in Las Vegas

I was going to write my own post, but Cody Hosterman already did a great one.

Cody’s VMworld 2018 and Pure Storage Blog

The sessions are filling up, so it is a good idea to register and get there early. I am very excited to be talking about Kubernetes on vSphere. The session will follow my journey of learning containers and Kubernetes over the last two years or so. I hope everyone learns something.

Here I am last year, talking about containers in front of a container. Boom!

Come see @CodyHosterman at VMworld, and if he is too busy you can see me.

Look for a post about going to In-N-Out sometime soon; it is my tradition.

Be sure to check out what we will be doing at VMworld at the end of the month. Click the banner below once you are done being mesmerized by Chappy. Sign up for a 1:1 demo or meeting; I'll be there and would love to meet with you. See how focused a demo I give.

 

[Banner: VMworld meeting sign-up, featuring Chappy]

Sessions to be sure to see featuring the Amazing Cody Hosterman

SDDC9456-SPO: Implementing Self-Service Storage Provisioning with vRealize Automation XaaS

VMware vCenter is no longer meant to be the end-user interface for requesting and managing virtual machines and related resources, and storage is no exception. Join Cody Hosterman as he discusses how vRealize Automation Anything-as-a-Service (XaaS) makes it easy to import vRealize Orchestrator workflows to control, manage, and provision storage via the vRealize Automation self-service catalog.

Wednesday, Aug 31, 2:00 PM – 3:00 PM

NF9455-SPO: Best Practices for All-Flash Data Reduction Arrays with VMware vSphere

As all-flash data reduction arrays become commonplace in VMware environments due to their performance, flexibility, and ease of use, it is important to understand how best to implement and manage them with ESXi. Data reduction and flash change how an administrator should think about various configuration options within VMware, and those will be discussed in detail. VAAI, space reclamation, virtual disks, SIOC, SDRS, queue depths, multipathing, and other points will be highlighted.

Monday, Aug 29, 2:30 PM – 3:30 PM

UNMAP – Do IT!

Pretty sure my friend Cody Hosterman has talked about this until he turned blue in the face. Just a point I want to quickly reiterate here for the record: run UNMAP on your vSphere datastores.

Read this if you are running Pure Storage, but even if you run other arrays (especially all-flash), find a way to run UNMAP on a regular basis:

http://www.codyhosterman.com/2016/01/flasharray-unmap-script-with-the-pure-storage-powershell-sdk-and-poweractions/

Additionally, start to learn the ins and outs of vSphere 6 and automatic UNMAP!

http://blog.purestorage.com/direct-guest-os-unmap-in-vsphere-6-0-2/
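
If you just want a quick manual pass rather than Cody's full reporting script, here is a minimal PowerCLI sketch of my own (not the script linked above) that kicks off UNMAP against every VMFS datastore. It assumes you are already connected with Connect-VIServer, and the reclaim unit of 200 blocks is just the esxcli default.

# Run UNMAP against every VMFS datastore visible in vCenter
foreach ($ds in Get-Datastore | Where-Object {$_.Type -eq "VMFS"}) {
    # Any host that sees the datastore can issue the esxcli command
    $vmhost = Get-VMHost -Datastore $ds | Select-Object -First 1
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    # reclaimunit = VMFS blocks reclaimed per iteration; raise with care on busy arrays
    $esxcli.storage.vmfs.unmap.Invoke(@{volumelabel = $ds.Name; reclaimunit = 200})
}

Run it during a quiet window; UNMAP is not free on the host side.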

Speaking of In-N-Out… I want a Double-Double before I start Whole30.


Register: VMUG Webinar and Pure Storage September 22

Register here: http://tinyurl.com/pq5fd9k

On September 22 at 1:00 PM Eastern, Pure Storage and VMware will highlight the results of an ESG Lab Validation paper. The study on consolidating workloads with VMware and Pure Storage used a single FlashArray//m50 to deploy five virtualized mission-critical workloads: VMware Horizon View, Microsoft Exchange Server, Microsoft SQL Server (OLTP), Microsoft SQL Server (data warehouse), and Oracle (OLTP). While I won't steal all the thunder, it is worth noting that all of this ran with zero tuning on the applications. Want out of the business of tweaking and tuning everything just to squeeze a little more performance out of your application? Problem solved. Plus, check out the FlashArray's consistent performance, even during failures.

Tier 1 workloads in 3U of Awesomeness

[Screenshot: ESG Lab Validation results]

You can see in the screenshot the results of running tier-one applications on an array made to withstand the real-world ups and downs of the datacenter. Things happen to hardware, and even software, but it is good to see the applications still doing great. We always tell customers: it is not about how fast the array is in a pristine benchmark, but how it responds when things are not going well, when a controller loses power or a drive (or two) fails. That is what sets Pure Storage apart (that, plus data reduction and real Evergreen Storage).

Small note: this is another proven environment with block sizes near 32K. This one hovered between 20K and 32K, so don't fall for 4K or 8K nonsense benchmarks; this is just what we see when the blocks hit the array from VMware.

Register for the Webinar
http://tinyurl.com/pq5fd9k
You can win a GoPro too.

Finally Getting my vSphere 6 Lab running on Ravello

Using the AutoLab 2.6 Config

Head on over to LabGuides.com and check out AutoLab. I wanted a quick start, but didn't want all the fun automated out of my hands, so I will give a quick tour of how I got my basic lab up and going. In part 2 I will add a VSAN cluster so I can catch up there too.

The auto-builds of Windows worked great. The domain controller set itself up with DHCP, Active Directory, and the fun bits to get PXE working for the ESXi install. This is stuff I didn't want to waste time on.

I had to re-kick off the vCenter build to get PowerShell and PowerCLI up and going, and I had to manually install vCenter 6 because the vCenter Appliance doesn't play nice with AutoLab. That was OK with me, because I actually wanted to run through the install to check the options and see if things like the SSO setup got any better.

Letting the hosts PXE boot for vSphere 6 install.

ESXi Install finished

Installing vCenter 6.0

Deploying vCenter was actually pretty smooth. It is a small lab, so I am using the Embedded deployment option.

Adding Hosts

Troubleshooting HA Issues

Just like old times, the HA agents didn't install correctly the first time. The more things change…
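
If you hit the same thing, the old fix still works: tell vCenter to reconfigure HA on the affected host. A minimal PowerCLI sketch, with a placeholder host name:

# Re-run the HA (FDM) agent configuration on a host where it failed
# "esxi01" is a placeholder; use your host's name
$vmhost = Get-VMHost -Name "esxi01*"
# Same as right-clicking the host and choosing "Reconfigure for vSphere HA"
$vmhost.ExtensionData.ReconfigureHostForDAS()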

 

Great stuff from Ravello

I am very thankful for the vExpert lab space Ravello provided. If you are considering a home lab but don't want to buy servers, switches, and even storage, this can be a good way to play with vSphere. I also spun up Photon and OpenStack, although I still want to walk through the OpenStack install from start to finish.

One of my hosts did this on boot, but a quick restart and it was fine. The next step is to add some VSAN hosts, which I will show next time.

(Hey, it's emulated Intel VT on top of AWS, so it's not PROD.)

Links:

http://www.ravellosystems.com

http://www.labguides.com

Use the Ravello Repo to get the AutoLab config, OpenStack (which I am also playing with), and some other blueprints for labs.

https://www.ravellosystems.com/repo/

I also got some help from this post by William Lam:

http://www.virtuallyghetto.com/2015/04/running-nested-esxi-vsan-home-lab-on-ravello.html

 

 

Not the Same Ol’ Sessions from Pure Storage at VMworld

I am really excited to be going to VMworld once again. I will be wearing my orange Nikes, so most likely my feet won't hurt quite as bad. Also expect the Pure Orange Superman to make an appearance.
More about the sessions: I will be attending VMworld San Francisco and speaking at VMworld EMEA.

STO2996-SPO – The vExpert Storage Game Show

The session I am stoked to be a part of is STO2996-SPO, the vExpert Storage Game Show. It will be a fun and informative take on next-generation storage architectures, presented in the form of a game show. PLUS, two members of the audience will join the session to help the vExpert teams. I know everyone will want to be on my team in EMEA.

STO3000-SPO – Flash Storage Best Practices and Technology Preview 

This very exciting session with Vaughn and Cody (super-genius vExperts) will go into what to consider when moving your datacenter to all-flash, plus previews of Pure VVols. If you think you are not ready for all-flash, come to this session and learn how Flashy you can be.

STO2999-SPO – Customers Unplugged: Real-World Results with VMware on Flash

I wish I had thought of this. Customers using All Flash with VMware. All Tech, No Slides.

STO1965 – Virtual Volumes Technical Deep Dive

Dive into Virtual Volumes with Rawlinson Rivera (VMware), Suzy Visvanathan (VMware), and Vaughn Stewart (Pure Storage). Over the last three years, so many customers have asked me what VVols will actually do. This will be a great chance to find out.

VAPP2132 – Virtualizing Mission Critical Applications on All Flash Storage 

How does Pure Storage enable that final 10% of critical applications that just a few years ago people said would be impossible to virtualize? Meet my friend Avi Nayek from Pure and Mohan Potheri from VMware, and learn how flash eliminates storage as the roadblock to critical applications going virtual.

MGT1265 – Improving Cloud Operations Visibility with Log Management and vCenter Log Insight

Cody Hosterman. Did I tell you he is smart? Yeah, he is. Join Cody, along with Dominic Rivera from US Bank and Bill Roth from VMware, to learn how to increase your cloud operations visibility.

SDDC2754-SPO – New Kids on the Storage Block, File and Share: Lessons in Storage and Virtualization

Lessons from all the upstarts in the storage industry, most of whom are not "startups" anymore, on finding new ways to solve the issues of using virtualization with legacy storage. Featuring Pure Storage, Nimble Storage, Tintri, Tegile, Coho Data, and DataGravity, moderated by Howard Marks from DeepStorage.net.

STO2496-SPO – vSphere Storage Best Practices: Next-Gen Storage Technologies

The Chad and Vaughn show, now with Rawlinson Rivera! Storage is changing. Did I say that yet?

More information on Pure Storage Sessions

Coming Soon: Support for VMware VVOLs
Pure Storage set to paint VMworld 2014 orange!

VAAI and XCOPY with Pure Storage

VAAI has been around for a while now (almost four years), and it is one thing I don't often hear customers or others talking about. When a vSphere host detects that hardware acceleration is supported, it will attempt to send VAAI commands to the storage device. Full Copy is usually explained like this: if you need to clone or Storage vMotion a VM, the ESXi host issues a command telling the storage device to move the blocks. So in the past the description was very simple: the host issues the command, and the blocks move. Set it and forget it, right?

Not so fast, my friend!


As good ol’ Lee Corso would say, “Not so fast, my Friend!”

The VAAI XCOPY command tells the storage device to move 4096 KB (that is, 4 MB) at a time, so every 4 MB is a new command. This was not a big deal for disk-based XCOPY, because the blocks could only move from spindle to spindle so fast; it was still way more efficient than before, but sometimes not actually faster at all.

Then along came the FlashArray.

The FlashArray, XCOPY and VAAI


The Pure Storage snapshot technology is used for XCOPY commands, no matter where they come from. This means moving the data is just a metadata pointer change; the blocks don't actually go anywhere, since they are stored once and mapped in metadata. This enables zero-impact snaps and clones that can be created as fast as I can click the button in the GUI.

What does this all mean?

Since the ESXi host is telling the FlashArray to move only 4 MB at a time, the copy function does not reach the full potential of what the FlashArray can really do. It is like using a freight train to move cargo across the country but putting only one box in each car.

Pure Storage recommendation


This is why Pure recommends changing DataMover.MaxHWTransferSize (the setting that controls the size of each transfer command) to the maximum allowed value of 16384 (16 MB). The default is 4096.

Commands to help you check and change the setting via the CLI. First, get the current value:

esxcfg-advcfg -g /DataMover/MaxHWTransferSize
Value of MaxHWTransferSize is 4096

Set the transfer size to the Pure Storage best practice:

esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize
Value of MaxHWTransferSize is 16384
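
If you have more than a couple of hosts, a minimal PowerCLI sketch (my own, not an official Pure tool) can apply the same setting everywhere; it assumes an existing Connect-VIServer session:

# Set DataMover.MaxHWTransferSize to 16384 (16 MB) on every host in vCenter
foreach ($vmhost in Get-VMHost) {
    Get-AdvancedSetting -Entity $vmhost -Name "DataMover.MaxHWTransferSize" |
        Set-AdvancedSetting -Value 16384 -Confirm:$false
}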

…but wait, there is more!

So the Pure Storage FlashArray is happy to clone multi-TB volumes using XCOPY with no impact on performance or space usage. The question, then, is why only 16 MB at a time? (The real answer should come from someone way smarter than me at VMware.)

I am curious to try a Storage vMotion, or cloning persistent View desktops, in a way that fully uses the power of the array. Until then, it is still better than spinning disk or no VAAI at all.

What happened while getting 100% Virtualized

I often think about how many people have stalled on the way to getting 100% virtual. I know, you are thinking I need to find some more fun things to think about. You are probably right.

The first thing I thought when I deployed my very first virtual infrastructure project back in the day was, "Man, I want to see if I can virtualize EVERYTHING." This was before I knew much about storage, cloud, and management. I may be naive, but I think there is real potential out there to achieve this goal; there is still low-hanging fruit, depending on how you deploy your infrastructure. Having attended VMware Partner Exchange (PEX), I know how the ecosystem is built around your journey to virtualization. The biggest slide for resellers and other partners is the one VMware shows off that says, "For every $1 a customer spends on VMware, they buy $9-11 in infrastructure." Which, I fully believe, is the reason many customers never saw the FULL cost savings they could have had when going virtual.

Roadblocks


I believe we all ran into a couple of different kinds of roadblocks on our path. First were the organizational ones: line-of-business owners, groups within IT, and other political entities made traveling the road very difficult. Certain groups didn't want to share. Others started to think VMs were free and went crazy with requests. Finally, the very important people who owned the very important application didn't want to be virtual, because somehow virtualization was a downgrade from dedicated hardware.

Then, if we managed to dodge the organizational roadside problems, there were the technical ones. Remember that $11 of drag? The big vendors made an art of refreshing and updating you with new technology. I know; I helped do it. Performance was a problem? Probably buy more disks or servers. Then every three to five years they were back with something new to fix what the previous generation did not deliver. This "spinning drag," in the case of storage, slowed you on the way to your goal: 100%.

Disillusionment


At some point you lose the drive to be 100% virtual. The ideal has been beaten out of you. Well, at least my vendor takes me out for steak dinners, and I get to go to VMworld and pretend I am a big shot every year. This is where you settle. You resign yourself to the fact that everything is so complicated and hard that it will never get done. The big vendors make a huge living on keeping you there, changing the name from VI to Private Cloud to Hybrid Super Happy Land, or whatever the marketing guys who have never opened the vCenter client think of next.

Distractions


So you are trying to rebuild Amazon in your data center? There are probably lots of other things to fix first. Using more complicated abstraction layers may help in the long run of building a cloud, but I see customers continue to refresh wasteful infrastructure with new infrastructure while they are still trying to figure this out. What we need is a quick and easy win: make things better and save money right away. Then maybe we can keep working on building the utopian cloud.

The low hanging fruit


When we first started to virtualize, we looked for the easy wins, and to get rolling down the path again we need to identify the lowest-hanging fruit in the data center. Back then, we found all the web servers running at 1% CPU and 300 MB of RAM (if that) and virtualized them so quickly the app owners didn't even know it happened. Just like a room of 1,000 servers all running at 2% CPU, there are giant tracts of heat-generating, spinning waste covering the data center. You had to buy so many spindles and stripe so wide just to make performance serviceable. You wasted weeks of your life in training classes learning how to tweak and tune those boat anchors, because it was always YOUR fault they didn't do what the vendor said they would.

Take that legacy disk technology and consolidate it onto a system built to make sure storage is not the roadblock on the way to being 100% virtual. I remember taking pictures of stacks of servers getting picked up by the recycling people; now is the time to send off the refrigerator-sized boxes of spinning dead weight. I am not in marketing and I don't want to sound like a sales pitch, but I am seeing customers realize their goal of virtualization with simple and affordable flash storage. No more data migrations or end-of-life forklift upgrades. No more having to decide whether the maintenance bill is so high you should just buy a new box. Just storage that performs well all the time and is fine running virtual Oracle and VDI on the same box.

How we do it


How is Pure Storage able to replace disk with flash (SSD)? Mainly, we created a system from the ground up just for flash, and a company that believes the old way of doing business needs to disappear. Customers say, "You actually do what you said, and more." (That is the biggest reason I am here.) And we do it all at the price of traditional 15K disk. Not at SATA prices, yet.

  1. Make it ultra simple. No more tweaking, moving, migrating, or refreshing. If you can give a volume a name and a size, you can manage Pure Storage.
  2. Make it efficient. No more wasted space from short-stroking drives; no more wasted space because you created a RAID 10 pool and now have nowhere to move things so you can destroy and recreate it.
  3. Make it available. Support that is awesome, because things do happen. Most likely, though, most of your downtime is planned, for migrations and code upgrades. Pure Storage lets you reboot a controller to upgrade the firmware/code (whatever you want to call it) with zero performance hit and zero outage. Pretty nice for an environment that needs ultimate uptime.
  4. Make sure it always performs. Imagine going to the DBAs and saying, "Everything is under 1 ms latency; how about you stop blaming storage and double-check your SQL code?" Now that is something I have wanted to say as an administrator for a long, long time.

Once you remove complicated storage from the list of things standing between you and 100% virtual, you can focus on getting the applications working right, building the automation that makes life easier, and maybe making it to your kid's soccer games on Saturday.

No Spindles Bro

I was assisting one of my local team members the other day with sizing a VM for Microsoft SQL Server. I almost always fall back to this guide from VMware, so I started out with the basic separation of data, logs, and TempDB.

Make it look like this:

VM Disk Layout

LSI SCSI Adapter
C: – Windows

Paravirtual SCSI Adapter
D: – Logs
E: – Data
F: – TempDB

That layout is pretty standard. But then someone asked, "Why do we need to do that?" I thought for a second or five: why DO we need to do that? I knew the old-school answer: certain RAID types were awesomer for the kinds of data written by the different parts of a SQL database. But we are in a total post-spindle-count world now. No spindles, bro! So what are some reasons to still lay things out this way on an all-flash array?

1. Disk queues
I think of these like torpedo tubes: the more tubes, the fewer people waiting in line to load torpedoes. You can fire more, so to speak. Just make sure the array on the other end can keep up; having 30 queues all going to one 2 Gb/s Fibre Channel port would be no good. See number 3 for paths.

2. Logical separation and OCD compliance (if using RDMs)
Don't argue with the DBA. Just do it. If something horrifically bad happens, the logs and data will be in different logical containers, so maybe the bad thing happens to one or the other, not both. I am not a proponent of RDMs; they are SO much more to manage. But if you can't win that fight, or don't want to fight it, at least with RDMs you can label the LUN on the array "SQLSERVER10 Logs D" so you know which LUN matches what in Windows. This also makes writing snapshot scripts much easier.

3. Paths
Each datastore or RDM has its own paths. If you are using Round Robin (recommended for the Pure FlashArray), more IO on more paths means better usage of the iSCSI or FC interconnects; see the sketch after this list for setting the policy. If you put everything on one LUN, you only get that one set of queues (see #1) and paths. Remember: do what you can to limit waiting.
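
Since Round Robin came up, here is a minimal PowerCLI sketch (not the official Pure best-practices script) that sets Round Robin on every Pure LUN each host sees. The vendor string match is the assumption to verify in your environment.

# Set Round Robin on every Pure Storage LUN on every host in vCenter
foreach ($vmhost in Get-VMHost) {
    Get-ScsiLun -VmHost $vmhost -LunType disk |
        Where-Object {$_.Vendor -eq "PURE"} |
        Set-ScsiLun -MultipathPolicy RoundRobin
}
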
Am I going down the right path? How does this make it easier? Are there other reasons to separate the logs and data for a database, beyond making sure the RAID 10 flux capacitor is set correctly for 8K sequential writes? I don't want to worry about that anymore, and I am pretty sure plenty of other VM admins and DBAs don't either.

For me, this was a good exercise in questioning why I did things one way, and whether I should still do them that way now.