vSphere Container Hosts Storage Networking

Over the last couple of days I had several questions from customers implementing some kind of container host on top of vSphere, each to make use of either Kubernetes or the Docker Volume Plugin for Pure Storage. First, there was a little confusion about whether the actual container needs iSCSI access to the array. The container needs network access for sure (I mean, if you want someone to use the app), but it does not need access to the iSCSI network. Side note: iSCSI is not required to use the persistent storage plugins for Pure; Fibre Channel is supported. iSCSI may just be an easy path to using a Pure FlashArray, or NFS (10G network) for FlashBlade, with an existing vSphere setup.

To summarize all that: the container host VM needs to be able to talk directly to the storage. I accomplish this today with multiple vNICs, but you can do it however you like. There may be some vSwitches, physical NICs and switches in the way, but the end result should be the VM talking to the FlashArray or FlashBlade.
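A quick way to prove that end-to-end path from inside the guest is a couple of connectivity checks. This is just a sketch — the interface name `eth1` and the portal IP `192.168.128.10` are placeholders for your second vNIC and your array's iSCSI (or NFS) interface:

```shell
# From inside the container host VM: confirm the storage-facing vNIC can
# reach the array's data interface (placeholder IP).
ping -c 3 -I eth1 192.168.128.10

# TCP check against the iSCSI port (3260); nc exits 0 if the port is open.
nc -z -w 5 192.168.128.10 3260
```

If the second command fails while the ping works, start looking at the port group and physical switch path rather than the array.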

More information on configuring our plugins is here:

  1. Docker/DCOS/Mesos – https://store.docker.com/plugins/pure-docker-volume-plugin
  2. Kubernetes and OpenShift – https://hub.docker.com/r/purestorage/k8s/
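As a rough sketch of the Docker side (the plugin image name and the `pure` driver alias here are illustrative — check the plugin page above for the exact install string):

```shell
# Install the Pure Docker Volume Plugin (exact name/options come from the
# Docker Store page above -- this line is illustrative).
docker plugin install store/purestorage/docker-plugin:latest

# Create a volume through the Pure driver; the plugin calls the array's
# management interface to create the volume and connect the host for you.
docker volume create --driver pure -o size=20G appdata

# Mount it into a container like any other named volume.
docker run -d -v appdata:/var/lib/data myapp:latest
```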

Basically, the container host needs to be able to talk to the management interface of the array to do its automation: creating host objects and volumes, connecting them together, and removing them when you are finished. The key thing to know is that the plugin does all of that work for you. Then, when your application manifest requests storage, the plugin mounts the device to the required mount point inside the container. The app (container) does not know or care anything about iSCSI, NFS or Fibre Channel (and it should not).
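On the Kubernetes side the same idea applies: the manifest just asks for storage by class, and the plugin handles the array calls. A hedged sketch — the StorageClass name `pure-block` is hypothetical; use whatever your plugin install actually created:

```shell
# Request a 10Gi volume from a hypothetical Pure-backed StorageClass.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: pure-block   # hypothetical name; check your install
  resources:
    requests:
      storage: 10Gi
EOF
```

The pod that mounts this claim never sees iSCSI, NFS or Fibre Channel — exactly the separation described above.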

Container Hosts as VMs: Storage Networking

If you are setting up iSCSI in vSphere for Pure, you should probably go see Cody's pages on doing this first; most of it is a good foundation for what I am about to share.

https://www.codyhosterman.com/pure-storage-vmware-overview/flasharray-and-vmware-best-practices/iscsi-setup/

Make sure you can use MPIO, and follow the Pure Storage Linux best practices inside your container hosts.
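For the in-guest multipath piece, that means a device section for the FlashArray in `/etc/multipath.conf`. Treat the values below as a sketch and verify them against Pure's current Linux recommended settings before using them:

```shell
# /etc/multipath.conf fragment for FlashArray (verify against Pure's
# current Linux best-practices guide -- settings shown are a sketch).
cat >> /etc/multipath.conf <<'EOF'
devices {
    device {
        vendor               "PURE"
        product              "FlashArray"
        path_grouping_policy group_by_prio
        prio                 alua
        hardware_handler     "1 alua"
        path_selector        "queue-length 0"
        failback             immediate
        fast_io_fail_tmo     10
        user_friendly_names  no
    }
}
EOF
systemctl restart multipathd
```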

Do it the good old (new) GUI way

So what I normally do is set up two new port groups on my VDS — something like iscsi-1 and iscsi-2. I know, I am very original and creative.
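If you would rather script the port group creation than click through it, govc (the vSphere CLI) can do the same thing. A sketch — the DVS name `dvs-01` is a placeholder for your distributed switch:

```shell
# Create the two iSCSI port groups on the distributed switch
# ("dvs-01" is a placeholder for your VDS name).
govc dvs.portgroup.add -dvs dvs-01 -type earlyBinding -nports 16 iscsi-1
govc dvs.portgroup.add -dvs dvs-01 -type earlyBinding -nports 16 iscsi-2
```

The failover-order tweak in the next step is still easiest in the HTML5 GUI.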


Set the uplink for the Portgroup

We used to set up "in-guest iSCSI" for VMs that needed array-based snapshot features way back in the day; this is basically the same piping. After creating the new port groups, edit the settings in the HTML5 GUI as shown below.
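Once the guest's vNICs sit on those port groups, the in-guest side is standard open-iscsi. A sketch, with a placeholder portal IP for your FlashArray:

```shell
# Discover targets on the FlashArray's iSCSI portal (placeholder IP).
iscsiadm -m discovery -t sendtargets -p 192.168.128.10:3260

# Log in to all discovered targets.
iscsiadm -m node --login

# List sessions -- expect one per path once both port groups are wired up.
iscsiadm -m session
```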

Set the Failover Order

Go for iSCSI-1 on Uplink 1 and iSCSI-2 on Uplink 2

I favor putting the other Uplink into “Unused” as this gives me the straightest troubleshooting path in case something downstream isn’t working. You can put it in “standby” and probably be just fine.
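Either way, you can confirm from inside the guest that each uplink is actually contributing a path:

```shell
# Show the multipath topology; with two iSCSI port groups you should see
# at least two active paths per FlashArray volume.
multipath -ll

# Per-session detail from open-iscsi: which iface landed on which portal.
iscsiadm -m session -P 3 | grep -E 'Target|Iface|Current Portal'
```

If a volume only shows one path, the failover order on one of the port groups is the first place to look.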
