Using the Docker Volume Plugin with Docker Swarm

Remember the prerequisites, and check the official README for the latest information: Official README

Platform and Software Dependencies

Operating Systems Supported:

  • CentOS Linux 7.3
  • CoreOS (Ladybug 1298.6.0 and above)
  • Ubuntu (Trusty 14.04 LTS, Xenial 16.04.2 LTS)

Environments Supported:

  • Docker (v1.13 and above)
  • Swarm
  • Mesos 1.8 and above

Other software dependencies:

  • Latest iSCSI initiator software for your operating system
  • Latest Linux multipath software package for your operating system

Review: to install the plugin, run:


docker plugin install store/purestorage/docker-plugin:1.0 --alias pure

Or, if you are annoyed by having to hit Y to approve the permissions the plugin requests:


docker plugin install store/purestorage/docker-plugin:1.0 --alias pure --grant-all-permissions

The installation process is the same as on a standalone Docker host, except that you must specify your clusterid. This is a unique string you assign to your swarm nodes.


docker plugin disable pure
docker plugin set pure PURE_DOCKER_NAMESPACE=<clusterid>
docker plugin enable pure

When you first install the Pure Volume Plugin it is enabled. Docker will not allow you to modify the namespace while the plugin is in use, so we need to disable the plugin before making changes. This also means it is best to do this before creating and using any volumes.
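If you want to double-check where things stand before re-enabling, the standard plugin commands work here (output layout can vary a bit by Docker version):

docker plugin ls
docker plugin inspect pure

The ENABLED column from docker plugin ls should read false while you are changing settings, and the inspect output should show PURE_DOCKER_NAMESPACE with your clusterid among the plugin's environment settings.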

Remember to put your API token and array management IP in the pure.json file under /etc/pure-docker-plugin/ on each host.
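One quick way to push the same file to every node, assuming hypothetical hostnames docker01 through docker03 and SSH access with enough rights to write to /etc:

for h in docker01 docker02 docker03; do scp /etc/pure-docker-plugin/pure.json $h:/etc/pure-docker-plugin/; done

Swap in your own host list, or use whatever configuration management tool you already have.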

More information Here

Demo for setting up Swarm and testing container failover
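If you just want the gist of what the failover demo shows, the pattern is roughly this (the service name, image, and node name are placeholders, not taken from the demo itself):

docker service create --name web --replicas 1 --mount type=volume,source=Demo,target=/data,volume-driver=pure nginx
docker node update --availability drain <node-running-the-task>
docker service ps web

Draining the node forces Swarm to reschedule the task on another node, and the plugin reattaches the Pure volume wherever the container lands.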

Previous post about installing the Plugin

Pure Storage Docker Plugin

This is a quick guide on how to install the Pure plugin for Docker 1.13 and above. For full details check out the Pure Volume Plugin on Store.docker.com.

Requirements

 

Operating Systems Supported

  • CentOS Linux 7.3
  • CoreOS (Ladybug 1298.6.0 and above)
  • Ubuntu (Trusty 14.04 LTS, Xenial 16.04.2 LTS)

Environments Supported

  • Docker 1.13+ (I am on 17.03-ce)
  • Swarm
  • Mesos 1.8 and above

Other dependencies

  • Latest iSCSI initiator software
  • Latest multipath package (this made a difference for me on Ubuntu, so remember to update!)

Hosts Before

media_1501006005257.png

Here I am just listing the Pure hosts on my array before I install the plugin.

Volumes Before

media_1501006035507.png

Also listing out my volumes; these are all pre-existing.

Pull and Install the plugin (Docker 1.13 and above)

Create /etc/pure-docker-plugin/pure.json

media_1501006093681.png

Edit the file pure.json in /etc/pure-docker-plugin and add your array management address and API token. To get a token from the Pure CLI, run the following (or go to the GUI of the array and copy the API token for your user):

 

pureadmin create --api-token [user]
pureadmin list --api-token [user] --expose
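For reference, the finished file looks something like this (the management address and token below are made up, and you should confirm the exact field names against the README for your plugin version):

{
    "FlashArrays": [
        {
            "MgmtEndPoint": "10.21.88.17",
            "APIToken": "661f9687-0b1e-4be6-8d0d-xxxxxxxxxxxx"
        }
    ]
}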

Pull the plugin and Install

media_1501006156094.png

docker plugin install store/purestorage/docker-plugin:1.0 --alias pure

Grant the plugin the permissions it requests.

Done. Easy.

For Docker Swarm

Setting the PURE_DOCKER_NAMESPACE variable can be done with the command:

docker plugin set pure PURE_DOCKER_NAMESPACE=<clusterid>

My next blog post will dive more into setting up the plugin with Docker Swarm. The clusterid is just a unique string. Keep it simple.

Test it

media_1501006217870.png

$ docker volume create -d pure -o size=200GiB Demo

Remember, if you want to create the volume with other units the information is in the README, but here it is for now: units can be specified as xB, xiB, or x. If no units are specified, MiB is assumed.
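So, for example, all of these should be accepted (the volume names are just examples):

$ docker volume create -d pure -o size=500GB reports
$ docker volume create -d pure -o size=1TiB archive
$ docker volume create -d pure -o size=1024 scratch

The last one ends up as a 1024 MiB volume, since no unit was given.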

My host created by the plugin

media_1501006373816.png

Now that I have created a volume on the array, the host docker01 is added to the list of hosts. The plugin automates creating the host and adding its iSCSI IQN.

My new volume all ready to go

media_1501006405843.png

You can also see that docker01-Demo is listed and sized to my requested 200GiB. The PURE_DOCKER_NAMESPACE is prepended to the volume name you create; the default is the Docker hostname. In a Mesos or Swarm environment the namespace setting mentioned above is used instead. The volume is only identified this way on the array.

Now the volume can be mounted to a container using:

 

# docker run --volume Demo:/data [image] [command]
You could also create a new volume and mount it to a container all in the same line with:

 

# docker run --volume-driver pure --volume myvolume:/data [image] [command]

My First DockerCon

Wrapping up my very first DockerCon. I learned great new things, was introduced to new tech and reconnected with some old friends.

This was my first convention in a very long time where I actually just attended the show and went to sessions. It was really nice. People usually read my blog looking for tips and tricks on how to do technical things, not my philosophic rambling, so I won’t try to be a pundit on announcements and competition and all that. Some cool things I learned:

  1. Share everything on GitHub. People use GitHub as the de facto standard for sharing information. Usually it is code, but lots more is out there, including presentations and demos for a lot of what happened at DockerCon. Exciting for me, as someone who has always loved sharing what I learn via this blog, is that this is expected. I will post some of my notes and other things about specific sessions once the info is all posted.
  2. Having been a “storage guy” for the past 6 years or so between Pure Storage and EMC, it was good to see how many companies in the ecosystem have solutions built for CI/CD and container security. So much different from other shows, where the storage vendors dominate the mind share.
  3. Over the years friends and co-workers have gone their own way and ended up all over the industry. Some of my favorite people that always put a very high value on community and sharing seem to be the same people that gravitate to DockerCon. It was great to see all of you and meet some new people.

More to follow as I pull my notes together and find links to the sessions.

 

Kubernetes Anywhere and PhotonOS Template

Experimenting with Kubernetes to orchestrate and manage containers? If you are like me and already have a lot invested in vSphere (time, infrastructure, knowledge), you might be excited to use Kubernetes Anywhere to deploy it quickly. I won’t re-write the instructions found here:

https://github.com/kubernetes/kubernetes-anywhere

It works with

  • Google Compute Engine
  • Azure
  • vSphere

The vSphere option uses the Photon OS OVA to spin up the container hosts and managers, so you can try it out easily with very little background in containers. That is dangerous, as you will find yourself neck deep in new things to learn.

Don’t turn on the template!

media_1491484535602.png

If you are like me and *skim* instructions, you could be in for hours of “Why do all my nodes have the same IP?” When you power on the Photon OS template, the startup sequence generates a machine ID (and MAC address). So even though I powered it back off, the cloning process was producing identical VMs for my Kubernetes cluster. For those not hip to networking, this is bad for communication.

Also, don’t try to be a good VMware admin and convert that VM to a VM Template. The Kubernetes Anywhere script won’t find it.

If you are like me and skip a few lines while reading (it happens, right?), make sure to check out this documentation on Photon OS. It will help get you on the right track.

https://github.com/vmware/photon/blob/master/docs/photon-admin-guide.md#clearing-the-machine-id-of-a-cloned-instance-for-dhcp
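The short version, for the impatient (run this on the template VM before you shut it down for cloning; the linked guide has the authoritative steps):

echo -n > /etc/machine-id

With that file emptied, each clone generates its own machine ID, and therefore its own identity for DHCP, on first boot.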

This is clearly marked in the documentation now.

Setting the docker_gwbridge Subnet

I had an issue with the Docker Swarm subnet automatically generated when I do:

$ docker swarm init

Basically, it was choosing the same subnet my VPN connection was using to assign an IP to my machine on the internal network. Obviously this wreaked havoc on my ability to connect to the Docker hosts I was working with in our lab.

I decided it would be worth it to create the docker_gwbridge network myself and assign a CIDR subnet that would not overlap with the VPN.

$ docker network create --subnet 192.168.249.0/24 docker_gwbridge

I did this before I created the swarm cluster. So far everything is working fine in the lab and I am able to SSH to the Docker Host and connect to the services I am testing on those machines. There may be other issues and I will report back as I find them.
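If you want to confirm the bridge really picked up the subnet you asked for, a quick inspect should show it (standard Docker CLI; the output below is simply what I would expect given the command above):

$ docker network inspect docker_gwbridge --format '{{ (index .IPAM.Config 0).Subnet }}'
192.168.249.0/24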


Come see @CodyHosterman at VMworld, and if he is too busy you can see me

Look for a post about going to In-N-Out some time soon; it is my tradition.

Be sure to check out what we will be doing at VMworld at the end of the month. Click the banner below once you are done being mesmerized by Chappy. Sign up for a 1:1 demo or meeting; I’ll be there and would love to meet with you. See how focused a demo I give.

 

[VMworld sign-up banner, featuring Chappy]

Sessions to be sure to see featuring the Amazing Cody Hosterman

SDDC9456-SPO: Implementing Self-Service Storage Provisioning with vRealize Automation Xaas

VMware vCenter is no longer meant to be the end-user interface for requesting and managing virtual machines and related resources. Storage is no exception. Join Cody Hosterman as he discusses how vRealize Automation Anything-as-a-Service (XaaS) provides the ability to easily import vRealize Orchestrator workflows to control, manage and provision storage via the self-service catalog offering in vRealize Automation.

Wednesday, Aug 31, 2:00 PM – 3:00 PM

NF9455-SPO: Best Practices for All-Flash Data Reduction Arrays with VMware vSphere

As All-Flash Data Reduction arrays are becoming commonplace in VMware environments due to their performance, flexibility and ease-of-use, it is important to understand how to best implement and manage them with ESXi. Data reduction and flash change how an administrator should think about various configuration options within VMware, and those will be discussed in detail. VAAI, space reclamation, virtual disks, SIOC, SDRS, queue depths, multipathing and other points will be highlighted.

Monday, Aug 29, 2:30 PM – 3:30 PM

Seizing AD Roles – File under Good to know

So let’s say the power goes out and half of the VMs on your “lab storage that uses local disks” go into an infinite BSOD loop. I was lucky, as one of the servers that still worked was an AD Domain Controller with DNS. Since I usually don’t try to fight BSODs and just rebuild, I did so. One very helpful page for moving the AD roles was this article on seizing the roles, which I had to do since the server holding the roles was DOA.

 

https://technet.microsoft.com/en-us/library/cc816779(v=ws.10).aspx
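From memory, the article boils down to an ntdsutil session on a surviving DC, roughly like this (the server name is a placeholder; follow the linked article for the authoritative steps and for which roles you actually need to seize):

ntdsutil
roles
connections
connect to server <surviving-dc>
quit
seize schema master
seize naming master
seize RID master
seize PDC
seize infrastructure master
quit
quit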

 

Enjoy and file this under Good to Know


FlashStack Your Way to Awesomeness

You may or may not have heard about Pure Storage and Cisco partnering to provide solutions together to help our current and prospective customers using UCS, Pure Storage, and VMware. These predesigned and tested architectures provide a full solution for compute, network and storage. Read more here:

https://www.purestorage.com/company/technology-partners/cisco.html

http://blogs.cisco.com/datacenter/accelerate-vdi-success-with-cisco-ucs-and-pure-storage

This results in CVDs (Cisco Validated Designs):

http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_flashstack_view62_5k.html

There are more coming for SQL, Exchange, SAP and general Virtual Machines (I call it JBOVMs, Just a Bunch of VM’s).

Turn-key like solution for compute, network, and storage

Know how much and what to purchase when it comes to compute, network, and storage, as we have worked with Cisco to validate with actual real workloads. Many times mixed workloads, because who runs just SQL or just Active Directory? It is proven and works. Up and running in a couple of days. If a couple of months was not good (the legacy way), and then 2-4 weeks (the newer way with legacy hardware) wasn’t good enough, how about 1-2 days? For real, a next-generation datacenter. Also, scale compute, network, and storage independently. Why buy extra hypervisor licenses when you just need 5 TB of space?

Ability to connect workloads from/to the public clouds (AWS, Azure)

I don’t think as many people know this as they should, but Rob Barker “Barkz” is awesome. He worked hard to prove out the ability to use Pure FlashArray with Azure compute. Great read and more details here:

Announcing: Pure Storage All-Flash Cloud for Microsoft Azure

Official Pure information here:

https://www.purestorage.com/resources/type-a/pure-storage-all-flash-cloud-azure-deployment-guide.html

Azure is ready now and AWS is in the works.

Ability to back up to the public clouds

No secret here: we are working hard to integrate with backup software vendors. Some have been slow, and others have been willing to work with our API to make seamless backup and snapshot management integration with Pure an amazing thing.

Just one example of how Commvault is enabling backup to Azure:

http://www.commvault.com/resource-library/55fc5ff8991435a6ce000c9c/backup-to-azure-with-commvault.pdf

IntelliSnap and Pure Storage

https://documentation.commvault.com/commvault/v10/article?p=features/snap_backup/pure/overview.htm

Check out how easy it is to set up Commvault with Pure Storage.

Ease of storage allocation without the need for a storage specialist

If I have ever talked to you about Pure Storage and didn’t say how simple it is to use, or mention my own customers that are not “Storage Peeps” but manage it quite easily, then I failed. Take away my orange sunglasses.

If you are looking at FlashStack, or are just now finding out how easy it is, remember: no storage Ph.D. required. We even have nearly everything you need built into our free vSphere Plugin. Info from Cody Hosterman here:

The Pure Storage Plugin for the vSphere Web Client

Here is a demo if you want to see how it works. This is a little older but I know he is working on some new stuff.

Even better, if you would like to automate end to end and tie the Pure Storage provisioning into UCS Director, that is possible too! See here:

VMware Space Consumption on Thin Provisioned Data-Reducing Arrays

A common question I get from my customers is: why does vSphere say my datastore is full when the array is only 4% used? I usually give a quick explanation of how the VMFS file system has no clue that the block device underneath is actually deduping and compressing the data. So even though you provisioned 1TB of VMs, the array might only write a fraction of that amount. This can get many different reactions: anger, disbelief, astonishment, and understanding. This post is to visually show that what vSphere thinks is used on VMFS will not necessarily be reflected the same way on a data-reducing array (including FlashArray).

When vSphere says I am FULL

media_1466090762665.png

Even when the FlashArray says plenty of space

media_1466090873568.png

You can tell from my environment testing vRealize Automation and Orchestrator that there is a lot more “used” in vSphere than is written to the array. In your head, start to do the math though: 3.4 times 8.19GB does not equal 169GB. That is because we do not claim thin provisioning as actual data reduction. This includes any set of zeros: space not provisioned to a VM at all, the empty VMFS space, AND the empty space provisioned to a VM (lazy or eager zeroed) but not consumed or written to by the VM. Since my environment is mostly empty VMs, you can see the Total Reduction is ridiculously high.
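To make that concrete with the (approximate) numbers from the screenshots:

3.4 × 8.19 GB ≈ 27.8 GB of data actually written by hosts
169.4 GB − 27.8 GB ≈ 141.6 GB of zeros and never-written space

So only about 27.8 GB of what vSphere counts was ever really written (and that reduced 3.4:1 down to 8.19 GB on flash); the rest is empty space, which is why the Total Reduction figure (roughly 169.4 / 8.19, or about 20:1) looks so absurd.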

Some solutions:
1. Use thin provisioned VMs with Automatic UNMAP in vSphere 6. Read more from Cody Hosterman here: Direct Guest OS UNMAP in vSphere 6. This gives a closer accounting of VM provisioned space versus space consumed on the array, though it is still not aware of the compression and dedupe behind the scenes on the array. (There is a quick sketch of the in-guest step after this list.)
2. vVols provide the storage awareness needed to let VMware know the actual consumption per VM. Come see us at the Pure Storage booth at VMworld.
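For option 1, once the vSphere 6 guest-UNMAP prerequisites are in place (thin virtual disks and so on, per Cody's post above), the reclaim itself is just a trim issued from inside the guest. On a Linux guest, for example (the mount point is hypothetical):

sudo fstrim -v /data

This is only the in-guest half; the vSphere-side requirements are covered in the linked post.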

Use the plugin!

media_1466090985027.png

At least you can quickly see that the 169.4GB will be reduced by 3.4:1 (for actual written data) all in one screen.

New Features in Pure1 – Analytics

The best just gets better. Pure1 Manage is Pure Storage’s SaaS-based management tool for Pure customers. Besides getting tons of health, capacity, and performance information, you now have some new features. It is a pretty hard upgrade process that requires updating VMs at each site and possibly some consulting services. Just kidding. It is already available, no effort from you required. Just log in to pure1.purestorage.com.

Capacity Analytics

media_1455027599889.png

You are now able to use the Analytics tab to project your current growth and determine when your array will be getting close to full. Very nice. Included with your Pure Storage FlashArray. No extra anything to buy. Sweet.

Support

Also, as a bonus, you can now see the Support tab in the Pure1 Manage screen. This will let you see all open support tickets for each of your arrays. Simplicity wins every time. Keep checking the Pure1 portal as our team rolls out great new innovations.