Creating a Helm Repo with Github

The next step in learning Helm is taking an existing Helm package and putting it in your own repo.

There are ways to do this with GitHub Pages, but I don't really want to mess with that right now. How can I use a plain GitHub repo to host my changes to the deployment?
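
Here is a rough sketch of one way to do it, assuming Helm 2 (the version used in part 1), a chart directory named mychart, and a GitHub repo you control; the chart, repo, and branch names are placeholders for illustration:

# package the chart into a .tgz and generate/refresh the repo index
helm package ./mychart
helm repo index . --url https://raw.githubusercontent.com/<your-user>/<your-repo>/master

# push the .tgz and index.yaml up to GitHub
git add mychart-0.1.0.tgz index.yaml
git commit -m "publish chart"
git push origin master

# point helm at the raw GitHub URL and install from it
helm repo add myrepo https://raw.githubusercontent.com/<your-user>/<your-repo>/master
helm repo update
helm install myrepo/mychart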

For installing Helm and an additional demo, please see part 1 of this series.

https://blog.2vcps.io/2018/03/27/getting-started-with-helm-for-k8s/


Getting Started with Helm for K8s

Over the last few weeks I was setting up Kubernetes in the lab. One thing I quickly learned was that managing and editing YAML files for deployments, services, and persistent volume claims became confusing and hard. Even when I had things committed in GitHub, sometimes I would make edits, not push them, and then rebuild my K8s cluster.

The last straw was when two of our Pure developers said that editing YAML in vi wasn't very cool and that I should start using Helm.

Needless to say, that was good advice. I still have to remember to push my repos to GitHub. Now my demonstration applications are more "cloud native": I can create and edit them in one environment, run helm install in another, and have it just work.
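
For example (a minimal sketch with Helm 2; the chart and release names here are just illustrations), the same chart installs the same way in any cluster the client points at:

# search the public stable repo and install a chart as a named release
helm search wordpress
helm install stable/wordpress --name my-blog

# later, upgrade or remove the release without touching raw YAML
helm upgrade my-blog stable/wordpress
helm delete my-blog --purge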


Using Snapshots with the Pure Storage Plugin for Kubernetes

One request from customers is to not only provision persistent storage for Kubernetes but also to integrate it into workflows that need to snap and copy data for different environments, much like we do with PowerShell or Python for SQL and Oracle environments to accelerate development or QA. Pure has enabled snapshots using the Pure Provisioner as part of our Kubernetes plugin.

In this demo I show how I can take a user's data directory for JupyterHub and clone it for another user, taking advantage of all the benefits of Pure's snapshots and clones. You instantly get access to a copy of the dataset, and it doesn't take up room on the backend storage; only globally unique changes will grow the volume. In this use case the data science team will see increased productivity because they are not waiting for data to download from the cloud or copy from another place on the array.

The command to run the snap using kubectl is below:

kubectl exec <pure provisioner pod name> -- snapshot create -n <namespace> <pvc-claim-name>
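
As a concrete example (the namespace, pod, and claim names below are made up for illustration), you first find the Pure Provisioner pod and then snapshot the claim:

# find the Pure Provisioner pod name
kubectl get pods --all-namespaces | grep pure-provisioner

# snapshot the PVC backing a JupyterHub user's data directory
kubectl exec pure-provisioner-0 -- snapshot create -n jupyterhub claim-user1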

Kubernetes and the Pure Storage FlexVolume Plugin

First, if you are using Pure Storage and Kubernetes, make life easier and take a look at our plugin, now at version 1.2.2 and GA.

https://hub.docker.com/r/purestorage/k8s/

Make sure to follow the directions on the page to pull and install the plugin. If you are using OpenShift, pay special attention to the README. I will post more on this in the near future.
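
Grabbing the image itself is just a docker pull; the tag below assumes the 1.2.2 release mentioned above (check the hub page for the current tag), and the actual install steps are in the directions on that page:

docker pull purestorage/k8s:1.2.2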

Cockroach DB as our Persistent Database

I wanted a simple database that I can easily run in a container, and that is not the same old example. I built a Go app that writes to the database over and over to demonstrate the inner workings of the plugin; it is not meant to be a performance test.

To learn more about the steps I use in the video to deploy and manage CRDB in K8s, please check out this link: https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html

With that said, please check out how to deploy and scale a database with a persistent data platform from a Pure FlashArray. Watch this in full screen to make the CLI commands easier to see.

What you are seeing in the video (a rough sketch of the matching kubectl commands follows the list):

  1. Deploy the initial 3 pods with volumes automatically created and connected on the Pure FA.
  2. Initialize the cluster.
  3. Fail a node and watch K8s redeploy a new container and re-attach the data volume.
  4. Run a load generation application as a K8s Job.
  5. Scale the DB cluster out to 8 nodes.
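
If you want to follow along without the video, the flow is roughly the one from the Cockroach Labs doc linked above. This is only a sketch: the manifest URLs and pod names come from that doc and may have changed, and the storage class wiring for the Pure plugin is covered in its README.

# 1. deploy the 3-node StatefulSet (volumes are provisioned automatically)
kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml

# 2. initialize the cluster
kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml

# 3. fail a node and watch Kubernetes recreate it and re-attach its volume
kubectl delete pod cockroachdb-2
kubectl get pods -w

# 4. the load-generation Job is specific to my demo app, so it is omitted here

# 5. scale the cluster out to 8 nodes
kubectl scale statefulset cockroachdb --replicas=8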

What is next?

This is a really easy and quick demo, but it shows the ease of using the Pure plugin to manage persistent data, making sure you do not lose data when the app crashes, and scaling easily. This can all be done via policy, and the deployment can be made even easier using Helm. In a future post we will see how to take advantage of these methods while keeping the same highly available, high-performance, and very easy to use persistent data platform for your application.

CockroachDB with Persistent Data

There IS an Official Whitepaper!

While I was writing this post the awesome Simon Dodsley was writing a great whitepaper on persistent storage with Pure. As you can see, there are some very different ways to deploy CockroachDB, but the main goal is to keep your important data persistent no matter what happens to the containers as they scale, live, and die.

I know most everyone loved seeing the demo of the most mission-critical app in my house. I also want to show a few quick ways to leverage the Pure plugin to provide persistent data to a database. I am posting the files I used to create the demo here: https://github.com/2vcps/crdb-demo-pure

First note
I started with the instructions provided here by Cockroach Labs.
This is an insecure installation for demo purposes. They do provide instructions for a more production-ready version, but this is good enough for now.

Second note
The load balancer I used was created for my environment using the instructions to output the HAProxy file, found here on the Cockroach Labs website:
https://www.cockroachlabs.com/docs/stable/generate-cockroachdb-resources.html

My YAML file refers to a Docker image I built for the HAProxy load balancer. If it works for you, cool! If not, please follow the instructions above to create your own. If you really need to know more, I can write another post showing how to take the Dockerfile and copy the CFG generated by CRDB into a new image just for you.
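
If you do want to roll your own, the short version (a sketch, not a full walkthrough) is to have CockroachDB generate the HAProxy config and then bake it into a stock haproxy image; the base image tag and registry name here are just examples:

# generate haproxy.cfg against any running node
cockroach gen haproxy --insecure --host=<any db node ip>

# Dockerfile: copy the generated config into the official haproxy image
# FROM haproxy:1.7
# COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

docker build -t <your-registry>/crdb-proxy:v1 .
docker push <your-registry>/crdb-proxy:v1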

 

My nice little docker swarm


I have three VMware VMs running Ubuntu 16.04, with Docker CE and the Pure plugin already installed. Read more here if you want to install the plugin.


Run the deploy

https://github.com/2vcps/crdb-demo-pure/blob/master/3node-cockroachdb-pure.yml

version: '3.1'
services:
    db1:
      image: cockroachdb/cockroach:v1.0.2
      deploy:
            mode: replicated
            replicas: 1
      ports:
            - 8888:8080
      command: start --advertise-host=cockroach_db1 --logtostderr --insecure
      networks:
            - cockroachdb
      volumes:
            - cockroachdb-1:/cockroach/cockroach-data
    db2:
      image: cockroachdb/cockroach:v1.0.2
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db2 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-2:/cockroach/cockroach-data
    db3:
      image: cockroachdb/cockroach:v1.0.2
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db3 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-3:/cockroach/cockroach-data
    crdb-proxy:
      image: jowings/crdb-proxy:v1
      deploy:
         mode: replicated
         replicas: 1
      ports:
         - 26257:26257
      networks: 
         - cockroachdb

networks:
    cockroachdb:
        external: true

volumes:
    cockroachdb-1:
      driver: pure
    cockroachdb-2:
      driver: pure
    cockroachdb-3:
      driver: pure

 

$ docker stack deploy -c 3node-cockroachdb-pure.yml cockroach

As the compose file shows, this command deploys four services: three database nodes and one HAProxy. Each database node gets a brand new volume attached directly to its data path by the Pure Docker Volume Plugin.
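
A quick way to check the result (assuming the stack was named cockroach as above):

# list the four services and their replica counts
docker stack services cockroach

# the three data volumes show up with the pure driver
docker volume ls | grep cockroachdb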

New Volumes


Each new volume is created, attached to the host via iSCSI, and mounted into the container.

Cool Dashboard


Other than there being no data, do you notice something else?
First, let's generate some data.
I run this from a client machine, but you can attach to one of the DB containers and run this command to generate some sample data.

cockroach gen example-data | cockroach sql --insecure --host [any host ip of your docker swarm]


I am also going to create a “bank” database and use a few containers to start inserting data over and over.

cockroach sql --insecure --host 10.21.84.7
# Welcome to the cockroach SQL interface.
# All statements must be terminated by a semicolon.
# To exit: CTRL + D.
root@10.21.84.7:26257/> CREATE database bank;
CREATE DATABASE
root@10.21.84.7:26257/> set database = bank;
SET
root@10.21.84.7:26257/bank> create table accounts (
-> id INT PRIMARY KEY,
-> balance DECIMAL
-> );
CREATE TABLE
root@10.21.84.7:26257/bank> ^D

I created a program in Go to insert some data into the database just to make the charts interesting. The container starts, inserts a few thousand rows, then exits. I run it as a service with 12 replicas so it is constantly going. I call it gogogo because I am funny.
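
The real loader is a small Go program, but a rough stand-in (purely illustrative; the image name is hypothetical) is just a loop of UPSERTs wrapped in a container and run as a 12-replica service:

# what each gogogo replica does, more or less
while true; do
  cockroach sql --insecure --host cockroach_db1 --database bank \
    -e "UPSERT INTO accounts (id, balance) VALUES ($RANDOM, $RANDOM);"
done

# run it as a swarm service so it is always going
docker service create --name gogogo --replicas 12 \
  --network cockroachdb <your-registry>/gogogo:v1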


gogogo


You can see the data slowly going into the volumes.


Each node remains balanced (roughly) as cockroachdb stores that data.

What happens if a container dies?


Let's make this one go away.


We kill it.
Swarm starts a new one. The Docker engine uses the Pure plugin and remounts the volume. The CRDB cluster keeps on going.
New container ID but the data is the same.
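
If you want to reproduce this yourself, it is just a force-remove of one of the database containers on whichever host it landed on (container names and IDs will differ in your environment):

# find the container for one of the db services on this host
docker ps | grep cockroach_db3

# make it go away; swarm reschedules it and the pure volume is remounted
docker rm -f <container id>
docker ps | grep cockroach_db3   # new container ID, same data volume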


Alright what do I do now?


So you want to update the image to the latest version of Cockroach? Did you notice this in our first screenshot?

Also, our database is getting a lot of hits (not really, but let's pretend), so we need to scale it out. What do we do now?

https://github.com/2vcps/crdb-demo-pure/blob/master/6node-cockroachdb-pure.yml

version: '3.1'
services:
    db1:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
            mode: replicated
            replicas: 1
      ports:
            - 8888:8080
      command: start --advertise-host=cockroach_db1 --logtostderr --insecure
      networks:
            - cockroachdb
      volumes:
            - cockroachdb-1:/cockroach/cockroach-data
    db2:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db2 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-2:/cockroach/cockroach-data
    db3:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db3 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-3:/cockroach/cockroach-data
    crdb-proxy:
      image: jowings/crdb-haproxy:v2
      deploy:
         mode: replicated
         replicas: 1
      ports:
         - 26257:26257
      networks: 
         - cockroachdb
    db4:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db4 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-4:/cockroach/cockroach-data
    db5:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db5 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-5:/cockroach/cockroach-data
    db6:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db6 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-6:/cockroach/cockroach-data
networks:
    cockroachdb:
        external: true

volumes:
    cockroachdb-1:
      driver: pure
    cockroachdb-2:
      driver: pure
    cockroachdb-3:
      driver: pure
    cockroachdb-4:
      driver: pure
    cockroachdb-5:
      driver: pure
    cockroachdb-6:
      driver: pure

$ docker stack deploy -c 6node-cockroachdb-pure.yml cockroach

(It is important to provide the name of the stack you already used; otherwise you will get errors.)


We are going to update the services with the new images.

  1. This will replace the containers with the new version (v1.0.3).
  2. It will attach the existing volumes for nodes db1, db2, and db3 to the already created FlashArray volumes.
  3. It also creates new, empty volumes for the new scaled-out nodes db4, db5, and db6.
  4. CockroachDB will begin replicating the data to the new nodes.
  5. My gogogo client "barrage" is still running.

This is kind of the shotgun approach in this non-prod demo environment. If you want no-downtime upgrades to containers, I suggest reading more on blue-green deployments. I will show how to upgrade the application with no downtime using blue-green in another post.

CockroachDB begins to rebalance the data.


6 nodes


If you notice the gap in the queries, it is because I updated every node all at once. A better way would be to do one at a time and make sure each node is back up as they "roll" through the upgrade to the new image. Not prod, remember? A rough sketch of that rolling approach is below.
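
A rolling approach in this setup could be as simple as updating one service at a time instead of redeploying the whole stack (sketch only; wait for each node to rejoin in the admin UI before moving on):

docker service update --image cockroachdb/cockroach:v1.0.3 cockroach_db1
# check the dashboard / node status, then continue
docker service update --image cockroachdb/cockroach:v1.0.3 cockroach_db2
docker service update --image cockroachdb/cockroach:v1.0.3 cockroach_db3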


The application says you are using 771 MiB of your 192 GB, while the FlashArray is using maybe 105 MB across these volumes.

A little while later…


Now we are mostly balanced with replicas in each db node.

Conclusion
This just scratches the surface of running highly scalable data applications in containers with persistent data on a FlashArray. Are you a Pure customer or potential Pure customer about to run stateful/persistent apps on Docker/Kubernetes/DCOS? I want to hear from you. Leave a comment or send me a message on Twitter @jon_2vcps.

If you are a developer and have no clue what your infrastructure team does, I am here to help make everyone's life better. No more weekend-long deployments or upgrades. Get out of doing storage performance troubleshooting.

Go to more of your kids' soccer games.

Come see @CodyHosterman at VMworld, and if he is too busy you can see me.

Look for a post about going to In-N-Out some time soon; it is my tradition.

Be sure to check out what we will be doing at VMworld at the end of the month. Click the banner below once you are done being mesmerized by Chappy. Sign up for a 1:1 demo or meeting; I'll be there and would love to meet with you. See how focused a demo I give.

 

[VMworld signup banner featuring Chappy]

Sessions to be sure to see featuring the Amazing Cody Hosterman

SDDC9456-SPO: Implementing Self-Service Storage Provisioning with vRealize Automation Xaas

VMware vCenter is no longer meant to be the end-user interface for requesting and managing virtual machines and related resources. Storage is no exception. Join Cody Hosterman as he discusses how vRealize Automation Anything-as-a-Service (Xaas) provides the ability to easily import vRealize Orchestrator workflows to control, manage and provision storage via the self-service catalog offering vRealize Automation.

Wednesday, Aug 31, 2:00 PM – 3:00 PM

NF9455-SPO: Best Practices for All-Flash Data Reduction Arrays with VMware vSphere

As All-Flash Data Reduction arrays are becoming common place in VMware environments due to their performance, flexibility and ease-of-use, it is important to understand how to best implement and manage them with ESXi. Data-reduction and flash changes how an administrator should think about various configuration options within VMware and those will be discussed in detail. VAAI, Space Reclamation, virtual disks, SIOC, SDRS Queue depths, Multipathing and other points will be highlighted.

Monday, Aug 29, 2:30 PM – 3:30 PM

FlashStack Your Way to Awesomeness

You may or may not have heard about Pure Storage and Cisco partnering to provide solutions together to help our current and prospective customers using UCS, Pure Storage, and VMware. These predesigned and tested architectures provide a full solution for compute, network and storage. Read more here:

https://www.purestorage.com/company/technology-partners/cisco.html

http://blogs.cisco.com/datacenter/accelerate-vdi-success-with-cisco-ucs-and-pure-storage

This results in CVDs (Cisco Validated Designs):

http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_flashstack_view62_5k.html

There are more coming for SQL, Exchange, SAP and general Virtual Machines (I call it JBOVMs, Just a Bunch of VM’s).

Turn-key like solution for compute, network, and storage

Know how much and what to purchase when it comes to compute, network, and storage, because we have worked with Cisco to validate with real workloads, often mixed workloads, since who runs just SQL or just Active Directory? It is proven and works, and you are up and running in a couple of days. If a couple of months (the legacy way) was not good, and 2-4 weeks (the newer way with legacy hardware) wasn't good enough, how about 1-2 days? That is a for-real next-generation datacenter. Also, scale compute, network, and storage independently: why buy extra hypervisor licenses when you just need 5 TB of space?

Ability to connect workloads from/to the public clouds (AWS, Azure)

I don’t think as many people know this as they should, but Rob Barker “Barkz” is awesome. He worked hard to prove out the ability to use Pure FlashArray with Azure compute. Great read and more details here:

Announcing: Pure Storage All-Flash Cloud for Microsoft Azure

Official Pure information here:

https://www.purestorage.com/resources/type-a/pure-storage-all-flash-cloud-azure-deployment-guide.html

Azure is ready now and AWS is in the works.

Ability to back up to the public clouds

No secret here: we are working hard to integrate with backup software vendors. Some have been slow, and others have been willing to work with our API to make seamless backup and snapshot management integration with Pure an amazing thing.

Just one example of how Commvault is enabling backup to Azure:

http://www.commvault.com/resource-library/55fc5ff8991435a6ce000c9c/backup-to-azure-with-commvault.pdf

IntelliSnap and Pure Storage

https://documentation.commvault.com/commvault/v10/article?p=features/snap_backup/pure/overview.htm

Check out how easy it is to set up Commvault and Pure Storage.

Ease of storage allocation without the need of a storage specialist

If I have ever talked to you about Pure Storage and didn't say how simple it is to use, or mention my own customers who are not "storage peeps" yet manage it quite easily, then I failed. Take away my orange sunglasses.

Whether you are looking at FlashStack or just now finding out how easy it is, remember: no storage Ph.D. required. We even have nearly everything you need built into our free vSphere plugin. Info from Cody Hosterman here:

The Pure Storage Plugin for the vSphere Web Client

Here is a demo if you want to see how it works. This is a little older but I know he is working on some new stuff.

Even better, if you would like to automate end to end and tie Pure Storage provisioning to UCS Director, that is possible too! See here:

VMware Space Consumption on Thin Provisioned Data-Reducing Arrays

A common question I get from my customers is: why does vSphere say my datastore is full when the array is only 4% used? I usually give a quick explanation of how the VMFS file system has no clue that the block device underneath is actually deduping and compressing the data. So even though you provisioned 1 TB of VMs, the array might only write a fraction of that amount. This gets many different reactions: anger, disbelief, astonishment, and understanding. This post visually shows that what vSphere thinks is used on VMFS will not necessarily be reflected the same way on a data-reducing array (including FlashArray).

When vSphere says I am FULL


Even when the FlashArray says there is plenty of space


You can tell from my environment, which I use for testing vRealize Automation and Orchestrator, that a lot more is "used" in vSphere than is written to the array. Do the math in your head, though: 3.4 times 8.19 GB is only about 28 GB, nowhere near 169 GB. That is because we do not count thin provisioning as actual data reduction. This includes any set of zeros: space not provisioned to a VM at all, the empty VMFS space, and the empty space provisioned to a VM (lazy or eager zeroed) but not consumed or written to by the VM. Since my environment is mostly empty VMs, you can see the Total Reduction is ridiculously high.

Some solutions:
1. Use thin-provisioned VMs with automatic UNMAP in vSphere 6. Read more from Cody Hosterman here: Direct Guest OS UNMAP in vSphere 6.
This gives a closer accounting of VM provisioned space and space consumed on the array, though it is still not aware of the compression and dedupe behind the scenes on the array.
2. vVols provide the storage awareness needed to let VMware know the actual consumption per VM. Come see us at the Pure Storage booth at VMworld.

Use the plugin!


At least you can quickly see, all in one screen, that the 169.4 GB is reduced by 3.4:1 (for actual written data).

New Features in Pure1 – Analytics

The best just gets better. Pure1 Manage is Pure Storage's SaaS-based management tool for Pure customers. Besides getting tons of health, capacity, and performance information, you now have something new. It is a pretty hard upgrade process that requires updating VMs at each site and possibly some consulting services. Just kidding. It is already available, no effort from you required; just log in to pure1.purestorage.com.

Capacity Analytics


You are now able to use the Analytics tab to project your current growth and determine when your array will be getting close to full. Very nice. Included with your Pure Storage FlashArray. No extra anything to buy. Sweet.

Support

Also, as a bonus, you can now see the Support tab in the Pure1 Manage screen. This lets you see all open support tickets for each of your arrays. Simplicity wins every time. Keep checking the Pure1 portal as our team rolls out great new innovations.

Pure//Accelerate

Have you registered for Pure Accelerate yet? You should do it right now.

The next great conference where you actually learn about what is pertinent to your passion for IT. Develop insight for what is next, and hear from your peers and industry experts about moving to the next generation of IT.


In 10 years you will tell people: yeah, I was at the very first Pure//Accelerate, before EVERYONE else, before it moved to Moscone and had 30,000 people. You can be the IT hipster all over again. You can move to Portland, drink IPAs, and post pictures of them to Instagram.
