Over the last few weeks I was setting up Kubernetes in the lab. One thing I quickly learned was that managing and editing YAML files for deployments, services, and persistent volume claims became confusing and hard. Even when I had things committed in GitHub, sometimes I would make edits, not push them, and then rebuild my K8s cluster.
The last straw was when 2 of our Pure developers said that editing YAML in vi wasn't very cool and that I should start using Helm.
Needless to say, that was good advice. I still have to remember to push my repos to GitHub, but now my demonstration applications are more "cloud native": I can create and edit them in one environment, run helm install in another, and have it just work.
One request from customers is to not only provision persistent storage for Kubernetes, but also integrate it into workflows that may need to snap and copy the data for different environments, much like we do with PowerShell or Python for SQL and Oracle environments to accelerate development or QA. Pure has enabled snapshots using the Pure Provisioner as part of our Kubernetes plugin.
In this demo I show how I can take a user's data directory for JupyterHub and clone it for another user, taking advantage of all the benefits of Pure's snapshots and clones. You instantly get access to a copy of the dataset. The dataset doesn't take up room on the backend storage; only globally unique changes will grow the volume. In this use case the data science team will see increases in productivity, as they are not waiting for data to download from the cloud or copy from another place on the array.
The command to run the snap using kubectl is below:
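The plugin's README has the exact syntax; as a rough sketch of the general shape, using the generic Kubernetes VolumeSnapshot API (the snapshot name, file name, and pure-snap class are my placeholders, not necessarily what the Pure Provisioner documents):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: jupyter-user-snap
spec:
  volumeSnapshotClassName: pure-snap          # placeholder; use the class your plugin installs
  source:
    persistentVolumeClaimName: claim-jowings  # the user's JupyterHub data claim

$ kubectl create -f snap.yaml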
Make sure to follow the directions on the page to pull and install the plugin. If you are using OpenShift, pay special attention to the README. I will post more on this in the near future.
CockroachDB as our Persistent Database
I wanted a simple database that I could easily run in a container, and that isn't the same old thing. I built a Go app that writes to the database over and over, to demonstrate the inner workings of the plugin rather than to supply a performance test.
With that said, please check out how to deploy and scale a database with a persistent data platform from a Pure FlashArray. Watch this in full screen to make the CLI commands easier to see.
What you are seeing in the video:
Deploy the initial 3 pods with volumes automatically created and connected on the Pure FA.
Initialize the cluster.
Fail a node and watch K8s redeploy a new container and re-attach the data volume.
Run a load generation application as a K8s Job.
Scale the DB cluster out to 8 nodes.
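That last scale-out step is essentially a one-liner, assuming CockroachDB is deployed as a StatefulSet named cockroachdb as in the Cockroach Labs manifests:

$ kubectl scale statefulset cockroachdb --replicas=8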
What is next?
This is a really easy and quick demo, but it shows the ease of using the Pure plugin to manage persistent data, making sure you do not lose data in the event of app crashes, and scaling easily. This can all be done via policy, and the deployment can be made even easier using Helm. In a future post we will see how to take advantage of these methods while keeping the same highly available, high performance, and very easy to use persistent data platform for your application.
In the last post I mentioned there are resources already out there that do a better job than I can of helping you understand containers and Kubernetes.
So if you are a virtualization admin like me and want to make 2018 the year you know enough to be dangerous, I suggest the following resources.
Start thinking: Does this app need a VM or a container? Once you are asking the question you will begin to think critically about the choices.
I am not sure we all need to move 100% off of VMs today. Starting to ask the questions will help prepare us to provide these services to our customers when workloads and workflows that require them arise.
I have been internally debating the best way to share this latest little thing I have been working on and learning. My goal for 2018 is to post more on migrating applications from virtual machines to containers managed by K8s. That transition isn't for everything and has definitely required diving deeper into applications. There are many Kubernetes concepts I am going to skip over, as others have already explained them better. I do plan on doing a vSphere-to-K8s quick start to help us VCPs and other virtual admins get going.
I wanted to keep it running in my lab and even though it works just fine on my local laptop, I switch between PC and Mac (2 of them) and wanted my environment (and data) available from a central place. Plus, I can’t learn Kubernetes without real applications to run.
Jupyter is an open source web application that allows you to display interactive code, equations and visualizations. I use it for Data Analytics in Python.
So Jupyter is an application that can run in your conda environment. I want to run it as a container with persistent NFS storage in my Kubernetes cluster in my basement. Notebooks are the files that contain the code and visualizations. I can post notebooks to GitHub to allow others to test my work. In the GitHub repo I included a very basic file with some Python. Once you have this all running you can play with it if you would like.
So how do we get it to run? ContinuumIO, the keepers of Anaconda, provide a container image and some basic instructions for running the container on Docker. I googled for ways people run this in a cluster environment. In the near future JupyterHub will be the solution if you want multi-tenant Jupyter deployments with OAuth and all kinds of fancy features I do not need in my tiny lab.
The following files are all available on my GitHub at Conda-K8s. This worked in my environment with Kubernetes 1.9. Your mileage may vary depending on access rights, versions, and anything you do that I don't know about.
First, to create the persistent volume, you will need to create and edit the following nfs-pv.yaml file.
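The repo has the real file; here is a minimal sketch of its shape, matching the 100Gi / RWX / Retain values you will see in the kubectl output below (the server IP and path are the two things you must change):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: conda-notebooks
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    # FIXME: use the right IP and the right path
    server: 10.0.0.10
    path: /nfs/conda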
Make sure you edit the file with your NFS server IP and a valid, already-created path on your NFS share. This is where your Jupyter notebook data will be stored. If the pod crashes or the host server dies and it starts elsewhere in the cluster, your data will persist. Brilliant!
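Create it and list the persistent volumes (the output below is from my lab; the claim-jowings entry is from my JupyterHub namespace):

$ kubectl create -f nfs-pv.yaml
$ kubectl get pv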
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
claim-jowings 10Gi RWX Retain Released jupyter4me/hub-db-dir 3d
conda-notebooks 100Gi RWX Retain Bound default/conda-claim 3d
Now that the volume object is created, we can create the "claim".
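A minimal sketch of the claim file, matching the conda-claim name and 100Gi size shown in my lab output above (the real file is in the repo; nfs-pvc.yaml is my name for it):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: conda-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi

$ kubectl create -f nfs-pvc.yaml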
I am not going to get into the why of doing this, but as far as my tiny brain can understand, it is the way K8s manages which application can connect to which persistent volume. Notice how the request section of the yaml asks for 100Gi, the size of my volume from the last step.
Finally we can create the pod. The pod is Kubernetes' most basic component and what it uses to schedule an application. It can be just one container, or it can be more; for now we won't get into what all that means.
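The real conda-pod.yaml is on my GitHub; here is a sketch of its shape so the next paragraph has something to point at. I show a bare pod for simplicity, even though the suffixed pod name later on suggests mine was wrapped in a controller, and the $JUYPTERCMD value is my reconstruction from ContinuumIO's Docker instructions, so treat it as an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: conda
  labels:
    app: conda
spec:
  containers:
  - name: conda
    image: continuumio/anaconda3
    env:
    - name: JUYPTERCMD
      value: "/opt/conda/bin/conda install jupyter -y --quiet && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser --allow-root"
    command: ["/bin/bash", "-c"]
    args: ["$(JUYPTERCMD)"]       # without this the container starts, does nothing, and exits
    ports:
    - containerPort: 8888
    volumeMounts:
    - name: conda-data
      mountPath: /opt/notebooks   # where the notebooks live, backed by the PV
  volumes:
  - name: conda-data
    persistentVolumeClaim:
      claimName: conda-claim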
If you take a look at the file above, there are some things we are doing to get conda and Jupyter to work. First, notice the "env" section I created. I didn't want to create a custom container image, but rather use the default image provided by continuumio; I don't want to accidentally become reliant on my own proprietary image. Without the command and the arguments in the $JUYPTERCMD environment variable, the container starts, has nothing to do, and shuts down. K8s sees this as a failure, so it starts it again (and again and again). Also, in the volumes section we are telling the pod to use the "conda-claim" we created in the last step. Under containers, the volumeMounts declaration tells K8s to mount the PV to the mountPath inside the container.
kubectl create -f conda-pod.yaml
Now let's see what the results look like:
kubectl get pod
NAME READY STATUS RESTARTS AGE
conda-742lc 1/1 Running 0 2d
Very good, the pod is running and we have a “READY 1/1”
We need a few things to connect to the Jupyter notebook. Run the following command and notice the output: it gives you a URL with a token to access the web app. Obviously localhost is not going to work from my remote workstations, but save that token for later.
$ kubectl logs conda-742lc
Package plan for installation in environment /opt/conda:
The following NEW packages will be INSTALLED:
The following packages will be UPDATED:
anaconda: 5.0.1-py36hd30a520_1 --> custom-py36hbbc8b67_0
conda: 4.3.30-py36h5d9f9f4_0 --> 4.4.7-py36_0
pycosat: 0.6.2-py36h1a0ea17_1 --> 0.6.3-py36h0a5515d_0
+ /opt/conda/bin/jupyter-nbextension enable nbpresent --py --sys-prefix
Enabling notebook extension nbpresent/js/nbpresent.min...
- Validating: OK
+ /opt/conda/bin/jupyter-serverextension enable nbpresent --py --sys-prefix
- Writing config: /opt/conda/etc/jupyter
+ /opt/conda/bin/jupyter-nbextension enable nb_conda --py --sys-prefix
Enabling notebook extension nb_conda/main...
- Validating: OK
Enabling tree extension nb_conda/tree...
- Validating: OK
+ /opt/conda/bin/jupyter-serverextension enable nb_conda --py --sys-prefix
- Writing config: /opt/conda/etc/jupyter
[I 17:09:25.393 NotebookApp] [nb_conda_kernels] enabled, 3 kernels found
[I 17:09:25.399 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[W 17:09:25.421 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
[I 17:09:26.044 NotebookApp] [nb_anacondacloud] enabled
[I 17:09:26.050 NotebookApp] [nb_conda] enabled
[I 17:09:26.095 NotebookApp] ✓ nbpresent HTML export ENABLED
[W 17:09:26.095 NotebookApp] ✗ nbpresent PDF export DISABLED: No module named 'nbbrowserpdf'
[I 17:09:26.098 NotebookApp] Serving notebooks from local directory: /opt/notebooks
[I 17:09:26.098 NotebookApp] 0 active kernels
[I 17:09:26.098 NotebookApp] The Jupyter Notebook is running at: http://[all ip addresses on your system]:8888/?token=08938eb3b2bc00f350c43f7535e38f6aa339f5915e12d912
[I 17:09:26.098 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 17:09:26.099 NotebookApp]
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=08938<blah blah blah
We must create a "service" in Kubernetes in order for the application to be accessible. There is a ton to learn about services and ingress into applications. Since I am running on a private cluster, not on Google or Amazon, I am going to use the simplest way to create external access for this post. That is done using the "type" under the spec. See how it says NodePort in the sketch below? Also, I am not specifying an inbound port (you can do that if you want); I am just telling it to find the app called "conda" and forward traffic to TCP 8888.
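A sketch of the service file (the real one is in the repo; conda-svc.yaml is my name for it):

apiVersion: v1
kind: Service
metadata:
  name: conda-svc
spec:
  type: NodePort        # exposes the port on every node in the cluster
  selector:
    app: conda          # find the app called "conda"
  ports:
  - port: 8888
    targetPort: 8888    # forward traffic to tcp 8888 in the pod

$ kubectl create -f conda-svc.yaml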
This creates the service from the file. This is actually a cool concept that allows the inbound traffic management (ingress) to be disaggregated from the application pod/deployment. That means I can swap versions of the app without changing the inbound rules or load balancers (LB is a whole book unto itself). To see my services now I run:
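$ kubectl get svc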
Great, now we see the service is forwarding port 32250 (yours will be different) to 8888. Using the NodePort type I can hit any node in my cluster and the K8s CNI will forward the traffic.
Now just go to the URL below and paste your token from earlier:
http://<a node ip>:32250/
In my GitHub repo for this project I included a basic notebook file that shows some Python code to simulate coin flips many, many times. Feel free to "upload" it, play with it, and have fun with data science on Jupyter / conda running in a K8s cluster.
In the last couple of days I had a couple of questions from customers implementing some kind of container host on top of vSphere. Each was doing it to make use of either the Kubernetes or Docker volume plugin for Pure Storage. First, there was a little confusion over whether the actual container needs iSCSI access to the array. The container needs network access for sure (I mean, if you want someone to use the app), but it does not need access to the iSCSI network. Side note: iSCSI is not required to use the persistent storage plugins for Pure; Fibre Channel is supported. iSCSI may just be an easy path to using a Pure FlashArray, or NFS (10G network) for FlashBlade, with an existing vSphere setup.
To summarize all that: the container host VM needs access to talk directly to the storage. I accomplish this today with multiple vNICs, but you can do it however you like. There may be some vSwitches, physical NICs, and switches in the way, but the end result should be the VM talking to the FlashArray or FlashBlade.
More information on configuring our plugins is here:
Basically, the container host needs to be able to talk to the management interface of the array to do its automation of creating host objects and volumes and connecting them together (and removing them when you are finished). The thing to know is that the plugin does all the work for you. When your application manifest requests the storage, the plugin mounts the device to the required mount point inside the container. The app (container) does not know or care anything about iSCSI, NFS, or Fibre Channel (and it should not).
Container HOST Storage Networking
Container Hosts as VMs: Storage Networking
If you are setting up iSCSI in vSphere for Pure, you should probably go see Cody's pages on doing this; most of it is a good foundation for what I am about to share.
So what I normally do is setup 2 new port groups on my VDS.
Something like… iscsi-1 and iscsi-2. I know, I am very original and creative.
Set the uplink for the Portgroup
We used to set up "in guest iSCSI" for VMs that needed array-based snapshot features way back in the day. This is basically the same piping. After creating the new port groups, edit the settings in the HTML5 GUI as shown below.
Set the Failover Order
Go for iSCSI-1 on Uplink 1 and iSCSI-2 on Uplink 2
I favor putting the other uplink into "Unused", as this gives me the straightest troubleshooting path in case something downstream isn't working. You can put it in "Standby" and probably be just fine.
At KubeCon in Austin I was talking with my good friend Jonas Rosland about how I was getting to train a group of new Pure SEs on cloud native apps and planned on doing labs with installing Docker and running apps in containers. I was going to give every student a VM and let them spin up some containers. He reminded me of Play with Docker. What a great idea, I thought.
Hey @Jon_2vcps and @marcosnils, since you’re both at KubeCon you should meet up and talk about PWD for training purposes 🙂
He not only did that but went ahead and introduced me to Marcos, the developer of PWD. On the expo floor I was able to sit and chat with someone that created quite a cool way for people to experience Docker. Just a cool thing about people that do something for the community: a nice guy that was willing to answer questions and was not too busy to help someone out.
Since I had a good number of students logging in at once, it seemed a good idea to set it up on our own. So in just a day I spun up the environment and let the students get to work. Everyone had their own playground running Docker in Docker, and everyone got to do something they normally would not get a chance to play with. Once I clean out some of the Pure-specific stuff I will post the class and slides to GitHub.
I will create a post about what it took to get it up and running on my own in the next day or so. This is more of a thank you for all the work Marcos did to create this cool project for everyone to enjoy.
So if you are looking to learn a little more about Docker head to:
That one time you all of a sudden could not SSH into your Docker Swarm hosts?
I am writing this so I will remember to be smarter next time.
Ever Get this?
minas-tirith:~ jowings$ ssh scarif
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
I started to flip out and wondered why this all of a sudden happened on all four hosts in my swarm cluster. Was something actually nasty happening? Probably not, but you never know. I thought I broke the pub key on my Mac, so I went into .ssh/known_hosts and removed the entries for my hosts, as I quite commonly see this warning because I rebuild VMs and hosts all the time. Then I got something different, and the same exact error from my Windows 10 machine.
Permission denied (publickey).
Pretty sure I didn’t break 2 different ssh clients on 2 different computers.
What did I do?
$docker stack deploy -c gitlab.yml gitlab
So I am keeping local git copies and thought I would be smart and have GitLab run this service in my home lab.
Problem: in my zeal to have git use the standard SSH TCP port 22 to push my repos up to the server, I did this:
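The real gitlab.yml isn't shown here, but the offending part looked roughly like this:

services:
  gitlab:
    image: gitlab/gitlab-ce
    ports:
      - "22:22"   # publishes SSH on port 22 of the swarm ingress network, i.e. on every node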
So basically my GitLab service was now available on tcp/22 across my entire cluster. Even though the container runs on only one host, the way Docker overlay networking works, any host in that cluster will forward a request for tcp/22 to that container. A container without my public key; a container that, no matter the hostname, does not have the same SSH host "ID" as my actual hosts.
Bad move JO.
So don’t do that and stuff.
In this paper, Simon uses the Pure Docker Volume Plugin to create persistent storage for CockroachDB. That is all well and good if you want to play with CockroachDB, but it also shows the foundation for using the plugin to create persistent data for your own app.
What applications are you using with containers that require persistent (and reliable) data storage? I would be very interested in seeing how this works for everyone else with their own apps.
While I was writing this post, the awesome Simon Dodsley was writing a great whitepaper on persistent storage with Pure. As you can see, there are some very different ways to deploy CockroachDB, but the main goal is to keep your important data persistent no matter what happens to the containers as they scale, live, and die.
I know most everyone loved seeing the demo of the most mission critical app in my house. I also want to show a few quick ways to leverage the Pure plugin to provide persistent data to a database. I am posting my files I used to create the demo here https://github.com/2vcps/crdb-demo-pure
I started with the instructions provided here by Cockroach Labs.
This is an insecure installation for demo purposes. They do provide the instructions for a more Prod ready version. This is good enough for now.
My yaml file refers to a Docker image I built for the HAProxy load balancer. If it works for you, cool! If not, please follow the instructions above to create your own. If you really need to know more, I can write another post showing how to take the Dockerfile and copy the CFG generated by CRDB into a new image just for you.
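With the compose file and the HAProxy image sorted, the deploy is one command (a sketch; crdb is whatever you choose to name the stack):

$ docker stack deploy -c docker-compose.yml crdb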
As the compose file shows, this command deploys 4 services: 3 database nodes and 1 HAProxy load balancer. Each database node gets a brand new volume attached directly to the path by the Pure Docker Volume Plugin.
Each new volume is created, attached to the host via iSCSI, and mounted into the container.
Other than there being no data, do you notice anything else?
First let's generate some data.
I run this from a client machine but you can attach to one of the DB containers and run this command to generate some sample data.
cockroach gen example-data | cockroach sql --insecure --host [any host ip of your docker swarm]
I am also going to create a “bank” database and use a few containers to start inserting data over and over.
cockroach sql --insecure --host 10.21.84.7
# Welcome to the cockroach SQL interface.
# All statements must be terminated by a semicolon.
# To exit: CTRL + D.
firstname.lastname@example.org:26257/> CREATE database bank;
email@example.com:26257/> set database = bank;
firstname.lastname@example.org:26257/bank> create table accounts (
-> id INT PRIMARY KEY,
-> balance DECIMAL
-> );
I created a program in golang to insert some data into the database, just to make the charts interesting. This container starts, inserts a few thousand rows, then exits. I run it as a service with 12 replicas so it is constantly going. I call it gogogo because I am funny.
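Running it as a swarm service looks roughly like this (the image name is a placeholder for my own build):

$ docker service create --name gogogo --replicas 12 <your-gogogo-image>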
You can see the data slowly going into the volumes.
Each node remains balanced (roughly) as CockroachDB stores the data.
What happens if a container dies?
Let's make this one go away.
We kill it.
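One way to simulate the failure (grab the ID from docker ps on the node running the doomed container):

$ docker rm -f <container-id>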
Swarm starts a new one. The Docker engine uses the Pure plugin and remounts the volume. The CRDB cluster keeps on going.
New container ID but the data is the same.
Alright what do I do now?
So you want to update the image to the latest version of Cockroach? Did you notice this in our first screenshot?
Also, our database is getting a lot of hits (not really, but let's pretend), so we need to scale it out. What do we do now?
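To do the upgrade and the scale-out, I edit the compose file (image tag v1.0.3, plus new services db4, db5, and db6) and simply re-run the same deploy; a sketch, assuming the stack was originally named crdb:

$ docker stack deploy -c docker-compose.yml crdb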
(It is important to provide the name of the stack you already used, or else you get errors.)
We are going to update the services with the new images.
This will replace the containers with the new version, v1.0.3.
This will attach the existing volumes for nodes db1,db2,db3 to the already created FlashArray volumes.
Also create new empty volumes for the new scaled out nodes db4,db5,db6
CockroachDB will begin replicating the data to the new nodes.
My gogogo client "barrage" is still running.
This is kind of the shotgun approach in this non-prod demo environment. If you want no-downtime upgrades to containers, I suggest reading more on blue-green deployments. I will show how to upgrade the application with no downtime using blue-green in another post.
CockroachDB begins to rebalance the data.
If you notice the gap in the queries, it is because I updated every node all at once. A better way would be to do one at a time and make sure each node is back up as they "roll" through the upgrade to the new image. Not prod, remember?
The application says you are using 771MiB of your 192GB, while the FlashArray is using just maybe 105MB across these volumes.
A little while later…
Now we are mostly balanced with replicas in each db node.
This is just scratching the surface of running highly scalable data applications in containers with persistent data on a FlashArray. Are you a Pure customer or potential Pure customer about to run stateful/persistent apps on Docker/Kubernetes/DCOS? I want to hear from you. Leave a comment or send me a message on Twitter @jon_2vcps.
If you are a developer and have no clue what your infrastructure team does or is doing I am here to help make everyone’s life better. No more weekend long deployments or upgrades. Get out of doing storage performance troubleshooting.