PSO and “Failed to Log in to Any iSCSI Targets.”

I create and destroy Kubernetes clusters on vSphere on a pretty regular basis. Some I build with Terraform and Ansible; for others I use PKS. I have a plumbing test for Pure Service Orchestrator that mounts a single volume to a pod on each node.

Every once in a while I get an error like this, on just one node:

Failed to log in to any iSCSI targets! Will not be able to attach volume

To make sure the error isn't coming from PSO itself (and it shouldn't be, since the other nodes are working), run this command on the affected node:

iscsiadm -m discovery -t st -p 192.168.230.24
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
 iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.24,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
 iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
 iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.25,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
 iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
 iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.26,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
 iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
 iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.27,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
 192.168.230.24:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
 192.168.230.25:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
 192.168.230.26:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
 192.168.230.27:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479

Now that isn’t what the result should be. At first I thought to restart iSCSI, and that didn’t help. Then I thought, well, this is a lab, so let’s just…

#cd /etc/iscsi
#rm -r nodes

Do not try this if you have iSCSI targets for other storage on the node; you will not be happy with the result. At first I thought I should stop iSCSI before removing the directory, but it doesn’t seem to make any difference. With the stale records gone, every node is able to mount the volume and start the pod. Pure Service Orchestrator keeps retrying the mount, so it didn’t take long to see everything come up the way I wanted:

NAME                                        READY   STATUS    RESTARTS   AGE
 pure-flex-4zlcq                             1/1     Running   0          12m
 pure-flex-7stfb                             1/1     Running   0          12m
 pure-flex-g2kt2                             1/1     Running   0          12m
 pure-flex-jg5cz                             1/1     Running   0          12m
 pure-flex-n8wkw                             1/1     Running   0          6m34s
 pure-flex-rtsv7                             1/1     Running   0          12m
 pure-flex-vtph2                             1/1     Running   0          12m
 pure-flex-w8x22                             1/1     Running   0          12m
 pure-flex-wqr9k                             1/1     Running   0          12m
 pure-flex-xwbww                             1/1     Running   0          12m
 pure-provisioner-9c8dc9f79-xrq6d            1/1     Running   1          12m
 redis-master-demolocal-1-779f74876c-9k24t    1/1     Running   0          12m
 redis-master-demolocal-10-6695b56f47-zgqc7   1/1     Running   0          12m
 redis-master-demolocal-2-778666b57-5xdh8     1/1     Running   0          6m3s
 redis-master-demolocal-3-84848dfb87-fhj6n    1/1     Running   0          12m
 redis-master-demolocal-4-7c9dfdffb9-6cjv5    1/1     Running   0          12m
 redis-master-demolocal-5-65b555fc79-jjdkl    1/1     Running   0          12m
 redis-master-demolocal-6-6d495bfdf-cb5r2     1/1     Running   0          12m
 redis-master-demolocal-7-5c5db655-fx2qd      1/1     Running   0          12m
 redis-master-demolocal-8-74bc65b8d9-2bt8h    1/1     Running   0          12m
 redis-master-demolocal-9-65dd54c587-zb9p2    1/1     Running   0          12m
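If you do have other iSCSI storage on a node and wiping the whole /etc/iscsi/nodes directory feels too heavy-handed, a more surgical cleanup is possible. This is just a sketch based on the portals and IQN from the discovery output above; adjust for your own addresses:

# remove only the stale record for one Pure portal/target pair (repeat for .25, .26, .27)
sudo iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479 -p 192.168.230.24:3260 -o delete
# then re-run discovery to recreate clean records
sudo iscsiadm -m discovery -t st -p 192.168.230.24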

Get going with MicroK8s

Last week I was getting stickers from the Ubuntu booth during the Open Infrastructure Conference in Denver. I asked a sorta dumb question, since this was all so new to me. It was my very first Open Infra Conference (formerly OpenStack Summit), and I was asking a lot of questions.

I saw a sticker for MicroK8s (Micro-KATES).

Me: What is that?

Person in Booth: Do you know what MiniKube is?

Me: Yes.

Person in Booth: It is like that, but the Ubuntu opinionated version.

Me: Ok, cool, my whole lab is Ubuntu, except when it isn’t. So I’ll try it out.

Ten minutes later? Kubernetes is running on my Ubuntu 16.04 VM.

Go over to https://microk8s.io/ to get the full docs.

Want a quick lab?

snap install microk8s --classic
microk8s.kubectl get nodes
microk8s.kubectl get services

Done. What? What!

Typing microk8s.blah for everything got slightly annoying, so alias it if you don’t already have kubectl installed. I didn’t, since this was a fresh VM.

snap alias microk8s.kubectl kubectl

You can run this command to push the config into a file to be used elsewhere.

microk8s.kubectl config view --raw > $HOME/.kube/config
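That exported file works with a plain kubectl anywhere you copy it. For example (a hypothetical second machine that already has kubectl installed, pointing at the copied file):

kubectl --kubeconfig=$HOME/.kube/config get nodes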

Want the Dashboard? Run this:

microk8s.enable dns dashboard
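To confirm the add-ons actually came up, a quick look at the kube-system namespace (pod names will vary) does the trick:

microk8s.kubectl get pods -n kube-system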

It took me 5 minutes to get to this point. Now I am like, OK, let’s connect to some Pure FlashArrays.

First we need to enable privileged containers in MicroK8s. Add this line to the following two config files:

--allow-privileged=true

# kubelet config
sudo vim /var/snap/microk8s/current/args/kubelet
#kube-apiserver config
sudo vim /var/snap/microk8s/current/args/kube-apiserver
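If you prefer to append the flag from the shell instead of editing the files by hand, this is an equivalent sketch of the same change:

# append the flag to both config files
echo "--allow-privileged=true" | sudo tee -a /var/snap/microk8s/current/args/kubelet
echo "--allow-privileged=true" | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver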

Restart services to pick up the new config:

sudo systemctl restart snap.microk8s.daemon-kubelet.service
sudo systemctl restart snap.microk8s.daemon-apiserver.service

Now you can install helm, and run the Pure Service Orchestrator Helm chart.

More info on that here:

https://github.com/purestorage/helm-charts
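As a quick preview (the full walkthrough is in the Getting Started with Pure Service Orchestrator and Helm post below), the install boils down to adding the Pure Helm repo and installing the chart with a values file describing your arrays:

helm repo add pure https://purestorage.github.io/helm-charts
helm repo update
helm install --name pure-storage-driver pure/pure-k8s-plugin -f yourvalues.yaml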

The sticker joined my laptop.

It is “NFSEndPoint”

In case you are using Pure Service Orchestrator with FlashBlade: the original YAML for the arrays when installing PSO used “NfsEndPoint”. At some point it was fixed to expect “NFSEndPoint”, matching the proper capitalization of NFS. I never updated my blog and docs until now, but I think I have now updated my blog posts and the PSO guide to reflect the change.

Sample values.yaml

arrays:
  FlashArrays:
    - MgmtEndPoint: "1.2.3.4"
      APIToken: "a526a4c6-18b0-a8c9-1afa-3499293574bb"
      Labels:
        rack: "22"
        env: "prod"
    - MgmtEndPoint: "1.2.3.5"
      APIToken: "b526a4c6-18b0-a8c9-1afa-3499293574bb"
  FlashBlades:
    - MgmtEndPoint: "1.2.3.6"
      APIToken: "T-c4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.7"
      Labels:
        rack: "7b"
        env: "dev"
    - MgmtEndPoint: "1.2.3.8"
      APIToken: "T-d4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.9"
      Labels:
        rack: "6a"

New Pure Service Orchestrator Demo

You may want to make this full screen to see all the CLI glory.

What you will see in this demo is the initial install of Pure Service Orchestrator on an upstream version of Kubernetes. Then, by running the ‘helm upgrade’ command, I add a FlashArray to scale the environment and take advantage of Smart Provisioning: first we see how the new m50 is now used over the original m70. The final upgrade adds labels for the failure domain, or availability zone, in Kubernetes, and I also add my FlashBlade to enable block and file if needed for my workload. We use the sample application with node and storage selectors to request that the app use compute and storage in a particular AZ. Kubernetes will only schedule the compute on matching nodes, and PSO will provision storage on matching storage arrays.

I would love to hear what you think of this and any other ways I can show this off to enable cloud native applications. I am always looking for good examples of containerized apps that need persistent storage. Hit me up on the twitters @jon_2vcps or submit a comment below.

Kubecon 2018 Seattle Pure Storage – also We are hiring

I will be at the Pure Storage booth at Kubecon next week, December 11-13, Booth G7. Come see us to learn about Pure Service Orchestrator and Cloud Block Store for AWS, and find out how our customers are leveraging K8s to transform their applications and Pure Storage for their persistent storage needs.

It has been a fun nearly two years at Pure, working with customers that already love Pure Storage for things like Oracle, SQL and VMware as they move into the world of K8s and containers, and helping customers that have never used Pure before move from complicated or underperforming persistent storage solutions to FlashArray or FlashBlade. With Cloud Block Store entering beta and GA later next year, even more customers will want to see how to automate storage persistence on premises, in the public cloud, or in a hybrid model. All of that to say: if you are an architect looking to grow on our team, please find me at Kubecon. I want to meet you and learn why you love cloud, containers, Kubernetes, and automating all the things in between.

  • Send me a message on twitter @jon_2vcps
  • Find me at the Pure Booth
  • Stop me in the hall between sessions.

I look just like one of the following people:

Pure Service Orchestrator Guide

Over the last few months I have been compiling information that I have used to help customers when it comes to PSO. Using Helm and PSO is very simple, but with so many different ways to set up K8s right now it can require broad knowledge of how volume plugins work. I will add new samples and workarounds to this GitHub repo as I come across them; for now, enjoy. It covers the volume plugin paths for the Kubespray, Kubeadm, OpenShift and Rancher versions of Kubernetes, plus some quota samples and even some PSO FlashArray snapshot and clone examples.

https://github.com/2vcps/PSO-Guide

A nice picture of some containers, because it annoys some people, which makes me think it is funny.

Kubernetes on VMware vSphere Demo and more

This post is a recap of my session at VMworld last week in Las Vegas. Unfortunately, due to the lighting, the demo was not very easy to see in the room, which was really disappointing. I posted the full demo here on YouTube:

All of the scripts and instructions are available here on my github repo.

https://github.com/2vcps/vmworld2018_vin3762bus

Coming up next is some work around kubespray and terraform.

 

Storage Quotas in Kubernetes

One thing I have been asked since we released Pure Service Orchestrator is, “How do we control how much a developer or user can deploy?”

I played around with some of the settings from the K8s documentation for quotas and limits. I uploaded these into my gists on GitHub.

git clone git@gist.github.com:d0fba9495975c29896b98531b04badfd.git
#create the namespace as a cluster-admin
kubectl create -f dev-ns.yaml
#create the quota in that namespace
kubectl -n development create -f storage-quota.yaml
#or if you want to create CPU and Memory and other quotas too
kubectl -n development create -f quota.yaml

This limits users in that namespace to a certain number of Persistent Volume Claims (PVCs) and/or a total amount of requested storage. Both can be useful when you don’t want someone to create 10,000 1Gi volumes on an array, or one giant 100Ti volume.
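For reference, a ResourceQuota along these lines is what storage-quota.yaml boils down to (a sketch; the gist above is the source of truth, and the numbers here are just placeholders):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    # maximum number of PVCs allowed in the namespace
    persistentvolumeclaims: "10"
    # maximum total capacity requested across all PVCs in the namespace
    requests.storage: 500Gi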

Credit to dilbert.com. When I searched for quotas on the internet this made me laugh. I work with salespeople a lot.

 

VMworld 2018 in Las Vegas

I was going to write my own post, but Cody Hosterman already did a great one.

Cody’s VMworld 2018 and Pure Storage Blog

The sessions are filling up, so it is a good idea to register and get there early. I am very excited to be talking about Kubernetes on vSphere; the session will follow my journey of learning containers and Kubernetes over the last two years or so. I hope everyone learns something.

Here I am last year, talking about containers in front of a container. Boom!

Getting Started with Pure Service Orchestrator and Helm

Why Pure Service Orchestrator?

At Pure we have been working hard to provide a persistent data layer that meets our customers’ expectations for ease of use and simplicity. The first iteration of this was released as the Docker and Kubernetes plugins.

The plugins provided automated storage provisioning, which solved a portion of the problem. All the while, we were working on the service that resides within those plugins: a service that lets us manage many arrays together, both block and file.

The new Pure Service Orchestrator allows smart provisioning across many arrays: on-demand persistent storage for developers, placed on the best array or adhering to your policies based on labels.

To install, you can use the traditional shell script as described in the readme file here.

The second way, which may fit better into your own software deployment strategy, is to use Helm. Since Helm provides a very quick and simple way to install, and it may be new to you, the rest of this post covers how to get started with PSO using Helm.

Installing Helm

Please be sure to install Helm using the correct RBAC instructions.

I describe the process in my blog here.

http://54.88.246.86/2018/03/27/getting-started-with-helm-for-k8s/ 

Also, get acquainted with the official Helm documentation at the following site:

https://docs.helm.sh/using_helm/

Once Helm is fully functioning with your Kubernetes cluster, run the following commands to set up the Pure Storage Helm repo:

helm repo add pure https://purestorage.github.io/helm-charts
helm repo update
helm search pure-k8s-plugin

Additionally, you need to create a YAML file with the following format and contents:

arrays:
  FlashArrays:
    - MgmtEndPoint: "1.2.3.4"
      APIToken: "a526a4c6-18b0-a8c9-1afa-3499293574bb"
      Labels:
        rack: "22"
        env: "prod"
    - MgmtEndPoint: "1.2.3.5"
      APIToken: "b526a4c6-18b0-a8c9-1afa-3499293574bb"
  FlashBlades:
    - MgmtEndPoint: "1.2.3.6"
      APIToken: "T-c4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.7"
      Labels:
        rack: "7b"
        env: "dev"
    - MgmtEndPoint: "1.2.3.8"
      APIToken: "T-d4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.9"
      Labels:
        rack: "6a"

You can do a dry run of the installation if you want to see the output without changing anything on your cluster. Just remember the path to the YAML file you created above:

helm install --name pure-storage-driver pure/pure-k8s-plugin -f <your_own_dir>/yourvalues.yaml --dry-run --debug

If you are satisfied with the output of the dry run, run the install:

helm install --name pure-storage-driver pure/pure-k8s-plugin -f <your_own_dir>/yourvalues.yaml
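Once the install finishes you should see the pure-flex DaemonSet pod on each node and the pure-provisioner pod running, much like the pod listing earlier on this page. A quick check:

kubectl get pods --all-namespaces | grep pure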

Please check the GitHub page hosting the Pure Storage repo for more detail.

https://github.com/purestorage/helm-charts/tree/master/pure-k8s-plugin#how-to-install

Setting the Default StorageClass

Since we do not want to assume you only have Pure Storage in your environment, we do not force ‘pure’ as the default StorageClass in Kubernetes.

If you have already installed the plugin via Helm and need to set the default class to pure, run this command:

kubectl patch storageclass pure -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

If you have another StorageClass set as default and you wish to change it to pure, you must first remove the default annotation from the other StorageClass and then run the command above. Having two defaults will produce undesired results. To remove the default annotation, run this command:

kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
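You can verify which StorageClass currently carries the default marker (it shows up as “(default)” next to the name) with:

kubectl get storageclass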

Read more about these commands from the K8s documentation.

https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/

Demo

Maybe you are a visual learner; if so, check out these two demos showing the Helm installation in action.

Updating your Array information

If you need to add a new FlashArray or FlashBlade, simply add the information to your YAML file and update via Helm. You can also edit the ConfigMap within Kubernetes, and there are good reasons to do it that way, but for simplicity we will stick to using Helm for changes to the array information. Once your file contains the new array or label, run the following command:

helm upgrade pure-storage-driver pure/pure-k8s-plugin -f <your_own_dir>/yourvalues.yaml --set ...

Upgrading using Helm

Using the same general process, you can run the following command to update the version of Pure Service Orchestrator:

helm upgrade pure-storage-driver pure/pure-k8s-plugin -f <your_own_dir>/yourvalues.yaml --version <target version>
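To see which chart versions are available to target, you can list every version in the repo (Helm 2 syntax, matching the commands above):

helm search pure-k8s-plugin -l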

Upgrading from the legacy plugin to the Helm version

Follow the instructions here:

https://github.com/purestorage/helm-charts/tree/master/pure-k8s-plugin#how-to-upgrade-from-the-legacy-installation-to-helm-version

There are a few platform-specific considerations to keep in mind if you are using any of the following:

  1. Containerized kubelet (some flavors of K8s do this; Rancher and OpenShift are two).
  2. CentOS/RHEL Atomic Linux
  3. CoreOS
  4. OpenShift
  5. OpenShift Containerized Deployment

Be certain to read through the notes if you use any of these platform versions.

https://github.com/purestorage/helm-charts/tree/master/pure-k8s-plugin#how-to-upgrade-from-the-legacy-installation-to-helm-version

https://github.com/purestorage/helm-charts/tree/master/pure-k8s-plugin#platform-specific-considerations