CockroachDB with Persistent Data

There IS an Official Whitepaper!

While I was writing this post, the awesome Simon Dodsley was writing a great whitepaper on persistent storage with Pure. As you can see, there are some very different ways to deploy CockroachDB, but the main goal is to keep your important data persistent no matter what happens to the containers as they scale, live, and die.

I know most everyone loved seeing the demo of the most mission-critical app in my house. I also want to show a few quick ways to leverage the Pure plugin to provide persistent data to a database. I am posting the files I used to create the demo here: https://github.com/2vcps/crdb-demo-pure

First note
I started with the instructions provided by Cockroach Labs.
This is an insecure installation for demo purposes. They do provide instructions for a more production-ready version. This is good enough for now.

Second note
The load balancer I used was created for my environment using the instructions for generating the HAProxy config file, found here on the Cockroach Labs website:
https://www.cockroachlabs.com/docs/stable/generate-cockroachdb-resources.html

My YAML file refers to a Docker image I built for the HAProxy load balancer. If it works for you, cool! If not, please follow the instructions above to create your own. If you really need to know more, I can write another post showing how to take the Dockerfile and copy the CFG generated by CRDB into a new image just for you.
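For the impatient, the gist is small enough to sketch here. This assumes you already generated haproxy.cfg per the Cockroach Labs instructions linked above; the image tag and HAProxy base version are illustrative guesses, not necessarily what my jowings/crdb-proxy image uses:

```shell
# Hypothetical sketch: wrap a generated haproxy.cfg in a new image.
# Assumes haproxy.cfg already exists in the directory passed as $2.
build_proxy_image() {
  local tag="$1" cfgdir="$2"
  cat > "$cfgdir/Dockerfile" <<'EOF'
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
EOF
  docker build -t "$tag" "$cfgdir"
}
# usage: build_proxy_image my-crdb-proxy:v1 ./haproxy-out
```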

 

My nice little docker swarm

media_1501095950777.png

I have three VMware VMs running Ubuntu 16.04, with Docker CE and the Pure plugin already installed. Read more here if you want to install the plugin.

media_1501096079095.png

Run the deploy

https://github.com/2vcps/crdb-demo-pure/blob/master/3node-cockroachdb-pure.yml

version: '3.1'
services:
    db1:
      image: cockroachdb/cockroach:v1.0.2
      deploy:
            mode: replicated
            replicas: 1
      ports:
            - 8888:8080
      command: start --advertise-host=cockroach_db1 --logtostderr --insecure
      networks:
            - cockroachdb
      volumes:
            - cockroachdb-1:/cockroach/cockroach-data
    db2:
      image: cockroachdb/cockroach:v1.0.2
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db2 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-2:/cockroach/cockroach-data
    db3:
      image: cockroachdb/cockroach:v1.0.2
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db3 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-3:/cockroach/cockroach-data
    crdb-proxy:
      image: jowings/crdb-proxy:v1
      deploy:
         mode: replicated
         replicas: 1
      ports:
         - 26257:26257
      networks: 
         - cockroachdb

networks:
    cockroachdb:
        external: true

volumes:
    cockroachdb-1:
      driver: pure
    cockroachdb-2:
      driver: pure
    cockroachdb-3:
      driver: pure

 

#docker stack deploy -c 3node-cockroachdb-pure.yml cockroach

As the compose file shows, this command deploys four services: three database nodes and one HAProxy load balancer. Each database node gets a brand new volume attached directly to the mount path by the Pure Docker Volume Plugin.
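If you want to sanity-check the deploy, a quick sketch (stack name "cockroach" assumed, matching the command above):

```shell
# Sketch: confirm the stack services and Pure volumes landed as expected.
check_stack() {
  docker stack services "$1"             # expect 4 services at 1/1 replicas
  docker volume ls --filter driver=pure  # the three cockroachdb-N volumes
}
# usage: check_stack cockroach
```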

New Volumes

media_1501098437804.png

Each new volume is created, attached to the host via iSCSI, and mounted into the container.

Cool Dashboard

media_1501098544719.png

Other than there being no data, do you notice something else?
First, let's generate some data.
I run this from a client machine, but you can attach to one of the DB containers and run this command to generate some sample data.

cockroach gen example-data | cockroach sql --insecure --host [any host ip of your docker swarm]

media_1501098910914.png

I am also going to create a “bank” database and use a few containers to start inserting data over and over.

cockroach sql --insecure --host 10.21.84.7
# Welcome to the cockroach SQL interface.
# All statements must be terminated by a semicolon.
# To exit: CTRL + D.
[email protected]:26257/> CREATE database bank;
CREATE DATABASE
[email protected]:26257/> set database = bank;
SET
[email protected]:26257/bank> create table accounts (
-> id INT PRIMARY KEY,
-> balance DECIMAL
-> );
CREATE TABLE
[email protected]:26257/bank> ^D

I created a program in Go to insert some data into the database just to make the charts interesting. This container starts, inserts a few thousand rows, then exits. I run it as a service with 12 replicas so it is constantly going. I call it gogogo because I am funny.
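The real gogogo is a small Go program, but the core of it boils down to something like this shell sketch (host and row count are parameters I made up for illustration; `unique_rowid()` keeps the primary keys from colliding across the 12 replicas):

```shell
# Rough shell stand-in for my gogogo loader: insert rows, then exit.
gogogo() {
  local host="$1" rows="${2:-1000}"
  for i in $(seq 1 "$rows"); do
    cockroach sql --insecure --host "$host" \
      -e "INSERT INTO bank.accounts (id, balance) VALUES (unique_rowid(), $((RANDOM % 1000)));"
  done
}
# usage: gogogo 10.21.84.7 2000
```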

media_1501108005294.png

gogogo

media_1501108062456.png
media_1501108412285.png

You can see the data slowly going into the volumes.

media_1501171172944.png

Each node remains (roughly) balanced as CockroachDB stores the data.

What happens if a container dies?

media_1501171487843.png

Let's make this one go away.

media_1501171632191.png

We kill it.
Swarm starts a new one. The Docker engine uses the Pure plugin and remounts the volume. The CRDB cluster keeps on going.
New container ID, but the data is the same.

media_1501171737281.png
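If you want to reproduce the failure test yourself, here is a hedged sketch (service name cockroach_db2 is an assumption; yours may differ):

```shell
# Sketch: kill a DB task's container and watch Swarm bring it back
# with the same Pure volume attached.
kill_and_watch() {
  local svc="$1" cid
  cid=$(docker ps -q --filter "name=$svc")
  docker kill "$cid"
  docker service ps "$svc"   # new task ID, same data volume
}
# usage: kill_and_watch cockroach_db2
```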

Alright what do I do now?

media_1501171851533.png

So you want to update the image to the latest version of Cockroach? Did you notice this in our first screenshot?

Also, our database is getting a lot of hits (not really, but let's pretend), so we need to scale it out. What do we do now?

https://github.com/2vcps/crdb-demo-pure/blob/master/6node-cockroachdb-pure.yml

version: '3.1'
services:
    db1:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
            mode: replicated
            replicas: 1
      ports:
            - 8888:8080
      command: start --advertise-host=cockroach_db1 --logtostderr --insecure
      networks:
            - cockroachdb
      volumes:
            - cockroachdb-1:/cockroach/cockroach-data
    db2:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db2 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-2:/cockroach/cockroach-data
    db3:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db3 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-3:/cockroach/cockroach-data
    crdb-proxy:
      image: jowings/crdb-haproxy:v2
      deploy:
         mode: replicated
         replicas: 1
      ports:
         - 26257:26257
      networks: 
         - cockroachdb
    db4:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db4 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-4:/cockroach/cockroach-data
    db5:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db5 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-5:/cockroach/cockroach-data
    db6:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db6 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-6:/cockroach/cockroach-data
networks:
    cockroachdb:
        external: true

volumes:
    cockroachdb-1:
      driver: pure
    cockroachdb-2:
      driver: pure
    cockroachdb-3:
      driver: pure
    cockroachdb-4:
      driver: pure
    cockroachdb-5:
      driver: pure
    cockroachdb-6:
      driver: pure
$docker stack deploy -c 6node-cockroachdb-pure.yml cockroach

(It is important to provide the name of the stack you already used; otherwise you will get errors.)

media_1501172007803.png

We are going to update the services with the new images.

  1. This will replace each container with the new version, v1.0.3.
  2. This will attach the existing volumes for nodes db1, db2, and db3 to the already-created FlashArray volumes.
  3. It will also create new empty volumes for the new scaled-out nodes db4, db5, and db6.
  4. CockroachDB will begin replicating the data to the new nodes.
  5. My gogogo client “barrage” is still running.

This is kind of the shotgun approach in this non-prod demo environment. If you want no-downtime upgrades to containers, I suggest reading more on blue-green deployments. I will show how to make the application upgrade with no downtime using blue-green in another post.

CockroachDB begins to rebalance the data.

media_1501172638117.png

6 nodes

media_1501172712079.png

If you notice the gap in the queries, it is because I updated every node all at once. A better way would be to do one node at a time and make sure each is back up before moving on as they “roll” through the upgrade to the new image. Not prod, remember?
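A sketch of that gentler approach, assuming the same service names as the stack above (`docker service update` swaps the image for one service; the loop just serializes the nodes):

```shell
# Sketch: upgrade the DB services one at a time instead of all at once.
rolling_update() {
  local image="$1"; shift
  local svc
  for svc in "$@"; do
    docker service update --image "$image" "$svc"
    # in real life: verify the node rejoined the cluster before continuing
  done
}
# usage: rolling_update cockroachdb/cockroach:v1.0.3 \
#          cockroach_db1 cockroach_db2 cockroach_db3
```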

media_1501172781312.png
media_1501172828992.png

The application says you are using 771 MiB of your 192 GB, while the FlashArray is using maybe 105 MB across these volumes.

A little while later…

media_1501175811897.png

Now we are mostly balanced with replicas in each db node.

Conclusion
This is just scratching the surface of running highly scalable data applications in containers with persistent data on a FlashArray. Are you a Pure customer or potential Pure customer about to run stateful/persistent apps on Docker/Kubernetes/DCOS? I want to hear from you. Leave a comment or send me a message on Twitter @jon_2vcps.

If you are a developer and have no clue what your infrastructure team does or is doing, I am here to help make everyone's life better. No more weekend-long deployments or upgrades. Get out of doing storage performance troubleshooting.

Go to more of your kids' soccer games.

FlashStack Your Way to Awesomeness

You may or may not have heard that Pure Storage and Cisco are partnering to provide solutions to help our current and prospective customers using UCS, Pure Storage, and VMware. These pre-designed and tested architectures provide a full solution for compute, network, and storage. Read more here:

https://www.purestorage.com/company/technology-partners/cisco.html

http://blogs.cisco.com/datacenter/accelerate-vdi-success-with-cisco-ucs-and-pure-storage

This results in CVDs (Cisco Validated Designs):

http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_flashstack_view62_5k.html

There are more coming for SQL, Exchange, SAP, and general virtual machines (I call it JBOVMs, Just a Bunch of VMs).

Turn-key-like solution for compute, network, and storage

Know how much and what to purchase when it comes to compute, network, and storage, as we have worked with Cisco to validate with actual real workloads; many times mixed workloads, because who runs just SQL or just Active Directory? It is proven and it works. Up and running in a couple of days. If a couple of months was not good (the legacy way), and 2-4 weeks (the newer way with legacy hardware) was not good enough, how about 1-2 days? For reals, a next-generation datacenter. Also, scale compute, network, and storage independently. Why buy extra hypervisor licenses when you just need 5 TB of space?

Ability to connect workloads from/to the public clouds (AWS, Azure)

I don’t think as many people know this as they should, but Rob Barker “Barkz” is awesome. He worked hard to prove out the ability to use Pure FlashArray with Azure compute. Great read and more details here:

Announcing: Pure Storage All-Flash Cloud for Microsoft Azure

Official Pure information here:

https://www.purestorage.com/resources/type-a/pure-storage-all-flash-cloud-azure-deployment-guide.html

Azure is ready now and AWS is in the works.

Ability to backup to the public clouds.

No secret here: we are working hard to integrate with backup software vendors. Some have been slow; others have been willing to work with our API to make seamless backup and snapshot management integration with Pure an amazing thing.

Just one example of how Commvault is enabling backup to Azure:

http://www.commvault.com/resource-library/55fc5ff8991435a6ce000c9c/backup-to-azure-with-commvault.pdf

IntelliSnap and Pure Storage

https://documentation.commvault.com/commvault/v10/article?p=features/snap_backup/pure/overview.htm

Check out how easy it is to set up Commvault with Pure Storage.

Ease of storage allocation without the need for a storage specialist

If I have ever talked to you about Pure Storage and didn't say how simple it is to use, or mention my own customers that are not “Storage Peeps” who manage it quite easily, then I failed. Take away my orange sunglasses.

If you are looking at FlashStack, or are just now finding out how easy it is, remember: no storage Ph.D. required. We even have nearly everything you need built into our free vSphere plugin. Info from Cody Hosterman here:

The Pure Storage Plugin for the vSphere Web Client

Here is a demo if you want to see how it works. This is a little older but I know he is working on some new stuff.

Even better, if you would like to automate end to end and tie Pure Storage provisioning into UCS Director, that is possible too! See here:

Pure//Accelerate

Have you registered for Pure Accelerate yet? You should do it right now.

The next great conference, where you actually learn about what is pertinent to your passion for IT. Develop insight into what is next, and hear from your peers and industry experts about moving to the next generation of IT.

accelerate

In 10 years you will tell people: yeah, I was at the very first Pure//Accelerate, I was there before EVERYONE else. You can be the IT hipster all over again, before it moved to Moscone and had 30,000 people. You can move to Portland and drink IPAs and post pictures of them to Instagram.

JPEG image-069AC5307867-1

UNMAP – Do IT!

Pretty sure my friend Cody Hosterman has talked about this until he turned blue in the face. Just a point I want to quickly reiterate here for the record: run UNMAP on your vSphere datastores.

Read this if you are running Pure Storage, but even if you run other arrays (especially all-flash), find a way to run UNMAP on a regular basis:

http://www.codyhosterman.com/2016/01/flasharray-unmap-script-with-the-pure-storage-powershell-sdk-and-poweractions/

Additionally, start to learn the ins-n-outs of vSphere 6 and automatic UNMAP!

http://blog.purestorage.com/direct-guest-os-unmap-in-vsphere-6-0-2/

Speaking of In-n-out…. I want a double double before I start Whole 30.

in-n-out

Easy Storage Monitoring – Setting Up PureELK with Docker

[UPDATE June 2016: It appears this works with Ubuntu only, maybe a Debian flavor. I am hearing RHEL is problematic for getting the dependencies working.]

I have blogged in the past about setting up vROPS (vCOPS) and Splunk to monitor a Pure Storage FlashArray using the REST API. Scripts and GETs and PUTs are fun and all, but what if there was a simple tool you could install to have your own on-site monitoring and analytics for your FlashArrays?

Enter PureELK. Some super awesome engineers back in Mountain View wrote this integration for Pure and ELK, packaged it into an amazingly easy installation, and released it on GitHub! Open source and ready to go!
https://github.com/pureelk

and

http://github.com/pureelk/pureelk

Don't know Docker? Cool, we will install it for you. Don't know Kibana or Elasticsearch? Got you covered. One line on a fresh Ubuntu install (I used Ubuntu, but I bet your favorite flavor will suffice).

go ahead and try:

curl -s https://raw.githubusercontent.com/pureelk/pureelk/master/pureelk.sh | bash -s install

(Fixed URL to reflect that the script is no longer in dev.)

This will download and install Docker, set up all the dependencies for PureELK, and let you know where to point your browser to configure your FlashArrays.

I had one small snag:

Connecting to the Docker Daemon!

media_1450716022076.png

My user was not in the right group to connect to Docker the first time. The Docker install, when it is not automated, actually tells you to add your user to the “docker” group:

$sudo usermod -aG docker [username]

Logging out and back in did the trick. If you know a better way for the change to be recognized without logging out, let me know in the comments.
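One candidate shortcut, for what it is worth: `newgrp docker` starts a subshell with the new group membership active, so you can skip the full logout. A sketch (the helper name is mine):

```shell
# Sketch: add a user to the docker group, then refresh membership
# in-place with newgrp instead of logging out.
add_to_docker_group() {
  sudo usermod -aG docker "${1:-$USER}"
  echo "now run: newgrp docker   # subshell with the new group active"
}
```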

I re-ran the install
curl -s https://raw.githubusercontent.com/pureelk/pureelk/dev/pureelk.sh | bash -s install

In about 4 minutes I was able to hit the management IP and start adding FlashArrays!

Quickly add all your FlashArrays

media_1450715719804.png

Click the giant orange PLUS button.

This is great if you have more than one FlashArray. If you only have one it still works. Everyone should have more Flash though right?

media_1450715771293.png

Fill in your FlashArray information. You can choose the time-to-live for the metrics and how often to pull data from the FlashArray.

Success!

media_1450715937834.png

I added a couple of arrays for fun and then clicked “Go to Kibana”.
I could also have gone to
https://[server ip]:5601

Data Already Collecting

media_1450716109188.png

This is just the beginning. In the next post I will share some of the pre-packaged dashboards and also some of the customizations you can make in order to visualize all the data PureELK is pulling from the REST API. Have fun with this free tool. It can be downloaded and set up in less than 10 minutes on a Linux machine, 15 minutes if you need to build a new VM.

Register: VMUG Webinar and Pure Storage September 22

Register here: http://tinyurl.com/pq5fd9k

September 22 at 1:00pm Eastern time, Pure Storage and VMware will be highlighting the results of an ESG Lab Validation paper. The study on consolidating workloads with VMware and Pure Storage used a single FlashArray//m50 and deployed five virtualized mission-critical workloads: VMware Horizon View, Microsoft Exchange Server, Microsoft SQL Server (OLTP), Microsoft SQL Server (data warehouse), and Oracle (OLTP). While I won't steal all the thunder, it is good to note that all of this was run with zero tuning on the applications. Want out of the business of tweaking and tuning everything in order to get just a little more performance from your application? Problem solved. Plus, check out the FlashArray and its consistent performance even during failures.

Tier 1 workloads in 3U of awesomeness

wpid1910-media_1442835406510.png

You can see in the screenshot the results of running tier-one applications on an array made to withstand the real-world ups and downs of the datacenter. Things happen to hardware, and even software, but it is good to see the applications still doing great. We always tell customers: it is not how fast the array is in a pristine benchmark, but how it responds when things are not going well, when a controller loses power or a drive (or two) fails. That is what sets Pure Storage apart (that, and data reduction and real Evergreen Storage).

Small note: another proven environment with near-32k block sizes. This one hung out between 20k and 32k; don't fall for 4k or 8k nonsense benchmarks. When the blocks hit the array from VMware, this is just what we see.

Register for the Webinar
http://tinyurl.com/pq5fd9k
You can win a GoPro too.

PureStorage + REST API + Splunk = Fun with Data about Data

A few months back I posted a PowerShell script to post Pure Storage data directly into VMware vCenter Operations Manager (now called vRealize Operations). Inspiration hit me like a brick when a big customer of mine said, “Do you have a plugin for Splunk?”

He had already written some scripts in Python to pull data from our REST API. He just said, “Sure wish I didn't have to do this myself.” I took the hint. Now, I am not a Python person, so I did the best I could with the tools I have.
You will notice that the script is very similar to the one I wrote for vCOPS. That is because open REST APIs rock; if you don't have one for your product, you are wrong. 🙂

The formatting in WordPress ALWAYS breaks scripts when I paste them, so head over to GitHub and download the script today.
https://github.com/2vcps/post-rest2splunk/tree/master

Like before, I schedule this as a task to run every 5 minutes. That seems not to explode the tiny Splunk VM I am running in VMware Fusion to test this out.

Dashboards. Check.

wpid1855-media_1429109420445.png

Some very basic dashboards I created. I am not a Splunk ninja; perhaps you know one? I am sure people that have done this for a while can pull much better visuals out of this data.

wpid1856-media_1429109524852.png
wpid1857-media_1429109617758.png

Pivot Table

wpid1858-media_1429109962843.png

Stats from a lab array, with some averages computed by Splunk.

Gauge report of max latency (that is microseconds).

wpid1859-media_1429110138347.png

1000 of these is 1 millisecond 🙂 pretty nice.

From Wikipedia
A microsecond is an SI unit of time equal to one millionth (0.000001, or 10^-6, or 1/1,000,000) of a second. Its symbol is μs. One microsecond is to one second as one second is to 11.574 days. A microsecond is equal to 1000 nanoseconds or 1/1,000 of a millisecond.
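Since the array reports latency in microseconds, a one-liner helps when eyeballing dashboards (helper name is mine):

```shell
# convert microseconds to milliseconds for easier reading
usec_to_ms() { awk -v us="$1" 'BEGIN { printf "%.3f", us / 1000 }'; }
usec_to_ms 1000; echo   # 1.000
```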

Even if everything else didn’t help you at least you learned that today. Right?

The link to github again https://github.com/2vcps/post-rest2splunk/tree/master

Top 5 – Pure Storage Technical Blog Posts 2014

Today I thought it would be pretty cool to list out my five favorite technical blog posts that pertain to Pure Storage. These are posts that I use to show customers how to get things done without re-inventing the wheel. Big thanks to Barkz and Cody for all the hard work they put in this year. Looking forward to even more awesomeness this year.

SQL Server 2014 Prod/Dev with VMware PowerCLI and Pure Storage PowerShell Toolkit – Rob “Barkz” Barker

Enhanced UNMAP script using PowerCLI and the RESTful API – Cody Hosterman

VMware PowerCLI and Pure Storage – Cody Hosterman
Check out the great script to set all the vSphere best practices for the Pure Storage FlashArray.

Pure Storage PowerShell Toolkit Enhancements – Rob “Barkz” Barker

PowerActions – The PowerCLI Plugin for the vSphere Web Client with UNMAP – Cody Hosterman

JO-Unicorn-Rainbow

VMware vCenter Operations Manager and Pure Storage Rest API

I was playing with the REST API and PowerShell in order to provision vSphere datastores. I started to think: what else could we do with all the cool information we get from the Pure Storage REST API?
I remembered some really cool people here and here had used the open HTTP Post adapter, so I started to work on how to pull data out of the FlashArray and into vCOPS.

Pure Dashboard

media_1407352996822.png

We already get some pretty awesome stats in the Pure web GUI. What we don't get are trends and analysis. I don't see how my data reduction increases and decreases over time, and I don't get stats from multiple arrays in one place.

First dashboard with array stats, heat map, and health based on the vCOPS baseline

media_1407353219054.png
media_1407360492500.png

Array Level Stats

First, each of these scripts requires PowerShell 4.0.
1. Enter the FlashArray names in the $FlashArrayName variable. You can see I have 4 arrays in the Pure SE lab.
2. I create a file with the credential for vCOPS. Since we are going to schedule this script to run every few minutes, you need to create this file. More information on creating that credential here: http://blogs.technet.com/b/robcost/archive/2008/05/01/powershell-tip-storing-and-using-password-credentials.aspx

You MUST read and do that to create the cred.txt file in c:\temp that I reference in the script.

3. Change the $url variable to be the IP or name of your vCOPS UI server.
4. Don't forget to modify the Pure FlashArray username and password in each script.

Find it on GitHub https://github.com/2vcps/purevcops-array

[code]
cls
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
$FlashArrayName = @('pure1','pure2','pure3','pure4')

$AuthAction = @{
password = "pass"
username = "user"
}

# will ignore SSL or TLS warnings when connecting to the site
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
$pass = cat C:\temp\cred.txt | ConvertTo-SecureString
$mycred = New-Object -TypeName System.Management.Automation.PSCredential -argumentlist "admin",$pass

# function to perform the HTTP Post web request
function post-vcops ($custval,$custval2,$custval3)
{
# url for the vCOps UI VM. Should be the IP, NETBIOS name or FQDN
$url = "<vcops ip>"
#write-host "Enter in the admin account for vCenter Operations"

# prompts for admin credentials for vCOps. If running as scheduled task replace with static credentials
$cred = $mycred

# sets resource name
$resname = $custval3

# sets adapter kind
$adaptkind = "Http Post"
$reskind = "Pure FlashArray"

# sets resource description
$resdesc = "<flasharraydesc>"

# sets the metric name
$metname = $custval2

# sets the alarm level
$alrmlev = "0"

# sets the alarm message
$alrmmsg = "alarm message"

# sets the time in epoch milliseconds (end time taken in UTC so the value is not offset by the local timezone)
$epoch = [decimal]::Round((New-TimeSpan -Start (get-date -date "01/01/1970") -End ((get-date).ToUniversalTime())).TotalMilliseconds)

# takes the above values and combines them to set the body for the Http Post request
# these are comma separated and because they are positional, extra commas exist as place holders for
# parameters we didn’t specify
$body = "$resname,$adaptkind,$reskind,,$resdesc`n$metname,$alrmlev,$alrmmsg,$epoch,$custval"

# executes the Http Post Request
Invoke-WebRequest -Uri "https://$url/HttpPostAdapter/OpenAPIServlet" -Credential $cred -Method Post -Body $body
#write-host $resname
#write-host $custval2 "=" $custval "on" $custval3
}
ForEach($element in $FlashArrayName)
{
$faName = $element.ToString()
$ApiToken = Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/apitoken" -Body $AuthAction

$SessionAction = @{
api_token = $ApiToken.api_token
}
Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/session" -Body $SessionAction -SessionVariable Session

$PureStats = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/array?action=monitor" -WebSession $Session
$PureArray = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/array?space=true" -WebSession $Session
ForEach($FlashArray in $PureStats) {

$wIOs = $FlashArray.writes_per_sec
$rIOs = $FlashArray.reads_per_sec
$rLatency = $FlashArray.usec_per_read_op
$wLatency = $FlashArray.usec_per_write_op
$queueDepth = $FlashArray.queue_depth
$bwInbound = $FlashArray.input_per_sec
$bwOutbound = $FlashArray.output_per_sec
}
ForEach($FlashArray in $PureArray) {

$arrayCap =($FlashArray.capacity)
$arrayDR =($FlashArray.data_reduction)
$arraySS =($FlashArray.shared_space)
$arraySnap =($FlashArray.snapshots)
$arraySys =($FlashArray.system)
$arrayTP =($FlashArray.thin_provisioning)
$arrayTot =($FlashArray.total)
$arrayTR =($FlashArray.total_reduction)
$arrayVol =($FlashArray.volumes)
}

post-vcops($wIOs)("Write IO")($faName)
post-vcops($rIOs)("Read IO")($faName)
post-vcops($rLatency)("Read Latency")($faName)
post-vcops($wLatency)("Write Latency")($faName)
post-vcops($queueDepth)("Queue Depth")($faName)
post-vcops($bwInbound)("Input per Sec")($faName)
post-vcops($bwOutbound)("Output per Sec")($faName)

post-vcops($FlashArray.capacity)("Capacity")($faName)
post-vcops($FlashArray.data_reduction)("Real Data Reduction")($faName)
post-vcops($FlashArray.shared_space)("Shared Space")($faName)
post-vcops($FlashArray.snapshots)("Snapshot Space")($faName)
post-vcops($FlashArray.system)("System Space")($faName)
post-vcops($FlashArray.thin_provisioning)("TP Space")($faName)
post-vcops($FlashArray.total)("Total Space")($faName)
post-vcops($FlashArray.total_reduction)("Faker Total Reduction")($faName)
post-vcops($FlashArray.volumes)("Volumes")($faName)

}
[/code]

 

For Volumes

Find it on GitHub https://github.com/2vcps/purevcops-volumes

[code]
cls
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
$FlashArrayName = @('pure1','pure2','pure3','pure4')

$AuthAction = @{
password = "pass"
username = "user"
}

# will ignore SSL or TLS warnings when connecting to the site
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
$pass = cat C:\temp\cred.txt | ConvertTo-SecureString
$mycred = New-Object -TypeName System.Management.Automation.PSCredential -argumentlist "admin",$pass

# function to perform the HTTP Post web request
function post-vcops ($custval,$custval2,$custval3,$custval4)
{
# url for the vCOps UI VM. Should be the IP, NETBIOS name or FQDN
$url = "<vcops ip or name>"
#write-host "Enter in the admin account for vCenter Operations"

# prompts for admin credentials for vCOps. If running as scheduled task replace with static credentials
$cred = $mycred

# sets resource name
$resname = $custval

# sets adapter kind
$adaptkind = "Http Post"
$reskind = "Flash Volumes"

# sets resource description
$resdesc = $custval4

# sets the metric name
$metname = $custval2

# sets the alarm level
$alrmlev = "0"

# sets the alarm message
$alrmmsg = "alarm message"

# sets the time in epoch milliseconds (end time taken in UTC so the value is not offset by the local timezone)
$epoch = [decimal]::Round((New-TimeSpan -Start (get-date -date "01/01/1970") -End ((get-date).ToUniversalTime())).TotalMilliseconds)

# takes the above values and combines them to set the body for the Http Post request
# these are comma separated and because they are positional, extra commas exist as place holders for
# parameters we didn’t specify
$body = "$resname,$adaptkind,$reskind,,$resdesc`n$metname,$alrmlev,$alrmmsg,$epoch,$custval3"

# executes the Http Post Request
Invoke-WebRequest -Uri "https://$url/HttpPostAdapter/OpenAPIServlet" -Credential $cred -Method Post -Body $body

write-host $custval,$custval2,$custval3
}
ForEach($element in $FlashArrayName)
{
$faName = $element.ToString()
$ApiToken = Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/apitoken" -Body $AuthAction

$SessionAction = @{
api_token = $ApiToken.api_token
}
Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/session" -Body $SessionAction -SessionVariable Session

$PureStats = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/array?action=monitor" -WebSession $Session
$PureVolStats = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/volume?space=true" -WebSession $Session
ForEach($Volume in $PureVolStats) {
#$Volume.data_reduction
#$Volume.name
#$Volume.volumes
#$Volume.shared_space
#$Volume.system
#$Volume.total
#$Volume.total_reduction
#$Volume.snapshots
$adjVolumeSize = ($Volume.Size /1024)/1024/1024
#$Volume.thin_provisioning

post-vcops($Volume.Name)("Volume Size")($adjVolumeSize)($faName)
post-vcops($Volume.Name)("Volume Data Reduction")($Volume.data_reduction)($faName)
post-vcops($Volume.Name)("Volumes")($Volume.volumes)($faName)
post-vcops($Volume.Name)("Shared Space")($Volume.shared_space)($faName)
post-vcops($Volume.Name)("System")($Volume.system)($faName)
post-vcops($Volume.Name)("Total")($Volume.total)($faName)
post-vcops($Volume.Name)("Total Reduction")($Volume.total_reduction)($faName)
post-vcops($Volume.Name)("Thin Provisioning")($Volume.thin_provisioning)($faName)
post-vcops($Volume.Name)("Snapshots")($Volume.snapshots)($faName)
}
}
[/code]

Once each of the scripts is working, schedule them as tasks on a Windows server. I do one for volumes and one for arrays and run them every 5 minutes indefinitely. This will start to dump the data into vCOPS.

Now you can make Dashboards.

Creating Dashboards

media_1408382785427.png

Log in to the UI for vCOPS. You must be in the custom UI; the standard UI hides all of the cool non-vSphere customization you can do.

 

Go to Environment –> Environment Overview

media_1408383055768.png

Expand Resource Kinds

media_1408383114822.png

This lets you know that data is being accepted for the array. Other than the PowerShell script bombing out and failing, this is the only way you know it is working. Now for a new dashboard.

Click Dashboards -> Add

media_1408383203181.png

Drag Resources, Metric Selector, Metric Graph and Heat Map to the Right

media_1408383262000.png

Name it and Click OK

Adjust the Layout

media_1408383477679.png

I like a nice Column for information and a bigger display area for graphs and heat maps. Adjust to your preference.

Edit the Resources Widget

media_1408383579549.png

Edit the Name and filters to tag

media_1408383667269.png

Now we just see the Flash Arrays

media_1408383734620.png
media_1408383840440.png

Select your resource provider (I named mine Lab Flash Arrays) as the providing widget for the Metric Selector. Also select Lab Flash Arrays and the Metric Selector as the providing widgets for the Metric Graph.

Edit the Metric Graph Widget by clicking the gear icon

media_1408384372245.png

I change the Res. Interaction Mode to SampleCustomViews.xml. This way, when I select a FlashArray, the graph does not show up until I double-click the metric in the Metric Selector. You are of course free to do it as you like.

The Heat Map

media_1408384493307.png

Edit the heat map and you will find tons of options.

media_1408384631976.png

Create a Configuration

media_1408384728117.png

Name the New Configuration

media_1408384811714.png

Group by and Resource Kinds

media_1408384843862.png

Group by the Resource Kind and then select Pure Flash Array in the drop down.

Select the Metric to Size the Heatmap and Color the Heatmap

media_1408384873077.png

Adjust the colors if you think red and green are boring.

media_1408384896168.png

Save the Config!

media_1408384924548.png

Look! A cool new heatmap

media_1408384959172.png

Do this for all the metrics you want to have as a drop-down in the dashboard.

Obviously there are a lot more things you can do with the Dashboards and widgets. Hopefully this is enough to get you kicked off.

A Brand New Dashboard

media_1408385301227.png

Staying through Thursday at VMworld? Come to PureStorage Evolve

When: Thursday August 28th
1:00pm – 5:45pm (conference) and 5:45pm – 10:00pm (networking pavilion)
Where: Yerba Buena Center

It will be awesome. Register today!

media_1406766982809.png
Why should you come?
Flash is changing virtualization more than any other technology. With storage no longer in the way, the journey to 100% virtualization can be a reality, and you can focus on the cloud operations you need to move to what is next for your IT organization. Stop letting legacy storage distract you from what can move your business forward. Come to