UPDATE: Eric Siebert wrote an almost identical article here for TechTarget. Since I don’t want people to think I just lift ideas from other blogs, I’d like to credit his article at the top. I have been using this procedure for snapshots for nearly two years, and I hadn’t seen his article before I posted mine. Next time I will do more googling; I try to keep this blog from regurgitating ideas that can be found elsewhere.
Do you ever have to commit snapshots so gigantic that the vCenter client times out before the operation finishes? After the initial panic a while ago, I learned that the snapshots continue to delete even if the client times out. So how do you know when they are finished? Answer: SSH into the service console and look.
An ‘ls’ of a normal virtual machine:
An ‘ls’ of the VM with a snapshot:
Now try: watch -n 30 'ls -alh'. This will re-run the ls -alh command every 30 seconds. In action:
Now if your VI Client times out, you can leave this window running, so you will know the snapshots are gone when the vmdk files with “delta” in the filename are gone.
Notice the additional delta files that appear. When deleting snapshots, vCenter creates a new delta file for changes that occur during the delete of the original delta, then deletes that new delta as well. I know, I know, an awesome video. I really like playing with Jing though. The watch command can be used with any Linux command you want to repeat over and over and over.
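If you would rather have the console announce when the commit is done instead of eyeballing the listing, the same idea can be scripted. This is just a minimal sketch of the technique above; the wait_for_deltas function name and the datastore path are mine, not anything built into ESX:

```shell
#!/bin/sh
# Poll a VM's directory until all *-delta.vmdk snapshot files are gone.
# wait_for_deltas is a hypothetical helper; the path below is an example.
wait_for_deltas() {
    vmdir="$1"
    # ls fails (non-zero) once no delta vmdk files match, ending the loop
    while ls "$vmdir"/*-delta.vmdk >/dev/null 2>&1; do
        sleep 30   # same interval as watch -n 30
    done
    echo "snapshot deltas gone"
}

# Example (adjust to your datastore layout):
# wait_for_deltas /vmfs/volumes/datastore1/myvm
```

Same effect as the watch window, but you can tack a wall/mail command onto the end if you want to be paged when it finishes.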
I finally have a second to log into the test ESX host and mess with the esxcfg- commands again.
Today: esxcfg-dumppart. This command can be used to list, create, and activate dump partitions used by the VMkernel during a crash. I would bet almost everyone creates one of these automatically during the install of ESX; what I mean is, I never even tried to not create a dump partition on installation. I was trying to think of a practical use for this. Maybe we want the dump to go to a SAN partition or some other drive? I would guess this command makes that possible.
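From memory, a session to move the dump target would look something like the sketch below. I am writing the flags as I recall them, so double-check esxcfg-dumppart -h on your own host, and the partition name here is obviously made up:

```shell
# List partitions that can act as a VMkernel dump partition
esxcfg-dumppart --list

# Show which dump partition is currently active
esxcfg-dumppart --get-active

# Point the dump at a different partition (hypothetical device name)
esxcfg-dumppart --set /vmfs/devices/disks/vmhba1:0:0:7
```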
While trying to cook up a way to secure client-hosted VMs, I thought of this layout: a virtual firewall appliance that creates an IPsec tunnel back to the client network, with the client VMs placed on a dedicated vSwitch.
Has anyone tried something like this? I hope that VI4 / vSphere will include a way to make this a reality. I figure a downside of just building an infrastructure around some kind of m0n0wall appliance is that the appliance would need to move from host to host in a DRS/HA cluster. I bet with some scripting and/or affinity rules I might be able to keep them together. It would be good if the new infrastructure had layer 3 or firewall capability that existed across the cluster; then you would not have to worry about vMotioning a virtual firewall around.
Maybe someone has a better way to do this? Am I overthinking it? I want the best way of assuring clients that their data doesn’t mix at any point, physical or virtual, unless it is inside the VPN tunnel.
When I first started out in college I needed a work-study job. Since I liked to help people with their computer problems, I applied and was hired for a position doing phone and in-person support for the University. One of the best things about starting out at a school is that they don’t mind teaching. Our trainer said that in previous years new employees would be slotted into being Windows or Mac or UNIX support. He said we would be Wunder-Cons (our title was consultant instead of help desk dude). We had the privilege of having to support all of it. This thrust me into the world of IT, no matter what the piece of paper from USC said I was a Bachelor of.
I believe a new kind of Wunder-Consultant/Engineer is being made. With the announcement of the Nexus 1000v last fall, the line between Network Engineer and Datacenter/Server Engineer is getting blurred. The SAN and Server Engineers have had this tension for a while now. Virtualization is a fun technology to learn, but who gets the responsibility? I have seen setups where the SAN team owns the ESX hosts and the Server team operates the VMs like they are physical, while the Network team doesn’t trust or understand why they want a bunch of 1GigE trunk ports. Across larger organizations it would look different, but the struggle may be just the same. Who is in control of the VMs? Are they secure? Who gets called at 1am when something dies? This is all internal to the IT department, and it does not even consider that Sales doesn’t want to share memory with Accounting.
I can see these technologies pushing engineers into being jacks of all trades. To be at a true Architect level in VMware today you must be awesome with storage and servers. You have to be able to SSH into an ESX host, choose the right storage for an application, and set up Windows 2003 templates. That is an easy day. You will also have to troubleshoot I/O (because all problems get blamed on the virtualization first).
With the Nexus 1000v, I picture the Virtualization Admins learning the skills to configure and troubleshoot route/switch inside and outside the Virtual Infrastructure. Add to that Cisco’s push this year with 10GigE, FCoE, and their own embedded virtualization products, and the lines between job duties are getting blown away.
Who is poised to become the experts in this realm? The network, server, or storage admins? In this economy it may be good to know how to do all three jobs. I am sure corporations would love to pay just one salary to perform these tasks.
Randomly, I thought: how would this relate to SOX? Could it pose any problems with compliance? I will save that for next time.