This month I was, together with Ethan Banks, one of the SolarWinds Thwack Ambassadors. Beyond the regular conversations on the Thwack forum, the Ambassadors take some extra time to post a weekly thought on a specific theme. This month I focused on Virtualization Management. As all the stars were perfectly aligned, I went to two VMUGs in the first week of December, so a lot of the thoughts came from those presentations. Most of it actually comes from either Joe Baguley’s keynote or Chad Sakac’s EMC presentation. Here are the three posts, and you can still jump into the conversations!
Today I was at the Nordics VMUG in Copenhagen (Denmark), and in a few sessions we kept seeing that management layers are shifting. About 10 years ago we were all, with a few exceptions maybe, managing a handful of physical servers, each with its own very specific use case and physical design. Even identical physical architectures were a dream, because we bought hardware for five years of use and new business demands arrived in between. To manage these servers we used RDP, and maybe some iLO (HP) or iDRAC (Dell) for physical management when the OS wasn’t there yet. It was a rare use case where someone actually used an HP Matrix system to manage all the physical hardware in a Single-Pane-Of-Glass.
Over the last 10 years we have been managing our servers as Virtual Machines. Instead of adding new physical devices, we went to adding physical resources when needed, which gave us a very short time-to-market. Whether you are using VMware ESX(i) or even Microsoft Hyper-V, we have all our machines in a cluster manager (or, if you are small enough, just per host) and we manage everything through a virtualization management client like the vCenter Client. Lately these management interfaces have even shifted to web-based interfaces, so local installations are no longer needed. And although some hardware vendors do have management plugins, we still don’t have our Single-Pane-Of-Glass today. We are still logging in to our Backup Server, Network Firewall, Storage Array, …
Fast forward to tomorrow. With the Software-Defined-Datacenter entering our world, the possibility of bringing EVERYTHING into that Single-Pane-Of-Glass is closer than ever. In essence, the SD-hype is about splitting the control plane from the data plane, which gives us the possibility to pool and automate resources. Wow, nice sales pitch there, right? You would no longer have to go into the Storage or Networking management interfaces, because everything would be controlled through APIs and scripts (Puppet/Chef/…). Time to market? The speed of light.
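To make "controlled through APIs and scripts" a bit more concrete, here is a minimal Python sketch of what provisioning through an API instead of a GUI could look like. Everything here is made up for illustration: the payload fields and the `build_vm_request` helper are not any real platform’s API, just the shape of the idea.

```python
import json


def build_vm_request(name, cpus, memory_gb, network, datastore):
    """Assemble a provisioning payload for a hypothetical SDDC REST API.

    In a real environment this JSON would be POSTed to the platform's
    API endpoint by a script, instead of clicking through a management GUI.
    """
    return {
        "name": name,
        "hardware": {"cpus": cpus, "memory_gb": memory_gb},
        "network": {"portgroup": network},
        "storage": {"datastore": datastore},
    }


# A script (or a tool like Puppet/Chef) would generate dozens of these:
request = build_vm_request("web-01", cpus=2, memory_gb=8,
                           network="VLAN-120", datastore="gold-tier")
print(json.dumps(request, indent=2))
```

The point is not the exact fields, but that the request is data: something a script can generate, version and repeat, which is where the speed-of-light time to market comes from.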
How many User Interfaces do YOU have today? I’d love to see you list all of them and tell me which ones you think (or hope) to get rid of in the not-so-distant future, and how.
Most of us more or less agree that we are evolving away from the client-server model towards a “cloud computing” model. Back in the day we had a few hundred applications that ran on some mainframe and were controlled by very few people. Today we manage thousands of applications on a specifically chosen infrastructure, whether physical or virtual. Tomorrow, however, we will have millions and millions of applications that run everywhere. Private, Hybrid and Public cloud are buzzwords that will probably fade away. Are we ready for that transition?
Explaining technology in simple terms is always a challenge. Finding the right analogy is key. Watch Joe Baguley, CTO EMEA at VMware, at the Belgian VMUG last week, explaining the difference between how we manage our servers today and how you manage CATS.
Obviously Joe is a very good speaker, and you probably had a laugh at the video. But do we get the point? I know for a fact that I am not really ready for this. I guess most of you probably still have an Excel sheet with a column per VLAN and one line per server for the IP address. I know I do. So here comes a two-fold question:
1) What are the tools you use today to deploy new workloads, and how do you document them? LUNs, IP addresses, MPIO settings, what software runs on them, …
2) What type of tools, whether or not they already exist, will you need when the next piece of software you buy is deployed by the dozens, multiple times a month?
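One possible first step away from that Excel sheet is keeping the exact same information in a machine-readable inventory that scripts can query. A tiny Python sketch of the idea; the field names and inventory schema are just an example I made up, not any particular tool’s format:

```python
# The same data as the Excel sheet -- one entry per server, with its
# VLAN, IP, LUNs and software -- but queryable by scripts instead of eyes.
inventory = {
    "web-01": {"vlan": 120, "ip": "10.0.120.11",
               "luns": ["LUN-07"], "software": ["nginx"]},
    "db-01":  {"vlan": 130, "ip": "10.0.130.21",
               "luns": ["LUN-12", "LUN-13"], "software": ["postgresql"]},
}


def servers_in_vlan(inv, vlan):
    """Return the names of all servers documented in a given VLAN."""
    return sorted(name for name, attrs in inv.items()
                  if attrs["vlan"] == vlan)


print(servers_in_vlan(inventory, 120))
```

Once the documentation itself is data, deploying workloads by the dozens stops being a copy-paste exercise: the deployment script reads and updates the inventory, so the documentation can never drift from reality.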
Two weeks ago I wrote about the Single-Pain-Of-Glass, and last week we talked about Herding Chickens instead of Managing Cats. What keeps coming back is that we as people and customers are just not ready for it. Yes, we want everything in one management location, but if we had one for all our current challenges, it would never be sufficient for future challenges and we would be back to square one. So a big missing piece here is flexibility and durability. As for the second post, everyone understood exactly where the challenge lies, but we are not even close to this new way of managing services and applications.
When it comes to that higher level of managing infrastructure through abstraction and automation, new scripting platforms pop up. Whether or not you want to use OpenStack, for example, as the management platform, designing it to be sustainable requires sysadmins to learn new languages. The two major names today are Puppet and Chef. While Puppet has a broader audience today and is probably closer to the sysadmin, Chef is stronger and gives more power and flexibility. The flip side to both is that you’ll need some coding skills.
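The core idea behind tools like Puppet and Chef is declarative, idempotent configuration: you describe the desired state of a resource, and the tool converges the system to it, no matter how often it runs. A rough Python sketch of that model (this is not Puppet or Chef syntax, just an illustration of the concept with a toy file-line "resource"):

```python
def ensure_line_in_file(path, line):
    """Converge a file to contain a given line -- a toy 'resource'.

    Running this once or a hundred times yields the same end state,
    which is the idempotency that Puppet and Chef are built around.
    Returns "changed" or "unchanged" so a run can report what it did.
    """
    try:
        with open(path) as f:
            lines = f.read().splitlines()
    except FileNotFoundError:
        lines = []
    if line not in lines:
        lines.append(line)
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")
        return "changed"
    return "unchanged"


# The first run converges the file; later runs are no-ops:
print(ensure_line_in_file("/tmp/demo.conf", "max_connections = 100"))
print(ensure_line_in_file("/tmp/demo.conf", "max_connections = 100"))
```

That "describe the state, let the tool get you there" mindset, rather than any particular language, is the real learning curve for sysadmins coming from manual, step-by-step administration.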
So who’s already using these new tools? What’s your experience? Do you even have the coding skills this could require?