Recently, we had a debate on Twitter about IBM’s Cloud Computing initiatives. The point of discussion was whether mainframes can be part of what we call the cloud, or whether it should be limited to x86 machines. Some folks in the Clouderati group argued that mainframes cannot be part of the cloud. I differed from their point of view, and I thought I would briefly discuss my position on the topic here.
Before I put forward my views on the topic, let me briefly outline what IBM has done in this space. Using their zSeries mainframes, they have consolidated their data centers from 150 down to 5. They did it by taking advantage of their virtualization technology, the open source Linux operating system, and efficient networking technology. Recently, I talked about how they are offering Analytics as a Service to their employees on top of this mainframe consolidation. IBM plans to offer self-service cloud offerings on top of their mainframe-based cloud infrastructure by standardizing the business processes of their customers. Essentially, this is just a repositioning of their offerings with cloud attributes.
Some members of the Clouderati and some analysts don’t consider IBM’s offering a cloud because it sits on top of mainframes; for them, it is just virtualization of mainframes with a management layer on top. Some of them tend to emphasize the presence of x86 machines in the cloud infrastructure. I disagree with this point of view, and I will list my thoughts below. Borrowing the terminology from Chris Hoff, these are my incomplete thoughts.
- IBM’s cloud has the attributes that define cloud computing. Using the characteristics defined by NIST, it is service based and self service. It is scalable and elastic. It is multi-tenant. It allows broad network access from different types of devices and protocols. When a “cloud” built on top of mainframes satisfies the attributes needed for the definition of cloud computing, it is a cloud.
- If the idea behind the cloud is the complete abstraction of the underlying hardware from the customers, how does it matter whether there are x86s, mainframes, or even supercomputers underneath? As long as the provider manages to completely abstract away the complexity of the underlying hardware, the presence of mainframes shouldn’t matter.
- IBM’s mainframe technology has evolved a lot since the mainframe era. They are now running Linux on top of their mainframes, much like Amazon or other public cloud providers. Essentially, I should be able to port most of my apps from other environments to IBM’s cloud. If I can run my applications the way I run them on the Amazon cloud ecosystem or Microsoft Azure, IBM’s cloud is a cloud in real terms (see the sketch after this list).
- From IBM’s point of view, the biggest achievement of this consolidation of data centers is the tremendous cost savings and efficiency. If they can offer cloud economics comparable to other cloud providers, it doesn’t matter whether they use x86s, mainframes, or supercomputers in the underlying infrastructure.
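To make the abstraction point above a little more concrete, here is a minimal sketch of my own (an illustration, not anything IBM publishes): a plain Linux application carries nothing that ties it to the processor family underneath, so the same code runs whether the guest kernel reports x86_64 on a commodity blade or s390x on a mainframe Linux guest.

```python
# Minimal illustration (my own, hypothetical): the application logic is the same
# regardless of whether the Linux guest runs on x86 or on a mainframe.
import platform

def handle_request(payload: str) -> str:
    # Ordinary application logic; nothing here depends on the host architecture.
    return payload.upper()

if __name__ == "__main__":
    # Reports e.g. 'x86_64' on a typical cloud VM, 's390x' on Linux on a mainframe.
    print(f"Guest architecture: {platform.machine()}")
    print(handle_request("hello from the cloud"))
```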
As I said earlier, these are my incomplete thoughts. I have been talking to IBM regularly and, also, to analysts and pundits who don’t trust IBM. My ideas on this issue are bound to evolve, and I hope to revisit this topic in the future with more enlightenment. In the meantime, if you really want to help my thoughts evolve on this topic, jump in and offer your own. If anyone is interested in contributing to Cloud Ave on this topic, feel free to buzz me.
At scale, the zSeries of servers is a very cost-effective means of delivering Linux OS instances. The IFL “engines” that are added to the mainframe are purpose-designed to run massive concurrent Linux environments using a “stripped down” derivative of z/VM.
IBM’s z/VM operating system has been around in some form for more than 40 years and is an excellent and robust virtualization hypervisor. If anyone gets to make a claim about their virtualization credentials, IBM has every right in the world to tout theirs. (Don’t forget the late and unlamented OS/2 had a kick-ass virtualization service in it too.)
Since cloud services are designed to increase the abstraction between the service and the platform (and especially the underlying hardware), as I twittered, “Who cares what it runs on?” x86-based platforms seem to many to be the ultimate expression of IT hardware commoditization, but that shouldn’t mean that other platforms or solutions are forbidden from being considered “real” cloud services.
From the consumer’s perspective, all I care about is the cost of a VM instance. Whether the best price comes from an x86 blade farm, a mainframe IFL, or naturalized leprechauns, I don’t care how the service provider delivers the service, as long as my Linux applications work correctly and my performance and availability requirements are met.
Would be happy to chat with you more on this subject.
~Randy
Thanks, Randy, for your comments. I am in agreement with you. I will get in touch with you after the Thanksgiving holidays for a chat.
Perhaps my judgement is a bit clouded (sorry, bad pun intended 😉) because I used to work for IBM in an area related to the Cloud initiatives, but, any biases aside, the comment on “who cares what it runs on” is bang on target. Cloud is a concept and a way of operating. Whether it runs on z, p, x86, Sun, HP, or whatever, who cares? It is the operational model that defines a cloud. I submit that in many instances, z is even more efficient because of its scalability/density advantage over x86, meaning fewer moving parts, fewer cables, less networking complexity, lower power consumption, and so on. Why would anyone care whether it runs on a mainframe or an iPhone?
Also, Randy, FYI: in some of the analysis I’ve done, it’s not only the cost per VM, which is one way of viewing it, but also the overall cost of the workloads, including licensing and so on. That adds to the argument that more-powerful-than-x86 platforms (whether IBM z or p, or others such as HP or Sun) might be even more cost effective despite the higher hardware costs, because the density factor, or the processing you can apply to workloads, can result in lower software licensing costs, for example. In some cases, x86 comes out on top; in others, mainframes might. Point is, I agree with you guys; it IS a cloud. There’s a lot of subjectivity in the numbers, and there’s no one *best* answer. (I know; I ran these analyses for a number of scenarios.)
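To illustrate the kind of workload-level arithmetic being described, here is a small sketch with purely hypothetical numbers of my own (not figures from any of the analyses mentioned above): when software is licensed per core, a denser platform that carries more workloads per core can come out ahead on total cost even with more expensive hardware, and the ordering flips with different assumptions, which is exactly why there is no single best answer.

```python
# Hypothetical, illustrative numbers only -- not from the analyses referenced above.
# Compares total annual cost per workload when software is licensed per core:
# a cheaper, less dense x86 farm vs. a pricier, denser mainframe-style platform.

def annual_cost_per_workload(hw_cost_per_core, license_cost_per_core, workloads_per_core):
    # Spread the total per-core cost across the workloads that core can carry.
    return (hw_cost_per_core + license_cost_per_core) / workloads_per_core

x86 = annual_cost_per_workload(hw_cost_per_core=1_000, license_cost_per_core=3_000, workloads_per_core=4)
dense = annual_cost_per_workload(hw_cost_per_core=5_000, license_cost_per_core=3_000, workloads_per_core=12)

print(f"x86 cost per workload:            ${x86:,.0f}")    # $1,000
print(f"denser platform cost per workload: ${dense:,.0f}")  # $667
```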