Director, OpenShift Strategy at Red Hat. Founder of Rishidot Research, a research community focused on the services world. His focus is on Platform Services, Infrastructure, and the role of Open Source in the services era. Krish has been writing @ CloudAve since its inception and has also been part of the GigaOm Pro Analyst Group. The opinions expressed here are his own and do not represent his employer, Red Hat, CloudAve, or its sponsors.

3 responses to “Cloud Computing need not wait for Infrastructure 2.0”

  1. Greg Ness


    Thanks for mentioning my Archimedius blog. I think you are correct that cloud will take off regardless of Infrastructure 2.0. Clearly it already has. I think I2.0 will help it accelerate by enabling more mobility and more robust services.


  2. Lori MacVittie


    I agree with Greg’s post – cloud computing requires Infrastructure 2.0, absolutely. For the most part, a lot of the application delivery infrastructure out there is already Infrastructure 2.0 “ready” – it’s flexible, highly scalable, and able to be integrated via interoperable standards.

    The bigger problem is getting cloud computing providers to recognize the importance of infrastructure in ensuring customers’ availability and performance needs are met. Many do, but many still have their “head in the cloud” regarding this aspect of infrastructure architecture.

    DNS and DHCP, as Greg points out, are also often overlooked, and the inherent scalability issues associated with TCP should not be underestimated.

    Cloud computing capable of supporting enterprise class applications – whether lightweight or not – requires more than just slapping together a couple web servers and some virtualization technology. It requires an investment in infrastructure that’s capable of scaling up with demand, and being flexible enough to provide monetization opportunities through additional value-add offerings (acceleration, security, etc…) while simultaneously having a mechanism through which that infrastructure can be integrated into the processes that make the cloud work.

    I agree we can start now, with what’s available, but, as you also say, providers need to be aware that in the long run it’s going to take more awareness of the infrastructure to make this a viable (and, for them, profitable) computing model.


  3. Harley

    I think Greg Ness’s Infrastructure 2.0 piece is one of the most insightful commentaries and forecasts on infrastructure and IT that I have read in some time.

    Infrastructure virtualization is a key next step toward 2.0. In support of Greg’s position, abstracting network devices and the services that run on them (routing, switching, firewall, VPN, QoS, etc.) in order to automate their configuration, ideally in response to end-user requirements, is critical. For example, when VMs are moved, physically or virtually, the infrastructure services required to provide access, security, and connectivity to VM resources should be reconfigured automatically.

    2.0 needs a new architecture capable of abstracting users, applications, and how they interact (associations) at a “business level” into a policy statement that is not bound to, but has knowledge of, infrastructure at a technical level (configuration). This architecture achieves true infrastructure virtualization, and automation, which means reduced TCO. For example, manage the relationship between users and VM applications via a policy, “top down” rather than “bottom up,” and automate from the policy statement down to infrastructure services and elements.

    Security is implicit because infrastructure services are configured on the basis of selective inclusion (policy definitions): no one is on the network unless a policy puts them there. Contrast that with the selective exclusion of today’s “static” Infrastructure 1.0, whereby everyone is on the network unless ACLed out by a legion of technicians sitting on the command line. Clearly the latter will not scale, technically or economically.

    Hats off to Greg for fielding an important editorial for all of us in the information age. Great piece.
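
Harley’s “top down” idea — business-level policy statements that are compiled into device configuration, with default-deny selective inclusion — can be sketched roughly as below. This is a minimal illustrative sketch, not any vendor’s actual system; the `Policy` class, the group/app names, and the `compile_rules` helper are all hypothetical.

```python
# Hypothetical sketch of policy-driven ("top down") configuration:
# policies associate user groups with applications at a business level,
# and per-device allow rules are derived automatically (selective
# inclusion / default deny), rather than hand-written ACLs.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    group: str   # business-level user group (illustrative)
    app: str     # application / VM name (illustrative)
    access: str  # service granted, e.g. "https", "ssh"

# Business-level policy statements, not bound to device syntax.
policies = [
    Policy(group="finance", app="erp-vm", access="https"),
    Policy(group="ops", app="erp-vm", access="ssh"),
]

# Knowledge of the infrastructure at a technical level:
# where each app's VM currently lives, and each group's subnet.
app_location = {"erp-vm": "10.0.2.15"}
group_subnet = {"finance": "10.1.0.0/16", "ops": "10.2.0.0/16"}
port_for = {"https": 443, "ssh": 22}

def compile_rules(policies):
    """Derive default-deny firewall rules from policy statements.

    Only traffic named by a policy is permitted; everything else is
    implicitly off-net (selective inclusion)."""
    rules = [
        (group_subnet[p.group], app_location[p.app], port_for[p.access])
        for p in policies
    ]
    rules.append(("0.0.0.0/0", "any", "deny"))  # implicit default deny
    return rules

# When a VM moves, only the technical mapping changes; recompiling
# regenerates the rules while the policy statements stay untouched.
app_location["erp-vm"] = "10.0.7.42"
for rule in compile_rules(policies):
    print(rule)
```

The point of the sketch is the direction of automation: moving the VM changes one mapping and the rules are regenerated from policy, rather than a technician editing ACLs on each device by hand.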