Director, OpenShift Strategy at Red Hat. Founder of Rishidot Research, a research community focused on the services world. His focus is on Platform Services, Infrastructure, and the role of Open Source in the services era. Krish has been writing @ CloudAve from its inception and has also been part of the GigaOm Pro Analyst Group. The opinions expressed here are his own and are not representative of his employer, Red Hat, nor of CloudAve or its sponsors.

One response to “Nature's Attack On Amazon And The Instance Vs Fabric Debate”

  1. Rethinking The Cloud: From Client/Server To P2P | CloudAve

    [..] Let us do a brief recap of how the Cloud is architected at
    present and then completely rethink this model to keep such downtimes
    to a bare minimum. Before describing the nature of Cloud Computing as
    it exists today, let us dig back into the history of computing. Until
    a few decades ago, computing was done on huge centralized mainframes
    and supercomputers, accessed by users through dumb text-based
    terminals. All the software, peripherals, etc. were part of these
    powerful centralized machines and were centrally managed by dedicated
    teams. This centralized client-server model of computing was in vogue
    for quite some time before the PC revolution ushered in a new era of
    distributed client-server computing. This new client-server model saw
    the federation of management and offered greater flexibility than the
    centralized model.

    The past few years saw the emergence of Cloud Computing, a much more
    sophisticated evolution of the centralized client-server system, but
    one built using large numbers of cheaper x86 systems. Even though the
    computing resources in the Cloud model appear to be centralized, as in
    the centralized client-server model of the mainframe years, there are
    some significant differences. In the traditional mainframe
    client-server model, the work was split between the server and the
    client, whereas in the Cloud model the work is done completely on the
    “server” side (I have used the quotes here to differentiate from a
    single powerful server). In the traditional model, the server was a
    single powerful machine like a mainframe or a supercomputer, whereas
    in the Cloud model the “server” is actually a server farm with
    hundreds or thousands of cheap, low-end x86 machines that act as a
    centralized computing resource.

    Even though the Cloud model is a much more sophisticated evolution of
    the previous client-server models, we are still dealing with a
    “centralized resource” from a single vendor. Some of the big vendors
    use geographically distributed datacenters and state-of-the-art
    virtualization or “fabric” technologies to offer high reliability in
    terms of uptime. However, that is not the case with all the vendors.
    Many of them use a single datacenter and a Cloud-like architecture to
    offer their infrastructure services. This leads to a single point of
    failure, like what happened in the case of Rackspace recently. Even
    with geo-distributed datacenters, there are partial outages, like the
    recent lightning strike on one of Amazon’s datacenters. [..]