As we move toward broader-scale cloud adoption, one would be excused for assuming that we’d reached a point where the definition of infrastructure as a service is set in stone. True, different vendors package their virtual servers with different specs, but IaaS is, to a greater or lesser extent, a fixed concept. A recent conversation with Robert Jenkins, CTO of Swiss-based infrastructure provider CloudSigma, challenged that assumption. In the face of much higher-profile and more widely distributed cloud vendors, CloudSigma has chosen to differentiate itself by taking a novel look at what enterprise infrastructure should be: it offers cloud servers where the customer has ultimate flexibility in specifying what they buy. On the CloudSigma platform, customers specify exactly how much CPU, RAM, storage and bandwidth they require, each independently of the others. Resources are not bundled together and there is no “standard size” server. Additionally, customers can run any operating system and software they wish.
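To see why unbundling matters, consider the usual alternative: a fixed instance catalogue, where every dimension gets rounded up to the nearest bundle. The sketch below is purely illustrative — the bundle sizes and workload numbers are invented for the example and are not CloudSigma’s (or any vendor’s) actual catalogue or API.

```python
# Hypothetical illustration: bundled vs. unbundled server sizing.
# All numbers are invented for the example, not real vendor offerings.

# The workload's actual requirements, specified independently per resource.
need = {"cpu_cores": 3, "ram_gb": 6, "storage_gb": 120}

# A typical bundled catalogue: fixed instance sizes.
bundles = [
    {"name": "small",  "cpu_cores": 2, "ram_gb": 4,  "storage_gb": 100},
    {"name": "medium", "cpu_cores": 4, "ram_gb": 8,  "storage_gb": 200},
    {"name": "large",  "cpu_cores": 8, "ram_gb": 16, "storage_gb": 400},
]

def smallest_fitting_bundle(need, bundles):
    """Pick the smallest fixed instance that satisfies every dimension."""
    for b in bundles:
        if all(b[k] >= need[k] for k in need):
            return b
    raise ValueError("no bundle large enough")

chosen = smallest_fitting_bundle(need, bundles)

# With bundles, every dimension is rounded up to the chosen size;
# the difference is paid-for but unused capacity.
waste = {k: chosen[k] - need[k] for k in need}
print(chosen["name"], waste)
```

Here the workload is forced onto the “medium” instance and pays for a spare core, 2 GB of RAM and 80 GB of storage it never uses — exactly the overhead an unbundled model avoids.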
According to Jenkins, this degree of flexibility is important when selling to enterprises with existing workloads they wish to move to the cloud: those workloads generally have closely defined server requirements, so customers can specify exactly what they need. An interesting use case, but is it a valid one?
At face value, the CloudSigma story makes sense. When assessing the claim that enterprises want to move existing workloads to the cloud as they stand, however, I’m reminded of the admonition of Christian Reilly, one-time chief cloud architect at cloud.com/Citrix, who, in his other role as cloud architect at Bechtel, was adamant that enterprises simply don’t have the budget, appetite or time to move existing workloads to the cloud. Rather, as Reilly sees it, enterprises follow a dual strategy:
- Existing workloads are left in situ but, where necessary, organizations enable more diverse access to the data (mobile access via an API strategy, for example)
- New workloads will be built, where appropriate, on the cloud
If this is in fact the case, CloudSigma is catering to a limited market: enterprises that want to use the cloud and also want to move existing workloads as they stand. That assumption also runs against another tenet of cloud application deployment: that building an application for the cloud is very different from building one for traditional deployment. While in theory one could take an existing workload unchanged and bung it onto some cloud somewhere, that is a sub-optimal approach. Rather, organizations going through a change process anyway should take the opportunity to re-architect their applications to take advantage of the particular traits of cloud infrastructure. In doing so, the likelihood is that one of the existing product offerings from a regular IaaS vendor will suffice.
CloudSigma has a few other tricks up its sleeve, however: a 100% uptime SLA with a time credit of 50 times qualifying downtime, and fully persistent servers and storage, are two examples. But it is the flexibility card that CloudSigma plays strongest, and one on which, to an extent, it comes unstuck. CloudSigma currently has two facilities, one in Zurich and one in Las Vegas. While Jenkins did promise they would be in “ten locations in 12-18 months,” what they have today is what organizations have to buy. Offering only two locations gives limited failover and geographic granularity options, and somewhat undercuts CloudSigma’s flexibility claims.
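The 50× SLA credit mentioned above is easy to quantify. A minimal back-of-the-envelope sketch, with illustrative outage durations (the multiplier is from the stated SLA terms; everything else here is assumed for the example):

```python
# Back-of-the-envelope view of a 100% uptime SLA paying a time credit
# of 50x qualifying downtime. Outage durations below are illustrative.

def sla_credit_hours(downtime_hours, multiplier=50):
    """Service credit, in hours of free service, for qualifying downtime."""
    return downtime_hours * multiplier

# A single qualifying 30-minute outage earns 25 hours of service credit.
print(sla_credit_hours(0.5))
```

In other words, the economics punish the provider heavily for any qualifying outage — roughly two days of free service for half an hour down — which is what gives a “100% uptime” promise its teeth.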
CloudSigma is taking an interesting approach to IaaS, but one which I suspect will have limited real-world appeal going forward.