Last week I was invited to speak at a Microsoft conference in Redmond about building cloud applications for portability across clouds and infrastructure.
In my presentation, I approached the issue of application portability from the enterprise perspective. This means that developers generally are not choosing servers, clouds or other infrastructure components. Developers focus on building great applications, and IT policy dictates where those apps live. If your application platform and architecture are inherently portable, both of these constituencies can function without compromise or friction.
Any vendor claiming to be a PaaS should itself be infrastructure independent and should enable its applications to be the same. If you’re a public PaaS, that typically means supporting a curated set of cloud infrastructure providers that you integrate with. If you’re a private PaaS, it means running on any infrastructure, private or public, that can surface your operating system instances/nodes as a single logical hosting layer.
Operating System Portability
By this I don’t mean running Windows apps on Linux and vice versa. I’m talking about enabling applications to be seamlessly migrated between like flavors of an OS across instances. This means that the application should not be bound to specific OS instances by state or other artifacts. Some would describe this as “stateless,” but I would describe it as “state defined.” This is because your app deliberately stores any required state behind a well-encapsulated boundary such as a database, a distributed cache, a shared file system or another distributed mechanism. A good PaaS should provide these capabilities out of the box and make it easy for developers to build this type of application pattern without additional effort.
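The “state defined” pattern above can be sketched in a few lines. This is an illustrative example, not a particular PaaS API: the `StateStore` interface and the dict-backed implementation are stand-ins, where a production backend would be a database, distributed cache or shared file system. The point is that the request handler holds no local state, so any OS instance can serve the next request.

```python
from abc import ABC, abstractmethod
from typing import Optional


class StateStore(ABC):
    """The well-encapsulated state boundary; app instances keep nothing local."""

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[str]: ...


class InMemoryStateStore(StateStore):
    # Stand-in backend for illustration only; in production this would be
    # backed by a database, distributed cache, or shared file system.
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)


def handle_request(store: StateStore, session_id: str) -> int:
    """A 'state defined' handler: all session state lives in the store,
    so this call can run on any instance of the application."""
    count = int(store.get(session_id) or 0) + 1
    store.put(session_id, str(count))
    return count


store = InMemoryStateStore()              # shared by every app instance
handle_request(store, "sess-42")          # could run on instance A
print(handle_request(store, "sess-42"))   # could run on instance B; prints 2
```

Because the handler depends only on the `StateStore` interface, migrating the app between OS instances requires no state hand-off at all.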
Application Service Portability
Cloud applications tend to be composite applications in that they are really a collection of loosely coupled services with reliable intra/inter-service communication. If you want the ability to swap out one implementation of a service for another, this is possible but comes at a higher price. For example, if you want to swap out Oracle for MySQL or Tibco for RabbitMQ, you need to ensure you’re not using capabilities that aren’t uniformly available across those platforms. Trust me, swapping connection strings isn’t the tricky part. This is the layer where lock-in usually begins, but even these services are at least infrastructure and OS portable.
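One common way to keep that swap option open is to code against a lowest-common-denominator interface and confine each broker or database behind an adapter. The sketch below is hypothetical, not the API of Tibco, RabbitMQ or any PaaS; the in-memory bus stands in for a real broker client. The discipline it illustrates is the point made above: stay within the shared publish/consume subset, because the moment you reach for a broker-specific feature, the adapter stops being swappable.

```python
from collections import deque
from typing import Optional


class MessageBus:
    """Lowest-common-denominator messaging interface: publish and consume.
    Staying within this subset keeps broker implementations swappable;
    using broker-specific features is where lock-in begins."""

    def publish(self, topic: str, message: str) -> None:
        raise NotImplementedError

    def consume(self, topic: str) -> Optional[str]:
        raise NotImplementedError


class InMemoryBus(MessageBus):
    # Illustrative stand-in; a real adapter would wrap a broker client.
    # Swapping brokers is then an adapter change, not an application change,
    # which is why "swapping connection strings isn't the tricky part."
    def __init__(self) -> None:
        self._queues: dict[str, deque[str]] = {}

    def publish(self, topic: str, message: str) -> None:
        self._queues.setdefault(topic, deque()).append(message)

    def consume(self, topic: str) -> Optional[str]:
        q = self._queues.get(topic)
        return q.popleft() if q else None


bus: MessageBus = InMemoryBus()   # swap implementations behind the interface
bus.publish("orders", "order-1001")
print(bus.consume("orders"))      # prints order-1001
```

The application only ever sees `MessageBus`, so the cost of switching implementations is concentrated in one adapter rather than scattered across every service.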
Lock-in is just another word for risk and, like risk, it must be managed. If a vendor promises you zero lock-in at any layer of the application stack, you should probably stop listening. That vendor is either lying (as a practical matter) or providing little value. There is normally a trade-off between lock-in (risk) and value (reward). In the early days of virtualization, I recall customer conversations where they absolutely refused to use DRS/vMotion to distribute VM load across clusters because they didn’t trust it. Two years later, the same customers said, “Of course I use it. That’s why I bought the software.” Risk had turned into value.
If you want to avoid all lock-in, then almost by definition you’ll have to get by with fewer features and/or write more stuff yourself. That might be the right course of action, but it comes with its own risks. The key is to let customers determine where that trade-off between value and lock-in makes sense for them. There should be clear benefits that are achievable without incurring any level of lock-in. In other cases, the risk/reward merits taking a dependency because building it yourself is riskier or perhaps even impossible under a given set of business or technical constraints.
In any situation, it should be easy for customers to understand where their dependencies are and how they could decouple them should that be required. If you can’t find the lock-in with your platform, it’s because it’s being hidden, and that is the only thing you should really be afraid of.