At the recent Gluecon event in Colorado, I was fortunate enough to run into my friend Sam Ramji from Apigee, and I took the opportunity to grab some time with him after he delivered his excellent panel presentation, intriguingly entitled “Globalization, Black Swans, and APIs”.
It’s always great to bounce ideas and share strategies with someone so deeply involved in the “API business”. It not only helps solidify one’s own architectural track, but also provides reassuring affirmation that even organizations and traditional enterprises that do not base their primary business on an API (like, say, Netflix – but don’t worry, I’m not going to beat that one to death) have a huge amount of future potential to selectively use APIs as a powerful weapon against “locked-in” data, as part of a wider strategy for dealing with the burgeoning trend of consumerization.
I’ve seen this comment…
Big Data is getting bigger, with some estimates suggesting that 90 percent of all data ever created was created in the last two years alone.
…appear in a few different guises recently (it was certainly a key theme at Structure Big Data back in March 2011), and although I have yet to find any compelling research to back up the claim, I have no reason to doubt the general suggestion. What I would doubt, however, is that the words “Big Data” are on the lips of most enterprise CIOs today – I think the words “Monolithic Systems” are far more likely to name the challenge, especially as it relates to the general malaise caused by the abject inability to access the vast swathes of data outside of the application (or system) in which it was created.
There is another phenomenon, Data Gravity – a quite brilliant term coined by another friend and Clouderati member, Dave McCrory – that I believe will further drive the need for enterprises to consider these disparate masses of data, accumulated over years and years of incumbent systems, as both a problem and an amazing opportunity.
In his original blog post, McCrory posits this, by way of scene-setting:
Consider Data as if it were a Planet or other object with sufficient mass. As Data accumulates (builds mass) there is a greater likelihood that additional Services and Applications will be attracted to this data. This is the same effect Gravity has on objects around a planet.
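The physical analogy being invoked is Newton’s law of universal gravitation. The mapping that follows is my own illustration rather than McCrory’s formula: read one mass as the accumulated data and the other as a would-be service or application.

    F = G * (m1 * m2) / r^2

The bigger the mass of data (m1), the stronger the pull it exerts on everything around it; and the smaller the distance r – think latency and bandwidth rather than meters – the harder it becomes for a service to pull away.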
If one accepts, for the context of this post, the “Data” to be the monolithic systems described above, then it is fairly simple to assume that the Services and Applications drawn toward this mass will benefit from doing so via a standard method – as far as that is possible. And that’s where we circle back to the concept of delivering an enabling set of APIs to act as an interoperability layer between the existing data sources and the new breed of services and applications – which, of course, could easily include cross-platform native or HTML5 apps for mobile.
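To make that layer concrete, here is a minimal sketch of the idea – a small, read-only HTTP API fronting a legacy data store. Everything in it is illustrative rather than prescriptive: SQLite stands in for whatever monolith actually holds the data, and the customers table and its columns are hypothetical.

    # Minimal sketch: an API as an interoperability layer over legacy data.
    # SQLite stands in for the incumbent system of record; the "customers"
    # table and its columns are hypothetical.
    import sqlite3

    from flask import Flask, abort, jsonify

    app = Flask(__name__)
    LEGACY_DB = "legacy_system.db"  # hypothetical path to the incumbent store


    def query_legacy(sql, params=()):
        """Run a read-only query against the legacy store, returning dict rows."""
        conn = sqlite3.connect(LEGACY_DB)
        conn.row_factory = sqlite3.Row
        try:
            return [dict(row) for row in conn.execute(sql, params)]
        finally:
            conn.close()


    @app.route("/api/v1/customers/<int:customer_id>")
    def get_customer(customer_id):
        # New services and mobile apps consume plain JSON over HTTP and never
        # need to know how the monolith stores the data internally.
        rows = query_legacy(
            "SELECT id, name, region FROM customers WHERE id = ?",
            (customer_id,),
        )
        if not rows:
            abort(404)
        return jsonify(rows[0])


    if __name__ == "__main__":
        app.run(port=8080)

The point is not the framework; it is that the contract (the URL and the JSON shape) stays stable and standard, so the next application drawn toward the data never has to care what sits behind it.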
Sounds feasible so far, right? I think it makes a ton of sense. No enterprise in the world would want the financial, technical or operational burden of re-platforming an entire metric ass-load of legacy crapplications, would they? What could possibly cause a secure, solid API architectural approach to falter?
A new breed of roadblock. The Data Hugger.
A few years ago, as server virtualization reached maturity and organizations began to see the empirical value of data center consolidation efforts, the term “server hugger*” became synonymous with those who refused to acknowledge the value of the virtualization technology and who (as Allan Leinwand put it) felt that the emotional well-being and efficient operation of their servers required them to be physically close at all times.
The Data Hugger is a slightly different beast.
First, he is not in the business of infrastructure. He is in the business of the business. He doesn’t share his odd cousin’s unrequited love of beige boxes and synchronized flickering LEDs in RAID arrays, but he is an equally dangerous inhibitor of progress. He puts a mental ring of steel around his data, believing that he is the guardian of the most precious, sacred, valuable and sought-after information in the universe, and he doesn’t trust his baby to anything except the application that was written to access it. He cannot be swayed by IT, suffering as he does from the dreaded “Not Invented Here” syndrome and taking a leaf out of the machine-room mentality of 20-plus years ago, when hardware was centralized and horrendously expensive, and only those with the keys to the kingdom had authority over what could be done.
In truth, the mortal danger is that the Data Hugger’s self-imposed sovereignty over his individual slice of data materially diminishes the value of the sum of the parts.
To paraphrase Larry Wall, creator of the Perl programming language: “Information doesn’t want to be free; information wants to be valuable.” This is an opportunity to derive incredible value, and one you don’t want to miss.
Get ready to fight the good fight against Data Huggers, people.
* Additional reading – Mark Thiele of ServiceMesh wrote a great post on the topic of “Ownership Disease”
Footnote: Dave McCrory followed up his initial blog with a later post entitled “Defying Data Gravity”
