I asked Paul to shoot me a paragraph about his keynote at Defrag. He sent over this:
Monkeys and Typewriters and Data
We have instrumented the planet. From webcams, to sensors, to servers, to satellites, we collect more data, on more things — at more times and in more places — than ever before. From Chinese electrical consumption, to Lake Havasu water levels, to Amazon web traffic, to iPhone backorder times, we have it all.
But given enough data, there will always, even if by chance, seem to be something wrong, something right, and something doing nothing at all. How do we tell the difference? Let’s talk about how we keep all these monkeys and typewriters with data from screwing up the world.
As I was receiving that, I was on the phone with Esteban Kolsky. We were talking about how data has become commoditized and, in some ways, useless. Every piece of technology generates more data at greater volume and velocity – and our ability to process and analyze that data is reaching dizzying speeds. In essence, if I can gather the data to argue the sky is green (to 99% certainty), and you can simultaneously gather the data to argue the sky is blue (to 99% certainty) — then what GOOD is the data?
More pointedly, how do we (and businesses) know what “objective” measures must be in place to help us determine what we consider valid insight, and what we consider just “analytics” (i.e., data that can be massaged to say anything)? It’s actually a *philosophical* discussion at some level.
Is it possible that we need to make our organizations *dumber*? Is it possible that when infinite amounts of data can be crunched by ever increasing numbers of people, we’re actually entering a *dangerous* place? What if “transparency” has nothing to do with “intelligence” – and too much “intelligence” (analytics, not insight) is a bad thing? And who the hell gets to make these judgements anyway?
Defrag: working hard to make your head spin.
Come drown in data with us, while we design what life-rafts should look like. Join us.