This article from The New York Times was posted right at the end of 2012, but heralds a dichotomy for 2013 – the love/hate relationship with the steamrolling bandwagon of Big Data. I’m conflicted – in many ways I should love “Big Data”, but it has already become an ill-defined concept inflated by tech-vendor hyperbole, and so I find myself trying to distance Atheon and our work from it.
The best definitions I have seen (this is my current favourite) end up marginalising Big Data as a fairly rare extreme which only affects a tiny proportion of businesses (and a fair number of scientists). Why do I like this idea? Because it draws attention away from the “Big” and lets us get back to the Data; it seems to me foolish to trouble ourselves with extreme challenges of scale, speed and specialisation (normally presented as the “three Vs”: volume, velocity and variety) when 99.9% of organisations still fritter away “small data” (deliberately no caps!) in so many ways:
- Poor data capture: introducing errors at source
- Poor master data: limiting the ways in which transactional data can be analysed and utilised
- Poor attribution: focusing on categorisation over attribution (this deserves a post in its own right at some point…)
- Poor quality: placing little value on data accuracy, timeliness, usability, location etc.
- Poor valuation: treating data as an IT resource, rather than an all-powerful business asset
- Poor focus: 100s of KPIs (how can 100s of measures be “key”?)
- Poor definition: ill-defined metrics, repeated in different forms – with different names – across the organisation
- Poor management: ever-growing islands of data / data silos
- Poor analysis: the obsession with reporting, at the expense of even small-scale imaginative analysis
The list goes on. Let’s address some (many?) of these before we get carried away with the extremes of Big Data. Stephen Few, as usual, presents a cogent argument for small, slow and sure data (in direct contrast with the three Vs).