The above is from a digest of “big-data” stories from Data Science Central, of which I am typically sceptical.
Sceptical because, after two decades banging on about the problem of relying on metrics in complex situations (e.g. setting a speed limit as a number, anyone?), I wonder how many data-practitioners, large or small, actually get the problem with the use of quantifiable data.
In my day-to-day version, it’s resisting the management adage “You can’t manage what you don’t measure” – because the opposite is true. You’ll only get what you measure, and that will be a distortion of what you’re really trying to achieve. I’d never seen the problem given a name – the McNamara Fallacy – before now. Maybe recognising it – giving it a name – can help. Not surprised, however, to find it’s that McNamara – Bob – of US Vietnam body-count fame.
The first step is to measure whatever can be easily measured. This is OK as far as it goes.
The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading.
The third step is to presume that what can’t be measured easily really isn’t important. This is blindness.
The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.
– Daniel Yankelovich (1972)
The reason given is invariably the scientistic observation that these other hard-to-quantify and too-complex-to-account-for factors cannot be proven with logic and empirical evidence.
[More examples from BBC R4 Today this morning:
Rosling (08:40) – posthumous book – “Factfulness”?
NUT (08:46) – “more than a score” high-stakes testing?
And Hans’ book is also Book of the Week.]
=====
[Post Note: And more examples with “initial” UK company accounting returns on gender pay … ]
OK. You definitely get priority on this insight!