Just a bookmarking post.
Hat-tip to @anitaleirfall for posting this link to Daniel Kahneman’s “retraction” of conclusions drawn from underpowered studies in his Thinking, Fast and Slow, a book based on his earlier work with Amos Tversky.
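As an aside, a minimal sketch of why “underpowered” matters (Python, with purely illustrative numbers of my own choosing, not taken from Kahneman’s letter or the studies he cites): simulate many small-sample studies of a genuinely small effect and look at which ones clear p < .05.

```python
# Illustrative sketch only: simulate small two-group studies of a small true
# effect and see how often they reach p < .05, and how inflated the
# "significant" ones look. Effect size and sample size are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.2      # a small true effect (Cohen's d), assumed for illustration
n_per_group = 20       # small samples, as in the criticised priming studies
n_studies = 10_000

significant_d = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treated, control)
    # observed standardised effect size for this study
    d = (treated.mean() - control.mean()) / np.sqrt(
        (treated.var(ddof=1) + control.var(ddof=1)) / 2
    )
    if p < 0.05:
        significant_d.append(d)

print(f"Power (share of studies reaching p < .05): {len(significant_d) / n_studies:.2f}")
print(f"True effect: {true_effect}, mean effect in 'significant' studies: "
      f"{np.mean(significant_d):.2f}")
# Typical result: power around 0.1, and the studies that do reach significance
# report an effect far larger than the true one.
```

The handful of studies that clear the significance bar report an effect several times larger than the real one – which is roughly the trap that small-sample studies set for anyone citing them.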
Noted previously that super-statistician Taleb had a profound effect on Kahneman.
Kahneman’s work is behind “Nudge”.
(He and Tversky are much referenced in Thaler and Sunstein’s book of that name?)
Taleb had been involved with Nudge with both the UK and US governments (the David Cameron story?). AND Taleb had pointed out the problem with Nudge when, as so often, wrong or false facts were used as its basis, or unintended consequences resulted.
Does the technical error – which changes the significance of the facts – actually change the implied/accepted reality, prove it wrong, or simply leave it unproven?
Many meta-meta-levels in this … me internalising Kahneman’s error – doing so correctly – making the right Kahneman / Taleb / Nudge inference(s) – and the question of how significant these might be … as Nudges. (There’s a lot more in Thinking, Fast and Slow – not all of it dependent on the above error.)
The real dilemma here is small facts having much greater significance than might appear – seeming insignificant, easy to internalise implicitly, non-contentious, an easy fit with prejudice – yet being significantly wrong!
Story of our lives?
(Of course the opposite case exists too – as Kahneman himself admits – the failure to internalise a significant, correct fact because of its insignificant appearance.)