Statistics had a boost in visual popularity over the first two years of the COVID pandemic, thanks to those not-at-all depressing and ever-present growth charts. But with great exposure came great misinformation and misunderstanding of what the data conveyed and where it came from. That’s why we need some basic principles to follow when we analyse data and statistics. In 1995, the late Robert P. Abelson published a book that examined the problems of interpreting quantitative data: Statistics as Principled Argument.
Jim Lewis and Jeff Sauro looked at eight “laws” from the book that condense some of its theories. I’ve quoted one below:
2. Overconfidence abhors uncertainty.
Consistent with the expectation that chance is more regular than its actual lumpiness implies, people (including researchers) tend to underestimate the extent to which measurements can vary from one sample to another.
“Psychologically, people are prone to prefer false certitude to the daunting recognition of chance variability” (Abelson, 1995, p. 27).
That’s why it’s important to compute confidence intervals around estimated values, whether means or percentages, especially when sample sizes are small. This reveals to researchers the actual precision (or lack thereof) in their measurements. It’s easy to get fooled by the randomness of data, especially with small sample sizes. This is one of the reasons we encourage researchers planning studies to make comparisons (e.g., benchmark studies), so they have a large enough sample size to differentiate the signal of a real difference from the noise of sampling error.
(via Measuring U)
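To make that last point concrete, here’s a quick sketch of what computing a confidence interval around a small-sample mean might look like. The article doesn’t include any code, so this is just an illustration using Python’s scipy, with made-up numbers:

```python
# Minimal sketch: 95% t-based confidence interval for a mean from a small sample.
# The sample values are invented purely for illustration.
import numpy as np
from scipy import stats

sample = np.array([4.2, 5.1, 3.8, 6.0, 4.7, 5.5, 3.9, 4.4])  # e.g. task times from 8 users

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
# With only 8 observations the interval is wide -- that's the "actual precision
# (or lack thereof)" the quote is talking about.
```

With a sample that small, the interval ends up noticeably wide, which is exactly the chance variability people prefer not to see.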
Filed under: data viz education statistics