In 2005, John Ioannidis published a paper titled “Why Most Published Research Findings Are False,” arguing on statistical grounds that most published findings are likely wrong. Later large-scale replication efforts bore this out in psychology, where only about half of the results tested held up. Jay Daigle asked why this isn’t as much of a problem in mathematics as it is in fields like medicine (another replication effort found similar results in cancer research, for example).
Daigle asks: “But isn’t it…weird…that our results hold up when our methods don’t? How does that even work?”
His answer: “We get away with it becuase (sic) we can be right for the wrong reasons—we mostly only try to prove things that are basically true.”
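That last line is the heart of the statistical argument, and a rough back-of-the-envelope sketch shows why it works. This is my own illustration, not from either paper: the `ppv` helper and every number in it (80% power, a 0.05 significance threshold, the priors) are assumptions. The share of published “significant” findings that are actually true depends heavily on how often a field’s hypotheses are true to begin with:

```python
def ppv(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """Probability that a 'significant' finding is true, given the
    prior probability that the tested hypothesis is true.

    The default power and alpha are illustrative assumptions.
    """
    true_positives = prior * power          # true hypotheses that test significant
    false_positives = (1 - prior) * alpha   # false hypotheses that test significant anyway
    return true_positives / (true_positives + false_positives)

# A speculative field: only 1 in 10 tested hypotheses is actually true.
print(f"prior=0.10 -> PPV={ppv(0.10):.2f}")  # ~0.64

# Daigle's mathematics: we mostly test things that are basically true.
print(f"prior=0.90 -> PPV={ppv(0.90):.2f}")  # ~0.99

# Even with sloppy methods (weak power, loose threshold), a high prior saves us.
print(f"prior=0.90, weak methods -> PPV={ppv(0.90, power=0.5, alpha=0.2):.2f}")  # ~0.96
```

With a low prior, roughly a third of “findings” are false even under decent methods; with a high prior, the literature stays almost entirely correct even when the methods are sloppy. That’s being right for the wrong reasons.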
It’s a long read but worth your time if you’re into mathematics and proofs. In my head, mathematics is quite stringent in its standards of proof. Look at how revered Sir Andrew Wiles became when he proved Fermat’s Last Theorem, famously called the “most difficult mathematical problem” for the sheer number of unsuccessful proof attempts it accumulated. In the sciences, the standards seem less stringent: all kinds of papers get passed off as peer reviewed and accurate, only to turn out to be claiming things that aren’t true (see: the retracted paper claiming the MMR vaccine causes autism). But compare the public response to mathematics vs. the sciences: it’s “who cares?” vs. “omg, vaccines are unsafe!”
But despite my former aptitude for maths (I miss those days), I am but a layman, and you should read the article itself for a better summary.
Filed under: psychology research statistics