Oh boy - look what a data hunter has dragged in this time! Why is this problem so common? And who on earth is Bonferroni? Our friend here found one "statistically significant" result when he looked at goodness knows how many differences between groups of people. He's fallen totally for a statistical illusion that's a hazard of 'multiple testing'. And a lot of headline writers and readers will fall for it, too.

Then he's made it worse by taking his unproven hypothesis (that a particular drink on a particular day in a particular group of people prevented stroke) and whacking on another unproven hypothesis (that if everyone else drinks lots of it, benefits will ensue).

But it's the problem of multiple testing (also called multiplicity) where Olive Jean Dunn comes in. It's pretty much inevitable that multiple testing will churn out some totally random, unreliable answers. A "statistically significant" difference isn't proof that th...
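To see why multiple testing is such a trap, a little arithmetic helps. If every hypothesis you test is actually false (no real effect anywhere), each test still has a 5% chance of looking "statistically significant" at the conventional threshold. Run enough tests and a fluke becomes near-certain. The Bonferroni correction associated with Olive Jean Dunn counters this by dividing the significance threshold by the number of tests. A minimal sketch (the function names here are just illustrative, not from any particular library):

```python
# Chance of at least one false positive when testing m independent
# hypotheses that are all truly null, each at significance level alpha.
def familywise_error_rate(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

# Bonferroni (Dunn) correction: test each of the m hypotheses at
# alpha / m so the overall false-positive chance stays near alpha.
def bonferroni_threshold(m, alpha=0.05):
    return alpha / m

# With 20 comparisons, a fluke "significant" result is more likely than not:
print(round(familywise_error_rate(20), 3))   # ~0.642, i.e. about a 64% chance
print(round(bonferroni_threshold(20), 4))    # 0.0025, the corrected per-test threshold
```

So a data hunter checking 20 differences between groups has roughly a 64% chance of at least one spurious "finding" even when nothing real is going on, which is exactly the illusion described above.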