A recent PLoS Medicine study reported in the NY Times commits one of the easiest mistakes in statistics, something that really shouldn't be allowed in a refereed manuscript. In fairness to the journal, the paper itself doesn't say anything technically wrong, but even in its own abstract it leads readers to the wrong conclusion. Essentially, the statistics show nothing significant, but the authors want you to believe otherwise.
So perhaps only a dedicated statistician would appreciate this (or an economist; one of the nice things about economics is that it forces statistics on all its practitioners, something you hate as a student but appreciate later).
The study looked at 3,141 counties and found a statistically significant decline in life expectancy in 11 counties for men and 180 counties for women. The Times article doesn't state this, but if you look at the original journal article, you find these estimates were significant at a 10% p-value.
So what does a 10% p-value mean? It means that when the null hypothesis is true, the test will produce a false positive 10% of the time.
So what does this mean in practice? If you run 6,282 hypothesis tests (3,141 counties, once for men and once for women) of the null hypothesis that life expectancy is unchanged, you should expect to find a false positive in roughly 628 of them by chance alone. The fact that they found only 180 + 11 = 191 statistically significant declines, when random chance alone would be expected to produce more than 600, makes it very questionable to conclude that these declines are real.
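The arithmetic here is easy to check with a quick simulation. Under the null hypothesis, p-values are uniformly distributed, so a test at the 10% level "rejects" by accident 10% of the time; the sketch below (a simplified model that treats the tests as independent, which the county-level data are not exactly) counts the spurious significant results you'd expect across all 6,282 tests:

```python
import numpy as np

# Under the null, p-values are uniform on [0, 1], so a test at the
# 10% level produces a false positive with probability 0.10.
rng = np.random.default_rng(0)
n_tests = 3141 * 2   # 3,141 counties, tested separately for men and women
alpha = 0.10         # the study's 10% significance level

# Simulate one p-value per test, assuming every null hypothesis is true.
p_values = rng.uniform(size=n_tests)
false_positives = int((p_values < alpha).sum())

print(f"expected false positives: {alpha * n_tests:.0f}")
print(f"simulated false positives: {false_positives}")
```

Running this gives a count clustered around 628, which is more than three times the 191 "significant" declines the study actually reports.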