It’s been a long time, and this is actually a boring subject, but it’s something I wanted to talk about.

Last week Nate Silver of FiveThirtyEight wrote a piece about flawed statistical thinking in an op-ed by Peggy Noonan. He used some simple calculations to show his point.

Dan McLaughlin of RedState had an issue with Mr. Silver’s piece.

> Silver concedes of his statistical analysis that “this calculation assumes that individuals’ risk of being audited is independent of their political views,” which of course is the very thing in dispute; it’s like the old joke about an economist stranded on a desert island with a stack of canned goods whose solution begins, “assume a can opener.” All things being equal, all things are equal.

Mr. McLaughlin fundamentally misidentified what Mr. Silver was doing when he made that assumption. The assumption doesn’t weaken Silver’s argument; it is necessary to make the argument at all, statistically.

#### The Null Hypothesis

What Mr. Silver was doing in his piece was using an informal version of the null hypothesis, which is the foundation of much of modern statistics. The fundamental mathematical principle behind statistical significance relies, not on proving a hypothesis, but on disproving the null hypothesis.

Thus, if a statistician wants to show that smoking causes cancer, he does the math assuming that smoking has no effect on cancer. If the math leads to a sufficiently unlikely result, he has disproven the null hypothesis. If the math doesn’t, then he has failed to disprove the null hypothesis. A statistician never proves a positive hypothesis; he simply disproves null hypotheses.

This is a bit tough to grasp, so I’ll try to explain it with a very simple example. If I have a coin that I think might be weighted, the way I test that is to flip it a bunch of times and write down the results. Then I assume the coin was fair, 50-50 heads and tails, and ask, “If the coin weren’t biased, how unlikely would it be that I got the results I just did?” In 100 flips, if my sample came out 47-53, sure, the most likely explanation is that it is slightly weighted. But that result is entirely unremarkable for a fair coin, so I would not reject the null hypothesis. If my sample were 82-18, however, that would be a staggeringly unlikely event with a fair coin, so it is probably safe to reject the null. If I only flipped the coin twice, I couldn’t disprove the null even if both flips were heads, since that has a good chance of happening regardless. Mathematically, this is what is represented by the p-value: the probability of a result at least as extreme as the one observed, given the null hypothesis.
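The coin example can be worked out directly. Here is a minimal sketch of a two-sided binomial p-value, using only the Python standard library (the function name and structure are mine, not from any particular statistics package):

```python
from math import comb

def binom_pvalue(heads, flips, p=0.5):
    """Two-sided p-value: the probability, under a fair coin,
    of a result at least as lopsided as the one observed."""
    expected = flips * p
    deviation = abs(heads - expected)
    total = 0.0
    for k in range(flips + 1):
        # Sum the probability of every outcome at least as far
        # from the expected count as the observed one.
        if abs(k - expected) >= deviation:
            total += comb(flips, k) * p**k * (1 - p)**(flips - k)
    return total

# 47 heads in 100 flips: well above any rejection threshold,
# so we fail to reject the null.
print(binom_pvalue(47, 100))

# 82 heads in 100 flips: vanishingly small, so reject the null.
print(binom_pvalue(82, 100))

# 2 heads in 2 flips: a fair coin does this a quarter of the time,
# so the two-sided p-value is nowhere near significance.
print(binom_pvalue(2, 2))
```

Note that large deviations in small samples (2-for-2) prove nothing, while the same proportional deviation in a large sample (82-of-100) is decisive.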

That an individual’s risk of audit is independent of their political views is a null hypothesis. Mr. Silver proposes it, then shows that Peggy Noonan’s evidence does not disprove it. Thus, statistically, Peggy Noonan has very weak evidence. He does this by showing that, if the null is true, it would not be unusual to find four or five (indeed, four or five thousand) Republican donors that were audited. So, the fact that Peggy Noonan did find four or five Republican donors that were audited is not statistical evidence that the null is false.
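Mr. Silver’s back-of-the-envelope point can be sketched the same way. The audit rate and donor count below are illustrative placeholders of mine, not his actual figures:

```python
# Illustrative placeholder numbers, not Silver's actual figures:
# assume a 1% annual audit rate (the null hypothesis: the same
# rate for everyone, politics aside) and 500,000 Republican donors.
audit_rate = 0.01
republican_donors = 500_000

# Under the null, the expected number of audited Republican donors:
expected_audits = audit_rate * republican_donors
print(expected_audits)  # thousands, so finding four or five is no surprise
```

With thousands of audited Republican donors expected even when audits ignore politics entirely, a handful of anecdotal examples carries no statistical weight against the null.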

The null hypothesis itself, however, is not a political statement. It is simply the way one has to formulate the problem in order to use the mathematical tools available.

As a side note, I was banned from RedState some time ago for formulating a statistical query in this way, because the null hypothesis looked like a political position, so the fact that it has come up again is of some interest to me.