Mistakes, Damned Mistakes, and Statistics
There are various instructional videos on YouTube about statistics, but the most interesting is a talk by Oxford mathematician Peter Donnelly on how juries (and the rest of us) are easily confused and swayed by unsound mathematical arguments involving randomness and uncertainty.
A rather horrifying thought, isn't it: being convicted of a crime you did not commit on the basis of unsound statistical inference!
Before turning to the criminal case, however, Donnelly discusses what is called the false positive paradox: how a test that is 99% accurate can still produce a great many positive results for people who do not actually have a serious disease.
Gonick references the same paradox. Assume that one person in one thousand has this serious illness, that the test is 99% accurate, and that the probability of a false positive result is only 2%. Then suppose that you test positive. What is the probability that you actually have the disease?
Using Bayes' Theorem, we ultimately calculate that the probability is 0.0472; that is, of all those who test positive for the disease, fewer than 5% actually have it! In other words, of the roughly 21 people in 1000 who would test positive, only about 1 would actually have the disease. This is not as bad as it initially looks: it's better, after all, to get a false positive than a false negative, since a false positive can be caught by further testing, whereas a false negative prompts no further tests at all. And the positive result does raise each of those 21 people's chances of having the disease from 1 in 1000 to about 1 in 21.
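The calculation can be sketched in a few lines of Python, using the numbers given above (a prevalence of 1 in 1000, 99% sensitivity, and a 2% false positive rate):

```python
# Bayes' Theorem applied to the false positive paradox.
prevalence = 0.001           # P(disease): 1 person in 1000
sensitivity = 0.99           # P(positive | disease): the test is 99% accurate
false_positive_rate = 0.02   # P(positive | no disease): 2% of healthy people test positive

# Law of total probability: the overall chance of testing positive.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' Theorem: the chance you have the disease, given a positive test.
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(positive)           = {p_positive:.5f}")              # 0.02097
print(f"P(disease | positive) = {p_disease_given_positive:.4f}")  # 0.0472
```

About 21 people per 1000 test positive (0.02097 × 1000), but only about 1 of them (0.99 per 1000) actually has the disease, which is where the figure of roughly 1 in 21 comes from.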
At least, I'm 99.9999999999999% sure of that!