A medical diagnostic test can make two kinds of error: it indicates negative when the disease is present, or positive when it isn't. Since a person either has the disease or doesn't, and tests either positive or negative, there are only four possible outcomes:
- Patient has the disease and tests positive - GOOD
- Patient has the disease and tests negative - BAD
- Patient doesn't have the disease and tests negative - GOOD
- Patient doesn't have the disease and tests positive - BAD
The problem is that the false positive rate is not the percentage of positive test results that are false, although it sounds like it should be. It is the (average) percentage of people who don't have the disease but nonetheless test positive. The two are very different.
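To see the distinction in concrete numbers, here is a minimal Python sketch using made-up counts (the figures are illustrative, not from any real test): a group of 1,005 people, 5 of them infected and all correctly flagged, tested with a 1% false positive rate.

```python
# Made-up counts for illustration only.
true_positives = 5        # infected people who test positive
false_positives = 10      # healthy people who test positive (1% of 1,000)
true_negatives = 990      # healthy people who test negative

# "False positive rate" divides by the number of HEALTHY people.
healthy = false_positives + true_negatives             # 1,000
false_positive_rate = false_positives / healthy        # 0.01, i.e. 1%

# The share of POSITIVE RESULTS that are false divides by all positives.
all_positives = true_positives + false_positives       # 15
fraction_of_positives_false = false_positives / all_positives  # ~0.67

print(f"false positive rate: {false_positive_rate:.0%}")
print(f"share of positive results that are false: {fraction_of_positives_false:.0%}")
```

The same test, with the same 1% false positive rate, produces positive results that are mostly false, simply because healthy people vastly outnumber infected ones.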
To illustrate, let's consider an imaginary diagnostic test - a good one. Let's suppose our test has a false positive rate of 1% and a false negative rate of 0%. This would be an extraordinarily good test - I don't know if any tests available today meet these criteria.
Let's now suppose that we roll out this test on a population where the true rate of infection is, say, 500 per 100,000 (that is, 5 per 1,000, or 1 in every 200 people). This kind of infection rate has our political leaders losing control of their anal sphincters, so it's a "bad" scenario.
Let's test 200 of these people, chosen at random. On average there will be one person infected, and our excellent test always indicates positive for that person. But we also get around 2 false positives (1% of 199). So if you are a member of this population and you test positive, the chance that you actually have the disease is about 1 in 3 (33%), and the chance that you don't is about 67%.
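The arithmetic above can be written out as a short Bayes-style calculation. This is just a sketch of the reasoning in the text; the function name and parameters are illustrative, not from any library.

```python
def chance_disease_given_positive(prevalence, fpr, fnr):
    """Chance a randomly chosen person who tests positive is actually infected."""
    true_positive = prevalence * (1 - fnr)      # infected and correctly detected
    false_positive = (1 - prevalence) * fpr     # healthy but wrongly flagged
    return true_positive / (true_positive + false_positive)

# The text's scenario: 1-in-200 infection rate, 1% FPR, 0% FNR.
p = chance_disease_given_positive(prevalence=1 / 200, fpr=0.01, fnr=0.0)
print(f"{p:.1%}")   # roughly a 1-in-3 chance
```

Working with exact fractions rather than the rounded "2 false positives" gives 33.4%, which matches the rough 1-in-3 figure.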
If the false positive rate of our test were 10% (a more realistic figure), a positive test result would mean you had only about a 5% chance of having the disease. And if the true infection rate were lower, the chance of having the disease given a positive test would be (even) lower.
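Extending the same sketch, we can vary the false positive rate and the infection rate together and watch both effects at once (still assuming a 0% false negative rate, as in the text's example):

```python
def p_disease_given_positive(prevalence, fpr):
    # Assumes a 0% false negative rate: every infected person tests positive.
    tp = prevalence                  # infected, all detected
    fp = (1 - prevalence) * fpr      # healthy people wrongly flagged
    return tp / (tp + fp)

for fpr in (0.01, 0.10):
    for prevalence in (1 / 200, 1 / 2000):
        p = p_disease_given_positive(prevalence, fpr)
        print(f"FPR {fpr:.0%}, infection rate 1 in {round(1 / prevalence)}: {p:.1%}")
```

With a 10% false positive rate and a 1-in-2,000 infection rate, fewer than 1 positive result in 100 reflects a real infection, which is the base-rate effect the whole article turns on.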