
While it makes some sense, it's not clear to me why those are different. If a test, say a medical test, is correct 90% of the time, then the chance of it being wrong is 10%.

But I've read that in the medical field, tests with high accuracy on negatives are used as a screening method, and more expensive tests with high accuracy on positives are used only when the former come back positive. The reasoning is that if the screening test comes back negative, we are confident the patient doesn't have the disease, but if it comes back positive, we are not as sure, so we do the expensive test. Which, of course, makes sense.

I get there are 4 events:

  1. Test is +, patient has a disease
  2. Test is -, patient doesn't have a disease
  3. Test is +, patient doesn't have a disease
  4. Test is -, patient has a disease
  • Number 3 is a false positive, number 4 is a false negative. For example, in number 3, the test is positive, but that's false -- the patient doesn't have the disease. – saulspatz Jul 27 '19 at 23:50
  • Why would you ever think that the probabilities of false negatives and false positives would be the same? Make a Venn diagram to see this. – David G. Stork Jul 27 '19 at 23:56
  • Echoing David, there are many times I see people ask "why isn't X true?" when the real question is why one would believe X is true in the first place. (Here, X is the claim that false positives and false negatives would occur equally frequently.) – anon Jul 28 '19 at 00:40
  • “This test is correct 90% of the time” is neither a statement about the rate of false positives nor a statement about the rate of false negatives. Also important to understanding accuracy of a disease test is to understand the rate of disease among those who get the test. If a disease is very rare (say 0.1% of the tested population has it), a test that always says “no disease” will give a correct result almost all the time and have a 0% false positive rate. It will also have a 100% false negative rate. – Steve Kass Jul 28 '19 at 01:33
  • In the tree diagram here, the false negative rate is $(1-v)$ (2nd branch in the second trial/column), whereas the false positive rate is $(1-p)$ (3rd branch in the same trial/column). – ryang Jan 03 '21 at 13:02
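To make the distinction in these comments concrete, here is a small Python sketch. All the counts are invented purely for illustration: a test can be "correct about 90% of the time" overall while its false positive and false negative rates are quite different, because each rate is computed against a different subpopulation.

```python
# Hypothetical confusion-matrix counts for 1000 patients
# (numbers chosen purely for illustration).
diseased = 100   # patients who actually have the disease
healthy = 900    # patients who do not

true_pos = 95    # test +, has disease        (event 1)
true_neg = 810   # test -, no disease         (event 2)
false_pos = 90   # test +, no disease         (event 3)
false_neg = 5    # test -, has disease        (event 4)

# Overall accuracy pools both groups together...
accuracy = (true_pos + true_neg) / (diseased + healthy)

# ...but each error rate conditions on a different subpopulation.
false_neg_rate = false_neg / diseased   # P(test - | disease)
false_pos_rate = false_pos / healthy    # P(test + | no disease)

print(accuracy)         # 0.905 -- "correct ~90% of the time"
print(false_neg_rate)   # 0.05
print(false_pos_rate)   # 0.1
```

So a single "correct 90% of the time" figure is compatible with many different splits between the two error rates, which is why screening and confirmatory tests can be tuned differently.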

1 Answer


To understand why the rate of false positives and false negatives should be different, you can consider the two edge cases: the constant tests.

One “test” I could perform is to declare every result a positive, regardless of any information about the input. I just say that it’s a positive result no matter what. Clearly, in this case, we’re going to get a lot of false positives (unless the result really should be positive for everything in the population), but we’re not going to get any false negatives, because we never report anything as negative.

Likewise, I could perform the “test” where I declare every single result to be negative, and hence I may get a bunch of false negatives but it is impossible for me to get any false positives since I never report anything as positive.

In both of these tests it’s really obvious why the rate of false positives doesn’t match the rate of false negatives.

In general, any other test you perform can be thought of as some kind of piecewise combination of these two tests, so it’s actually quite rare that you’d be able to construct a test with the same likelihood of false positives and false negatives.
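The two constant tests above can be sketched in a few lines of Python. The population here is invented (10% prevalence, chosen only for illustration); the point is just that each constant test drives one error rate to 100% and the other to 0%.

```python
# Invented population: 100 diseased, 900 healthy (True = has the disease).
population = [True] * 100 + [False] * 900

def rates(test):
    """Return (false_positive_rate, false_negative_rate) of `test` on the population."""
    false_pos = sum(1 for sick in population if test(sick) and not sick)
    false_neg = sum(1 for sick in population if not test(sick) and sick)
    n_healthy = sum(1 for sick in population if not sick)
    n_diseased = sum(1 for sick in population if sick)
    return false_pos / n_healthy, false_neg / n_diseased

def always_positive(_):
    return True   # declare everything positive

def always_negative(_):
    return False  # declare everything negative

print(rates(always_positive))   # (1.0, 0.0): every healthy patient is a false positive
print(rates(always_negative))   # (0.0, 1.0): every diseased patient is a false negative
```

Note that the always-negative "test" is also right 90% of the time on this population, echoing the comment below.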

  • Also note that if $10\%$ of the population has the condition you're allegedly testing for, the "test" where you always answer "negative" is literally correct $90\%$ of the time and wrong $10\%$ of the time. – David K Jul 28 '19 at 01:27