Suppose there are screening tests available for every single disease or ailment under the sun. Now suppose, for a moment, that you are screened for a broken arm. And let’s say that our screening test gives us a positive result. In other words, the test predicts that you have a broken arm. Now, let’s suppose that I proceed to order an x-ray and it confirms the positive result of our screening test and that you, indeed, do have a broken arm. This, in the parlance of epidemiology and, specifically, decision theory, is what we call a ‘true positive’. You screened positive for a disease or ailment and you truly do have the disease or ailment.
Let’s look at the flip side. Suppose now that you are screened for a broken foot. And let’s say that our screening test gives us a negative result. In other words, the test predicts that you do not have a broken foot. Now, let’s suppose that I proceed to order an x-ray and it confirms the negative result of our screening test and that you, indeed, do not have a broken foot. This is what we call a ‘true negative’. You screened negative for a disease or ailment and you truly do not have the disease or ailment.
In both instances, there is something obvious that has occurred: the screening test was correct. In one scenario it said you had an ailment and you did. In another, it said you didn’t have an ailment and you didn’t.
And so, in the ‘arm’ scenario, since the test was correct, painkillers are prescribed and, perhaps, a cast or splint is placed on your arm and you go home. The costs incurred are in line with the resources delivered. And in the ‘foot’ scenario, since the test was also correct, nothing is prescribed, no casts or splints are placed and the costs incurred are also in line with the resources delivered.
But what happens when the test is wrong?
What happens when the test delivers a false positive? In other words, what happens when the test says you have a broken arm and you don’t? Well, let’s think about this for a second. If you’re treated for a broken arm you don’t really have, then the system incurs costs. You see the doctor for follow-up visits. You take time off work. You get a cast and painkillers you don’t need. Maybe you suffer some adverse events from the painkillers you didn’t need. And maybe you suffer some stress and anxiety.
Or a false negative? What happens when the test says that you don’t have a broken foot, but you really do? In this case, you may not incur splint and painkiller costs at the point of diagnosis, but you incur the cost of ongoing pain and suffering, the cost of this untreated morbidity, lost productivity, potentially time off work and, maybe, some downstream effect of not being properly diagnosed in the first place (ie, more advanced disease now requiring more expensive intervention that would not have been necessary had your disease been caught earlier).
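To make the four outcomes concrete, here is a minimal sketch in Python. The function and its argument names are my own illustration for this article, not a standard epidemiological tool.

```python
# A minimal sketch of the 2x2 outcomes described above.
# The function and argument names are illustrative only.

def classify(test_positive: bool, condition_present: bool) -> str:
    """Map a screening result and the true status to one of the four outcomes."""
    if test_positive and condition_present:
        return "true positive"    # broken-arm scenario: test says yes, x-ray agrees
    if not test_positive and not condition_present:
        return "true negative"    # broken-foot scenario: test says no, x-ray agrees
    if test_positive and not condition_present:
        return "false positive"   # test says broken, but nothing is actually broken
    return "false negative"       # test says fine, but something really is broken

print(classify(test_positive=True, condition_present=False))   # -> false positive
print(classify(test_positive=False, condition_present=True))   # -> false negative
```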
This is where healthcare gets really, really tricky. We tend to focus on the costs we incur in the patients who have disease. And we disregard costs for patients in whom we find no disease. But the false positives and false negatives in healthcare drive costs that nobody accounts for. Ever. When a hospital gets its annual budget, the government doesn’t give it a few extra million dollars to treat patients who don’t really have disease. And nobody puts together a budget that has a buffer for taking care of patients who have disease that we didn’t realise was there in the first place. It just doesn’t work that way.
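To show what accounting for those error-driven costs could look like, here is a rough sketch. The prevalence, sensitivity, specificity and cost figures are hypothetical placeholders chosen only to demonstrate the arithmetic, not real data; the point is simply that the two error cells contribute a share of spending that a budget built around confirmed disease never sees.

```python
# A rough sketch of budgeting across all four cells of the 2x2.
# Every number below is a hypothetical placeholder, not data from the article.

prevalence  = 0.10   # fraction of screened patients who truly have the condition
sensitivity = 0.90   # P(test positive | condition present)
specificity = 0.85   # P(test negative | condition absent)

# Hypothetical per-patient cost attached to each outcome.
cost = {
    "true_positive":  1_000,  # appropriate treatment: cast, painkillers, follow-up
    "true_negative":      0,  # correctly reassured, nothing prescribed
    "false_positive":   800,  # unnecessary treatment, adverse events, anxiety
    "false_negative": 2_500,  # untreated morbidity, lost productivity, later rescue care
}

# Probability that a randomly selected screened patient lands in each cell.
p = {
    "true_positive":  prevalence * sensitivity,
    "false_negative": prevalence * (1 - sensitivity),
    "true_negative":  (1 - prevalence) * specificity,
    "false_positive": (1 - prevalence) * (1 - specificity),
}

expected_cost = sum(p[cell] * cost[cell] for cell in cost)
error_cost = p["false_positive"] * cost["false_positive"] + \
             p["false_negative"] * cost["false_negative"]

print(f"Expected cost per screened patient: {expected_cost:.2f}")
print(f"Share driven by test errors:        {error_cost / expected_cost:.0%}")
```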
And, yes, not every disease or ailment falls into this neat little ‘false positive’ or ‘false negative’ category. It is definitely hard to quantify the extent of these costs; it is conceivable that they are a drop in the bucket. And, perhaps most germanely, there is no obvious way to eliminate the problem of false positives and false negatives.
Medicine is not perfect. It makes mistakes. And that’s ok. But it is worth pointing out that even these honest mistakes cost money.
Decision theory sheds light on the things we don’t often pay attention to in healthcare