What’s More Important: Improving Mortality Rate or Survival Rate? (Hint: It is not a trick question)

A recent commentary by Aaron E. Carroll in the NY Times explains “Why Survival Rate Is Not the Best Way to Judge Cancer Spending.” If you don’t understand the difference between survival rate and mortality rate, it is worth a quick read; it explains the concepts of “lead-time bias” and “overdiagnosis bias.” Here’s an excerpt:

Mortality rates are determined by taking the number of people who die of a certain cause in a year and dividing it by the total number of people in a population…

Survival rates describe the number of people who live a certain length of time after a diagnosis…

Let’s consider a hypothetical illness, thumb cancer. We have no method to detect the disease other than feeling a lump. From that moment, everyone lives about four years with our best therapy. Therefore, the five-year survival rate for thumb cancer is effectively zero, because within five years of detection, everyone dies.

Now, let’s assume that we develop a new scanner that can detect thumb cancer five years earlier. We prevent no more deaths, mind you, because our therapy hasn’t improved. Everyone now dies nine years after detection instead of four. The five-year survival rate is now 100 percent.
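To make the arithmetic concrete, here is a minimal Python sketch of Carroll’s hypothetical. The four-year and nine-year post-detection survival times come from the excerpt; the population size and the yearly death count are made-up numbers chosen only to keep the output readable.

```python
# A sketch of the thumb-cancer arithmetic from the excerpt. The 4-year
# and 9-year post-detection survival times are Carroll's numbers; the
# population size and yearly death count are assumptions for illustration.

POPULATION = 1_000_000   # hypothetical population size
DEATHS_PER_YEAR = 100    # thumb-cancer deaths per year (every case is
                         # fatal, so the scanner does not change this)

def mortality_rate(deaths_per_year: int, population: int) -> float:
    """Deaths from the disease in a year, divided by the population."""
    return deaths_per_year / population

def five_year_survival(years_after_detection: float) -> float:
    """Fraction of patients alive five years after diagnosis. In this
    all-or-nothing hypothetical everyone lives exactly the same length
    of time, so the rate is either 0% or 100%."""
    return 1.0 if years_after_detection >= 5 else 0.0

# Before the scanner: detected by feeling a lump, death ~4 years later.
# After the scanner:  detected 5 years earlier, death ~9 years later.
for label, years in [("lump detection", 4), ("new scanner", 9)]:
    print(f"{label}: mortality = "
          f"{mortality_rate(DEATHS_PER_YEAR, POPULATION):.4%}, "
          f"five-year survival = {five_year_survival(years):.0%}")
```

Both scenarios print the same mortality rate (100 deaths per million, or 0.01 percent), while the five-year survival rate jumps from 0 to 100 percent. The same people die of the same disease on the same schedule; only the diagnosis clock moved. That is lead-time bias.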