We might once have categorized a melanoma simply as a type of skin cancer. But that is beginning to seem as outdated as calling pneumonia, bronchitis, and hay fever “cough.” Personalized medicine will help oncologists gain a more sophisticated understanding of a given cancer as, say, one of a number of distinct mutations. If they are properly combined, compared, and analyzed, digitized records could indicate which combination of chemotherapy, radioimmunotherapy, surgery, and radiation has the best results for that particular subtype of cancer. That is the aspiration at the core of “learning health care systems,” which are designed to optimize medical interventions by comparing the results of natural variations in treatments.
For those who dream of a “Super Watson” moving from conquering Jeopardy! to running hospitals, each of these advances may seem like a step toward cookbook medicine implemented by machine. And who knows what’s in the offing a hundred years hence? In our lifetime, what matters is how all these data streams are integrated, how much effort is put into that aim, how participants are treated, and who has access to the results. These are all difficult questions, but no one should doubt that juggling all the data will take skilled and careful human intervention — and plenty of good legal advice, given complex rules on health privacy and human subjects research.
To dig a bit deeper into radiology: the imaging of bodily tissue is rapidly advancing. We’ve seen the advances from X-rays and ultrasound to nuclear imaging and radiomics. Scientists and engineers are developing ever more ways of reporting what is happening inside the body. There are already ingestible pill-cams; imagine much smaller, injectable versions of the same. The resulting data streams are far richer than what came before. Integrating them into a judgment about how to tweak or entirely change patterns of treatment will take creative, un-systematizable thought. As radiologist James Thrall has argued,
The data in our . . . information system databases are “dumb” data. [They are] typically accessed one image or one fact at a time, and it is left to the individual user to integrate the data and extract conceptual or operational value from them. The focus of the next 20 years will be turning dumb data from large and disparate data sources into knowledge and also using the ability to rapidly mobilize and analyze data to improve the efficiency of our work processes.
Richer results from the lab, new and better forms of imaging, genetic analysis, and other sources will need to be integrated into a coherent picture of a patient’s state of illness. In Simon Head’s thoughtful distinction, optimizing medical responses to the new volumes and varieties of data will be a matter of practice, not predetermined process. Both diagnostic and interventional radiologists will need to take up difficult cases anew, not as simple sorting exercises.
Given all the data streams now available, one might assume that rational health policy would deepen and expand the professional training of radiologists. But it appears that the field is instead moving toward commoditization in the US. Ironically, radiologists themselves have a good deal of responsibility here; to avoid night shifts, they started contracting with remote “nighthawk” services to review images. That, in turn, has led to “dayhawking” and to pressure on cost-conscious health systems to find the cheapest radiological expertise available—even if optimal medical practice would recommend closer consultations between radiologists and other members of the care team for both clinical and research purposes. Government reimbursement policies have also failed to do enough to promote advances in radiological AI.
Many judgment calls need to be made by imaging specialists encountering new data streams. Presently, robust private and social insurance covers widespread access to radiologists who can attempt to take on these challenges. But can we imagine a world in which people are lured into cheaper insurance plans to get “last year’s medicine at last year’s prices”? Absolutely. And we can just as easily imagine that the second tier (or third or fourth or fifth tier) of medical care will probably be the first to include purely automated diagnoses.
Those in the top tier may be happy to see the resulting decline in health care costs overall; they are often the ones on the hook for the taxes necessary to cover the uninsured. But no patient is an island in the learning health care system. Just as ever-cheaper modes of drug production have left the United States with persistent shortages of sterile injectables, excluding a substantial portion of the population from high-tech care will make it harder for those with access to such care to understand whether it’s worth trying. A learning health system can make extraordinary discoveries, if a comprehensive dataset can fuel observational research into state-of-the-art clinical innovations. The fewer people who have access to such innovations, the fewer opportunities we have to learn how well they work and how they can be improved. Tiering may solve medicine’s cost crisis at present, but it sets back future medical advances for everyone.

Thus, there is a high road to advances in medical AI, emphasizing better access for everyone to continually improving care, and a cost-cutting low road, which focuses on merely replicating what we have. Doctors, hospital managers, and investors will implement the high road, the low road, or some middle path. Their decisions, in turn, are shaped by a shifting health law and policy landscape.
For example, consider the tensions between tradition and innovation in malpractice law. When something goes wrong, doctors are judged based on a standard of care that largely refers to what other doctors are doing at the time. Malpractice concerns thus scare some doctors into conformity and traditionalism. On the other hand, the threat of litigation can also speed the transition to clearly better practice. No doctor today could get away with simply palpating a large tumor to diagnose whether it is malignant or benign. Samples usually must be taken, pathologists consulted, and expert tissue analysis completed. If AI methods of diagnosis become sufficiently advanced, it will be malpractice not to use them, too.
On the other hand, advanced automation may never get any traction if third-party payers, whether government or insurers, refuse to pay for it. Insurers often try to limit the range of care that their plans cover. Patients’ rights groups fight for mandated benefits. Budget cutters resist, and when they succeed, health systems may have no choice but to reject expensive new technology.
Other regulatory schemes also matter. Medical boards determine the minimal acceptable practice level for doctors. In the United States, the Centers for Medicare and Medicaid Services help set the terms for graduate medical education via subsidies. Well funded, training programs can design collaborations with bioengineers, computer scientists, and statisticians. Poorly funded, they will go on churning out too many physicians ignorant of the statistical knowledge necessary to do their current jobs well, let alone critically evaluate new AI-driven technologies.
The law is not merely one more set of hurdles to be navigated before engineers can be liberated to cure humanity’s ills. The key reason health care employment has actually grown as a sector for the past decade is the set of legal mandates giving wide swaths of the population guaranteed purchasing power, whatever their wages or wealth. At their best, those legal mandates also guide the development of a health care system toward continuous innovation and improvement.