In the art of medicine of the future, it will be important to know when to use artificial intelligence and when to let natural intelligence work undisturbed.
Some argue that artificial intelligence will mean safer diagnoses, more tailored treatment and better quality of care (1). We share this view. However, we would also urge the medical profession to be more vigilant about all the tacit value assumptions embedded in our work.
Diagnostics
Artificial intelligence is information technology that can adjust its own activity; it includes machine learning and deep learning (2). We understand an algorithm as a precise description of a procedure for solving a problem (3).
Will the use of artificial intelligence give us better diagnostics? Much of the evidence suggests that it will. Diagnoses are names of diseases, and diseases are conditions with common characteristics that we can describe. Diagnostics is descriptive: it describes something out there in the world. Some diseases can be clearly delimited; tuberculosis, for example, refers to the presence and activity of the tubercle bacterium. Other illnesses, such as mental disorders, are more diffusely delimited. Moreover, many diagnoses and risk conditions share the trait of being defined by threshold values. When does a normal mood change into depression? And when is the blood pressure or the blood sugar too high? We can, of course, answer with a specific cut-off score on the MADRS, a mean blood pressure above 130/80 mm Hg over a 24-hour period, or an HbA1c ≥ 48 mmol/mol. However, such threshold values are neither God-given nor carved in stone. We humans change and create new thresholds, categories and diagnoses over time, and we will continue to do so. It should probably be natural intelligence rather than artificial intelligence that continues to set these threshold values.
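To make the point about threshold values concrete, here is a minimal sketch in Python of how such human-chosen cut-offs end up as fixed parameters inside an algorithm. The constants mirror the examples above; the function names and structure are purely illustrative and not taken from any clinical system or guideline.

```python
# Illustrative only: human-chosen diagnostic thresholds encoded as constants.
# Real guidelines are more nuanced and change over time.

HYPERTENSION_24H_MEAN = (130, 80)   # systolic/diastolic, mm Hg (24-hour mean)
DIABETES_HBA1C = 48                 # mmol/mol

def classify_blood_pressure(mean_systolic: float, mean_diastolic: float) -> str:
    """Label a 24-hour mean blood pressure against a fixed threshold."""
    if mean_systolic > HYPERTENSION_24H_MEAN[0] or mean_diastolic > HYPERTENSION_24H_MEAN[1]:
        return "hypertension"
    return "normal"

def classify_hba1c(hba1c_mmol_mol: float) -> str:
    """Label an HbA1c value against a fixed threshold."""
    return "diabetes" if hba1c_mmol_mol >= DIABETES_HBA1C else "normal"

# The thresholds are parameters we set, not facts the algorithm discovers:
# change the constants and the same patient receives a different label.
print(classify_blood_pressure(132, 78))  # "hypertension"
print(classify_hba1c(47))                # "normal"
```

The point of the sketch is simply that the cut-offs are inputs chosen by people; whether they sit in a rule like this or inside a machine-learning model, someone has decided where 'normal' ends.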
For the sake of discussion, let us assume that all diagnoses are correct and appropriate correlates of disease processes in the body. Given this, artificial intelligence offers us a fantastic opportunity to detect disease, as has already been shown in cancer and imaging diagnostics (4–6). But how will artificial intelligence affect medical treatment?
Treatment
When treating patients, curatively or palliatively, we need to know something about what the patients want, what the doctors want, and not least, what we should do. What is the right treatment for 90-year-old Olga with lung cancer? The answer depends on what we want to achieve. Do we want Olga to survive as long as possible? In that case, we have set ourselves a goal that can be worked out using an algorithm. Or do we want to limit treatment and give Olga a dignified end? Artificial intelligence can be more difficult to use in the latter case. As Mette Brekke and Ingvild Vatten Alsnes have pointed out, it is possible that artificial intelligence (at least for the time being) will fall short when it comes to treating people with health anxiety, multimorbidity and problems in the workplace or close relationships (7).
On the one hand, medical practice today is empirical, evidence-based and descriptive. Here, artificial intelligence can obviously help strengthen the medical profession. On the other hand, our profession is normative, filled with values, intentions, discretion and visions. This is where we should be particularly careful. What should be the overall goal of our health service? There is no precise algorithm for that. But what we do know is that the health service is ultimately about people. Caring for the health of such autonomous beings is, and will remain, normative. The fundamental motivation for our health service is thus normative.
Can artificial intelligence also be useful for this normative part of the medical profession? Possibly. For example, we can hope for greater transparency. When creating algorithms, we also need to reflect on what we want to achieve. In doing so, we may be able to make more conscious and wiser choices about the value assumptions that are currently hidden. We doctors have clinical judgment, gut feelings, biases, prejudices and varying professional skills. This means that we do not always treat 'equal cases equally' (8). One hope is that artificial intelligence can keep this in check.
Unfortunately, a US study found the opposite. An algorithm used to identify patients in need of additional medical care underestimated the needs of the sickest African American patients and thereby perpetuated racial inequalities in health care (9). In other words, there is a danger that algorithms adopt our old mistakes. This can have serious consequences. The question is whether we can identify such errors, and who is responsible for them.
Artificial intelligence will undoubtedly change the health service. Used properly, it can be a very useful tool for doctors, nurses and other healthcare personnel. But as Ingrid Hokstad has pointed out, we doctors should take the reins (10). The future of the art of medicine lies in knowing when to use artificial intelligence and when to let natural intelligence work undisturbed (11).