Ethical issues of implementing artificial intelligence in medicine


Abstract

Artificial intelligence (AI) systems are highly efficient, but their implementation in medical practice raises a range of ethical issues. The “black box” problem is fundamental to the philosophy of AI, although it takes on a specific character in medicine. To study the problems of implementing AI in medicine, relevant papers from the last three years were selected by citation count and analyzed using the PubMed and Google Scholar search engines. One of the central problems is that the grounds on which algorithms justify their decisions remain unclear to doctors and patients. This absence of clear, comprehensible principles of AI operation is called the “black box” problem. How can doctors rely on AI findings without enough data to explain a particular decision? Who will be responsible for the final decision in the event of an adverse outcome (death or serious injury)? In routine practice, medical decisions rest on an integrative approach (an understanding of pathophysiology and biochemistry, together with the interpretation of past findings), clinical trials, and cohort studies. AI may be used to build a plan for diagnosing and treating a disease while providing no convincing justification for its specific decisions. This creates a “black box”: it is not always clear which information the AI considers important for reaching a conclusion, nor how or why it reaches that conclusion. As Juan M. Durán writes, “Even if we claim to understand the principles underlying AI annotation and training, it is still difficult and often even impossible to understand the inner workings of such systems. The doctor can interpret or verify the results of these algorithms, but cannot explain how the algorithm arrived at its recommendations or diagnosis” [2]. Currently, AI models are trained to recognize microscopic adenomas and polyps in the colon. Despite their high accuracy, however, doctors still have an insufficient understanding of how AI differentiates between types of polyps, and the features that are decisive for an AI diagnosis remain unclear even to experienced endoscopists. Another example is the biomarkers of colorectal cancer recognized by AI. The doctor does not know how the algorithms set the quantitative and qualitative criteria for the detected biomarkers when formulating a final diagnosis in each individual case; in other words, a “black box” of the pathology process emerges. To earn the trust of doctors and patients, the processes underlying the work of AI must be deciphered and explained sequentially, step by step, showing how a specific result is reached. Although “black box” algorithms cannot be called transparent, the application of these technologies in practical medicine deserves consideration. Despite the problems described above, the accuracy and efficiency of AI solutions do not allow its use to be neglected; on the contrary, such use is necessary. The emerging problems should serve as a basis for training and educating doctors to work with AI, expanding its scope of application, and developing new diagnostic techniques.
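To make the demand for step-by-step explanation concrete, below is a minimal sketch of occlusion sensitivity, one of the simplest model-agnostic probing techniques from the explainability literature: the input image is masked patch by patch and the drop in the classifier's score is recorded, marking the regions the model relies on. Everything in the sketch is an illustrative assumption; the toy `model` stand-in, the image size, and the patch size are not taken from any system discussed above, and a real medical classifier would be a trained network evaluated on clinical images.

```python
import numpy as np


def model(image: np.ndarray) -> float:
    """Stand-in for an opaque classifier: returns a 'lesion' score for an image.

    A real system would be a trained network; this toy scores the mean
    intensity of a fixed region so the script runs end to end.
    """
    return float(image[8:16, 8:16].mean())


def occlusion_map(image: np.ndarray, patch: int = 4) -> np.ndarray:
    """Slide a neutral patch over the image and record how much the score drops.

    Large drops mark regions the model depends on, giving a coarse,
    human-inspectable view into an otherwise black-box decision.
    """
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()  # mask one patch
            heat[i // patch, j // patch] = base - model(occluded)
    return heat


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((24, 24))
    img[8:16, 8:16] += 1.0  # synthetic bright 'lesion' the toy model detects
    print(np.round(occlusion_map(img), 2))
```

Large values in the printed map indicate regions whose occlusion most reduces the score, a rough and purely illustrative answer to the question of which image features the model relied on; such probes inspect a black box from the outside and do not, by themselves, provide the genuine transparency discussed above.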


About the authors

Maxim I. Konkov

N.I. Pirogov Russian National Research Medical University

Author for correspondence.
Email: konkovmaksim18@gmail.com
ORCID iD: 0009-0002-2803-1020
Russian Federation, Moscow

References

  1. Holm EA. In defense of the black box. Science. 2019;364(6435):26–27. doi: 10.1126/science.aax0162
  2. Durán JM, Jongsma KR. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J Med Ethics. 2021;47(5):329–335. doi: 10.1136/medethics-2020-106820
  3. Poon AIF, Sung JJY. Opening the black box of AI-Medicine. J Gastroenterol Hepatol. 2021;36(3):581–584. doi: 10.1111/jgh.15384
  4. Wang F, Kaushal R, Khullar D. Should Health Care Demand Interpretable Artificial Intelligence or Accept “Black Box” Medicine? Ann Intern Med. 2020;172(1):59–60. doi: 10.7326/M19-2548
  5. London AJ. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Cent Rep. 2019;49(1):15–21. doi: 10.1002/hast.973
  6. Yang G, Ye Q, Xia J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf Fusion. 2022;77:29–52. doi: 10.1016/j.inffus.2021.07.016
  7. Quinn TP, Jacobs S, Senadeera M, Le V, Coghlan S. The three ghosts of medical AI: Can the black-box present deliver? Artif Intell Med. 2022;124:102158. doi: 10.1016/j.artmed.2021.102158
  8. Handelman GS, Kok HK, Chandra RV, et al. Peering Into the Black Box of Artificial Intelligence: Evaluation Metrics of Machine Learning Methods. AJR Am J Roentgenol. 2019;212(1):38–43. doi: 10.2214/AJR.18.20224
  9. Meldo AA, Utkin LV, Trofimova TN. Artificial intelligence in medicine: current state and main directions of development of the intellectual diagnostics. Diagnostic radiology and radiotherapy. 2020;11(1):9–17. (In Russ). doi: 10.22328/2079-5343-2020-11-1-9-17
  10. Malykh V. Decision support systems in medicine. Program Systems: Theory and Applications. 2019;10(2(41)):155–184. (In Russ). doi: 10.25209/2079-3316-2019-10-2-155-184


Copyright (c) 2023 Eco-Vector

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


