The concept of responsible artificial intelligence as the future of artificial intelligence in medicine

Full text:


Active deployment of artificial intelligence (AI) systems in medicine creates many challenges. Recently, the concept of responsible artificial intelligence (RAI), which aims to resolve the inevitable ethical, legal, and social problems, has been widely discussed. The scientific literature was analyzed, and the possibility of applying the RAI concept to overcome the existing problems of AI in medicine was considered. Studies of possible AI applications in medicine showed that current algorithms are unable to meet basic, enduring societal needs, in particular fairness, transparency, and reliability. The RAI concept, built on the three principles of accountability, responsibility, and transparency (ART), was proposed to address these ethical issues. Without further development and application of the ART concept, the continued use of AI in areas such as medicine and public administration becomes dangerous, if not impossible. The requirements for accountability and transparency of conclusions rest on the identified epistemological (erroneous, opaque, and incomplete conclusions) and normative (data confidentiality and discrimination against certain groups) problems of using AI in digital medicine [2]. Epistemological errors committed by AI are not limited to omissions related to the volume and representativeness of the source databases analyzed. They also include the well-known “black box” problem, i.e. the inability to “look” into how the AI forms its outputs when processing input data. Alongside epistemological errors, normative problems inevitably arise, including breaches of patient confidentiality and discrimination against some social groups: when patients refuse to provide medical data for training algorithms or for inclusion in the analyzed databases, the resulting models yield inaccurate conclusions for patients of certain genders, races, and ages.
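The “black box” problem described above can be probed even without access to a model's internals. The sketch below is purely illustrative and not a method from the article: the model, feature names, and coefficients are hypothetical stand-ins, and the probe simply nudges one input at a time to see how the opaque output shifts, a minimal form of post-hoc sensitivity analysis.

```python
# Illustrative sketch of a post-hoc probe for an opaque model.
# The model and its inputs (age, blood_pressure, marker) are hypothetical.

def opaque_model(age, blood_pressure, marker):
    """Stand-in for a deployed black-box classifier returning a risk score."""
    score = 0.02 * age + 0.005 * blood_pressure + 0.3 * marker
    return min(1.0, score / 3)

def sensitivity(predict, baseline, deltas):
    """Change in output when each named input is nudged by its delta,
    holding the other inputs fixed at the baseline."""
    base = predict(**baseline)
    out = {}
    for name, delta in deltas.items():
        probed = dict(baseline, **{name: baseline[name] + delta})
        out[name] = predict(**probed) - base
    return out

baseline = {"age": 60, "blood_pressure": 130, "marker": 1.0}
effects = sensitivity(opaque_model, baseline,
                      {"age": 10, "blood_pressure": 10, "marker": 0.5})
```

A probe of this kind only approximates local behavior around one patient profile; it does not make the model transparent in the ART sense, which is part of why the article argues for transparency as a design requirement rather than an afterthought.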
Importantly, the methodology of AI data analysis depends on the program code written by the programmer, whose epistemological and logical errors are projected onto the AI. Hence the problem of assigning responsibility for erroneous conclusions, i.e. distributing it among the program itself, the developer, and the end user. Numerous professional associations are designing ethical standards for developers and a statutory framework to regulate responsibility among these parties. However, the state must play the leading role in developing and approving such legislation. The use of AI in medicine, despite its advantages, is accompanied by many ethical, legal, and social challenges. The development of RAI has the potential both to resolve these challenges and to further the active and secure deployment of AI systems in digital medicine and healthcare.
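The accountability the text calls for presupposes that disparities can be measured in the first place. The following is a minimal sketch of a per-group performance audit; the records and group labels are synthetic, invented for illustration, and the audit metric (accuracy gap between groups) is only one of many possible fairness measures.

```python
# Illustrative sketch: auditing a model's accuracy per demographic group.
# All records below are synthetic; group "B" is deliberately underrepresented
# to mimic the data-scarcity bias discussed in the text.
from collections import defaultdict

# (group, true_label, predicted_label) triples.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
    ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0),
]

def accuracy_by_group(records):
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

rates = accuracy_by_group(records)
# A large gap between groups signals the kind of bias that RAI/ART
# principles would require developers to detect, report, and remediate.
gap = max(rates.values()) - min(rates.values())
```

In a real deployment the same audit would run on held-out clinical data with the actual model's predictions, and the gap would feed into the accountability reporting the article envisions.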



About the author

Nikolai Germanov

N.I. Pirogov Russian National Research Medical University

Author for correspondence.
Email: n.s.germanov@gmail.com
ORCID iD: 0000-0003-1953-8794
Russian Federation, Moscow

References

  1. Dignum V. Responsibility and Artificial Intelligence. In: Dubber M.D., Pasquale F., Das S., editors. The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press, 2020. doi: 10.1093/oxfordhb/9780190067397.013.12
  2. Trocin C., Mikalef P., Papamitsiou Z., et al. Responsible AI for Digital Health: a Synthesis and a Research Agenda // Inf Syst Front. 2021. Available from: https://www.researchgate.net/publication/352769689_Responsible_AI_for_Digital_Health_a_Synthesis_and_a_Research_Agenda/link/60d807df92851ca9448cf7c4/download. Accessed: 03.06.2023. doi: 10.1007/s10796-021-10146-4
  3. Racine E., Boehlen W., Sample M. Healthcare uses of artificial intelligence: Challenges and opportunities for growth // Healthcare Management Forum. 2019. Vol. 32, N 5. P. 272–275. doi: 10.1177/0840470419843831
  4. Zednik C. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence // Philos. Technol. 2021. Vol. 34. P. 265–288. doi: 10.1007/s13347-019-00382-7
  5. Astromskė K., Peičius E., Astromskis P. Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations // AI & Soc. 2021. Vol. 36. P. 509–520. doi: 10.1007/s00146-020-01008-9
  6. Burr C., Taddeo M., Floridi L. The Ethics of Digital Well-Being: A Thematic Review // Sci Eng Ethics. 2020. Vol. 26. P. 2313–2343. doi: 10.1007/s11948-020-00175-8
  7. Gotterbarn D., Bruckman M., Flick C., Miller K., Wolf M.J. ACM Code of Ethics: A Guide for Positive Action // Communications of the ACM. 2018. Vol. 61, N 1. P. 121–128. doi: 10.1145/3173016


Copyright © Eco-Vector, 2023

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The media outlet is registered with the Federal Service for Supervision of Communications, Information Technology and Mass Media (Roskomnadzor).
Registration number and date of the registration decision: series ПИ № ФС 77 - 79539, November 9, 2020.

