Discrimination by artificial intelligence in healthcare

Abstract

Artificial intelligence (AI) currently plays an important role in many fields, whether as a primary tool or an assistant, and especially in healthcare. AI can perform many functions even better than humans can, owing to the speed with which it compiles large amounts of data from diverse sources (the Internet and electronic health records), which increases physicians' productivity. Less discussed, though equally important, is discrimination by AI in healthcare. Physicians, who bear great responsibility, cannot rely on AI because of several vulnerabilities. First, AI collects huge amounts of data but does not guarantee error-free results: despite its lack of autonomy and the presence of operator control, human factors can become a source of inaccuracy. There is therefore a high risk that poor-quality data will feed into subsequent decision-making. Errors, biases (ethnic, gender, age, and social), and gaps in the data degrade AI outcomes, leading, for example, to discrimination against minorities and inaccurate prescriptions. In addition, AI may act unethically or even violate the law (e.g., Title VI of the Civil Rights Act and Section 1557 of the Affordable Care Act, which prohibit discrimination on the basis of race, color, national origin, sex, age, or disability in certain healthcare programs and activities).
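
To make the data-bias mechanism concrete, here is a minimal sketch, not from the article: it uses synthetic data and assumes numpy and scikit-learn are available. One model is trained on a dataset in which a minority patient group is heavily under-represented, and accuracy is then measured per group.

# Hypothetical illustration: under-representation in training data can
# produce unequal error rates between patient groups. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One synthetic "lab value" whose relation to disease differs by group.
    x = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    y = (x[:, 0] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return x, y

# The majority group dominates training; the minority is barely represented.
x_maj, y_maj = make_group(5000, shift=0.0)
x_min, y_min = make_group(100, shift=2.0)  # different baseline physiology
model = LogisticRegression().fit(np.vstack([x_maj, x_min]),
                                 np.concatenate([y_maj, y_min]))

# Evaluated per group, the single model fits the majority well and
# systematically misclassifies the under-represented group.
x_t, y_t = make_group(2000, shift=0.0)
print("majority accuracy:", round(model.score(x_t, y_t), 2))  # high
x_t, y_t = make_group(2000, shift=2.0)
print("minority accuracy:", round(model.score(x_t, y_t), 2))  # near chance

Per-group evaluation of this kind is a standard first check for the ethnic, gender, age, and social biases mentioned above: an aggregate accuracy figure can look excellent while one group receives near-random predictions.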

Second, AI may be called opaque because of the indeterminacy of its algorithms: no one can explain how or why the AI reached a given decision, as Judea Pearl notes in The Book of Why: The New Science of Cause and Effect. Physicians therefore cannot check the facts and verify that the analysis was performed correctly and that the conclusion is accurate. It must also be recognized that AI has difficulty with real-world problems that, unlike for humans, cannot be solved by formal mathematical rules of logic (e.g., natural language and face recognition). AI eases the work of medical professionals, yet it poses many unresolved problems that can mislead physicians and contribute to wrong decisions.
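
As a hedged illustration of this opacity, again not from the article and using synthetic data with scikit-learn assumed available, the sketch below trains a small neural network whose individual predictions are accurate yet cannot be traced to any clinically auditable rule.

# Hypothetical illustration of the "black box" problem: the model's only
# internal "justification" is a cascade of weighted sums. Synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))            # five anonymous "clinical" features
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # label hinges on an interaction

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=3000,
                    random_state=1).fit(X, y)
patient = X[:1]
print("flagged:", bool(net.predict(patient)[0]))
print("risk estimate:", net.predict_proba(patient)[0, 1])

# The parameters are fully visible, yet no single weight corresponds to a
# clinically meaningful rule a physician could verify or contest.
print("first-layer weights:", net.coefs_[0].shape, "raw numbers")

The point is not that the weights are hidden but that inspecting them does not amount to an explanation, which is why the verification step described above fails.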



About the authors

Michael Yu. Khomyakov

N.I. Pirogov Russian National Research Medical University

Author for correspondence.
Email: mehilaineen@gmail.com
ORCID iD: 0009-0005-4818-8270
Russian Federation, Moscow



Copyright (c) 2023 Eco-Vector

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


