Discrimination of artificial intelligence in healthcare


Abstract

Currently, artificial intelligence (AI) plays an important role in various fields as a primary worker or assistant, especially in healthcare. AI can perform many functions even better than humans can, thanks to the speed with which it compiles large amounts of data from various sources (the Internet and electronic health records), which increases physicians' productivity. Less discussed, though equally important, is discrimination by AI in healthcare. Doctors, who bear great responsibility, cannot rely on AI because of multiple vulnerabilities. First, AI collects huge amounts of data, yet it does not guarantee error-free results, despite its non-autonomy and operator control, since human factors can intervene and become a source of inaccuracy. There is therefore a high risk of feeding poor-quality data into subsequent decision-making. Errors, biases (ethnic, gender, age, and social), and data gaps worsen AI outcomes, leading, for example, to discrimination against minorities and inaccurate prescriptions. In addition, AI may act unethically or even violate laws (e.g., Title VI of the Civil Rights Act and Section 1557 of the Affordable Care Act, which prohibit discrimination on the basis of race, color, national origin, sex, age, or disability in certain healthcare programs and activities).
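The disparity described above can be made measurable. The following is a minimal sketch, using entirely hypothetical data and group names, of how one might audit a diagnostic model's false-negative rate per demographic subgroup; a large gap between groups is one concrete signal of the kind of bias discussed here.

```python
# Hypothetical audit records: (subgroup, true_label, model_prediction),
# where 1 = disease present / predicted, 0 = absent / not predicted.
predictions = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

def false_negative_rate(records, group):
    # Among true positives in this subgroup, the fraction the model missed.
    positives = [r for r in records if r[0] == group and r[1] == 1]
    misses = [r for r in positives if r[2] == 0]
    return len(misses) / len(positives)

for g in ("group_a", "group_b"):
    print(g, round(false_negative_rate(predictions, g), 2))
```

In this illustrative data, the model misses one of three true cases in group_a but two of three in group_b: the same overall accuracy can hide sharply unequal harm across subgroups, which is why disaggregated evaluation matters before clinical deployment.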

Second, AI may be called opaque because of the indeterminacy of its algorithms: no one can explain how or why the AI reached a given decision, as Judea Pearl notes in The Book of Why: The New Science of Cause and Effect. Doctors therefore cannot check the facts and confirm that the analysis was performed correctly and that the conclusion is accurate. The difficulty AI faces with real-world problems that humans solve without formal, mathematical rules of logic (e.g., natural language and face recognition) must also be recognized. AI makes the work of medical professionals easier; however, it poses many unresolved problems that can mislead doctors and contribute to wrong decisions.


About the authors

Michael Khomyakov

N.I. Pirogov Russian National Research Medical University

Corresponding author
Email: mehilaineen@gmail.com
ORCID ID: 0009-0005-4818-8270
Russia, Moscow



Copyright © Eco-Vector, 2023

This article is available under the Creative Commons Attribution–NonCommercial–NoDerivatives 4.0 International License.


