Show simple item record

dc.contributor.author: Fletcher, Richard Ribon
dc.contributor.author: Nakeshimana, Audace
dc.contributor.author: Olubeko, Olusubomi
dc.date: 2022-08-11T08:09:59.000
dc.date.accessioned: 2022-08-23T16:51:30Z
dc.date.available: 2022-08-23T16:51:30Z
dc.date.issued: 2021-04-15
dc.date.submitted: 2021-07-26
dc.identifier.citation: Fletcher RR, Nakeshimana A, Olubeko O. Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health. Front Artif Intell. 2021 Apr 15;3:561802. doi: 10.3389/frai.2020.561802. PMID: 33981989; PMCID: PMC8107824. Link to article on publisher's site: https://doi.org/10.3389/frai.2020.561802
dc.identifier.issn: 2624-8212 (Linking)
dc.identifier.doi: 10.3389/frai.2020.561802
dc.identifier.pmid: 33981989
dc.identifier.uri: http://hdl.handle.net/20.500.14038/41871
dc.description.abstract: In Low- and Middle-Income Countries (LMICs), machine learning (ML) and artificial intelligence (AI) offer attractive solutions to address the shortage of health care resources and improve the capacity of the local health care infrastructure. However, AI and ML should also be used cautiously, due to potential issues of fairness and algorithmic bias that may arise if not applied properly. Furthermore, populations in LMICs can be particularly vulnerable to bias and fairness in AI algorithms, due to a lack of technical capacity, existing social bias against minority groups, and a lack of legal protections. In order to address the need for better guidance within the context of global health, we describe three basic criteria (Appropriateness, Fairness, and Bias) that can be used to help evaluate the use of machine learning and AI systems: 1) APPROPRIATENESS is the process of deciding how the algorithm should be used in the local context, and properly matching the machine learning model to the target population; 2) BIAS is a systematic tendency in a model to favor one demographic group vs another, which can be mitigated but can lead to unfairness; and 3) FAIRNESS involves examining the impact on various demographic groups and choosing one of several mathematical definitions of group fairness that will adequately satisfy the desired set of legal, cultural, and ethical requirements. Finally, we illustrate how these principles can be applied using a case study of machine learning applied to the diagnosis and screening of pulmonary disease in Pune, India. We hope that these methods and principles can help guide researchers and organizations working in global health who are considering the use of machine learning and artificial intelligence.
dc.language.iso: en_US
dc.relation: Link to Article in PubMed: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&list_uids=33981989&dopt=Abstract
dc.rights: Copyright © 2021 Fletcher, Nakeshimana and Olubeko. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: appropriate use
dc.subject: artificial intelligence
dc.subject: bias
dc.subject: ethics
dc.subject: fairness
dc.subject: global health
dc.subject: machine learning
dc.subject: medicine
dc.subject: Artificial Intelligence and Robotics
dc.subject: Bioethics and Medical Ethics
dc.subject: Health Services Administration
dc.subject: International Public Health
dc.title: Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health
dc.type: Journal Article
dc.source.journaltitle: Frontiers in Artificial Intelligence
dc.source.volume: 3
dc.identifier.legacyfulltext: https://escholarship.umassmed.edu/cgi/viewcontent.cgi?article=5710&context=oapubs&unstamped=1
dc.identifier.legacycoverpage: https://escholarship.umassmed.edu/oapubs/4678
dc.identifier.contextkey: 24025215
refterms.dateFOA: 2022-08-23T16:51:30Z
dc.identifier.submissionpath: oapubs/4678
dc.contributor.department: Department of Psychiatry
dc.source.pages: 561802


Files in this item

Name: frai_03_561802.pdf
Size: 2.635 MB
Format: PDF
