Published PDF 6 ANND 119
Machine Learning, Ethics and Brain Death Concepts and Framework

Dabi A1* and Taylor AJ2

1Assistant Professor, Neurology and Neurosurgery, University of Texas Medical Branch, Texas, USA
2Ph.D. Student, Medical Humanities Graduate Program, Institute of Medical Humanities, Graduate School of Biomedical Sciences, University of Texas Medical Branch, Texas, USA
*Corresponding author: Dabi A, Assistant Professor, Neurology and Neurosurgery, Director, Neurosciences Critical Care Program, University of Texas Medical Branch, Galveston, Texas 77555, USA, Tel: 409-772-8053; E-mail: [email protected]
Received: September 30, 2020; Accepted: October 8, 2020; Published: October 16, 2020
Abstract
Use of Artificial Intelligence and Machine Learning-assisted clinical algorithms that help predict clinical outcomes and influence clinical decisions is rising. In the future, this may lead to ethical confusion when medical decisions influenced by their use lead to 'Death by Neurologic Criteria' (DNC), or 'Brain Death.' Therefore, appropriate steps need to be taken preemptively to resolve such conflicts before they lead to public mistrust of a technological advancement that has immense potential to improve the quality of healthcare while making it more affordable and efficient. This review describes the concept of Machine Learning-assisted clinical algorithms, the related ethical issues, and a framework in which they can be used in relation to DNC cases.
Keywords: Deep learning; Machine learning; Brain death; Death by neurologic criteria; Ethics
1. Introduction
Use of Artificial Intelligence (AI) algorithms has recently seen a steady increase in various aspects of daily life, including smartphone AI assistants like Siri, self-driving cars, and web mapping services. Not surprisingly, healthcare has also witnessed the use of Artificial Intelligence, and this use is expected to rise exponentially in the near future.
Artificial intelligence is defined as a branch of computer science that attempts to understand and build intelligent entities,
often instantiated as software programs. It includes categories like Machine Learning (ML), which is a branch of computer
science that uses algorithms to identify patterns in data. Deep Learning (DL) is a subfield of Machine Learning that employs artificial neural networks (NN) with many intervening layers to identify patterns in data [1]. Since its first mention at the Dartmouth conference in 1956, AI has come a long way, with an exponential increase in interest, research, and application in healthcare over the last several years, and an expected continued rise across many aspects of medicine [2].

Citation: Dabi A, Taylor AJ. Machine Learning, Ethics and Brain Death Concepts and Framework. Arch Neurol Neurol Disord. 2020;3(2):119. ©2020 Yumed Text. www.yumedtext.com | October-2020
Deep Learning has been found to be better at prognostication of patient outcomes than other types of AI programs. DL models have a framework of an input layer (called features) and an output layer (called labels), with intervening hidden layers (sometimes referred to as 'the black box' due to their inherent lack of operational transparency and causal insight) [3]. The number of these 'hidden' layers can range from a few to over a hundred. Each layer can involve thousands of connections, resulting in calculations involving millions to hundreds of millions of parameters. Technological advances have made such calculations possible and relatively affordable. Once trained on labeled data to identify input-output correlations, these algorithms can then be applied to new data. This format is called 'Supervised machine learning'. In 'Unsupervised machine learning', the algorithm tries to identify patterns in unlabeled data with the aim of finding sub-clusters within the original data, detecting outliers, or forming low-dimensional data representations [1,4] (FIG. 1).
FIG. 1. Nested relationship of Artificial Intelligence, Machine Learning, and Deep Learning.
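The two training paradigms described above can be sketched concretely. The short scikit-learn example below fits a supervised classifier on labeled synthetic data, then clusters the same inputs with the labels withheld; the data set, model choices, and parameters are illustrative assumptions, not anything from this review.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: labeled examples teach an input-output ("features" -> "labels") mapping.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
train_accuracy = clf.score(X, y)          # fraction correct on the training data

# Unsupervised: the same inputs with labels withheld; search for sub-clusters instead.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
cluster_ids = km.labels_                  # one cluster assignment per sample
```

The supervised model can then be applied to new data, while the clustering step illustrates how structure can be found without any outcome labels at all.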
2. Machine Learning Applications in Healthcare
Common applications include, but are not limited to: improved diagnostic modalities, better therapeutic interventions, creation of a smoother workflow in processing the enormous electronic health record (EHR) data, and more accurate clinical prognostication [6]. These tools can also be used to provide basic clinical triage in areas inaccessible to specialists due to geographic or political isolation [7]. Machine Learning-based apps trained to detect the mood and mental state of psychiatric patients by analyzing their speech or facial expressions can help in the early detection of potentially treatable psychiatric conditions and therefore in the prevention of dangerous outcomes such as suicide [8-11].
Deep Learning algorithms have been found to be very effective, even better than human diagnosis, in fields like radiology, ophthalmology, dermatology, and pathology, which all rely on image-based diagnosis, and in electroencephalogram interpretation, which requires pattern recognition similar to imaging. DL has also found use in fields where the enormous amount of data can be too overwhelming for human brains to process, for example, genome interpretation or prognostication using an entire hospital's Electronic Health Record (EHR), or even nationwide EHR data [12,13]. Similarly, the expected data surge generated from future personal wearable sensors and devices will only be manageable and interpretable with the use of Machine Learning algorithms [1,14].
3. Challenges of Machine Learning Use in Healthcare
Despite the many benefits of Machine Learning in healthcare, it presents a unique set of challenges that need to be overcome before its wider acceptance. As Machine Learning algorithms require large volumes of high-quality training data, the accuracy of this input data is vital. This problem can be mitigated by using sophisticated algorithms that can handle 'noisy' data sets without affecting the reliability and accuracy of the prediction models [1].
The 'black-box' format of Machine Learning algorithms makes them intuitively less trustworthy. Technology companies are actively researching ways to make Machine Learning 'explainable.' This concept, now called 'Explainable AI' (xAI), would give the hidden layers a human-comprehensible 'model' and 'interface' to better convey the underlying technique, its strengths, and its weaknesses (FIG. 2). This is expected to unveil 'third-wave AI systems' with applications in medicine, but also in defense, finance, security, and transportation [3,15].
FIG. 2. Schematic of a Deep Learning model: input layer, hidden layers, and output layer.
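The layered structure described above (features in, labels out, hidden layers in between) can be sketched as a toy forward pass. The layer sizes and randomly drawn weights below are purely hypothetical; a real model would learn its weights from training data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))                  # input layer: 8 "features" for one case

# Two hidden layers followed by a single-probability output (the "label").
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 1)), np.zeros(1)

h1 = np.maximum(0, x @ W1 + b1)              # hidden layer 1 (ReLU activation)
h2 = np.maximum(0, h1 @ W2 + b2)             # hidden layer 2 (ReLU activation)
p = 1 / (1 + np.exp(-(h2 @ W3 + b3)))        # sigmoid output: a value in (0, 1)
```

Even in this tiny sketch, the intermediate activations h1 and h2 have no direct clinical meaning, which is exactly the 'black box' opacity the text describes; production networks multiply this across dozens of layers and millions of parameters.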
Many ML algorithms generate results that may be too difficult for human interpretation. Their performance is mostly reported as an area under the receiver operating characteristic curve (AUC) rather than as a simple categorical output. Due to this, and to the basic architecture of these 'black-box' algorithms, it is difficult to compare and validate them in the traditional format of prospective randomized controlled trials. This may be a drawback for their validation and certification, their wider application, and the formulation of official guidelines related to their use, unless technology companies and healthcare authorities can find a practical solution to this problem. Moreover, ML models are by nature designed to improve continuously over time, so their certification will potentially need frequent updates. In 2018, the FDA (Food and Drug Administration) announced that it may prefer a 'pre-certified' approach for such software to address this problem [1].
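For readers unfamiliar with AUC, the minimal sketch below shows how a model's probabilistic outputs are summarized as a single area-under-the-ROC-curve number; the outcome and probability values are made up for illustration.

```python
from sklearn.metrics import roc_auc_score

# Hypothetical true outcomes (0 = survived, 1 = died) and model-predicted risks.
y_true = [0, 0, 1, 1, 0, 1]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]

# AUC = probability that a randomly chosen positive case is ranked above a
# randomly chosen negative one: 1.0 is perfect discrimination, 0.5 is chance.
auc = roc_auc_score(y_true, y_prob)
```

A single AUC figure summarizes ranking quality across all possible decision thresholds, which is useful for comparing models but, as the text notes, is far removed from the binary endpoints that traditional randomized trials are built around.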
AI-assisted calculation of clinical data needs to be integrated with other pertinent patient information (like patient preferences, values, social and cultural norms, faith and belief systems, social support structure, etc.) for full utilization of its potential. Better algorithm designs that can integrate all the relevant inputs can help overcome this obstacle [1].
Smooth integration of Machine Learning applications in medicine will also require adaptation on the part of clinicians and patients, given the expected change in person-to-person communication, the perceived extra workload from ML-generated alerts, the additional workup needed due to false-positive alarms, and concern about missed alerts or diagnoses due to false-negative ML algorithm results. Once these algorithms are proven to be on par with or superior to human clinical decision making, their acceptability will slowly improve. Certain subspecialties, for example imaging-based or pattern-recognition-based fields, are likely to adopt these algorithms much earlier than others. Remote areas with a lack of physicians are also likely to be early adopters [16].
In the current environment of significant public concern about consumer privacy, there is concern about the ownership of the enormous amount of healthcare data that will be generated, processed, and utilized to make meaningful use of Machine Learning applications. The infrastructure required and the cost of establishing and maintaining such complex computational systems will make them unaffordable for individuals or small healthcare setups. Most likely, the data will be owned by large technology companies, who may potentially decide to distribute and sell it to third parties for profit. Data hacks or leaks by rogue individuals or nations may lead to complex financial and socio-political situations. Strict policies about ownership, with robust data-protection strategies, can help boost public confidence in such systems.
Due to the excess cost involved, healthcare setups are at risk of further segregating along economic lines. Large institutions with big budgets may be able to afford these systems while smaller clinics and hospitals may lag. This could lead to restructuring of healthcare, with mergers and the collapse of smaller hospitals.
During the initial stages of their use, Machine Learning algorithms will incur extra cost, and insurance companies are likely to hesitate to bear this extra financial burden. Once these algorithms are established as part of the 'standard of care' (due to their status of being on par with or superior to an average clinician), insurance companies may then impede clinical decision making if the physician or the patient decides against what the ML algorithm suggests. They may refuse to pay in such situations, thus incurring huge costs for patients and hospitals.
If there is an unfavorable outcome when ML algorithm-assisted clinical decisions are made, the proportion of responsibility shared between the ML developer, the ML interpreter, the physician, and the patient will need to be determined. This can be dealt with preemptively by training ML-assisted clinical teams through mock clinical scenarios and practice runs, possibly even official certification courses (TABLE 1).
4. Death by Neurologic Criteria (Brain Death)
Brain Death, also known as 'Death by Neurologic Criteria' (DNC) or 'Neurologic Criteria for Death,' is defined as 'the clinical state that involves an apneic patient with irreversible coma and absent brainstem reflexes' [17]. It involves 'irreversible cessation of all functions of the entire brain, including the brainstem'. This definition is commonly used in the United States and many European countries, with emphasis on the 'irreversibility' of the coma, where the organism as a 'whole' cannot survive in the absence of artificial life support [18,19].
Physicians invariably, and not incorrectly, have doubts at some point during the decision-making process about the prognostication of 'Death by Neurologic Criteria'. This doubt is highest when the patient is initially admitted, due to the relative lack of clear baseline information and an incomplete understanding of the pathophysiology of the underlying lesion and comorbidities. The doubt also depends on the various diagnostic and therapeutic modalities and interventions available, and on their utility, risks, and expected benefits. Expected quality of life and the likely best and worst possible functional outcomes are also important. Physician confusion and doubt about prognostication may further arise from 'anecdotal' personal experience, personal faith and belief, personality traits, past mistakes, and the duration of clinical experience of the individual physician or the collective experience of the team involved. Known and implicit bias among physicians, and their emotional state on a given day, may also influence their opinion [20].
In developing nations, the cost of healthcare interventions to the family additionally and significantly affects decisions by family members. This makes it difficult to compare DNC data between developing and developed countries [21].
5. Machine Learning Algorithms and their Potential Role in Death by Neurologic Criteria Decisions
The aim for a physician is to improve the clinical outcome of a given patient and to prevent mortality, if possible. Therefore, there is always an attempt to predict the chances of mortality, so that the level and urgency of care provided can be escalated if needed. This also helps in efficient allocation of the limited healthcare resources available and in appropriately utilizing palliative care services when indicated.
Machine Learning models are likely to become better in the next few years at predicting patient outcomes, including the mortality of critically ill patients. In Neurocritical Care units in particular, this paradigm will thus amount to predicting the chances of 'death by neurologic criteria' (DNC). This has the potential to create a state of confusion, with ethical and legal dilemmas for clinicians and the public alike, when there is already an element of mistrust and misunderstanding about DNC. However, this can also be a great opportunity to improve healthcare if the legislature and the medical fraternity, especially neurointensivists, preemptively develop and apply appropriate policies regarding the use of AI, ML, and DL in patient care [22].
Most current ML software can predict the outcomes of patients admitted to the ICU (intensive care unit) much earlier and better than standard clinical outcome predictor scales [23,24]. This can help in more efficient allocation of resources, improving the quality of healthcare delivered to patients and reducing futile care. It may also help prepare the patient and/or family members much earlier for the likely outcome [25]. If warned in time, palliative care can be initiated for appropriate patients, sparing them unnecessary medical interventions that are uncomfortable at the least and most likely will not impact their long-term outcome in a positive manner [26-28].
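As a hedged sketch of the kind of outcome-prediction model discussed above, the following trains a simple logistic-regression mortality-risk model on synthetic 'vital sign' data. The features, coefficients, and outcomes are entirely fabricated for illustration and do not represent any validated clinical algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 500
# Hypothetical features: heart rate, mean arterial pressure, SpO2, age.
X = np.column_stack([
    rng.normal(90, 15, n),    # heart rate (bpm)
    rng.normal(75, 12, n),    # mean arterial pressure (mmHg)
    rng.normal(95, 3, n),     # SpO2 (%)
    rng.normal(65, 10, n),    # age (years)
])
# Synthetic outcome loosely tied to the features, for illustration only.
logit = 0.03 * (X[:, 0] - 90) - 0.04 * (X[:, 1] - 75) + 0.05 * (X[:, 3] - 65)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]          # per-patient risk estimate
auc = roc_auc_score(y, risk)                 # discrimination on training data
```

A real ICU model would use far richer inputs (labs, trends, EHR history), be evaluated on held-out and external data, and still output exactly this kind of per-patient probability, which is where the prognostic and ethical questions in this section begin.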
Most patients would like to die in their own homes given a choice, yet in the end almost all die in an institution. A significant proportion of terminal cases may progress to what is called a 'state worse than death'. This is usually described by patients or their families as a state of severe disability, with associated social isolation, incontinence, dementia, chronic intractable pain, total dependence for daily care, and a need for advanced technological life support. Most such patients would choose death over such an outcome [29]. Machine Learning algorithms can predict quality-of-life outcomes much better than traditional clinical scoring systems and may thus help prevent the discomfort these patients would otherwise have to endure [30].
Early prediction of the likelihood of Death by Neurologic Criteria in donor candidates is vital for coordination with organ transplant teams. With uniform, national ML-assisted healthcare software, physician teams would be able to match prospective donor candidates with patients in need much earlier, if the latter's clinical and genomic data were already uploaded to the healthcare data cloud.
Donor candidates for 'Donation after Cardiac Death' (DCD) are a clinical challenge for many intensivists due to the medico-legal parameters regarding the expected time of cardiac arrest after withdrawal of care [31]. With the help of better prognostic ML algorithms, it may be possible to have much better control over the selection of donor candidates and to appropriately time the withdrawal of active care to improve the yield and outcome of the organs harvested [32,33].
6. Media Portrayal and Public Education
The media portrayal of DNC and of the use of ML-assisted algorithms can play a huge role in shaping public opinion on this vital issue [34,35]. Based on popular television series in the past, public perception of post-cardiac-arrest resuscitation was found to be overoptimistic, leading to unrealistic patient and family expectations about post-resuscitation clinical outcomes. Similar confusion regarding ML-assisted algorithms and their use in DNC prediction is avoidable. Preemptive formal legislative policy formulation, with stress on correct media depiction of this sensitive issue, can be very helpful in educating the public. Media institutions that claim the need to exercise their 'right of creative freedom' may be allowed to do so provided they display a disclaimer certifying that their version of Death by Neurologic Criteria is fictional.
If a high-profile DNC case and associated media controversy are ongoing, it would be helpful for professional medical societies to reach out to the public through different forums (including newspapers, news channels, online and social media outlets, etc.) to provide clear, reliable, detailed, medically and legally accurate information in order to educate the masses. This will help clarify doubts and improve public trust and confidence [36,37]. The similar use of these outlets for public education during the current COVID-19 pandemic is a great example of their appropriate use.
7. Conclusions
Death by Neurologic Criteria remains a sensitive issue with potential for significant public and media controversy if mishandled. The use of Machine Learning-assisted clinical algorithms in the near future is likely to increase the chances of triggering controversy if their use leads to a change in medical decisions for a patient who then has a poor outcome. The ethical issues thus arising are likely to have complex social, cultural, religious, and financial implications for all involved. Anticipating and preparing for such possibilities, with formulation of official policies by medical professional societies along with an active public education campaign, can help in the full utilization of Machine Learning clinical algorithms.
REFERENCES
1. Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018;2:719-31.
2. Kononenko I. Machine learning for medical diagnosis: history, state of the art and perspective. Artif Intell Med. 2001;23(1):89-109.
3. London AJ. Artificial Intelligence and Black Box Medical Decisions: Accountability and Explainability. Hastings
Cent Rep. 2019;49(1):15-21.
4. LeCun Y, Bengio Y, Hinton G. Deep Learning. Nature. 2015;521(7533):436-44.
5. Visvikis D, Rest CCL, Jaouen V, et al. Artificial Intelligence, Machine (deep) learning and radio(geno)mics:
definitions and nuclear medicine applications. Eur J Nuc Med Mol Imaging. 2019;46(13):2630-7.
6. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. 2019;380(14):1347-58.
7. Kwon JM, Lee Y, Lee S, et al. Validation of deep-learning-based triage and acuity score using a large national
dataset. PLoS One. 2018;13(10):e0205836.
8. Chandler C, Foltz PW, Elvevag B. Using machine learning in psychiatry: the need to establish a framework that nurtures trustworthiness. Schizophr Bull. 2020;46(1):11-4.
9. Kulkarni S, Reddy N, Hariharan S. Facial Expression (Mood) recognition from facial images using Committee
Neural Networks. Biomed Eng Online. 2009; 8:16.
10. Stark H. Artificial Intelligence is here and it wants to revolutionize Psychiatry, Forbes, 30 October 2017.
11. Lewis T. AI can read your emotions. Should it? The Observer, 17 August 2019.
12. Cheon A, Kim J, Lim J. Use of Deep Learning to predict Stroke patient mortality. Int J Environ Res Public Health.
2019;16(11):1876.
13. Rajkomar A, Oren E, Dai AM, et al. Scalable and accurate deep learning with electronic health records. NPJ Digit
Med. 2018;1:18.
14. Schmidhuber J. Deep Learning in neural networks: an overview. Neural Netw. 2015;61:85-117.
15. Turek M. Defense Advanced Research Projects Agency, 2020. [Online]. Available:
https://www.darpa.mil/program/explainable-artificial-intelligence
16. Chandler C, Foltz PW, Elvevag B. Using Machine Learning in Psychiatry: the need to establish a Framework that
nurtures trustworthiness. Schizophr Bull. 2019;46:11-4.
17. Wijdicks E. Determining brain death. Continuum. 2015;21(5):1411-24.
18. Machado C, Korein J, Ferrer Y, et al. The Declaration of Sydney on human death. J Med Ethics. 2007;33(12):699-703.
19. Machado C, Korein J, Ferrer Y, et al. The concept of brain death did not evolve to benefit organ transplants. J Med
Ethics. 2007;33(4):197-200.
20. Robertson A, Helseth E, Laake J, et al. Neurocritical care physicians' doubt about whether to withdraw life-sustaining treatment the first days after devastating brain injury: an interview study. Scand J Trauma Resusc Emerg Med. 2019;27:81.
21. Russell J, Epstein L, Greer D, et al. Brain death, the determination of brain death, and member guidance for brain death accommodation requests. Neurology. 2019;92:228-32.
22. Hassanzadeh H, Sha Y, Wang M. DeepDeath: learning to predict the underlying cause of death with Big Data. Conf Proc IEEE Eng Med Biol Soc. 2017:3373-6.
23. Shickel B, Loftus T, Adhikari L, et al. DeepSOFA: a continuous acuity score for critically ill patients using clinically interpretable deep learning. Sci Rep. 2019;9(1):1879.
24. Schinkel M, Paranjape K, Pandey RN, et al. Clinical application of artificial intelligence in Sepsis: a narrative
review. Comput Biol Med. 2019;115:103488.
25. Barton C, Chettipally U, Zhou Y, et al. Evaluation of a machine learning algorithm for up to 48-hour advance prediction of sepsis using six vital signs. Comput Biol Med. 2019;109:79-84.
26. Avati A, Jung K, Harman S, et al. Improving Palliative Care with Deep Learning. BMC Med Inform Decis Mak.
2018;18(4):122.
27. Caicedo-Torres W, Gutierrez J. ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU.
J Biomed Inform. 2019;98:103269.
28. Meyer A, Zverinski D, Pfahringer B, et al. Machine learning for real-time prediction of complications in critical care: a retrospective study. Lancet Respir Med. 2018;6(12):905-14.
29. Hillman K, Athan F, Forero R. States worse than death. Curr Opin Crit Care. 2018;24(5):415-20.
30. Pratt A, Chang J, Sederstrom N. A fate worse than death: Prognostication of devastating brain injury. Crit Care Med.
2019;47(4):591-8.
31. Smith M, Dominguez-Gil B, Greer D. Organ donation after circulatory death: current status and future potential.
Intensive Care Med. 2019;45(3):310-21.
32. Reich D, Mulligan D, Abt P, et al. ASTS recommended practice guidelines for controlled donation after cardiac death organ procurement and transplantation. Am J Transplant. 2009;9(9):2004-11.
33. Louis ES, Sharp R. Ethical Aspects of organ donation after Circulatory Death. Continuum. 2015;21(5):1445-50.
34. Lewis A, Weaver J, Caplan A. Portrayal of Brain Death in film and television. Am J Transplant. 2017;17(3):761-9.
35. Johnson L. Death by neurologic criteria: expert definitions and lay misgivings. QJM: Int J Med. 2017;110(5):267-70.
36. Lewis A, Caplan A. Brain Death in the media. Transplantation. 2016;100(5):e24.
37. Smith M, Dominguez-Gil B, Greer D, et al. Organ Donation after circulatory death: current status and future
potential. Intensive Care Med. 2019;45(3):310-21.