Discriminative Domain Adaptation Network for Fine-grained Disease Severity Classification

S. Wen, Y. Chen, S. Guo, Y. Ma, Y. Gu, P. Chan
2023 International Joint Conference on Neural Networks (IJCNN), 2023. ieeexplore.ieee.org
Unsupervised Domain Adaptation (UDA) has shown promise in improving medical diagnosis on an unlabeled target domain by exploiting rich labels on the source domain. In real medical scenarios, however, fine-grained classification of the disease is crucial to support physicians in making accurate diagnoses and treatment plans. Unfortunately, accurately labeling medical data at a fine-grained level is challenging because of the diversity of patients and the variety of diseases, so the given label can differ from the true label; we refer to this discrepancy as label bias. Existing UDA methods assume that the given source-domain label is the true label, so in Fine-grained Unsupervised Domain Adaptation (FUDA) they readily transfer this biased knowledge to the target domain, which degrades performance on the target domain. We find that the key factor in FUDA is the sample with a large label bias (bias-sample), which lies near the decision boundary between adjacent fine-grained classes. To address this, we propose the Discriminative domain adaptation Network for Fine-grained classification (DNF) with Discriminative Cross Entropy (DCE) and Discriminative Local multi-kernel Maximum Mean Discrepancy (DLMMD). DNF employs two classifiers with different parameters to discriminate bias-samples and reduce their weight in classification, and uses the classifiers' outputs to shift the per-class expectation implied by the given labels toward the expectation of the true labels. DNF therefore transfers non-biased knowledge from the source domain to the target domain. Experiments on Hand Tremor (HT) and Gait Freezing (GF) show that our approach outperforms state-of-the-art methods.
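The abstract does not give the DCE or DLMMD formulas, so the PyTorch-style sketch below only illustrates the general idea it describes: two classifier heads score each source sample, low agreement on the given label flags a likely bias-sample whose cross-entropy contribution is down-weighted, and a standard multi-kernel (Gaussian) MMD aligns source and target features. The weighting rule, kernel bandwidths, and function names here are illustrative assumptions, not the paper's definitions of DCE or DLMMD.

```python
import torch
import torch.nn.functional as F


def gaussian_mk_mmd(feat_s, feat_t, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Generic multi-kernel (Gaussian) MMD between source and target features.

    This is the standard estimator, not the paper's class-local DLMMD;
    the bandwidths in `sigmas` are assumed for illustration.
    """
    x = torch.cat([feat_s, feat_t], dim=0)
    d2 = torch.cdist(x, x) ** 2                      # pairwise squared distances
    k = sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas) / len(sigmas)
    n_s = feat_s.size(0)
    k_ss, k_tt, k_st = k[:n_s, :n_s], k[n_s:, n_s:], k[:n_s, n_s:]
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()


def weighted_source_loss(logits_a, logits_b, labels):
    """Down-weight suspected bias-samples using the agreement of two classifier heads.

    The weight (product of both heads' probabilities on the given label) is an
    assumed stand-in for the paper's DCE weighting: it is small near decision
    boundaries between adjacent fine-grained classes, where bias-samples sit.
    """
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    agree = (p_a.gather(1, labels[:, None]) * p_b.gather(1, labels[:, None])).squeeze(1)
    w = agree.detach()                               # weights do not receive gradients
    ce_a = F.cross_entropy(logits_a, labels, reduction="none")
    ce_b = F.cross_entropy(logits_b, labels, reduction="none")
    return (w * (ce_a + ce_b)).mean()
```

Under these assumptions, a training step would combine the two terms, e.g. `loss = weighted_source_loss(head_a(f_s), head_b(f_s), y_s) + lam * gaussian_mk_mmd(f_s, f_t)`, where `f_s`, `f_t` are backbone features and `lam` is a trade-off hyperparameter.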