Abstract
Classification and segmentation are crucial in medical image analysis because they enable accurate diagnosis and disease monitoring. However, existing joint methods tend to focus on mutually learned features and shared model parameters while neglecting the reliability of the resulting features and predictions. In this paper, we propose a novel Uncertainty-informed Mutual Learning (UML) framework for reliable and interpretable medical image analysis. UML introduces reliability into joint classification and segmentation, leveraging uncertainty-guided mutual learning to improve performance. To achieve this, we first use evidential deep learning to provide image-level and pixel-wise confidences. We then construct an uncertainty navigator to better exploit the mutually learned features and generate segmentation results, and propose an uncertainty instructor to screen reliable masks for classification. Overall, UML produces confidence estimates for both the features and the predictions of each task (classification and segmentation). Experiments on public datasets demonstrate that UML outperforms existing methods in terms of both accuracy and robustness. UML has the potential to advance the development of more reliable and explainable medical image analysis models.
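The abstract states that image-level and pixel-wise confidences are obtained via evidential deep learning. The snippet below is a minimal sketch of the standard subjective-logic formulation of that idea, not the authors' released implementation: the helper name `edl_confidence`, the softplus evidence mapping, and the tensor shapes are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of evidential confidence estimation:
# logits -> non-negative evidence -> Dirichlet parameters -> belief and uncertainty.
import torch
import torch.nn.functional as F

def edl_confidence(logits: torch.Tensor, class_dim: int = 1):
    """Return per-class belief and an uncertainty mass in [0, 1].

    logits: (B, K) for image-level classification or (B, K, H, W) for
    pixel-wise segmentation, with K classes along `class_dim`.
    """
    evidence = F.softplus(logits)                       # e_k >= 0
    alpha = evidence + 1.0                              # Dirichlet parameters alpha_k = e_k + 1
    strength = alpha.sum(dim=class_dim, keepdim=True)   # Dirichlet strength S = sum_k alpha_k
    belief = evidence / strength                        # belief mass b_k = e_k / S
    num_classes = logits.shape[class_dim]
    uncertainty = num_classes / strength                # u = K / S, large when evidence is scarce
    return belief, uncertainty

# Image-level confidence for a 2-class classification head
b_img, u_img = edl_confidence(torch.randn(4, 2))          # u_img: (4, 1)

# Pixel-wise confidence for a 2-class segmentation head
b_pix, u_pix = edl_confidence(torch.randn(4, 2, 64, 64))  # u_pix: (4, 1, 64, 64)
```

Such per-image and per-pixel uncertainty maps are the kind of signal the proposed navigator and instructor modules could consume to weight mutual features and screen reliable masks.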
K. Ren and K. Zou contributed equally to this work.
Notes
1. Our code has been released at https://github.com/KarryRen/UML.
Acknowledgements
This work was supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No. AISG2-TC-2021-003), A*STAR AME Programmatic Funding Scheme under Project A20H4b0141, the A*STAR Central Research Fund, the Science and Technology Department of Sichuan Province (Grant Nos. 2022YFS0071 and 2023YFG0273), and the China Scholarship Council (No. 202206240082).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Ren, K. et al. (2023). Uncertainty-Informed Mutual Learning for Joint Medical Image Classification and Segmentation. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14223. Springer, Cham. https://doi.org/10.1007/978-3-031-43901-8_4
DOI: https://doi.org/10.1007/978-3-031-43901-8_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43900-1
Online ISBN: 978-3-031-43901-8
eBook Packages: Computer Science, Computer Science (R0)