

Microscopy analysis neural network to solve detection, enumeration and segmentation from image-level annotations

A preprint version of the article is available at bioRxiv.

Abstract

The development of deep learning approaches to detect, segment or classify structures of interest has transformed the field of quantitative microscopy. High-throughput quantitative image analysis remains challenging owing to the complexity of the image content and the difficulty of obtaining precisely annotated datasets. Methods that reduce the annotation burden associated with training a deep neural network on microscopy images are therefore essential. Here we introduce a weakly supervised MICRoscopy Analysis neural network (MICRA-Net) that can be trained on a simple main classification task using image-level annotations to solve multiple more complex tasks, such as semantic segmentation. MICRA-Net relies on the latent information embedded within a trained model to achieve performance similar to that of established architectures when no precisely annotated dataset is available. This learnt information is extracted from the network using gradient class activation maps, which are combined to generate detailed feature maps of the biological structures of interest. We demonstrate how MICRA-Net substantially alleviates the expert annotation process on various microscopy datasets and can be used for high-throughput quantitative analysis of microscopy images.
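
The gradient class activation maps mentioned above follow the Grad-CAM formulation of ref. 28. Below is a minimal PyTorch sketch of extracting such a map for a single class from a single layer; `model` and `layer` are hypothetical placeholders, and MICRA-Net itself combines maps from several layers rather than using one, so this is an illustration of the technique, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, layer, image, class_idx):
    """Grad-CAM heatmap for one class from one layer (illustrative sketch)."""
    activations, gradients = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    logits = model(image)               # forward pass, shape (1, n_classes)
    model.zero_grad()
    logits[0, class_idx].backward()     # backpropagate the class score

    h1.remove(); h2.remove()
    act, grad = activations[0], gradients[0]              # both (1, C, H, W)
    weights = grad.mean(dim=(2, 3), keepdim=True)         # pooled gradients
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)     # normalised to [0, 1]
```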

Fig. 1: Various supervision levels can be employed for training a DL model to segment structures of interest in microscopy images.
Fig. 2: MICRA-Net architecture and experimental results on the modified MNIST dataset.
Fig. 3: Semantic segmentation of F-actin nanostructures observed on super-resolution microscopy images.
Fig. 4: Semantic instance segmentation on five selected cell lines of the CTC dataset.
Fig. 5: Detection of Giemsa-stained red blood cells from two different datasets of brightfield microscopy images from ref. 38.
Fig. 6: MICRA-Net is used as a tool to assist experts in the detection of sparse axon DAB markers in large SEM images of ultrathin mouse brain sections.

Data availability

The MNIST, Cell Tracking Challenge and P. vivax datasets are all publicly available online. The F-actin and EM datasets are available at https://s3.valeria.science/flclab-micranet/index.html.

Code availability

Open source code for the MICRA-Net approach is available at https://github.com/FLClab/MICRA-Net and https://doi.org/10.5281/zenodo.5949132.

References

  1. Schermelleh, L. et al. Super-resolution microscopy demystified. Nat. Cell Biol. 21, 72–84 (2019).

  2. Lavoie-Cardinal, F. et al. Neuronal activity remodels the F-actin based submembrane lattice in dendrites but not axons of hippocampal neurons. Sci. Rep. 10, 11960 (2020).

  3. Schlegl, T., Seeböck, P., Waldstein, S. M., Langs, G. & Schmidt-Erfurth, U. f-AnoGAN: fast unsupervised anomaly detection with generative adversarial networks. Med. Image Anal. 54, 30–44 (2019).

  4. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

  5. Caicedo, J. C. et al. Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl. Nat. Methods 16, 1247–1253 (2019).

  6. Moen, E. et al. Deep learning for cellular image analysis. Nat. Methods 16, 1233–1246 (2019).

  7. Ulman, V. et al. An objective comparison of cell-tracking algorithms. Nat. Methods 14, 1141–1152 (2017).

  8. Falk, T. et al. U-Net: deep learning for cell counting, detection and morphometry. Nat. Methods 16, 67–70 (2019).

  9. He, K., Gkioxari, G., Dollár, P. & Girshick, R. Mask R-CNN. In Proc. IEEE International Conference on Computer Vision 2961–2969 (IEEE, 2017).

  10. Kromp, F. et al. An annotated fluorescence image dataset for training nuclear segmentation methods. Sci. Data 7, 262 (2020).

  11. Stringer, C., Wang, T., Michaelos, M. & Pachitariu, M. Cellpose: a generalist algorithm for cellular segmentation. Nat. Methods 18, 100–106 (2021).

  12. Cheplygina, V., de Bruijne, M. & Pluim, J. P. W. Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med. Image Anal. 54, 280–296 (2019).

  13. Papandreou, G., Chen, L.-C., Murphy, K. P. & Yuille, A. L. Weakly-and semi-supervised learning of a deep convolutional network for semantic image segmentation. In Proc. IEEE International Conference on Computer Vision 1742–1750 (IEEE, 2015).

  14. Khoreva, A., Benenson, R., Hosang, J., Hein, M. & Schiele, B. Simple does it: weakly supervised instance and semantic segmentation. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 876–885 (IEEE, 2017).

  15. Xu, J., Schwing, A. G. & Urtasun, R. Tell me what you see and I will show you where it is. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 3190–3197 (IEEE, 2014).

  16. Pesce, E. et al. Learning to detect chest radiographs containing pulmonary lesions using visual attention networks. Med. Image Anal. 53, 26–38 (2019).

  17. Rajchl, M. et al. DeepCut: object segmentation from bounding box annotations using convolutional neural networks. IEEE Trans. Med. Imaging 36, 674–683 (2016).

  18. Yang, L. et al. BoxNet: deep learning based biomedical image segmentation using boxes only annotation. Preprint at https://arxiv.org/abs/1806.00593 (2018).

  19. Lin, T.-Y. et al. Microsoft COCO: common objects in context. In Proc. Computer Vision—ECCV 2014. Lecture Notes in Computer Science Vol. 8693 (eds Fleet, D. et al.) 740–755 (Springer, 2014).

  20. Vezhnevets, A., Ferrari, V. & Buhmann, J. M. Weakly supervised structured output learning for semantic segmentation. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 845–852 (IEEE, 2012).

  21. Dubost, F. et al. Weakly supervised object detection with 2D and 3D regression neural networks. Med. Image Anal. 65, 101767 (2020).

  22. Li, J. et al. An EM-based semi-supervised deep learning approach for semantic segmentation of histopathological images from radical prostatectomies. Comput. Med. Imaging Graph. 69, 125–133 (2018).

  23. Kraus, O. Z., Ba, J. L. & Frey, B. J. Classifying and segmenting microscopy images with deep multiple instance learning. Bioinformatics 32, i52–i59 (2016).

  24. Ouyang, W. et al. Analysis of the Human Protein Atlas Image Classification competition. Nat. Methods 16, 1254–1261 (2019).

  25. Long, R. K. M. et al. Super resolution microscopy and deep learning identify Zika virus reorganization of the endoplasmic reticulum. Sci. Rep. 10, 20937 (2020).

  26. Chatterjee, B. & Poullis, C. Semantic segmentation from remote sensor data and the exploitation of latent learning for classification of auxiliary tasks. Preprint at https://arxiv.org/abs/1912.09216 (2019).

  27. Caicedo, J. C. et al. Evaluation of deep learning strategies for nucleus segmentation in fluorescence images. Cytometry A 95, 952–965 (2019).

  28. Selvaraju, R. R. et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proc. IEEE International Conference on Computer Vision 618–626 (IEEE, 2017).

  29. Berg, S. et al. ilastik: interactive machine learning for (bio)image analysis. Nat. Methods 16, 1226–1232 (2019).

  30. LeCun, Y. et al. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).

  31. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention (eds Navab, N. et al.) 234–241 (Springer, 2015).

  32. Xu, K., Zhong, G. & Zhuang, X. Actin, spectrin and associated proteins form a periodic cytoskeletal structure in axons. Science 339, 452–456 (2013).

  33. Ljosa, V., Sokolnicki, K. L. & Carpenter, A. E. Annotated high-throughput microscopy image sets for validation. Nat. Methods 9, 637 (2012).

  34. Kromp, F. et al. Evaluation of deep learning architectures for complex immunofluorescence nuclear image segmentation. IEEE Trans. Med. Imaging 40, 1934–1949 (2021).

  35. Graham, S. et al. Hover-Net: simultaneous segmentation and classification of nuclei in multi-tissue histology images. Med. Image Anal. 58, 101563 (2019).

  36. Belthangady, C. & Royer, L. A. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat. Methods 16, 1215–1225 (2019).

  37. Weigert, M. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097 (2018).

  38. Hung, J. & Carpenter, A. Applying faster R-CNN for object detection on malaria images. In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops 56–61 (IEEE, 2017).

  39. Hung, J. et al. Keras R-CNN: library for cell detection in biological images using deep neural networks. BMC Bioinformatics 21, 300 (2020).

  40. Depto, D. S. et al. Automatic segmentation of blood cells from microscopic slides: a comparative analysis. Tissue Cell 73, 101653 (2021).

  41. Lam, S. S. et al. Directed evolution of APEX2 for electron microscopy and proximity labeling. Nat. Methods 12, 51–54 (2015).

  42. Bekker, J. & Davis, J. Learning from positive and unlabeled data: a survey. Mach. Learn. 109, 719–760 (2020).

  43. Kreshuk, A., Koethe, U., Pax, E., Bock, D. D. & Hamprecht, F. A. Automated detection of synapses in serial section transmission electron microscopy image stacks. PLoS ONE 9, e87351 (2014).

  44. Jagadeesh, V. et al. Synapse classification and localization in electron micrographs. Pattern Recognit. Lett. 43, 17–24 (2014).

  45. Gómez-de-Mariscal, E. et al. Deep-learning-based segmentation of small extracellular vesicles in transmission electron microscopy images. Sci. Rep. 9, 13211 (2019).

  46. Christiansen, E. M. et al. In silico labeling: predicting fluorescent labels in unlabeled images. Cell 173, 792–803 (2018).

  47. Caruana, R. Multitask learning. Mach. Learn. 28, 41–75 (1997).

  48. Girshick, R. Fast R-CNN. In Proc. IEEE International Conference on Computer Vision 1440–1448 (IEEE, 2015).

  49. Ruder, S. An overview of multi-task learning in deep neural networks. Preprint at https://arxiv.org/abs/1706.05098 (2017).

  50. Mathis, A. et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 21, 1281–1289 (2018).

  51. He, K., Girshick, R. & Dollár, P. Rethinking ImageNet pre-training. In Proc. IEEE International Conference on Computer Vision 4918–4927 (IEEE, 2019).

  52. Raghu, M., Zhang, C., Kleinberg, J. & Bengio, S. Transfusion: understanding transfer learning for medical imaging. In Advances in Neural Information Processing Systems 32 (eds Wallach, H. et al.) 3347–3357 (Curran Associates, 2019).

  53. Mazzara, G. P., Velthuizen, R. P., Pearlman, J. L., Greenberg, H. M. & Wagner, H. Brain tumor target volume determination for radiation treatment planning through automated MRI segmentation. Int. J. Radiat. Oncol. Biol. Phys. 59, 300–312 (2004).

  54. Eliceiri, K. W. et al. Biological imaging software tools. Nat. Methods 9, 697–710 (2012).

  55. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 24, 417–441 (1933).

  56. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 32, 8024–8035 (2019).

  57. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In 3rd International Conference on Learning Representations (eds Bengio, Y. & LeCun, Y.) (ICLR, 2015).

  58. Cook, R. L. Stochastic sampling in computer graphics. ACM Trans. Graph. 5, 51–72 (1986).

  59. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979).

  60. Van der Walt, S. et al. scikit-image: image processing in Python. PeerJ 2, e453 (2014).

  61. Kuhn, H. W. The Hungarian method for the assignment problem. Naval Res. Logistics Q. 2, 83–97 (1955).

  62. Yeghiazaryan, V. & Voiculescu, I. D. Family of boundary overlap metrics for the evaluation of medical image segmentation. J. Med. Imaging 5, 015006 (2018).

  63. Scott, M. M. et al. A genetic approach to access serotonin neurons for in vivo and in vitro studies. Proc. Natl Acad. Sci. USA 102, 16472–16477 (2005).

  64. Good, P. I. Resampling Methods 3rd edn (Birkhäuser, 2006).

Acknowledgements

We acknowledge the following: L. Emond for F-actin sample preparation and immunocytochemistry; F. Nault, C. Salesse and L. Emond for the neuronal cell culture; J. Marek and R. Bernatchez for the development of a custom Python annotation application; T. Dhellemmes for inter-expert axon DAB annotations in EM images; C. Gagné and M.-A. Gardner for preliminary discussions on semantic segmentation; A. Schwerdtfeger and A. Gabela for careful proofreading of the manuscript. Funding was provided by grants from the Natural Sciences and Engineering Research Council of Canada (RGPIN-2017-06171, P.D.K.; RGPIN-2018-06264, M.P.; RGPIN-2019-06704, F.L.-C.), Canadian Institutes of Health Research (153107, P.D.K.; 470155, M.P.), NeuroNex Initiative (Fonds de Recherche du Québec—Santé; 295823, P.D.K. and F.L.-C.), CERVO Brain Research Center Foundation (F.L.-C.) and the Canadian Foundation for Innovation (32786, P.D.K.; 39088, F.L.-C.). F.L.-C. is a Canada Research Chair Tier II (CRC-2019-00126, F.L.-C.), A.D. is a CIFAR AI Chair, and A.B. is supported by a PhD scholarship from the Fonds de Recherche du Québec—Nature et Technologie (FRQNT) and an excellence scholarship from the FRQNT strategic cluster UNIQUE.

Author information

Authors and Affiliations

Authors

Contributions

A.B. and F.L.-C. designed the approach. A.B. implemented the neural network architectures, generated the modified MNIST dataset, created the annotation application for the user study and performed all deep learning experiments. A.B., A.D. and F.L.-C. analysed the results. F.L.-C. acquired and annotated the F-actin dataset. C.V.L.D. and M.P. generated and provided the annotated EM dataset. F.L.-C., A.D. and P.D.K. supervised the project. F.L.-C., A.D. and A.B. wrote the manuscript.

Corresponding author

Correspondence to Flavie Lavoie-Cardinal.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Alexander Krull and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Graphical user interface (GUI) developed to facilitate the visualisation of the extracted local maps from MICRA-Net.

Graphical user interface (GUI) developed to facilitate the visualisation of the extracted local maps from MICRA-Net. Detailed instructions for using the application can be found in the GitHub repository (https://github.com/FLClab/MICRA-Net). Briefly, one can load a trained MICRA-Net model and an image to predict the presence of a specific structure. The GUI shows the extracted local maps L1−8 to the user for each activated class of the selected image. The user can select the desired local maps, which are combined into a detailed feature map that can be thresholded to generate the final segmentation mask.
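
As a rough sketch of the combine-and-threshold step described in this caption, the snippet below averages a set of selected local maps and applies Otsu thresholding (refs 59 and 60); the averaging rule and the function names are illustrative assumptions, not the actual GUI code.

```python
import numpy as np
from skimage.filters import threshold_otsu

def combine_local_maps(local_maps):
    """Average user-selected local maps into one detailed feature map.

    `local_maps` is assumed to be a list of 2D arrays normalised to [0, 1];
    averaging is one plausible combination rule, used here for illustration.
    """
    return np.mean(np.stack(local_maps, axis=0), axis=0)

def segment(detailed_map):
    """Threshold the detailed feature map into a binary segmentation mask."""
    return detailed_map > threshold_otsu(detailed_map)
```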

Extended Data Fig. 2 Representative images of F-actin semantic segmentation on dendrites for both structures.

Representative images of F-actin semantic segmentation on dendrites for both structures (fibers and periodical lattice [rings]). From left to right, the fine segmentation from the Expert, MICRA-Net, weakly supervised U-Net, weakly supervised Mask R-CNN and ilastik are shown. The color code maps true positive (TP, green), false positive (FP, yellow) and false negative (FN, red) segmentation for each method compared with the fine Expert labels. A red arrow indicates a region in the periodical lattice image missed by the Expert. Scale bars, 1 μm.
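
The TP/FP/FN color code in this figure corresponds to a pixel-wise comparison between each binary prediction and the expert labels; a minimal sketch of that comparison is given below (array names are illustrative).

```python
import numpy as np

def error_map(pred, expert):
    """Pixel-wise masks behind the color code: TP (green), FP (yellow), FN (red)."""
    pred, expert = pred.astype(bool), expert.astype(bool)
    tp = pred & expert      # predicted and annotated
    fp = pred & ~expert     # predicted but not annotated
    fn = ~pred & expert     # annotated but missed
    return tp, fp, fn
```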

Extended Data Fig. 3 Representative examples of the instance segmentation procedure using MICRA-Net for two cell lines of the Cell Tracking Challenge.

Representative examples of the instance segmentation procedure using MICRA-Net for two cell lines of the Cell Tracking Challenge (top: PhC-C2DL-PSC, bottom: Fluo-N2DL-HeLa). Shown are the input image (left), the PCA decomposition of the raw feature maps extracted from layers L1−7 of MICRA-Net for the cell prediction (middle) and the Grad-CAM of layer L8 for semantic contact (right). Scale bars: 25 μm.
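
A minimal sketch of the PCA decomposition step (PCA per ref. 55), assuming the raw feature maps are stacked as a (C, H, W) array; that layout and the choice of three components (for an RGB-like display) are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_decompose(feature_maps, n_components=3):
    """Project per-pixel feature vectors onto their first principal components."""
    c, h, w = feature_maps.shape
    pixels = feature_maps.reshape(c, -1).T          # (H*W, C) feature vectors
    comps = PCA(n_components=n_components).fit_transform(pixels)
    comps -= comps.min(axis=0)                      # rescale each component
    comps /= comps.max(axis=0) + 1e-8               # to [0, 1] for display
    return comps.T.reshape(n_components, h, w)
```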

Extended Data Fig. 4 Schematic of the training and fine-tuning procedure for MICRA-Net on the P. vivax dataset.

Schematic of the training and fine-tuning procedure for MICRA-Net on the P. vivax dataset. a) Data preparation: an 80/20 split of the provided training set is used for training and validation, respectively; the testing set is kept as is. b) Fine-tuning of MICRA-Net: a uniform sample of {12, 24, 36} images is drawn from the testing set. A 3-fold scheme is used, training on two folds and validating on the remaining fold to enable early stopping. The 3-fold scheme was also used to determine the total number of epochs for training each model and to set the detection thresholds. All methods were tested on the same testing set of 84 images. c) Training: 5 different models were trained on the original dataset (Naive). For fine-tuning, the 3-fold scheme was repeated 5 times, once for each of the 5 Naive models as starting points, generating a total of 25 models. This allowed the fine-tuning to be stopped at a specific epoch.
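
A minimal sketch of the sampling and 3-fold scheme from panels b and c; `fine_tune` is a hypothetical placeholder, and the sketch shows the control flow only, under the assumptions stated in the comments.

```python
import numpy as np
from sklearn.model_selection import KFold

def fine_tune(model, train_ids, val_ids):
    """Hypothetical placeholder: fine-tune `model` on `train_ids` with
    early stopping on `val_ids`, returning the fine-tuned model."""
    return model

def three_fold_fine_tuning(naive_models, test_image_ids, n_images=12, seed=0):
    """Fine-tune each Naive model with a 3-fold scheme on a uniform sample
    of testing images (n_images is one of {12, 24, 36} in the paper)."""
    rng = np.random.default_rng(seed)
    sampled = rng.choice(np.asarray(test_image_ids), size=n_images, replace=False)
    fine_tuned = []
    for model in naive_models:                       # each Naive starting point
        for train_idx, val_idx in KFold(n_splits=3).split(sampled):
            # Train on two folds, validate on the remaining fold; the
            # validation fold sets the epoch budget and detection threshold.
            fine_tuned.append(fine_tune(model, sampled[train_idx], sampled[val_idx]))
    return fine_tuned
```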

Supplementary information

Supplementary Information

Supplementary Figs. 1–24, Tables 1–30 and Notes 1–5.

Reporting Summary

About this article

Cite this article

Bilodeau, A., Delmas, C.V.L., Parent, M. et al. Microscopy analysis neural network to solve detection, enumeration and segmentation from image-level annotations. Nat Mach Intell 4, 455–466 (2022). https://doi.org/10.1038/s42256-022-00472-w

