DT4PEIS: detection transformers for parasitic egg instance segmentation

Abstract

Parasitic infections pose a significant health risk in many regions worldwide, requiring rapid and reliable diagnostic methods to identify and treat affected individuals. Recent advancements in deep learning have significantly improved the accuracy and efficiency of microscopic image analysis workflows, enabling their application in domains such as medical diagnostics and microbiology. This work presents DT4PEIS, a novel two-stage architecture for the instance segmentation of parasite eggs in microscopic images. The first stage is a DEtection TRansformer (DETR) based architecture, which predicts the bounding boxes and class labels of the detected eggs. The predicted bounding boxes are then used as prompts to guide the segmentation process in the second stage, which is based on the Segment Anything Model (SAM) architecture. We evaluate the proposed method on the Chula-ParasiteEgg-11 dataset. Our results show that it outperforms the other evaluated architectures in terms of segmentation mean Average Precision (mAP), providing a more detailed and accurate representation of the detected eggs.
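
To make the two-stage pipeline concrete, the following sketch shows how stage-one bounding boxes can be used as prompts for SAM in stage two. It is a minimal illustration under stated assumptions, not the authors' released implementation: it assumes the official segment_anything Python package with a downloaded ViT-H checkpoint, and the stage-one detector call run_detr as well as the file names are hypothetical placeholders.

import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM model and wrap it in a promptable predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Read a microscopy image and compute its embedding once per image.
image = cv2.cvtColor(cv2.imread("microscopy_sample.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Hypothetical stage-one output: one (x0, y0, x1, y1) box per detected egg,
# e.g. boxes, labels = run_detr(image).
boxes = [(120, 80, 340, 290)]

masks = []
for box in boxes:
    # The box prompt constrains SAM to segment the object inside it.
    mask, _, _ = predictor.predict(box=np.array(box), multimask_output=False)
    masks.append(mask[0])  # boolean H x W mask for this egg instance

Because the prompts come directly from the detector, the second stage remains fully automatic and yields one mask per detected egg.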

Data Availability

The generated segmentation masks used in this work are publicly available [45].

Notes

  1. https://www.who.int/news-room/fact-sheets/detail/soil-transmitted-helminth-infections

References

  1. Dogan N (2024) Intestinal parasites from past to present: taxonomy, paleoparasitology, geographic distribution, prevention and control strategies

  2. Belizario V Jr, Ampo SAM, Henderson K, Santos T, Gerth-Guyette E, Guzman LM, Lumangaya C, Siao T, Chua RSCS (2024) Surveillance of soil-transmitted helminthiasis and schistosomiasis in the Philippines: review of current policies, guidelines and practices. Southeast Asian J Trop Med Public Health 55(4):177–195

  3. Anantrasirichai N, Chalidabhongse TH, Palasuwan D, Naruenatthanaset K, Kobchaisawat T, Nunthanasup N, Boonpeng K, Ma X, Achim A (2022) ICIP 2022 challenge on parasitic egg detection and classification in microscopic images: dataset, methods and results. In: IEEE international conference on image processing (ICIP), IEEE pp 4306–4310

  4. Suwannaphong T, Chavana S, Tongsom S, Palasuwan D, Chalidabhongse TH, Anantrasirichai N (2023) Parasitic egg detection and classification in low-cost microscopic images using transfer learning. SN Comput Sci 5(1):82

  5. Capuozzo S, Marrone S, Gravina M, Cringoli G, Rinaldi L, Maurelli MP, Bosco A, Orrù G, Marcialis GL, Ghiani L et al (2024) Automating parasite egg detection: insights from the first AI-KFM challenge. Front Artif Intell 7:1325219

  6. Chaibutr N, Pongpanitanont P, Laymanivong S, Thanchomnang T, Janwan P (2024) Development of a machine learning model for the classification of Enterobius vermicularis egg. J Imaging 10(9):212

  7. Gao Z, Huang J, Chen J, Shao T, Ni H, Cai H (2024) Deep transfer learning-based computer vision for real-time harvest period classification and impurity detection of Porphyra haitnensis. Aquac Int 1–28

  8. Thanchomnang T, Chaibutr N, Maleewong W, Janwan P (2024) Automatic detection of Opisthorchis viverrini egg in stool examination using convolutional-based neural networks. PeerJ 12:16773

  9. Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A, Zagoruyko S (2020) End-to-end object detection with transformers. In: Proceedings of european conference on computer vision (ECCV), Springer pp 213–229

  10. Kirillov A, Mintun E, Ravi N, Mao H, Rolland C, Gustafson L, Xiao T, Whitehead S, Berg AC, Lo W-Y, et al (2023) Segment anything. arXiv:2304.02643

  11. Roberts LS, Janovy Jr J (2009) Gerald D. Schmidt & Larry S. Roberts’ Foundations of Parasitology. 8th edn. McGraw-Hill, New York

  12. Ren S, He K, Girshick R, Sun J (2015) Faster R-CNN: towards real-time object detection with region proposal networks. Adv Neural Inf Process Syst 28

  13. He K, Gkioxari G, Dollár P, Girshick R (2017) Mask R-CNN. In: Proceedings of the IEEE/CVF international conference on computer vision (ICCV) pp 2961–2969

  14. Cai Z, Vasconcelos N (2018) Cascade R-CNN: delving into high quality object detection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) pp 6154–6162

  15. Chen K, Pang J, Wang J, Xiong Y, Li X, Sun S, Feng W, Liu Z, Shi J, Ouyang W, et al (2019) Hybrid task cascade for instance segmentation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) pp 4974–4983

  16. Feng C, Zhong Y, Gao Y, Scott MR, Huang W (2021) TOOD: task-aligned one-stage object detection. In: Proceedings of the IEEE/CVF international conference on computer vision (ICCV), IEEE computer society pp 3490–3499

  17. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) pp 770–778

  18. Xie S, Girshick R, Dollár P, Tu Z, He K (2017) Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) pp 1492–1500

  19. Liu Z, Mao H, Wu C-Y, Feichtenhofer C, Darrell T, Xie S (2022) A ConvNet for the 2020s. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) pp 11976–11986

  20. Woo S, Debnath S, Hu R, Chen X, Liu Z, Kweon IS, Xie S (2023) ConvNeXt V2: co-designing and scaling ConvNets with masked autoencoders. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) pp 16133–16142

  21. Lin T-Y, Dollár P, Girshick R, He K, Hariharan B, Belongie S (2017) Feature pyramid networks for object detection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) pp 2117–2125

  22. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, et al (2020) An image is worth 16x16 words: transformers for image recognition at scale. arXiv:2010.11929

  23. Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF international conference on computer vision (ICCV) pp 10012–10022

  24. Liu S, Li F, Zhang H, Yang X, Qi X, Su H, Zhu J, Zhang L (2022) DAB-DETR: dynamic anchor boxes are better queries for DETR. arXiv:2201.12329

  25. Zhu X, Su W, Lu L, Li B, Wang X, Dai J (2021) Deformable DETR: deformable transformers for end-to-end object detection. In: International conference on learning representations (ICLR)

  26. Zhang H, Li F, Liu S, Zhang L, Su H, Zhu J, Ni L, Shum H-Y (2023) DINO: DETR with improved denoising anchor boxes for end-to-end object detection. In: International conference on learning representations

  27. Zong Z, Song G, Liu Y (2023) DETRs with collaborative hybrid assignments training. In: Proceedings of the IEEE/CVF international conference on computer vision (ICCV) pp 6748–6758

  28. Lin T-Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick CL (2014) Microsoft COCO: common objects in context. In: Proceedings of european conference on computer vision (ECCV), Springer pp 740–755

  29. Gupta A, Dollar P, Girshick R (2019) LVIS: a dataset for large vocabulary instance segmentation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp 5356–5364

  30. Penpong N, Wanna Y, Kamjanlard C, Techasen A, Intharah T (2024) Attacking the out-of-domain problem of a parasite egg detection in-the-wild. Heliyon 10(4)

  31. Kumar S, Arif T, Ahamad G, Chaudhary AA, Ali MA, Islam A (2024) Improving faster r-cnn generalization for intestinal parasite detection using cycle-gan based data augmentation. Disc Appl Sci 6(5):1–13

  32. Rajasekar SJS, Jaswal G, Perumal V, Ravi S, Dutt V (2023) Parasite.ai: an automated parasitic egg detection model from microscopic images of fecal smears using deep learning techniques. In: 2023 International conference on advances in computing, communication and applied informatics (ACCAI), IEEE pp 1–9

  33. Aung ZH, Srithaworn K, Achakulvisut T (2022) Multitask learning via pseudo-label generation and ensemble prediction for parasitic egg cell detection: IEEE ICIP challenge 2022. In: IEEE international conference on image processing (ICIP), IEEE pp 4273–4277

  34. Jocher G (2020) YOLOv5 by Ultralytics. https://github.com/ultralytics/yolov5

  35. Kumar S, Arif T, Ahamad G, Chaudhary AA, Khan S, Ali MA (2023) An efficient and effective framework for intestinal parasite egg detection using YOLOv5. Diagnostics 13(18):2978

  36. Liang T, Chu X, Liu Y, Wang Y, Tang Z, Chu W, Chen J, Ling H (2022) CBNet: a composite backbone network architecture for object detection. IEEE Trans Image Process 31:6893–6906

  37. Wan Z, Liu S, Ding F, Li M, Srivastava G, Yu K (2023) C2BNet: a deep learning architecture with coupled composite backbone for parasitic egg detection in microscopic images. IEEE J Biomed Health Inform

  38. AlDahoul N, Karim HA, Momo MA, Escobar FIF, Magallanes VA, Tan MJT (2023) Parasitic egg recognition using convolution and attention network. Sci Rep 13(1):14475

  39. Zhang S, Chi C, Yao Y, Lei Z, Li SZ (2020) Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) pp 9759–9768

  40. Tian Z, Shen C, Chen H, He T (2019) FCOS: fully convolutional one-stage object detection. In: Proceedings of the IEEE/CVF international conference on computer vision (ICCV) pp 9627–9636

  41. Lin T-Y, Goyal P, Girshick R, He K, Dollár P (2017) Focal loss for dense object detection. In: Proceedings of the IEEE/CVF international conference on computer vision (ICCV) pp 2980–2988

  42. Zamami R, Kohagura K, Kinjyo K, Nakamura T, Kinjo T, Yamazato M, Ishida A, Ohya Y (2021) The association between glomerular diameter and secondary focal segmental glomerulosclerosis in chronic kidney disease. Kidney Blood Press Res 46(4):433–440

  43. He K, Chen X, Xie S, Li Y, Dollár P, Girshick R (2022) Masked autoencoders are scalable vision learners. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) pp 16000–16009

  44. Palasuwan D, Naruenatthanaset K, Kobchaisawat T, Chalidabhongse TH, Nunthanasup N, Boonpeng K, Anantrasirichai N (2022) Parasitic egg detection and classification in microscopic images. IEEE Dataport

  45. Ruiz-Santaquiteria J, Muñoz J, Pedraza A, Deniz O, Bueno G (2024) DT4PEIS masks repository. Mendeley data. https://doi.org/10.17632/d3wt5ynm7n.1

  46. MMDetection Contributors (2018) OpenMMLab detection toolbox and benchmark. https://github.com/open-mmlab/mmdetection

  47. Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin-transformer-object-detection. https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/

  48. Liang T, Chu X, Liu Y, Wang Y, Tang Z, Chu W, Chen J, Ling H (2022) CBNetV2. https://github.com/VDIGPKU/CBNetV2

  49. Ge Z, Liu S, Wang F, Li Z, Sun J (2021) YOLOX: exceeding YOLO series in 2021. arXiv:2107.08430

  50. Bochkovskiy A, Wang C-Y, Liao H-YM (2020) YOLOv4: optimal speed and accuracy of object detection. arXiv:2004.10934

  51. Padilla R, Netto SL, Da Silva EA (2020) A survey on performance metrics for object-detection algorithms. In: 2020 International conference on systems, signals and image processing (IWSSIP), IEEE pp 237–242

Acknowledgements

This work has been funded by the DIAMOND project (Ref. TED2021-132147B-100) and the HANS project (Ref. PID2021-127567NB-I00), both supported by the Spanish Ministry of Science, Innovation, and Universities and by the European Union NextGenerationEU/PRTR. The work has also been partially funded by the ARTE project (Ref. 2022-GRIN-34352), supported by the University of Castilla-La Mancha.

Author information

Contributions

All authors conceived the idea and designed the experiments. J.R. implemented the experiments and summarized the results. J.R. and A.P. wrote the manuscript. O.D. and G.B. supervised the project. All authors reviewed the manuscript.

Corresponding author

Correspondence to Gloria Bueno.

Ethics declarations

Competing Interests

The authors declare that they have no known competing interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethical and informed consent for data used

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Ruiz-Santaquiteria, J., Pedraza, A., Deniz, O. et al. DT4PEIS: detection transformers for parasitic egg instance segmentation. Appl Intell 55, 271 (2025). https://doi.org/10.1007/s10489-024-06199-y
