Automated Urine Cell Image Classification Model Using Chaotic Mixer Deep Feature Extraction

Journal of Digital Imaging

Abstract

Microscopic examination of urinary sediments is a common laboratory procedure. Automated image-based classification of urinary sediments can reduce analysis time and costs. Inspired by cryptographic mixing protocols and computer vision, we developed an image classification model that combines a novel mixer algorithm, based on the Arnold Cat Map (ACM) and fixed-size patches, with transfer learning for deep feature extraction. Our study dataset comprised 6,687 urinary sediment images belonging to seven classes: Cast, Crystal, Epithelia, Epithelial nuclei, Erythrocyte, Leukocyte, and Mycete. The developed model consists of four layers: (1) an ACM-based mixer that generates mixed images from resized 224 × 224 input images using fixed-size 16 × 16 patches; (2) a DenseNet201 network pre-trained on ImageNet1K that extracts 1,920 features each from the raw input image and its six corresponding mixed images, which are concatenated into a final feature vector of length 13,440; (3) iterative neighborhood component analysis that selects the most discriminative features, with the optimal vector length of 342 determined using a k-nearest neighbor (kNN)-based loss function calculator; and (4) a shallow kNN classifier evaluated with ten-fold cross-validation. Our model achieved 98.52% overall accuracy for seven-class classification, outperforming published models for urinary cell and sediment analysis. We demonstrated the feasibility and accuracy of deep feature engineering using an ACM-based mixer algorithm for image preprocessing combined with pre-trained DenseNet201 for feature extraction. The classification model was both demonstrably accurate and computationally lightweight, making it ready for implementation in real-world image-based urine sediment analysis applications.
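
To make layers (1) and (2) of the pipeline concrete, the sketch below is a minimal, hypothetical Python/PyTorch illustration rather than the authors' implementation: it assumes the Arnold Cat Map permutes the 14 × 14 grid of 16 × 16 patches and that the six mixed images come from one to six successive ACM iterations; all function and variable names are illustrative.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models, transforms

PATCH, GRID = 16, 14  # a 224 x 224 image splits into a 14 x 14 grid of 16 x 16 patches

def acm_permutation(n, iterations):
    """Arnold Cat Map on an n x n index grid: (r, c) -> ((r + c) mod n, (r + 2c) mod n)."""
    rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    for _ in range(iterations):
        rows, cols = (rows + cols) % n, (rows + 2 * cols) % n
    return rows, cols

def mix_image(img, iterations):
    """Scramble a 224 x 224 x 3 uint8 image by relocating its 16 x 16 patches with the ACM."""
    rows, cols = acm_permutation(GRID, iterations)
    mixed = np.empty_like(img)
    for r in range(GRID):
        for c in range(GRID):
            rr, cc = rows[r, c], cols[r, c]
            mixed[rr * PATCH:(rr + 1) * PATCH, cc * PATCH:(cc + 1) * PATCH] = \
                img[r * PATCH:(r + 1) * PATCH, c * PATCH:(c + 1) * PATCH]
    return mixed

# DenseNet201 pre-trained on ImageNet; global average pooling of its last feature map
# yields the 1,920-dimensional deep feature vector mentioned in the abstract.
backbone = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(img):
    x = preprocess(img).unsqueeze(0)                          # (1, 3, 224, 224)
    fmap = F.relu(backbone.features(x))                       # (1, 1920, 7, 7)
    return F.adaptive_avg_pool2d(fmap, 1).flatten().numpy()   # (1920,)

def concatenated_features(img):
    """Raw image plus six mixed images -> 7 x 1,920 = 13,440 features."""
    views = [img] + [mix_image(img, k) for k in range(1, 7)]
    return np.concatenate([deep_features(v) for v in views])
```

Layers (3) and (4) could then be approximated with scikit-learn, for example by ranking features with NeighborhoodComponentsAnalysis and sweeping candidate feature-vector lengths over a KNeighborsClassifier under ten-fold cross-validation; the exact iterative NCA (INCA) selector used by the authors is not reproduced here.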

Data Availability

The data used in this study were downloaded from [6,7,8].

References

  1. M. Oyaert, J. Delanghe, Progress in automated urinalysis, Annals of Laboratory Medicine, 39 (2019) 15-22.

  2. C. Cavanaugh, M.A. Perazella, Urine sediment examination in the diagnosis and management of kidney disease: core curriculum 2019, American Journal of Kidney Diseases, 73 (2019) 258-272.

  3. M.A. Perazella, The urine sediment as a biomarker of kidney disease, American Journal of Kidney Diseases, 66 (2015) 748-755.

  4. S. De Bruyne, M.M. Speeckaert, W. Van Biesen, J.R. Delanghe, Recent evolutions of machine learning applications in clinical laboratory medicine, Critical Reviews in Clinical Laboratory Sciences, 58 (2021) 131-152.

  5. M. D'Alessandro, L. Poli, Q. Lai, A. Gaeta, C. Nazzari, M. Garofalo, F. Nudo, F. Della Pietra, A. Bachetoni, V. Sargentini, Automated Intelligent Microscopy for the Recognition of Decoy Cells in Urine Samples of Kidney Transplant Patients, Transplantation Proceedings, Elsevier, 2019, pp. 157–159.

  6. Y. Liang, R. Kang, C. Lian, Y. Mao, An end-to-end system for automatic urinary particle recognition with convolutional neural network, Journal of Medical Systems, 42 (2018) 1-14.

  7. Y. Liang, Z. Tang, M. Yan, J. Liu, Object detection based on deep learning for urine sediment examination, Biocybernetics and Biomedical Engineering, 38 (2018) 661-670.

  8. M. Yan, Q. Liu, Z. Yin, D. Wang, Y. Liang, A Bidirectional Context Propagation Network for Urine Sediment Particle Detection in Microscopic Images, ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2020, pp. 981–985.

  9. Q. Li, Z. Yu, T. Qi, L. Zheng, S. Qi, Z. He, S. Li, H. Guan, Inspection of visible components in urine based on deep learning, Medical Physics, 47 (2020) 2937-2949.

  10. X. Zhang, L. Jiang, D. Yang, J. Yan, X. Lu, Urine sediment recognition method based on multi-view deep residual learning in microscopic image, Journal of Medical Systems, 43 (2019) 1-10.

  11. J. Pan, C. Jiang, T. Zhu, Classification of urine sediment based on convolution neural network, AIP Conference Proceedings, AIP Publishing LLC, 2018, pp. 040176.

  12. T. Li, D. Jin, C. Du, X. Cao, H. Chen, J. Yan, N. Chen, Z. Chen, Z. Feng, S. Liu, The image-based analysis and classification of urine sediments using a LeNet-5 neural network, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 8 (2020) 109-114.

  13. N. O’Mahony, S. Campbell, A. Carvalho, S. Harapanahalli, G.V. Hernandez, L. Krpalkova, D. Riordan, J. Walsh, Deep learning vs. traditional computer vision, Science and information conference, Springer, 2019, pp. 128–144.

  14. J. Lemley, S. Bazrafkan, P. Corcoran, Deep Learning for Consumer Devices and Services: Pushing the limits for machine learning, artificial intelligence, and computer vision, IEEE Consumer Electronics Magazine, 6 (2017) 48-56.

  15. I. Zafar, G. Tzanidou, R. Burton, N. Patel, L. Araujo, Hands-on convolutional neural networks with TensorFlow: Solve computer vision problems with modeling in TensorFlow and Python, Packt Publishing Ltd, 2018.

  16. Ş. Öztürk, U. Özkaya, Residual LSTM layered CNN for classification of gastrointestinal tract diseases, Journal of Biomedical Informatics, 113 (2021) 103638.

  17. P. Carcagnì, M. Leo, G. Celeste, C. Distante, A. Cuna, A systematic investigation on deep architectures for automatic skin lesions classification, 2020 25th International Conference on Pattern Recognition (ICPR), IEEE, 2021, pp. 8639–8646.

  18. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, An image is worth 16x16 words: Transformers for image recognition at scale, arXiv preprint arXiv:2010.11929, (2020).

  19. I.O. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Unterthiner, J. Yung, A. Steiner, D. Keysers, J. Uszkoreit, MLP-Mixer: An all-MLP architecture for vision, Advances in Neural Information Processing Systems, 34 (2021).

  20. Z. Liu, J. Ning, Y. Cao, Y. Wei, Z. Zhang, S. Lin, H. Hu, Video swin transformer, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 3202–3211.

  21. A. Trockman, J.Z. Kolter, Patches are all you need?, arXiv preprint arXiv:2201.09792, (2022).

  22. M. Baygin, O. Yaman, P.D. Barua, S. Dogan, T. Tuncer, U.R. Acharya, Exemplar Darknet19 feature generation technique for automated kidney stone detection with coronal CT images, Artificial Intelligence in Medicine, 127 (2022) 102274.

  23. Z. Tu, H. Talebi, H. Zhang, F. Yang, P. Milanfar, A. Bovik, Y. Li, MAXIM: Multi-axis MLP for image processing, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5769–5780.

  24. V.I. Arnold, A. Avez, Ergodic problems of classical mechanics, Benjamin, 1968.

  25. J. Bao, Q. Yang, Period of the discrete Arnold cat map and general cat map, Nonlinear Dynamics, 70 (2012) 1365-1375.

  26. H. Zhang, Z. Dong, B. Li, S. He, Multi-Scale MLP-Mixer for image classification, Knowledge-Based Systems, 258 (2022) 109792.

  27. Z. Zhou, M.T. Islam, L. Xing, Multibranch CNN With MLP-Mixer-Based Feature Exploration for High-Performance Disease Diagnosis, IEEE Transactions on Neural Networks and Learning Systems, (2023).

  28. G. Huang, Z. Liu, L. Van Der Maaten, K.Q. Weinberger, Densely connected convolutional networks, Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700–4708.

  29. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, ImageNet: A large-scale hierarchical image database, 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2009, pp. 248–255.

  30. T. Tuncer, S. Dogan, F. Özyurt, S.B. Belhaouari, H. Bensmail, Novel Multi Center and Threshold Ternary Pattern Based Method for Disease Detection Method Using Voice, IEEE Access, 8 (2020) 84532-84540.

  31. L.E. Peterson, K-nearest neighbor, Scholarpedia, 4 (2009) 1883.

  32. H. Tora, E. Gokcay, M. Turan, M. Buker, A generalized Arnold’s Cat Map transformation for image scrambling, Multimedia Tools and Applications, (2022) 1–14.

  33. J. Goldberger, G.E. Hinton, S. Roweis, R.R. Salakhutdinov, Neighbourhood components analysis, Advances in neural information processing systems, 17 (2004) 513-520.

  34. H.W. Loh, C.P. Ooi, S. Seoni, P.D. Barua, F. Molinari, U.R. Acharya, Application of Explainable Artificial Intelligence for Healthcare: A Systematic Review of the Last Decade (2011–2022), Computer Methods and Programs in Biomedicine, (2022) 107161.

  35. R.R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, Grad-cam: Visual explanations from deep networks via gradient-based localization, Proceedings of the IEEE international conference on computer vision, 2017, pp. 618–626.

  36. V. Jahmunah, E.Y.K. Ng, R.-S. Tan, S.L. Oh, U.R. Acharya, Explainable detection of myocardial infarction using deep learning models with Grad-CAM technique on ECG signals, Computers in Biology and Medicine, 146 (2022) 105550.

  37. K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.

  38. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, L.-C. Chen, Mobilenetv2: Inverted residuals and linear bottlenecks, Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 4510–4520.

  39. J. Redmon, A. Farhadi, YOLO9000: better, faster, stronger, Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7263–7271.

  40. F. Chollet, Xception: Deep learning with depthwise separable convolutions, Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1251–1258.

  41. M. Tan, Q. Le, Efficientnet: Rethinking model scaling for convolutional neural networks, International Conference on Machine Learning, PMLR, 2019, pp. 6105–6114.

  42. G. Fasano, A. Franceschini, A multidimensional version of the Kolmogorov–Smirnov test, Monthly Notices of the Royal Astronomical Society, 225 (1987) 155-170.

  43. D.J. Steinskog, D.B. Tjøstheim, N.G. Kvamstø, A cautionary note on the use of the Kolmogorov–Smirnov test for normality, Monthly Weather Review, 135 (2007) 1151-1157.

  44. M. Sýs, L. Obrátil, V. Matyáš, D. Klinec, A Bad Day to Die Hard: Correcting the Dieharder Battery, Journal of Cryptology, 35 (2022) 1-20.

  45. M. Kaneko, K. Tsuji, K. Masuda, K. Ueno, K. Henmi, S. Nakagawa, R. Fujita, K. Suzuki, Y. Inoue, S. Teramukai, Urine cell image recognition using a deep-learning model for an automated slide evaluation system, BJU International, 130 (2022) 235-243.

  46. X. Zhao, J. Xiang, Q. Ji, Urine red blood cell classification based on Siamese Network, Journal of Physics: Conference Series, IOP Publishing, 2021, pp. 012089.

  47. E. Fernandez, M. Barlis, K. Dematera, G. LLas, R. Paeste, D. Taveso, J. Velasco, Four-class urine microscopic recognition system through image processing using artificial neural network, J. Telecommun. Electron. Comput. Eng.(JTEC), (2018) 214–218.

  48. X. Li, M. Li, Y. Wu, X. Zhou, L. Zhang, X. Ping, X. Zhang, W. Zheng, Multi‐instance inflated 3D CNN for classifying urine red blood cells from multi‐focus videos, IET Image Processing, 16 (2022) 2114-2123.

  49. E.O. Fernandez, M. Nilo, J.O. Aquino, J.M.P. Bravo, S. Julie-Anne, C.V.B. Gaddi, C.A. Simbran, Microcontroller-based automated microscope for image recognition of four urine constituents, TENCON 2018–2018 IEEE Region 10 Conference, IEEE, 2018, pp. 1689–1694.

  50. F. Hao, X. Li, M. Li, Y. Wu, W. Zheng, An Accurate Urine Red Blood Cell Detection Method Based on Multi-Focus Video Fusion and Deep Learning with Application to Diabetic Nephropathy Diagnosis, Electronics, 11 (2022) 4176.

  51. A. Africa, J. Velasco, Development of a urine strip analyzer using artificial neural network using an android phone, ARPN Journal of Engineering and Applied Sciences, 12 (2017) 1706-1712.

  52. J.S. Velasco, M.K. Cabatuan, E.P. Dadios, Urine sediment classification using deep learning, Lecture Notes on Advanced Research in Electrical and Electronic Engineering Technology, (2019) 180–185.

Author information

Contributions

Conceptualization: ME, IT, PDB, KY, SD, TT, RST, HF, URA; formal analysis: ME, IT, PDB, KY; investigation: ME, IT, PDB, KY; methodology: ME, IT, PDB, KY, SD, TT, RST, HF, URA; software: SD, TT; project administration: URA; resources: IT, PDB; supervision: URA; validation: ME, IT, PDB, KY, SD, TT, RST, HF, URA; visualization: ME, IT, PDB, KY, SD, TT; writing—original draft: ME, IT, PDB, KY, SD, TT, RST, HF, URA; writing—review and editing: ME, IT, PDB, KY, SD, TT, RST, HF, URA; all authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Sengul Dogan.

Ethics declarations

Ethical Approval

Not applicable.

Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Competing Interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Erten, M., Tuncer, I., Barua, P.D. et al. Automated Urine Cell Image Classification Model Using Chaotic Mixer Deep Feature Extraction. J Digit Imaging 36, 1675–1686 (2023). https://doi.org/10.1007/s10278-023-00827-8
