Stable and invertible invariants description for gray-level images based on Radon transform

Original article, published in The Visual Computer

Abstract

In a wide range of applications, descriptors of various types have been designed to identify and recognize textured objects in grayscale images. Such objects must be classified independently of their position, orientation and scale. Completeness of the descriptors, which guarantees their uniqueness for a given shape, is also a sought-after property, and it is known in the literature to be difficult to obtain. It has been achieved in rare cases for planar curves, for instance with Fourier descriptors or the curvature. For grayscale images, we know of at least three families of complete descriptors: those based on Zernike moments, those computed from the analytical Fourier–Mellin transform, and those obtained from the complex moments. To our knowledge, for curved surfaces and 3D volume images there are as yet no complete descriptors invariant to 3D rigid motions. The recently introduced property of invertibility of invariants, which implies completeness, allows the reconstruction of the object's shape up to a similarity. The two sets of descriptors proposed here satisfy, on the one hand, invariance and invertibility and, on the other hand, stability, a notion introduced to guarantee that the descriptors vary only slightly under small deformations of the shape. Their construction, based on the Radon transform, provides a degree of robustness to noise. In this article, we rigorously prove the invariance, invertibility and convergence of the two proposed sets of invariants. To evaluate their stability and robustness to noise, experimental studies are carried out on well-known datasets, namely Kimia 99 and MPEG7. For further evaluation, we also introduce our own face dataset, named FSTEF. In addition, several types and levels of noise were added to the images to test robustness.
The effectiveness of the proposed sets of invariants is thus demonstrated by the different studies presented in this work.
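The paper's two descriptor sets are not reproduced here, but the underlying principle they exploit can be illustrated simply: rotating an image about its centre circularly shifts its Radon sinogram along the angle axis, so Fourier magnitudes taken over the angle variable are rotation invariant. The sketch below, written only with NumPy, builds such a signature from a naive Radon transform. The helper names, the second-moment feature and all parameter choices are our own illustrative assumptions, not the authors' construction, and the hand-rolled Radon transform is far cruder than what the paper uses.

```python
import numpy as np

def _rotate(image, angle_deg):
    """Rotate an image about its centre (bilinear interpolation, zero padding)."""
    a = np.deg2rad(angle_deg)
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find its source coordinate.
    sx = np.cos(a) * (xs - cx) + np.sin(a) * (ys - cy) + cx
    sy = -np.sin(a) * (xs - cx) + np.cos(a) * (ys - cy) + cy
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    fx, fy = sx - x0, sy - y0
    out = np.zeros_like(image, dtype=float)
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            ok = (yy >= 0) & (yy < h) & (xx >= 0) & (xx < w)
            wgt = (fx if dx else 1 - fx) * (fy if dy else 1 - fy)
            out[ok] += wgt[ok] * image[yy[ok], xx[ok]]
    return out

def radon(image, thetas):
    """Naive Radon transform: rotate, then sum rows -> sinogram of shape (angle, s)."""
    return np.array([_rotate(image, t).sum(axis=0) for t in thetas])

def rotation_signature(image, n_angles=90):
    """Rotation-invariant signature: per-angle second moment, then |FFT| over angles."""
    thetas = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = radon(image, thetas)                       # (angle, s)
    s = np.arange(sino.shape[1]) - (sino.shape[1] - 1) / 2.0
    mu2 = (sino * s**2).sum(axis=1)                   # even in s, so 180-degree periodic
    return np.abs(np.fft.fft(mu2))                    # invariant to circular shifts

# Toy example: an off-centre rectangle and a 30-degree rotated copy.
img = np.zeros((65, 65))
img[22:34, 26:42] = 1.0
sig_a = rotation_signature(img)
sig_b = rotation_signature(_rotate(img, 30.0))
err = np.linalg.norm(sig_a - sig_b) / np.linalg.norm(sig_a)
print(f"relative signature difference: {err:.3f}")
```

The second moment is used because it is even in the radial variable s, so the 180-degree periodicity of the sinogram causes no wrap-around artefact; the residual difference between the two signatures comes only from interpolation error. The paper's invariants are additionally invertible and provably stable, which this toy signature is not.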


Data Availability

The FSTEF dataset is not publicly available, in order to preserve individuals' privacy under the European General Data Protection Regulation (GDPR).


Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Youssef Ait Khouya.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Khouya, Y.A., Oussous, M.A., Jakimi, A. et al. Stable and invertible invariants description for gray-level images based on Radon transform. Vis Comput 41, 79–97 (2025). https://doi.org/10.1007/s00371-024-03311-8
