Rotation scaling and translation invariants by a remediation of Hu’s invariant moments

Multimedia Tools and Applications

Abstract

Owing to their invariance to translation, rotation and scaling, the seven invariant moments introduced by Hu (Visual pattern recognition by moment invariants, IRE Transactions on Information Theory, vol. 8, February 1962, pp. 179–187) are widely used in pattern recognition. However, this set of moments is finite, so it does not constitute a complete set of image descriptors. To address this problem, we introduce in this paper a new set of invariant moments of infinite order. Because non-orthogonal moments carry redundant information, we also propose a new set of orthogonal polynomials in two variables and derive from them a set of orthogonal moments that are invariant to rotation, scaling and translation. The presented approaches are evaluated through the invariability of the moments, image retrieval and object classification. In this framework, using the proposed orthogonal moments, we present two classification systems: the first based on the Fuzzy C-Means clustering algorithm (FCM) and the second based on the Radial Basis Function neural network (RBF). The performance of our invariant moments is compared with Legendre invariant moments, Tchebichef-Krawtchouk (TKIM), Tchebichef-Hahn (THIM) and Krawtchouk-Hahn (KHIM) invariant moments, Hu invariant moments, the histogram of oriented gradients descriptor (HOG), the adaptive hierarchical density histogram features (AHDH), and the color and texture descriptors Hist, HSV, FOS and SGLD. The experimental tests are performed on seven image databases: the Columbia Object Image Library (COIL-20), the MPEG7-CE shape database, the MNIST handwritten digit database, the MNIST fashion image database, ImageNet, COIL-100 and the ORL database. The obtained results show the efficiency and superiority of our orthogonal invariant moments.




References

  1. Broomhead DS, Lowe D (1988) Radial basis functions, multi-variable functional interpolation and adaptive networks. RSRE Technical Report 2:41–48

  2. Chong C, Raveendran P, Mukundan R (2004) Translation and scale invariants of Legendre moments. Pattern Recogn 37(1):119–129. https://doi.org/10.1016/j.patcog.2003.06.003

  3. Dalal N, Triggs B (2005) Histograms of oriented gradients for human detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 886–893

  4. Dunn JC (1974) A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. Journal of Cybernetics 3(3):32–57

  5. Faloutsos C, Equitz W, Flickner M, Niblack W, Petkovic D, Barber R (1994) Efficient and effective querying by image content. Journal of Intelligent Information Systems 3(4):231–262

  6. Hafner J, Sawhney H, Equitz W, Flickner M, Niblack W (1995) Efficient color histogram indexing for quadratic form distance functions. IEEE Trans Pattern Anal Mach Intell 17(7):729–736

  7. Haralick R, Shanmugam K, Dinstein I (1973) Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics 3(6):610–621

  8. Hmimid A, Sayyouri M, Qjidaa H (2015) Fast computation of separable two-dimensional discrete invariant moments for image classification. Pattern Recogn 48:509–521. https://doi.org/10.1016/j.patcog.2014.08.020

  9. Hu MK (1962) Visual pattern recognition by moment invariants. IRE Trans Inform Theory 8:179–187

  10. Hu X, Zhang Q, Shi J, Qi Y (2016) A comparative study on weighted central moment and its application in 2D shape retrieval. Information 7

  11. Jahid T, Karmouni H, Hmimid A, Sayyouri M, Qjidaa H (2019) Fast computation of Charlier moments and its inverses using Clenshaw's recurrence formula for image analysis. Multimed Tools Appl 78:12183–12201. https://doi.org/10.1007/s11042-018-6757-z

  12. Khotanzad A, Hong Y (1990) Invariant image recognition by Zernike moments. IEEE Trans Pattern Anal Mach Intell 12:489–497. https://doi.org/10.1109/34.55109

  13. LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324. https://doi.org/10.1109/5.726791

  14. Liao SX, Pawlak M (1996) On image analysis by moments. IEEE Trans Pattern Anal Mach Intell 18(3):254–266. https://doi.org/10.1109/34.485554

  15. Press W, Flannery B, Teukolsky S, Vetterling W (2020) Numerical recipes: the art of scientific computing. Cambridge University Press

  16. Sidiropoulos P, Vrochidis S, Kompatsiaris I (2011) Content-based binary image retrieval using the adaptive hierarchical density histogram. Pattern Recogn 44(4):739–750. https://doi.org/10.1016/j.patcog.2010.09.014

  17. Teague M (1980) Image analysis via the general theory of moments. J Opt Soc Am 70:920–930. https://doi.org/10.1364/JOSA.70.000920

  18. Zhang H, Shu HZ, Han GN, Coatrieux G, Luo LM, Coatrieux JL (2010) Blurred image recognition by Legendre moment invariants. IEEE Trans Image Process 19(3):596–611. https://doi.org/10.1109/TIP.2009.2036702

  19. Zhu H (2012) Image representation using separable two-dimensional continuous and discrete orthogonal moments. Pattern Recogn 45(4):1540–1558. https://doi.org/10.1016/j.patcog.2011.10.002


Author information


Corresponding author

Correspondence to Amal Hjouji.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Proof of Theorem 1

Hu proved in [9] that the normalized central moments \( {\mu}_{n,m} \) are invariant to translation and scaling, i.e.

$$ {\mu}_{n,m}\left({f}^t\right)={\mu}_{n,m}(f)\ \mathrm{and}\kern0.50em {\mu}_{n,m}\left({f}^{sc}\right)={\mu}_{n,m}(f), $$
(73)

where \( {f}^t \) and \( {f}^{sc} \) denote the translated and scaled versions of the original image f, respectively, whatever the translation vector and the scaling factor. Using Eqs. (12) and (73), we get

$$ {\varnothing}_n\left({f}^t\right)=\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\mu}_{2k,2n-2k}\left({f}^t\right)=\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\mu}_{2k,2n-2k}(f)={\varnothing}_n(f) $$
(74)
$$ {\varnothing}_n\left({f}^{sc}\right)=\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\mu}_{2k,2n-2k}\left({f}^{sc}\right)=\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\mu}_{2k,2n-2k}(f)={\varnothing}_n(f) $$
(75)

Therefore, the moments ∅n, n = 1, 2, 3… are invariant to translation and scaling. Now we will prove the invariance under rotation. If the image f(x, y) is rotated by an angle θ, the rotation matrix is

$$ {M}_{\theta }=\left(\begin{array}{cc}\cos \left(\theta \right)& -\sin \left(\theta \right)\\ {}\sin \left(\theta \right)& \cos \left(\theta \right)\end{array}\right) $$
(76)

whose inverse matrix is \( {M}_{-\theta } \). The rotated image is

$$ {f}^r\left(x,y\right)=f\left({M}_{\theta}\left(\begin{array}{c}x\\ {}y\end{array}\right)\right)=f\left(x\cos \left(\theta \right)-y\sin \left(\theta \right),x\sin \left(\theta \right)+y\cos \left(\theta \right)\right) $$
(77)

and the moment after the rotation is

$$ {\displaystyle \begin{array}{l}{\varnothing}_n\left({f}^r\right)={\varnothing}_n\left(f\left(x\cos \left(\theta \right)-y\sin \left(\theta \right),x\sin \left(\theta \right)+y\cos \left(\theta \right)\right)\right)\\ {}=\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\mu}_{2k,2n-2k}\left(f\left(x\cos \left(\theta \right)-y\sin \left(\theta \right),x\sin \left(\theta \right)+y\cos \left(\theta \right)\right)\right)\end{array}} $$
(78)

Using Eqs. (12), (4) and (2), we get

$$ {\varnothing}_n\left({f}^r\right)={M}_{00}^{-\left(n+1\right)}\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }{\left(x-\overline{x}\right)}^{2k}{\left(y-\overline{y}\right)}^{2n-2k}f\left(x\cos \left(\theta \right)-y\sin \left(\theta \right),x\sin \left(\theta \right)+y\cos \left(\theta \right)\right) d\mu $$
(79)

By letting \( \left(\begin{array}{c}{x}^{\prime}\\ {}{y}^{\prime}\end{array}\right)={M}_{\theta}\left(\begin{array}{c}x\\ {}y\end{array}\right)=\left(\begin{array}{c}x\cos \left(\theta \right)-y\sin \left(\theta \right)\\ {}x\sin \left(\theta \right)+y\cos \left(\theta \right)\end{array}\right), \) we have \( d\mu = dxdy=\left|{M}_{\theta}\right|d{x}^{\prime }d{y}^{\prime }=d{x}^{\prime }d{y}^{\prime } \) and \( \left(\begin{array}{c}\overline{x}\\ {}\overline{y}\end{array}\right)={M}_{-\theta}\left(\begin{array}{c}\overline{x^{\prime }}\\ {}\overline{y^{\prime }}\end{array}\right), \) so the moment ∅n(fr) can be written as

$$ {\displaystyle \begin{array}{l}{\varnothing}_n\left({f}^r\right)={M}_{00}^{-\left(n+1\right)}\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }{\left[\left({x}^{\prime }-{\overline{x}}^{\prime}\right)\cos \left(\theta \right)+\left({y}^{\prime }-{\overline{y}}^{\prime}\right)\sin \left(\theta \right)\right]}^{2k}{\left[\left({y}^{\prime }-{\overline{y}}^{\prime}\right)\cos \left(\theta \right)-\left({x}^{\prime }-{\overline{x}}^{\prime}\right)\sin \left(\theta \right)\right]}^{2n-2k}f\left({x}^{\prime },{y}^{\prime}\right) d\mu \\ {}={M}_{00}^{-\left(n+1\right)}{\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty}\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\alpha}^k{\beta}^{n-k}f\left({x}^{\prime },{y}^{\prime}\right) d\mu \end{array}} $$
(80)

where

$$ \left\{\begin{array}{c}\alpha ={\left[\left({x}^{\prime }-{\overline{x}}^{\prime}\right)\cos \left(\theta \right)+\left({y}^{\prime }-{\overline{y}}^{\prime}\right)\sin \left(\theta \right)\right]}^2\\ {}\beta ={\left[\left({y}^{\prime }-{\overline{y}}^{\prime}\right)\cos \left(\theta \right)-\left({x}^{\prime }-{\overline{x}}^{\prime}\right)\sin \left(\theta \right)\right]}^2.\end{array}\right. $$
(81)

We use Newton's binomial formula

$$ {\left(\alpha +\beta \right)}^n=\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\alpha}^k{\beta}^{n-k} $$
(82)

together with Eq. (80) to obtain

$$ {\varnothing}_n\left({f}^r\right)={M}_{00}^{-\left(n+1\right)}{\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }{\left(\alpha +\beta \right)}^nf\left({x}^{\prime },{y}^{\prime}\right) d\mu $$
(83)

We have,

$$ {\displaystyle \begin{array}{l}\alpha +\beta ={\left[\left({x}^{\prime }-{\overline{x}}^{\prime}\right)\cos \theta +\left({y}^{\prime }-{\overline{y}}^{\prime}\right)\sin \theta \right]}^2+{\left[\left({y}^{\prime }-{\overline{y}}^{\prime}\right)\cos \theta -\left({x}^{\prime }-{\overline{x}}^{\prime}\right)\sin \theta \right]}^2\\ {}\kern1.75em ={\left({x}^{\prime }-{\overline{x}}^{\prime}\right)}^2+{\left({y}^{\prime }-{\overline{y}}^{\prime}\right)}^2\end{array}} $$

This equation and Newton’s binomial formula give

$$ {\displaystyle \begin{array}{c}{\left(\alpha +\beta \right)}^n={\left[{\left({x}^{\prime }-{\overline{x}}^{\prime}\right)}^2+{\left({y}^{\prime }-{\overline{y}}^{\prime}\right)}^2\right]}^n\\ {}=\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\left({x}^{\prime }-{\overline{x}}^{\prime}\right)}^{2k}{\left({y}^{\prime }-{\overline{y}}^{\prime}\right)}^{2n-2k}\end{array}} $$
(84)

Equations (83), (84) and (2) give

$$ {\displaystyle \begin{array}{l}{\varnothing}_n\left({f}^r\right)={M}_{00}^{-\left(n+1\right)}{\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty}\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\left({x}^{\prime }-{\overline{x}}^{\prime}\right)}^{2k}{\left({y}^{\prime }-{\overline{y}}^{\prime}\right)}^{2n-2k}f\left({x}^{\prime },{y}^{\prime}\right) d\mu \kern0.75em \\ {}=\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){M}_{00}^{-\left(n+1\right)}\times {M}_{2k,2n-2k}(f)\\ {}=\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){M}_{00}^{-\left(\frac{(2k)+\left(2n-2k\right)+2}{2}\right)}\times {M}_{2k,2n-2k}(f)\end{array}} $$
(85)

Equations (85), (4) and (12) give

$$ {\varnothing}_n\left({f}^r\right)=\sum \limits_{k=0}^n\left(\begin{array}{c}n\\ {}k\end{array}\right){\mu}_{2k,2n-2k}(f)={\varnothing}_n(f). $$
(86)

This means that the moments ∅n are invariant to rotation. However, these moments are not orthogonal, which introduces redundancy in the image information. For this reason, Appendix 2 constructs a new set of orthogonal invariant moments expressed in terms of the normalized central moments and the proposed moments ∅n.
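To make the invariants ∅n concrete, the following minimal Python sketch (not the authors' code; the function name phi_invariants and the pixel-grid conventions are ours) computes a discrete approximation of Eq. (12) from the normalized central moments of a grayscale image. Evaluating it on an image and on a translated, scaled or rotated copy should return nearly identical values, up to sampling error.

```python
import numpy as np
from math import comb

def phi_invariants(img, n_max):
    """Discrete approximation of phi_n = sum_k C(n,k) * mu_{2k,2n-2k} (Eq. (12)),
    where mu_{p,q} are the normalized central moments of the image."""
    img = np.asarray(img, dtype=float)
    rows, cols = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()                            # geometric moment M_00
    dx = cols - (cols * img).sum() / m00       # x - x_bar
    dy = rows - (rows * img).sum() / m00       # y - y_bar

    def mu(p, q):
        # normalized central moment: central moment divided by M_00^((p+q+2)/2)
        return (dx**p * dy**q * img).sum() / m00**((p + q + 2) / 2)

    return [sum(comb(n, k) * mu(2 * k, 2 * n - 2 * k) for k in range(n + 1))
            for n in range(n_max + 1)]
```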

Appendix 2: Calculation of the first orthogonal polynomials

Recall that \( B\left(0,1\right)=\left\{\left(x,y\right)\in {\mathbb{R}}^2;{x}^2+{y}^2\le 1\right\} \). The scalar product \( \left\langle \cdot, \cdot \right\rangle \) associated with the weight function w(x, y) (the characteristic function of B(0, 1)) is therefore defined as follows

$$ \left\langle P,Q\right\rangle ={\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }P\left(x,y\right)Q\left(x,y\right)w\left(x,y\right) dxdy={\iint}_{B\left(0;1\right)}P\left(x,y\right)Q\left(x,y\right) dxdy $$

Using the algorithm (24)–(28), the polynomials Pn(x, y), n = 0, 1, 2, …, can be calculated as follows:

  • Step 0: Eq. (24) gives P0(x, y) = 1.

  • Step 1: Eq. (25) gives

    $$ {a}_1=\frac{\left\langle \left({x}^2+{y}^2\right){P}_0,{P}_0\right\rangle }{\left\langle {P}_0,{P}_0\right\rangle }=\frac{\left\langle \left({x}^2+{y}^2\right),1\right\rangle }{\left\langle 1,1\right\rangle }=\frac{\int_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty}\left({x}^2+{y}^2\right)w\left(x,y\right) dxdy}{\int_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }w\left(x,y\right) dxdy}=\frac{\iint_{B\left(0;1\right)}\left({x}^2+{y}^2\right) dxdy}{\iint_{B\left(0;1\right)} dxdy} $$
(87)

Setting x = r cos(θ) and y = r sin(θ), where r ∈ [0, 1] and θ ∈ [0, 2π], we get

$$ {a}_1=\frac{\int_0^1{\int}_0^{2\pi }{r}^3 drd\theta}{\int_0^1{\int}_0^{2\pi } rdrd\theta}=\frac{\frac{\pi }{2}}{\pi }=\frac{1}{2} $$
(88)

Eq. (26) gives \( {P}_1\left(x,y\right)={x}^2+{y}^2-\frac{1}{2}. \)

  • Step 2: Eq. (25) gives

$$ {\displaystyle \begin{array}{l}{a}_2=\frac{\left\langle \left({x}^2+{y}^2\right){P}_1,{P}_1\right\rangle }{\left\langle {P}_1,{P}_1\right\rangle }=\frac{\int_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty}\left({x}^2+{y}^2\right)\left({x}^2+{y}^2-\frac{1}{2}\right)\left({x}^2+{y}^2-\frac{1}{2}\right)w\left(x,y\right) dxdy}{\int_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }{\left({x}^2+{y}^2-\frac{1}{2}\right)}^2w\left(x,y\right) dxdy}\\ {}=\frac{\int_0^1{\int}_0^{2\pi }{r}^2{\left({r}^2-\frac{1}{2}\right)}^2 rdr d\theta}{\int_0^1{\int}_0^{2\pi }{\left({r}^2-\frac{1}{2}\right)}^2 rdr d\theta}=\frac{2\pi {\int}_0^1{r}^3{\left({r}^2-\frac{1}{2}\right)}^2 dr}{2\pi {\int}_0^1{\left({r}^2-\frac{1}{2}\right)}^2 rdr}=\frac{\frac{1}{48}}{\frac{1}{24}}=\frac{1}{2}\end{array}} $$
(89)

Equation (27) gives

$$ {\displaystyle \begin{array}{l}{b}_2=\frac{\left\langle \left({x}^2+{y}^2\right){P}_1,{P}_0\right\rangle }{\left\langle {P}_0,{P}_0\right\rangle }=\frac{\int_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty}\left({x}^2+{y}^2\right)\left({x}^2+{y}^2-\frac{1}{2}\right)w\left(x,y\right) dxdy}{\int_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }w\left(x,y\right) dxdy}\\ {}=\frac{\int_0^1{\int}_0^{2\pi }{r}^3\left({r}^2-\frac{1}{2}\right) dr d\theta}{\int_0^1{\int}_0^{2\pi } rdrd\theta}=\frac{2\pi {\int}_0^1{r}^3\left({r}^2-\frac{1}{2}\right) dr}{2\pi {\int}_0^1 rdr}=\frac{1}{12}\end{array}} $$
(90)

Equation (28) gives

$$ {\displaystyle \begin{array}{l}{\mathrm{P}}_2\left(\mathrm{x},\mathrm{y}\right)=\left({\mathrm{x}}^2+{\mathrm{y}}^2-\frac{1}{2}\right){\mathrm{P}}_1\left(\mathrm{x},\mathrm{y}\right)-\frac{1}{12}{\mathrm{P}}_0\left(\mathrm{x},\mathrm{y}\right)=\left({\mathrm{x}}^2+{\mathrm{y}}^2-\frac{1}{2}\right)\left({\mathrm{x}}^2+{\mathrm{y}}^2-\frac{1}{2}\right)-\frac{1}{12}\\ {}={\left({x}^2+{y}^2\right)}^{\mathbf{2}}-\left({x}^2+{y}^2\right)+\frac{1}{6}={x}^4+{y}^4+2{x}^2{y}^2-{x}^2-{y}^2+\frac{1}{6}\end{array}} $$
(91)
  • Step 3: Eq. (25) gives

$$ {\displaystyle \begin{array}{l}{a}_3=\frac{\left\langle \left({x}^2+{y}^2\right){P}_2,{P}_2\right\rangle }{\left\langle {P}_2,{P}_2\right\rangle}\\ {}=\frac{\int_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty}\left({x}^2+{y}^2\right){\left({\left({x}^2+{y}^2\right)}^{\mathbf{2}}-\left({x}^2+{y}^2\right)+\frac{1}{6}\right)}^2w\left(x,y\right) dxdy}{\int_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }{\left({\left({x}^2+{y}^2\right)}^{\mathbf{2}}-\left({x}^2+{y}^2\right)+\frac{1}{6}\right)}^2w\left(x,y\right) dxdy}\\ {}=\frac{\int_0^1{\int}_0^{2\pi }{r}^2{\left({r}^4-{r}^2+\frac{1}{6}\right)}^2 rdrd\theta}{\int_0^1{\int}_0^{2\pi }{\left({r}^4-{r}^2+\frac{1}{6}\right)}^2 rdrd\theta}=\frac{1}{2}\end{array}} $$
(92)

Equation (27) gives

$$ {\displaystyle \begin{array}{l}{b}_3=\frac{\left\langle \left({x}^2+{y}^2\right){P}_2,{P}_1\right\rangle }{\left\langle {P}_1,{P}_1\right\rangle}\\ {}=\frac{\int_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty}\left({x}^2+{y}^2\right)\left({\left({x}^2+{y}^2\right)}^{\mathbf{2}}-\left({x}^2+{y}^2\right)+\frac{1}{6}\right)\left({x}^2+{y}^2-\frac{1}{2}\right)w\left(x,y\right) dxdy}{\int_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }{\left({x}^2+{y}^2-\frac{1}{2}\right)}^2w\left(x,y\right) dxdy}\\ {}=\frac{\int_0^1{\int}_0^{2\pi }{r}^3\left({r}^4-{r}^2+\frac{1}{6}\right)\left({r}^2-\frac{1}{2}\right) drd\theta}{\int_0^1{\int}_0^{2\pi }{\left({r}^2-\frac{1}{2}\right)}^2 rdrd\theta}=\frac{1}{15}\end{array}} $$
(93)

Equation (28) gives

$$ {P}_3\left(x,y\right)=\left({x}^2+{y}^2-\frac{1}{2}\right)\left[{\left({x}^2+{y}^2\right)}^2-\left({x}^2+{y}^2\right)+\frac{1}{6}\right]-\frac{1}{15}\left({x}^2+{y}^2-\frac{1}{2}\right) $$
(94)

And so on. The first four polynomials P0, P1, P2 and P3 associated with the weight function w(x, y) are thus obtained.
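This construction can also be reproduced symbolically. Since each Pn depends on x and y only through u = x² + y² (see Lemma 1, cited in Appendix 5), the disk integral reduces to π∫₀¹ F(u)G(u) du after integrating out the angle, and the recurrence (24)–(28) can be run directly on the radial polynomials Qn(u). The sympy sketch below is ours (the names ip and radial_polynomials are hypothetical, not from the paper); its output matches P1 = x² + y² − 1/2 and Eq. (91).

```python
import sympy as sp

u = sp.symbols('u')          # u stands for x**2 + y**2

def ip(F, G):
    # <P, Q> over the unit disk for radial polynomials P = F(x^2+y^2),
    # Q = G(x^2+y^2): it reduces to pi * integral_0^1 F(u)*G(u) du.
    return sp.pi * sp.integrate(sp.expand(F * G), (u, 0, 1))

def radial_polynomials(n_max):
    """Q_n(u) such that P_n(x, y) = Q_n(x**2 + y**2), built with the
    three-term recurrence (24)-(28)."""
    Q = [sp.Integer(1)]                                      # P_0 = 1   (Eq. (24))
    a1 = ip(u * Q[0], Q[0]) / ip(Q[0], Q[0])                 # Eq. (25): a_1 = 1/2
    Q.append(sp.expand(u - a1))                              # P_1       (Eq. (26))
    for n in range(2, n_max + 1):
        a_n = ip(u * Q[n-1], Q[n-1]) / ip(Q[n-1], Q[n-1])    # Eq. (25)
        b_n = ip(u * Q[n-1], Q[n-2]) / ip(Q[n-2], Q[n-2])    # Eq. (27)
        Q.append(sp.expand((u - a_n) * Q[n-1] - b_n * Q[n-2]))  # Eq. (28)
    return Q

Q = radial_polynomials(3)
print(Q[1])   # u - 1/2        ->  P_1 = x^2 + y^2 - 1/2
print(Q[2])   # u**2 - u + 1/6 ->  matches Eq. (91)
```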

Appendix 3: Proof of Theorem 2

Since P0 and P1 are two-variable polynomials of exact degree 0 and 2 respectively, an induction argument shows that Pn is a two-variable polynomial of exact degree 2n. Consequently, 〈Pn − 1, Pn − 1〉 and 〈Pn − 2, Pn − 2〉 are non-zero for all n ≥ 2. We now show by strong induction on n that, for all n ≥ 1,

$$ \left\langle {P}_n,{P}_i\right\rangle =0\ \mathrm{for}\ \mathrm{all}\kern0.5em i\in \left\{0,\dots, n-1\right\} $$

We prove that the property is true for n = 1. From equations (24) and (26), we have

$$ \left\langle {P}_1,{P}_0\right\rangle =\left\langle \left({x}^2+{y}^2-{a}_1\right),1\right\rangle =\left\langle \left({x}^2+{y}^2\right),1\right\rangle -{a}_1\left\langle 1,1\right\rangle $$
(95)

Eqs. (25), (24) and (26) give

$$ {a}_1=\frac{\left\langle \left({x}^2+{y}^2\right){P}_0,{P}_0\right\rangle }{\left\langle {P}_0,{P}_0\right\rangle }=\frac{\left\langle \left({x}^2+{y}^2\right)1,1\right\rangle }{\left\langle 1,1\right\rangle }, $$

hence \( \left\langle {P}_1,{P}_0\right\rangle =0 \).

Assume that the property holds for all k = 1, 2, …, n − 1, i.e.

$$ {\displaystyle \begin{array}{c}\left\langle {P}_k,{P}_i\right\rangle =0\ \mathrm{for}\ \mathrm{all}\ k\in \left\{0,\dots, n-1\right\}\ \mathrm{and}\ i\in \left\{0,\dots, k-1\right\}\\ {}\left( induction\ hypothesis\right)\end{array}} $$
(96)

We prove that the property is true for n, i.e.,

$$ \left\langle {P}_n,{P}_i\right\rangle =0\ \mathrm{for}\ \mathrm{all}\ i\in \left\{0,\dots, n-1\right\} $$
(97)

From Eq. (28), we have

$$ {\displaystyle \begin{array}{l}\left\langle {P}_n,{P}_{n-1}\right\rangle =\left\langle \left({x}^2+{y}^2-{a}_n\right){P}_{n-1}-{b}_n{P}_{n-2},{P}_{n-1}\right\rangle \\ {}=\left\langle \left({x}^2+{y}^2\right){P}_{n-1},{P}_{n-1}\right\rangle -{a}_n\left\langle {P}_{n-1},{P}_{n-1}\right\rangle -{b}_n\left\langle {P}_{n-2},{P}_{n-1}\right\rangle \end{array}} $$
(98)

Eq. (25) gives \( \left\langle \left({x}^2+{y}^2\right){P}_{n-1},{P}_{n-1}\right\rangle -{a}_n\left\langle {P}_{n-1},{P}_{n-1}\right\rangle =0 \) and the induction hypothesis (96) gives \( \left\langle {P}_{n-2},{P}_{n-1}\right\rangle =0 \). Therefore \( \left\langle {P}_n,{P}_{n-1}\right\rangle =0 \).

We have also,

$$ {\displaystyle \begin{array}{c}\left\langle {P}_n,{P}_{n-2}\right\rangle =\left\langle \left({x}^2+{y}^2-{a}_n\right){P}_{n-1}-{b}_n{P}_{n-2},{P}_{n-2}\right\rangle \\ {}=\left\langle \left({x}^2+{y}^2\right){P}_{n-1},{P}_{n-2}\right\rangle -{a}_n\left\langle {P}_{n-1},{P}_{n-2}\right\rangle -{b}_n\left\langle {P}_{n-2},{P}_{n-2}\right\rangle \end{array}} $$
(99)

Eq. (27) gives \( \left\langle \left({x}^2+{y}^2\right){P}_{n-1},{P}_{n-2}\right\rangle -{b}_n\left\langle {P}_{n-2},{P}_{n-2}\right\rangle =0 \) and the induction hypothesis gives \( \left\langle {P}_{n-1},{P}_{n-2}\right\rangle =0 \). Therefore \( \left\langle {P}_n,{P}_{n-2}\right\rangle =0 \).

For \( i\in \left\{0,1,\dots, n-3\right\} \), we have

$$ {\displaystyle \begin{array}{l}\left\langle {P}_n,{P}_i\right\rangle =\left\langle \left({x}^2+{y}^2-{a}_n\right){P}_{n-1}-{b}_n{P}_{n-2},{P}_i\right\rangle \\ {}=\left\langle \left({x}^2+{y}^2\right){P}_{n-1},{P}_i\right\rangle -{a}_n\left\langle {P}_{n-1},{P}_i\right\rangle -{b}_n\left\langle {P}_{n-2},{P}_i\right\rangle \end{array}} $$
(100)

The induction hypothesis gives 〈Pn − 1, Pi〉 = 0 and 〈Pn − 2, Pi〉 = 0. Then

$$ \left\langle {P}_n,{P}_i\right\rangle =\left\langle \left({x}^2+{y}^2\right){P}_{n-1},{P}_i\right\rangle =\left\langle {P}_{n-1},\left({x}^2+{y}^2\right){P}_i\right\rangle $$
(101)

Eq. (28) gives \( \left({x}^2+{y}^2\right){P}_i={P}_{i+1}+{a}_{i+1}{P}_i+{b}_{i+1}{P}_{i-1} \). This relation and Eq. (101) give

$$ \left\langle {P}_n,{P}_i\right\rangle =\left\langle {P}_{n-1},{P}_{i+1}+{a}_{i+1}{P}_i+{b}_{i+1}{P}_{i-1}\right\rangle $$
$$ =\left\langle {P}_{n-1},{P}_{i+1}\right\rangle +{a}_{i+1}\left\langle {P}_{n-1},{P}_i\right\rangle +{b}_{i+1}\left\langle {P}_{n-1},{P}_{i-1}\right\rangle $$
(102)

and the induction hypothesis gives

$$ \left\langle {P}_{n-1},{P}_{i+1}\right\rangle =0,\kern0.5em \left\langle {P}_{n-1},{P}_i\right\rangle =0\kern0.5em \mathrm{and}\kern0.5em \left\langle {P}_{n-1},{P}_{i-1}\right\rangle =0. $$

Therefore \( \left\langle {P}_n,{P}_i\right\rangle =0 \).
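The orthogonality established by Theorem 2 can also be checked symbolically for the first few polynomials. The sketch below is ours; it uses the same radial reduction as the sketch in Appendix 2 and repeats the construction of the Qn so that the block runs on its own. It verifies that ⟨Pn, Pi⟩ = 0 for all i < n up to n = 5.

```python
import sympy as sp

u = sp.symbols('u')                      # u stands for x**2 + y**2
ip = lambda F, G: sp.pi * sp.integrate(sp.expand(F * G), (u, 0, 1))

# Rebuild Q_0..Q_5 (P_n(x, y) = Q_n(x^2 + y^2)) from the recurrence (24)-(28).
Q = [sp.Integer(1)]
Q.append(u - ip(u * Q[0], Q[0]) / ip(Q[0], Q[0]))        # Q_1 = u - 1/2
for n in range(2, 6):
    a_n = ip(u * Q[n-1], Q[n-1]) / ip(Q[n-1], Q[n-1])
    b_n = ip(u * Q[n-1], Q[n-2]) / ip(Q[n-2], Q[n-2])
    Q.append(sp.expand((u - a_n) * Q[n-1] - b_n * Q[n-2]))

# Theorem 2: <P_n, P_i> = 0 whenever i < n.
for n in range(1, 6):
    for i in range(n):
        assert sp.simplify(ip(Q[n], Q[i])) == 0, (n, i)
print("orthogonality verified for n up to 5")
```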

Appendix 4: Proof of Proposition 1

We prove this result by induction on n. For the base case n = 2, we have

$$ {\displaystyle \begin{array}{l}\left\langle {P}_2,{P}_2\right\rangle =\left\langle \left({x}^2+{y}^2-{a}_2\right){P}_1-{b}_2{P}_0,{P}_2\right\rangle \\ {}=\left\langle \left({x}^2+{y}^2\right){P}_1,{P}_2\right\rangle -{a}_2\left\langle {P}_1,{P}_2\right\rangle -{b}_2\left\langle {P}_0,{P}_2\right\rangle =\left\langle \left({x}^2+{y}^2\right){P}_1,{P}_2\right\rangle \\ {}=\left\langle {P}_1,\left({x}^2+{y}^2\right){P}_2\right\rangle =\left\langle {P}_1,{P}_3+{a}_3{P}_2+{b}_3{P}_1\right\rangle ={b}_3\left\langle {P}_1,{P}_1\right\rangle \end{array}} $$
(103)

Assume the result holds for n – 1. We show that the result holds for n.

$$ {\displaystyle \begin{array}{l}\left\langle {P}_n,{P}_n\right\rangle =\left\langle \left({x}^2+{y}^2-{a}_n\right){P}_{n-1}-{b}_n{P}_{n-2},{P}_n\right\rangle \\ {}=\left\langle \left({x}^2+{y}^2\right){P}_{n-1},{P}_n\right\rangle -{a}_n\left\langle {P}_{n-1},{P}_n\right\rangle -{b}_n\left\langle {P}_{n-2},{P}_n\right\rangle \\ {}\begin{array}{l}=\left\langle \left({x}^2+{y}^2\right){P}_{n-1},{P}_n\right\rangle \\ {}=\left\langle {P}_{n-1},\left({x}^2+{y}^2\right){P}_n\right\rangle =\left\langle {P}_{n-1},{P}_{n+1}+{a}_{n+1}{P}_n+{b}_{n+1}{P}_{n-1}\right\rangle \\ {}={b}_{n+1}\left\langle {P}_{n-1},{P}_{n-1}\right\rangle \end{array}\end{array}} $$
(104)

Using the induction hypothesis, the relation (104) can be rewritten as

$$ \left\langle {P}_n,{P}_n\right\rangle =\left\langle {P}_1,{P}_1\right\rangle {\prod}_{i=2}^{n-1}{b}_{i+1}\times {b}_{n+1}=\left\langle {P}_1,{P}_1\right\rangle {\prod}_{i=2}^n{b}_{i+1} $$
(105)
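The norm identity (105) can be verified in the same way. The following sketch is ours; it repeats the construction of the Qn so that the block is self-contained, and stores the bn so that b_{n+1} is available when checking ⟨Pn, Pn⟩. It confirms Eq. (105) symbolically for n = 2, …, 5.

```python
import sympy as sp

u = sp.symbols('u')                      # u stands for x**2 + y**2
ip = lambda F, G: sp.pi * sp.integrate(sp.expand(F * G), (u, 0, 1))

# Build Q_0..Q_6 and keep the coefficients b_n of the recurrence (27)-(28).
Q, b = [sp.Integer(1)], {}
Q.append(u - ip(u * Q[0], Q[0]) / ip(Q[0], Q[0]))        # Q_1 = u - 1/2
for n in range(2, 7):
    a_n = ip(u * Q[n-1], Q[n-1]) / ip(Q[n-1], Q[n-1])
    b[n] = ip(u * Q[n-1], Q[n-2]) / ip(Q[n-2], Q[n-2])
    Q.append(sp.expand((u - a_n) * Q[n-1] - b[n] * Q[n-2]))

# Eq. (105): <P_n, P_n> = <P_1, P_1> * prod_{i=2}^{n} b_{i+1}
for n in range(2, 6):
    rhs = ip(Q[1], Q[1])
    for i in range(2, n + 1):
        rhs *= b[i + 1]
    assert sp.simplify(ip(Q[n], Q[n]) - rhs) == 0, n
print("norm identity (105) verified for n = 2..5")
```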

Appendix 5: Proof of Theorem 3

According to Eq. (38) and Remark 1, we have

$$ {\displaystyle \begin{array}{c}{OIM}_n(f)={\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }{K}_n\left(x-\overline{x},y-\overline{y}\right)f\left(x,y\right) d\mu \\ {}={\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }{\rho}_n{P}_n\left(x-\overline{x},y-\overline{y}\right)f\left(x,y\right) d\mu \end{array}} $$
(106)

From Lemma 1, there exists a set of scalars {αi, i = 0, …, n}, with αn = 1, such that

\( {Q}_n(X)=\sum \limits_{i=0}^n{\alpha}_i{X}^i \) and Pn(x, y) = Qn(x2 + y2). Therefore, Eq. (106) can be rewritten as

$$ {\displaystyle \begin{array}{l}{OIM}_n(f)={\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }{\rho}_n\ {Q}_n\left({\left(x-\overline{x}\right)}^2+{\left(y-\overline{y}\right)}^2\right)f\left(x,y\right) d\mu \\ {}={\rho}_n\sum \limits_{i=0}^n{\alpha}_i{\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty }\ {\left[{\left(x-\overline{x}\right)}^2+{\left(y-\overline{y}\right)}^2\right]}^if\left(x,y\right) d\mu \\ {}\begin{array}{l}={\rho}_n\sum \limits_{i=0}^n{\alpha}_i\sum \limits_{k=0}^i{\int}_{-\infty}^{+\infty }{\int}_{-\infty}^{+\infty}\left(\begin{array}{c}i\\ {}k\end{array}\right)\ {\left(x-\overline{x}\right)}^{2k}{\left(y-\overline{y}\right)}^{2i-2k}f\left(x,y\right) d\mu\ \\ {}=\sum \limits_{i=0}^n\sum \limits_{k=0}^i{\rho}_n{\alpha}_i\left(\begin{array}{c}i\\ {}k\end{array}\right){M}_{2k,2i-2k}(f)\\ {}=\sum \limits_{i=0}^n{M}_{00}^{i+1}{\rho}_n{\alpha}_i\sum \limits_{k=0}^i\left(\begin{array}{c}i\\ {}k\end{array}\right){\mu}_{2k,2i-2k}(f)=\sum \limits_{i=0}^n{\beta}_i{\varnothing}_i(f)\end{array}\end{array}} $$
(107)

where \( {\beta}_i={M}_{00}^{i+1}{\rho}_n{\alpha}_i. \)

According to Eq. (107), the orthogonal moment OIMn is a linear combination of the moments ∅i, i = 0, …, n, which are invariant under translation, scaling and rotation (Theorem 1). Hence OIMn is also invariant under these three geometric transformations for all n ≥ 0.
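As a final illustration, Eq. (107) gives a direct way to evaluate OIMn from the ∅i without ever sampling the kernel Kn. The sketch below is a rough outline under stated assumptions: it reuses the hypothetical helpers phi_invariants (Appendix 1 sketch) and radial_polynomials (Appendix 2 sketch), and it exposes the normalization constant ρn of Eq. (38), which is not reproduced in this excerpt, as a plain parameter (set to 1 by default for illustration).

```python
import numpy as np
import sympy as sp

def oim(img, n, rho_n=1.0):
    """OIM_n computed through Eq. (107):
    OIM_n = sum_i beta_i * phi_i,  with beta_i = M_00**(i+1) * rho_n * alpha_i,
    where alpha_i are the coefficients of Q_n (P_n(x, y) = Q_n(x**2 + y**2))."""
    u = sp.symbols('u')                               # same symbol used by radial_polynomials
    Qn = radial_polynomials(n)[n]                     # Q_n(u); monic, so alpha_n = 1
    alpha = [float(c) for c in sp.Poly(Qn, u).all_coeffs()[::-1]]   # alpha_0 .. alpha_n
    phi = phi_invariants(img, n)                      # phi_0 .. phi_n from Eq. (12)
    m00 = float(np.asarray(img, dtype=float).sum())   # geometric moment M_00
    return sum(m00**(i + 1) * rho_n * alpha[i] * phi[i] for i in range(n + 1))
```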

About this article

Cite this article

Hjouji, A., EL-Mekkaoui, J. & Jourhmane, M. Rotation scaling and translation invariants by a remediation of Hu’s invariant moments. Multimed Tools Appl 79, 14225–14263 (2020). https://doi.org/10.1007/s11042-020-08648-5
