
Image Similarity Assessment Based on Coefficients of Spatial Association

Journal of Mathematical Imaging and Vision

Abstract

This paper focuses on the construction of image similarity indices that account for the hidden spatial association between two images. The proposal is a variant of the structural similarity (SSIM) index that introduces a codispersion coefficient to capture the hidden spatial association between two images in a particular direction of the plane. The novel contribution of this article is the inclusion of the codispersion coefficient in place of the sample correlation coefficient. The difference between the codispersion and correlation coefficients is illustrated through two examples. We then show that this modified measure is a valid pseudometric and has several useful properties, including quasi-convexity, which is established under precise conditions. Quasi-convexity is attractive because it allows this type of measure to be used as a cost function in optimization problems. In addition, we introduce another variant of the SSIM with a contrast function that depends on the spatial lag. This proposal trivially recovers the optimization properties of the SSIM. Various computational experiments with real datasets support our proposals and findings, and we characterize the practical advantages and drawbacks of the SSIM variants.
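To make the central quantity concrete, the following Python sketch computes a sample codispersion coefficient between two images at a spatial lag \(h=(h_1,h_2)\). It assumes the standard increment-based definition of the codispersion coefficient; the exact windowing and the way the coefficient enters the proposed SSIM variant are described in the paper itself, not in this sketch.

```python
import numpy as np

def codispersion(X, Y, h):
    """Sample codispersion coefficient between images X and Y at spatial lag h = (h1, h2).

    Assumption: the standard increment-based definition built from X(s + h) - X(s);
    sign and direction conventions may differ from the paper's notation.
    """
    h1, h2 = h
    dX = X[h1:, h2:] - X[:X.shape[0] - h1, :X.shape[1] - h2]
    dY = Y[h1:, h2:] - Y[:Y.shape[0] - h1, :Y.shape[1] - h2]
    num = np.sum(dX * dY)
    den = np.sqrt(np.sum(dX**2) * np.sum(dY**2))
    return num / den

# Example: compare a noisy copy of an image with the original at two lags.
rng = np.random.default_rng(0)
X = rng.random((64, 64))
Y = X + 0.1 * rng.standard_normal((64, 64))
print(codispersion(X, Y, (1, 0)), codispersion(X, Y, (0, 1)))
```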



Acknowledgments

Ronny Vallejos was partially supported by Fondecyt Grant No. 1120048, Chile, and by AC3E Grant No. FB-0008. Part of this article was written while he was visiting the UFMG in Belo Horizonte, Brazil. Ronny Vallejos is also grateful to Assuncao and Rosangela Loschi for their hospitality and kindness. The authors thank Felipe Osorio and Ángelo Gárate for helpful discussions. The authors also acknowledge the suggestions from two anonymous referees, an associate editor, and the editor of JMIV that helped improve the manuscript.

Author information

Corresponding author

Correspondence to Ronny Vallejos.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 5746 KB)

Appendix: Proofs


1.1 Derivation of (6)

Assuming that the parameters of the models lie in the stationary region, the following representations hold:

$$\begin{aligned} X(i,j)&=\sum _{k=0}^{\infty }\sum _{l=0}^{\infty }\sum _{m=0}^{\infty }\frac{(k+l+m)!}{k!l!m!}\\&\phi _1^k\phi _2^l\phi _3^m\varepsilon _1(i-k-m,j-l-m),\\ Y(i,j)&=\sum _{p=0}^{\infty }\sum _{q=0}^{\infty }\sum _{r=0}^{\infty }\frac{(p+q+r)!}{p!q!r!}\\&\psi _1^p\psi _2^q\psi _3^r\varepsilon _2(i-p-r,j-q-r). \end{aligned}$$

Then, we compute the term \({\mathbb {E}}[X(i,j)Y(i,j)]\), which, up to the factor \(\rho \sigma \tau \), equals

$$\begin{aligned}&\sum _{k,l,m,r}\frac{(k+l+m)!}{k!l!m!}\frac{(k+l+2m+r)!}{r!(k+m-r)!(l+m-r)!}\\&\quad \qquad \times \,(\phi _1\psi _1)^k(\phi _2\psi _2)^l(\phi _3\psi _1\psi _2)^m\left( \frac{\psi _3}{\psi _1\psi _2}\right) ^r. \end{aligned}$$

We recall that this expression is valid when \(\phi _i\ne 0\), \(\psi _i\ne 0\), \(i=1,2,3\), and when the summation is over all non-negative \(k, l, m, r\) such that \((k+m-r)\ge 0\) and \((l+m-r)\ge 0\). Now, we define

$$\begin{aligned}&C(\phi _1,\phi _2,\phi _3,\psi _1,\psi _2,\psi _3,h_1,h_2)\nonumber \\&\quad =\sum _{k,l,m,r}\frac{(k+l+m)!}{k!l!m!} \frac{(k+l+2m+r+h_1+h_2)!}{r!(k+m-r+h_1)!(l+m-r+h_2)!}\nonumber \\&\qquad \times (\phi _1\psi _1)^k(\phi _2\psi _2)^l(\phi _3\psi _1\psi _2)^m\left( \frac{\psi _3}{\psi _1\psi _2}\right) ^r. \end{aligned}$$
(50)

Using (50), we obtain

$$\begin{aligned}&{\mathbb {E}}[X(i,j)Y(i,j)]=\rho \sigma \tau C(\phi _1,\phi _2,\phi _3,\psi _1,\psi _2,\psi _3,0,0),\\&{\mathbb {E}}[X(i+h_1,j+h_2)Y(i+h_1,j+h_2)]\\&\quad =\rho \sigma \tau C(\phi _1,\phi _2,\phi _3,\psi _1,\psi _2,\psi _3,0,0),\\&{\mathbb {E}}[X(i,j)Y(i+h_1,j+h_2)]\\&\quad =\rho \sigma \tau \psi _1^{h_1}\psi _2^{h_2}C(\phi _1,\phi _2,\phi _3,\psi _1,\psi _2,\psi _3,h_1,h_2),\\&{\mathbb {E}}[X(i+h_1,j+h_2)Y(i,j)]\\&\quad =\rho \sigma \tau \phi _1^{h_1}\phi _2^{h_2}C(\phi _1,\phi _2,\phi _3,\psi _1,\psi _2,\psi _3,h_1,h_2),\\&\displaystyle {\mathbb {E}}[(X(i,j)-X(i+h_1,j+h_2)) (Y(i,j) -Y(i+h_1,j+h_2))] \\&\quad =\rho \sigma \tau (2C(\phi _1,\phi _2,\phi _3,\psi _1,\psi _2,\psi _3,0,0)\\&\qquad - [\psi _1^{h_1}\psi _2^{h_2}+\phi _1^{h_1}\phi _2^{h_2}]C(\phi _1,\phi _2,\phi _3,\psi _1,\psi _2,\psi _3,h_1,h_2)),\\&\mathbb {V}[X(i,j)]=\sigma ^2C(\phi _1,\phi _2,\phi _3,\phi _1,\phi _2,\phi _3,0,0),\\&\mathbb {V}[Y(i,j)]=\tau ^2C(\psi _1,\psi _2,\psi _3,\psi _1,\psi _2,\psi _3,0,0),\\&{{\mathrm{cov}}}(X(i+h_1,j+h_2),X(i,j))\\&\quad =\sigma ^2\phi _1^{h_1}\phi _2^{h_2}C(\phi _1,\phi _2,\phi _3,\phi _1,\phi _2,\phi _3,h_1,h_2),\\&{{\mathrm{cov}}}(Y(i,j),Y(i+h_1,j+h_2))\\&\quad =\tau ^2\psi _1^{h_1}\psi _2^{h_2}C(\psi _1,\psi _2,\psi _3,\psi _1,\psi _2,\psi _3,h_1,h_2),\\&{\mathbb {E}}\left[ (X(i+h_1,j+h_2)-X(i,j))^2\right] \\&\quad =2\sigma ^2[C(\phi _1,\phi _2,\phi _3,\phi _1,\phi _2,\phi _3,0,0)-\phi _1^{h_1}\phi _2^{h_2}\\&\qquad \times C(\phi _1,\phi _2,\phi _3,\phi _1,\phi _2,\phi _3,h_1,h_2)],\\&{\mathbb {E}}\left[ (Y(i+h_1,j+h_2)-Y(i,j))^2\right] \\&\quad =2\tau ^2[C(\psi _1,\psi _2,\psi _3,\psi _1,\psi _2,\psi _3,0,0)-\psi _1^{h_1}\psi _2^{h_2}\\&\qquad \times C(\psi _1,\psi _2,\psi _3,\psi _1,\psi _2,\psi _3,h_1,h_2)]. \end{aligned}$$

Replacing the above quantities in (2), one obtains Equation (6), where \(K=\frac{K_1}{2R_1R_2}\) and

$$\begin{aligned} K_1= & {} C(\phi _1,\phi _2,\phi _3,\psi _1,\psi _2,\psi _3,0,0)\\&-\,[\psi _1^{h_1}\psi _2^{h_2}+\phi _1^{h_1}\phi _2^{h_2}]C(\phi _1,\phi _2,\phi _3,\psi _1,\psi _2,\psi _3,h_1,h_2),\\ R_1= & {} \sqrt{C(\phi _1,\phi _2,\phi _3,\phi _1,\phi _2,\phi _3,0,0)-\phi _1^{h_1}\phi _2^{h_2}C(\phi _1,\phi _2,\phi _3,\phi _1,\phi _2,\phi _3,h_1,h_2)},\\ R_2= & {} \sqrt{C(\psi _1,\psi _2,\psi _3,\psi _1,\psi _2,\psi _3,0,0)-\psi _1^{h_1}\psi _2^{h_2}C(\psi _1,\psi _2,\psi _3,\psi _1,\psi _2,\psi _3,h_1,h_2)}. \end{aligned}$$
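The series (50) converges when the autoregressive parameters lie well inside the stationarity region, and it can be evaluated numerically by truncating the four sums. The Python sketch below is a direct transcription of (50) under such a truncation; the cutoff n_max is an arbitrary choice made for illustration, and \(\psi _1,\psi _2\) are assumed to be nonzero, as required in the text.

```python
from math import factorial

def C(phi1, phi2, phi3, psi1, psi2, psi3, h1, h2, n_max=12):
    """Evaluate the series C(...) of Eq. (50) by truncating each index at n_max.

    Assumptions: psi1 and psi2 are nonzero, the parameters lie well inside the
    stationarity region, and n_max is an ad hoc truncation level.
    """
    total = 0.0
    for k in range(n_max):
        for l in range(n_max):
            for m in range(n_max):
                for r in range(n_max):
                    a = k + m - r + h1
                    b = l + m - r + h2
                    if a < 0 or b < 0:
                        continue  # skip terms outside the admissible index set
                    coef = (factorial(k + l + m) / (factorial(k) * factorial(l) * factorial(m))
                            * factorial(k + l + 2 * m + r + h1 + h2)
                            / (factorial(r) * factorial(a) * factorial(b)))
                    total += (coef * (phi1 * psi1) ** k * (phi2 * psi2) ** l
                              * (phi3 * psi1 * psi2) ** m * (psi3 / (psi1 * psi2)) ** r)
    return total

# Example call with small (hypothetical) parameter values and lag (1, 1).
print(C(0.1, 0.1, 0.05, 0.2, 0.1, 0.05, 1, 1))
```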

Proof of Proposition 1

For the proof of Proposition 1, we need to establish the following lemma.

Lemma 2

A spatial lag \({\varvec{h}}=(h_1,h_2)\) that belongs to a rectangular grid of \(\mathbb {Z}^{2}_{+}\) induces a lag \(h^{*}>0\) in the space of images \(\mathbb {R}_+^N.\)

Proof

Let \(n, m \in \mathbb {Z}_{+}\) be the number of rows and columns, respectively, of a rectangular grid of \(\mathbb {Z}^{2}_{+}\). Then, the spatial lag \({\varvec{h}}=(h_1,h_2)\in \mathbb {Z}^2_{+}\) corresponds to a translation of \(h_1\) units to the right and \(h_2\) units down (depending on the convention used). Thus, for \(\mathrm {{\varvec{X}}}\), fixing the point \({\varvec{s}}=(i,j)\in S_{{\varvec{h}}},\) we have that if \((i,j)\mapsto k\), then \((i+h_1,j+h_2)\mapsto k+h^*\), where \(h^*=h_1+nh_2\). \(\square \)

Note that Lemma 2 is valid only when the spatial lag \({{\varvec{h}}}\) leads to a valid position on the image, i.e., when \((i+h_1,j+h_2)\) is a point belonging to the rectangular grid.
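The following sketch illustrates Lemma 2 numerically under one concrete convention (an assumption made here for illustration): the first coordinate runs within a column and the image is vectorized column by column, which reproduces the induced lag \(h^{*}=h_1+nh_2\).

```python
import numpy as np

# A small check of Lemma 2 under one concrete vectorization convention (an assumption:
# (i, j) maps to k = i + n * j with 0-based indices and column-major vectorization).
# Under this convention a spatial lag h = (h1, h2) induces the scalar lag h1 + n * h2.
n, m = 5, 7                      # rows, columns of the rectangular grid
h1, h2 = 2, 3                    # spatial lag
h_star = h1 + n * h2             # induced lag in the vectorized image

X = np.arange(n * m).reshape(n, m, order="F")   # column-major layout
x = X.flatten(order="F")                        # vec(X)

for i in range(n - h1):
    for j in range(m - h2):
        k = i + n * j
        assert x[k + h_star] == X[i + h1, j + h2]
print("induced lag h* =", h_star)
```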

Proof of Proposition 1

The proof proceeds by construction. Let \({{\varvec{h}}}=(h_1,h_2)\in \mathbb {Z}^{2}_{+}\) be a fixed spatial lag; let \(h^{*}\) be the lag induced by \({{\varvec{h}}}\) according to Lemma 2; let \({\varvec{I}}_{k}\) denote the identity matrix of order \(k\times k\) and \({\varvec{0}}_{k,l}\) the null matrix of order \(k\times l\); and let \({\varvec{A}}_{{\varvec{h}}}^{(1)}\) be a matrix of order \((n-h_1)\times N\) such that

$$\begin{aligned} {\varvec{A}}_{{\varvec{h}}}^{(1)} = \left( \begin{matrix} {\varvec{I}}_{n-h_1}&{\varvec{0}}_{n-h_1,2h_1}&-{\varvec{I}}_{n-h_1}&{\varvec{0}}_{n-h_1,N-2n} \end{matrix}\right) . \end{aligned}$$

Similarly, we define the following quantities:

$$\begin{aligned} {\varvec{A}}_{{\varvec{h}}}^{(2)}&= \left( \begin{matrix} {\varvec{0}}_{n-h_1,n}&{\varvec{I}}_{n-h_1}&{\varvec{0}}_{n-h_1,2h_1}&-{\varvec{I}}_{n-h_1}&{\varvec{0}}_{n-h_1,N-3n} \end{matrix}\right) ,\\ {\varvec{A}}_{{\varvec{h}}}^{(3)}&= \left( \begin{matrix} {\varvec{0}}_{n-h_1,2n}&{\varvec{I}}_{n-h_1}&{\varvec{0}}_{n-h_1,2h_1}&-{\varvec{I}}_{n-h_1}&{\varvec{0}}_{n-h_1,N-4n} \end{matrix}\right) ,\\ \vdots&\\ {\varvec{A}}_{{\varvec{h}}}^{(k)}&= \left( \begin{matrix} {\varvec{0}}_{n-h_1,(k-1)n}&{\varvec{I}}_{n-h_1}&{\varvec{0}}_{n-h_1,2h_1}&-{\varvec{I}}_{n-h_1}&{\varvec{0}}_{n-h_1,N-(k+1)n} \end{matrix}\right) , \end{aligned}$$

where \(k=1,\ldots ,m-h_2\).

In the previous calculations, for a given matrix \({{\varvec{X}}}\) of order \(n\times m\) and a spatial lag \({{\varvec{h}}}=(h_1,h_2)\), it is implicit that \(|S_{{\varvec{h}}}|=N_{{{\varvec{h}}}}=(n-h_1)(m-h_2)\). Hence, defining

$$\begin{aligned} {\varvec{A}}_{{\varvec{h}}}= \left( \begin{matrix} {\varvec{A}}_{{\varvec{h}}}^{(1)}\\ {\varvec{A}}_{{\varvec{h}}}^{(2)}\\ \vdots \\ {\varvec{A}}_{{\varvec{h}}}^{(m-h_2)}\\ \end{matrix}\right) , \end{aligned}$$

it is straightforward to obtain

$$\begin{aligned} {\varvec{A}}_{{\varvec{h}}}{\varvec{x}}= (x_k-x_{k+h^*})_{k\in S_{{\varvec{h}}}^{im}}, \end{aligned}$$

where \(S_{{\varvec{h}}}^{im} =\{k: 1\le k\le nm \wedge F^{-1}(k)\in S_{{\varvec{h}}}\}, \) and F is the natural map associated with the indices given by

$$\begin{aligned} \begin{array}{lccl} F:&{}\mathbb {Z}_{+}^2&{}\longrightarrow &{}\mathbb {Z}_+\\ &{}(i,j)&{}\longmapsto &{} k=j+(n-1)i. \end{array} \end{aligned}$$

Clearly, F is a bijective map between \(S_{{\varvec{h}}}\) and \(S_{{\varvec{h}}}^{im}\). Because \(N_{{\varvec{h}}}=(n-h_1)(m-h_2)\), the result follows after simple calculations of the norms and inner products.
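A quick numerical illustration of the defining property of \({\varvec{A}}_{{\varvec{h}}}\) is given below. Rather than assembling the explicit block structure used in the proof, the sketch builds the matrix row by row from the property \({\varvec{A}}_{{\varvec{h}}}{\varvec{x}}=(x_k-x_{k+h^*})\) and checks it against the increments of a random image; column-major vectorization is an assumption of the sketch.

```python
import numpy as np

def difference_matrix(n, m, h1, h2):
    """Build a matrix A_h with A_h @ vec(X) = (X(s) - X(s + h)) over all admissible sites s.

    This is a row-by-row construction from the defining property of A_h, not a
    transcription of the explicit block form used in the proof; column-major
    vectorization of X is assumed.
    """
    N = n * m
    rows = []
    for j in range(m - h2):          # admissible columns
        for i in range(n - h1):      # admissible rows
            row = np.zeros(N)
            row[i + n * j] = 1.0                     # +1 at the vec-index of s = (i, j)
            row[(i + h1) + n * (j + h2)] = -1.0      # -1 at the vec-index of s + h
            rows.append(row)
    return np.vstack(rows)

n, m, (h1, h2) = 6, 8, (1, 2)
X = np.random.default_rng(1).random((n, m))
A = difference_matrix(n, m, h1, h2)
diff = A @ X.flatten(order="F")
expected = (X[:n - h1, :m - h2] - X[h1:, h2:]).flatten(order="F")
assert np.allclose(diff, expected)
print(A.shape)   # ((n - h1) * (m - h2), n * m)
```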

Proof of Proposition 5

Without loss of generality, we assume that \(h_1,h_2\ge 0\); here \(h_1\) corresponds to a column translation of the matrix \({{\varvec{X}}}\), and \(h_2\) corresponds to a row translation of \({{\varvec{X}}}.\) Now, \({\varvec{A}}_{{\varvec{h}}}{\varvec{x}}\) is equal to the vector \({\varvec{x}}_{{\varvec{h}}}= ({\varvec{X}}(j,i)-{\varvec{X}}(j+h_2,i+h_1)),\) for all \((j,i)\ \text {such that} \ (j+h_2,i+h_1)\in [1,2,\ldots ,n]\times [1,2,\ldots ,m].\) Specifically,

$$\begin{aligned} {\varvec{x}}_{{\varvec{h}}}=\left( \begin{matrix} {\varvec{X}}(1,1)-{\varvec{X}}(1+h_2,1+h_1)\\ {\varvec{X}}(1,2)-{\varvec{X}}(1+h_2,2+h_1)\\ \vdots \\ {\varvec{X}}(1,m-h_1)-{\varvec{X}}(1+h_2,m)\\ {\varvec{X}}(2,1)-{\varvec{X}}(2+h_2,1+h_1)\\ {\varvec{X}}(2,2)-{\varvec{X}}(2+h_2,2+h_1)\\ \vdots \\ {\varvec{X}}(2,m-h_1)-{\varvec{X}}(2+h_2,m)\\ \vdots \\ {\varvec{X}}(n-h_2, m-h_1)-{\varvec{X}}(n,m) \end{matrix}\right) . \end{aligned}$$

Clearly, every pixel belonging to the subimage

$$\begin{aligned} I= & {} \{ {{\varvec{X}}}(j,i):\quad (j,i)\in [h_2+1,\ldots ,n-h_2]\\&\times \, [h_1+1,\ldots ,m-h_1]\} \end{aligned}$$

appears once with a positive sign and once with a negative sign in \({\varvec{x}}_{{\varvec{h}}}\); thus, these pixels do not contribute to the sum of the entries of \({\varvec{x}}_{{\varvec{h}}}\). The terms that remain are those located outside of the central subimage. Among them, the terms in \(L_+=\{{\varvec{X}}(j,i):\quad (j,i)\in [1,\ldots ,h_2]\times [1,\ldots ,m-h_1]\cup [h_2+1,\ldots ,n-h_2]\times [1,\ldots ,h_1]\}\) have a positive sign, while those in \(L_{-}=\{{{\varvec{X}}}(j,i):\quad (j,i)\in [n-h_2+1,\ldots ,n]\times [h_1+1,\ldots ,m]\cup [h_2+1,\ldots ,n-h_2]\times [m-h_1+1,\ldots ,m]\}\) have a negative sign. Because the contribution of the central subimage to the sum of the entries of \({\varvec{x}}_{{\varvec{h}}}\) is zero, this sum vanishes exactly when the remaining terms cancel, that is, when

$$\begin{aligned} \sum _{{{\varvec{X}}}(j,i)\in L_{+}}{{\varvec{X}}}(j,i) - \sum _{{{\varvec{X}}}(j,i)\in L_{-}} {{\varvec{X}}}(j,i)= 0. \end{aligned}$$
(51)

Using the definition of \(L_{+}\) and \(L_{-}\), it is straightforward to prove that Equation (51) is equivalent to

$$\begin{aligned}&\sum _{i=1}^{m-h_1}\sum _{j=1}^{h_2}{\varvec{X}}(j,i)+\sum _{i=1}^{h_1}\sum _{j=h_2+1}^{n-h_2} {\varvec{X}}(j,i)\\&\quad = \sum _{i=h_1+1}^{m}\sum _{j=n-h_2+1}^{n} {\varvec{X}}(j,i)+\sum _{i=m-h_1+1}^{m}\sum _{j=h_2+1}^{n-h_2} {\varvec{X}}(j,i). \end{aligned}$$

This completes the proof.
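The bookkeeping behind condition (51) can be checked numerically. The sketch below uses 0-based indices and the row/column conventions of the proof; it verifies that the sum of the entries of \({\varvec{x}}_{{\varvec{h}}}\) reduces to the \(L_+\) total minus the \(L_-\) total.

```python
import numpy as np

# Numerical check of the bookkeeping in the proof of Proposition 5: the entries of
# x_h sum to the L_+ total minus the L_- total, so the sum vanishes exactly when
# condition (51) holds.  Indices are 0-based; (h1, h2) is a column/row lag as in the proof.
n, m, h1, h2 = 9, 9, 1, 2
X = np.random.default_rng(2).random((n, m))

x_h = X[:n - h2, :m - h1] - X[h2:, h1:]          # increments X(j, i) - X(j + h2, i + h1)

L_plus = X[:h2, :m - h1].sum() + X[h2:n - h2, :h1].sum()
L_minus = X[n - h2:, h1:].sum() + X[h2:n - h2, m - h1:].sum()

assert np.isclose(x_h.sum(), L_plus - L_minus)
print(x_h.sum(), L_plus - L_minus)
```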

As an illustration of the terms used in the proof of Proposition 5, Fig. 7 shows an example with an image of size \(9\times 9\) and a spatial lag \({\varvec{h}}=(1,2).\) The sign displayed in each cell shows whether the term associated with that cell can be positive, negative, or both when applying the transformation \({\varvec{A}}_{{\varvec{h}}}\) to \({\varvec{x}}=\mathrm {vec}({\varvec{X}}).\) We observe from Fig. 7 that the subimage I corresponds to the area colored in green, \(L_+\) corresponds to the area colored in light blue, and \(L_-\) corresponds to the red area. The gray cells are those pixels that are not used in the construction of \({\varvec{x}}_{{\varvec{h}}}\); they are located at the upper-right corner and at the bottom-left corner of the matrix shown in Fig. 7.

Fig. 7 Example of the terms used in the proof of Proposition 5 for an image of size \(9 \times 9\) and \({{\varvec{h}}}=(1,2)\)

Proof of Proposition 9

We have to prove that \({\varvec{H}}_g{\varvec{v}}^{\lambda }=\lambda {\varvec{v}}^{\lambda },\) where \(\lambda \) and \({\varvec{v}}^{\lambda }\) are as in (36) and (37).

Let \(v_k^{\lambda }\), \(k=1,\ldots ,N\), be the components of \({\varvec{v}}^{\lambda }\), and let \({\varvec{H}}^{k}_{g}\) be the vector associated with the \(k\)-th row of \({\varvec{H}}_{g}\). We will prove that

$$\begin{aligned} \langle {\varvec{H}}^{k}_{g}, {\varvec{v}}^{\lambda }\rangle = \lambda v^{\lambda }_{k}\qquad k=1,\ldots ,N. \end{aligned}$$
(52)

Let \(k>1\). Using the partial derivatives given by

$$\begin{aligned}&\frac{\partial ^2g}{\partial x_1^2}=\frac{3x_1(||{\varvec{x}}||^2-x_1^2)}{||{\varvec{x}}||^5}, \end{aligned}$$
(53)
$$\begin{aligned}&\frac{\partial ^2g}{\partial x_i^2}=\frac{x_1(||{\varvec{x}}||^2-3x_i^2)}{||{\varvec{x}}||^5},\qquad i\ne 1, \end{aligned}$$
(54)
$$\begin{aligned}&\frac{\partial ^2g}{\partial x_1\partial x_i}=\frac{x_i(||{\varvec{x}}||^2-3x_1^2)}{||{\varvec{x}}||^5}, \qquad i\ne 1,\end{aligned}$$
(55)
$$\begin{aligned}&\frac{\partial ^2g}{\partial x_j\partial x_i}=-\frac{3x_1x_ix_j}{||{\varvec{x}}||^5}, \qquad i\ne 1, i>j, \end{aligned}$$
(56)

we have that

$$\begin{aligned}&\langle {\varvec{H}}^{k}_{g}, {\varvec{v}}^{\lambda }\rangle \nonumber \\&\quad = \frac{x_k(||{\varvec{x}}||^2-3x_1^2)}{||{\varvec{x}}||^5}\frac{5x_1||{\varvec{x}}||^2-6x_1^3-||{\varvec{x}}||^2\sqrt{4||{\varvec{x}}||^2 -3x_1^2}}{2x_N(||{\varvec{x}}||^2-3x_1^2)}\\&\quad \qquad -\,3{\mathop {\mathop {\sum }_{i=2}}\limits _{i\ne k}}^{N}\frac{x_1x_kx_i^2}{x_N||{\varvec{x}}||^5}+\frac{x_1x_k(||{\varvec{x}}||^2-3x_k^2)}{x_N||{\varvec{x}}||^5}\\&\quad =\frac{5x_1x_k||{\varvec{x}}||^2-6x_1^3x_k-x_k||{\varvec{x}}||^2\sqrt{4||{\varvec{x}}||^2-3x_1^2}}{2x_N||{\varvec{x}}||^5}\\&\quad \qquad -\,3x_1x_k\frac{||{\varvec{x}}||^2-x_1^2-x_k^2}{x_N||{\varvec{x}}||^5} +\frac{x_1x_k||{\varvec{x}}||^2-3x_1x_k^3}{x_N||{\varvec{x}}||^5}\\&\quad =\frac{x_k}{x_N}\cdot \frac{x_1-\sqrt{4||{\varvec{x}}||^2-3x_1^2}}{2||{\varvec{x}}||^3} =\frac{x_k}{x_N}\cdot \lambda =v_{k}^{\lambda } \cdot \lambda . \end{aligned}$$

If \(k=1\), the left-hand side of Eq. (52) is

$$\begin{aligned}&\langle {\varvec{H}}^{1}_{g}, {\varvec{v}}^{\lambda }\rangle \nonumber \\&\quad = \frac{3x_1(||{\varvec{x}}||^2-x_1^2)}{||{\varvec{x}}||^5}\cdot \frac{5x_1||{\varvec{x}}||^2-6x_1^3-||{\varvec{x}}||^2\sqrt{4||{\varvec{x}}||^2-3x_1^2}}{2x_N(||{\varvec{x}}||^2-3x_1^2)}\nonumber \\&\qquad +\sum _{i=2}^{N}\frac{x_i^2(||{\varvec{x}}||^2-3x_1^2)}{x_N||{\varvec{x}}||^5}\nonumber \\&\quad =\frac{||{\varvec{x}}||^2-x_1^2}{x_N||{\varvec{x}}||^5}\nonumber \\&\qquad \cdot \left( \frac{15x_1^2||{\varvec{x}}||^2-18x_1^4-3x_1||{\varvec{x}}||^2 \sqrt{4||{\varvec{x}}||^2-3x_1^2}}{2(||{\varvec{x}}||^2-3x_1^2)} +||{\varvec{x}}||^2-3x_1^2\right) \nonumber \\&\quad = \frac{||{\varvec{x}}||^2-x_1^2}{x_N||{\varvec{x}}||^3}\cdot \left( \frac{2||{\varvec{x}}||^2+3x_1^2-3x_1\sqrt{4||{\varvec{x}}||^2-3x_1^2}}{2(||{\varvec{x}}||^2-3x_1^2)}\right) . \end{aligned}$$
(57)

Now, the right-hand side of Equation (52) is given by

$$\begin{aligned}&\lambda \cdot v^{\lambda }_1\nonumber \\&\quad =\frac{x_1-\sqrt{4||{\varvec{x}}||^2-3x_1^2}}{2||{\varvec{x}}||^3}\nonumber \\&\qquad \cdot \, \frac{5x_1||{\varvec{x}}||^2-6x_1^3-||{\varvec{x}}||^2\sqrt{4||{\varvec{x}}||^2-3x_1^2}}{2x_N(||{\varvec{x}}||^2-3x_1^2)}\nonumber \\&\quad =\frac{2||{\varvec{x}}||^4 +x_1^2||{\varvec{x}}||^2-3x_1^4-3x_1||{\varvec{x}}||^2 \sqrt{4||{\varvec{x}}||^2-3x_1^2}+3x_1^3\sqrt{4||{\varvec{x}}||^2 -3x_1^2}}{2x_N||{\varvec{x}}||^3(||{\varvec{x}}||^2-3x_1^2)}\nonumber \\&\quad =\frac{(||{\varvec{x}}||^2-x_1^2)(2||{\varvec{x}}||^2+3x_1^2)-3x_1(||{\varvec{x}}||^2-x_1^2)\sqrt{4||{\varvec{x}}||^2-3x_1^2}}{2x_N||{\varvec{x}}||^3(||{\varvec{x}}||^2-3x_1^2)}\nonumber \\&\quad =\frac{||{\varvec{x}}||^2-x_1^2}{x_N||{\varvec{x}}||^3}\cdot \left( \frac{2||{\varvec{x}}||^2+3x_1^2-3x_1\sqrt{4||{\varvec{x}}||^2-3x_1^2}}{2(||{\varvec{x}}||^2-3x_1^2)}\right) . \end{aligned}$$
(58)

Because the expressions (57) and (58) are equal, the proof for \(k=1\) is complete. This concludes the proof.
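As a numerical sanity check of Proposition 9, one can assemble \({\varvec{H}}_g\) from the second partial derivatives (53)-(56) at a generic point and verify that the value of \(\lambda \) appearing in the computations above is indeed an eigenvalue. The form of \(\lambda \) is read off from the displayed expressions, so this is a consistency check rather than a substitute for the proof.

```python
import numpy as np

# Assemble H_g from Eqs. (53)-(56) at a random point x and check that
# lambda = (x_1 - sqrt(4||x||^2 - 3 x_1^2)) / (2 ||x||^3), the value appearing in the
# displayed computations, is an eigenvalue of H_g.
rng = np.random.default_rng(3)
x = rng.random(6) + 0.5          # a generic positive point
N = x.size
r = np.linalg.norm(x)

H = np.empty((N, N))
H[0, 0] = 3 * x[0] * (r**2 - x[0]**2) / r**5                      # Eq. (53)
for i in range(1, N):
    H[i, i] = x[0] * (r**2 - 3 * x[i]**2) / r**5                  # Eq. (54)
    H[0, i] = H[i, 0] = x[i] * (r**2 - 3 * x[0]**2) / r**5        # Eq. (55)
    for j in range(1, N):
        if i != j:
            H[i, j] = -3 * x[0] * x[i] * x[j] / r**5              # Eq. (56)

lam = (x[0] - np.sqrt(4 * r**2 - 3 * x[0]**2)) / (2 * r**3)
print(np.abs(np.linalg.eigvalsh(H) - lam).min())   # should be numerically zero
```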

Proof of (40) and (41)

The proof is similar to that of Proposition 9. First, note that

$$\begin{aligned}&\frac{\partial ^2f}{\partial x_1^2}=-\frac{2(||{\varvec{x}}||^2-x_1^2)(||{\varvec{x}}||^2-4x_1^2)}{||{\varvec{x}}||^6},\\&\frac{\partial ^2f}{\partial x_i^2}=\frac{2x^2_1(||{\varvec{x}}||^2-4x_i^2)}{||{\varvec{x}}||^6},\qquad i\ne 1,\\&\frac{\partial ^2f}{\partial x_1\partial x_i}=\frac{4x_1x_i(||{\varvec{x}}||^2-2x_1^2)}{||{\varvec{x}}||^6}, \qquad i\ne 1,\\&\frac{\partial ^2f}{\partial x_j\partial x_i}=-\frac{8x^2_1x_ix_j}{||{\varvec{x}}||^6}, \qquad i\ne 1, i>j. \end{aligned}$$

Then, if \(k=1\), we obtain

$$\begin{aligned} \langle {\varvec{H}}_{f}^1,{\varvec{v}}^{\lambda } \rangle= & {} -2\frac{(||{\varvec{x}}||^2-x_1^2)(||{\varvec{x}}||^2-4x_1^2)}{||{\varvec{x}}||^6}\cdot \frac{(2x_1^2-||{\varvec{x}}||^2)}{2x_1x_N}\\&+\sum _{i=2}^{N}\frac{4x_i^2x_1(||{\varvec{x}}||^2-2x_1^2)}{x_N||{\varvec{x}}||^6}\\= & {} -\frac{2(||{\varvec{x}}||^2-x_1^2)}{||{\varvec{x}}||^4}\left[ \frac{(||{\varvec{x}}||^2-4x_1^2)(2x_1^2-||{\varvec{x}}||^2)}{2x_1x_N||{\varvec{x}}||^2}\right. \\&\left. -\frac{2x_1(||{\varvec{x}}||^2-2x_1^2)}{x_N||{\varvec{x}}||^2}\right] \\= & {} \lambda \left[ \frac{(2x_1^2-||{\varvec{x}}||^2)}{2x_1x_N}\left\{ \frac{||{\varvec{x}}||^2-4x_1^2}{||{\varvec{x}}||^2}+\frac{4x_1^2}{||{\varvec{x}}||^2}\right\} \right] \\= & {} \lambda \cdot \frac{(2x_1^2-||{\varvec{x}}||^2)}{2x_1x_N}\\= & {} \lambda \cdot v_1^{\lambda }. \end{aligned}$$

If \(k>1\), one obtains

$$\begin{aligned}&\langle {\varvec{H}}_{f}^k,{\varvec{v}}^{\lambda } \rangle \\&\quad = \frac{4x_1x_k(||{\varvec{x}}||^2-2x_1^2)}{||{\varvec{x}}||^6}\cdot \frac{(2x_1^2-||{\varvec{x}}||^2)}{2x_1x_N}\\&\qquad -\,{\mathop {\mathop {\sum }\limits _{i=2}}\limits _{i\ne k}}^{N}\frac{8x_1^2x_i^2x_k}{x_N||{\varvec{x}}||^6}+\frac{2x_1^2x_k(||{\varvec{x}}||^2-4x_k^2)}{x_N||{\varvec{x}}||^6}\\&\quad =\frac{-2x_1x_k(||{\varvec{x}}||^4-4x_1^2 ||{\varvec{x}}||^2+4x_1^4)-6x_1^3x_k||{\varvec{x}}||^2 +8x_1^5x_k+8x_1^3x_k^3-8x_1^3x_k^3}{x_1x_N||{\varvec{x}}||^6}\\&\quad =\frac{-2x_1x_k||{\varvec{x}}||^2(||{\varvec{x}}||^2-x_1^2)}{x_1x_N||{\varvec{x}}||^6} =\frac{-2(||{\varvec{x}}||^2-x_1^2)}{||{\varvec{x}}||^4}\cdot \frac{x_k}{x_N}\\&\quad =\lambda \cdot v_k^{\lambda }. \end{aligned}$$
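An analogous check applies to the proof of (40) and (41): assembling \({\varvec{H}}_f\) from the second partial derivatives listed above, the eigenvalue identified in the \(k=1\) computation, \(\lambda =-2(||{\varvec{x}}||^2-x_1^2)/||{\varvec{x}}||^4\), should appear in its spectrum. Again, this is only a numerical consistency check under the conventions inferred from the displayed formulas.

```python
import numpy as np

# Assemble H_f from the second partial derivatives of f listed above and check that
# lambda = -2(||x||^2 - x_1^2)/||x||^4 is one of its eigenvalues at a generic point.
rng = np.random.default_rng(4)
x = rng.random(6) + 0.5
N, r = x.size, np.linalg.norm(x)

H = np.empty((N, N))
H[0, 0] = -2 * (r**2 - x[0]**2) * (r**2 - 4 * x[0]**2) / r**6
for i in range(1, N):
    H[i, i] = 2 * x[0]**2 * (r**2 - 4 * x[i]**2) / r**6
    H[0, i] = H[i, 0] = 4 * x[0] * x[i] * (r**2 - 2 * x[0]**2) / r**6
    for j in range(1, N):
        if i != j:
            H[i, j] = -8 * x[0]**2 * x[i] * x[j] / r**6

lam = -2 * (r**2 - x[0]**2) / r**4
print(np.abs(np.linalg.eigvalsh(H) - lam).min())   # should be numerically zero
```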


Cite this article

Vallejos, R., Mancilla, D. & Acosta, J. Image Similarity Assessment Based on Coefficients of Spatial Association. J Math Imaging Vis 56, 77–98 (2016). https://doi.org/10.1007/s10851-016-0635-y
