A new co-learning method in spatial complex fuzzy inference systems for change detection from satellite images

  • Original Article
  • Published in Neural Computing and Applications

Abstract

The detection of spatial and temporal changes (or change detection) in remote sensing images is essential in any decision support system about natural phenomena such as extreme weather conditions, climate change, and floods. In this paper, a new method is proposed to determine the inference process parameters of boundary point, rule coefficient, defuzzification coefficient, and dependency coefficient and present a new FWADAM+ method to train that set of parameters simultaneously. The initial data are clustered simultaneously according to each data group. This result will be the basis for determining a suitable set of parameters by using the FWADAM+ concurrent training algorithm. Eventually, these results will be inherited in the following data groups to build other complex fuzzy rule systems in a shorter time while still ensuring the model’s efficiency. The weather imagery database of the United States Navy (US Navy) is used to evaluate and compare with some related methods using the root-mean-squared error (RMSE), R-squared (R2) measures, and the analysis of variance (ANOVA) model. The experimental results show that the proposed method is up to 30% better than the SeriesNet method, and the processing time is 10% less than that of the SeriesNet method.


Data availability

The data that support the findings of this study are available from US Navy but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of US Navy.

Notes

  1. The loss function (20) is presented in Appendix A.

References

  1. Liu W, Jie Y, Zhao J, Le YA (2017) Novel method of unsupervised change detection using multi-temporal PolSAR images. Remote Sens 9:1135

  2. Ma W, Wu Y, Gong M, Xiong Y, Yang H, Hu T (2018) Change detection in SAR images based on matrix factorisation and a Bayes classifier. Int J Remote Sens 40:1–26

  3. Singh A (1989) Review article digital change detection techniques using remotely—sensed data. Int J Remote Sens 10(6):989–1003

  4. Lu D, Mausel P, Brondizio E, Moran E (2004) Change detection techniques. Int J Remote Sens 25(12):2365–2401

  5. Hussain M, Chen D, Cheng A, Wei H, Stanley D (2013) Change detection from remotely sensed images: from pixel-based to object-based approaches. ISPRS J Photogramm Remote Sens 80:91–106

  6. You Y, Cao J, Zhou W (2020) A survey of change detection methods based on remote sensing images for multi-source and multi-objective scenarios. Remote Sens 12(15):2460

  7. Canty MJ (2019) Image analysis, classification, and change detection in remote sensing. Taylor & Francis Group, Abingdon-on-Thames. https://doi.org/10.1201/9780429464348

  8. Shi W, Zhang M, Zhang R, Chen S, Zhan Z (2020) Change detection based on artificial intelligence: state-of-the-art and challenges. Remote Sens 12(10):1688

  9. Zhang M, Zhou Y, Quan W, Zhu J, Zheng R, Wu Q (2020) Online learning for IoT optimization: a Frank-Wolfe Adam-based algorithm. IEEE Internet Things J 7(9):8228–8237

  10. Shen Z, Zhang Y, Lu J, Xu J, Xiao G (2018) SeriesNet: a generative time series forecasting model. In: 2018 international joint conference on neural networks (IJCNN), pp 1–8. https://doi.org/10.1109/IJCNN.2018.8489522

  11. Du B, Ru L, Wu C, Zhang L (2019) Unsupervised deep slow feature analysis for change detection in multi-temporal remote sensing images. IEEE Trans Geosci Remote Sens 57(12):9976–9992

  12. Chu S, Li P, Xia M (2022) MFGAN: multi feature guided aggregation network for remote sensing image. Neural Comput Appl 34(12):10157–10173

  13. Nguyen CH, Nguyen TC, Tang TN, Phan NL (2021) Improving object detection by label assignment distillation. arXiv preprint, arXiv:2108.10520

  14. Daudt RC, Le Saux, B, Boulch A, Gousseau Y (2018) Urban change detection for multispectral earth observation using convolutional neural networks. In IGARSS 2018—2018 IEEE international geoscience and remote sensing symposium. IEEE, pp 2115–2118

  15. Odaudu SN, Umoh IJ, Adedokun EA, Jonathan C (2021) LearnFuse: An efficient distributed big data fusion architecture using ensemble learning technique. In: Misra S, Muhammad-Bello B (eds) Information and communication technology and applications. ICTA 2020. Communications in computer and information science, vol 1350. Springer, Cham, pp 80–92. https://doi.org/10.1007/978-3-030-69143-1_7

  16. Qin D, Zhou X, Zhou W, Huang G, Ren Y, Horan B, He H, Kito N (2018) MSIM: a change detection framework for damage assessment in natural disasters. Expert Syst Appl 97:372–383

  17. Saha S, Bovolo F, Bruzzone L (2019) Unsupervised deep change vector analysis for multiple-change detection in VHR images. IEEE Trans Geosci Remote Sens 57(6):3677–3693

  18. Zhan Y, Fu K, Yan M, Sun X, Wang H, Qiu X (2017) Change detection based on deep siamese convolutional network for optical aerial images. IEEE Geosci Remote Sens Lett 14(10):1845–1849

  19. Cao Z et al (2020) Detection of small changed regions in remote sensing imagery using convolutional neural network. In: IOP conference series earth and environmental science, vol 502, p 012017

  20. Liu R, Wang R, Huang J, Li J, Jiao L (2021) Change detection in SAR images using multiobjective optimization and ensemble strategy. IEEE Geosci Remote Sens Lett 18(9):1585–1589

  21. Celik T (2009) Unsupervised change detection in satellite images using principal component analysis and k-means clustering. IEEE Geosci Remote Sens Lett 6(4):772–776

  22. Saha PK, Logofatu D (2021) Efficient approaches for density-based spatial clustering of applications with noise. In: Maglogiannis I, Macintyre J, Iliadis L (eds) Artificial intelligence applications and innovations. AIAI 2021. IFIP advances in information and communication technology, vol 627. Springer, Cham, pp 184–195. https://doi.org/10.1007/978-3-030-79150-6_15

  23. Wu C, Peng Q, Lee J, Leibnitz K, Xia Y (2021) Effective hierarchical clustering based on structural similarities in nearest neighbor graphs. Knowl-Based Syst 228:107295

  24. Ghosh S, Dubey SK (2013) Comparative analysis of k-means and fuzzy c-means algorithms. Int J Adv Comput Sci Appl 4(4):35–39

  25. Zhang D, Yao L, Chen K, Wang S, Chang X, Liu Y (2019) Making sense of spatio-temporal preserving representations for EEG-based human intention recognition. IEEE Trans Cybern 50(7):3033–3044

  26. Chen K, Yao L, Zhang D, Wang X, Chang X, Nie F (2019) A semisupervised recurrent convolutional attention model for human activity recognition. IEEE Trans Neural Netw Learn Syst 31(5):1747–1756

  27. López-Fandiño J, Garea AS, Heras DB, Argüello F (2018) Stacked autoencoders for multiclass change detection in hyperspectral images. In: Proceedings of the 2018 IEEE international geoscience and remote sensing symposium (IGARSS), pp 1906–1909

  28. Samadi F, Akbarizadeh G, Kaabi H (2019) Change detection in SAR images using deep belief network: a new training approach based on morphological images. IET Image Proc 13(12):2255–2264

  29. Peng D, Zhang Y, Guan H (2019) End-to-end change detection for high resolution satellite images using improved UNet++. Remote Sens 11(11):1382

  30. Luo M, Chang X, Nie L, Yang Y, Hauptmann AG, Zheng Q (2017) An adaptive semisupervised feature analysis for video semantic recognition. IEEE Trans Cybern 48(2):648–660

  31. Mou L, Zhu XX (2018) A recurrent convolutional neural network for land cover change detection in multispectral images. In: IGARSS 2018—2018 IEEE international geoscience and remote sensing symposium, 2018, pp 4363–4366. https://doi.org/10.1109/IGARSS.2018.8517375

  32. Zheng Z, Ma A, Zhang L, Zhong Y (2021) Change is everywhere: single-temporal supervised object change detection in remote sensing imagery. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 15193–15202

  33. Xu G, Li H, Zang Y, Xie L, Bai C (2020) Change detection based on IR-MAD model for GF-5 remote sensing imagery. In: IOP conference series: materials science and engineering, vol 768, no 7. IOP Publishing, p 072073

  34. Healey SP, Cohen WB, Yang Z, Kenneth Brewer C, Brooks EB, Gorelick N, Hernandez AJ, Huang C, Joseph Hughes M, Kennedy RE, Loveland TR, Moisen GG, Schroeder TA, Stehman SV, Vogelmann JE, Woodcock CE, Yang L, Zhu Z (2018) Mapping forest change using stacked generalization: an ensemble approach. Remote Sens Environ 204:717–728

  35. Jiang W, He G, Long T, Ni Y, Liu H, Peng Y, Lv K, Wang G (2018) Multilayer perceptron neural network for surface water extraction in Landsat 8 OLI satellite images. Remote Sens 10(5):755

  36. Sharma C, Amandeep B, Sobti R, Lohani TK, Shabaz M (2021) A secured frame selection based video watermarking technique to address quality loss of data: Combining graph based transform, singular valued decomposition, and hyperchaotic encryption. Secur Commun Netw 2021:5536170

  37. Jarrahi MA, Samet H, Ghanbari T (2020) Novel change detection and fault classification scheme for AC microgrids. IEEE Syst J 14(3):3987–3998

  38. Im J, Jensen JR (2005) A change detection model based on neighborhood correlation image analysis and decision tree classification. Remote Sens Environ 99(3):326–340

  39. Shao P, Shi W, He P, Hao M, Zhang X (2016) Novel approach to unsupervised change detection based on a robust semi-supervised FCM clustering algorithm. Remote Sens 8(3):264

  40. Zhang H, Wang Q, Shi W, Hao M (2017) A novel adaptive fuzzy local information C-means clustering algorithm for remotely sensed imagery classification. IEEE Trans Geosci Remote Sens 55(9):5057–5068

  41. Shao R, Du C, Chen H, Li J (2021) SUNet: Change detection for heterogeneous remote sensing images from satellite and UAV using a dual-channel fully convolution network. Remote Sens 13(18):3750

  42. Hou B, Liu Q, Wang H, Wang Y (2019) From W-Net to CDGAN: bitemporal change detection via deep learning techniques. IEEE Trans Geosci Remote Sens 58(3):1790–1802

  43. Kou R, Fang B, Chen G, Wang L (2020) Progressive domain adaptation for change detection using season-varying remote sensing images. Remote Sens 12(22):3815

  44. Ramot D, Milo R, Friedman M, Kandel A (2002) Complex fuzzy sets. IEEE Trans Fuzzy Syst 10(2):171–186

  45. Ramot D, Friedman M, Langholz G, Kandel A (2003) Complex fuzzy logic. IEEE Trans Fuzzy Syst 11(4):450–461

  46. Selvachandran G, Quek SG, Lan LTH, Son LH, Giang NL, Ding W, Abdel-Basset M, De Albuquerque VHC (2021) A new design of Mamdani complex fuzzy inference system for multi-attribute decision making problems. IEEE Trans Fuzzy Syst 29(4):716–730

  47. Bezdek JC, Ehrlich R, Full W (1984) FCM: the fuzzy c-means clustering algorithm. Comput Geosci 10(2):191–203

  48. Kingma DP, Ba J (2015) Adam: a method for stochastic optimization. In: Proceedings of the 3rd international conference on learning representations (ICLR)

  49. Hoerl AE, Kennard RW (2000) Ridge regression: biased estimation for nonorthogonal problems. Technometrics 42(1):80–86

  50. Son LH, Thong PH (2017) Some novel hybrid forecast methods based on picture fuzzy clustering for weather nowcasting from satellite image sequences. Appl Intell 46(1):1–15

  51. National Oceanic and Atmospheric Administration (2015) MTSAT west color infrared loop. Retrieved from, https://www.star.nesdis.noaa.gov/GOES/index.php

  52. Ji M, Liu L, Du R, Buchroithner MF (2019) A comparative study of texture and convolutional neural network features for detecting collapsed buildings after earthquakes using pre- and post-event satellite imagery. Remote Sens 11(10):1202

Acknowledgements

This research has been funded by the Research Project: VAST01.07/22-23, Vietnam Academy of Science and Technology.

Author information

Corresponding author

Correspondence to Le Hoang Son.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest. All authors have checked and agreed to the submission.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Details of loss function

$$ \begin{aligned} {\text{RMSE }}(X_{{}}^{{db}} ,{\text{ }}X_{{}}^{{(t + 1)}} ) & = \sqrt {\sum\limits_{{i = 1}}^{n} {\left( {X_{i}^{{db}} - X_{i}^{{(t + 1)}} } \right)} ^{2} } \\ & = \sqrt {\sum\limits_{{i = 1}}^{n} {\left( {255 \times \left| {\frac{1}{{\kappa _{i} \times d_{i} }} - O_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil }}^{*} } \right| - X_{i}^{{(t + 1)}} } \right)^{2} } } \\ & = \sqrt {\sum\limits_{{i = 1}}^{n} {\left( {255 \times \left| {\frac{1}{{\kappa _{i} \times d_{i} }} - \left( {\gamma \times O_{{\left\lceil {\frac{{i.{\text{Rel}}}}{{a^{2} }}} \right\rceil }}^{*} + (1 - \gamma ) \times O_{{\left\lceil {\frac{{i.{\text{Img}}}}{{a^{2} }}} \right\rceil }}^{{*^{\prime}}} } \right)} \right| - X_{i}^{{(t)}} } \right)^{2} } } \\ & = \sqrt {\sum\limits_{{i = 1}}^{n} {\left( {255 \times \left| {\frac{1}{{\kappa _{i} \times d_{i} }} - \left( {\gamma \times O_{{\left\lceil {\frac{{i.{\text{Rel}}}}{{a^{2} }}} \right\rceil }}^{*} + (1 - \gamma ) \times \left( {X_{i} ^{{(t)}} \times \left( {1 + O_{{\left\lceil {\frac{{i.{\text{Img}}}}{{a^{2} }}} \right\rceil }}^{*} } \right)} \right)} \right)} \right| - X_{i}^{{(t + 1)}} } \right)^{2} } } \\ & = \sqrt {\sum\limits_{{i = 1}}^{n} {\left( {255 \times \left| {\frac{1}{{\kappa _{i} \times d_{i} }} - \left( {\gamma \times \frac{{\sum\nolimits_{{j = 1}}^{R} {{\text{W}}_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil }} (X_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil }}^{{(t)}} ){\kern 1pt} \times {\kern 1pt} {\text{DEF}}_{j} (X^{{(t)}} ){\kern 1pt} } }}{R} + (1 - \gamma ) \times \left( {X_{i} ^{{(t)}} \times \left( {1 + \frac{{\sum\nolimits_{{j = 1}}^{R} {{\text{W}}_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil }} (X_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil }}^{{(t)}} ) \times {\text{DEF}}_{j} ({\text{HOD}}^{{(t)}} ){\kern 1pt} } }}{R}} \right)} \right)} \right)} \right| - X_{i}^{{(t + 1)}} } \right)^{2} } } \\ & = \sqrt {\sum\limits_{{i = 1}}^{n} {\left( {255 \times \left| {\frac{1}{{\kappa _{i} \times d_{i} }} - \left( {\gamma \times \frac{{\sum\nolimits_{{j = 1}}^{R} {{\text{W}}_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil }} (X_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil }}^{{(t)}} ){\kern 1pt} \times {\kern 1pt} \frac{{h_{{1j}} a_{j} + h_{{2j}} b_{j} + h_{{3j}} c_{j} }}{{h_{{1j}} + h_{{2j}} + h_{{3j}} }}{\kern 1pt} } }}{R} + (1 - \gamma ) \times \left( {X_{i} ^{{(t)}} \times \left( {1 + \frac{{\sum\nolimits_{{j = 1}}^{R} {{\text{W}}_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil }} (X_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil }}^{{(t)}} ){\kern 1pt} \times {\kern 1pt} \frac{{h_{{1j}}^{\prime } a_{j}^{\prime } + h_{{2j}}^{\prime } b_{j}^{\prime } + h_{{3j}}^{\prime } c_{j}^{\prime } }}{{h_{{1j}}^{\prime } + h_{{2j}}^{\prime } + h_{{3j}}^{\prime } }}{\kern 1pt} } }}{R}} \right)} \right)} \right)} \right| - X_{i}^{{(t + 1)}} } \right)^{2} } } \\ & = \sqrt {\sum\limits_{{i = 1}}^{n} {\left( {255 \times \left| {\frac{1}{{\kappa _{i} \times d_{i} }} - \left( {\gamma \times \frac{{\frac{{\sum\nolimits_{{j = 1}}^{{R + 1}} {\beta _{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} \times {\text{w}}_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} } }}{{\sum\nolimits_{{j = 1}}^{{R + 1}} {\beta _{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} } }} \times \sum\nolimits_{{j = 1}}^{R} {\frac{{h_{{1j}} a_{j} + h_{{2j}} b_{j} + h_{{3j}} c_{j} }}{{h_{{1j}} + h_{{2j}} + h_{{3j}} }}} {\kern 1pt} }}{R} + (1 - \gamma ) \times \left( {X_{i} ^{{(t)}} \times \left( {1 + \frac{{\frac{{\sum\nolimits_{{j = 1}}^{{R + 1}} {\beta 
_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} \times {\text{w}}_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} } }}{{\sum\nolimits_{{j = 1}}^{{R + 1}} {\beta _{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} } }} \times {\kern 1pt} \sum\nolimits_{{j = 1}}^{R} {\frac{{h_{{1j}}^{\prime } a_{j}^{\prime } + h_{{2j}}^{\prime } b_{j}^{\prime } + h_{{3j}}^{\prime } c_{j}^{\prime } }}{{h_{{1j}}^{\prime } + h_{{2j}}^{\prime } + h_{{3j}}^{\prime } }}} {\kern 1pt} }}{R}} \right)} \right)} \right)} \right| - X_{i}^{{(t + 1)}} } \right)^{2} } } \\ & = \sqrt {\sum\limits_{{i = 1}}^{n} {\left( {255 \times \left| {\frac{1}{{\kappa _{i} \times d_{i} }} - \left( \begin{gathered} \gamma \times \frac{{\frac{{\sum\nolimits_{{j = 1}}^{{R + 1}} {\beta _{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} \times {\text{w}}_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} } }}{{\sum\nolimits_{{j = 1}}^{{R + 1}} {\beta _{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} } }}{\kern 1pt} \times {\kern 1pt} \sum\nolimits_{{j = 1}}^{R} {\frac{{h_{{1j}} \times \alpha _{j}^{a} \times \left( {\frac{{\sum\nolimits_{{i = 1,2, \ldots n\,{\text{and}}\,X_{i}^{{(t)}} \le b_{{ij}} }} {U_{{{\text{i,j}}}} \times {\kern 1pt} X_{{}}^{{(t)}} } }}{{\sum\nolimits_{{i = 1,2, \ldots n\,{\text{and}}\,X_{i}^{{(t)}} \le b_{{jj}} }} {U_{{{\text{i,j}}}} } }}} \right) + h_{{2j}} \times \alpha _{j}^{b} \times {\kern 1pt} V_{j}^{{{\text{rel}}}} + h_{{3j}} \times \alpha _{j}^{c} \times \left( {\frac{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, X_{i}^{{(t)}} \, \ge \, b_{{ij}} }} {U_{{{\text{i,j}}}} \times X_{i}^{{(t)}} } }}{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, X_{i}^{{(t)}} \, \ge \, b_{{ij}} }} {U_{{{\text{i,j}}}} } }}} \right)}}{{h_{{1j}} + h_{{2j}} + h_{{3j}} }}} {\kern 1pt} }}{R} \hfill \\ + (1 - \gamma ) \times \left( {X_{i} ^{{(t)}} \times \left( {1 + \frac{{\frac{{\sum\nolimits_{{j = 1}}^{{R + 1}} {\beta _{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} \times {\text{w}}_{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} } }}{{\sum\nolimits_{{j = 1}}^{{R + 1}} {\beta _{{\left\lceil {\frac{i}{{a^{2} }}} \right\rceil j}} } }} \times {\kern 1pt} \sum\nolimits_{{j = 1}}^{R} {\frac{{h_{{1j}}^{\prime } \times \alpha _{j}^{{a^{\prime } }} \times \left( {\frac{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, {\text{HOD}}_{i}^{{(t)}} \, \le \, b_{{ij}} }} {U_{{{\text{i,j}}}} \times {\text{HOD}}_{{\text{i}}}^{{{\text{(t)}}}} } }}{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, {\text{HOD}}_{i}^{{(t)}} \, \le \, b_{{ij}} }} {U_{{{\text{i,j}}}} } }}} \right)_{j} + h_{{2j}}^{\prime } \times \alpha _{j}^{{b^{\prime } }} \times {\kern 1pt} V_{j}^{{{\text{img}}}} + h_{{3j}}^{\prime } \times \alpha _{j}^{{c^{\prime } }} \times \left( {\frac{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, {\text{HOD}}_{i}^{{(t)}} \, \ge \, b_{{ij}} }} {U_{{{\text{i,j}}}} \times {\kern 1pt} {\text{HOD}}_{i}^{{(t)}} } }}{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, {\text{HOD}}_{i}^{{(t)}} \, \ge \, b_{{ij}} }} {U_{{{\text{i,j}}}} } }}} \right){\kern 1pt} }}{{h_{{1j}}^{\prime } + h_{{2j}}^{\prime } + h_{{3j}}^{\prime } }}} {\kern 1pt} }}{R}} \right)} \right) \hfill \\ \end{gathered} \right)} \right| - X_{i}^{{(t + 1)}} } \right)^{2} } } \\ {.}\end{aligned} $$

Appendix B: Numerical example

Step 1: Input data preprocessing.

Step 1.1. Convert satellite image from color image to gray image.

Let us say we have 10 RGB images, each of size 9 × 9. (Because the same formula applies to all ten images, we show the example with only two images to avoid duplication.)

To perform the conversion to grayscale, we will follow the formula:

$$ Y = 0.2126R + 0.7152G + 0.0722B $$

where Y is the corresponding grayscale pixel value and R, G, and B are, respectively, the pixel values of the red, green, and blue channels of the color image.
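As a minimal sketch of this conversion (NumPy assumed; the function name rgb_to_gray and the random stand-in image are illustrative only, since the original satellite pixels are not reproduced here):

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Apply Y = 0.2126 R + 0.7152 G + 0.0722 B channel-wise to an H x W x 3 image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Example: a random 9 x 9 RGB image stands in for one satellite image.
rgb = np.random.randint(0, 256, size=(9, 9, 3)).astype(float)
gray = rgb_to_gray(rgb)  # 9 x 9 grayscale matrix with values in [0, 255]
```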

Then, suppose our grayscale data is obtained as follows:

X1 and X2 (9 × 9 grayscale pixel matrices, shown as figures a and b in the original article).

Step 1.2: Reduce image size by representative pixels.

From the input image, we proceed to group pixels into blocks. (In this case, we choose the value c = 3, which means we group the image into 3 × 3 blocks, corresponding to the coloring presented above.)
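A sketch of this reduction, assuming (as the nine per-block values listed below suggest) that \({\text{Im}}^{tb}\) is simply the mean of each c × c block; formula (3), which also defines \(\kappa\), is not reproduced in this appendix:

```python
import numpy as np

def block_mean(gray: np.ndarray, c: int = 3) -> np.ndarray:
    """Average each c x c block of a grayscale image (assumed reading of Im^tb)."""
    h, w = gray.shape
    return gray.reshape(h // c, c, w // c, c).mean(axis=(1, 3))

# A 9 x 9 grayscale image therefore reduces to a 3 x 3 matrix of block averages.
```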

From the above data, according to formula number (3), we can determine Imtb and \(\kappa\) as follows:

\({\text{Im}}_{1}^{tb}\):

$$ \left[ {\begin{array}{*{20}r} \hfill {155.33} & \hfill {112.44} & \hfill {129.33} \\ \hfill {79.56} & \hfill {85} & \hfill {129.44} \\ \hfill {134.56} & \hfill {141.56} & \hfill {122.33} \\ \end{array} } \right] $$

\({\text{Im}}_{2}^{tb}\):

$$ \left[ {\begin{array}{*{20}r} \hfill {135.78} & \hfill {144.67} & \hfill {157.78} \\ \hfill {107.67} & \hfill {111.22} & \hfill {109.33} \\ \hfill {118.11} & \hfill {101.22} & \hfill {132} \\ \end{array} } \right] $$

\(\kappa_{1}\) (9 × 9, one value per pixel):

$$ \left[ {\begin{array}{*{20}r} 0.035 & 0.011 & 0.3216 & 0.024 & 0.004 & 0.035 & 0.046 & 0.055 & 0.087 \\ 0.0159 & 0.0088 & 0.0424 & 0.904 & 0.004 & 0.006 & 0.054 & 0.025 & 0.367 \\ 0.124 & 0.4038 & 0.0375 & 0.007 & 0.005 & 0.014 & 0.17 & 0.026 & 0.17 \\ 0.3873 & 0.0812 & 0.0374 & 0.043 & 0.041 & 0.037 & 0.249 & 0.056 & 0.128 \\ 0.0602 & 0.1608 & 0.0896 & 0.154 & 0.038 & 0.051 & 0.035 & 0.387 & 0.03 \\ 0.077 & 0.0428 & 0.0637 & 0.292 & 0.292 & 0.051 & 0.03 & 0.033 & 0.051 \\ 0.0357 & 0.0598 & 0.1848 & 0.054 & 0.115 & 0.028 & 0.121 & 0.06 & 0.098 \\ 0.1191 & 0.0472 & 0.3772 & 0.05 & 0.044 & 0.052 & 0.117 & 0.256 & 0.059 \\ 0.0706 & 0.0409 & 0.0647 & 0.463 & 0.032 & 0.162 & 0.099 & 0.125 & 0.066 \\ \end{array} } \right] $$

\(\kappa_{2}\) (9 × 9, one value per pixel):

$$ \left[ {\begin{array}{*{20}r} 0.246 & 0.027 & 0.0131 & 0.13 & 0.14 & 0.017 & 0.079 & 0.211 & 0.172 \\ 0.0583 & 0.032 & 0.4004 & 0.153 & 0.039 & 0.161 & 0.002 & 0.134 & 0.247 \\ 0.1556 & 0.0077 & 0.0598 & 0.164 & 0.066 & 0.13 & 0.054 & 0.052 & 0.05 \\ 0.0403 & 0.0424 & 0.1182 & 0.126 & 0.074 & 0.019 & 0.039 & 0.018 & 0.209 \\ 0.2261 & 0.1031 & 0.0465 & 0.039 & 0.179 & 0.279 & 0.011 & 0.171 & 0.045 \\ 0.1708 & 0.1079 & 0.1448 & 0.115 & 0.122 & 0.047 & 0.065 & 0.102 & 0.339 \\ 0.0495 & 0.1018 & 0.0249 & 0.087 & 0.032 & 0.126 & 0.018 & 0.136 & 0.039 \\ 0.0519 & 0.2524 & 0.4177 & 0.283 & 0.007 & 0.18 & 0.164 & 0.149 & 0.13 \\ 0.0434 & 0.028 & 0.0304 & 0.031 & 0.123 & 0.133 & 0.136 & 0.042 & 0.185 \\ \end{array} } \right] $$

From here, we can determine \({\text{I}} \overline{m}\) according to formula number (2)

$$ \begin{aligned} & {\text{I}} \overline{m}_{1}^{\prime } :\quad \left[ {\begin{array}{*{20}r} \hfill {154.7874} & \hfill {112.9673} & \hfill {131.6365} \\ \hfill {75.1241} & \hfill {82.0974} & \hfill {126.5324} \\ \hfill {130.5848} & \hfill {138.0633} & \hfill {99.0258} \\ \end{array} } \right] \\ & {\text{I}} \overline{m}_{2} :\quad \left[ {\begin{array}{*{20}r} \hfill {157.6249} & \hfill {144.745} & \hfill {146.3916} \\ \hfill {126.5922} & \hfill {124.9895} & \hfill {123.0693} \\ \hfill {150.0865} & \hfill {124.6629} & \hfill {128.7595} \\ \end{array} } \right] \\ \end{aligned} $$

Similarly, we get the following input:

$$ \begin{aligned} & {\text{I}} \overline{m}_{3} :\quad \left[ {\begin{array}{*{20}r} \hfill {154.0385} & \hfill {102.7383} & \hfill {164.6013} \\ \hfill {128.1039} & \hfill {114.4394} & \hfill {156.8697} \\ \hfill {158.3804} & \hfill {120.0475} & \hfill {181.3936} \\ \end{array} } \right] \\ & {\text{I}} \overline{m}_{4} :\quad \left[ {\begin{array}{*{20}r} \hfill {162.5923} & \hfill {102.8552} & \hfill {124.9634} \\ \hfill {91.6648} & \hfill {160.1438} & \hfill {83.6119} \\ \hfill {121.9662} & \hfill {130.3867} & \hfill {86.8873} \\ \end{array} } \right] \\ & {\text{I}} \overline{m}_{5} :\quad \left[ {\begin{array}{*{20}r} \hfill {141.3602} & \hfill {197.8817} & \hfill {133.2668} \\ \hfill {154.5071} & \hfill {158.9036} & \hfill {137.4368} \\ \hfill {130.2094} & \hfill {80.5522} & \hfill {167.8854} \\ \end{array} } \right] \\ & {\text{I}} \overline{m}_{6} :\quad \left[ {\begin{array}{*{20}r} \hfill {107.6076} & \hfill {133.0151} & \hfill {148.3126} \\ \hfill {140.4335} & \hfill {130.6123} & \hfill {105.592} \\ \hfill {120.8286} & \hfill {120.3955} & \hfill {52.7942} \\ \end{array} } \right] \\ & {\text{I}} \overline{m}_{7} :\quad \left[ {\begin{array}{*{20}r} \hfill {130.3769} & \hfill {127.7554} & \hfill {126.5509} \\ \hfill {122.962} & \hfill {174.067} & \hfill {160.7194} \\ \hfill {93.3577} & \hfill {145.5881} & \hfill {79.8419} \\ \end{array} } \right] \\ & {\text{I}} \overline{m}_{8} :\quad \left[ {\begin{array}{*{20}r} \hfill {123.6647} & \hfill {138.8215} & \hfill {126.8762} \\ \hfill {178.5098} & \hfill {92.6901} & \hfill {160.6605} \\ \hfill {136.6688} & \hfill {107.7695} & \hfill {181.7323} \\ \end{array} } \right] \\ & {\text{I}} \overline{m}_{9} :\quad \left[ {\begin{array}{*{20}r} \hfill {104.5517} & \hfill {108.0111} & \hfill {103.7327} \\ \hfill {153.8314} & \hfill {138.5577} & \hfill {97.8999} \\ \hfill {134.9734} & \hfill {92.5837} & \hfill {122.8807} \\ \end{array} } \right] \\ & {\text{I}} \overline{m}_{10} :\quad \left[ {\begin{array}{*{20}r} \hfill {144.5198} & \hfill {134.6119} & \hfill {136.1748} \\ \hfill {126.4774} & \hfill {132.5162} & \hfill {151.4232} \\ \hfill {141.6004} & \hfill {132.1263} & \hfill {132.2672} \\ \end{array} } \right] \\ \end{aligned} $$

Step 1.3: Determine the difference matrix (imaginary part).

The imaginary part (the difference matrix) is determined by directly subtracting the corresponding regions of consecutive remote sensing images, according to formula number (3):

$$ \begin{aligned} & {\text{HOD}}_{{1}} ({\text{I}} \overline{m}_{2} - {\text{I}} \overline{m}_{1} ):\quad \left[ {\begin{array}{*{20}r} \hfill {2.8375} & \hfill {31.7777} & \hfill {14.7551} \\ \hfill {51.4681} & \hfill {42.8921} & \hfill {3.4631} \\ \hfill {19.5017} & \hfill {13.4004} & \hfill {29.7337} \\ \end{array} } \right] \\ & {\text{HOD}}_{{2}} ({\text{I}} \overline{m}_{3} - {\text{I}} \overline{m}_{2} ):\quad \left[ {\begin{array}{*{20}r} \hfill {3.5864} & \hfill {42.0067} & \hfill {18.2097} \\ \hfill {1.5117} & \hfill {10.5501} & \hfill {33.8004} \\ \hfill {8.2939} & \hfill {4.6154} & \hfill {52.6341} \\ \end{array} } \right] \\ & {\text{HOD}}_{{3}} ({\text{I}} \overline{m}_{4} - {\text{I}} \overline{m}_{3} ):\quad \left[ {\begin{array}{*{20}r} \hfill {8.5538} & \hfill {0.1169} & \hfill {39.6379} \\ \hfill {36.4391} & \hfill {45.7044} & \hfill {73.2578} \\ \hfill {36.4142} & \hfill {10.3392} & \hfill {94.5063} \\ \end{array} } \right] \\ & {\text{HOD}}_{{4}} ({\text{I}} \overline{m}_{5} - {\text{I}} \overline{m}_{4} ):\quad \left[ {\begin{array}{*{20}r} \hfill {21.2321} & \hfill {95.0265} & \hfill {8.3034} \\ \hfill {62.8423} & \hfill {1.2402} & \hfill {53.8249} \\ \hfill {8.2432} & \hfill {49.8345} & \hfill {80.9981} \\ \end{array} } \right] \\ & {\text{HOD}}_{{5}} ({\text{I}} \overline{m}_{6} - {\text{I}} \overline{m}_{5} ):\quad \left[ {\begin{array}{*{20}r} \hfill {33.7526} & \hfill {64.8666} & \hfill {15.0458} \\ \hfill {14.0736} & \hfill {28.2913} & \hfill {31.8448} \\ \hfill {9.3808} & \hfill {39.8433} & \hfill {115.0912} \\ \end{array} } \right] \\ & {\text{HOD}}_{{6}} ({\text{I}} \overline{m}_{7} - {\text{I}} \overline{m}_{6} ):\quad \left[ {\begin{array}{*{20}r} \hfill {22.7693} & \hfill {5.2597} & \hfill {21.7617} \\ \hfill {17.4715} & \hfill {43.4547} & \hfill {55.1274} \\ \hfill {27.4709} & \hfill {25.1926} & \hfill {27.0477} \\ \end{array} } \right] \\ & {\text{HOD}}_{{7}} ({\text{I}} \overline{m}_{8} - {\text{I}} \overline{m}_{7} ):\quad \left[ {\begin{array}{*{20}r} \hfill {6.7122} & \hfill {11.0661} & \hfill {0.3253} \\ \hfill {55.5478} & \hfill {81.3769} & \hfill {0.0589} \\ \hfill {43.3111} & \hfill {37.8186} & \hfill {101.8904} \\ \end{array} } \right] \\ & {\text{HOD}}_{{8}} ({\text{I}} \overline{m}_{9} - {\text{I}} \overline{m}_{8} ):\quad \left[ {\begin{array}{*{20}r} \hfill {19.113} & \hfill {30.8104} & \hfill {23.1435} \\ \hfill {24.6784} & \hfill {45.8676} & \hfill {62.7606} \\ \hfill {1.6954} & \hfill {15.1858} & \hfill {58.8516} \\ \end{array} } \right] \\ & {\text{HOD}}_{{9}} ({\text{I}} \overline{m}_{10} - {\text{I}} \overline{m}_{9} ):\quad \left[ {\begin{array}{*{20}r} \hfill {39.9681} & \hfill {26.6008} & \hfill {32.4421} \\ \hfill {27.354} & \hfill {6.0415} & \hfill {53.5233} \\ \hfill {6.627} & \hfill {39.5426} & \hfill {9.3865} \\ \end{array} } \right] \\ \end{aligned} $$
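The difference matrices above can be checked with a one-line sketch (the listed values correspond to the absolute element-wise difference of consecutive \({\text{I}} \overline{m}\) matrices; for example \(|157.6249 - 154.7874| = 2.8375\)):

```python
import numpy as np

Im1 = np.array([[154.7874, 112.9673, 131.6365],
                [75.1241,  82.0974,  126.5324],
                [130.5848, 138.0633, 99.0258]])
Im2 = np.array([[157.6249, 144.745,  146.3916],
                [126.5922, 124.9895, 123.0693],
                [150.0865, 124.6629, 128.7595]])

HOD1 = np.abs(Im2 - Im1)   # matches the HOD_1 matrix listed above
```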

Step 1.4: Data sampling.

With N = 10 input images (Images 1 to 10 as above) and sample size \(\kappa = 4\), applying formula number (4) gives \(M = \frac{10 - 4}{{4\left( {1 - 0.5} \right)}} + 1 = 4\).

Sample 1:

Training: From Image 1 to 3; Validation: Image 3; Testing: Image 4.

Sample 2:

Training: From Image 3 to 5; Validation: Image 5; Testing: Image 6.

Sample 3:

Training: From Image 5 to 7; Validation: Image 7; Testing: Image 8.

Sample 4:

Training: From Image 7 to 9; Validation: Image 9; Testing: Image 10.
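A short sketch of this sampling arithmetic (the window shifts by \(\kappa(1 - 0.5) = 2\) images; the train/validation/test assignment inside each window is read off the listing above and is therefore an assumption of this sketch):

```python
N, k, overlap = 10, 4, 0.5
M = int((N - k) / (k * (1 - overlap)) + 1)   # 4 samples

step = int(k * (1 - overlap))                # shift of 2 images per sample
samples = []
for s in range(M):
    window = list(range(s * step + 1, s * step + k + 1))   # e.g. [1, 2, 3, 4]
    samples.append({"train": window[:3], "val": window[2], "test": window[3]})
# samples[0] -> train [1, 2, 3], val 3, test 4, matching Sample 1 above.
```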

Step 2: Using fuzzy C-means to cluster the input data with both the real and the imaginary parts of each sample.

With sample 1:

Using the FCM clustering algorithm to cluster simultaneously on \({\text{I}} \overline{m}^{(t)}\) and HOD, we get the corresponding pairs of values.

$$ \begin{aligned} & X_{1} :\quad \left[ {\begin{array}{*{20}l} {\left( {154.7874, \, 2.8375} \right)} \hfill & {\left( {112.9673, \, 31.7777} \right)} \hfill & {\left( {131.6365, \, 14.7551} \right)} \hfill \\ {\left( {75.1241, \, 51.4681} \right)} \hfill & {\left( {82.0974, \, 42.8921} \right)} \hfill & {\left( {126.5324, \, 3.4631} \right)} \hfill \\ {\left( {130.5848, \, 19.5017} \right)} \hfill & {\left( {138.0633, \, 13.4004} \right)} \hfill & {\left( {99.0258, \, 29.7337} \right)} \hfill \\ \end{array} } \right] \\ & X_{2} :\quad \left[ {\begin{array}{*{20}l} {\left( {157.6249, \, 3.5864} \right)} \hfill & {\left( {144.745, \, 42.0067} \right)} \hfill & {\left( {146.3916, \, 18.2097} \right)} \hfill \\ {\left( {126.5922, \, 1.5117} \right)} \hfill & {\left( {124.9895, \, 10.5501} \right)} \hfill & {\left( {123.0693, \, 33.8004} \right)} \hfill \\ {\left( {150.0865, \, 8.2939} \right)} \hfill & {\left( {124.6629, \, 4.6154} \right)} \hfill & {\left( {128.7595, \, 52.6341} \right)} \hfill \\ \end{array} } \right] \\ & X_{3} :\quad \left[ {\begin{array}{*{20}l} {\left( {154.0385, \, 8.5538} \right)} \hfill & {\left( {102.7383, \, 0.1169} \right)} \hfill & {\left( {164.6013, \, 39.6379} \right)} \hfill \\ {\left( {128.1039, \, 36.4391} \right)} \hfill & {\left( {114.4394, \, 45.7044} \right)} \hfill & {\left( {156.8697, \, 73.2578} \right)} \hfill \\ {\left( {158.3804, \, 36.4142} \right)} \hfill & {\left( {120.0475, \, 10.3392} \right)} \hfill & {\left( {181.3936, \, 94.5063} \right)} \hfill \\ \end{array} } \right] \\ \end{aligned} $$
Input:

  • Number of clusters: 2

  • Value m = 2

  • EPS (Threshold of difference between two consecutive iterations) = 0.001

  • Number of iterations t: 3

Output:

  • Membership matrix: U

  • Center vector: V

Step 2.1: Transform the \(X^{(t)}\) and HOD values into the range [0, 1]

$$ \begin{aligned} & {\text{X}}_{1}^{\prime } :\quad \left[ {\begin{array}{*{20}l} {\left( {0.607, \, 0.0111} \right)} \hfill & {\left( {0.443, \, 0.1246} \right)} \hfill & {\left( {0.5162, \, 0.0579} \right)} \hfill \\ {\left( {0.2946, \, 0.2018} \right)} \hfill & {\left( {0.322, \, 0.1682} \right)} \hfill & {\left( {0.4962, \, 0.0136} \right)} \hfill \\ {\left( {0.5121, \, 0.0765} \right)} \hfill & {\left( {0.5414, \, 0.0526} \right)} \hfill & {\left( {0.3883, \, 0.1166} \right)} \hfill \\ \end{array} } \right] \\ & {\text{X}}_{2}^{\prime } :\quad \left[ {\begin{array}{*{20}l} {\left( {0.6181, \, 0.0141} \right)} \hfill & {\left( {0.5676, \, 0.1647} \right)} \hfill & {\left( {0.5741, \, 0.0714} \right)} \hfill \\ {\left( {0.4964, \, 0.0059} \right)} \hfill & {\left( {0.4902, \, 0.0414} \right)} \hfill & {\left( {0.4826, \, 0.1326} \right)} \hfill \\ {\left( {0.5886, \, 0.0325} \right)} \hfill & {\left( {0.4889, \, 0.0181} \right)} \hfill & {\left( {0.5049, \, 0.2064} \right)} \hfill \\ \end{array} } \right] \\ & {\text{X}}_{3}^{\prime } :\quad \left[ {\begin{array}{*{20}l} {\left( {0.6041, \, 0.0335} \right)} \hfill & {\left( {0.4029, \, 0.0005} \right)} \hfill & {\left( {0.6455, \, 0.1554} \right)} \hfill \\ {\left( {0.5024, \, 0.1429} \right)} \hfill & {\left( {0.4488, \, 0.1792} \right)} \hfill & {\left( {0.6152, \, 0.2873} \right)} \hfill \\ {\left( {0.6211, \, 0.1428} \right)} \hfill & {\left( {0.4708, \, 0.0405} \right)} \hfill & {\left( {0.7113, \, 0.3706} \right)} \hfill \\ \end{array} } \right] \\ \end{aligned} $$

Step 2.2: Randomly initialize the cluster center vectors

such that:

  • \(V_{j}\) is the center vector

    $$ V_{j1} \in \left( {\min Xi,\ldots\max Xi} \right);V_{j2} \in \left( {\min HODi,\ldots\max HODi} \right) $$
    $$ V_{1}^{(0)} = \,\left[ {\begin{array}{*{20}r} \hfill {0.1416} \\ \hfill {0.0024} \\ \end{array} } \right];\quad V_{2}^{(0)} = \,\left[ {\begin{array}{*{20}r} \hfill {0.1744} \\ \hfill {0.0113} \\ \end{array} } \right] $$

Step 2.3: Calculate U by center vector V

$$ \begin{aligned} U_{kj} & = \frac{1}{{\sum\limits_{i = 1}^{C} {\left( {\frac{{\left\| {X_{k} - V_{j} } \right\|}}{{\left\| {X_{k} - V_{i} } \right\|}}} \right)^{{\frac{2}{m - 1}}} } }} \\ U_{11} & = \frac{1}{{\left( {\frac{{\sqrt {\left( {X_{11} - V_{11} } \right)^{2} + \left( {X_{12} - V_{12} } \right)^{2} } }}{{\sqrt {\left( {X_{11} - V_{11} } \right)^{2} + \left( {X_{12} - V_{12} } \right)^{2} } }}} \right)^{{\frac{2}{2 - 1}}} + \left( {\frac{{\sqrt {\left( {X_{11} - V_{11} } \right)^{2} + \left( {X_{12} - V_{12} } \right)^{2} } }}{{\sqrt {\left( {X_{11} - V_{21} } \right)^{2} + \left( {X_{12} - V_{22} } \right)^{2} } }}} \right)^{{\frac{2}{2 - 1}}} }} \\ U_{12} & = \frac{1}{{\left( {\frac{{\sqrt {\left( {X_{11} - V_{21} } \right)^{2} + \left( {X_{12} - V_{22} } \right)^{2} } }}{{\sqrt {\left( {X_{11} - V_{11} } \right)^{2} + \left( {X_{12} - V_{12} } \right)^{2} } }}} \right)^{{\frac{2}{2 - 1}}} + \left( {\frac{{\sqrt {\left( {X_{11} - V_{21} } \right)^{2} + \left( {X_{12} - V_{22} } \right)^{2} } }}{{\sqrt {\left( {X_{11} - V_{21} } \right)^{2} + \left( {X_{12} - V_{22} } \right)^{2} } }}} \right)^{{\frac{2}{2 - 1}}} }} \\ U_{1}^{(0)} & = \left[ {\begin{array}{*{20}r} \hfill {0.4634} & \hfill {0.4455} & \hfill {0.4535} \\ \hfill {0.4454} & \hfill {0.436} & \hfill {0.4514} \\ \hfill {0.4531} & \hfill {0.4565} & \hfill {0.4348} \\ \end{array} } \right] \\ U_{2}^{(0)} & = \left[ {\begin{array}{*{20}r} \hfill {0.5366} & \hfill {0.5545} & \hfill {0.5465} \\ \hfill {0.5546} & \hfill {0.564} & \hfill {0.5486} \\ \hfill {0.5469} & \hfill {0.5435} & \hfill {0.5652} \\ \end{array} } \right] \\ \end{aligned} $$

Step 2.4: Update Center vector V

$$ \begin{aligned} V_{J1} & = \frac{{\sum\limits_{k = 1}^{N} {\mathop U\nolimits_{kj}^{m} \times X_{k} } }}{{\sum\limits_{k = 1}^{N} {\mathop U\nolimits_{kj}^{m} } }};\quad V_{J2} = \frac{{\sum\limits_{k = 1}^{N} {\mathop U\nolimits_{kj}^{m} \times HOD_{k} } }}{{\sum\limits_{k = 1}^{N} {\mathop U\nolimits_{kj}^{m} } }} \\ V_{j1}^{(1)} & = \left[ {\begin{array}{*{20}r} \hfill {0.4612} \\ \hfill {0.0896} \\ \end{array} } \right];\quad V_{j2}^{(1)} = \left[ {\begin{array}{*{20}r} \hfill {0.4551} \\ \hfill {0.0929} \\ \end{array} } \right] \\ \end{aligned} $$

Calculate the difference between \(V^{(1)}\) and \(V^{(0)}\) using the Euclidean distance

$$ \begin{aligned} \left\| {V^{(1)} - V^{(0)} } \right\| & = \sqrt {\sum\nolimits_{l = 1}^{2} {(V_{jl}^{(1)} - V_{jl}^{(0)} )^{2} } } \\ & = \sqrt {(V_{j1}^{(1)} - V_{j1}^{(0)} )^{2} + (V_{j2}^{(1)} - V_{j2}^{(0)} )^{2} } \\ & = \sqrt {(V_{11}^{(1)} - V_{11}^{(0)} )^{2} + (V_{12}^{(1)} - V_{12}^{(0)} )^{2} + (V_{21}^{(1)} - V_{21}^{(0)} )^{2} + (V_{22}^{(1)} - V_{22}^{(0)} )^{2} } \\ & = \, 0.{5665} \\ \end{aligned} $$

Step 2.5: Repeat step 2.3 and step 2.4 while both of the following conditions are satisfied.

  • Condition 1: The number of iterations is less than the maximum number of iterations (3)

  • Condition 2: \(\left\| {V^{(t)} \, - \,V^{(t - 1)} \,} \right\|\,\, > \,\,EPS\left( {0.001} \right)\)

Current iteration count = 1 and \(\left\| {V^{(1)} \, - \,V^{(0)} \,} \right\|\, = 0.5665 > EPS\).

Continue to the second iteration:

$$ \begin{aligned} U_{1}^{(1)} & = \left[ {\begin{array}{*{20}r} \hfill {0.5568} & \hfill {0.5347} & \hfill {0.5326} \\ \hfill {0.3696} & \hfill {0.3698} & \hfill {0.4666} \\ \hfill {0.5506} & \hfill {0.5499} & \hfill {0.4282} \\ \end{array} } \right] \\ U_{2}^{(1)} & = \left[ {\begin{array}{*{20}r} \hfill {0.4432} & \hfill {0.4653} & \hfill {0.4674} \\ \hfill {0.6304} & \hfill {0.6302} & \hfill {0.5334} \\ \hfill {0.4494} & \hfill {0.4501} & \hfill {0.5718} \\ \end{array} } \right] \\ V_{1}^{(2)} & = \left[ {\begin{array}{*{20}r} \hfill {0.4333} \\ \hfill {0.0785} \\ \end{array} } \right];\quad V_{2}^{(2)} = \left[ {\begin{array}{*{20}r} \hfill {0.4317} \\ \hfill {0.1052} \\ \end{array} } \right] \\ \end{aligned} $$

Calculate the difference between \(V^{(2)}\) and \(V^{(1)}\) using the Euclidean distance

$$ \left\| {V^{(2)} \, - \,V^{(1)} \,} \right\|\,\, = 0.5097 $$

Current iteration count = 2 and \(\left\| {V^{(2)} \, - \,V^{(1)} \,} \right\|\, = 0.5097\, > \,EPS\).

Continue to the third iteration:

$$ \begin{aligned} U_{1}^{(2)} & = \left[ {\begin{array}{*{20}r} \hfill {0.5328} & \hfill {0.1851} & \hfill {0.5624} \\ \hfill {0.4496} & \hfill {0.4392} & \hfill {0.6058} \\ \hfill {0.5398} & \hfill {0.545} & \hfill {0.3668} \\ \end{array} } \right] \\ U_{2}^{(2)} & = \left[ {\begin{array}{*{20}r} \hfill {0.4672} & \hfill {0.8149} & \hfill {0.4376} \\ \hfill {0.5504} & \hfill {0.5608} & \hfill {0.3942} \\ \hfill {0.4602} & \hfill {0.455} & \hfill {0.6332} \\ \end{array} } \right] \\ V_{1}^{(3)} & = \left[ {\begin{array}{*{20}r} \hfill {0.4798} \\ \hfill {0.0743} \\ \end{array} } \right];\quad V_{{_{2} }}^{(3)} = \,\left[ {\begin{array}{*{20}r} \hfill {0.4386} \\ \hfill {0.1073} \\ \end{array} } \right] \\ \end{aligned} $$

Calculate the difference between \(V^{(3)}\) and \(V^{(2)}\) using the Euclidean distance

$$ \left\| {V^{(3)} - V^{(2)} } \right\| = \, 0.0{472} $$

Current iteration count = 3 and \(\left\| {V^{(3)} - V^{(2)} } \right\| = 0.0{472} > EPS\).

\(\to\) Stop: the maximum number of iterations has been reached.

The result:

$$ \begin{aligned} U_{1} & = \left[ {\begin{array}{*{20}r} \hfill {0.5328} & \hfill {0.1851} & \hfill {0.5624} \\ \hfill {0.4496} & \hfill {0.4392} & \hfill {0.6058} \\ \hfill {0.5398} & \hfill {0.545} & \hfill {0.3668} \\ \end{array} } \right] \\ U_{2} & = \left[ {\begin{array}{*{20}r} \hfill {0.4672} & \hfill {0.8149} & \hfill {0.4376} \\ \hfill {0.5504} & \hfill {0.5608} & \hfill {0.3942} \\ \hfill {0.4602} & \hfill {0.455} & \hfill {0.6332} \\ \end{array} } \right] \\ V_{1} & = \left[ {\begin{array}{*{20}r} \hfill {0.4798} \\ \hfill {0.0743} \\ \end{array} } \right];\quad V_{2} = \left[ {\begin{array}{*{20}r} \hfill {0.4386} \\ \hfill {0.1073} \\ \end{array} } \right] \\ \end{aligned} $$

Same with the rest of the samples (Sample 2, Sample 3).
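The loop of steps 2.3–2.5 can be reproduced with a compact FCM sketch (NumPy assumed; the nine normalised (X′, HOD) pairs of image X1 and the random initial centres of step 2.2 are taken from above; rounded intermediate values may differ slightly from the figures shown):

```python
import numpy as np

def fcm(points, V0, m=2.0, eps=1e-3, max_iter=3):
    """Fuzzy C-means over 2-D points (real part, imaginary part), steps 2.3-2.5."""
    V = V0.copy()
    for _ in range(max_iter):
        # Step 2.3: membership of each point in each cluster.
        d = np.linalg.norm(points[:, None, :] - V[None, :, :], axis=2)          # (n, C)
        d = np.maximum(d, 1e-12)
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        # Step 2.4: update the cluster centres.
        Um = U ** m
        V_new = (Um.T @ points) / Um.sum(axis=0)[:, None]
        if np.linalg.norm(V_new - V) <= eps:                                    # step 2.5
            return U, V_new
        V = V_new
    return U, V

X1 = np.array([[0.607, 0.0111], [0.443, 0.1246], [0.5162, 0.0579],
               [0.2946, 0.2018], [0.322, 0.1682], [0.4962, 0.0136],
               [0.5121, 0.0765], [0.5414, 0.0526], [0.3883, 0.1166]])
V0 = np.array([[0.1416, 0.0024], [0.1744, 0.0113]])   # initial centres from step 2.2
U, V = fcm(X1, V0)   # U[0, 0] ~ 0.4634, as in the first membership matrix above
```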

Step 3: Generate and Aggregate Spatial CFIS+ rules from clustering results.

  1. (a) Rule generation with Sample 1 and Image \(X_{1}\):

Initialize with all values \(\alpha_{j} = 1;j \in 1,2,\ldots R;\alpha_{j} \in \left( {\alpha_{j}^{a} ,\alpha_{j,}^{b} \alpha_{j}^{c} ,\alpha_{j}^{{a^{\prime}}} ,\alpha_{j}^{{b^{\prime}}} ,\alpha_{j}^{{c^{\prime}}} } \right)\).

Applying formulas (5)–(10), we get:

$$ \begin{aligned} b_{1} & = V_{11} ;\quad b_{2} = V_{21} ;\quad b^{\prime}_{1} = V_{12} ;\quad b^{\prime}_{2} = V_{22} ; \\ b_{kj} & = \left[ {0.4798,\,\,0.4386} \right] \\ b^{\prime}_{kj} & = \left[ {0.0743,\,\,0.1073} \right] \\ \end{aligned} $$
$$ \begin{aligned} a_{kj} & = 1 \times \left( {\frac{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, X_{i}^{(k)} \le b_{ij} }} {U_{{\text{i,j}}} \times X_{i}^{(k)} } }}{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, X_{i}^{(k)} \, \le \, b_{ij} }} {U_{{\text{i,j}}} } }}} \right) \\ a_{1} & = 0.4375;\quad a_{2} = \,0.4271 \\ \end{aligned} $$
$$ \begin{aligned} a^{\prime}_{kj} & = 1 \times \left( {\frac{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, {\text{HOD}}_{i}^{(k)} \, \le \, b_{ij} }} {U_{{\text{i,j}}} \times {\text{HOD}}_{i}^{(k)} } }}{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, {\text{HOD}}_{i}^{(k)} \, \le \, b_{ij} }} {U_{{\text{i,j}}} } }}} \right) \\ a^{\prime}_{1} & = 0.0181;\quad a^{\prime}_{2} = \,0.0386 \\ \end{aligned} $$
$$ \begin{aligned} c_{kj} & = 1 \times \left( {\frac{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, X_{i}^{(k)} \, \ge \, b_{ij} }} {U_{{\text{i,j}}} \times {\kern 1pt} X_{i}^{(k)} } }}{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, X_{i}^{(k)} \, \ge \, b_{ij} }} {U_{{\text{i,j}}} } }}} \right) \\ c_{1} & = 0.542;\quad c_{2} = 0.5157 \\ \end{aligned} $$
$$ \begin{aligned} c^{\prime}_{kj} & = 1\, \times \left( {\frac{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, {\text{HOD}}_{i}^{(k)} \, \ge \, b_{ij} }} {U_{{\text{i,j}}} \times {\kern 1pt} {\text{HOD}}_{i}^{(k)} } }}{{\sum\nolimits_{{i = 1,2, \ldots n\, \, {\text{and}}\, \, {\text{HOD}}_{i}^{(k)} \, \ge \, b_{ij} }} {U_{{\text{i,j}}} } }}} \right) \\ c^{\prime}_{1} & = 0.1011;\quad c^{\prime}_{2} = \,0.1404 \\ \end{aligned} $$
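The boundary points and weighted means above can be transcribed directly into code; this is only a sketch of formulas (5)–(10) as written, and the exact index sets the authors use (for instance, whether all images of the sample contribute to the sums) are not fully specified in this appendix, so it may not reproduce \(a_{1} = 0.4375\) exactly:

```python
import numpy as np

def rule_params(x, u, b, alpha=1.0):
    """Weighted means of the values below/above the boundary point b (cluster centre)."""
    below, above = x <= b, x >= b
    a = alpha * np.sum(u[below] * x[below]) / np.sum(u[below])
    c = alpha * np.sum(u[above] * x[above]) / np.sum(u[above])
    return a, b, c

# x: normalised pixel values, u: their memberships in one cluster, b = V_j from step 2.
```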

Applying the same computation to the imaginary part and to the remaining data, we obtain the following rules for the first input data.

Rule 1

Include six parameters a, b, c and \(a^{\prime } , \, b^{\prime } , \, c^{\prime }\).

a, b, c are the coordinates of the first triangle of the real part and \(a^{\prime } , \, b^{\prime } , \, c^{\prime }\) are the coordinates of the first triangle of the imaginary part (see Fig. 12, "Rule space 1 spatial CFIS of image X1").

$$ (a,\,b,\,c,\,a^{\prime},\,b^{\prime},\,c^{\prime}) = \left[ {a_{1} ,\,b_{1} ,c_{1} ,a^{\prime}_{1} ,b^{\prime}_{1} ,c^{\prime}_{1} } \right] = \left[ {{0}{\text{.4375, 0}}{.4798, 0}{\text{.542, 0}}{.0181, 0}{\text{.0743, 0}}{.1011}} \right] $$

The value area is in the bottom surface \(({\mathbf{AA}}^{\prime } {\mathbf{C}}^{\prime } {\mathbf{BC}})\) where:

$$ A\left( {0,a_{1} ,0} \right);A^{\prime } \left( {a_{1}^{\prime } ,0,0} \right);B^{\prime } \left( {b_{1}^{\prime } ,b_{1} ,1} \right);C\left( {0,c_{1} ,0} \right);C^{\prime } \left( {c_{1}^{\prime } ,0,0} \right); B\left( {b_{1}^{\prime } ,b_{1} ,0} \right) $$

Rule 2

Include six parameters a, b, c and \(a^{\prime } , \, b^{\prime } , \, c^{\prime }\).

a, b, c are the coordinates of the second triangle of the real part and \(a^{\prime } , \, b^{\prime } , \, c^{\prime }\) are the coordinates of the second triangle of the imaginary part (see Fig. 13, "Rule space 2 spatial CFIS of image X1").

$$ (a,\,b,\,c,\,a^{\prime},\,b^{\prime},\,c^{\prime}) = \left[ {a_{2} ,\,b_{2} ,c_{2} ,a^{\prime}_{2} ,b^{\prime}_{2} ,c^{\prime}_{2} } \right] = \left[ {0.4271,\ 0.4386,\ 0.5157,\ 0.0386,\ 0.1073,\ 0.1404} \right] $$

The value area is in the bottom surface \(({\mathbf{AA}}^{\prime } {\mathbf{C}}^{\prime } {\mathbf{BC}})\) where

$$ A\left( {0,a_{2} ,0} \right);A^{\prime } \left( {a_{2}^{\prime } ,0,0} \right);B^{\prime } \left( {b_{2}^{\prime } ,b_{2} ,1} \right);C\left( {0,c_{2} ,0} \right);C^{\prime } \left( {c_{2}^{\prime } ,0,0} \right); B\left( {b_{2}^{\prime } ,b_{2} ,0} \right) $$

The same procedure is applied to the rest of the images:

  • Sample 1 Image X1, X2, X3

  • Sample 2 Image X3, X4, X5

  • Sample 3 Image X5, X6, X7

  • Sample 4 Image X7, X8, X9

  • Rule aggregation

At each image of a sample, rules are generated and added to that sample's rule set. Each sample has its own rule set, but its parameters are inherited from the previous sample.

Example: for Sample 1, all parameter values are initialized; for Sample 2, the values are not re-initialized but are taken from the parameters obtained with Sample 1.

Step 4: Calculate the interpolated value and synthesize the next predicted Image.

Step 4.1. Inference membership function.

Based on the complex fuzzy rule system in triangular space (Spatial CFIS), determine the value of the membership function of Image \(X_{1}^{\prime }\)

$$ X_{1}^{\prime } :\quad \left[ {\begin{array}{*{20}l} {\left( {0.607, \, 0.0111} \right)} \hfill & {\left( {0.443, \, 0.1246} \right)} \hfill & {\left( {0.5162, \, 0.0579} \right)} \hfill \\ {\left( {0.2946, \, 0.2018} \right)} \hfill & {\left( {0.322, \, 0.1682} \right)} \hfill & {\left( {0.4962, \, 0.0136} \right)} \hfill \\ {\left( {0.5121, \, 0.0765} \right)} \hfill & {\left( {0.5414, \, 0.0526} \right)} \hfill & {\left( {0.3883, \, 0.1166} \right)} \hfill \\ \end{array} } \right] $$
  • - With the first pixel (0.607, 0.0111) and rule 1.

  •  + We call the first pixel in the rule space D, which will have the following value D (0.607, 0.0111). (Point D on the bottom surface AA′C′BC).

  •  + Since point D is outside the bounds, we need to move point D into the 1st rule space with the coefficient \(\mu = \,1.7\)

  •  + Draw line BD intersecting line AA′ at point E.

  •  + Let F be a point satisfying the condition F in the plane AAB′ and DF perpendicular to the base.

Then, the height DF is the value of the degree of membership U of the first pixel D (0.607, 0.0111) in the first rule space (see Fig. 14, "Interpolation of an image point in the first rule space"):

$$ \frac{DF}{{BB^{\prime } }} = \frac{DE}{{BE}}\, \to \,DF = \frac{{BB^{\prime} \times DE}}{BE} = \frac{1 \times 0.0087}{{0.1093}} = 0.0796 $$

  • - With the first pixel (0.607, 0.0111) and rule 2.

  •  + We again call the first pixel in the rule space D, which will have the value D (0.607, 0.0111). (Point D is on the bottom surface AA′C′BC.)

  •  + Since point D is outside the bounds, we need to move point D into 2nd rule space with \(\mu = \,1.7\)

  •  + Draw line BD intersecting line AA′ at point E.

  •  + Let F be the point satisfying the condition F on the plane AAB′ and DF perpendicular to the base surface.

Then, the height DF is the value of the degree of membership U of the first pixel D (0.607, 0.0111) in the second rule space (see Fig. 15, "Interpolation of an image point in the second rule space"):

$$ \frac{DF}{{BB^{\prime}}} = \frac{DE}{{BE}}\, \to \,DF = \frac{{BB^{\prime} \times DE}}{BE} = \frac{1 \times 0.0002}{{ 0.1296}} = 0.0015 $$

The same applies to the rest of the pixels.

  • - With the second input value pair (0.443, 0.1246).

  •  + The degree of belonging of the second pixel to the first rule, DF = 0.3693.

  •  + The degree of belonging of the second pixel to the second rule, DF = 0.5375.

  • - With the third input value pair (0.5162, 0.0579).

  •  + The degree of belonging of the third pixel to the first rule, DF = 0.179.

  •  + The degree of belonging of the third pixel to the second rule, DF = 0.2116.

  • - With the fourth input value pair (0.2946, 0.2018).

  •  + The degree of belonging of the fourth pixel to the first rule, DF = 0.2457.

  •  + The degree of belonging of the fourth pixel to the second rule, DF = 0.3952.

  • - With the fifth input value pair (0.322, 0.1682).

  •  + The degree of belonging of the fifth pixel to the first rule, DF = 0.2684.

  •  + The degree of belonging of the fifth pixel to the second rule, DF = 0.4319.

  • - With the sixth input value pair (0.4962, 0.0136).

  •  + The degree of belonging of the sixth pixel to the first rule, DF = 0.1831.

  •  + The degree of belonging of the sixth pixel to the second rule, DF = 0.127.

  • - With the seventh input value pair (0.5121, 0.0765).

  •  + The degree of belonging of the seventh pixel to the first rule, DF = 0.2758.

  •  + The degree of belonging of the seventh pixel to the second rule, DF = 0.3064.

  • - With the eighth input value pair (0.5414, 0.0526).

  •  + The degree of belonging of the eighth pixel to the first rule, DF = 0.1564.

  •  + The degree of belonging of the eighth pixel to the second rule, DF = 0.1951.

  • - With the ninth input value pair (0.3883, 0.1166).

  •  + The degree of belonging of the ninth pixel to the first rule, DF = 0.3238.

  •  + The degree of belonging of the ninth pixel to the second rule, DF = 0.4673.

Step 4.2. Determine rule coefficient \(\beta_{i}\).

Initializing the values \(\beta_{ij} = 1,\,\forall i \in 1,2,\ldots L;j \in 1,2,\ldots R + 1\) and applying formula (12), where the values \(w_{ij}\) are the interpolation (membership) values of each pixel in the corresponding rule, we have:

The first pixel:

$$ \begin{aligned} W_{1} & = \frac{{\beta_{11} \times w_{11} + \beta_{12} \times w_{12} + \beta_{13} }}{{\beta_{11} + \beta_{12} + \beta_{13} }} \\ & = \frac{1 \times 0.0796 + 1 \times 0.0015 + 1}{{1 + 1 + 1}} = 0.3604 \\ \end{aligned} $$

The second pixel:

$$ W_{2} = \frac{{\beta_{21} \times w_{21} + \beta_{22} \times w_{22} + \beta_{23} }}{{\beta_{21} + \beta_{22} + \beta_{23} }} = \frac{1 \times 0.3693 + 1 \times 0.5375 + 1}{{1 + 1 + 1}} = 0.6356 $$

The third pixel:

$$ W_{3} = \frac{{\beta_{31} \times w_{31} + \beta_{32} \times w_{32} + \beta_{33} }}{{\beta_{31} + \beta_{32} + \beta_{33} }} = \frac{1 \times 0.1790 + 1 \times 0.2116 + 1}{{1 + 1 + 1}} = 0.4635 $$

The fourth pixel:

$$ W_{4} = \frac{{\beta_{41} \times w_{41} + \beta_{42} \times w_{42} + \beta_{43} }}{{\beta_{41} + \beta_{42} + \beta_{43} }} = \frac{1 \times 0.2457 + 1 \times 0.3952 + 1}{{1 + 1 + 1}} = 0.547 $$

The fifth pixel:

$$ W_{5} = \frac{{\beta_{51} \times w_{51} + \beta_{52} \times w_{52} + \beta_{53} }}{{\beta_{51} + \beta_{52} + \beta_{53} }} = \frac{1 \times 0.2684 + 1 \times 0.4319 + 1}{{1 + 1 + 1}} = 0.5668 $$

The sixth pixel:

$$ W_{6} = \frac{{\beta_{61} \times w_{61} + \beta_{62} \times w_{62} + \beta_{63} }}{{\beta_{61} + \beta_{62} + \beta_{63} }} = \frac{1 \times 0.1831 + 1 \times 0.127 + 1}{{1 + 1 + 1}} = 0.4367 $$

The seventh pixel:

$$ W_{7} = \frac{{\beta_{71} \times w_{71} + \beta_{72} \times w_{72} + \beta_{73} }}{{\beta_{71} + \beta_{72} + \beta_{73} }} = \frac{1 \times 0.2758 + 1 \times 0.3064 + 1}{{1 + 1 + 1}} = 0.5274 $$

The eighth pixel:

$$ W_{8} = \frac{{\beta_{81} \times w_{81} + \beta_{82} \times w_{82} + \beta_{83} }}{{\beta_{81} + \beta_{82} + \beta_{83} }} = \frac{1 \times 0.1564 + 1 \times 0.1951 + 1}{{1 + 1 + 1}} = 0.4505 $$

The ninth pixel:

$$ W_{9} = \frac{{\beta_{91} \times w_{91} + \beta_{92} \times w_{92} + \beta_{93} }}{{\beta_{91} + \beta_{92} + \beta_{93} }} = \frac{1 \times 0.3238 + 1 \times 0.4673 + 1}{{1 + 1 + 1}} = 0.597 $$
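The nine rule coefficients above follow directly from formula (12) with all \(\beta_{ij} = 1\); a short check (NumPy assumed, w holds the two membership degrees of each pixel from step 4.1):

```python
import numpy as np

w = np.array([[0.0796, 0.0015], [0.3693, 0.5375], [0.1790, 0.2116],
              [0.2457, 0.3952], [0.2684, 0.4319], [0.1831, 0.1270],
              [0.2758, 0.3064], [0.1564, 0.1951], [0.3238, 0.4673]])
beta = np.ones((9, 3))                     # beta_i1, beta_i2, beta_i3 all initialised to 1

# W_i = (beta_i1*w_i1 + beta_i2*w_i2 + beta_i3) / (beta_i1 + beta_i2 + beta_i3)
W = (beta[:, 0] * w[:, 0] + beta[:, 1] * w[:, 1] + beta[:, 2]) / beta.sum(axis=1)
# -> [0.3604, 0.6356, 0.4635, 0.547, 0.5668, 0.4367, 0.5274, 0.4505, 0.597]
```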

Step 4.3. Determine defuzzification coefficient.

Initializing \(\left( {h_{1j} , \, h_{2j} , \, h_{3j} , \, h^{\prime}_{1j} , \, h^{\prime}_{2j} , \, h^{\prime}_{3j} } \right) = \left( {1,2,1,1,2,1} \right),\,\forall j \in 1,2,\ldots ,\,R\), and applying formulas (13) and (14), we have:

DEF of rule 1:

$$ \begin{aligned} {\text{DEF}}_{1} (X_{1} ) & = \frac{1 \times 0.4375 + 2 \times 0.4798 + 1 \times 0.542}{{1 + 2 + 1}} = 0.4848 \\ {\text{DEF}}_{1} ({\text{HOD}}_{1} ) & = \,\frac{1 \times 0.0181 + 2 \times 0.0743 + 1 \times 0.1011}{{1 + 2 + 1}} = 0.067 \\ \end{aligned} $$

DEF of rule 2:

$$ \begin{aligned} {\text{DEF}}_{2} (X_{1} ) & = \frac{1 \times 0.4271 + 2 \times 0.4386 + 1 \times 0.5157}{{1 + 2 + 1}} = 0.455 \\ {\text{DEF}}_{2} ({\text{HOD}}_{1} ) & = \frac{1 \times 0.0386 + 2 \times 0.1073 + 1 \times 0.1404}{{1 + 2 + 1}} = 0.0984 \\ \end{aligned} $$

Step 4.4. Determine dependence coefficient \(\gamma\).

(*) The real-part prediction of the next image is determined by formula (16), as follows:

$$ O_{{1.{\text{Rel}}}}^{*} = \frac{{\left( {W_{1} \times {\text{DEF}}_{1} \left( {X_{1} } \right) + W_{1} \times {\text{DEF}}_{2} \left( {X_{1} } \right)} \right)}}{2} = \frac{{\left( {0.3604 \times 0.4848 + 0.3604 \times 0.455} \right)}}{2} = 0.1694 $$

Similarly, we have:

$$ O_{{i.{\text{Rel}}}}^{*} = \left[ {\begin{array}{*{20}l} {O_{{1.{\text{Rel}}}}^{*} } \hfill & {O_{{2.{\text{Rel}}}}^{*} } \hfill & {O_{{3.{\text{Rel}}}}^{*} } \hfill \\ {O_{{4.{\text{Rel}}}}^{*} } \hfill & {O_{{5.{\text{Rel}}}}^{*} } \hfill & {O_{{6.{\text{Rel}}}}^{*} } \hfill \\ {O_{{7.{\text{Rel}}}}^{*} } \hfill & {O_{{8.{\text{Rel}}}}^{*} } \hfill & {O_{{9.{\text{Rel}}}}^{*} } \hfill \\ \end{array} } \right] = \left[ {\begin{array}{*{20}l} {0.1694} \hfill & {0.2987} \hfill & {0.2178} \hfill \\ {0.257} \hfill & {0.2663} \hfill & {0.2052} \hfill \\ {0.2478} \hfill & {0.2117} \hfill & {0.2805} \hfill \\ \end{array} } \right] $$

The inferred difference value of the predicted image \(O_{{i.{\text{Img}}}}^{*}\) is calculated by formula (18), as follows:

$$ \begin{aligned} O_{{1.{\text{Img}}}}^{*} & = \frac{{\left( {W_{1} \times {\text{DEF}}_{1} \left( {{\text{HOD}}_{1} } \right) + W_{1} \times {\text{DEF}}_{2} \left( {{\text{HOD}}_{1} } \right)} \right)}}{2} \\ & = \frac{{\left( {0.3604 \times 0.067 + 0.3604 \times 0.0984} \right)}}{2} = 0.0298 \\ \end{aligned} $$

Similarly, we have:

$$ O_{{i.{\text{Img}}}}^{*} = \left[ {\begin{array}{*{20}c} {O_{{1.{\text{Img}}}}^{*} } & {O_{{2.{\text{Img}}}}^{*} } & {O_{{3{\text{Img}}}}^{*} } \\ {O_{{4.{\text{Img}}}}^{*} } & {O_{{5.{\text{Img}}}}^{*} } & {O_{{6.{\text{Img}}}}^{*} } \\ {O_{{7.{\text{Img}}}}^{*} } & {O_{{8.{\text{Img}}}}^{*} } & {O_{{9.{\text{Img}}}}^{*} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}r} \hfill {0.0298} & \hfill {0.0526} & \hfill {0.0383} \\ \hfill {0.0452} & \hfill {0.0469} & \hfill {0.0361} \\ \hfill {0.0436} & \hfill {0.0373} & \hfill {0.0494} \\ \end{array} } \right] $$

(**) The phase-part prediction result \(O_{{i.{\text{Img}}}}^{{*^{\prime}}}\) is calculated based on formula (17), as follows:

$$ O_{{1.{\text{Img}}}}^{{*^{\prime}}} = 0.607 \times (1 + 0.0298) = 0.6251 $$

Similarly, we have:

$$ O_{{i.{\text{Img}}}}^{{*^{\prime } }} = \left[ {\begin{array}{*{20}c} {O_{{1.{\text{Img}}}}^{{*^{\prime } }} } & {O_{{2.{\text{Img}}}}^{{*^{\prime } }} } & {O_{{3{\text{Img}}}}^{{*^{\prime } }} } \\ {O_{{4.{\text{Img}}}}^{{*^{\prime } }} } & {O_{{5.{\text{Img}}}}^{{*^{\prime } }} } & {O_{{6.{\text{Img}}}}^{{*^{\prime } }} } \\ {O_{{7.{\text{Img}}}}^{{*^{\prime } }} } & {O_{{8.{\text{Img}}}}^{{*^{\prime } }} } & {O_{{9.{\text{Img}}}}^{{*^{\prime } }} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}r} \hfill {0.6251} & \hfill {0.4663} & \hfill {0.536} \\ \hfill {0.3079} & \hfill {0.3371} & \hfill {0.5141} \\ \hfill {0.5344} & \hfill {0.5616} & \hfill {0.4075} \\ \end{array} } \right] $$

Initialize \(\gamma\) = 0.5. The final prediction \(O_{i}^{*}\) for each representative pixel of the next image is obtained by combining the real-part and phase-part predictions according to formula (15), as follows:

$$ O_{1}^{*} = \,0.5 \times 0.1694 + (1 - 0.5) \times 0.6251 = 0.3973 $$

Similarly, we have:

$$ O_{{}}^{*} = \left[ {\begin{array}{*{20}c} {O_{1}^{*} } & {O_{2}^{*} } & {O_{3.}^{*} } \\ {O_{4}^{*} } & {O_{5}^{*} } & {O_{6}^{*} } \\ {O_{7}^{*} } & {O_{8}^{*} } & {O_{9}^{*} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}r} \hfill {0.3973} & \hfill {0.3825} & \hfill {0.3769} \\ \hfill {0.2825} & \hfill {0.3017} & \hfill {0.3597} \\ \hfill {0.3911} & \hfill {0.3867} & \hfill {0.344} \\ \end{array} } \right] $$

Return to normal space

$$ O^{*} = \left[ {\begin{array}{*{20}r} \hfill {101.2988} & \hfill {97.5375} & \hfill {96.1095} \\ \hfill {72.0248} & \hfill {76.9335} & \hfill {91.7108} \\ \hfill {99.7305} & \hfill {98.5958} & \hfill {87.72} \\ \end{array} } \right] $$
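Steps 4.3 and 4.4 for the first pixel can be checked numerically with the sketch below (γ = 0.5; the "return to normal space" step is assumed to be multiplication by 255, the inverse of the normalisation in step 2.1, which matches the values shown up to rounding):

```python
W1 = 0.3604
h1, h2, h3 = 1.0, 2.0, 1.0                               # defuzzification coefficients

def DEF(a, b, c):
    """Formulas (13)-(14): weighted average of the triangle coordinates."""
    return (h1 * a + h2 * b + h3 * c) / (h1 + h2 + h3)

DEF1_X, DEF2_X = DEF(0.4375, 0.4798, 0.542), DEF(0.4271, 0.4386, 0.5157)   # 0.4848, 0.455
DEF1_H, DEF2_H = DEF(0.0181, 0.0743, 0.1011), DEF(0.0386, 0.1073, 0.1404)  # 0.067, 0.0984

O_rel   = W1 * (DEF1_X + DEF2_X) / 2              # formula (16): 0.1694
O_img   = W1 * (DEF1_H + DEF2_H) / 2              # formula (18): 0.0298
O_phase = 0.607 * (1 + O_img)                     # formula (17): 0.6251
gamma   = 0.5
O1      = gamma * O_rel + (1 - gamma) * O_phase   # formula (15): 0.3973
O1_gray = O1 * 255                                # ~101.3, "return to normal space"
```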

Step 4.5. Final predict result.

After obtaining the final prediction \(O_{i}^{*}\) of each representative image point, we calculate the neighborhood points of each central representative point according to formula (19).

With the representative image point \(O_{1}^{*}\):

$$ \begin{aligned} X_{1}^{db} & = \left| {\frac{1}{{\kappa_{1} \times d_{1} }} - O_{{\left\lceil {1/9} \right\rceil }}^{*} } \right| = \left| {\frac{1}{0.035 \times 1} - 101.2988} \right| = 72.7 \\ X_{2}^{db} & = \left| {\frac{1}{{\kappa_{2} \times d_{2} }} - O_{{\left\lceil {2/9} \right\rceil }}^{*} } \right| = \left| {\frac{1}{0.011 \times 1} - 101.2988} \right| = 10.4 \\ X_{3}^{db} & = \left| {\frac{1}{{\kappa_{3} \times d_{3} }} - O_{{\left\lceil {3/9} \right\rceil }}^{*} } \right| = \left| {\frac{1}{0.3216 \times 1} - 101.2988} \right| = 98.2 \\ X_{4}^{db} & = \left| {\frac{1}{{\kappa_{4} \times d_{4} }} - O_{{\left\lceil {4/9} \right\rceil }}^{*} } \right| = \left| {\frac{1}{0.0159 \times 1} - 101.2988} \right| = 38.4 \\ X_{5}^{db} & = \left| {\frac{1}{{\kappa_{5} \times d_{5} }} - O_{{\left\lceil {5/9} \right\rceil }}^{*} } \right| = \left| {\frac{1}{0.0088 \times 1} - 101.2988} \right| = 12.3 \\ X_{6}^{db} & = \left| {\frac{1}{{\kappa_{6} \times d_{6} }} - O_{{\left\lceil {6/9} \right\rceil }}^{*} } \right| = \left| {\frac{1}{0.0424 \times 1} - 101.2988} \right| = 77.7 \\ X_{7}^{db} & = \left| {\frac{1}{{\kappa_{7} \times d_{7} }} - O_{{\left\lceil {7/9} \right\rceil }}^{*} } \right| = \left| {\frac{1}{0.124 \times 1} - 101.2988} \right| = 93.2 \\ X_{8}^{db} & = \left| {\frac{1}{{\kappa_{8} \times d_{8} }} - O_{{\left\lceil {8/9} \right\rceil }}^{*} } \right| = \left| {\frac{1}{0.4038 \times 1} - 101.2988} \right| = 98.8 \\ X_{9}^{db} & = \left| {\frac{1}{{\kappa_{9} \times d_{9} }} - O_{{\left\lceil {9/9} \right\rceil }}^{*} } \right| = \left| {\frac{1}{0.0375 \times 1} - 101.2988} \right| = 74.6 \\ \end{aligned} $$

From \(O_{1}^{*}\), restore the nine neighboring image points as follows:

$$ O_{1}^{*} = \left[ {\begin{array}{*{20}r} \hfill {72.7} & \hfill {10.4} & \hfill {98.2} \\ \hfill {38.4} & \hfill {12.3} & \hfill {77.7} \\ \hfill {93.2} & \hfill {98.8} & \hfill {74.6} \\ \end{array} } \right] $$
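A sketch of this restoration step (formula (19) with \(d_{i} = 1\); the nine \(\kappa\) values are the top-left 3 × 3 block of \(\kappa_{1}\), i.e. the pixels represented by \(O_{1}^{*}\)):

```python
import numpy as np

kappa = np.array([[0.0350, 0.0110, 0.3216],
                  [0.0159, 0.0088, 0.0424],
                  [0.1240, 0.4038, 0.0375]])
d = 1.0
O1_star = 101.2988

X_db = np.abs(1.0 / (kappa * d) - O1_star)
# -> [[72.7, 10.4, 98.2], [38.4, 12.3, 77.7], [93.2, 98.8, 74.6]] after rounding
```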

Proceed similarly with the remaining representative image points \(O_{2}^{*} ,O_{3}^{*} ,O_{4}^{*} ,O_{5}^{*} ,O_{6}^{*} ,O_{7}^{*} ,O_{8}^{*} \,{\text{and}}\,O_{9}^{*}\). We get the next predicted image as follows:

$$ X_{{}}^{db} = \left[ {\begin{array}{*{20}l} {72.7} \hfill & {10.4} \hfill & {98.2} \hfill & {58.7} \hfill & {169} \hfill & {72.6} \hfill & {79.6} \hfill & {83.1} \hfill & {89.8} \hfill \\ {38.4} \hfill & {12.3} \hfill & {77.7} \hfill & {100.2} \hfill & {169} \hfill & {77.3} \hfill & {82.9} \hfill & {61.8} \hfill & {98.6} \hfill \\ {93.2} \hfill & {98.8} \hfill & {74.6} \hfill & {52.5} \hfill & {111.5} \hfill & {29.4} \hfill & {95.4} \hfill & {62.8} \hfill & {95.4} \hfill \\ {98.7} \hfill & {89} \hfill & {74.6} \hfill & {78} \hfill & {77} \hfill & {74.3} \hfill & {97.3} \hfill & {83.4} \hfill & {93.5} \hfill \\ {84.7} \hfill & {95.1} \hfill & {90.1} \hfill & {94.8} \hfill & {75} \hfill & {81.8} \hfill & {73} \hfill & {98.7} \hfill & {68.4} \hfill \\ {88.3} \hfill & {77.9} \hfill & {85.6} \hfill & {97.9} \hfill & {97.9} \hfill & {81.8} \hfill & {67.7} \hfill & {71.3} \hfill & {81.7} \hfill \\ {73.3} \hfill & {84.6} \hfill & {95.9} \hfill & {82.8} \hfill & {92.6} \hfill & {66} \hfill & {93} \hfill & {84.5} \hfill & {91.1} \hfill \\ {92.9} \hfill & {80.1} \hfill & {98.6} \hfill & {81.4} \hfill & {78.6} \hfill & {82} \hfill & {92.8} \hfill & {97.4} \hfill & {84.3} \hfill \\ {87.1} \hfill & {76.8} \hfill & {85.8} \hfill & {99.1} \hfill & {70} \hfill & {95.1} \hfill & {91.2} \hfill & {93.3} \hfill & {86} \hfill \\ \end{array} } \right] $$

Step 5: Simultaneous training of the parameters in the model (Co-learning)

After the training process, we have a suitable set of parameters X for the next iteration.

$$ X = \left[ {\begin{array}{*{20}l} {72.7} \hfill & {10.4} \hfill & {98.2} \hfill & {58.7} \hfill & {169} \hfill & {72.6} \hfill & {79.6} \hfill & {83.1} \hfill & {89.8} \hfill \\ {38.4} \hfill & {12.3} \hfill & {77.7} \hfill & {100.2} \hfill & {169} \hfill & {77.3} \hfill & {82.9} \hfill & {61.8} \hfill & {98.6} \hfill \\ {93.2} \hfill & {98.8} \hfill & {74.6} \hfill & {52.5} \hfill & {111.5} \hfill & {29.4} \hfill & {95.4} \hfill & {62.8} \hfill & {95.4} \hfill \\ {98.7} \hfill & {89} \hfill & {74.6} \hfill & {78} \hfill & {77} \hfill & {74.3} \hfill & {97.3} \hfill & {83.4} \hfill & {93.5} \hfill \\ {84.7} \hfill & {95.1} \hfill & {90.1} \hfill & {94.8} \hfill & {75} \hfill & {81.8} \hfill & {73} \hfill & {98.7} \hfill & {68.4} \hfill \\ {88.3} \hfill & {77.9} \hfill & {85.6} \hfill & {97.9} \hfill & {97.9} \hfill & {81.8} \hfill & {67.7} \hfill & {71.3} \hfill & {81.7} \hfill \\ {73.3} \hfill & {84.6} \hfill & {95.9} \hfill & {82.8} \hfill & {92.6} \hfill & {66} \hfill & {93} \hfill & {84.5} \hfill & {91.1} \hfill \\ {92.9} \hfill & {80.1} \hfill & {98.6} \hfill & {81.4} \hfill & {78.6} \hfill & {82} \hfill & {92.8} \hfill & {97.4} \hfill & {84.3} \hfill \\ {87.1} \hfill & {76.8} \hfill & {85.8} \hfill & {99.1} \hfill & {70} \hfill & {95.1} \hfill & {91.2} \hfill & {93.3} \hfill & {86} \hfill \\ \end{array} } \right] $$

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Giang, L.T., Son, L.H., Giang, N.L. et al. A new co-learning method in spatial complex fuzzy inference systems for change detection from satellite images. Neural Comput & Applic 35, 4519–4548 (2023). https://doi.org/10.1007/s00521-022-07928-5
