
Smooth statistical modeling of bivariate non-monotonic data by a three-stage LUT neural system

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

The present paper introduces a new statistical data modeling algorithm based on artificial neural systems. The procedure abstracts from datasets by working on their probability density functions. The proposed method strives to capture the overall structure of the analyzed data, exhibits competitive computational runtimes, and may be applied to non-monotonic real-world data (building on a previously developed isotonic neural modeling algorithm). An outstanding feature of the proposed method is its ability to return a smoother model than other modeling algorithms. Smooth models are of interest in engineering and computer science; in fact, the present research was motivated by an image contour resampling problem that arises in shape analysis. The features of the proposed algorithm are illustrated and compared to those of existing algorithms by means of numerical tests on shape resampling.


Notes

  1. It is tacitly assumed that, while the points in the dataset \({{\mathbb {D}}}\) are generally not evenly spaced, the resampled coordinate \(\hat{x}\) is evenly spaced in \({{\mathbb {M}}}\), although this assumption is not strictly necessary for the discussed modeling algorithm to work.

  2. The shape contour datasets used within the present paper were drawn from the Surrey fish database described in http://www.ee.surrey.ac.uk/CVSSP/demos/css/demo.html. Unfortunately, these datasets are no longer available from the server. We can make them available to interested readers upon request.

  3. The idea behind the RMSR index is as follows. Denote by z(x) the profile ordinate of a line parameterized by x. A completely flat (i.e., minimally rough) profile is characterized by \({\mathrm{d}}z/{\mathrm{d}}x=0\), hence by the cumulative quantity \(\int ({\mathrm{d}}z/{\mathrm{d}}x)^2{\mathrm{d}}x=0\). Conversely, the more uneven/rough a profile is, the larger \(({\mathrm{d}}z/{\mathrm{d}}x)^2\) becomes, and the larger the integral \(\int ({\mathrm{d}}z/{\mathrm{d}}x)^2{\mathrm{d}}x\) results. We took a numerical approximation of this integral as a measure of the roughness of a shape, namely the sum of the terms \((\hat{\xi }_i-\hat{\xi }_{i-1})^2\) and \((\hat{\eta }_i-\hat{\eta }_{i-1})^2\) (see the sketch after these notes).

  4. We used the MATLAB function interp1 with the syntax yh = interp1(x,y,xh,method), where (x,y) is a dataset, yh is the result of interpolation at 'query points' xh, and method is either 'linear' or 'spline' (a usage example follows these notes).

  5. The derivative \(\kappa (s)=\theta '(s)\) represents the local curvature of the planar shape described by the orientation function \(\theta (s)\) (a standard identity relating these quantities is recalled after these notes).

  6. The above procedure uses the same syntax as MATLAB®'s function interp1.
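
As a concrete illustration of Note 3, the following MATLAB sketch evaluates the roughness index on a resampled contour. It assumes a root-mean-square normalization of the summed squared increments, and the function name rmsr is ours; the exact normalization used in the paper may differ.

    % Sketch of the RMSR roughness index of Note 3 (our reading; the
    % normalization used in the paper may differ).
    % xh, yh: resampled contour coordinates (the terms \hat{\xi}_i, \hat{\eta}_i).
    function r = rmsr(xh, yh)
        dx = diff(xh);                  % increments \hat{\xi}_i - \hat{\xi}_{i-1}
        dy = diff(yh);                  % increments \hat{\eta}_i - \hat{\eta}_{i-1}
        r = sqrt(mean(dx.^2 + dy.^2));  % root-mean-square of the increments
    end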
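For reference, here is a self-contained example of the interp1 call of Note 4 on synthetic data (the data values are illustrative):

    x  = [0 1 2 4 7];                         % unevenly spaced abscissas
    y  = [0 2 1 3 2];                         % corresponding ordinates
    xh = linspace(0, 7, 50);                  % evenly spaced query points
    yh_linear = interp1(x, y, xh, 'linear');  % piecewise-linear interpolation
    yh_spline = interp1(x, y, xh, 'spline');  % cubic-spline interpolation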
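A standard identity behind Note 5, recalled here for convenience (it is not stated in the paper): for a contour \((x(s),y(s))\) parameterized by arc length, so that \(\dot{x}^2+\dot{y}^2=1\), the orientation function and curvature satisfy \(\theta (s)=\mathrm{atan2}(\dot{y}(s),\dot{x}(s))\) and \(\kappa (s)=\theta '(s)=\dot{x}(s)\,\ddot{y}(s)-\dot{y}(s)\,\ddot{x}(s)\).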

References

  1. Abdolahzare Z, Mehdizadeh SA (2016) Nonlinear mathematical modeling of seed spacing uniformity of a pneumatic planter using genetic programming and image processing. Neural Comput Appl. https://doi.org/10.1007/s00521-016-2450-1

  2. Barlow RE, Brunk HD (1972) The isotonic regression problem and its dual. J Am Stat Assoc 67(337):140–147

  3. Conolly RB, Lutz WK (2003) Nonmonotonic dose-response relationships: mechanistic basis, kinetic modeling, and implications for risk assessment. Toxicol Sci 77:151–157

  4. Domínguez-Menchero JS, González-Rodríguez G (2007) Analyzing an extension of the isotonic regression problem. Metrika 66:19–30

  5. Duren RW, Marks RJ II, Reynolds PD, Trumbo ML (2007) Real-time neural network inversion on the SRC-6e reconfigurable computer. IEEE Trans Neural Netw 18(3):889–901

  6. El-Shafie A, Abdelazim T, Noureldin A (2010) Neural network modeling of time-dependent creep deformations in masonry structures. Neural Comput Appl 19(4):583–594

  7. Esaki L, Arakawa Y, Kitamura M (2010) Esaki diode is still a radio star, half a century on. Nature 464:31

  8. Evrendilek F (2014) Assessing neural networks with wavelet denoising and regression models in predicting diel dynamics of eddy covariance-measured latent and sensible heat fluxes and evapotranspiration. Neural Comput Appl 24(2):327–337

  9. Feng Y-J, Lin Z-X, Zhang R-Z (2011) The influence of root mean square phase gradient of continuous phase plate on smoothing focal spot. Acta Phys Sin 60(10):104202 (in Chinese)

  10. Fiori S (2002) Hybrid independent component analysis by adaptive LUT activation function neurons. Neural Netw 15(1):85–94

  11. Fiori S (2013) Fast statistical regression in presence of a dominant independent variable. Neural Comput Appl 22:1367–1378

  12. Fiori S (2013) An isotonic trivariate statistical regression method. Adv Data Anal Classif 7:209–235

  13. Fiori S (2014) Fast closed-form trivariate statistical isotonic modeling. Electron Lett 50:708–710

  14. Fiori S, Gong T, Lee HK (2015) Bivariate non-isotonic statistical regression by a look-up table neural system. Cognit Comput 7(6):715–730

  15. Goetghebeur E, Lapp K (1997) The effect of treatment compliance in a placebo-controlled trial: regression with unpaired data. J R Stat Soc Ser C (Appl Stat) 46(3):351–364

  16. Gujarati DN, Porter DC (2009) Basic econometrics, 5th edn. McGraw-Hill, New York, pp 73–78. ISBN 978-0-07-337577-9

  17. Guo WW, Xue H (2012) An incorporative statistic and neural approach for crop yield modelling and forecasting. Neural Comput Appl 21(1):109–117

  18. Hájek P, Olej V (2011) Credit rating modelling by kernel-based approaches with supervised and semi-supervised learning. Neural Comput Appl 20(6):761–773

  19. Isermann R, Münchhof M (2011) Neural networks and lookup tables for identification. In: Identification of dynamic systems—an introduction with applications. Springer, Berlin, pp 501–537. ISBN 978-3-540-78878-2

  20. Klassen E, Srivastava A, Mio W (2004) Analysis of planar shapes using geodesic paths on shape spaces. IEEE Trans Pattern Anal Mach Intell 26:372–383

  21. Kotłowski W, Słowiński R (2009) Rule learning with monotonicity constraints. In: Proceedings of the 26th international conference on machine learning (Montreal, Canada), pp 537–544

  22. Laudani A, Lozito GM, Riganti Fulginei F, Salvini A (2015) On training efficiency and computational costs of a feed forward neural network: a review. Comput Intell Neurosci. Article ID 818243. https://doi.org/10.1155/2015/818243

  23. Luss R, Rosset S (2014) Generalized isotonic regression. J Comput Graph Stat 23(1):192–210

  24. Motulsky HJ, Ransnas LA (1987) Fitting curves to data using nonlinear regression: a practical and nonmathematical review. Fed Am Soc Exp Biol (FASEB) J 1(5):365–374

  25. Najah A, El-Shafie A, Karim OA, El-Shafie AH (2013) Application of artificial neural networks for water quality prediction. Neural Comput Appl 22(1):187–201

  26. Niu K, Zhao F, Qiao X (2013) A missing data imputation algorithm in wireless sensor network based on minimized similarity distortion. In: Proceedings of the sixth international symposium on computational intelligence and design (Hangzhou, People’s Republic of China, October 28–29, 2013), vol 2, pp 235–238

  27. Owolabi TO, Zakariya YF, Olatunji SO, Akande KO (2016) Estimation of melting points of fatty acids using homogeneously hybridized support vector regression. Neural Comput Appl. https://doi.org/10.1007/s00521-016-2344-2

  28. Prabhu S, Uma M, Vinayagam BK (2015) Surface roughness prediction using Taguchi-fuzzy logic-neural network analysis for CNT nanofluids based grinding process. Neural Comput Appl 26(1):41–55

  29. Saâdaoui F (2017) A seasonal feedforward neural network to forecast electricity prices. Neural Comput Appl 28(4):835–847

  30. Simsion G, Witt G (2005) Data modeling essentials, 3rd edn. Morgan Kaufmann, Los Altos

  31. Smith M (1996) Neural networks for statistical modeling. International Thomson Computer Press, London

  32. Specht DF (1991) A general regression neural network. IEEE Trans Neural Netw 2(6):568–576

  33. Tamminen S, Laurinen P, Röning J (1999) Comparing regression trees with neural networks in aerobic fitness approximation. In: Proceedings of the international computing sciences conference symposium on advances in intelligent data analysis (Rochester, 1999), pp 414–419

  34. Tibshirani RJ, Hoefling H, Tibshirani R (2011) Nearly-isotonic regression. Technometrics 53(1):54–61

  35. Vansteenkiste E (2015) Lookup-table based neurons for an efficient FPGA implementation of neural networks. Master Thesis of the computing systems lab (CSL), Electronics and information systems (ELIS) department, Ghent University (Belgium)

  36. Wu J, Meyer MC, Opsomer JD (2015) Penalized isotonic regression. J Stat Plan Inference 161:1–24

  37. Yu R, Leung PS, Bienfang P (2006) Predicting shrimp growth: artificial neural network versus nonlinear regression models. Aquac Eng 34(1):26–32

Acknowledgements

The authors gratefully thank the anonymous reviewers, whose thorough reading of the manuscript and detailed comments helped greatly in improving the quality of this paper.

Author information

Corresponding author

Correspondence to Simone Fiori.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Appendix: MATLAB implementation of the novel regression procedure

A MATLAB® implementation of the modeling procedure devised in the present research and illustrated in Algorithm 3 is reported in Fig. 10. The MATLAB® code illustrates the simple structure of the proposed method.

Fig. 10 MATLAB® code that implements the three-stage neural modeling algorithm

The MATLAB® function inputs the triple (Sx,Sy,xx) as three arrays. The array pair (Sx,Sy) represents the dataset \({{\mathbb {D}}}\) to be modeled as a look-up table, and the array xx represents the set of \(\hat{x}\)-values whose corresponding \(\hat{y}\)-values are sought (see Note 6). The function returns an array yy that represents the estimated model \({{\mathbb {M}}}\) as a LUT neural network (xx,yy).
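
As a usage illustration, assuming the appendix function is named smooth_lut_model (a hypothetical name; the actual name appears in the code of Fig. 10):

    Sx = sort(rand(200,1)*2*pi);        % unevenly spaced sample abscissas
    Sy = sin(Sx) + 0.1*randn(200,1);    % noisy, non-monotonic ordinates
    xx = linspace(0, 2*pi, 100)';       % evenly spaced model abscissas
    yy = smooth_lut_model(Sx, Sy, xx);  % estimated LUT model (hypothetical name)
    plot(Sx, Sy, '.', xx, yy, '-')      % dataset versus smooth model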

In the above version of the statistical isotonic modeling procedure, the number of subdivisions used for probability density function estimation is selected automatically by the rules on line 5 of the code and does not necessarily coincide with the number R of partitions of the x-axis used for model estimation. Line 7 computes the lifting-upward constant c in Eq. (8), while lines 9 and 26 perform the lifting upward and downward, respectively. Lines 11–24 are borrowed from the isotonic modeling procedure described in [11]. A schematic sketch of this three-stage structure follows.
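
The following MATLAB sketch summarizes this three-stage structure as we read it; it is not the code of Fig. 10, the computation of c from Eq. (8) is omitted, and isotonic_model is a placeholder for the isotonic core of [11]:

    % Schematic three-stage structure (a sketch under the stated
    % assumptions, not the actual implementation of Fig. 10).
    function yy = three_stage_sketch(Sx, Sy, xx, c)
        Syl = Sy + c*Sx;                    % stage 1: lifting upward (cf. line 9)
        yyl = isotonic_model(Sx, Syl, xx);  % stage 2: isotonic LUT modeling (cf. lines 11-24, [11])
        yy  = yyl - c*xx;                   % stage 3: lifting downward (cf. line 26)
    end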


Cite this article

Fiori, S., Fioranelli, N. Smooth statistical modeling of bivariate non-monotonic data by a three-stage LUT neural system. Neural Comput & Applic 30, 1353–1368 (2018). https://doi.org/10.1007/s00521-017-3215-1
