
Empirical Comparison of Resampling Methods Using Genetic Neural Networks for a Regression Problem

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6679)

Abstract

This paper presents an investigation of m-out-of-n bagging, with and without replacement, using genetic neural networks. The study was conducted with a newly developed Matlab system for generating and testing hybrid and ensemble models of computational intelligence with different resampling methods. All experiments were carried out on real-world data drawn from a cadastral system and a registry of real estate transactions. The performance of the following methods was compared: classic bagging, out-of-bag, Efron's .632 correction, and repeated holdout. The overall result of the investigation was that the bagging ensembles built with genetic neural networks achieved prediction accuracy no worse than the experts' method used in practice.
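
The resampling schemes named above are standard, and the following minimal sketch shows how they relate: m-out-of-n bagging with or without replacement, out-of-bag error estimation, and Efron's .632 correction. It is written in Python with an ordinary decision-tree regressor standing in for the paper's genetic neural networks; the function names and the synthetic data are illustrative assumptions, not the authors' Matlab system.

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

def m_out_of_n_bagging(X, y, m, n_models=50, replace=True, seed=0):
    """Train an ensemble on m-out-of-n resamples; also report the out-of-bag error."""
    rng = np.random.default_rng(seed)
    n = len(y)
    models, oob_errors = [], []
    for _ in range(n_models):
        idx = rng.choice(n, size=m, replace=replace)   # bootstrap (replace=True) or subsample
        oob = np.setdiff1d(np.arange(n), idx)          # cases left out of this resample
        model = DecisionTreeRegressor(random_state=0).fit(X[idx], y[idx])
        models.append(model)
        if len(oob) > 0:
            oob_errors.append(mean_squared_error(y[oob], model.predict(X[oob])))
    return models, float(np.mean(oob_errors))

def bagged_predict(models, X):
    """Average the component models' predictions (classic bagging for regression)."""
    return np.mean([m.predict(X) for m in models], axis=0)

# Usage on synthetic data standing in for the real estate transaction records.
X = np.random.default_rng(1).normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + np.random.default_rng(2).normal(size=200)

models, err_oob = m_out_of_n_bagging(X, y, m=len(y), replace=True)   # classic bagging (m = n)
err_resub = mean_squared_error(y, bagged_predict(models, X))         # resubstitution error
err_632 = 0.368 * err_resub + 0.632 * err_oob                        # Efron's .632 estimate
print(f"OOB: {err_oob:.3f}  resubstitution: {err_resub:.3f}  .632: {err_632:.3f}")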




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lasota, T., Telec, Z., Trawiński, G., Trawiński, B. (2011). Empirical Comparison of Resampling Methods Using Genetic Neural Networks for a Regression Problem. In: Corchado, E., Kurzyński, M., Woźniak, M. (eds.) Hybrid Artificial Intelligent Systems. HAIS 2011. Lecture Notes in Computer Science (LNAI), vol. 6679. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21222-2_26


  • DOI: https://doi.org/10.1007/978-3-642-21222-2_26

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21221-5

  • Online ISBN: 978-3-642-21222-2

  • eBook Packages: Computer Science, Computer Science (R0)
