
A Multi-agent System to Assist with Real Estate Appraisals Using Bagging Ensembles

  • Conference paper
Computational Collective Intelligence. Semantic Web, Social Networks and Multiagent Systems (ICCCI 2009)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5796)

Abstract

The paper presents MAREA, a multi-agent system for real estate appraisals, extended with aggregating agents that build ensemble models using the bagging approach. The major part of the study investigates to what extent bagging can improve the accuracy of machine learning regression models. Four algorithms implemented in the KEEL tool were used in the experiments: linear regression, decision trees for regression, support vector machines, and an artificial neural network of the MLP type. The results show that bagging ensembles achieved higher prediction accuracy than single models.
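As a minimal sketch of the bagging approach the abstract describes (not the paper's MAREA implementation, and using a simple least-squares base learner rather than KEEL's algorithms; the helper names `fit_linear` and `bagging_predict` are illustrative), each base model is trained on a bootstrap resample of the data and the predictions are averaged:

```python
import random
import statistics

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b (the base regressor).
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b = my - a * mx
    return a, b

def bagging_predict(xs, ys, x_new, n_models=25, rng=None):
    # Bagging: fit each base model on a bootstrap sample
    # (drawn with replacement), then aggregate by averaging.
    rng = rng or random.Random(42)
    n = len(xs)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap indices
        a, b = fit_linear([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a * x_new + b)
    return statistics.fmean(preds)
```

Averaging over resampled fits reduces the variance of an unstable base learner, which is the mechanism behind the accuracy gains the paper reports for bagged regression models.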




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lasota, T., Telec, Z., Trawiński, B., Trawiński, K. (2009). A Multi-agent System to Assist with Real Estate Appraisals Using Bagging Ensembles. In: Nguyen, N.T., Kowalczyk, R., Chen, S.M. (eds.) Computational Collective Intelligence. Semantic Web, Social Networks and Multiagent Systems. ICCCI 2009. Lecture Notes in Computer Science, vol. 5796. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04441-0_71


  • DOI: https://doi.org/10.1007/978-3-642-04441-0_71

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04440-3

  • Online ISBN: 978-3-642-04441-0

  • eBook Packages: Computer Science (R0)
