Abstract
The paper presents an extension of MAREA, a multi-agent system for real estate appraisals, with aggregating agents capable of creating ensemble models based on the bagging approach. The major part of the study was devoted to investigating to what extent bagging can improve the accuracy of machine learning regression models. Four algorithms implemented in the KEEL tool were used in the experiments: linear regression, decision trees for regression, support vector machines, and artificial neural networks of the MLP type. The results showed that bagging ensembles provided higher prediction accuracy than single models.
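To illustrate the bagging approach examined in the study, a minimal sketch is given below. It is not the authors' MAREA or KEEL implementation; the decision tree regressor merely stands in for any of the four base learners. Each base model is trained on a bootstrap sample drawn with replacement, and the ensemble prediction is the average of the base models' outputs.

# Minimal sketch of bagging for regression (illustrative only, not the MAREA/KEEL code).
import numpy as np
from sklearn.tree import DecisionTreeRegressor  # stand-in for any KEEL base regressor

def bagging_fit(X, y, n_models=20, random_state=0):
    """Train n_models base regressors, each on a bootstrap sample of (X, y)."""
    rng = np.random.default_rng(random_state)
    n = len(X)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)              # sample indices with replacement
        models.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Ensemble prediction: average the base regressors' outputs."""
    return np.mean([m.predict(X) for m in models], axis=0)

# Usage (with X_train, y_train, X_test as numpy arrays):
#   ensemble = bagging_fit(X_train, y_train)
#   y_pred = bagging_predict(ensemble, X_test)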
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Lasota, T., Telec, Z., Trawiński, B., Trawiński, K. (2009). A Multi-agent System to Assist with Real Estate Appraisals Using Bagging Ensembles. In: Nguyen, N.T., Kowalczyk, R., Chen, S.M. (eds) Computational Collective Intelligence. Semantic Web, Social Networks and Multiagent Systems. ICCCI 2009. Lecture Notes in Computer Science, vol 5796. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04441-0_71
DOI: https://doi.org/10.1007/978-3-642-04441-0_71
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-04440-3
Online ISBN: 978-3-642-04441-0
eBook Packages: Computer Science (R0)