Abstract
The Markov Chain Monte Carlo (MCMC) family of methods forms a valuable part of the toolbox of social modeling and prediction techniques, enabling modelers to generate samples and summary statistics of a population of interest from minimal information. It has been used successfully to model changes over time in many types of social systems, including patterns of disease spread, adolescent smoking, and geopolitical conflicts. In MCMC, an initial proposal distribution is iteratively refined until it approximates the posterior distribution; however, the choice of proposal distribution can have a significant impact on model convergence. In this paper, we propose a new hybrid modeling technique in which an agent-based model is used to initialize the proposal distribution of the MCMC simulation. We demonstrate our technique in an urban transportation prediction scenario and show that the hybrid model produces more accurate predictions than either of the parent models.
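The coupling the abstract describes can be sketched in miniature: a toy agent-based simulation supplies the starting state for a random-walk Metropolis sampler, in place of an arbitrary initialization. This is an illustrative assumption, not the paper's actual transportation model; the ABM here (agents taking unit random steps on a line) and the standard-normal target are stand-ins chosen only to make the pattern runnable.

```python
import math
import random

def abm_initial_state(n_agents=200, steps=50, seed=0):
    """Toy agent-based model: agents take unit +/-1 steps on a line.
    The mean final position seeds the MCMC chain (a hypothetical
    stand-in for a domain-specific ABM such as a transportation model)."""
    rng = random.Random(seed)
    positions = [0.0] * n_agents
    for _ in range(steps):
        positions = [p + rng.choice((-1.0, 1.0)) for p in positions]
    return sum(positions) / n_agents

def metropolis_hastings(log_target, x0, n_samples=5000, step=1.0, seed=1):
    """Random-walk Metropolis sampler targeting exp(log_target)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal)/target(x)).
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Illustrative target: a standard normal log-density.
log_target = lambda x: -0.5 * x * x

x0 = abm_initial_state()                 # ABM supplies the chain's start
samples = metropolis_hastings(log_target, x0)
burned = samples[1000:]                  # discard burn-in
mean = sum(burned) / len(burned)
```

A well-chosen ABM start places the chain in a high-probability region of the posterior, shortening the burn-in period; the sketch uses a fixed 1000-sample burn-in purely for simplicity.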
© 2013 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Beheshti, R., Sukthankar, G. (2013). Improving Markov Chain Monte Carlo Estimation with Agent-Based Models. In: Greenberg, A.M., Kennedy, W.G., Bos, N.D. (eds) Social Computing, Behavioral-Cultural Modeling and Prediction. SBP 2013. Lecture Notes in Computer Science, vol 7812. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37210-0_54
DOI: https://doi.org/10.1007/978-3-642-37210-0_54
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-37209-4
Online ISBN: 978-3-642-37210-0
eBook Packages: Computer Science (R0)