A Bayesian Approach to Attention Control and Concept Abstraction

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 4840))

Abstract

Representing and modeling knowledge in the face of uncertainty has always been a challenge in artificial intelligence. Graphical models are an apt way of representing uncertainty, and hidden variables in this framework offer a means of abstracting knowledge. Hidden variables can represent concepts that reveal the relations among observed phenomena and, through structure learning, capture their cause-and-effect relationships. Our concern is mainly with concept learning by situated agents, which learn throughout their lifetime and attend to important states in order to maximize their expected reward. We therefore present an algorithm for sequential learning of Bayesian networks with hidden variables. The proposed algorithm builds on recent advances in learning hidden-variable networks in the batch setting, and combines approaches that allow sequential learning of both the parameters and the structure of the network. The incremental nature of this algorithm enables an agent to learn gradually over its lifetime as data is gathered progressively. Furthermore, inference becomes feasible when facing a large corpus of data that cannot be handled as a whole.
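The sequential-learning idea in the abstract, updating a Bayesian network's parameters one data case at a time rather than in batch, can be illustrated with a minimal sketch. This is not the authors' algorithm: structure search and hidden-variable handling (e.g., via EM) are omitted, and the class name and interface are hypothetical. It shows only the incremental update of a single node's conditional probability table using Dirichlet pseudo-counts.

```python
from collections import defaultdict


class SequentialCPT:
    """Incremental parameter learning for one discrete node of a
    Bayesian network, via Dirichlet pseudo-counts.

    Each arriving data case increments sufficient statistics, so the
    full corpus never needs to be held in memory at once.
    """

    def __init__(self, num_states, alpha=1.0):
        self.num_states = num_states  # arity of this node
        self.alpha = alpha            # Dirichlet prior pseudo-count
        # counts[parent_config][state] = number of cases observed so far
        self.counts = defaultdict(lambda: [0.0] * num_states)

    def update(self, state, parent_config):
        """Absorb one data case: increment the matching count."""
        self.counts[parent_config][state] += 1.0

    def prob(self, state, parent_config):
        """Posterior-mean estimate P(X = state | parents = parent_config)."""
        row = self.counts[parent_config]
        total = sum(row) + self.alpha * self.num_states
        return (row[state] + self.alpha) / total


# Example: a binary node with one binary parent, with data arriving one
# case at a time, as a situated agent would gather it.
cpt = SequentialCPT(num_states=2)
for x, pa in [(1, (0,)), (1, (0,)), (0, (0,)), (1, (1,))]:
    cpt.update(x, pa)

# With alpha = 1: P(X=1 | Pa=0) = (2 + 1) / (3 + 2) = 0.6
print(cpt.prob(1, (0,)))
```

Full sequential structure learning would additionally score candidate edge changes against these accumulated sufficient statistics, but the parameter update above is the core of what makes lifelong, incremental learning tractable.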





Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Haidarian Shahri, S., Nili Ahmadabadi, M. (2007). A Bayesian Approach to Attention Control and Concept Abstraction. In: Paletta, L., Rome, E. (eds) Attention in Cognitive Systems. Theories and Systems from an Interdisciplinary Viewpoint. WAPCV 2007. Lecture Notes in Computer Science(), vol 4840. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-77343-6_10


  • Print ISBN: 978-3-540-77342-9

  • Online ISBN: 978-3-540-77343-6
