
Improvement of Air Handling Unit Control Performance Using Reinforcement Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4303)

Abstract

The most common applications of neural networks to control problems are automatic controllers built on an artificial perceptual function. These control mechanisms resemble the intelligent, pattern-recognition-based adaptive control frequently observed in nature. Many automated buildings run their HVAC (Heating, Ventilating and Air Conditioning) systems with PI controllers, which are simple and robust; however, keeping their performance up requires proper tuning and periodic re-tuning. In this paper, as one way to address this problem and improve control performance, a reinforcement learning controller is proposed, using reinforcement learning, one of the three classes of neural network learning (supervised, unsupervised, and reinforcement learning). Its validity is evaluated under the real operating conditions of an AHU (Air Handling Unit) in an environment chamber.
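To make the idea concrete, the sketch below shows one minimal way a reinforcement learning controller of this kind could be structured: tabular Q-learning driving a heating-valve increment from the discretised supply-air-temperature error. The setpoint, action set, state bins, reward, and the toy first-order plant model are all assumptions introduced here for illustration; they are not the controller, AHU, or environment chamber configuration reported in the paper.

```python
import random

# Minimal sketch of a tabular Q-learning loop for an AHU supply-air-temperature
# setpoint. Every constant, the state discretisation, and the first-order plant
# model below are illustrative assumptions, not the paper's actual setup.

SETPOINT = 22.0                   # desired supply air temperature [deg C] (assumed)
VALVE_STEPS = [-0.1, 0.0, 0.1]    # actions: close / hold / open the heating valve
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def discretize(error):
    """Map the temperature error onto a small set of state bins (indices 0..4)."""
    bins = [-2.0, -0.5, 0.5, 2.0]
    return sum(error > b for b in bins)

def plant(temp, valve):
    """Toy first-order response: a wider valve opening raises the supply temperature."""
    return temp + 0.5 * (18.0 + 10.0 * valve - temp)

q = {}  # Q-table: (state, action index) -> value

def act(state):
    """Epsilon-greedy selection over the valve increments."""
    if random.random() < EPSILON:
        return random.randrange(len(VALVE_STEPS))
    return max(range(len(VALVE_STEPS)), key=lambda a: q.get((state, a), 0.0))

temp, valve = 15.0, 0.0
state = discretize(SETPOINT - temp)
for step in range(500):
    a = act(state)
    valve = min(1.0, max(0.0, valve + VALVE_STEPS[a]))
    temp = plant(temp, valve)
    error = SETPOINT - temp
    reward = -abs(error)                       # penalise deviation from the setpoint
    next_state = discretize(error)
    best_next = max(q.get((next_state, b), 0.0) for b in range(len(VALVE_STEPS)))
    q[(state, a)] = q.get((state, a), 0.0) + ALPHA * (
        reward + GAMMA * best_next - q.get((state, a), 0.0))
    state = next_state

print(f"final supply air temperature: {temp:.2f} deg C")
```

Penalising the absolute tracking error is one common reward choice for temperature regulation; the paper's actual state representation, reward, and learning scheme may differ.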

This work was supported by the 2006 Hannam University Research Fund.




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Youk, S., Kim, M., Kim, Y., Park, G. (2006). Improvement of Air Handling Unit Control Performance Using Reinforcement Learning. In: Hoffmann, A., Kang, Bh., Richards, D., Tsumoto, S. (eds) Advances in Knowledge Acquisition and Management. PKAW 2006. Lecture Notes in Computer Science(), vol 4303. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11961239_15


  • DOI: https://doi.org/10.1007/11961239_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-68955-3

  • Online ISBN: 978-3-540-68957-7

  • eBook Packages: Computer Science (R0)
