
Policy Gradient Reinforcement Learning for I/O Reordering on Storage Servers

  • Conference paper

Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 10634)


Abstract

Deep customization of storage architectures to the applications they support is often undesirable: the nature of application data is dynamic, applications are replaced far more often than storage systems are, and usage patterns change over time. A continuously learning software intervention that adapts to the changing workload is a more practical way to bridge this gap. As our experiments show, the overhead induced by such an intervention is negligible for large-scale storage systems. Reinforcement learning offers a way to learn from a continuous data stream and take actions that optimize a future goal. We adapt policy gradient reinforcement learning to learn a policy that minimizes I/O wait time and thereby maximizes I/O throughput. The action space is a discrete set of switches between scheduling schemes, used to dynamically reorder client-specific I/O operations. Our results show that the I/O reordering policy learned this way yields a significant improvement in overall I/O throughput.
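To make the approach concrete, the sketch below shows a generic REINFORCE-style policy gradient loop that picks among a small set of discrete scheduling schemes, with reward defined as negative I/O wait time. This is a minimal illustration of the technique the abstract names, not the paper's implementation: the state features, the number of schemes, and the simulated wait-time function are placeholder assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    N_FEATURES = 4  # hypothetical workload features, e.g. queue depth, read/write mix
    N_ACTIONS = 3   # hypothetical number of scheduling schemes to switch between

    theta = np.zeros((N_FEATURES, N_ACTIONS))  # linear softmax policy parameters

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def policy(state):
        # Probability of selecting each scheduling scheme in this state.
        return softmax(state @ theta)

    def simulated_wait_time(state, action):
        # Stand-in for the real storage server; purely illustrative.
        # A deployed agent would measure wait time on the server instead.
        return (1.0 + state.sum()) * (1.0 + 0.3 * abs(action - 1))

    ALPHA, GAMMA = 0.05, 0.99

    for episode in range(2000):
        states, actions, rewards = [], [], []
        state = rng.random(N_FEATURES)
        for t in range(20):  # one episode = 20 scheduling decisions
            probs = policy(state)
            action = rng.choice(N_ACTIONS, p=probs)
            reward = -simulated_wait_time(state, action)  # minimizing wait time
            states.append(state)
            actions.append(action)
            rewards.append(reward)
            state = rng.random(N_FEATURES)  # next workload snapshot
        # REINFORCE update: theta += alpha * G_t * grad log pi(a_t | s_t),
        # where for a linear softmax policy
        # grad log pi(a | s) = outer(s, onehot(a) - pi(. | s)).
        G = 0.0
        for t in reversed(range(len(states))):
            G = rewards[t] + GAMMA * G
            grad_log = -np.outer(states[t], policy(states[t]))
            grad_log[:, actions[t]] += states[t]
            theta += ALPHA * G * grad_log

    print(policy(np.full(N_FEATURES, 0.5)))  # learned scheme preferences for a mid-load state

In the paper's setting the reward signal would come from wait times measured on the storage server rather than a simulator, and the policy could be a neural network instead of the linear softmax used here.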


Notes

  1. "Switching of scheduling scheme" and "reordering of I/O" will be used interchangeably in this paper.


Acknowledgement

This research was supported in part by a research grant from the NetApp Advanced Technology Group.

Author information

Correspondence to Kumar Dheenadayalan.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Dheenadayalan, K., Srinivasaraghavan, G., Muralidhara, V.N. (2017). Policy Gradient Reinforcement Learning for I/O Reordering on Storage Servers. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds.) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10634. Springer, Cham. https://doi.org/10.1007/978-3-319-70087-8_87

  • DOI: https://doi.org/10.1007/978-3-319-70087-8_87

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70086-1

  • Online ISBN: 978-3-319-70087-8

  • eBook Packages: Computer Science, Computer Science (R0)
