
Heliza: talking dirty to the attackers

  • Original Paper
  • Journal in Computer Virology

Abstract

In this article we describe a new paradigm for adaptive honeypots that are capable of learning from their interactions with attackers. The main objective of such honeypots is to gather as much information as possible about an intruder's profile while concealing the honeypot's true nature and goals. We leverage machine learning techniques for this task and have developed a honeypot that uses a variant of reinforcement learning to learn the best behavior when facing attackers. The honeypot can adopt behavioral strategies that range from blocking commands and returning erroneous messages up to insults that aim to irritate the intruder and serve as a reverse Turing test. Our preliminary experimental results show that these behavioral strategies depend on contextual parameters and can serve as advanced building blocks for intelligent honeypots.
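To make this concrete, the sketch below shows a minimal tabular reinforcement-learning loop of the kind described above: for each attacker command, an epsilon-greedy agent picks one of the behavioral strategies (allow, block, substitute an error message, or insult) and updates its action values depending on whether the attacker keeps interacting. This is an illustrative sketch only; the SARSA-style update, the command set, the reward model, and the simulated attacker are assumptions made for the example, not the actual Heliza implementation.

# Minimal sketch of an adaptive honeypot decision loop (illustrative only).
# The SARSA-style update, reward model and simulated attacker are assumptions,
# not the actual Heliza implementation.
import random
from collections import defaultdict

ACTIONS = ["allow", "block", "substitute", "insult"]   # behavioral strategies
ALPHA, GAMMA, EPSILON = 0.1, 0.8, 0.2                  # assumed learning parameters

Q = defaultdict(float)   # Q[(command, action)] -> estimated long-term reward

def choose_action(command):
    """Epsilon-greedy choice of a behavioral strategy for one command."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(command, a)])

def reward(attacker_kept_typing):
    """Toy reward: keeping the attacker engaged is what the honeypot wants."""
    return 1.0 if attacker_kept_typing else -1.0

def simulated_attacker(command, action):
    """Crude stand-in for the attacker's reaction, purely for the demo."""
    if action == "allow":
        return random.random() < 0.9      # usually keeps going
    if action == "substitute":
        return random.random() < 0.6      # sometimes retries after an error
    if action == "insult":
        return random.random() < 0.5      # may reply, may leave
    return random.random() < 0.2          # blocking often ends the session

def update(command, action, r, next_command, next_action):
    """One-step SARSA-style update of the action-value table."""
    td_target = r + GAMMA * Q[(next_command, next_action)]
    Q[(command, action)] += ALPHA * (td_target - Q[(command, action)])

if __name__ == "__main__":
    session = ["wget", "tar", "chmod", "./exploit", "uname -a"]   # toy command trace
    for _ in range(500):                       # replay many simulated sessions
        cmd, act = session[0], choose_action(session[0])
        for nxt in session[1:]:
            kept = simulated_attacker(cmd, act)
            nxt_act = choose_action(nxt)
            update(cmd, act, reward(kept), nxt, nxt_act)
            if not kept:                       # attacker left, session over
                break
            cmd, act = nxt, nxt_act
    for c in session:
        best = max(ACTIONS, key=lambda a: Q[(c, a)])
        print(f"{c:12s} -> preferred strategy: {best}")

After enough simulated sessions, the printed table shows which strategy the toy agent has come to prefer for each command; in a real deployment the reward signal would come from observed attacker behavior (for example, further commands entered or tools downloaded) rather than from a simulation.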

Author information

Corresponding author

Correspondence to Gérard Wagener.

About this article

Cite this article

Wagener, G., State, R., Dulaunoy, A. et al. Heliza: talking dirty to the attackers. J Comput Virol 7, 221–232 (2011). https://doi.org/10.1007/s11416-010-0150-4
