
Learning Symbolic User Models for Intrusion Detection: A Method and Initial Results

  • Conference paper in Intelligent Information Processing and Web Mining

Abstract

This paper briefly describes the LUS-MT method for automatically learning user signatures (models of computer users) from data streams that capture users' interactions with computers. The signatures take the form of collections of multistate templates (MTs), each characterizing a pattern in the user's behavior. By applying the models to new user activities, the system can detect an impostor or verify legitimate user activity. Advantages of the method include the high expressive power of the models (a single template can characterize a large number of different user behaviors) and the ease of their interpretation, which allows an expert to edit or enhance them. Initial results are very promising and show the potential of the method for user modeling.
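
The abstract describes applying learned multistate templates to a new activity stream to decide whether it is consistent with the modeled user. As a rough sketch of that matching step only, the code below scores a stream by template coverage; the template encoding (a window of positions, each admitting a set of allowed values), the coverage score, and all event names are illustrative assumptions made here, not the actual LUS-MT representation or learning procedure described in the paper.

```python
# Illustrative sketch only (not the paper's LUS-MT formalism): a "multistate
# template" is modeled as a window of positions, each admitting a set of
# allowed event values; a user model is a list of such templates, and a new
# event stream is scored by how much of it the templates cover.
from dataclasses import dataclass
from typing import List, Sequence, Set


@dataclass
class MultistateTemplate:
    # allowed[i] = set of event values permitted at offset i of the window
    allowed: List[Set[str]]

    def matches_at(self, events: Sequence[str], start: int) -> bool:
        window = events[start:start + len(self.allowed)]
        return (len(window) == len(self.allowed)
                and all(e in ok for e, ok in zip(window, self.allowed)))


def match_score(templates: List[MultistateTemplate], events: Sequence[str]) -> float:
    """Fraction of positions in the stream where at least one template fires."""
    if not events:
        return 0.0
    hits = sum(any(t.matches_at(events, i) for t in templates)
               for i in range(len(events)))
    return hits / len(events)


# Toy usage with hypothetical process-name events: a higher score suggests the
# stream is consistent with the modeled user, a lower score flags an impostor.
model = [
    MultistateTemplate([{"outlook", "notepad"}, {"winword"}]),
    MultistateTemplate([{"cmd"}, {"ftp", "telnet"}, {"cmd"}]),
]
session = ["outlook", "winword", "cmd", "ftp", "cmd"]
print(f"match score: {match_score(model, session):.2f}")
```

In the paper's setting the templates would be learned from training sessions of each user rather than written by hand; the hand-built model above exists purely to show how matching a stream against a signature could be scored.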

Copyright information

© 2006 Springer

About this paper

Cite this paper

Michalski, R.S., Kaufman, K.A., Pietrzykowski, J., Śnieżyński, B., Wojtusiak, J. (2006). Learning Symbolic User Models for Intrusion Detection: A Method and Initial Results. In: Kłopotek, M.A., Wierzchoń, S.T., Trojanowski, K. (eds) Intelligent Information Processing and Web Mining. Advances in Soft Computing, vol 35. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-33521-8_27


  • DOI: https://doi.org/10.1007/3-540-33521-8_27

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-33520-7

  • Online ISBN: 978-3-540-33521-4

  • eBook Packages: Engineering, Engineering (R0)
