
The Measure of Regular Relations Recognition Applied to the Supervised Classification Task

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11943)

Abstract

This paper introduces a probability measure of regularity recognition in an information stream. The measure makes it possible to build machine-learning models without a supervisor. The experiment described in the paper empirically demonstrates that the measure enables the recognition of regularities and helps to discover regular relations between the values of variables.

The machine-learning model discovers regular relations in a data set, and these relations allow unknown values of the classification variable to be reconstructed. The paper describes a classification algorithm based on the probability measure of regularity recognition. The measure's connection with entropy is demonstrated, and mutual information is used to optimise the algorithm's performance. The accuracy of the algorithm matches, and in some cases exceeds, that of well-known supervised machine-learning algorithms.
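The abstract does not give the formal definition of the measure or of the classification algorithm. As a rough illustration of the ideas it names, the sketch below estimates mutual information between categorical features and the class variable from empirical frequencies, then predicts the class from the conditional distribution of the most informative feature. This is a minimal, hypothetical example: the data, feature names, and prediction rule are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only (not the paper's algorithm): rank categorical
# features by empirical mutual information with the class variable, then
# predict the class as the empirical mode of P(class | best feature value).
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from empirical joint and marginal frequencies."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) = c * n / (count_x * count_y)
        mi += p_joint * log2(c * n / (px[x] * py[y]))
    return mi

# Hypothetical toy data: two categorical features and a class label.
features = {
    "colour": ["red", "red", "blue", "blue", "red", "blue"],
    "shape":  ["round", "square", "round", "square", "round", "square"],
}
labels = ["A", "A", "B", "B", "A", "B"]

# Rank features by mutual information with the class variable.
ranked = sorted(features,
                key=lambda f: mutual_information(features[f], labels),
                reverse=True)
best = ranked[0]

def predict(value):
    """Predict the class for a value of the most informative feature."""
    counts = Counter(l for v, l in zip(features[best], labels) if v == value)
    return counts.most_common(1)[0][0] if counts else None

print("feature ranking:", ranked)
print(f"prediction for {best}='red':", predict("red"))
```

In this toy example the colour feature determines the label exactly, so it carries the full one bit of mutual information with the class and is selected for prediction; the point is only to show how frequency-based probability estimates and mutual information can drive a simple classification rule.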



Author information

Corresponding author

Correspondence to Yuriy Mikheev.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Mikheev, Y. (2019). The Measure of Regular Relations Recognition Applied to the Supervised Classification Task. In: Nicosia, G., Pardalos, P., Umeton, R., Giuffrida, G., Sciacca, V. (eds) Machine Learning, Optimization, and Data Science. LOD 2019. Lecture Notes in Computer Science, vol. 11943. Springer, Cham. https://doi.org/10.1007/978-3-030-37599-7_16


  • DOI: https://doi.org/10.1007/978-3-030-37599-7_16


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37598-0

  • Online ISBN: 978-3-030-37599-7

  • eBook Packages: Computer Science (R0)
