RULES-IT: incremental transfer learning with RULES family

  • Research Article
  • Published in: Frontiers of Computer Science

Abstract

In today’s world of rapid technological development, the sustainability and adaptability of computer applications is a challenge, and predicting the future has become significant. Strong artificial intelligence (AI) has therefore become important, and statistical machine learning (ML) methods have been applied to serve it. These methods, however, are difficult to understand: they predict the future without showing how. Understanding how machines make their decisions is also important, especially in the information systems domain. Incremental covering algorithms (CA) can be used to produce simple rules that support difficult decisions. Nevertheless, although using simple CA as the basis of a strong AI agent would be a novel idea, doing so with the methods currently available in CA is not possible. Accurately updating the discovered rules from new information remains a challenge in CA and needs extra attention. In particular, incomplete data with missing classes is handled inappropriately, speed and data size are also a concern, and classes that do not yet exist are neglected. Consequently, this paper introduces a novel algorithm called RULES-IT, which solves these problems of incremental CA and brings them into strong AI. RULES-IT is the first incremental algorithm in its family, and in CA as a whole, that transfers rules across different domains to improve performance, generalize the induction, take advantage of past experience in other domains, and make the learner more intelligent. It is also the first to introduce intelligent aspects into incremental CA, including consciousness, subjective emotions, awareness, and adjustment. Furthermore, all decisions made can be understood thanks to the simple representation of the repository as rules. Finally, RULES-IT is benchmarked against six different methods and compared with its predecessors to show the effect of transferring rules on the learning process, how RULES-IT overcomes the shortcomings of current incremental CA, and its improvement in overall performance.
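To make the two general ideas named above concrete, the following is a minimal, illustrative Python sketch of incremental covering-style rule induction combined with the transfer of a rule repository from another domain. It is written under assumptions of our own: the names Rule, IncrementalCoveringLearner, transfer_from, update, and predict are hypothetical and do not reproduce the authors' RULES-IT implementation or the RULES family heuristics; the sketch only shows how transferred rules can seed an incrementally updated, human-readable rule repository, including the case of a previously unseen class.

```python
# Illustrative sketch only: a toy incremental covering learner whose rule
# repository is seeded with rules transferred from a source domain.
# All names here are hypothetical and are NOT the authors' RULES-IT code.

from dataclasses import dataclass, field


@dataclass
class Rule:
    conditions: dict          # attribute -> required value
    label: str                # predicted class
    correct: int = 0          # running count of correct covers
    covered: int = 0          # running count of all covers

    def covers(self, example: dict) -> bool:
        return all(example.get(a) == v for a, v in self.conditions.items())


@dataclass
class IncrementalCoveringLearner:
    rules: list = field(default_factory=list)

    def transfer_from(self, source_rules: list) -> None:
        """Seed the repository with rules induced in another domain."""
        self.rules.extend(source_rules)

    def update(self, example: dict, label: str) -> None:
        """Incrementally refine the repository with one labelled example."""
        matched = False
        for rule in self.rules:
            if rule.covers(example):
                matched = True
                rule.covered += 1
                if rule.label == label:
                    rule.correct += 1
        if not matched:
            # No existing rule covers the example (possibly a previously
            # unseen class): induce a maximally specific rule for it.
            self.rules.append(Rule(conditions=dict(example), label=label))

    def predict(self, example: dict, default: str = "unknown") -> str:
        """Predict with the most accurate covering rule, if any."""
        covering = [r for r in self.rules if r.covers(example)]
        if not covering:
            return default
        best = max(covering, key=lambda r: (r.correct + 1) / (r.covered + 2))
        return best.label


if __name__ == "__main__":
    # Rules learned in a (hypothetical) source domain are reused as a start.
    source = [Rule(conditions={"outlook": "sunny"}, label="play")]
    learner = IncrementalCoveringLearner()
    learner.transfer_from(source)
    learner.update({"outlook": "rain", "wind": "strong"}, "stay")
    print(learner.predict({"outlook": "sunny"}))  # -> "play"
```

Because the repository stays a plain list of condition-label rules, every prediction in this sketch can be traced back to the rule that produced it, which is the transparency property the abstract attributes to covering algorithms.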


Author information


Corresponding author

Correspondence to Hebah Elgibreen.

Additional information

Hebah ElGibreen is currently working on her PhD thesis in the field of machine learning. In 2009, she received her MS in Information Systems from the College of Computer and Information Sciences, King Saud University, Saudi Arabia. In 2007, she worked as a teaching assistant in the NXT LEGO robot program for gifted girls, sponsored by the “King Abdul-Aziz & His Companions Foundation for the Gifted” program. In the same year, she won second place in the “educating by computers” section of the National Competition in Computer Skills, sponsored by the College of Telecom and Information. In addition, she worked for two years as a teaching assistant at King Saud University, Saudi Arabia, and is currently working there as a lecturer.

Prof. Mehmet Sabih Aksoy completed his PhD in artificial intelligence at Cardiff University, UK, in 1994. He received his master’s degree from Yildiz University, Istanbul, in 1985, and graduated from Istanbul Technical University in 1982. He worked at Istanbul Technical University, Fatih University, Sakarya University, and Istanbul University in Turkey from 1984 to 2002. His areas of interest include machine learning, inductive learning, expert systems, artificial neural networks, and data mining. He joined the College of Computer and Information Sciences, King Saud University, Saudi Arabia, in 2002, where he is currently a full professor.


About this article


Cite this article

Elgibreen, H., Aksoy, M.S. RULES-IT: incremental transfer learning with RULES family. Front. Comput. Sci. 8, 537–562 (2014). https://doi.org/10.1007/s11704-014-3297-1

