
Applying Learning by Examples for Digital Design Automation


Abstract

This paper describes a new learning-by-example mechanism and its application to digital circuit design automation. The mechanism uses finite state machines to represent the inferred models or designs, and the resulting models are easy to implement in hardware using current VLSI technologies. Our simulation results show that it is often possible to infer a well-defined deterministic model or design from just one sequence of examples. In addition, the mechanism can handle sequential tasks involving long-term dependencies. This learning-by-example mechanism is used as a design-by-example system for the automatic synthesis of digital circuits. Such systems have not previously been developed successfully, mainly for lack of a mechanism with which to implement them. Artificial neural network research suggests that knowledge gained from learning by example could be applied to form a design-by-example system; however, one problem with neural network approaches is that the resulting models are very difficult to implement in hardware using current VLSI technologies. With the mechanism described in this paper, the resulting models are finite state machines, which are well suited to digital design. Several sequential circuit design examples are simulated and tested. Although our test results show that such a system is feasible for designing simple circuits or small-scale circuit modules, its feasibility for large-scale circuit design remains to be shown. Both the learning mechanism and the design method show promise, and future research directions are provided.
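
To make the central idea concrete, the following is a minimal, illustrative sketch of reading a deterministic finite state machine off a single example sequence. It is not the paper's actual mechanism: it simply treats the last k inputs as the machine state (a fixed-window abstraction), and every name in it (infer_mealy, trace, k) is an assumption of this sketch rather than anything defined in the paper.

    # Minimal sketch (NOT the paper's mechanism): infer a deterministic
    # Mealy-style transition/output table from ONE input/output trace by
    # treating the last k inputs as the state (a sliding-window abstraction).

    def infer_mealy(trace, k=1):
        """trace: list of (input, output) symbol pairs from a single run.
        Returns {(state, input): (next_state, output)}, where a state is
        the tuple of the last k inputs. Raises if the single trace is
        inconsistent under this abstraction, i.e. the window is too short
        to capture the dependency and a larger k is needed."""
        state = ('',) * k                 # initial state: empty history
        table = {}
        for inp, out in trace:
            nxt = state[1:] + (inp,)      # shift the new input into the window
            if table.get((state, inp), (nxt, out)) != (nxt, out):
                raise ValueError(f"trace needs more memory than k={k}")
            table[(state, inp)] = (nxt, out)
            state = nxt
        return table

    # One example sequence for a unit-delay element: each output echoes
    # the previous input (the first output defaults to '0').
    trace = [('1', '0'), ('0', '1'), ('1', '0'), ('1', '1'), ('0', '1')]
    for (s, i), (n, o) in infer_mealy(trace).items():
        print(f"state {s} --{i}/{o}--> {n}")

Unlike the paper's mechanism, a fixed window cannot capture long-term dependencies; the sketch only shows the flavor of extracting a well-defined deterministic machine from one trace. The resulting table is, however, exactly the kind of next-state/output specification used in standard sequential-circuit synthesis, which is why finite state machines map so directly onto hardware.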




Cite this article

Choi, B. Applying Learning by Examples for Digital Design Automation. Applied Intelligence 16, 205–221 (2002). https://doi.org/10.1023/A:1014338000161
