Automatic boosting of cross-product coverage using Bayesian networks

  • HVC 2008
  • Published in: International Journal on Software Tools for Technology Transfer

Abstract

Closing the feedback loop from coverage data to the stimuli generator is one of the main challenges in the verification process. Typically, verification engineers with deep domain knowledge manually prepare a set of stimuli generation directives for that purpose. CDG (coverage-directed generation) systems based on Bayesian networks have been used successfully to assist the process by automatically closing this feedback loop. However, constructing these CDG systems requires manual effort and a certain amount of domain knowledge from a machine learning specialist. We propose a new method that boosts coverage in the early stages of the verification process with minimal effort, namely a fully automatic construction of a CDG system that requires no domain knowledge. Experimental results on a real-life cross-product coverage model demonstrate the efficiency of the proposed method.
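To make the feedback loop described above concrete, the sketch below is a purely illustrative toy, not the authors' system: it estimates a naive-Bayes-style table P(event | directive) from simulated traces and then picks the generator directive that places the most probability mass on still-uncovered cross-product events. All names (EVENTS, DIRECTIVES, simulate, propose_directive) and the toy skew values are assumptions introduced for the example.

```python
import random
from collections import defaultdict

# Toy illustration of a CDG feedback loop (hypothetical, not the paper's model):
# a single directive skews which coverage event a stimulus hits; the loop learns
# P(event | directive) from traces and redirects generation toward coverage holes.

EVENTS = ["e0", "e1", "e2", "e3"]        # toy cross-product coverage space
DIRECTIVES = ["low", "mid", "high"]      # toy stimuli-generation directive values

def simulate(directive, n=200):
    """Toy simulator: the directive skews which events are hit (assumed skews)."""
    skew = {"low":  [0.70, 0.20, 0.08, 0.02],
            "mid":  [0.30, 0.40, 0.25, 0.05],
            "high": [0.10, 0.20, 0.40, 0.30]}[directive]
    return [random.choices(EVENTS, weights=skew)[0] for _ in range(n)]

def learn_model(traces):
    """Estimate P(event | directive) from (directive, event) pairs with Laplace smoothing."""
    counts = defaultdict(lambda: defaultdict(int))
    for d, e in traces:
        counts[d][e] += 1
    model = {}
    for d in DIRECTIVES:
        total = sum(counts[d].values())
        model[d] = {e: (counts[d][e] + 1) / (total + len(EVENTS)) for e in EVENTS}
    return model

def propose_directive(model, covered):
    """Pick the directive with the largest expected mass on uncovered events."""
    def uncovered_mass(d):
        return sum(p for e, p in model[d].items() if e not in covered)
    return max(DIRECTIVES, key=uncovered_mass)

# Closed feedback loop: simulate, measure coverage, learn, redirect.
covered, traces, directive = set(), [], "low"
for iteration in range(5):
    events = simulate(directive)
    traces += [(directive, e) for e in events]
    covered |= set(events)
    model = learn_model(traces)
    directive = propose_directive(model, covered)
    print(iteration, directive, sorted(covered))
```

In the real setting the model is a learned Bayesian network over many directive and coverage variables rather than a single conditional table, but the loop structure is the same: observe coverage, update the model, and infer directives that target uncovered events.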



Author information

Correspondence to Dorit Baras.


About this article

Cite this article

Baras, D., Fine, S., Fournier, L. et al. Automatic boosting of cross-product coverage using Bayesian networks. Int J Softw Tools Technol Transfer 13, 247–261 (2011). https://doi.org/10.1007/s10009-010-0160-z
