Abstract.
Traditional approaches to measuring the performance of CAD algorithms rely on sets of so-called “benchmark circuits.” In this paper, we demonstrate that current procedures do not produce results that accurately characterize the behavior of the algorithms under study. Indeed, we show that the apparent advances documented by traditional benchmarking may well be due to chance rather than to any new properties of the algorithms. As an alternative, we introduce a new methodology for characterizing CAD heuristics that employs well-studied design-of-experiments methods. We show through numerous examples how such methods can be applied to evaluate the behavior of heuristics used in BDD variable ordering.
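The central claim, that benchmark-set differences between heuristics may be due to chance, can be illustrated with a small statistical sketch. This is not the paper's design-of-experiments methodology, just a hypothetical example: two "heuristics" that are identical in expectation are scored on a mock benchmark suite, and a paired permutation test estimates how often a gap at least as large as the observed one arises by chance alone. All names and the score distribution are invented for illustration.

```python
import random
import statistics

random.seed(0)

# Hypothetical heuristics: both draw BDD sizes from the same distribution,
# so any gap observed on a finite benchmark set is pure sampling noise.
def heuristic_a(_circuit):
    return random.gauss(100, 10)  # mock "BDD node count" on one circuit

def heuristic_b(_circuit):
    return random.gauss(100, 10)

circuits = range(20)  # a small mock "benchmark suite"
scores_a = [heuristic_a(c) for c in circuits]
scores_b = [heuristic_b(c) for c in circuits]
observed = statistics.mean(scores_a) - statistics.mean(scores_b)

# Paired permutation test: randomly flip the A/B label on each circuit
# and count how often the mean gap is at least as extreme as observed.
diffs = [x - y for x, y in zip(scores_a, scores_b)]
trials = 5000
extreme = 0
for _ in range(trials):
    permuted = sum(d if random.random() < 0.5 else -d for d in diffs)
    if abs(permuted / len(diffs)) >= abs(observed):
        extreme += 1
p_value = extreme / trials

print(f"observed gap = {observed:.2f}, p = {p_value:.3f}")
```

A large p-value here means the benchmark suite cannot distinguish the two heuristics; a naive comparison of mean scores would nonetheless declare one the "winner", which is exactly the pitfall the abstract warns against.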
Published online: 15 May 2001
Harlow III, J., Brglez, F. Design of experiments and evaluation of BDD ordering heuristics. STTT 3, 193–206 (2001). https://doi.org/10.1007/s100090100052