Improvement of integrated circuit testing reliability by using the defect based approach

https://doi.org/10.1016/S0026-2714(03)00092-1

Abstract

The systematic decrease in the minimum feature size of VLSI circuits makes spot defects an increasingly significant cause of IC faults. A testing method optimized for detecting faults of this origin has recently been developed. This method, called defect-based testing (DBT), requires considerable computational effort at the stage of testing-procedure preparation, which makes it appear less attractive than the well-known stuck-at-fault-oriented testing. This paper, however, shows that a stuck-at-fault-optimized test-vector set may prove highly inefficient in detecting spot-defect-induced faults. Experiments with the C17 ISCAS-85 testability benchmark show that the risk of a spot-defect-damaged circuit passing the test is dangerously high if the test set was designed with stuck-at faults in mind. It is also shown that although spot defects may in some cases transform a combinational circuit into a sequential one, in practice this phenomenon does not require any special treatment from the test designer. Finally, a few methods are discussed that make DBT less time-consuming.

Introduction

The ever-increasing complexity of VLSI circuits makes test preparation increasingly difficult. Not only must the test-vector set ensure a fault coverage (FC) close to 100%, but it should also be short enough to keep the testing time acceptable. Furthermore, it is recommended to arrange the vectors so that the testing procedure begins with those most likely to detect a fault. In other words, the test vectors should be ordered so that FC is as high as possible from the very beginning and grows as fast as possible as subsequent vectors are applied. Detecting a fault at an early stage (preferably after applying a single vector) allows the tested circuit to be classified as faulty and the next one passed to the tester. If a high percentage of circuits fail, the overall time spent testing a manufacturing lot strongly depends on how quickly faulty circuits are identified.

The shortest possible sequence of input vectors that provides 100% coverage of faults belonging to a given class (e.g. bridging faults) in a given circuit, arranged in the manner described above, will be called the optimum test sequence of the circuit.
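The ordering requirement above is essentially a greedy covering procedure. A minimal sketch of that idea (illustrative only, not the paper's algorithm [3]; it assumes the set of faults detected by each vector is already known from simulation):

```python
# Hypothetical sketch: order test vectors greedily so that fault coverage
# grows as fast as possible. 'detects' maps each vector to the set of
# faults it detects (vector and fault names are illustrative).

def order_test_vectors(detects):
    """Return (sequence, covered): vectors ordered by newly detected faults."""
    remaining = dict(detects)   # vectors not yet placed in the sequence
    covered = set()             # faults detected so far
    sequence = []
    while remaining:
        # pick the vector that detects the most not-yet-covered faults
        best = max(remaining, key=lambda v: len(remaining[v] - covered))
        if not remaining[best] - covered:
            break               # no remaining vector adds coverage; stop
        sequence.append(best)
        covered |= remaining.pop(best)
    return sequence, covered

vectors = {
    "v1": {"f1", "f2"},
    "v2": {"f2", "f3", "f4"},
    "v3": {"f1"},
}
seq, cov = order_test_vectors(vectors)
print(seq)  # "v2" comes first: it detects the most faults on its own
```

Vectors that add no new coverage are dropped, which also keeps the sequence as short as possible.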

To obtain the optimum test sequence for a given circuit, an appropriate fault model is required. Most failures in currently manufactured VLSI chips are due to bridging faults caused by spot defects [1], [2]. Such faults may cause electrical shorts between nonequipotential nodes. This usually modifies the function performed by the circuit or even alters its character from combinational to sequential. A method using the defect-based testing (DBT) approach has recently been developed that allows generating a test sequence optimized for detecting bridging faults [3]. However, the fault model most commonly used in test design is still the stuck-at-fault (S@F) model. The reason is the relative ease of finding tests detecting stuck-at faults (by simulating the gate-level representation of the circuit) as compared with the complicated and computationally expensive procedure of generating a test sequence for detecting bridging faults (outlined later in this article). It has been reported, however, that a test set capable of detecting all possible stuck-at faults may miss as many as 50% of the actual (i.e. bridging) faults [4], [5]. Apart from the severe drop in FC, such an S@F-optimized test set may turn out larger than one designed for detecting bridging faults. Another weakness of the S@F model is that it does not allow comparing test vectors with respect to their capability of detecting bridging faults, a step necessary to arrange the tests so as to ensure the fastest possible growth of FC.

In this paper we show that a test sequence optimized for detecting stuck-at faults may prove far from perfect when it comes to detecting the actual (i.e. bridging) faults. This conclusion is based on the results of computer analyses of the C17 benchmark (ISCAS-85 testability benchmark [6]). The theoretical foundations of the method used in those analyses are presented in Section 2. These include the critical-area-based formula for estimating the probability of occurrence of a given bridging fault [2] as well as the algorithm used to generate the optimum test sequence of the circuit [3]. This section also contains a redefinition of FC so that this well-known measure better reflects the quality of a test sequence.
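One plausible form such a redefined measure could take is an FC weighted by fault likelihood rather than fault count. The sketch below is an assumption for illustration, not the paper's actual formula; it presumes each bridging fault's occurrence probability has already been estimated from its critical area:

```python
# Hedged sketch of a probability-weighted fault coverage. Each bridging
# fault f carries an occurrence probability p[f] (here: made-up values
# standing in for critical-area estimates).

def weighted_fault_coverage(detected, probabilities):
    """FC weighted by fault occurrence probability instead of fault count."""
    total = sum(probabilities.values())
    caught = sum(p for f, p in probabilities.items() if f in detected)
    return caught / total

p = {"short_a_b": 0.6, "short_b_c": 0.3, "short_c_d": 0.1}
fc = weighted_fault_coverage({"short_a_b"}, p)
print(fc)  # ≈ 0.6: one fault out of three, but the most likely one
```

Under such a measure, a test that catches the few most probable shorts scores higher than one that catches many improbable ones, which matches the goal of fast FC growth during testing.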

In Section 3 we show how this methodology was used to generate two test-vector sequences for the C17 circuit: one optimized for detecting stuck-at faults and another optimized for bridging faults. The two sequences were then compared with respect to their ability to detect bridging faults. It is shown that an S@F-optimized sequence does not guarantee detecting all possible bridging faults.

In Section 4 we show that some bridging faults may indeed introduce sequential effects into a combinational circuit. We also show that in practice no special account must be taken of “sequential” effects at the stage of test generation.

Section 5 is concerned with two methods that help reduce the time necessary to prepare the test-vector sets.

Section snippets

Defect based testing

The advantage of the DBT over the S@F approach results from the fact that stuck-at-faults are “abstract” faults modeled on the gate level only. Some stuck-at-faults may cause a circuit (simulated on the gate-level) to operate in a way that cannot be explained by any physical defect in the CMOS structure. On the other hand, some physical defects influence the circuit’s behavior in a way that cannot be modeled by any stuck-at-fault.

Furthermore, in the S@F-based approach the quality of a test

DBT vs. S@F

The methodology presented in Section 2 was used to generate the optimum test sequence for a CMOS implementation of the C17 ISCAS-85 testability benchmark. Since the original description of the circuit does not go beyond a logic diagram (Fig. 2), a corresponding layout had to be created to provide the information necessary to calculate the probabilities of the occurrence of bridging faults. The layout was automatically synthesized using the Cell Ensemble tool being part of Cadence® Design

Defect-introduced feedback loops

Some bridging faults cause feedback loops to form in combinational circuits. If the number of inverting gates in the loop is odd, then the feedback is negative. This degenerates the output voltage of the block locked in the loop, i.e. causes the voltage to be substantially different from the expected 0 V (for logic zero) or VDD (for logic one). Finding test vectors for such a fault requires assuming a certain threshold value of voltage: output voltage higher than the threshold will be
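The parity rule above can be illustrated with a small, hypothetical helper (not the paper's procedure): treat the gate-level netlist as a directed graph and, for a short between nodes u and v, search for a combinational path from u back to v, counting inverting gates along the way. An odd count means the short closes a negative feedback loop:

```python
# Illustrative sketch: does shorting nodes u and v close a feedback loop,
# and if so, is it negative (odd number of inverting gates) or positive?
# 'edges' maps each node to [(successor, is_inverting), ...]; all names
# are made up for this example.

def loop_parity(edges, u, v):
    """Return 'negative'/'positive' if a u->v path exists (so the u-v
    short closes a loop), or None if no loop is formed."""
    stack = [(u, 0)]            # (current node, inversions so far)
    seen = set()
    while stack:
        node, inversions = stack.pop()
        if node == v:
            return "negative" if inversions % 2 else "positive"
        if node in seen:
            continue
        seen.add(node)
        for succ, inverting in edges.get(node, []):
            stack.append((succ, inversions + (1 if inverting else 0)))
    return None

netlist = {"u": [("g1", True)], "g1": [("v", True)]}
print(loop_parity(netlist, "u", "v"))  # two inverters -> "positive"
```

A "positive" result means the circuit may latch a value (the sequential effect discussed in this section), while "negative" points to the degenerate, threshold-dependent output voltage described above.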

Reducing computational time

The main disadvantage of the DBT is that the procedure of generating the testing sequence is highly time-consuming. There are, however, several ways to reduce the computational time.

One method is to reduce the set of bridging faults considered in the analysis. For example, the analysis of the layout may be limited to the interconnects and the ground and supply lines. Such a decomposition is very natural because in CAD-software databases layouts are usually stored in a hierarchical structure

Conclusions

As has been shown, sequences of input vectors used to test digital circuits should be designed using an accurate (i.e. physical) fault model. Using the simpler S@F model may cause an unacceptably high percentage of faulty circuits to pass the test procedure. The DBT method allows the creation of relatively short test sequences that ensure very fast growth of fault coverage in the course of testing. It was shown that although some bridging faults may change the circuit’s behavior from

Acknowledgements

The authors are very grateful to the anonymous reviewers for their comments and suggestions, which helped improve the final manuscript.

This work was supported in part by the Polish State Committee for Scientific Research under project no. 4 T11B 023 24.

References (11)

  • Pleskacz WA, et al. Estimation of the IC layout sensitivity to spot defects. Electron Technology (1999)
  • Pleskacz WA, Maly W. Improved yield model for submicron domain. In: Proceedings of the IEEE International Symposium on...
  • Blyzniuk M, Pleskacz WA, Lobur M, Kuzmicz W. Estimation of the usefulness of test vector components for detecting...
  • Blyzniuk M, Cibakova T, Gramatova E, Kuzmicz W, Lobur M, Pleskacz WA, et al. Hierarchical defect-oriented fault...
  • Blyzniuk M, Cibakova T, Gramatova E, Kuzmicz W, Lobur M, Pleskacz WA, et al. Defect oriented fault coverage of 100%...
