
Duo: Differential Fuzzing for Deep Learning Operators



Abstract:

Deep learning (DL) libraries lower the barrier to DL model construction. The building blocks of these libraries are DL operators with different functionalities, responsible for processing high-dimensional tensors during training and inference. The quality of operators therefore directly affects the quality of the models built on them. However, existing DL testing techniques mainly focus on robustness testing of trained neural network models and cannot locate defects in DL operators. Insufficient test inputs and undetermined test outputs make operator testing challenging for DL library developers. In this article, we propose an approach, namely Duo, which combines fuzzing and differential testing to generate inputs and evaluate the corresponding outputs. It implements mutation-based fuzzing to produce tensor inputs, employing nine mutation operators derived from genetic algorithms, and differential testing to evaluate the correctness of outputs across multiple operator instances. Duo is implemented as a tool and used to evaluate seven operators from TensorFlow, PyTorch, MNN, and MXNet in an experiment. The results show that Duo can expose defects in DL operators and enables multidimensional evaluation of DL operators from different DL libraries.
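The core idea described above can be illustrated with a minimal, self-contained sketch. The following is not Duo's actual implementation: the operator (`mean` over a 1-D tensor), the two stand-in implementations, and the three mutation operators are all illustrative assumptions, standing in for real DL operators (e.g., convolution) instantiated across libraries such as TensorFlow and PyTorch, and for Duo's nine genetic-algorithm-derived mutations.

```python
import random

# Two independent implementations of the same "operator", standing in
# for operator instances from different DL libraries. Any output
# disagreement beyond a tolerance flags a candidate defect.
def mean_impl_a(xs):
    return sum(xs) / len(xs)

def mean_impl_b(xs):
    acc = 0.0
    for x in xs:
        acc += x
    return acc / len(xs)

# A few illustrative tensor mutations in the spirit of genetic
# algorithms: perturb one element, scale all elements, or shuffle.
def mutate(xs, rng):
    xs = list(xs)
    op = rng.choice(["perturb", "scale", "shuffle"])
    if op == "perturb":
        xs[rng.randrange(len(xs))] += rng.uniform(-1.0, 1.0)
    elif op == "scale":
        k = rng.uniform(0.5, 2.0)
        xs = [k * x for x in xs]
    else:
        rng.shuffle(xs)
    return xs

def differential_fuzz(seed_tensor, rounds=100, tol=1e-9, seed=0):
    """Mutate the input tensor repeatedly and compare both
    implementations' outputs; collect any disagreements."""
    rng = random.Random(seed)
    tensor = list(seed_tensor)
    mismatches = []
    for _ in range(rounds):
        tensor = mutate(tensor, rng)
        a, b = mean_impl_a(tensor), mean_impl_b(tensor)
        if abs(a - b) > tol:  # outputs disagree -> candidate defect
            mismatches.append((list(tensor), a, b))
    return mismatches

mismatches = differential_fuzz([1.0, 2.0, 3.0, 4.0])
```

Here the two implementations are equivalent, so no mismatches are reported; in Duo's setting, a non-empty mismatch list would point the developer to a concrete input tensor on which two library implementations of the same operator disagree.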
Published in: IEEE Transactions on Reliability ( Volume: 70, Issue: 4, December 2021)
Page(s): 1671 - 1685
Date of Publication: 13 September 2021

