Abstract:
Voiceprint recognition based on deep learning (DL) is a promising method for intelligent fault diagnosis in rotating machinery, as it overcomes the limitations of vibration measurement by employing a noncontact technique. However, traditional voiceprint-based methods struggle to capture the spatial–temporal coupling relationships between voiceprint signals. Additionally, existing voiceprint graph construction methods cannot dynamically adjust the constructed graph, resulting in poor adaptability and performance. To overcome these shortcomings, a dynamic spatial–temporal subgraph convolutional network (DSTSGCN) is proposed for noncontact fault diagnosis. First, an edge-level dynamic graph convolutional network (ELDGCN) is designed to adaptively learn the spatial correlations between voiceprint signals by optimizing the edge weights. Subsequently, a new temporal feature fusion module (TFFM) is developed to capture important long-term dependency information within signals and thereby highlight intersignal temporal relationships. Finally, the discriminative representation of voiceprint fault features is enhanced by fusing multisignal spatial–temporal information to achieve graph-level fault diagnosis. Experimental results on a real three-phase asynchronous motor test platform show that the proposed DSTSGCN achieves an accuracy of 99.72% with one training sample.
Published in: IEEE Transactions on Instrumentation and Measurement (Volume: 74)
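
The abstract only outlines the pipeline, so the following is a minimal PyTorch sketch of the general idea, not the authors' implementation: learnable edge weights stand in for the ELDGCN's dynamic adjacency, a plain GRU stands in for the TFFM's long-term dependency modeling, and all module names, layer sizes, and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeLevelDynamicGraphConv(nn.Module):
    """Sketch of an edge-level dynamic graph convolution: edge weights between
    the voiceprint channels are learnable, so the adjacency is adjusted during
    training rather than fixed in advance (assumed parameterization)."""

    def __init__(self, num_nodes: int, in_dim: int, out_dim: int):
        super().__init__()
        # Unnormalized edge weights; row-wise softmax yields a dynamic adjacency.
        self.edge_logits = nn.Parameter(torch.randn(num_nodes, num_nodes) * 0.01)
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_dim) node features per voiceprint signal
        adj = F.softmax(self.edge_logits, dim=-1)       # dynamic adjacency
        x = torch.einsum("ij,bjf->bif", adj, x)         # weighted neighbor aggregation
        return F.relu(self.lin(x))                      # node-wise transform


class TemporalFusionSketch(nn.Module):
    """Stand-in for the TFFM: a GRU summarizes long-term dependencies within
    each signal; the last hidden state is used as the temporal feature."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * num_nodes, seq_len, in_dim)
        _, h = self.gru(x)
        return h[-1]                                    # (batch * num_nodes, hidden_dim)


class DSTSGCNSketch(nn.Module):
    """Minimal spatial-temporal pipeline: per-node temporal fusion, dynamic
    graph convolution across nodes, then a graph-level readout for diagnosis."""

    def __init__(self, num_nodes: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.temporal = TemporalFusionSketch(in_dim=1, hidden_dim=hidden_dim)
        self.spatial = EdgeLevelDynamicGraphConv(num_nodes, hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, seq_len) raw voiceprint segments per channel
        b, n, t = x.shape
        h = self.temporal(x.reshape(b * n, t, 1)).reshape(b, n, -1)
        h = self.spatial(h)                             # spatial correlations via edge weights
        g = h.mean(dim=1)                               # graph-level readout (mean pooling)
        return self.classifier(g)


if __name__ == "__main__":
    model = DSTSGCNSketch(num_nodes=4, hidden_dim=32, num_classes=5)
    dummy = torch.randn(2, 4, 256)                      # 2 samples, 4 channels, 256 time steps
    print(model(dummy).shape)                           # torch.Size([2, 5])
```

The design choice to illustrate here is that the adjacency is a trained parameter rather than a precomputed graph, which is what lets the spatial structure adapt to the data; the actual edge-weight optimization and fusion strategy of DSTSGCN are described in the full paper.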