Abstract
Automatic speech recognition (ASR) is an essential technology used in commercial products nowadays. However, the underlying deep learning models used in ASR systems are vulnerable to adversarial examples (AEs), which are generated by applying small or imperceptible perturbations to audio to fool these models. Recently, universal adversarial perturbations (UAPs) have attracted much research interest. UAPs used to generate audio AEs are not limited to a specific input audio signal. Instead, given a generic audio signal, audio AEs can be generated by directly applying UAPs. This paper presents a method of generating UAPs based on a targeted phrase. To the best of our knowledge, our proposed method of generating UAPs is the first to successfully attack ASR models with connectionist temporal classification (CTC) loss. In addition to generating UAPs, we empirically show that the UAPs can be considered as signals that are transcribed as the target phrase. We also show that the UAPs themselves preserve temporal dependency, such that the audio AEs generated using these UAPs also preserve temporal dependency.
1 Introduction
To date, automatic speech recognition (ASR) [2, 6, 9, 19] systems have been deployed ubiquitously in popular commercial products, such as Google Assistant, Amazon Alexa, and so on. An ASR system converts speech from audio into text before further processing. Deep learning techniques play an important role in modern ASR systems. Specifically, end-to-end ASR, which relies on recurrent neural networks (RNNs), has achieved human-level performance when tested on several benchmark datasets [2].
However, deep learning models suffer from the threat of adversarial examples (AEs), which were first found in the image recognition domain [23]. An image AE is generated by applying imperceptible perturbations to a benign (normal) image, such that the resulting modified image will fool a deep learning model. There are targeted and untargeted image AEs. Targeted AEs force a target model to output predefined labels, while untargeted image AEs merely aim to make the target model output an incorrect result [16]. In addition, adversaries can assume a white-box or black-box threat model to generate AEs [4, 10, 30]. Under a white-box threat model, adversaries can access the internal workings of the target model, including model weights, training data, etc. In contrast, under a black-box threat model only input and output pairs can be obtained.
Besides image recognition, researchers also found that ASR models are vulnerable to audio AEs. In seminal work, Carlini and Wagner [5] generated audio AEs by solving an optimization problem that constrains the maximum norm of the perturbations. Their work was improved by Qin et al. [20], who incorporated psychoacoustics to hide perturbations below the hearing threshold. However, such adversarial perturbations can only produce an AE for a specific audio signal, and must be recalculated to produce AEs for different audio signals. To overcome this shortcoming, researchers have investigated the generation of AEs using universal adversarial perturbations (UAPs) that can be applied directly to generic audio [1]. UAPs can be used to generate both untargeted and targeted audio AEs [17, 26]. It should be mentioned that the concept of UAPs was first introduced for image AEs [15].
Although a great amount of effort has been spent on attacking speaker verification models, sound classification models, etc., there is limited research focused on generating UAPs to attack ASR systems. For a given audio input, an ASR model must choose among an extremely large number of potential transcripts. This task is typically more difficult than attacking other classification models, which only output a fixed set of labels. Early work was conducted by Neekhara et al. [17], in which they generated UAPs for untargeted audio AEs. Compared to targeted audio AEs, untargeted audio AEs are less interesting as they only make ASR models output incorrect or even meaningless transcripts. Lu et al. [14] recently performed a preliminary study on targeted UAPs to attack ASR models. However, their method cannot generate UAPs against models with connectionist temporal classification (CTC) loss [8]. This severely limits their method, since CTC loss is widely deployed in modern ASR models that achieve state-of-the-art performance [2, 9].
In this paper, we fill the research gap by proposing UAPs that can be applied directly to audio to generate targeted audio AEs. Our main contributions are summarized as follows:
- To the best of our knowledge, our UAP method is the first to successfully attack CTC loss based ASR models. Most existing work focuses on speaker verification models, sound classification models, etc., instead of ASR models.
- Unlike previous work by Lu et al. [14], we improve the quality of audio AEs by constraining the maximum norm of UAPs. Furthermore, we conducted a feasibility study to hide UAPs below the hearing threshold in a piece of music.
- In addition to generating UAPs, we empirically show that UAPs can be considered to be signals that will be transcribed into the target phrase. The generation of UAPs can then be viewed as training (modifying) UAPs to be robust against modification using audio containing speech.
- We show that the UAPs themselves preserve temporal dependency, such that the audio AEs generated by applying these UAPs also preserve temporal dependency.
2 Related Work
Early work in this field by Neekhara et al. [17] studied the generation of untargeted UAPs by maximizing the CTC loss for each input audio. Compared to random noise, their UAPs can more effectively cause DeepSpeech [9] to output incorrect transcripts. However, untargeted attacks cannot predetermine the output of a target model, which makes them less interesting than targeted attacks. In contrast, our work focuses on targeted UAPs, which pose severe threats because an adversary is able to control the output of a target model. Abdoli et al. [1] proposed UAPs that can generate targeted audio AEs. Instead of attacking ASR models, however, they attacked environmental sound classification and speech command recognition models.
In other work, Xie et al. [26] proposed to incorporate transformations by simulated room impulse response (RIR), so that audio AEs generated by their UAPs were robust against such transformations. The purpose is to keep audio AEs adversarial when played through speakers and received by microphones. They focused on fooling speaker verification models. Compared to ASR models, which transcribe voice input, speaker verification models aim to identify whether input voice comes from a valid user. Li et al. [13] demonstrated that it is unnecessary to perturb all samples in an audio signal. They generated UAPs that were much shorter than the input audio, and the UAPs could be applied at an arbitrary position within the input audio. To make audio AEs physically adversarial, they used datasets of physically recorded RIRs instead of simulated RIRs.
As opposed to generating input-agnostic UAPs, another line of work focused on training a generative model, so that perturbations can be efficiently generated for previously unknown audio. Broadly speaking, the generative model represents UAPs that are input-dependent. Wang et al. [24] trained a generative adversarial network (GAN) to produce specific perturbations for a given input audio. The output of the GAN can fool command classification and music classification models into outputting predetermined labels. Recent work by Li et al. [12] trained a generator that can map random noise to targeted UAPs given an input audio.
In contrast with existing work, this research investigates targeted UAPs against ASR models.
3 Problem Definition and Assumptions
Our goal is to generate UAPs \(\delta \) that result in targeted audio AEs when applied to input audio. Note that \(\delta \) is specific to a target phrase; a different target phrase requires a different \(\delta \). We assume a white-box threat model, under which the internal workings of the target model are accessible and gradients with respect to the input can be explicitly calculated. Formally, let \(\delta \in \mathbb {R}^m\) be perturbations of length m. \(\delta _{i:j}= (\delta _i, \dots , \delta _j)\) denotes a slice of \(\delta \) from the \(i^{th}\) to the \(j^{th}\) element. Let \(f(\cdot )\) represent the ASR model. Let \(\mathcal {D}\) be a set of audio with sample values in \([-1, 1]\), i.e., if \(x \in \mathcal {D}\) then \(||x||_\infty \le 1\). It should be noted that the length of \(x \in \mathcal {D}\) varies. Without loss of generality, let n represent the length of x: \(x \in \mathbb {R}^n\). It is required that \(n \le m\); given an input audio, \(\delta \) is first truncated to the same length as the input. Then, an audio AE is generated by applying \(\delta \) to the input audio.
Specifically, we want to generate \(\delta \) that satisfies:

\(\Pr _{x \in \mathcal {D}}\left [ f(x') = t \right ] \ge \eta \quad \text {s.t.} \quad ||\delta ||_\infty \le \tau , \qquad \qquad (1)\)

where t is a predefined target phrase, \(x'\) is the modified audio with elements clipped into \([-1, 1]\): \(x' = \max (\min (x+ \delta _{1:n}, 1), -1)\), \(\eta \) denotes the minimal success rate of the attack, and \(\tau \) constrains the maximum norm of \(\delta \).
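As a concrete illustration of this setup, applying a UAP to an input audio amounts to truncating \(\delta \) to the input length, adding it, and clipping. A minimal sketch in NumPy (names and values are illustrative):

```python
import numpy as np

def apply_uap(x: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Apply a universal perturbation delta (length m) to audio x (length n <= m)."""
    n = len(x)
    assert n <= len(delta), "the UAP must be at least as long as the input audio"
    x_adv = x + delta[:n]              # truncate delta to the input length, then add
    return np.clip(x_adv, -1.0, 1.0)   # keep sample values in [-1, 1]

# toy example with a 4-sample "audio" and a 6-sample UAP
x = np.array([0.5, -0.2, 0.9, -1.0])
delta = np.full(6, 0.3)
x_adv = apply_uap(x, delta)
```

Note that the third sample would exceed 1 after adding the perturbation, so the clipping step maps it back to the valid range.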
3.1 Evaluation
Given an input audio \(x\in \mathbb {R}^n\), we measure the distortion caused by \(\delta \) in decibels (dB):

\(dB_x(\delta ) = dB(\delta _{1:n}) - dB(x), \quad \text {where} \quad dB(x) = \max _i 20 \log _{10}(|x_i|). \qquad \qquad (2)\)

This metric was initially defined by Carlini and Wagner [5] and has also been used in other work [1, 14, 17, 26]. It is analogous to the maximum norm measurement in the image AE domain.
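The metric can be computed directly from the waveforms; a small sketch following the Carlini and Wagner definition, where the dB level of a signal is taken from its loudest sample:

```python
import numpy as np

def db(x: np.ndarray) -> float:
    """Level of a waveform in decibels: max_i 20*log10(|x_i|)."""
    return 20.0 * np.log10(np.max(np.abs(x)))

def distortion_db(x: np.ndarray, delta: np.ndarray) -> float:
    """Relative loudness of the perturbation: dB(delta_{1:n}) - dB(x).
    More negative values mean a quieter, less perceptible perturbation."""
    return db(delta[: len(x)]) - db(x)

x = np.array([0.5, -0.25, 0.9])
delta = np.full(4, 0.009)
# the perturbation peak is 1/100 of the audio peak, i.e. -40 dB
print(distortion_db(x, delta))
```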
4 Proposed Method
4.1 Universal Adversarial Perturbations
To generate UAPs that satisfy the requirements defined in Eq. 1, we solve the following optimization problem:
where \(\mathcal {D}\) is a set of input audio and \(x'\) is the modified audio clipped into the range \([-1, 1]\): \(x' = \max (\min (x + \delta ^{\tau }_{1:n}, 1), -1)\). \(\delta ^{\tau }\) is the perturbation applied to x and equals \(\delta \) clipped into a specific range: \(\delta ^{\tau } = \max (\min (\delta , \tau ), -\tau )\), with \(\tau \) constraining the maximum norm. \(\ell _{adv}(\cdot )\) calculates the loss of the ASR model, and minimizing \(\ell _{adv}(\cdot )\) encourages the modified input \(x'\) to be transcribed as t. If a solution is found, \(\delta ^{\tau }\) is returned as a UAP. To make \(\delta ^{\tau }\) less suspicious, it is preferred that \(\tau \) be as small as possible. Thus, \(\tau \) should be initialized to a large value, then gradually decreased until a valid solution can no longer be found.
Instead of viewing x as the input audio and \(\delta ^{\tau }\) as noise, we consider \(\delta ^{\tau }\) as a signal which is transcribed as t. From this perspective, x is considered as “noise” applied to \(\delta ^{\tau }\), and \(\delta ^{\tau }\) is robust against modification by adding \(x \in D\). We will validate this point of view later in Sect. 5. A recent study by Zhang et al. [29] presented a similar idea in the image AE domain. They showed that UAPs were highly correlated with the output logits of image classifiers so that the classification was actually dominated by UAPs.
\(\ell _{reg}(\cdot )\) is a regularization term weighted by \(\lambda \). It is defined as follows:
Minimizing \(\ell _{reg}(\cdot )\) encourages the maximum norm of \(\delta \) to stay within \(\tau \). This prevents \(\frac{\partial \ell _{adv}(f(x + \delta ^{\tau }_{1:n}), t)}{\partial \delta _i}\) from always being 0 when \(|\delta _i| > \tau \).
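One regularizer consistent with this description is a hinge penalty on samples whose magnitude exceeds \(\tau \); a sketch of the idea (the paper's exact form may differ):

```python
import numpy as np

def l_reg(delta: np.ndarray, tau: float) -> float:
    """Hinge penalty: only samples with |delta_i| > tau contribute, so
    minimizing it pulls out-of-range samples back inside [-tau, tau],
    keeping their gradient alive even though clipping alone would zero it."""
    return float(np.sum(np.maximum(np.abs(delta) - tau, 0.0)))

delta = np.array([0.5, -0.1, 0.9, -0.6])
print(l_reg(delta, tau=0.4))   # contributions: 0.1 + 0.0 + 0.5 + 0.2
```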
In practice, we split the generation process into two stages. During stage 1, we set \(\tau =1\) and gradually let \(\delta ^{\tau }\) become effective for more and more audio in \(\mathcal {D}\). Stage 1 finishes when \(\delta ^{\tau }\) can attack all audio in \(\mathcal {D}\), i.e., an audio AE is generated by applying \(\delta ^{\tau }\) to any audio in \(\mathcal {D}\). The purpose of this stage is to quickly find a valid \(\delta ^{\tau }\), even if \(\delta ^{\tau }\) is noisy. In stage 2, we focus on making \(\delta ^{\tau }\) less noisy by gradually decreasing \(\tau \) until no valid solution can be found. This two-stage generation process is provided in Algorithm 1.
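The two-stage schedule can be sketched as follows. Here `attack_succeeds` and `optimize_step` are illustrative stubs standing in for the CTC-loss attack check and the gradient update, not the paper's actual implementation:

```python
import numpy as np

def clip_uap(delta: np.ndarray, tau: float) -> np.ndarray:
    return np.clip(delta, -tau, tau)

def generate_uap(audio_set, attack_succeeds, optimize_step, m, eta=0.8,
                 max_norm_iters=30, decay=0.8):
    """Two-stage UAP generation (sketch).
    Stage 1: tau = 1; grow the covered set one clip at a time until the
             UAP attacks every clip in the set.
    Stage 2: shrink tau geometrically; stop once the success rate over
             the set drops below eta, returning the last valid UAP."""
    delta, tau = np.zeros(m), 1.0
    covered = []
    for x in audio_set:                                   # Stage 1
        covered.append(x)
        while not all(attack_succeeds(clip_uap(delta, tau), a) for a in covered):
            delta = optimize_step(delta, covered, tau)
    best = clip_uap(delta, tau)
    for _ in range(max_norm_iters):                       # Stage 2
        tau *= decay
        # one illustrative update per tau; the real algorithm iterates to convergence
        delta = optimize_step(delta, covered, tau)
        rate = np.mean([attack_succeeds(clip_uap(delta, tau), a) for a in covered])
        if rate < eta:
            break
        best = clip_uap(delta, tau)
    return best
```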
4.2 Robustness Against Room Impulse Response
In the audio AE domain, expectation over transformation (EOT) has been widely used to make audio AEs robust against RIRs [20, 22, 25]. The purpose of robustness against RIRs is to keep audio AEs adversarial when played through speakers and received by microphones. EOT [3] was initially proposed to make image AEs robust against camera transformations.
In this research, we also deploy EOT to make our UAPs robust against RIRs. It should be mentioned that computation becomes prohibitively expensive if too many RIRs are considered [7]. To incorporate EOT, the optimization problem defined in Eq. 3 is modified as follows:
where \(\mathcal {H}\) is the distribution of RIRs considered, and \(*\) denotes convolution operation.
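Per the convolution above, one Monte Carlo draw of the EOT expectation transforms the clipped adversarial signal with a randomly sampled RIR; a sketch (the RIRs here are placeholder arrays, not simulated rooms):

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_rir(x: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Simulate over-the-air playback: convolve with an RIR, then truncate
    back to the original length."""
    return np.convolve(x, h)[: len(x)]

def eot_sample(x: np.ndarray, delta: np.ndarray, rirs: list) -> np.ndarray:
    """One draw of the expectation in the EOT objective: clip the
    adversarial signal, then transform it with a randomly chosen RIR."""
    x_adv = np.clip(x + delta[: len(x)], -1.0, 1.0)
    h = rirs[rng.integers(len(rirs))]
    return apply_rir(x_adv, h)

# toy RIRs: an identity impulse and a short two-tap echo
rirs = [np.array([1.0]), np.array([1.0, 0.5])]
out = eot_sample(np.zeros(8), np.full(16, 0.1), rirs)
```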
Algorithm 2 provides the process used to solve the optimization problem shown in Eq. 5. Specifically, \(\delta \) is initialized as the solution found in Stage 1 of Algorithm 1. For each audio, we randomly select an RIR to transform the audio. \(\tau \) constrains the maximum norm of \(\delta ^\tau \), and it gradually decreases until no valid solution can be found.
5 Results and Discussion
5.1 Setup
In this study, we used DeepSpeech2 as the target model, which is an end-to-end RNN based ASR model with CTC loss [2]. We used the open source implementation of DeepSpeech2 V2 (see Footnote 1) with Librispeech [18] as the dataset, since a pre-trained model on this dataset was released. Specifically, we randomly extracted 150 audio with durations of 2 to 4 seconds from the “dev-clean” set to generate UAPs. We also extracted all audio with durations of 2 to 4 seconds from the “test-clean” set for evaluation. We used the following 5 target phrases to generate UAPs: “power off”, “open the door”, “turn off lights”, “use airplane mode”, “visit malicious dot com”. It should be noted that target phrases cannot be too long, because it is overly challenging to force a target model to output long transcripts for short input audio.
Throughout the experiments, unless otherwise indicated, we used the following settings. The Adam method [11] was used for optimization with a learning rate of 0.001. \(\tau \), which controls the maximum norm of UAPs as shown in Eq. 3 and Eq. 5, was initially set to 1.0 and then decreased by multiplying it by 0.8. The minimum success rate \(\eta \) was fixed at 0.8 for both Eq. 3 and Eq. 5. Without incorporating EOT, the maximum number of iterations to lower the maximum norm of UAPs was set to 30. With EOT, the maximum number of iterations was set to 60, because convergence is more computationally expensive in this case.
5.2 Generating Universal Adversarial Perturbations
We first used the Stage1 function in Algorithm 1 to generate UAPs for the 5 target phrases. As previously mentioned, the aim of this stage is to generate valid UAPs, even though they may be noisy. Generating UAPs for the target phrases “power off”, “open the door”, “turn off lights”, “use airplane mode”, and “visit malicious dot com” took 5.0, 2.8, 7.8, 4.2, and 7.9 hours, respectively. Clearly, generation time varies across target phrases. This may be because target phrases that are seen less frequently during training of the target model require more iterations. At the start of the generation process, the audio set contained only 1 audio. When the generated UAPs were able to attack all audio in the current set, we added a new audio to the set, i.e., the size of the set increased by 1. This strategy is beneficial for convergence, since the UAPs for a specific set only need to handle one new audio. The set at the end of the process contained 150 audio.
Figure 1 shows the iteration trend to generate UAPs capable of attacking all audio as we gradually increase the size of the audio set. To clearly show the iteration trend, we present a moving average based on 3 data points. The horizontal axis represents the number of audio used to train UAPs, while the vertical axis indicates the number of iterations needed for the UAPs to attack all audio in the set. Early on, when the size of the set was small, the number of iterations increased as more audio were added to the set. This is reasonable, since the UAPs had to attack a greater number of audio, so more computation was required to find a solution. However, interestingly, the number of iterations started to decrease when the size of the audio set reached around 20. This can be explained from the point of view that the generated UAPs are considered as signals that are transcribed into the target phrase, while audio containing speech is considered as noise applied to the UAPs. From that perspective, it is intuitive that after a while, the UAPs become more robust despite additional audio being added to the set. In other words, once UAPs are robust against a large set of audio, fewer iterations are required to find a solution that attacks newly added audio.
To test the performance of the generated UAPs, we applied the UAPs to all audio with a duration between 2 and 4 seconds from the “test-clean” set. As shown in Fig. 2, the success rate of UAPs increased as more audio was used for training. In the figure, the horizontal axis represents the number of audio used to train UAPs, while the success rate was calculated by applying UAPs to all 736 audio with a duration between 2 and 4 seconds from the “test-clean” set. The increase in success rate is consistent with the above discussion that UAPs become more robust against new audio as the size of the training set increases.
UAPs generated using Stage1 alone were too noisy to be used in practice, as they would easily arouse suspicion. Stage2 was used to constrain the maximum norm of UAPs. To effectively decrease the maximum norm, UAPs were only required to attack \(80\%\) of audio in the audio set by setting \(\eta =0.8\). Intuitively, lowering \(\eta \) leads to a smaller maximum norm of UAPs.
Table 1 presents the results of the 5 UAPs. It took around 1 hour to finish Stage2 for each UAP. We can see that the maximum norm of UAPs was greatly reduced after Stage2. UAPs generated using Stage1 and Stage2 with “power off” as the target phrase are compared in Fig. 3. Although the success rate on the test audio decreased because we set \(\eta =0.8\) instead of 1.0, the UAPs were still able to attack over \(45\%\) of audio from the test set.
To give a sense of the distortion caused by our UAPs, Carlini and Wagner [5] reported that the \(95\%\) interval for distortion using their approach was between −15 dB and −45 dB. While our UAPs introduce more distortion compared with their approach, the key thing to note is that their perturbations are only effective for a specific audio input and must be recalculated for different audio, whereas UAPs are universal and able to attack generic audio.
5.3 Preserving Temporal Dependency
Temporal dependency (TD) was proposed as an important property for detecting audio AEs by Yang et al. [27]. The key assumption is that benign audio preserves TD while audio AEs do not. Specifically, let \(S_k\) denote the transcript of the first k portion of the input audio. Let \(S_{\{whole, k\}}\) denote the first k portion of the entire transcript, such that the length of \(S_{\{whole, k\}}\) is equal to the length of \(S_k\). If \(S_{\{whole, k\}}\) is not consistent with \(S_k\), the audio is potentially adversarial.
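The TD check can be sketched as follows; `transcribe` stands in for the ASR model, and consistency is checked here by exact comparison (Yang et al. use WER/CER-based scores in practice):

```python
def td_check(audio, k: float, transcribe) -> bool:
    """Temporal-dependency check: transcribe the first k portion of the
    audio and compare it with the same-length prefix of the whole
    transcript. True means consistent (the audio looks benign)."""
    s_whole = transcribe(audio)
    s_k = transcribe(audio[: int(len(audio) * k)])
    return s_whole[: len(s_k)] == s_k

# toy stand-ins: "audio" is a string and the "model" simply echoes it
benign_model = lambda a: a
# an AE-like model that outputs the full target phrase only for long inputs
ae_model = lambda a: "power off" if len(a) >= 10 else "zzz"

print(td_check("open the door", 0.5, benign_model))  # consistent
print(td_check("open the door", 0.5, ae_model))      # inconsistent
```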
In our experiments, we found that UAPs generated by Stage2 can be transcribed as the target phrase and preserve TD. This finding is consistent with our point of view that UAPs can be considered as signals that are transcribed as the target phrase. The results for the target phrases “power off”, “use airplane mode” and “visit malicious dot com” are shown in Table 2. The experimental results show that the transcripts of differently sliced UAPs were consistent with the corresponding portions of the target phrase. An interesting observation is that when \(k \ge 0.6\), all the partial UAPs were accurately transcribed as the target phrase. This is intuitive because the duration of the UAPs was 4 seconds, and they were required by design to attack \(80\%\) of audio with durations between 2 and 4 seconds. Thus, the first portion of the UAPs was transcribed as the target phrase and robust against modification. The remaining parts of the UAPs then aimed to suppress output from DeepSpeech2, i.e., forcing DeepSpeech2 to output nothing for those parts.
As the UAPs preserved TD, this suggests that audio AEs generated by applying the UAPs should also preserve TD. Therefore, we calculated the same metrics proposed by Yang et al. [27] to validate whether our audio AEs generated using the UAPs were able to evade TD detection (see Footnote 2). These metrics were the area under curve (AUC) score of word error rate (WER), the AUC of character error rate (CER), and the AUC of longest common prefix (LCP).
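The per-example scores underlying these AUC values can be computed as follows; a sketch where CER is the Levenshtein edit distance normalized by the reference length, and LCP is the longest common prefix of the two transcripts:

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate: Levenshtein distance / reference length."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                                # deletions
    for j in range(n + 1):
        d[0][j] = j                                # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / max(m, 1)

def lcp(a: str, b: str) -> int:
    """Length of the longest common prefix of two transcripts."""
    k = 0
    while k < min(len(a), len(b)) and a[k] == b[k]:
        k += 1
    return k

print(cer("power off", "power on"))  # 2 edits over 9 reference characters
print(lcp("power off", "power on"))  # the shared prefix is "power o"
```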
The audio AEs used in the experiment were those successfully generated by applying our Stage 2 UAPs to the test audio. Table 3 shows the experimental results for \(k=\frac{1}{2}, \frac{2}{3}, \frac{3}{4}\). We can see that TD detection only achieved good performance with WER and LCP on detecting audio AEs with the target phrase “power off” when \(k=\frac{1}{2}\). This implies that the first half of the UAPs for “power off” was not robust enough. To improve the robustness against TD detection for “power off” when \(k=\frac{1}{2}\), a potential solution is to increase the value of \(\eta \) in Stage2. If \(\eta = 1.0\), the first half of the UAPs for “power off” will be forced to be robust, although this will result in a larger maximum norm for UAPs. Other than the “power off” target phrase, we can see from Table 3 that most AUC scores were below 0.75. This indicates that audio AEs generated by our UAPs were overall robust against TD detection.
5.4 Robustness Against Gaussian Noise
As discussed above, UAPs were trained to be robust against modification by audio containing speech. Table 4 further shows that audio AEs generated by applying UAPs to test audio were also robust against Gaussian noise up to a standard deviation of 0.01.
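This robustness test adds zero-mean Gaussian noise of a given standard deviation to each audio AE before transcription; a minimal sketch (the sample rate is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian_noise(x_adv: np.ndarray, std: float) -> np.ndarray:
    """Perturb an audio AE with N(0, std^2) noise and re-clip to [-1, 1].
    The AE counts as robust if the model still outputs the target phrase."""
    return np.clip(x_adv + rng.normal(0.0, std, size=x_adv.shape), -1.0, 1.0)

noisy = add_gaussian_noise(np.zeros(16000), std=0.01)   # one second at 16 kHz
```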
5.5 Robustness Against Room Impulse Response
We generated 100 RIRs from virtual rooms with dimensions \((width, length, height)\) using pyroomacoustics 0.4.2 (see Footnote 3). 80 RIRs were used for training while 20 RIRs were used for testing. height was set to 3.5, while \(width = length\) and their values were randomly sampled from \(\mathcal {U}(4,6)\). The time it takes for an RIR to decay by 60 dB was randomly sampled from \(\mathcal {U}(0.15,0.20)\). The locations of microphones and audio sources were randomly sampled inside the virtual rooms.
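The paper uses pyroomacoustics to simulate these rooms; as a self-contained stand-in, a toy RIR with a target RT60 (the 60 dB decay time sampled above) can be synthesized as exponentially decaying noise. This is illustrative only, not a room simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_rir(rt60: float, fs: int = 16000) -> np.ndarray:
    """Noise tail whose envelope drops by 60 dB (a factor of 10^-3) at t = rt60."""
    n = int(rt60 * fs)
    t = np.arange(n) / fs
    envelope = 10.0 ** (-3.0 * t / rt60)
    h = envelope * rng.normal(size=n)
    h[0] = 1.0                             # direct-path impulse
    return h / np.max(np.abs(h))           # normalize peak to 1

rt60 = rng.uniform(0.15, 0.20)             # matches the U(0.15, 0.20) sampling above
h = toy_rir(rt60)
```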
To test robustness against RIRs, each audio AE was transformed by a random RIR from the 20 test RIRs. We also transformed the UAPs by all 20 RIRs to check whether the UAPs themselves are robust against RIRs. When using Algorithm 2 to generate robust UAPs, we set the maximum number of iterations to 60.
Table 5 shows the results of comparing robust UAPs generated using Algorithm 2 with UAPs generated by Stage2. Table 5 also compares the robustness of audio AEs, which were generated by applying the corresponding UAPs to test audio. Although there was an exception for UAPs of “open the door”, UAPs generated by Stage2 and corresponding audio AEs were obviously not robust against RIRs. In contrast, UAPs generated using Algorithm 2 and their corresponding audio AEs were robust against RIRs. It should be noted that robustness against RIRs was obtained at the cost of significantly larger maximum norm.
5.6 Limitation
Our experiments showed that the quality of audio AEs generated by applying UAPs was poor. The distortion caused by UAPs becomes worse if we make them robust against RIRs. While it would be difficult to further lower the maximum norm of UAPs while keeping them adversarial, we can potentially hide UAPs below the hearing threshold of an unsuspicious sound. This may be a promising future direction. A potential scenario is where an adversary plays unsuspicious adversarial audio in the background while the victim speaks to a voice interface, thereby causing the underlying ASR model to be fooled. A similar idea was proposed in CommanderSong [28], which hid perturbations within a song. However, their method may not be robust for speech, which is common for voice interfaces.
In this section, we present a feasibility study on hiding UAPs below the hearing threshold in a piece of piano music. We incorporated the masking loss proposed by Qin et al. [20], which hides perturbations below the hearing threshold of speech. Specifically, we replaced \(\ell _{reg}(\cdot )\) in Eq. 3 with the masking loss. Instead of generating UAPs from scratch, we used UAPs generated by Stage2 of Algorithm 1 as initial values. It should be mentioned that audio AEs were generated by applying the UAPs together with the music.
Measuring the maximum norm of UAPs is meaningless in this case because large values in UAPs would be masked by the music. Therefore, we measured the Perceptual Evaluation of Speech Quality (PESQ), which was proposed to automatically measure degradation in the context of telephony [21]. The values range from 1.0 to 4.5 with larger values indicating better quality.
After running 30 iterations, we successfully generated UAPs by setting \(\eta =0.5\). The PESQ score between the original music and the music distorted by UAPs was 2.97, which indicates moderate quality. The success rate of generating audio AEs from test audio was \(30.71\%\). This shows that UAPs hidden in music are still able to attack generic audio.
6 Conclusion and Future Work
In the audio AE domain, there is limited work focusing on generating UAPs against ASR models. In this research, we filled this research gap by proposing the first successful targeted UAPs against ASR models with CTC loss. We analyzed UAPs from the point of view that UAPs can be considered as signals that are transcribed as the target phrase. To decrease the distortion caused by UAPs, we tried to minimize the maximum norm of UAPs. In addition, we showed that UAPs themselves preserved temporal dependency, such that the audio AEs generated by applying UAPs also preserved temporal dependency. UAPs and the corresponding audio AEs were also robust against Gaussian noise. We demonstrated the possibility of hiding UAPs below the hearing threshold of an unsuspicious sound, such as music. Future work will focus on generating UAPs with reduced distortion.
Notes
- 1.
- 2.
We used the open source implementation from https://github.com/AI-secure/Characterizing-Audio-Adversarial-Examples-using-Temporal-Dependency.
- 3.
References
Abdoli, S., Hafemann, L.G., Rony, J., Ayed, I.B., Cardinal, P., Koerich, A.L.: Universal adversarial audio perturbations. arXiv preprint arXiv:1908.03173 (2019)
Amodei, D., et al.: Deep speech 2: end-to-end speech recognition in English and mandarin. In: International Conference on Machine Learning, pp. 173–182 (2016)
Athalye, A., Engstrom, L., Ilyas, A., Kwok, K.: Synthesizing robust adversarial examples. In: International Conference on Machine Learning, pp. 284–293. PMLR (2018)
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)
Carlini, N., Wagner, D.: Audio adversarial examples: targeted attacks on speech-to-text. In: 2018 IEEE Security and Privacy Workshops (SPW), pp. 1–7. IEEE (2018)
Chan, W., Jaitly, N., Le, Q.V., Vinyals, O.: Listen, attend and spell: a neural network for large vocabulary conversational speech recognition. In: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, 20–25 March 2016, pp. 4960–4964. IEEE (2016)
Du, X., Pun, C., Zhang, Z.: A unified framework for detecting audio adversarial examples. In: Chen, C.W., et al. (eds.), MM 2020: The 28th ACM International Conference on Multimedia, Virtual Event/Seattle, WA, USA, 12–16 October 2020, pp. 3986–3994. ACM (2020)
Graves, A., Fernández, S., Gomez, F., Schmidhuber, J.: Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In: Proceedings of the 23rd International Conference on Machine Learning, pp. 369–376 (2006)
Hannun, A., et al.: Deep speech: scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567 (2014)
Ilyas, A., Engstrom, L., Athalye, A., Lin, J.: Black-box adversarial attacks with limited queries and information. In: Dy, J.G., Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, 10–15 July 2018, volume 80 of Proceedings of Machine Learning Research, pp. 2142–2151. PMLR (2018)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: Bengio, Y., LeCun, Y. (eds.) 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015, Conference Track Proceedings (2015)
Li, J., et al.: Universal adversarial perturbations generative network for speaker recognition. In: IEEE International Conference on Multimedia and Expo, ICME 2020, London, UK, 6–10 July 2020, pp. 1–6. IEEE (2020)
Li, Z., Wu, Y., Liu, J., Chen, Y., Yuan, B.: Advpulse: universal, synchronization-free, and targeted audio adversarial attacks via subsecond perturbations. In: Ligatti, J., Ou, X., Katz, J., Vigna, G. (eds.) CCS 2020: 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, USA, 9–13 November 2020, pp. 1121–1134. ACM (2020)
Lu, Z., Han, W., Zhang, Y., Cao, L.: Exploring targeted universal adversarial perturbations to end-to-end ASR models. arXiv preprint arXiv:2104.02757 (2021)
Moosavi-Dezfooli, S.-M., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1765–1773 (2017)
Moosavi-Dezfooli, S.-M., Fawzi, A., Frossard, P.: Deepfool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582 (2016)
Neekhara, P., Hussain, S., Pandey, P., Dubnov, S., McAuley, J.J., Koushanfar, F.: Universal adversarial perturbations for speech recognition systems. In: Kubin, G., Kacic, Z. (eds.) Interspeech 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15–19 September 2019, pp. 481–485. ISCA (2019)
Panayotov, V., Chen, G., Povey, D., Khudanpur, S.: Librispeech: an ASR corpus based on public domain audio books. In: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206–5210. IEEE (2015)
Park, D.S., Chan, W., Zhang, Y., Chiu, C., Zoph, B., Cubuk, E.D., Le, Q.V.: Specaugment: a simple data augmentation method for automatic speech recognition. In: Kubin, G., Kacic, Z. (eds.) Interspeech 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15–19 September 2019, pp. 2613–2617. ISCA (2019)
Qin, Y., Carlini, N., Cottrell, G.W., Goodfellow, I.J., Raffel, C.: Imperceptible, robust, and targeted adversarial examples for automatic speech recognition. In: Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9–15 June 2019, Long Beach, CA, USA, pp. 5231–5240 (2019)
Rix, A.W., Beerends, J.G., Hollier, M.P., Hekstra, A.P.: Perceptual evaluation of speech quality (PESQ)-a new method for speech quality assessment of telephone networks and codecs. In: IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2001, 7–11 May, 2001, Salt Palace Convention Center, Salt Lake City, Utah, USA, Proceedings, pp. 749–752. IEEE (2001)
Schönherr, L., Eisenhofer, T., Zeiler, S., Holz, T., Kolossa, D.: Imperio: Robust over-the-air adversarial examples for automatic speech recognition systems. In: ACSAC 2020: Annual Computer Security Applications Conference, Virtual Event/Austin, TX, USA, 7–11 December, 2020, pp. 843–855. ACM (2020)
Szegedy, C., et al.: Intriguing properties of neural networks. In: Bengio, Y., LeCun, Y. (eds.) 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14–16, 2014, Conference Track Proceedings (2014)
Wang, D., Dong, L., Wang, R., Yan, D., Wang, J.: Targeted speech adversarial example generation with generative adversarial network. IEEE Access 8, 124503–124513 (2020)
Xie, Y., Li, Z., Shi, C., Liu, J., Chen, Y., Yuan, B.: Enabling fast and universal audio adversarial attack using generative model. arXiv preprint arXiv:2004.12261 (2020)
Xie, Y., Shi, C., Li, Z., Liu, J., Chen, Y., Yuan, B.: Real-time, universal, and robust adversarial attacks against speaker recognition systems. In: 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2020, Barcelona, Spain, 4–8 May 2020, pp. 1738–1742. IEEE (2020)
Yang, Z., Li, B., Chen, P., Song, D.: Characterizing audio adversarial examples using temporal dependency. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. OpenReview.net (2019)
Yuan, X., et al.: Commandersong: a systematic approach for practical adversarial voice recognition. In: 27th USENIX Security Symposium (USENIX Security 18), pp. 49–64 (2018)
Zhang, C., Benz, P., Imtiaz, T., Kweon, I.S.: Understanding adversarial examples from the mutual influence of images and perturbations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14521–14530 (2020)
Zhao, P., et al.: On the design of black-box adversarial examples by leveraging gradient-free optimization and operator splitting method. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 121–130 (2019)
© 2021 Springer Nature Switzerland AG
Zong, W., Chow, YW., Susilo, W., Rana, S., Venkatesh, S. (2021). Targeted Universal Adversarial Perturbations for Automatic Speech Recognition. In: Liu, J.K., Katsikas, S., Meng, W., Susilo, W., Intan, R. (eds) Information Security. ISC 2021. Lecture Notes in Computer Science(), vol 13118. Springer, Cham. https://doi.org/10.1007/978-3-030-91356-4_19