ISCA Archive Interspeech 2021

Evaluating the Vulnerability of End-to-End Automatic Speech Recognition Models to Membership Inference Attacks

Muhammad A. Shah, Joseph Szurley, Markus Mueller, Athanasios Mouchtaris, Jasha Droppo

Recent studies have shown that it may be possible to determine whether a machine learning model was trained on a given data sample using Membership Inference Attacks (MIA). In this paper we evaluate the vulnerability of state-of-the-art speech recognition models to MIA under black-box access. Using models trained with standard methods on public datasets, we demonstrate that, without any knowledge of the target model's parameters or training data, an MIA can successfully infer membership with precision and recall above 60%. Furthermore, for utterances from about 39% of the speakers, precision exceeds 75%, indicating that training data membership can be inferred more precisely for some speakers than others. While strong regularization reduces the overall accuracy of the MIA to almost 50%, the attacker can still infer membership for utterances from 25% of the speakers with high precision. These results indicate that (1) speaker-level MIA success should be reported alongside overall accuracy to provide a holistic view of the model's vulnerability, and (2) conventional regularization is an inadequate defense against MIA. We believe that the insights gleaned from this study can direct future work towards more effective defenses.
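The abstract does not describe the attack mechanics in detail. As a hedged illustration of the general idea behind a black-box MIA, the sketch below assumes a simple threshold attack on per-utterance ASR error (lower error on an utterance is treated as weak evidence of training-set membership); the threshold value, the toy error rates, and the helper names are illustrative assumptions, not the attack or data from the paper.

```python
import numpy as np

def membership_scores(error_rates):
    """Hypothetical per-utterance scores: lower ASR error on an utterance
    is taken as weak evidence that it appeared in the training set."""
    return -np.asarray(error_rates, dtype=float)

def infer_membership(error_rates, threshold):
    """Predict 'member' when the score exceeds a threshold that would be
    calibrated on data known to lie outside the training set."""
    return membership_scores(error_rates) > threshold

def precision_recall(predictions, labels):
    """Standard precision/recall, matching the metrics reported in the abstract."""
    predictions = np.asarray(predictions, dtype=bool)
    labels = np.asarray(labels, dtype=bool)
    tp = np.sum(predictions & labels)
    precision = tp / max(np.sum(predictions), 1)
    recall = tp / max(np.sum(labels), 1)
    return precision, recall

# Toy usage with made-up word error rates (not data from the paper):
train_wer = [0.05, 0.08, 0.10]    # utterances inside the training set
holdout_wer = [0.20, 0.15, 0.30]  # utterances outside the training set
threshold = -0.12                  # assumed calibration value
preds = infer_membership(train_wer + holdout_wer, threshold)
labels = [True] * len(train_wer) + [False] * len(holdout_wer)
print(precision_recall(preds, labels))
```

Speaker-level vulnerability, as reported in the paper, could then be summarized by computing such precision figures per speaker rather than over the whole evaluation set.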


doi: 10.21437/Interspeech.2021-1188

Cite as: Shah, M.A., Szurley, J., Mueller, M., Mouchtaris, A., Droppo, J. (2021) Evaluating the Vulnerability of End-to-End Automatic Speech Recognition Models to Membership Inference Attacks. Proc. Interspeech 2021, 891-895, doi: 10.21437/Interspeech.2021-1188

@inproceedings{shah21_interspeech,
  author={Muhammad A. Shah and Joseph Szurley and Markus Mueller and Athanasios Mouchtaris and Jasha Droppo},
  title={{Evaluating the Vulnerability of End-to-End Automatic Speech Recognition Models to Membership Inference Attacks}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={891--895},
  doi={10.21437/Interspeech.2021-1188}
}