Beyond $L_p$ Clipping: Equalization based Psychoacoustic Attacks against ASRs

Hadi Abdullah, Muhammad Sajidur Rahman, Christian Peeters, Cassidy Gibson, Washington Garcia, Vincent Bindschaedler, Thomas Shrimpton, Patrick Traynor
Proceedings of The 13th Asian Conference on Machine Learning, PMLR 157:672-688, 2021.

Abstract

Automatic Speech Recognition (ASR) systems convert speech into text and can be placed into two broad categories: traditional and fully end-to-end. Both types have been shown to be vulnerable to adversarial audio examples that sound benign to the human ear but force the ASR to produce malicious transcriptions. Of these attacks, only the “psychoacoustic” attacks can create examples with relatively imperceptible perturbations, as they leverage the knowledge of the human auditory system. Unfortunately, existing psychoacoustic attacks can only be applied against traditional models, and are obsolete against the newer, fully end-to-end ASRs. In this paper, we propose an equalization-based psychoacoustic attack that can exploit both traditional and fully end-to-end ASRs. We successfully demonstrate our attack against real-world ASRs that include DeepSpeech and Wav2Letter. Moreover, we employ a user study to verify that our method creates low audible distortion. Specifically, 80 of the 100 participants rated all of our attack audio samples as less noisy than those of the existing state-of-the-art attack. Through this, we demonstrate that both types of existing ASR pipelines can be exploited with minimal degradation to audio quality.
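The core equalization idea can be sketched as follows. This is an illustrative NumPy example of frequency-domain shaping, not the paper's actual implementation: the band edges and attenuation values below are hypothetical, chosen only to show how an equalizer curve can push perturbation energy out of perceptually sensitive bands.

```python
import numpy as np

def equalized_perturbation(perturbation, sample_rate, gain_db):
    """Shape an adversarial perturbation with a frequency-domain equalizer.

    gain_db maps (low_hz, high_hz) band edges to a gain in dB, e.g.
    attenuating bands where human hearing is most sensitive so the
    perturbation is harder to notice. Illustrative sketch only.
    """
    spectrum = np.fft.rfft(perturbation)
    freqs = np.fft.rfftfreq(len(perturbation), d=1.0 / sample_rate)
    gains = np.ones_like(freqs)
    for (lo, hi), db in gain_db.items():
        band = (freqs >= lo) & (freqs < hi)
        gains[band] = 10.0 ** (db / 20.0)  # dB -> linear amplitude
    return np.fft.irfft(spectrum * gains, n=len(perturbation))

# Hypothetical usage: attenuate the sensitive 1-4 kHz band by 20 dB
# in a 1-second, 16 kHz perturbation signal.
rng = np.random.default_rng(0)
delta = rng.standard_normal(16000)
shaped = equalized_perturbation(delta, 16000, {(1000, 4000): -20.0})
```

The shaped perturbation can then be added to the benign audio before (or during) the adversarial optimization loop, trading a small loss in attack signal energy for lower audibility.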

Cite this Paper


BibTeX
@InProceedings{pmlr-v157-abdullah21a,
  title     = {Beyond $L_{p}$ Clipping: Equalization based Psychoacoustic Attacks against {ASRs}},
  author    = {Abdullah, Hadi and Rahman, Muhammad Sajidur and Peeters, Christian and Gibson, Cassidy and Garcia, Washington and Bindschaedler, Vincent and Shrimpton, Thomas and Traynor, Patrick},
  booktitle = {Proceedings of The 13th Asian Conference on Machine Learning},
  pages     = {672--688},
  year      = {2021},
  editor    = {Balasubramanian, Vineeth N. and Tsang, Ivor},
  volume    = {157},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v157/abdullah21a/abdullah21a.pdf},
  url       = {https://proceedings.mlr.press/v157/abdullah21a.html},
  abstract  = {Automatic Speech Recognition (ASR) systems convert speech into text and can be placed into two broad categories: traditional and fully end-to-end. Both types have been shown to be vulnerable to adversarial audio examples that sound benign to the human ear but force the ASR to produce malicious transcriptions. Of these attacks, only the “psychoacoustic” attacks can create examples with relatively imperceptible perturbations, as they leverage the knowledge of the human auditory system. Unfortunately, existing psychoacoustic attacks can only be applied against traditional models, and are obsolete against the newer, fully end-to-end ASRs. In this paper, we propose an equalization-based psychoacoustic attack that can exploit both traditional and fully end-to-end ASRs. We successfully demonstrate our attack against real-world ASRs that include DeepSpeech and Wav2Letter. Moreover, we employ a user study to verify that our method creates low audible distortion. Specifically, 80 of the 100 participants rated \textit{all} of our attack audio samples as less noisy than those of the existing state-of-the-art attack. Through this, we demonstrate that both types of existing ASR pipelines can be exploited with minimal degradation to audio quality.}
}
Endnote
%0 Conference Paper %T Beyond $L_p$ Clipping: Equalization based Psychoacoustic Attacks against ASRs %A Hadi Abdullah %A Muhammad Sajidur Rahman %A Christian Peeters %A Cassidy Gibson %A Washington Garcia %A Vincent Bindschaedler %A Thomas Shrimpton %A Patrick Traynor %B Proceedings of The 13th Asian Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2021 %E Vineeth N. Balasubramanian %E Ivor Tsang %F pmlr-v157-abdullah21a %I PMLR %P 672--688 %U https://proceedings.mlr.press/v157/abdullah21a.html %V 157 %X Automatic Speech Recognition (ASR) systems convert speech into text and can be placed into two broad categories: traditional and fully end-to-end. Both types have been shown to be vulnerable to adversarial audio examples that sound benign to the human ear but force the ASR to produce malicious transcriptions. Of these attacks, only the “psychoacoustic” attacks can create examples with relatively imperceptible perturbations, as they leverage the knowledge of the human auditory system. Unfortunately, existing psychoacoustic attacks can only be applied against traditional models, and are obsolete against the newer, fully end-to-end ASRs. In this paper, we propose an equalization-based psychoacoustic attack that can exploit both traditional and fully end-to-end ASRs. We successfully demonstrate our attack against real-world ASRs that include DeepSpeech and Wav2Letter. Moreover, we employ a user study to verify that our method creates low audible distortion. Specifically, 80 of the 100 participants rated all of our attack audio samples as less noisy than those of the existing state-of-the-art attack. Through this, we demonstrate that both types of existing ASR pipelines can be exploited with minimal degradation to audio quality.
APA
Abdullah, H., Rahman, M.S., Peeters, C., Gibson, C., Garcia, W., Bindschaedler, V., Shrimpton, T. & Traynor, P. (2021). Beyond $L_p$ Clipping: Equalization based Psychoacoustic Attacks against ASRs. Proceedings of The 13th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 157:672-688. Available from https://proceedings.mlr.press/v157/abdullah21a.html.
