DOI: 10.1145/3586209.3591404 · WiSec '23 conference proceedings · Research article

Analysis of Lossy Generative Data Compression for Robust Remote Deep Inference

Published: 28 June 2023

Abstract

Networks of wireless sensors, including the Internet of Things (IoT), motivate lossy compression of sensor data to match the available network bandwidth (BW). Hence, sensor data intended for inference by a remote deep learning (RDL) model is likely to be reconstructed with distortion from a compressed representation received by the remote user over a wireless channel. Our focus is a particular class of lossy compression algorithms based on DL models, known as learned compression (LC). The link between information loss and compression rate in LC has not yet been studied in the framework of information theory, nor is it practically associated with any metadata that could describe the type and level of information loss to downstream users. This may make such compression undetectable yet potentially harmful. We study the robustness of an RDL classification model against lossy compression of its input, including robustness under an adversarial attack. We apply different compression methods to MNIST images, such as JPEG and a hierarchical LC, each at different compression ratios. For each lossy reconstruction and its uncompressed original, several techniques for topological feature characterization based on persistent homology are used to highlight the important differences among compression approaches that may affect the robust accuracy of a DL classifier trained on the original data. We conclude that LC is preferred in the described context, because it achieves the same accuracy as the originals (with and without an adversarial attack) on a trained DL MNIST classifier while using only 1/4 of the BW. We show that the calculated topological features differ between JPEG reconstructions and comparable LC reconstructions, which are closer to the features of the original. We also show that the attack induces a distribution shift in those features.
Finally, most LC models are generative, meaning that we can generate multiple statistically independent compressed representations of a data point, which opens the possibility of inference error correction at the RDL model. Due to space limitations, we leave this aspect for future work.
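The topological comparison described above rests on persistent-homology features of grayscale images. As a self-contained toy illustration (not the paper's actual pipeline), the Betti-0 part of a sublevel-set filtration can be computed with a union-find pass per threshold; the 8x8 image, the thresholds, and the one-bit quantizer standing in for a lossy codec below are all invented for the sketch:

```python
import numpy as np

def betti0_curve(img, thresholds):
    """Number of connected components (Betti-0) of the sublevel set
    {pixels <= t} for each threshold t, using 4-connectivity and
    union-find. A crude proxy for the H0 part of a sublevel-set
    persistent-homology filtration."""
    h, w = img.shape
    counts = []
    for t in thresholds:
        mask = img <= t
        parent = list(range(h * w))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb

        for i in range(h):
            for j in range(w):
                if not mask[i, j]:
                    continue
                if i + 1 < h and mask[i + 1, j]:
                    union(i * w + j, (i + 1) * w + j)
                if j + 1 < w and mask[i, j + 1]:
                    union(i * w + j, i * w + j + 1)
        roots = {find(i * w + j) for i in range(h)
                 for j in range(w) if mask[i, j]}
        counts.append(len(roots))
    return counts

# Invented 8x8 stand-in for an MNIST digit: two dark blobs on a light background.
img = np.ones((8, 8))
img[1:3, 1:3] = 0.1
img[5:7, 5:7] = 0.2

# Stand-in for a lossy codec: one-bit uniform quantization of pixel values.
quantized = np.round(img)

ts = [0.0, 0.15, 0.25, 1.0]
print(betti0_curve(img, ts))        # → [0, 1, 2, 1]
print(betti0_curve(quantized, ts))  # → [2, 2, 2, 1]
```

The two Betti-0 curves differ because quantization collapses the two blob intensities onto one level, which is the kind of topological shift between an original and its lossy reconstruction that such features can expose.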

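The generative error-correction idea deferred to future work can be sketched with entirely invented stand-ins: a nearest-mean classifier plays the remote model, and a Gaussian decoder models independent reconstructions of one compressed representation, over which a majority vote is taken:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for the remote classifier: nearest-mean over two prototypes.
protos = np.array([[0.0, 0.0], [1.0, 1.0]])

def classify(x):
    return int(np.argmin(np.linalg.norm(protos - x, axis=1)))

# Invented stand-in for a generative decoder: each decode of the same
# compressed representation returns the latent mean plus independent noise.
def decode(latent_mean, sigma=0.45):
    return latent_mean + rng.normal(0.0, sigma, size=latent_mean.shape)

latent_mean = np.array([0.55, 0.55])  # representation near the decision boundary

# Single-shot inference uses one reconstruction; error correction instead takes
# a majority vote over K statistically independent reconstructions.
K = 31
votes = [classify(decode(latent_mean)) for _ in range(K)]
majority = int(np.bincount(votes, minlength=2).argmax())
print(majority)
```

A single noisy reconstruction near the decision boundary is classified correctly only slightly more often than chance, while the vote over many independent reconstructions concentrates on the more likely label; this is the repetition-code intuition behind inference error correction at the RDL model.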

Cited By

  • (2024) Deep-Learned Compression for Radio-Frequency Signal Classification. 2024 IEEE International Symposium on Information Theory Workshops (ISIT-W), 1-6. DOI: 10.1109/ISIT-W61686.2024.10591760. Online publication date: 7-Jul-2024.
  • (2023) Generative Lossy Sensor Data Reconstructions for Robust Deep Inference. 2023 International Balkan Conference on Communications and Networking (BalkanCom), 1-5. DOI: 10.1109/BalkanCom58402.2023.10167886. Online publication date: 5-Jun-2023.


Published In

WiseML '23: Proceedings of the 2023 ACM Workshop on Wireless Security and Machine Learning
June 2023, 62 pages
ISBN: 9798400701337
DOI: 10.1145/3586209

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. adversarial attack
  2. generative deep learning
  3. learned compression
  4. lossy compression
  5. persistent homology

Conference

WiSec '23
