Abstract
Adversarial machine learning is an emerging field that exposes the vulnerability of deep learning models. As the number of effective adversarial attack methods grows, the reliability and robustness of state-of-the-art artificial intelligence (AI) models have become a critical concern, and studying attacks that challenge these models is therefore essential. Classification tasks are particularly vulnerable to adversarial attacks, yet most attack strategies target color or gray-scale images; adversarial attacks on binary image recognition systems remain insufficiently studied. Binary images are simple: each pixel takes one of two values in a single channel. This simplicity gives binary images a significant computational advantage over color and gray-scale images. Moreover, most optical character recognition (OCR) systems, such as handwritten character recognition, license plate identification, and bank check recognition, operate on binary images or include a binarization step in their processing pipelines. In this paper, we propose a simple yet efficient attack on binary image classifiers, the efficient combinatorial black-box adversarial attack (ECoBA). We validate its effectiveness on two data sets and three classification networks, and we compare the proposed method with state-of-the-art approaches in terms of advantages, disadvantages, and applicability.
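To make the setting concrete, the sketch below illustrates two ideas mentioned in the abstract: the binarization step used by many OCR pipelines, and a black-box attack that perturbs a binary image by flipping pixels while querying only the classifier's output probabilities. This is a minimal, naive greedy baseline written for illustration under assumed interfaces (the `query_fn` callable and its probability output are hypothetical); it is not the authors' ECoBA algorithm, whose details are given in the paper itself.

```python
import numpy as np

def binarize(gray, threshold=0.5):
    """Threshold a grayscale image (values in [0, 1]) to a binary image."""
    return (gray >= threshold).astype(np.uint8)

def greedy_pixel_flip_attack(image, query_fn, true_label, max_flips=20):
    """Illustrative greedy black-box attack on a binary image classifier.

    image      : 2-D numpy array with entries in {0, 1}
    query_fn   : black-box access only; returns class probabilities for an image
    true_label : index of the correct class
    max_flips  : perturbation budget (number of flipped pixels)
    """
    adv = image.copy()
    for _ in range(max_flips):
        probs = query_fn(adv)
        if probs.argmax() != true_label:
            return adv  # misclassification achieved
        best_drop, best_pos = 0.0, None
        # Try every single-pixel flip and keep the one that most reduces
        # the classifier's confidence in the true class.
        for pos in np.ndindex(adv.shape):
            candidate = adv.copy()
            candidate[pos] = 1 - candidate[pos]  # flip one binary pixel
            drop = probs[true_label] - query_fn(candidate)[true_label]
            if drop > best_drop:
                best_drop, best_pos = drop, pos
        if best_pos is None:
            break  # no single flip reduces confidence further
        adv[best_pos] = 1 - adv[best_pos]
    return adv
```

Because every candidate flip costs one model query, such exhaustive greedy search is expensive; the appeal of a combinatorial method like the one proposed here is reaching misclassification with far fewer queries and flipped pixels.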
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Bayram, S., Barner, K. (2023). A Black-Box Attack on Optical Character Recognition Systems. In: Tistarelli, M., Dubey, S.R., Singh, S.K., Jiang, X. (eds) Computer Vision and Machine Intelligence. Lecture Notes in Networks and Systems, vol 586. Springer, Singapore. https://doi.org/10.1007/978-981-19-7867-8_18
Print ISBN: 978-981-19-7866-1
Online ISBN: 978-981-19-7867-8