
Output Prediction Attacks on Block Ciphers Using Deep Learning

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13285)

Abstract

In this paper, we propose deep learning-based output prediction attacks in a blackbox setting. As preliminary experiments, we first focus on two toy SPN block ciphers (small PRESENT-[4] and small AES-[4]) and one toy Feistel block cipher (small TWINE-[4]). Owing to their small internal structures with a block size of 16 bits, we can construct deep learning models by employing the maximum number of plaintext/ciphertext pairs, and we can precisely calculate the rounds in which full diffusion occurs. Next, based on the preliminary experiments, we explore whether the evaluation results obtained by our attacks against the three toy block ciphers carry over to block ciphers with larger block sizes, e.g., 32 and 64 bits. As a result, we demonstrate the following, specifically for the SPN block ciphers: (1) our attacks work for a number of rounds similar to that for which linear/differential attacks can succeed, (2) our attacks realize output predictions (specifically, ciphertext prediction and plaintext recovery) that are much stronger than distinguishing attacks, and (3) swapping or replacing the internal components of the target block ciphers affects the average success probabilities of the proposed attacks. It is particularly worth noting that this is a deep-learning-specific characteristic, because such swapping/replacing does not affect the average success probabilities of linear/differential attacks. We also confirm whether the proposed attacks work on the Feistel block cipher. We expect that our results will be an important stepping stone toward the design of deep-learning-resistant symmetric-key ciphers.

This work was done when the first author, Hayato Kimura, was a master's student at Tokai University, Japan, and a research assistant at the National Institute of Information and Communications Technology (NICT), Japan.


Notes

  1. https://github.com/keras-team/keras.

  2. https://keras.io/ja/layers/recurrent/.

  3. https://github.com/optuna/optuna.

  4. This assumption is strictly incorrect, but we use it to simplify the discussion.

  5. https://keras.io/ja/layers/convolutional/.

  6. This assumption is strictly incorrect, but we use it to simplify the discussion.

References

  1. Alallayah, K.M., Alhamami, A.H., AbdElwahed, W., Amin, M.: Applying neural networks for simplified data encryption standard (SDES) cipher system cryptanalysis. Int. Arab J. Inf. Technol. 9(2), 163–169 (2012)

  2. Alani, M.M.: Neuro-cryptanalysis of DES and Triple-DES. In: Huang, T., Zeng, Z., Li, C., Leung, C.S. (eds.) ICONIP 2012. LNCS, vol. 7667, pp. 637–646. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34500-5_75

  3. Alshammari, R., Nur Zincir-Heywood, A.: Machine learning based encrypted traffic classification: identifying SSH and Skype. In: IEEE CISDA, pp. 1–8 (2009)

  4. Baek, S., Kim, K.: Recent advances of neural attacks against block ciphers. In: SCIS (2020). https://caislab.kaist.ac.kr/publication/paper_files/2020/scis2020_SG.pdf

  5. Bafghi, A.G., Safabakhsh, R., Sadeghiyan, B.: Finding the differential characteristics of block ciphers with neural networks. Inf. Sci. 178(15), 3118–3132 (2008)

  6. Baksi, A., Breier, J., Chen, Y., Dong, X.: Machine learning assisted differential distinguishers for lightweight ciphers. In: DATE, pp. 176–181 (2021)

  7. Bao, Z., Guo, J., Liu, M., Ma, L., Yi, T.: Conditional differential-neural cryptanalysis. IACR Cryptology ePrint Archive 2021:719 (2021)

  8. Benamira, A., Gerault, D., Peyrin, T., Tan, Q.Q.: A deeper look at machine learning-based cryptanalysis. In: Canteaut, A., Standaert, F.-X. (eds.) EUROCRYPT 2021. LNCS, vol. 12696, pp. 805–835. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-77870-5_28

  9. Bogdanov, A., et al.: PRESENT: an ultra-lightweight block cipher. In: Paillier, P., Verbauwhede, I. (eds.) CHES 2007. LNCS, vol. 4727, pp. 450–466. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-74735-2_31

  10. Chen, Y., Yu, H.: Neural aided statistical attack for cryptanalysis. IACR Cryptology ePrint Archive 2020:1620 (2020)

  11. Chen, Y., Yu, H.: A new neural distinguisher model considering derived features from multiple ciphertext pairs. IACR Cryptology ePrint Archive 2021:310 (2021)

  12. Chen, Y., Yu, H.: Bridging machine learning and cryptanalysis via EDLCT. IACR Cryptology ePrint Archive 2021:705 (2021)

  13. Chen, Y., Yu, H.: Improved neural aided statistical attack for cryptanalysis. IACR Cryptology ePrint Archive 2021:311 (2021)

  14. Danziger, M., Amaral Henriques, M.A.: Improved cryptanalysis combining differential and artificial neural network schemes. In: ITS, pp. 1–5 (2014)

  15. Focardi, R., Luccio, F.L.: Neural cryptanalysis of classical ciphers. In: ICTCS, pp. 104–115 (2018)

  16. Mishra, G., Krishna Murthy, S.V.S.S.N.V.G., Pal, S.K.: Neural network based analysis of lightweight block cipher PRESENT. In: Yadav, N., Yadav, A., Bansal, J.C., Deep, K., Kim, J.H. (eds.) Harmony Search and Nature Inspired Optimization Algorithms. AISC, vol. 741, pp. 969–978. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-0761-4_91

  17. Gohr, A.: Improving attacks on round-reduced Speck32/64 using deep learning. In: Boldyreva, A., Micciancio, D. (eds.) CRYPTO 2019. LNCS, vol. 11693, pp. 150–179. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26951-7_6

  18. Gomez, A.N., Huang, S., Zhang, I., Li, B.M., Osama, M., Kaiser, L.: Unsupervised cipher cracking using discrete GANs. CoRR abs/1801.04883 (2018)

  19. Greydanus, S.: Learning the Enigma with recurrent neural networks. CoRR abs/1708.07576 (2017)

  20. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)

  21. Hou, B., Li, Y., Zhao, H., Wu, B.: Linear attack on round-reduced DES using deep learning. In: Chen, L., Li, N., Liang, K., Schneider, S. (eds.) ESORICS 2020. LNCS, vol. 12309, pp. 131–145. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59013-0_7

  22. Hou, Z., Ren, J., Chen, S.: Cryptanalysis of round-reduced SIMON32 based on deep learning. IACR Cryptology ePrint Archive 2021:362 (2021)

  23. Hou, Z., Ren, J., Chen, S.: Improve neural distinguisher for cryptanalysis. IACR Cryptology ePrint Archive 2021:1017 (2021)

  24. Hu, X., Zhao, Y.: Research on plaintext restoration of AES based on neural network. Secur. Commun. Netw. 2018, 6868506:1–6868506:9 (2018)

  25. Idris, M.F., Teh, J.S., Yan, J.L.S., Yeoh, W.-Z.: A deep learning approach for active S-box prediction of lightweight generalized Feistel block ciphers. IEEE Access 9, 104205–104216 (2021)

  26. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: ICLR (2015)

  27. Leander, G.: Small scale variants of the block cipher PRESENT. IACR Cryptology ePrint Archive 2010:143 (2010)

  28. Lee, T., Teh, J.S., Liew, J., Yan, S., Jamil, N., Yeoh, W.-Z.: A machine learning approach to predicting block cipher security. In: CRYPTOLOGY (2020)

  29. Lee, T.R., Teh, J.S., Jamil, N., Yan, J.L.S., Chen, J.: Lightweight block cipher security evaluation based on machine learning classifiers and active S-boxes. IEEE Access 9, 134052–134064 (2021)

  30. Liu, Y., Chen, J., Deng, L.: Unsupervised sequence classification using sequential output statistics. In: NIPS, pp. 3550–3559 (2017)

  31. Mantin, I., Shamir, A.: A practical attack on broadcast RC4. In: Matsui, M. (ed.) FSE 2001. LNCS, vol. 2355, pp. 152–164. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45473-X_13

  32. Shibutani, K., Isobe, T., Hiwatari, H., Mitsuda, A., Akishita, T., Shirai, T.: Piccolo: an ultra-lightweight blockcipher. In: Preneel, B., Takagi, T. (eds.) CHES 2011. LNCS, vol. 6917, pp. 342–357. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23951-9_23

  33. So, J.: Deep learning-based cryptanalysis of lightweight block ciphers. Secur. Commun. Netw. 2020, 3701067 (2020)

  34. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: NIPS, pp. 3104–3112 (2014)

  35. Tan, C., Ji, Q.: An approach to identifying cryptographic algorithm from ciphertext. In: ICCSN, pp. 19–23 (2016)

  36. Tieleman, T., Hinton, G.: Lecture 6.5-RMSprop: divide the gradient by a running average of its recent magnitude. COURSERA: Neural Netw. Mach. Learn. 4(2), 26–31 (2012)

  37. Wang, G., Wang, G.: Improved differential-ML distinguisher: machine learning based generic extension for differential analysis. In: Gao, D., Li, Q., Guan, X., Liao, X. (eds.) ICICS 2021. LNCS, vol. 12919, pp. 21–38. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-88052-1_2

  38. Xiao, Y., Hao, Q., Yao, D.D.: Neural cryptanalysis: metrics, methodology, and applications in CPS ciphers. In: IEEE DSC, pp. 1–8 (2019)

  39. Yadav, T., Kumar, M.: Differential-ML distinguisher: machine learning based generic extension for differential cryptanalysis. In: Longa, P., Ràfols, C. (eds.) LATINCRYPT 2021. LNCS, vol. 12912, pp. 191–212. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-88238-9_10


Acknowledgments

This work was supported in part by the JSPS KAKENHI Grant Number 19K11971.


Appendices

A Our Target Ciphers

In this section, we introduce two SPN block ciphers (PRESENT [9] and an AES-like cipher), one Feistel block cipher (a TWINE-like cipher), and their toy versions (small PRESENT-[n] [27], small AES-[n], and small TWINE-[n]).

PRESENT and Small PRESENT-[n]: PRESENT [9] is a lightweight SPN block cipher with a 64-bit block size, 31 rounds, and a key size of either 80 or 128 bits. To analyze PRESENT, a toy model of PRESENT called small PRESENT-[n] [27] has been proposed. We show the round function of small PRESENT-[n] in Fig. 1. Since the block size is 4n bits, small PRESENT-[16] is equivalent to the original PRESENT. The parameter n, which determines the block size and round-key length, allows us to control the round at which full diffusion occurs. The S-box is a 4-bit-to-4-bit mapping \(\mathbb {F}_2^4 \rightarrow \mathbb {F}_2^4\); its correspondence table is given in Table 7. The pLayer is described as the bit permutation P(i), defined as follows; it generalizes the pLayer of PRESENT and is equivalent to it when \(n=16\). P(i) is used for encryption and \(P^{-1}(i)\) is used for decryption.

$$P(i)= {\left\{ \begin{array}{ll} n \times i \bmod (4n-1) & (0 \le i < 4n-1)\\ 4n-1 & (i=4n-1) \end{array}\right. } \qquad P^{-1}(i)= {\left\{ \begin{array}{ll} 4 \times i \bmod (4n-1) & (0 \le i < 4n-1)\\ 4n-1 & (i=4n-1) \end{array}\right. }$$
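As a minimal sketch (our own illustration, not the authors' code), the pLayer can be implemented directly from this definition; the convention that bit i of the input moves to bit position P(i) of the output is an assumption:

```python
# Illustrative sketch: the small PRESENT-[n] pLayer as a bit permutation
# over a 4n-bit state. Bit i of the input is moved to position P(i) of
# the output (assumed convention); inverse=True applies P^{-1}.
def p_layer(state: int, n: int, inverse: bool = False) -> int:
    width = 4 * n
    mult = 4 if inverse else n          # P uses factor n, P^{-1} uses 4
    out = 0
    for i in range(width):
        dst = width - 1 if i == width - 1 else (mult * i) % (width - 1)
        out |= ((state >> i) & 1) << dst
    return out

# Round-trip check for the toy parameter n = 4 (16-bit block):
s = 0xBEEF
assert p_layer(p_layer(s, 4), 4, inverse=True) == s
```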

For key scheduling, the key-scheduling algorithm of PRESENT-80 (the variant of PRESENT with an 80-bit key) is executed, and the 4n rightmost bits of the key register are used as the round key \(rk_{i}\); a sketch of this procedure is given below.
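The following is a minimal sketch under the assumption that the standard PRESENT-80 key-schedule update (rotate left by 61, S-box on the leftmost nibble, round-counter XOR) is used verbatim; the number of rounds is left as a parameter:

```python
# Illustrative sketch (assuming the standard PRESENT-80 key schedule).
# The toy variant extracts the 4n rightmost bits of the 80-bit key
# register as round key rk_i before each register update.
S = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def round_keys(key: int, rounds: int, n: int) -> list[int]:
    mask80 = (1 << 80) - 1
    k = key & mask80
    rks = []
    for r in range(1, rounds + 1):
        rks.append(k & ((1 << (4 * n)) - 1))            # 4n rightmost bits
        k = ((k << 61) | (k >> 19)) & mask80            # rotate left by 61
        k = (S[k >> 76] << 76) | (k & ((1 << 76) - 1))  # S-box on top nibble
        k ^= r << 15                                    # XOR round counter
    return rks
```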

AES-Like and Small AES-[n]: We design an AES-like cipher with a 64-bit block size, called AES-like for short. To analyze AES-like, we design its toy model, small AES-[n]. The round function of small AES-[n] is shown in Fig. 1. As in the case of PRESENT, small AES-[16] is equivalent to AES-like since the block size is 4n bits. The S-box and key scheduling are the same as those of PRESENT. The maximum distance separable (MDS) matrix, over \(GF(2^4)\) defined by the irreducible polynomial \(x^4+x+1\), is the same as that of Piccolo [32] and is expressed as follows.

$$M = \begin{pmatrix} 2 & 3 & 1 & 1 \\ 1 & 2 & 3 & 1 \\ 1 & 1 & 2 & 3 \\ 3 & 1 & 1 & 2 \end{pmatrix}$$

When a 16-bit input \(X_{(16)}\) is given, the output is computed as \({}^t(y_{0(4)}, y_{1(4)}, y_{2(4)}, y_{3(4)}) \leftarrow M \cdot {}^t(x_{0(4)}, x_{1(4)}, x_{2(4)}, x_{3(4)})\).
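As a minimal sketch (our own illustration, not the authors' code), multiplication by M over \(GF(2^4)\) with the stated irreducible polynomial can be implemented as follows:

```python
# Illustrative sketch: multiplication by the Piccolo MDS matrix over
# GF(2^4) with the irreducible polynomial x^4 + x + 1 (0b10011).
def gf16_mul(a: int, b: int) -> int:
    """Multiply two elements of GF(2^4) modulo x^4 + x + 1."""
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:               # reduce when the degree reaches 4
            a ^= 0b10011
    return r

M = [[2, 3, 1, 1],
     [1, 2, 3, 1],
     [1, 1, 2, 3],
     [3, 1, 1, 2]]

def mds_mul(x: list[int]) -> list[int]:
    """Compute y = M * x over GF(2^4); x and y are lists of 4 nibbles."""
    return [gf16_mul(M[i][0], x[0]) ^ gf16_mul(M[i][1], x[1])
            ^ gf16_mul(M[i][2], x[2]) ^ gf16_mul(M[i][3], x[3])
            for i in range(4)]
```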

Fig. 1. (a) Round functions of small PRESENT-[n] and small AES-[n]; (b) last round function of small AES-[n].

Table 7. S-box (PRESENT and small PRESENT-[n])
Fig. 2. Round function of small TWINE-[n].

TWINE-Like and Small TWINE-[n]: We design a TWINE-like cipher with a 64-bit block size, called TWINE-like for short. To analyze TWINE-like, we design its toy model, small TWINE-[n]. For our design, we adopt the type-II generalized Feistel structure with n branches and an F function similar to that of TWINE, which comprises a round-key operation and a 4-bit S-box, as shown in Fig. 2.

As in the case of PRESENT, small TWINE-[16] is equivalent to TWINE-like since the block size is 4n bits. The S-box and key scheduling are the same as those of PRESENT. The pLayer is described as the round permutation RP, which is defined as follows:

$$RP: (y_0, y_1, \dots , y_{n-2}, y_{n-1}) \leftarrow (x_1, x_2, \dots , x_{n-1}, x_0).$$
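In code, RP amounts to a left rotation of the list of n branches (our own one-line sketch):

```python
# Illustrative sketch: RP as a left rotation of the n branches,
# i.e., (y_0, ..., y_{n-1}) <- (x_1, ..., x_{n-1}, x_0).
def round_permutation(x: list[int]) -> list[int]:
    return x[1:] + x[:1]

assert round_permutation([0, 1, 2, 3]) == [1, 2, 3, 0]
```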

In each round, n/2 sub-round keys \(rk^{s}_{i}\), for \(s \in \{0, 1, \dots , \frac{n}{2}-1\}\), are used; they are generated from the round key \(rk_{i}\) as follows:

$$rk^{s}_i = \left( rk_i \gg (4n - (4s + 4)) \right) \,\&\, \textsf{0xF},$$

where \(\gg \) and \( \& \) denote the bitwise right-shift and bitwise AND operations, respectively.
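The following minimal sketch (our own illustration) extracts these nibbles according to the shift-and-mask formula above:

```python
# Illustrative sketch: extract the n/2 sub-round keys rk_i^s from a
# 4n-bit round key rk_i, taking nibbles from left to right.
def sub_round_keys(rk: int, n: int) -> list[int]:
    return [(rk >> (4 * n - 4 * (s + 1))) & 0xF for s in range(n // 2)]

# Example with n = 4 (16-bit round key):
assert sub_round_keys(0xABCD, 4) == [0xA, 0xB]
```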

B Related Works

Table 8. Comparison of deep learning-based cryptanalysis. OP:=Output Prediction, PR:=Plaintext Recovery, KR:=Key Recovery, DD:=Differential Distinguisher, LD:=Linear Distinguisher, and DLD:=Differential-Linear Distinguisher.

Regarding whitebox analysis, Danziger et al. presented deep learning-based attacks that predict key bits of 2-round DES from a plaintext/ciphertext set, and analyzed the relationship between these attacks and the differential probability [14]. They compared variants employing several types of S-boxes with different properties against differential attacks, and concluded that there is a nontrivial relationship between the differential characteristics and the success probability of their deep learning-based attacks. However, their results are extremely limited because they targeted a two-round Feistel construction, which is quite insecure even if the component is an ideal function; it therefore remains unclear how much the properties of the internal components affect the security of the whole construction. In addition, to improve Gohr's deep learning-based attack [17], Benamira et al. [8] and Chen et al. [12] improved the success probability of traditional distinguishers using characteristics to which Gohr's attack is expected to react. Their work confirms whether the characteristics explored by Gohr can be employed in traditional distinguishing attacks, and they did not identify any deep-learning-specific characteristic. In contrast, we calculated the capabilities of the traditional distinguisher and of our deep learning-based attack and compared them to investigate the relationship between them; we thereby identified a deep-learning-specific characteristic of small PRESENT. To summarize, to the best of our knowledge, ours is the first result to perform this kind of whitebox analysis.

Alani and Hu reported plaintext recovery attacks on DES, 3-DES, and AES [2, 24] that guess plaintexts from given ciphertexts. They claimed that attacks on DES, 3-DES, and AES are feasible with \(2^{11}\), \(2^{11}\), and 1741 (\(\simeq 2^{10.76}\)) plaintext/ciphertext pairs, respectively. However, Xiao et al. [38] doubted the correctness of these results [2, 24] because they could not be reproduced; Baek et al. also pointed this out [4]. Therefore, we exclude these results from Table 8. Mishra et al. reported that they mounted output prediction attacks on full-round PRESENT; however, the attacks did not work well [16]. In addition, several results target classical ciphers such as the Caesar, Vigenère, and Enigma ciphers [15, 18, 19, 30].

Other machine learning-based analyses have also been reported, e.g., [28, 29]. Tan et al. demonstrated that deep learning can be used to distinguish ciphertexts encrypted by AES, Blowfish, DES, 3-DES, and RC5 [35], with the aim of detecting the encryption algorithm that malware utilizes. Alshammari et al. attempted to classify encrypted Skype and SSH traffic [3].

C Experimental Results Using the CNN

To confirm that the use of the LSTM yields better experimental results than that of the CNN, we conducted experiments using the CNN following the same procedure described in Sect. 3.1. In these experiments, we optimize activation functions in addition to the hyperparameters shown in Table 1; the search range for activation functions is Tanh, Sigmoid, and ReLU. When developing CNN models with Keras, e.g., model.add(Conv1D(...)), we specify only filters, kernel_size, activation, and input_shape as its arguments (Footnote 5).
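As a minimal sketch (with placeholder hyperparameter values and an assumed input encoding, not the authors' tuned configuration), a CNN model of this shape can be built as follows:

```python
# Illustrative sketch: a Keras CNN where only filters, kernel_size,
# activation, and input_shape are set for the Conv1D layer, as described
# above. All concrete values below are placeholders (assumptions).
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu',
                 input_shape=(16, 1)))      # 16-bit block, one bit per step
model.add(Flatten())
model.add(Dense(16, activation='sigmoid'))  # predict 16 output bits
model.compile(optimizer='adam', loss='binary_crossentropy')
```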

Table 9 shows the experimental results using the CNN. By comparing the results obtained with the LSTM and the CNN (Tables 3 and 9), we clarify the following facts:

  • For small PRESENT-[4], the maximum number of rounds for which the proposed attacks using the LSTM and CNN succeed is 4 and 3, respectively.

  • For small AES-[4], the maximum number of rounds for which the proposed attacks using the LSTM and CNN succeed is 1 in each case. In addition, the average success probabilities of ciphertext prediction (plaintext recovery) against the 1-round small AES-[4] using the LSTM and CNN are 1 (1) and \(2^{-11.88}\) (\(2^{-11.83}\)), respectively.

  • For small TWINE-[4], the maximum number of rounds for which the proposed attacks using the LSTM and CNN succeed is 3 and 1, respectively.

To summarize the foregoing facts, we conclude that the use of the LSTM yields better experimental results than the use of the CNN for all the target block ciphers.

Table 9. Average success probabilities of ciphertext prediction/plaintext recovery using the proposed attacks against the three toy block ciphers with a block size of 16 bits. We employ the CNN and use \(2^{15}\) training data and the remaining \(2^{15}\) testing data. CP:=Ciphertext Prediction and PR:=Plaintext Recovery.

D Maximum Differential Probabilities of small PRESENT-[4], small AES-[4], and small TWINE-[4]

Table 10 shows the maximum differential probabilities of small PRESENT-[4], small AES-[4], and small TWINE-[4].

Table 10. Maximum differential probabilities of small PRESENT-[4], small AES-[4], and small TWINE-[4]

E More Detailed Results in Sect. 3.1

Table 11. Minimum amount of training data required for successful ciphertext prediction/plaintext recovery using the proposed attacks against three toy block ciphers with a block size of 16 bits. We use the optimized hyperparameters obtained in Experiment 1 (see Table 3). CP:=Ciphertext Prediction and PR:=Plaintext Recovery.

Table 11 shows the minimum amount of training data required for successful ciphertext prediction/plaintext recovery against the three toy block ciphers, and Table 12 details the experimental results summarized in Table 11. If the predicted probability is greater than \(2^{-15}\), we consider the proposed attacks to be successful (Footnote 6).

Table 12. Average success probabilities of ciphertext prediction/plaintext recovery using the proposed attacks against the three toy block ciphers with a block size of 16 bits. We vary the amount of training data in the range from \(2^{2}\) to \(2^{14}\) and use all remaining plaintexts or ciphertexts as testing data. Moreover, we use the optimized hyperparameters obtained in Experiment 1 (see Table 3). CP:=Ciphertext Prediction and PR:=Plaintext Recovery.

F More Detailed Results in Sect. 3.2

Tables 13 and 14 show the minimum amount of training data required for successful ciphertext prediction/plaintext recovery against three block ciphers with block sizes of 32 and 64 bits, respectively. In addition, Tables 15 and 16 detail the experimental results shown in Tables 13 and 14, respectively.

We vary the amount of training data in the range from \(2^{8}\) to \(2^{17}\) (block size of 32 bits) or from \(2^{10}\) to \(2^{19}\) (block size of 64 bits) and use \(2^{16}\) of the remaining plaintexts or ciphertexts as testing data; thus, if the predicted probability is greater than the threshold \({(2^{4n} - 2^{x})}^{-1}\) derived in Sect. 2.3, we consider the proposed attacks to be successful in both cases. Here, the thresholds range from \((2^{32}-2^{8})^{-1}\) to \((2^{32}-2^{17})^{-1}\) and from \((2^{64}-2^{10})^{-1}\) to \((2^{64}-2^{19})^{-1}\), respectively.
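For concreteness, a minimal sketch of this threshold computation (our own illustration of the stated formula):

```python
# Illustrative sketch: the success threshold (2^{4n} - 2^{x})^{-1},
# i.e., one over the number of texts not seen during training, where
# 4n is the block size in bits and 2^x is the amount of training data.
def success_threshold(block_bits: int, log2_training_data: int) -> float:
    return 1.0 / (2 ** block_bits - 2 ** log2_training_data)

# 32-bit block with 2^17 training pairs: ~2.33e-10, slightly above 2^-32.
print(success_threshold(32, 17))
```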

Table 13. Minimum amount of training data required for successful ciphertext prediction/plaintext recovery using the proposed attacks against three block ciphers with a block size of 32 bits. We use the optimized hyperparameters obtained in Experiment 1 (see Table 3). CP:=Ciphertext Prediction and PR:=Plaintext Recovery.
Table 14. Minimum amount of training data required for successful ciphertext prediction/plaintext recovery using the proposed attacks against three block ciphers with a block size of 64 bits. We use the optimized hyperparameters obtained in Experiment 1 (see Table 3). CP:=Ciphertext Prediction and PR:=Plaintext Recovery.
Table 15. Average success probabilities of ciphertext prediction/plaintext recovery using the proposed attacks against three block ciphers with a block size of 32 bits. We vary the amount of training data in the range from \(2^{8}\) to \(2^{17}\) and use \(2^{16}\) of the remaining plaintexts or ciphertexts as testing data. Moreover, we use the optimized hyperparameters obtained in Experiment 1 (see Table 3). CP:=Ciphertext Prediction and PR:=Plaintext Recovery.
Table 16. Average success probabilities of ciphertext prediction/plaintext recovery using the proposed attacks against three block ciphers with a block size of 64 bits. We vary the amount of training data in the range from \(2^{10}\) to \(2^{19}\) and use \(2^{16}\) of the remaining plaintexts or ciphertexts as testing data. Moreover, we use the optimized hyperparameters obtained in Experiment 1 (see Table 3). CP:=Ciphertext Prediction and PR:=Plaintext Recovery.

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG