Abstract
The Internet of Things (IoT) is both boon and bane. It offers great potential for new business models and ecosystems, but it also raises major security and privacy concerns. Because many IoT systems collect, process, and store personal data, secure and privacy-preserving identity management is of utmost importance. Yet the strong resource limitations of IoT devices render resource-hungry public-key cryptography infeasible. Additionally, the IoT security model requires solutions to work under memory-leakage attacks. Existing constructions address either privacy or lightweight operation, but not both. Our work contributes towards bridging this gap by combining physically unclonable functions (PUFs) and channel-based key agreement (CBKA): (i) we show a flaw in a PUF-based authentication protocol when outsider chosen perturbation security cannot be guaranteed; (ii) we present a solution to this flaw by introducing CBKA with an improved definition; and (iii) we propose a provably secure and lightweight authentication protocol by combining PUFs and CBKA.
References
NXP strengthens SmartMX2 security chips with PUF anti-cloning technology. https://www.intrinsic-id.com/nxp-strengthens-smartmx2-security-chips-with-puf-anti-cloning-technology/. Accessed 23 Aug 2016
Ambekar, A., Hassan, M., Schotten, H.D.: Improving channel reciprocity for effective key management systems. In: 2012 International Symposium on Signals, Systems, and Electronics (ISSSE), pp. 1–4. IEEE (2012)
Armknecht, F., Moriyama, D., Sadeghi, A.-R., Yung, M.: Towards a unified security model for physically unclonable functions. In: Sako, K. (ed.) CT-RSA 2016. LNCS, vol. 9610, pp. 271–287. Springer, Heidelberg (2016). doi:10.1007/978-3-319-29485-8_16
Atzori, L., Iera, A., Morabito, G.: The internet of things: a survey. Comput. Netw. 54(15), 2787–2805 (2010)
Aysu, A., Ghalaty, N.F., Franklin, Z., Yali, M.P., Schaumont, P.: Digital fingerprints for low-cost platforms using MEMS sensors. In: Proceedings of the Workshop on Embedded Systems Security, p. 2. ACM (2013)
Aysu, A., Gulcan, E., Moriyama, D., Schaumont, P., Yung, M.: End-to-end design of a PUF-based privacy preserving authentication protocol. In: Güneysu, T., Handschuh, H. (eds.) CHES 2015. LNCS, vol. 9293, pp. 556–576. Springer, Heidelberg (2015). doi:10.1007/978-3-662-48324-4_28
Biglieri, E., Calderbank, R., Constantinides, A., Goldsmith, A., Paulraj, A., Poor, H.V.: MIMO Wireless Communications. Cambridge University Press, New York (2007)
Boyen, X.: Reusable cryptographic fuzzy extractors. In: Proceedings of the 11th ACM Conference on Computer and Communications Security, pp. 82–91. ACM (2004)
Delvaux, J., Peeters, R., Gu, D., Verbauwhede, I.: A survey on lightweight entity authentication with strong PUFs. ACM Comput. Surv. 48(2), 26:1–26:42 (2015)
Dodis, Y., Katz, J., Reyzin, L., Smith, A.: Robust fuzzy extractors and authenticated key agreement from close secrets. In: Dwork, C. (ed.) CRYPTO 2006. LNCS, vol. 4117, pp. 232–250. Springer, Heidelberg (2006). doi:10.1007/11818175_14
Dodis, Y., Reyzin, L., Smith, A.: Fuzzy extractors: how to generate strong keys from biometrics and other noisy data. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 523–540. Springer, Heidelberg (2004). doi:10.1007/978-3-540-24676-3_31
Edman, M., Kiayias, A., Yener, B.: On passive inference attacks against physical-layer key extraction. In: Proceedings of the Fourth European Workshop on System Security, EUROSEC 2011, New York, NY, USA, pp. 8:1–8:6. ACM (2011)
Evans, D.: The internet of things: how the next evolution of the internet is changing everything. CISCO white paper, vol. 1, pp. 1–11 (2011)
Gassend, B., Clarke, D.E., van Dijk, M., Devadas, S.: Silicon physical random functions. In: Atluri, V. (ed.) Proceedings of the 9th ACM Conference on Computer and Communications Security, CCS 2002, Washington, DC, USA, 18–22 November 2002, pp. 148–160. ACM (2002)
Guajardo, J., Kumar, S.S., Schrijen, G.-J., Tuyls, P.: FPGA intrinsic PUFs and their use for IP protection. In: Paillier, P., Verbauwhede, I. (eds.) CHES 2007. LNCS, vol. 4727, pp. 63–80. Springer, Heidelberg (2007). doi:10.1007/978-3-540-74735-2_5
Guillaume, R., Ludwig, S., Müller, A., Czylwik, A.: Secret key generation from static channels with untrusted relays. In: 2015 IEEE 11th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), pp. 635–642 (2015)
Helfmeier, C., Nedospasov, D., Tarnovsky, C., Krissler, J.S., Boit, C., Seifert, J.-P.: Breaking and entering through the silicon. In: Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security, pp. 733–744. ACM (2013)
Herder, C., Yu, M.-D., Koushanfar, F., Devadas, S.: Physical unclonable functions and applications: a tutorial. Proc. IEEE 102(8), 1126–1141 (2014)
Huth, C., Guillaume, R., Strohm, T., Duplys, P., Samuel, I.A., Güneysu, T.: Information reconciliation schemes in physical-layer security: a survey. Comput. Netw. 109, 84–104 (2016)
Huth, C., Zibuschka, J., Duplys, P., Güneysu, T.: Securing systems on the Internet of things via physical properties of devices and communications. In: Proceedings of 2015 IEEE International Systems Conference (SysCon 2015), pp. 8–13, April 2015
Jakes, W.C., Cox, D.C. (eds.): Microwave Mobile Communications. Wiley-IEEE Press, New York (1994)
Jana, S., Premnath, S.N., Clark, M., Kasera, S.K., Patwari, N., Krishnamurthy, S.V.: On the effectiveness of secret key extraction from wireless signal strength in real environments. In: Proceedings of the 15th Annual International Conference on Mobile Computing and Networking, pp. 321–332. ACM (2009)
Juels, A., Weis, S.A.: Defining strong privacy for RFID. ACM Trans. Inf. Syst. Secur. (TISSEC) 13(1), 7 (2009)
Mathur, S., Trappe, W., Mandayam, N., Ye, C., Reznik, A.: Radio-telepathy: extracting a secret key from an unauthenticated wireless channel. In: Proceedings of the 14th ACM International Conference on Mobile Computing and Networking, pp. 128–139. ACM (2008)
Maurer, U.: Secret key agreement by public discussion from common information. IEEE Trans. Inf. Theor. 39(3), 733–742 (1993)
Maurer, U., Wolf, S.: Information-theoretic key agreement: from weak to strong secrecy for free. In: Preneel, B. (ed.) EUROCRYPT 2000. LNCS, vol. 1807, pp. 351–368. Springer, Heidelberg (2000). doi:10.1007/3-540-45539-6_24
Medaglia, C.M., Serbanati, A.: An overview of privacy and security issues in the internet of things. In: Giusto, D., Iera, A., Morabito, G., Atzori, L. (eds.) The Internet of Things, pp. 389–395 (2010)
Mirzadeh, S., Cruickshank, H., Tafazolli, R.: Secure device pairing: a survey. IEEE Commun. Surv. Tutorials 16(1), 17–40 (2014)
Moriyama, D., Matsuo, S., Yung, M.: PUF-based RFID authentication secure and private under memory leakage. Cryptology ePrint Archive, Report 2013/712 (2013). http://eprint.iacr.org/2013/712
Nisan, N., Zuckerman, D.: Randomness is linear in space. J. Comput. Syst. Sci. 52(1), 43–52 (1996)
Pappu, S.R.: Physical one-way functions. Ph.D. thesis. Massachusetts Institute of Technology (2001)
Schaller, A., Škorić, B., Katzenbeisser, S.: Eliminating leakage in reverse fuzzy extractors. Cryptology ePrint Archive, Report 2014/741 (2014)
Tope, M.A., McEachen, J.C.: Unconditionally secure communications over fading channels. In: Military Communications Conference, MILCOM 2001. Communications for Network-Centric Operations: Creating the Information Force, vol. 1, pp. 54–58. IEEE (2001)
Van Herrewege, A., Katzenbeisser, S., Maes, R., Peeters, R., Sadeghi, A.-R., Verbauwhede, I., Wachsmann, C.: Reverse fuzzy extractors: enabling lightweight mutual authentication for PUF-enabled RFIDs. In: Keromytis, A.D. (ed.) FC 2012. LNCS, vol. 7397, pp. 374–389. Springer, Heidelberg (2012). doi:10.1007/978-3-642-32946-3_27
Wild, A., Güneysu, T.: Enabling SRAM-PUFs on xilinx FPGAs. In: 2014 24th International Conference on Field Programmable Logic and Applications (FPL), pp. 1–4. IEEE (2014)
Willers, O., Huth, C., Guajardo, J., Seidel, H.: MEMS-based gyroscopes as physical unclonable functions. Cryptology ePrint Archive, Report 2016/261 (2016). http://eprint.iacr.org/2016/261
Zenger, C.T., Pietersz, M., Zimmer, J., Posielek, J.-F., Lenze, T., Paar, C.: Authenticated key establishment for low-resource devices exploiting correlated random channels. Comput. Netw. 109, 105–123 (2016)
Zenger, C.T., Zimmer, J., Pietersz, M., Posielek, J.-F., Paar, C.: Exploiting the physical environment for securing the internet of things. In: Proceedings of the 2015 New Security Paradigms Workshop, pp. 44–58. ACM (2015)
Appendices
A Security Proof
We use the proofs of Moriyama et al. [29] and Aysu et al. [6] as a basis for our proof. The proof of Theorem 1 is as follows.
Proof
The adversary \(\mathcal {A}\) wants the verifier \(\mathcal {V}\) or the prover \(\mathcal {P}\) to accept a session whose communication has been altered by the adversary. We concentrate only on the former case, as verifier authentication is quite similar to prover authentication. We consider the following game transformations. Let \(S_i\) denote the probability that the adversary wins Game i.
- Game 0. This is the original game between the challenger and the adversary.
- Game 1. The challenger randomly guesses the device \(dev^*\) with PUF \(f_{i^*}(\cdot )\), where \(i^* \xleftarrow {\textstyle \mathsf {{\scriptstyle U}}} \{1 \le i \le n\}\). If the adversary cannot impersonate \(dev^*\) to the verifier, the challenger aborts the game.
- Game 2. Assume that \(\ell \) is the upper bound on the number of sessions that the adversary can establish in the game. For \(1 \le j \le \ell \), we evaluate or change the variables related to the session between the verifier and \(dev^*\) up to the \(\ell \)-th session as follows.
  - Game 2-j-1. The challenger evaluates the output of the channel measurement and quantization steps of the CBKA algorithm implemented in \(dev^*\) at the j-th session. If the output does not have enough min-entropy \(m_Q\) or the requirements on channel observations are violated, the challenger aborts the game.
  - Game 2-j-2. The output of the information reconciliation procedure (\(r_\mathcal {P}\)) is changed to a random variable.
  - Game 2-j-3. The output of the privacy amplification procedure (sk) is changed to a random variable.
  - Game 2-j-4. The challenger evaluates the output of the PUF implemented in \(dev^*\) at the j-th session. If the output does not have enough min-entropy m or the requirements on intra-distance and inter-distance are violated, the challenger aborts the game.
  - Game 2-j-5. The output of the fuzzy extractor (\(r_1\)) is changed to a random variable.
  - Game 2-j-6. The output of the PRF \(\mathcal {G}(r_1, \cdot )\) is derived from a truly random function in this game.
  - Game 2-j-7. We change the PRF \(\mathcal {G}(r_{old}, \cdot )\) to a truly random function.
  - Game 2-j-8. We change the XORed output \(u_1 := s_2 \oplus z_2\) to a randomly chosen \(u_1 \xleftarrow {\textstyle \mathsf {{\scriptstyle U}}} \{0,1\}^k\).
  - Game 2-j-9. The output of the PRF \(\mathcal {G'}(s_3, \cdot )\) is derived from a truly random function in this game.
If the common source of randomness provides enough min-entropy, the CBKA algorithm can output strings statistically close to uniform. Likewise, if the PUF equipped on the device generates enough min-entropy, the fuzzy extractor can output strings statistically close to uniform. We can then use these strings as seeds for the PRF, so that the verifier and the prover share a common secret. Thus we can construct a challenge-response authentication protocol with secure key update.
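For intuition, the extract-then-PRF pattern described above can be sketched in a few lines of Python. Salted SHA-256 stands in for a strong randomness extractor and HMAC-SHA256 for the PRF \(\mathcal {G}\); both are illustrative stand-ins rather than the constructions assumed in Theorem 1, and all variable names are hypothetical.

```python
import hashlib
import hmac

def extract_seed(noisy_source: bytes, salt: bytes) -> bytes:
    # Stand-in for a strong randomness extractor: condenses a
    # high-min-entropy but non-uniform string into a short seed.
    return hashlib.sha256(salt + noisy_source).digest()

def prf(seed: bytes, message: bytes) -> bytes:
    # Stand-in for the PRF G(seed, .): with a uniform seed, its
    # outputs are computationally indistinguishable from random.
    return hmac.new(seed, message, hashlib.sha256).digest()

# Both parties hold the same high-entropy string (from CBKA, or from
# the PUF response after the fuzzy extractor) and a public salt ...
shared_noisy = b"high-min-entropy string agreed via CBKA or PUF"
salt = b"public salt"
seed_verifier = extract_seed(shared_noisy, salt)
seed_prover = extract_seed(shared_noisy, salt)

# ... so both derive the same PRF seed and can run the
# challenge-response phase of the protocol.
challenge = b"m1 || m2"
response = prf(seed_prover, challenge)
assert response == prf(seed_verifier, challenge)
```

Extracting before keying the PRF matters because PRF security assumes a uniform seed, whereas raw CBKA or PUF output is only guaranteed to have high min-entropy.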
Lemma 1
\(S_0 \le n \cdot S_1\) (where n is the number of devices, i.e. provers).
Proof
If the adversary wins the game, there is at least one session that the verifier or prover accepts while the communication was modified by the adversary. Since the challenger guesses the device uniformly at random, the probability that the device involved in this session is guessed correctly is at least \(1{\slash }n\).
Lemma 2
\(|S_1 - S_{2-1-1}| \le \epsilon \) and \(|S_{2-(j-1)-9} - S_{2-j-1}| \le \epsilon \) for any \(2 \le j \le \ell \) if the CBKA algorithm is secure as required in Theorem 1.
Proof
Here, the output of the channel measurement and quantization steps of the CBKA algorithm has enough min-entropy and is independent of the other outputs except with negligible probability \(\epsilon \). If so, there is no difference between these games. The property of CBKA assumed here says that even if the input to channel measurement and quantization, i.e. the authentic common randomness, is published, the output derived from it retains sufficient min-entropy, and therefore the outputs are pairwise uncorrelated. Hence, the response to a reveal query issued by the adversary is random-looking under this assumption.
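To illustrate the quantization step this lemma reasons about, here is a minimal Python sketch under the hypothetical assumption that both parties observe the same fading realization plus small independent measurement noise; real CBKA schemes use more robust quantizers (e.g. guard intervals) than this 1-bit threshold.

```python
import random

def quantize(samples, threshold=0.0):
    # 1-bit quantization of channel measurements (e.g. RSSI):
    # samples above the threshold map to 1, others to 0.
    return [1 if s > threshold else 0 for s in samples]

random.seed(0)
# Reciprocal channel: one shared fading value per sample, observed by
# each party with small independent measurement noise.
channel = [random.gauss(0.0, 1.0) for _ in range(256)]
alice_obs = [c + random.gauss(0.0, 0.1) for c in channel]
bob_obs = [c + random.gauss(0.0, 0.1) for c in channel]

bits_alice = quantize(alice_obs)
bits_bob = quantize(bob_obs)
# The few disagreeing bits are exactly what the subsequent
# information reconciliation step removes.
mismatches = sum(a != b for a, b in zip(bits_alice, bits_bob))
```

Only samples close to the threshold flip under noise, which is why the disagreement rate stays small and why min-entropy depends on the channel actually varying.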
Lemma 3
\(|S_{2-j-1} - S_{2-j-2}| \le \epsilon \) for any \(2 \le j \le \ell \) if the \(\mathsf {CBKA.IR}\) is an information reconciliation in a \((m_Q, m_{IR},t,n,\ell ,\epsilon )\)-channel-based key agreement.
Proof
Since we assumed that the output from the quantization step of the CBKA algorithm always has enough min-entropy, the output of the information reconciliation procedure of the CBKA algorithm also has enough min-entropy and is independent of the other outputs except with negligible probability \(\epsilon \). This follows from the security property of information reconciliation.
Lemma 4
\(|S_{2-j-2} - S_{2-j-3}| \le \epsilon \) for any \(2 \le j \le \ell \) if the \(\mathsf {CBKA.PA}\) is a privacy amplification in a \((m_Q, m_{IR},t,n,\ell ,\epsilon )\)-channel-based key agreement.
Proof
Since we assumed that the output from the information reconciliation procedure of the CBKA algorithm always has enough min-entropy, no adversary can distinguish these games due to the randomization property of privacy amplification: privacy amplification guarantees that its output is statistically close to random. This is given by the security property of privacy amplification.
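The randomization property invoked here can be made concrete with the classic instantiation of privacy amplification, a 2-universal hash: multiply the reconciled bit string by a random binary matrix over GF(2). The sizes below are hypothetical toy parameters; by the leftover hash lemma the output is statistically close to uniform when the input min-entropy sufficiently exceeds the output length.

```python
import random

def universal_hash(bits, matrix):
    # Multiply the input bit-vector by a random binary matrix over
    # GF(2); the family {x -> Mx} is 2-universal.
    return [sum(m & b for m, b in zip(row, bits)) % 2 for row in matrix]

random.seed(1)
n_in, n_out = 64, 16  # compress 64 reconciled bits into a 16-bit key

matrix = [[random.randrange(2) for _ in range(n_in)] for _ in range(n_out)]
reconciled = [random.randrange(2) for _ in range(n_in)]  # toy input
key = universal_hash(reconciled, matrix)
```

The matrix can be public: 2-universality, not its secrecy, is what guarantees the near-uniform output.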
Lemma 5
\(|S_{2-j-3} - S_{2-j-4}| \le \epsilon \) for any \(2 \le j \le \ell \) if f is a secure PUF as required in Theorem 1.
Proof
Here, the PUF’s output has enough min-entropy and is independent of the other outputs except with negligible probability \(\epsilon \). If so, there is no difference between these games. The property of the PUF assumed here says that even if the input to the PUF is published, the output derived from it retains sufficient min-entropy, and therefore the outputs are pairwise uncorrelated. Hence, the response to a reveal query issued by the adversary is random-looking under this assumption.
Lemma 6
\(|S_{2-j-4} - S_{2-j-5}| \le \epsilon \) for any \(2 \le j \le \ell \) if the \(\mathsf {FE}\) is a (\(m, \ell , t, \epsilon \))-fuzzy extractor.
Proof
Since we assumed that the output from the PUF always has enough min-entropy, no adversary can distinguish these games due to the randomization property of the fuzzy extractor: the fuzzy extractor guarantees that its output is statistically close to random.
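For intuition, the code-offset construction gives a minimal fuzzy extractor: the helper data masks a random word's codeword with the PUF response, and decoding removes the noise. The sketch below uses a 5-fold repetition code and hashes the recovered word into a key; deployed designs use stronger codes (e.g. BCH) and a proper extractor, so treat this purely as an illustration with hypothetical parameters.

```python
import hashlib
import secrets

REP = 5  # repetition factor: majority vote corrects up to 2 flips per block

def encode(bits):
    return [b for b in bits for _ in range(REP)]

def decode(bits):
    # Majority vote over each block of REP bits.
    return [int(sum(bits[i:i + REP]) > REP // 2)
            for i in range(0, len(bits), REP)]

def gen(puf_response):
    # Gen: pick a random word, publish helper data hd = response XOR
    # codeword, and derive the key from the random word.
    word = [secrets.randbelow(2) for _ in range(len(puf_response) // REP)]
    hd = [p ^ c for p, c in zip(puf_response, encode(word))]
    return hashlib.sha256(bytes(word)).digest(), hd

def rep(noisy_response, hd):
    # Rep: unmask with hd, decode away the PUF noise, re-derive the key.
    word = decode([p ^ h for p, h in zip(noisy_response, hd)])
    return hashlib.sha256(bytes(word)).digest()

response = [secrets.randbelow(2) for _ in range(40)]
key, hd = gen(response)
noisy = list(response)
noisy[3] ^= 1
noisy[17] ^= 1  # two bit flips, each in a different block
assert rep(noisy, hd) == key  # noise within the correction radius is removed
```

The helper data hd is public; the security argument of the lemma is precisely that publishing it still leaves enough min-entropy in the extracted key.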
Lemma 7
\(\forall 1 \le j \le \ell \), \(|S_{2-j-5} - S_{2-j-6}| \le \mathsf {Adv}^{\mathsf {PRF}}_{\mathcal {G,B}}(k)\), where \(\mathsf {Adv}^{\mathsf {PRF}}_{\mathcal {G,B}}(k)\) is the advantage of \(\mathcal {B}\) in breaking the security of the PRF \(\mathcal {G}\).
Proof
If there is a difference between these games, we construct an algorithm \(\mathcal {B}\) which breaks the security of the PRF \(\mathcal {G}\). \(\mathcal {B}\) can access either the real PRF \(\mathcal {G}(r_1, \cdot )\) or a truly random function \(\mathsf {RF}\). \(\mathcal {B}\) sets up all secret keys and simulates our protocol except for the j-th session. When the adversary invokes the j-th session, \(\mathcal {B}\) sends \(m_1 \xleftarrow {\textstyle \mathsf {{\scriptstyle U}}} \{0,1\}^k\) as the output of the verifier. When \(\mathcal {A}\) sends \(m^*_1\) to a device \(dev_i\), \(\mathcal {B}\) selects \(m_2\) and issues \(m^*_1 || m_2\) to the oracle instead of performing the normal computation of \(\mathcal {G}\). Upon receiving (\(s_1, \dots , s_4\)), \(\mathcal {B}\) continues the computation according to the protocol specification and outputs (\(c, m_2, s_1, u_1, v_1\)) as the prover’s response. When the adversary sends (\(m^*_2, s^*_1, u^*_1, v^*_1\)), \(\mathcal {B}\) issues \(m_1 || m^*_2\) to the oracle and obtains (\(s'_1, \dots , s'_6\)).
If \(\mathcal {B}\) accesses the real PRF, this simulation is equivalent to Game 2-j-5. Otherwise, the oracle query issued by \(\mathcal {B}\) is completely random and this distribution is equivalent to Game 2-j-6. Thus we have \(|S_{2-j-5} - S_{2-j-6}| \le \mathsf {Adv}^{\mathsf {PRF}}_{\mathcal {G,B}}(k)\).
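The reduction in this proof can be mirrored by a small real-or-random oracle; HMAC-SHA256 stands in for \(\mathcal {G}\) and a lazily sampled table for the truly random function \(\mathsf {RF}\). This is only a sketch of the distinguishing game, not the protocol's actual PRF.

```python
import hashlib
import hmac
import secrets

class RealOrRandomOracle:
    """Oracle for the PRF security game: answers with the real PRF
    (HMAC as a stand-in) or with a lazily sampled random function."""

    def __init__(self, real: bool):
        self.real = real
        self.key = secrets.token_bytes(32)  # hidden PRF seed
        self.table = {}                     # lazy truly random function

    def query(self, msg: bytes) -> bytes:
        if self.real:
            return hmac.new(self.key, msg, hashlib.sha256).digest()
        if msg not in self.table:
            self.table[msg] = secrets.token_bytes(32)
        return self.table[msg]

# B answers the j-th session's PRF calls via this oracle: if the oracle
# is real, the simulation equals Game 2-j-5; if random, Game 2-j-6, so
# any gap between the games yields a distinguisher against G.
oracle = RealOrRandomOracle(real=True)
assert oracle.query(b"m1 || m2") == oracle.query(b"m1 || m2")
```

Note that both branches answer repeated queries consistently, which is what makes the two simulations perfect up to the real/random switch.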
Lemma 8
\(\forall 1 \le j \le \ell \), \(|S_{2-j-6} - S_{2-j-7}| \le \mathsf {Adv}^{\mathsf {PRF}}_{\mathcal {G,B}}(k)\).
Proof
The proof is analogous to that of Lemma 7.
Lemma 9
\(\forall 1 \le j \le \ell \), \(S_{2-j-7} = S_{2-j-8}\).
Proof
Since the PRF \(\mathcal {G}(r_1, \cdot )\) has already been changed to a truly random function in Game 2-j-7, \(s_2\) effectively acts as a one-time pad encrypting \(z_2\). Therefore, this transformation is a purely conceptual change and the output distributions of these games are information-theoretically equivalent.
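The one-time-pad step is elementary and can be checked directly; the byte strings below are hypothetical placeholders for \(s_2\) and \(z_2\).

```python
import secrets

k = 16
s2 = secrets.token_bytes(k)   # output of the truly random function
z2 = b"sixteen byte msg"      # any fixed k-byte plaintext
u1 = bytes(a ^ b for a, b in zip(s2, z2))

# Because s2 is uniform and used only once, u1 is itself uniformly
# distributed and reveals nothing about z2; replacing u1 by a fresh
# random string is thus a purely conceptual change.
assert bytes(a ^ b for a, b in zip(u1, s2)) == z2
```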
Lemma 10
\(\forall 1 \le j \le \ell \), \(|S_{2-j-8} - S_{2-j-9}| \le 2 \cdot \mathsf {Adv}^{\mathsf {PRF}}_{\mathcal {G',B'}}(k)\).
Proof
From the previous games, the seed input to the PRF \(\mathcal {G'}\) has already been changed to a random variable. Consider an algorithm \(\mathcal {B'}\) which interacts with the PRF \(\mathcal {G'}(s_3, \cdot )\) or a random function \(\mathsf {RF}\). As in the proof of Lemma 7, \(\mathcal {B'}\) simulates the protocol as the challenger up to the j-th session. \(\mathcal {B'}\) generates (\(c, u_1\)) and issues \(c || u_1\) to the oracle. \(\mathcal {B'}\) generates the other variables as in the previous game and sends \((c, m_2, s_1, u_1, v_1)\) as the prover’s output after obtaining \(v_1\) from the oracle. If the verifier receives \((c^*, m^*_2, s^*_1, u^*_1, v^*_1)\), \(\mathcal {B'}\) checks whether \((c^*, m^*_2, s_1^*) = (c, m_2, s_1)\). If so, \(\mathcal {B'}\) issues \(c^* || m^*_2 || u^*_1\) to the oracle to check whether its response is identical to \(v^*_1\).
If \(\mathcal {B'}\) accesses the real PRF, this simulation is equivalent to Game 2-j-8. Otherwise, \(\mathcal {B'}\)’s simulation is identical to Game 2-j-9. Thus the difference between these games is bounded by the security of the PRF \(\mathcal {G'}\).
Since each of the above game transformations is bounded by the respective assumption on the PUF, the fuzzy extractor, and the PRFs, we can transform Game 0 into Game 2-\(\ell \)-9. In Game 2-\(\ell \)-9, the adversary has no advantage in impersonating the prover. Consider the case that the verifier accepts a session that was not actually derived from the prover. Assume that the adversary obtains \((c, m_2, s_1, u_1, v_1)\) from the prover. To mount a man-in-the-middle attack, the adversary must modify at least one of these variables.
Even when the adversary issues a reveal query and obtains \(y_1\) before the session, he cannot predict the response \(z_1\). Since sk is generated after the reveal query can be issued, the session key remains secret and so hd remains encrypted. When the adversary modifies \(m_2\), the probability that he wins the security game is negligible, since \(s_1\) is derived from a truly random function. If \(m_2\) is not changed, the verifier accepts only \(s_1\), since it is deterministically defined by \(m_1\) (chosen by the verifier) and \(m_2\). The first verification is passed only when the adversary reuses \((c, m_2, s_1)\), but \(v_1\) is also derived from another random function. Thus the adversary cannot guess it, and any modified message is rejected except with negligible probability. The same argument applies to verifier authentication, because the prover checks the verifier with the outputs from \(\mathcal {G}\) and \(\mathcal {G'}\). Therefore, no adversary can mount a man-in-the-middle attack on our protocol, and we finally have
\(\mathsf {Adv}(\mathcal {A}) \le n \cdot \ell \cdot \left(5\epsilon + 2 \cdot \mathsf {Adv}^{\mathsf {PRF}}_{\mathcal {G,B}}(k) + 2 \cdot \mathsf {Adv}^{\mathsf {PRF}}_{\mathcal {G',B'}}(k)\right)\)
if the PUF and the fuzzy extractor hold their properties.
B Privacy Proof
Again, we use the proofs of Moriyama et al. [29] and Aysu et al. [6] as a basis for our proof. The proof of Theorem 2 is as follows.
Proof
The proof we provide here is similar to that of Theorem 1. However, we remark that it is important that our protocol first satisfies the security of Theorem 1 in order for privacy to hold. The reason is that if security is broken and a malicious adversary successfully impersonates device \(dev^*_0\), the verifier will update the secret key to one that is no longer derivable by the prover. The verifier then no longer accepts this prover after the attack, and the adversary easily distinguishes the prover in the privacy game. Even if the adversary honestly relays the communication messages between \(\mathcal {I}(dev^*_0)\) and the verifier in the challenge phase, the authentication result is always 0 and the adversary learns which prover was selected as the challenge prover.
We modify Game 1 such that the challenger guesses the two provers which will be chosen by the adversary in the privacy game. This guess is correct with probability at least \(1 / n^2\), and we can then continue the game transformation. After that, the game transformation described in Game 2 is applied to the sessions related to \(dev^*_0\) and \(dev^*_1\). Then the communication messages \((c,m_2, s_1, u_1, v_1)\) and \((s'_4)\) are changed to random variables. Even if the adversary can obtain the secret key of the prover within the privacy game, the input to the PUF and the helper data used in the challenge phase are independent of the choices in the other phases. The re-synchronization allows this separation, and the new values are always random. Therefore, there is no information with which the adversary can distinguish the challenge prover in the privacy game, and we get:
for some algorithms \((\mathcal {A'}, \mathcal {B}, \mathcal {B'})\) derived from the games.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Huth, C., Aysu, A., Guajardo, J., Duplys, P., Güneysu, T. (2017). Secure and Private, yet Lightweight, Authentication for the IoT via PUF and CBKA. In: Hong, S., Park, J. (eds) Information Security and Cryptology – ICISC 2016. ICISC 2016. Lecture Notes in Computer Science(), vol 10157. Springer, Cham. https://doi.org/10.1007/978-3-319-53177-9_2
Print ISBN: 978-3-319-53176-2
Online ISBN: 978-3-319-53177-9