Abstract
In this work, we investigate the problem of statistical data analysis while preserving user privacy in the distributed and semi-honest setting. In particular, we study properties of Private Stream Aggregation (PSA) schemes, first introduced by Shi et al. in 2011. A PSA scheme is a secure multiparty protocol for the aggregation of time-series data in a distributed network with minimal communication cost. We show that in the non-adaptive query model, secure PSA schemes can be built upon any key-homomorphic weak pseudo-random function (PRF), and we provide a tighter security reduction; in contrast to the aforementioned work, our security definition can therefore be achieved in the standard model. In addition, we give two computationally efficient instantiations of this theoretical result. The security of the first instantiation comes from a key-homomorphic weak PRF based on the Decisional Diffie-Hellman problem, and the security of the second one comes from a weak PRF based on the Decisional Learning with Errors problem. Moreover, due to its use of discrete Gaussian noise, the second construction inherently provides a mechanism that preserves \((\epsilon ,\delta )\)-differential privacy in the final data aggregate. A notable feature of the constructed protocol is that the same noise serves both security and differential privacy. As a result, we obtain an efficient prospective post-quantum PSA scheme for differentially private data analysis in the distributed model.
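The DDH-based instantiation mentioned in the abstract can be illustrated with a minimal toy sketch: each user masks its value with a weak-PRF evaluation F_s(t) = t^s, and the keys are chosen to sum to zero so that the masks cancel in the aggregate. The group size, generator, and hash-to-group map below are illustrative stand-ins only, not the parameters or exact construction of the paper.

```python
import hashlib
import random

# Toy parameters only -- a Mersenne prime modulus and a small generator; the
# real construction uses a properly chosen DDH-hard group.
p = 2**127 - 1
g = 3

def hash_to_group(label: str) -> int:
    """Map a time-step label to a group element (a stand-in for the random
    weak-PRF input shared by all parties in each round)."""
    h = int.from_bytes(hashlib.sha256(label.encode()).digest(), "big")
    return pow(g, h % (p - 1), p)

def keygen(n: int):
    """User keys s_1..s_n plus aggregator key s_0 with s_0 + sum(s_i) = 0."""
    user_keys = [random.randrange(p - 1) for _ in range(n)]
    s0 = -sum(user_keys) % (p - 1)
    return s0, user_keys

def encrypt(s_i: int, x_i: int, label: str) -> int:
    """c_i = F_{s_i}(t) * g^{x_i} mod p, with F_s(t) = t^s key-homomorphic."""
    t = hash_to_group(label)
    return pow(t, s_i, p) * pow(g, x_i, p) % p

def aggregate(s0: int, ciphertexts, label: str, max_sum: int) -> int:
    """Since the keys sum to zero, the masks cancel and the product equals
    g^{sum x_i}; the small sum is recovered by brute-force discrete log."""
    v = pow(hash_to_group(label), s0, p)
    for c in ciphertexts:
        v = v * c % p
    acc = 1
    for k in range(max_sum + 1):
        if acc == v:
            return k
        acc = acc * g % p
    raise ValueError("aggregate exceeds max_sum")

s0, keys = keygen(3)
cts = [encrypt(s, x, "round-1") for s, x in zip(keys, [5, 7, 9])]
print(aggregate(s0, cts, "round-1", max_sum=100))  # 21
```

The aggregator learns only the sum (21 here), never an individual value, and each user sends a single group element per round, matching the minimal-communication property stressed in the abstract.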
The research was supported by the DFG Research Training Group GRK 1817/1.
Notes
- 1.
These mechanisms work in the centralised setting, where a trusted curator sees the full database in the clear and perturbs it properly.
- 2.
If u is random, we consider l to be chosen only after the choice of u in the execution of game \(\varvec{3}_l\), for all l.
- 3.
One can assume that corrupted users will not add any noise to their data in order to help the analyst to compromise the privacy of the remaining users. For simplicity, we ignore this issue here.
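One standard way to account for corrupted users that add no noise is distributed noise generation: every user contributes a small noise share, calibrated so that the honest users' shares alone reach the required variance. A minimal sketch in the spirit of the cited Dwork et al. (EUROCRYPT 2006) paper follows; the user count, honest fraction, and target variance are purely illustrative assumptions, not values from this paper.

```python
import random

def user_noise_share(m: int) -> int:
    """One user's noise share: the sum of m unbiased +/-1 coin flips.
    Summed over the honest users, the shares approximate a centred
    binomial (hence near-Gaussian) noise of the required variance."""
    return sum(random.choice((-1, 1)) for _ in range(m))

# n users, of which a fraction gamma is assumed honest; m is calibrated so
# that the honest users' shares alone reach the target variance, since
# corrupted users may contribute no noise at all. All numbers are illustrative.
n, gamma = 100, 0.5
target_variance = 200.0                      # hypothetical, set by (eps, delta)
m = int(target_variance / (gamma * n)) + 1   # coins per user (variance m each)

honest = int(gamma * n)
total_noise = sum(user_noise_share(m) for _ in range(honest))
```

Each share has variance m, so the honest users jointly contribute variance gamma * n * m >= target_variance even if every corrupted user adds nothing.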
- 4.
Due to the use of a cryptographic protocol, the plaintexts have to be discrete. This is the reason why we use discrete distributions for generating noise.
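Because the plaintexts are discrete, the noise must be drawn from a discrete distribution. One common choice for integer data is the two-sided (symmetric) geometric distribution, the discrete analogue of the Laplace distribution used in the geometric mechanism of Ghosh et al. (cited below); the paper itself uses discrete Gaussian noise for \((\epsilon ,\delta )\)-differential privacy, so the sampler here is an illustrative stand-in achieving plain \(\epsilon \)-differential privacy.

```python
import math
import random

def symmetric_geometric(alpha: float) -> int:
    """Sample Z with Pr[Z = k] proportional to alpha**abs(k), 0 <= alpha < 1:
    the two-sided geometric (discrete Laplace) distribution."""
    # Return 0 with probability (1 - alpha) / (1 + alpha).
    if random.random() < (1.0 - alpha) / (1.0 + alpha):
        return 0
    # Otherwise draw a geometric magnitude k >= 1 and a uniform sign.
    k = 1
    while random.random() < alpha:
        k += 1
    return k if random.random() < 0.5 else -k

# With alpha = exp(-eps / Delta), x -> x + Z is eps-differentially private
# for integer queries of sensitivity Delta (eps and Delta are illustrative).
eps, sensitivity = 1.0, 1
alpha = math.exp(-eps / sensitivity)
noisy_count = 42 + symmetric_geometric(alpha)
```

The perturbed value stays an integer, so it can be encrypted directly by the aggregation protocol without any rounding step.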
- 5.
For the sake of comparability, we disregard here the condition for reproducibility of \(D(\nu )\).
References
Applebaum, B., Cash, D., Peikert, C., Sahai, A.: Fast cryptographic primitives and circular-secure encryption based on hard learning problems. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 595–618. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03356-8_35
Banerjee, A., Peikert, C., Rosen, A.: Pseudorandom functions and lattices. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 719–737. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29011-4_42
Benhamouda, F., Joye, M., Libert, B.: A new framework for privacy-preserving aggregation of time-series data. ACM Trans. Inf. Syst. Secur. 18(3), 21 (2016)
Blum, A., Ligett, K., Roth, A.: A learning theory approach to non-interactive database privacy. In: Proceedings of STOC 2008, pp. 609–618 (2008)
Boneh, D., Freeman, D.M.: Linearly homomorphic signatures over binary fields and new tools for lattice-based signatures. In: Catalano, D., Fazio, N., Gennaro, R., Nicolosi, A. (eds.) PKC 2011. LNCS, vol. 6571, pp. 1–16. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-19379-8_1
Cramer, R., Shoup, V.: Universal hash proofs and a paradigm for adaptive chosen ciphertext secure public-key encryption. In: Knudsen, L.R. (ed.) EUROCRYPT 2002. LNCS, vol. 2332, pp. 45–64. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-46035-7_4
Dwork, C.: Differential privacy: a survey of results. In: Agrawal, M., Du, D., Duan, Z., Li, A. (eds.) TAMC 2008. LNCS, vol. 4978, pp. 1–19. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-79228-4_1
Dwork, C., Kenthapadi, K., McSherry, F., Mironov, I., Naor, M.: Our data, ourselves: privacy via distributed noise generation. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 486–503. Springer, Heidelberg (2006). https://doi.org/10.1007/11761679_29
Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 265–284. Springer, Heidelberg (2006). https://doi.org/10.1007/11681878_14
Ghosh, A., Roughgarden, T., Sundararajan, M.: Universally utility-maximizing privacy mechanisms. In: Proceedings of STOC 2009, pp. 351–360 (2009)
Goldreich, O., Goldwasser, S., Micali, S.: How to construct random functions. J. ACM 33(4), 792–807 (1986)
Joye, M., Libert, B.: A scalable scheme for privacy-preserving aggregation of time-series data. In: Sadeghi, A.-R. (ed.) FC 2013. LNCS, vol. 7859, pp. 111–125. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39884-1_10
Li, N., Li, T., Venkatasubramanian, S.: t-closeness: privacy beyond k-anonymity and l-diversity. In: Proceedings of ICDE 2007, pp. 106–115 (2007)
Lindell, Y., Pinkas, B.: Secure multiparty computation for privacy-preserving data mining. J. Priv. Confidentiality 1(1), 5 (2009)
Machanavajjhala, A., Kifer, D., Gehrke, J., Venkitasubramaniam, M.: l-diversity: privacy beyond k-anonymity. ACM Trans. Knowl. Discov. Data 1(1) (2007)
McSherry, F., Talwar, K.: Mechanism design via differential privacy. In: Proceedings of FOCS 2007, pp. 94–103 (2007)
Micciancio, D., Mol, P.: Pseudorandom knapsacks and the sample complexity of LWE search-to-decision reductions. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 465–484. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22792-9_26
Mironov, I., Pandey, O., Reingold, O., Vadhan, S.: Computational differential privacy. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 126–142. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03356-8_8
Naor, M., Pinkas, B., Reingold, O.: Distributed pseudo-random functions and KDCs. In: Stern, J. (ed.) EUROCRYPT 1999. LNCS, vol. 1592, pp. 327–346. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48910-X_23
Naor, M., Reingold, O.: Synthesizers and their application to the parallel construction of pseudo-random functions. In: Proceedings of FOCS 1995, pp. 170–181 (1995)
Peikert, C.: Public-key cryptosystems from the worst-case shortest vector problem: extended abstract. In: Proceedings of STOC 2009, pp. 333–342 (2009)
Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. In: Proceedings of STOC 2005, pp. 84–93 (2005)
Samarati, P., Sweeney, L.: Generalizing data to provide anonymity when disclosing information (abstract). In: Proceedings of PODS 1998, p. 188 (1998)
Shi, E., Hubert Chan, T.-H., Rieffel, E.G., Chow, R., Song, D.: Privacy-preserving aggregation of time-series data. In: Proceedings of NDSS 2011 (2011)
Thiruvenkatachar, V.R., Nanjundiah, T.S.: Inequalities concerning Bessel functions and orthogonal polynomials. Proc. Indian Nat. Acad. Part A 33, 373–384 (1951)
© 2019 Springer Nature Switzerland AG
Cite this paper
Valovich, F. (2019). Aggregation of Time-Series Data Under Differential Privacy. In: Lange, T., Dunkelman, O. (eds) Progress in Cryptology – LATINCRYPT 2017. LATINCRYPT 2017. Lecture Notes in Computer Science(), vol 11368. Springer, Cham. https://doi.org/10.1007/978-3-030-25283-0_14
Print ISBN: 978-3-030-25282-3
Online ISBN: 978-3-030-25283-0