Abstract
In this paper, we design a differentially private algorithm for maximizing a general submodular set function over a down-monotone family of subsets, which includes typical and important constraints such as matroid and knapsack constraints. The technique is inspired by the measured continuous greedy (MCG) algorithm, which compensates for the difference between the residual increase of elements at a given point and the gradient there by distorting the original direction with a multiplicative factor; this makes the continuous greedy approach directly applicable to maximizing non-monotone submodular functions. We generalize the MCG algorithm to the framework of differential privacy, which is widely accepted as a robust mathematical guarantee and protects sensitive personal data. We propose a 1/e-approximation algorithm for general submodular functions. Moreover, for monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, which is always at least as good as the previously best known approximation ratio of \(1-1/e\).
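As a rough illustration of the measured continuous greedy update described above, the sketch below runs one such fractional ascent on a toy coverage function under a cardinality constraint. The ground set, coverage function, step size, and sample count are all hypothetical choices for illustration; the paper's algorithm additionally randomizes the direction choice for privacy, which is omitted here.

```python
import random

# Toy ground set and a coverage-style submodular function (hypothetical example).
GROUND = [0, 1, 2, 3]
COVERS = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}, 3: {"a", "d"}}

def f(S):
    """Coverage function: number of items covered by the chosen sets."""
    covered = set()
    for e in S:
        covered |= COVERS[e]
    return len(covered)

def multilinear_F(y, samples=500):
    """Monte-Carlo estimate of the multilinear extension F(y) = E[f(R(y))],
    where R(y) contains each element e independently with probability y[e]."""
    rng = random.Random(0)  # fixed seed keeps the estimate deterministic
    total = 0
    for _ in range(samples):
        R = [e for e in GROUND if rng.random() < y[e]]
        total += f(R)
    return total / samples

def mcg_step(y, delta, k):
    """One measured continuous greedy step under a cardinality constraint |S| <= k.
    The residual weight of e is w_e = F(y with y_e set to 1) - F(y); the update
    y <- y + delta * I * (1 - y) applies the 'measured' factor (1 - y)."""
    w = {}
    for e in GROUND:
        y_up = dict(y)
        y_up[e] = 1.0
        w[e] = multilinear_F(y_up) - multilinear_F(y)
    # Linear maximization over the cardinality polytope: take the k largest weights.
    I = {e: 0.0 for e in GROUND}
    for e in sorted(GROUND, key=lambda e: w[e], reverse=True)[:k]:
        I[e] = 1.0
    return {e: y[e] + delta * I[e] * (1.0 - y[e]) for e in GROUND}

y = {e: 0.0 for e in GROUND}
for _ in range(10):  # total time T = 1 with step size delta = 0.1
    y = mcg_step(y, 0.1, k=2)
```

Because of the \((1-y)\) factor, every coordinate stays strictly below 1, which is what keeps the fractional point useful for non-monotone objectives.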
References
Abadi, M.: Deep learning with differential privacy. In: Proceedings of the 23rd ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318 (2016)
Alon, N., Spencer, J.H.: The Probabilistic Method, vol. 3, pp. 307–314. Wiley, New York (2004)
Bu, Z.Q., Gopi, S., Kulkarni, J., Lee, Y.T., Shen, J.H., Tantipongpipat, U.: Fast and memory efficient differentially private SGD via JL projections. arXiv: 2102.03013 (2021)
Buchbinder, N., Feldman, M.: Constrained submodular maximization via a non-symmetric technique. Math. Oper. Res. 44(3), 988–1005 (2019)
Buchbinder, N., Feldman, M., Naor, J.S., Schwartz, R.: Submodular maximization with cardinality constraints. In: Proceedings of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1433–1452 (2014)
Călinescu, G., Chekuri, C., Pál, M., Vondrák, J.: Maximizing a monotone submodular function subject to a matroid constraint. SIAM J. Comput. 40(6), 1740–1766 (2011)
Chekuri, C., Jayram, T.S., Vondrák, J.: On multiplicative weight updates for concave and submodular function maximization. In: Proceedings of the 6th Innovations in Theoretical Computer Science, pp. 201–210 (2015)
Chekuri, C., Vondrák, J., Zenklusen, R.: Dependent randomized rounding for matroid polytopes and applications. In: Proceedings of the 51st Annual Symposium on Foundations of Computer Science, pp. 575–584 (2010)
Chekuri, C., Vondrák, J., Zenklusen, R.: Multi-budgeted matchings and matroid intersection via dependent rounding. In: Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1080–1097 (2011)
Chekuri, C., Vondrák, J., Zenklusen, R.: Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM J. Comput. 43(6), 1831–1879 (2014)
Dwork, C., Kenthapadi, K., McSherry, F., Mironov, I., Naor, M.: Our data, ourselves: privacy via distributed noise generation. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 486–503. Springer, Heidelberg (2006). https://doi.org/10.1007/11761679_29
Ene, A., Nguyen, H.L.: Constrained submodular maximization: beyond \(1/e\). In: Proceedings of the 57th Annual Symposium on Foundations of Computer Science, pp. 248–257 (2016)
Feige, U., Mirrokni, V.S., Vondrák, J.: Maximizing non-monotone submodular functions. SIAM J. Comput. 40(4), 1133–1153 (2011)
Feldman, M., Naor, J., Schwartz, R.: A unified continuous greedy algorithm for submodular maximization. In: Proceedings of the 52nd Annual Symposium on Foundations of Computer Science, pp. 570–579 (2011)
Gharan, S.O., Vondrák, J.: Submodular maximization by simulated annealing. In: Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1098–1116 (2011)
Gupta, A., Ligett, K., McSherry, F., Roth, A., Talwar, K.: Differentially private combinatorial optimization. In: Proceedings of the 21st Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1106–1125 (2010)
Gupta, A., Roth, A., Schoenebeck, G., Talwar, K.: Constrained non-monotone submodular maximization: offline and secretary algorithms. In: Saberi, A. (ed.) WINE 2010. LNCS, vol. 6484, pp. 246–257. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-17572-5_20
Hochbaum, D.S.: An efficient algorithm for image segmentation, Markov random fields and related problems. J. ACM 48(4), 686–701 (2001)
Kempe, D., Kleinberg, J.M., Tardos, E.: Maximizing the spread of influence through a social network. In: Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 137–146 (2003)
Krause, A., Guestrin, C.: Near-optimal nonmyopic value of information in graphical models. In: Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence, pp. 324–331 (2005)
Krause, A., Guestrin, C.: Near-optimal observation selection using submodular functions. In: Proceedings of the 22nd AAAI Conference on Artificial Intelligence, pp. 1650–1654 (2007)
Kulik, A., Shachnai, H., Tamir, T.: Maximizing submodular set functions subject to multiple linear constraints. In: Proceedings of the 20th Annual ACM SIAM Symposium on Discrete Algorithms, pp. 545–554 (2009)
Lin, H., Bilmes, J.A.: A class of submodular functions for document summarization. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 510–520 (2011)
Lee, J., Mirrokni, V.S., Nagarajan, V., Sviridenko, M.: Non-monotone submodular maximization under matroid and knapsack constraints. In: Proceedings of the 41st Annual ACM Symposium on Theory of Computing, pp. 323–332 (2009)
McSherry, F., Talwar, K.: Mechanism design via differential privacy. In: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science, pp. 94–103 (2007)
Mirzasoleiman, B., Badanidiyuru, A., Karbasi, A.: Fast constrained submodular maximization: personalized data summarization. In: Proceedings of the 33rd International Conference on Machine Learning, pp. 1358–1367 (2016)
Mitrovic, M., Bun, M., Krause, A., Karbasi, A.: Differentially private submodular maximization: data summarization in disguise. In: Proceedings of the 34th International Conference on Machine Learning, pp. 2478–2487 (2017)
Nemhauser, G.L., Wolsey, L.A.: Best algorithms for approximating the maximum of a submodular set function. Math. Oper. Res. 3(3), 177–188 (1978)
Nemhauser, G.L., Wolsey, L.A., Fisher, M.L.: An analysis of approximations for maximizing submodular set functions-I. Math. Program. 14(1), 265–294 (1978)
Papadimitriou, C.H., Schapira, M., Singer, Y.: On the hardness of being truthful. In: Proceedings of the 49th Annual Symposium on Foundations of Computer Science, pp. 250–259 (2008)
Rafiey, A., Yoshida, Y.: Fast and private submodular and \(k\)-submodular functions maximization with matroid constraints. In: Proceedings of the 37th International Conference on Machine Learning, pp. 7887–7897 (2020)
Streeter, M.J., Golovin, D.: An online algorithm for maximizing submodular functions. In: Proceedings of the 22nd International Conference on Advances in Neural Information Processing Systems, pp. 1577–1584 (2008)
Sviridenko, M.: A note on maximizing a submodular set function subject to knapsack constraint. Oper. Res. Lett. 32(1), 41–43 (2004)
Vondrák, J.: Optimal approximation for the submodular welfare problem in the value oracle model. In: Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pp. 67–74 (2008)
Vondrák, J.: Symmetry and approximability of submodular maximization problems. SIAM J. Comput. 42(1), 265–304 (2013)
Yoshida, Y.: Cheeger inequalities for submodular transformations. In: Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 2582–2601 (2019)
Acknowledgements
The first author is supported by the Beijing Natural Science Foundation (Project No. Z200002) and the National Natural Science Foundation of China (No. 12131003). The fourth author is supported by the National Natural Science Foundation of China (No. 12001025) and the Science and Technology Program of the Beijing Education Commission (No. KM201810005006).
Appendix: Missing Proofs
Theorem 3.1
Algorithm 1 preserves \(O(\epsilon \cdot d_{\mathcal {P}}^2)\)-differential privacy.
Proof
Let \(D\) and \(D^{\prime }\) be two neighboring datasets and \(f_D\), \(f_{D^{\prime }}\) their associated functions. For a fixed \(\mathbf {y}_t\in C_{\rho }\), we consider the relative probability that Algorithm 1 (denoted by \(M\)) chooses \(\mathbf {y}_t\) at time step \(t\), given the multilinear extensions of \(f_D\) and \(f_{D^{\prime }}\). Let \(M_t(f_D|\mathbf {x}_t)\) denote the output of \(M\) at time step \(t\) given dataset \(D\) and point \(\mathbf {x}_t\); similarly, \(M_t(f_{D^{\prime }}|\mathbf {x}_t)\) denotes the output of \(M\) at time step \(t\) given dataset \(D^{\prime }\) and point \(\mathbf {x}_t\). Further, write \(d_\mathbf {y}=\langle \mathbf {y},\nabla f_D(\mathbf {x}_t)\rangle \) and \(d^{\prime }_\mathbf {y}=\langle \mathbf {y},\nabla f_{D^{\prime }}(\mathbf {x}_t)\rangle \). We have
For the first factor, we have
Note that the last inequality holds since \(\mathbf {y}_t\) is a member of the polytope \(\mathcal {P}\) and by definition we have \(\sum _{e\in \mathcal {X}}a_{i,e}\mathbf {y}_t(e)\le b_i\) and \(d_{\mathcal {P}}=\min _{1\le i\le m}\frac{b_i}{\sum _{e\in \mathcal {X}}a_{i,e}}\). Moreover, recall that \(f_D\) is \(\varDelta \)-sensitive.
For the second factor, write \(\beta _{\mathbf {y}} = d^{\prime }_{\mathbf {y}} - d_{\mathbf {y}}\) for the difference between the scores of direction \(\mathbf {y}\) in the instances \(f_{D^{\prime }}\) and \(f_D\). Then, we have
\(\square \)
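The proof above hinges on two quantities: the exponential-mechanism sampling of a direction with probability proportional to \(\exp \) of its score, and the polytope density \(d_{\mathcal {P}}=\min _{1\le i\le m} b_i/\sum _{e\in \mathcal {X}}a_{i,e}\). The sketch below implements both in generic form; the candidate directions, toy gradient vector, and constraint matrix are hypothetical placeholders, not the paper's actual instance.

```python
import math
import random

def exponential_mechanism(candidates, score, eps, sensitivity):
    """Sample a candidate with probability proportional to
    exp(eps * score(y) / (2 * sensitivity)) -- the exponential mechanism."""
    rng = random.Random(42)  # fixed seed for reproducibility
    weights = [math.exp(eps * score(y) / (2.0 * sensitivity)) for y in candidates]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for y, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return y
    return candidates[-1]

def polytope_density(A, b):
    """Density d_P = min_i b_i / sum_e a_{i,e} of the packing polytope {x : Ax <= b}."""
    return min(bi / sum(row) for row, bi in zip(A, b))

# In the proof's setting, the score of a direction y is <y, grad f_D(x_t)>;
# here grad is a fixed toy vector (hypothetical values).
grad = [0.5, 0.2, 0.9]
candidates = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0)]
score = lambda y: sum(a * b for a, b in zip(y, grad))
choice = exponential_mechanism(candidates, score, eps=1.0, sensitivity=1.0)
```

Directions with larger inner product against the gradient are exponentially more likely to be drawn, which is exactly what bounds the privacy loss ratio in the theorem.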
Lemma 3.1
For every time \(0\le t\le T\), let \(I(t)\) be the vector sampled by Algorithm 1 and let \(I^\prime (t)\) be the solution of the linear program in the measured continuous greedy algorithm. Then,
Proof
where the inequality follows from submodularity.
Since \(\mathbf {1}_{OPT}\in \mathcal {P}\), Algorithm 1 gives
Hence,
Recall that we defined a neighboring feasible region, namely the \(\rho \)-covering of \(\mathcal {P}\). By Theorem 2.2 (the exponential mechanism), we obtain the following:
\(\square \)
Lemma 3.2
For every time \(0\le t < T\),
Proof
where the first and last inequalities follow from Lemma 2.2 and Lemma 3.1, respectively, and the equality holds by the definition of the algorithm. \(\square \)
Lemma 3.3
For every \(0\le t\le T\),
Proof
Let \(cn^3\delta ^2\) denote the big-\(O\) term in Lemma 3.3. We prove by induction on \(t\) that \(g(t)\le F(\mathbf {y}(t)) + cn^3\delta t f(OPT)\). For \(t=0\), \(g(0)=0\le F(\mathbf {y}(0))\). Assume that the claim holds for some \(t\). Then
where the first two inequalities follow from the inductive assumption and Lemma 3.3, and the last one holds since \(\delta \in [0,1]\). \(\square \)
Lemma 3.4
For every time \(0\le t\le T\), \(g(t)\ge h(t)\).
Proof
The proof is by induction on \(t\). For \(t=0\), \(g(0)=0=h(0)\). Assume that the lemma holds for some \(t\). Then we obtain
\(\square \)
Corollary 3.1
\(F(\mathbf {y}(t)) \ge \left[ Te^{-T} - o(1) \right] \cdot f(OPT) - \delta O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) \).
Proof
Recall that \(\delta \le n^{-5}\), hence, \(O(n^3\delta )=o(1)\) and the proof is complete. \(\square \)
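Corollary 3.1's guarantee carries the leading factor \(Te^{-T}\), which connects back to the \(1/e\) ratio claimed in the abstract: the factor is maximized at \(T=1\), where it equals \(1/e\). A quick numerical check of this worked step (the grid resolution is an arbitrary choice):

```python
import math

# Corollary 3.1's leading factor is T * e^(-T); verify numerically that it is
# maximized at T = 1, recovering the 1/e approximation ratio.
def leading_factor(T):
    return T * math.exp(-T)

# Search over a fine grid of T in (0, 2].
best_T = max((t / 1000.0 for t in range(1, 2001)), key=leading_factor)
ratio = leading_factor(best_T)
```

Analytically, \(\frac{d}{dT}\,Te^{-T}=e^{-T}(1-T)=0\) at \(T=1\), confirming the grid search.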
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Sun, X., Li, G., Zhang, Y., Zhang, Z. (2021). Measured Continuous Greedy with Differential Privacy. In: Wu, W., Du, H. (eds) Algorithmic Aspects in Information and Management. AAIM 2021. Lecture Notes in Computer Science(), vol 13153. Springer, Cham. https://doi.org/10.1007/978-3-030-93176-6_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-93175-9
Online ISBN: 978-3-030-93176-6