Measured Continuous Greedy with Differential Privacy

Conference paper. Algorithmic Aspects in Information and Management (AAIM 2021).

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 13153).

Abstract

In this paper, we design a differentially private algorithm for maximizing a general submodular set function over a down-monotone family of subsets, which includes typical and important constraints such as matroid and knapsack constraints. The technique is inspired by the measured continuous greedy (MCG) algorithm, which compensates for the difference between the residual increase of elements at a given point and the gradient at that point by distorting the original direction with a multiplicative factor; this makes the continuous greedy approach directly applicable to maximizing a non-monotone submodular function. We generalize the MCG algorithm to the framework of differential privacy, which is widely accepted as a robust mathematical guarantee and provides protection for sensitive and personal data. We propose a 1/e-approximation algorithm for general submodular functions. Moreover, for monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, which is always at least as good as the previously known best approximation ratio of \(1-1/e\).


Acknowledgements

The first author is supported by Beijing Natural Science Foundation Project No. Z200002 and National Natural Science Foundation of China (No. 12131003). The fourth author is supported by National Natural Science Foundation of China (No. 12001025) and Science and Technology Program of Beijing Education Commission (No. KM201810005006).

Author information

Correspondence to Yapu Zhang.

Appendix: Missing Proofs

Theorem 3.1

Algorithm 1 preserves \(O(\epsilon \cdot d_{\mathcal {P}}^2)\)-differential privacy.

Proof

Let D and \(D^{\prime }\) be two neighboring datasets and \(f_D\), \(f_{D^{\prime }}\) be their associated functions. For a fixed \(\mathbf {y}_t\in C_{\rho }\), we consider the relative probability of Algorithm 1 (denoted by M) choosing \(\mathbf {y}_t\) at time step t given the multilinear extensions of \(f_D\) and \(f_{D^{\prime }}\). Let \(M_t(f_D|\mathbf {x}_t)\) denote the output of M at time step t given dataset D and point \(\mathbf {x}_t\); similarly, \(M_t(f_{D^{\prime }}|\mathbf {x}_t)\) denotes the output of M at time step t given dataset \(D^{\prime }\) and point \(\mathbf {x}_t\). Further, write \(d_\mathbf {y}=\langle \mathbf {y},\nabla f_D(\mathbf {x}_t)\rangle \) and \(d^{\prime }_\mathbf {y}=\langle \mathbf {y},\nabla f_{D^{\prime }}(\mathbf {x}_t)\rangle \). We have

$$\begin{aligned} \frac{\Pr [M_t(f_D|\mathbf {x}_t)=\mathbf {y}_t]}{\Pr [M_t(f_{D^{\prime }}|\mathbf {x}_t)=\mathbf {y}_t]} = \frac{\exp (\epsilon ^{\prime }\cdot d_{\mathbf {y}_t})}{\exp (\epsilon ^{\prime }\cdot d^{\prime }_{\mathbf {y}_t})} \cdot \frac{\sum _{\mathbf {y}\in C_{\rho }}\exp (\epsilon ^{\prime }\cdot d^{\prime }_{\mathbf {y}})}{\sum _{\mathbf {y}\in C_{\rho }}\exp (\epsilon ^{\prime }\cdot d_{\mathbf {y}})}. \end{aligned}$$
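
The ratio above reflects the selection rule of Algorithm 1: at each time step a direction is drawn from \(C_{\rho }\) by the exponential mechanism with score \(d_{\mathbf {y}}\). For concreteness, this rule can be restated as follows (a restatement for the reader, not quoted verbatim from the algorithm):

$$\begin{aligned} \Pr [M_t(f_D|\mathbf {x}_t)=\mathbf {y}_t] = \frac{\exp (\epsilon ^{\prime }\cdot d_{\mathbf {y}_t})}{\sum _{\mathbf {y}\in C_{\rho }}\exp (\epsilon ^{\prime }\cdot d_{\mathbf {y}})}, \end{aligned}$$

and analogously for \(f_{D^{\prime }}\) with \(d^{\prime }_{\mathbf {y}}\) in place of \(d_{\mathbf {y}}\); dividing the two expressions yields exactly the product of the two factors above.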

For the first factor, we have

$$\begin{aligned}&\frac{\exp (\epsilon ^{\prime }\cdot d_{\mathbf {y}_t})}{\exp (\epsilon ^{\prime }\cdot d^{\prime }_{\mathbf {y}_t})} \\= & {} \exp \left( \epsilon ^{\prime }(d_{\mathbf {y}_t} - d^{\prime }_{\mathbf {y}_t}) \right) \\= & {} \exp \left( \epsilon ^{\prime }(\langle \mathbf {y}_t, \nabla f_D(\mathbf {x}_t)-\nabla f_{D^{\prime }}(\mathbf {x}_t)\rangle ) \right) \\\le & {} \exp \left( \epsilon ^{\prime }\Vert \mathbf {y}_t\Vert _1 \Vert \nabla f_D(\mathbf {x}_t)-\nabla f_{D^{\prime }}(\mathbf {x}_t)\Vert _{\infty } \right) \\= & {} \exp \left( \epsilon ^{\prime }\sum _{e\in \mathcal {X}}\mathbf {y}_t(e)\cdot \left( \max _{e\in \mathcal {X}}\mathbb {E}_{R\sim \mathbf {x}_t}\left[ f_D(R\cup \{e\})-f_D(R)-f_{D^{\prime }}(R\cup \{e\})+f_{D^{\prime }}(R) \right] \right) \right) \\\le & {} \exp (O(\epsilon ^{\prime }\cdot md_{\mathcal {P}} \cdot 2\varDelta )) = \exp (O(\epsilon \cdot d_{\mathcal {P}})). \end{aligned}$$

Note that the last inequality holds since \(\mathbf {y}_t\) is a member of the polytope \(\mathcal {P}\): by definition, \(\sum _{e\in \mathcal {X}}a_{i,e}\mathbf {y}_t(e)\le b_i\) for every \(1\le i\le m\) and \(d_{\mathcal {P}}=\min _{1\le i\le m}\frac{b_i}{\sum _{e\in \mathcal {X}}a_{i,e}}\). Moreover, recall that \(f_D\) is \(\varDelta \)-sensitive.
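
As a small illustration of the density parameter (an example added here for intuition, not taken from the proof): for a single cardinality constraint with budget k, i.e., \(m=1\), \(a_{1,e}=1\) for every \(e\in \mathcal {X}\) and \(b_1=k\), the definition gives

$$\begin{aligned} d_{\mathcal {P}} = \frac{b_1}{\sum _{e\in \mathcal {X}}a_{1,e}} = \frac{k}{n}, \end{aligned}$$

where \(n=|\mathcal {X}|\); the density is small whenever the budget is small relative to the size of the ground set.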

For the second factor, let us write \(\beta _{\mathbf {y}} = d^{\prime }_{\mathbf {y}} - d_{\mathbf {y}}\) for the gap between the scores of direction \(\mathbf {y}\) in the instances \(f_{D^{\prime }}\) and \(f_D\). Then, we have

$$\begin{aligned} \frac{\sum _{\mathbf {y}\in C_{\rho }}\exp (\epsilon ^{\prime }\cdot d^{\prime }_{\mathbf {y}})}{\sum _{\mathbf {y}\in C_{\rho }}\exp (\epsilon ^{\prime }\cdot d_{\mathbf {y}})}= & {} \frac{\sum _{\mathbf {y}\in C_{\rho }}\exp (\epsilon ^{\prime }\cdot \beta _{\mathbf {y}})\exp (\epsilon ^{\prime }\cdot d_{\mathbf {y}})}{\sum _{\mathbf {y}\in C_{\rho }}\exp (\epsilon ^{\prime }\cdot d_{\mathbf {y}})} \\= & {} \mathbb {E}_{\mathbf {y}}[\exp (\epsilon ^{\prime }\cdot \beta _{\mathbf {y}})] \le \exp (O(\epsilon ^{\prime }\cdot md_{\mathcal {P}}\cdot 2\varDelta )) \\= & {} \exp \left( O(\epsilon \cdot d_{\mathcal {P}})\right) . \end{aligned}$$

   \(\square \)

Lemma 3.1

For every time \(0\le t\le T\), let I(t) be the vector sampled by Algorithm 1 and \(I^\prime (t)\) be the solution of the linear program in the measured continuous greedy algorithm. Then,

$$\begin{aligned} I(t)w(t)(1-\mathbf {y}(t))\ge & {} F(\mathbf {y}(t) \vee \mathbf {1}_{OPT}) - F(\mathbf {y}(t)) - O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) . \end{aligned}$$

Proof

$$\begin{aligned} w(t)\cdot \mathbf {1}_{OPT}= & {} \sum _{e\in OPT}w_e(t) = \sum _{e\in OPT}[F(\mathbf {y}(t)\vee \mathbf {1}_e) - F(\mathbf {y}(t))] \\= & {} \mathbb {E}\left[ \sum _{e\in OPT} f(R(\mathbf {y}(t)) + e) - f(R(\mathbf {y}(t))) \right] \\\ge & {} \mathbb {E}\left[ f(R(\mathbf {y}(t))\cup OPT) - f(R(\mathbf {y}(t))) \right] = F(\mathbf {y}(t) \vee \mathbf {1}_{OPT}) - F(\mathbf {y}(t)), \end{aligned}$$

where the inequality follows from submodularity (see the telescoping argument below).
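
To spell out the submodularity step (a standard telescoping argument, added for completeness), write \(OPT=\{e_1,\dots ,e_k\}\) with \(k=|OPT|\). For every realization R of \(R(\mathbf {y}(t))\),

$$\begin{aligned} f(R\cup OPT) - f(R) = \sum _{j=1}^{k}\left[ f(R\cup \{e_1,\dots ,e_j\}) - f(R\cup \{e_1,\dots ,e_{j-1}\})\right] \le \sum _{j=1}^{k}\left[ f(R\cup \{e_j\}) - f(R)\right] , \end{aligned}$$

since marginal values only decrease as the base set grows; taking expectations over \(R\sim \mathbf {y}(t)\) gives the displayed inequality.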

Since \(\mathbf {1}_{OPT}\in \mathcal {P}\), the optimality of \(I^\prime (t)\) for the linear program over \(\mathcal {P}\) gives

$$\begin{aligned} I^\prime (t)\cdot w(t) \ge F(\mathbf {y}(t) \vee \mathbf {1}_{OPT}) - F(\mathbf {y}(t)). \end{aligned}$$

Hence,

$$\begin{aligned}&\sum _{e\in \mathcal {X}} I^\prime _e(t)\cdot (1-\mathbf {y}_e(t))\cdot \partial _e F(\mathbf {y}(t)) \\= & {} \sum _{e\in \mathcal {X}} (1-\mathbf {y}_e(t))\cdot I^\prime _e(t)\cdot [F(\mathbf {y}(t)\vee \mathbf {1}_e) - F(\mathbf {y}(t)\wedge \mathbf {1}_{\hat{e}})] \\= & {} \sum _{e\in \mathcal {X}} I^\prime _e(t) \cdot [F(\mathbf {y}(t)\vee \mathbf {1}_e) - F(\mathbf {y}(t))] = I^\prime (t)\cdot w(t) \\\ge & {} F(\mathbf {y}(t)\vee \mathbf {1}_{OPT}) - F(\mathbf {y}(t)). \end{aligned}$$
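
The first two equalities use the multilinearity of F (a standard identity, spelled out here): reading \(\mathbf {y}(t)\wedge \mathbf {1}_{\hat{e}}\) as \(\mathbf {y}(t)\) with coordinate e set to 0, linearity of F in each coordinate gives

$$\begin{aligned} F(\mathbf {y}(t)) = (1-\mathbf {y}_e(t))\cdot F(\mathbf {y}(t)\wedge \mathbf {1}_{\hat{e}}) + \mathbf {y}_e(t)\cdot F(\mathbf {y}(t)\vee \mathbf {1}_e), \end{aligned}$$

and rearranging yields \((1-\mathbf {y}_e(t))\cdot [F(\mathbf {y}(t)\vee \mathbf {1}_e) - F(\mathbf {y}(t)\wedge \mathbf {1}_{\hat{e}})] = F(\mathbf {y}(t)\vee \mathbf {1}_e) - F(\mathbf {y}(t)) = w_e(t)\).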

Recall that we defined a neighboring feasible region, i.e., the \(\rho \)-covering \(C_{\rho }\) of \(\mathcal {P}\). Then, by Theorem 2.2 (the guarantee of the exponential mechanism), we get the following:

$$\begin{aligned} I(t)w(t)(1-\mathbf {y}(t))\ge & {} \sum _{e\in \mathcal {X}}I^\prime _e(t)\cdot (1-\mathbf {y}_e(t))\cdot \partial _e F(\mathbf {y}(t)) - O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) \\\ge & {} F(\mathbf {y}(t) \vee \mathbf {1}_{OPT}) - F(\mathbf {y}(t)) - O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) . \end{aligned}$$

   \(\square \)

Lemma 3.2

For every time \(0\le t < T\),

$$\begin{aligned}&F(\mathbf {y}(t+\delta )) - F(\mathbf {y}(t)) \\\ge & {} \delta \cdot \left[ F(\mathbf {y}(t) \vee \mathbf {1}_{OPT}) - F(\mathbf {y}(t)) - O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) \right] - O(n^3\delta ^2)\cdot f(OPT). \end{aligned}$$

Proof

$$\begin{aligned}&F(\mathbf {y}(t+\delta )) - F(\mathbf {y}(t)) \\\ge & {} (\mathbf {y}(t+\delta )-\mathbf {y}(t))\cdot \partial F(\mathbf {y}(t)) - O(n^3\delta ^2)\cdot f(OPT) \\= & {} \delta \cdot I(t)(1-\mathbf {y}(t))w(t) - O(n^3\delta ^2)\cdot f(OPT) \\\ge & {} \delta \cdot \left[ F(\mathbf {y}(t) \vee \mathbf {1}_{OPT}) - F(\mathbf {y}(t)) - O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) \right] - O(n^3\delta ^2)\cdot f(OPT), \end{aligned}$$

where the first and last inequalities are given by Lemma 2.2 and Lemma 3.1, respectively, and the equality holds by the update rule of the algorithm (restated below).    \(\square \)
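
The update rule referred to above is the coordinate-wise measured continuous greedy step (restated here under the notation of this appendix, assuming its standard form):

$$\begin{aligned} \mathbf {y}_e(t+\delta ) = \mathbf {y}_e(t) + \delta \cdot I_e(t)\cdot (1-\mathbf {y}_e(t)) \quad \text {for every } e\in \mathcal {X}, \end{aligned}$$

so that \((\mathbf {y}(t+\delta )-\mathbf {y}(t))\cdot \partial F(\mathbf {y}(t)) = \delta \sum _{e\in \mathcal {X}} I_e(t)(1-\mathbf {y}_e(t))\cdot \partial _e F(\mathbf {y}(t))\), which matches the middle equality in the chain above.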

Lemma 3.3

For every \(0\le t\le T\),

$$\begin{aligned} g(t)\le F(\mathbf {y}(t)) + O(n^3\delta )\cdot tf(OPT) + \delta O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) . \end{aligned}$$

Proof

Write the \(O(n^3\delta ^2)\) term of Lemma 3.2 as \(cn^3\delta ^2 f(OPT)\) for some constant c. We prove by induction on t that \(g(t)\le F(\mathbf {y}(t)) + cn^3\delta t f(OPT)\). For \(t=0\), we have \(g(0)=0\le F(\mathbf {y}(0))\). Assume that the claim holds for some t. Then

$$\begin{aligned} g(t+\delta )= & {} (1-\delta )g(t) + \delta e^{-t} f(OPT) \\\le & {} (1-\delta )\left[ F(\mathbf {y}(t)) + cn^3\delta t f(OPT) \right] + \delta e^{-t} f(OPT) \\= & {} F(\mathbf {y}(t)) + \delta [e^{-t}f(OPT) - F(\mathbf {y}(t))] + c(1-\delta )n^3\delta t f(OPT) \\\le & {} F(\mathbf {y}(t+\delta )) + cn^3\delta ^2 f(OPT) + c(1-\delta )n^3\delta t f(OPT) + \delta O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) \\\le & {} F(\mathbf {y}(t+\delta )) + cn^3\delta (t+\delta ) f(OPT) + \delta O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) , \end{aligned}$$

where the first inequality is given by the inductive assumption, the second by Lemma 3.2, and the last one holds since \(\delta \in [0,1]\).    \(\square \)
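
For completeness, the arithmetic behind the last inequality above is simply the following (using \(1-\delta \le 1\)):

$$\begin{aligned} cn^3\delta ^2 f(OPT) + c(1-\delta )n^3\delta t f(OPT) \le cn^3\delta ^2 f(OPT) + cn^3\delta t f(OPT) = cn^3\delta (t+\delta ) f(OPT). \end{aligned}$$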

Lemma 3.4

For every time \(0\le t\le T\), \(g(t)\ge h(t)\).

Proof

The proof is by induction on t. For \(t=0\), \(g(0)=0=h(0)\). Assume that the lemma holds for some t. Then, we can easily get

$$\begin{aligned} h(t+\delta )= & {} h(t) + \int _t^{t+\delta } h^\prime (l)dl = h(t) + f(OPT)\cdot \int _t^{t+\delta } e^{-l}(1-l)dl \\\le & {} h(t) + f(OPT)\cdot \delta e^{-t}(1-t) = (1-\delta )h(t) + \delta e^{-t}\cdot f(OPT) \\\le & {} (1-\delta )g(t) + \delta e^{-t}\cdot f(OPT) = g(t+\delta ). \end{aligned}$$
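
The first inequality holds because the integrand is non-increasing on the relevant range (a short justification added for completeness): \(\frac{d}{dl}\left[ e^{-l}(1-l)\right] = -e^{-l}(2-l)\le 0\) for \(l\le 2\), hence for \(t+\delta \le 2\)

$$\begin{aligned} \int _t^{t+\delta } e^{-l}(1-l)dl \le \delta \cdot e^{-t}(1-t). \end{aligned}$$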

   \(\square \)

Corollary 3.1

\(F(\mathbf {y}(T)) \ge \left[ Te^{-T} - o(1) \right] \cdot f(OPT) - \delta O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) \).

Proof

By Lemma 3.3 and Lemma 3.4,

$$\begin{aligned} F(\mathbf {y}(T))\ge & {} g(T)-O(n^3\delta )\cdot T\cdot f(OPT) - \delta O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) \\\ge & {} h(T)-O(n^3\delta )\cdot f(OPT) - \delta O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) \\= & {} \left[ Te^{-T} - O(n^3\delta ) \right] \cdot f(OPT) - \delta O\left( \sqrt{\epsilon }+\frac{2\varDelta \ln n}{\epsilon ^3}\right) . \end{aligned}$$

Recall that \(\delta \le n^{-5}\), hence, \(O(n^3\delta )=o(1)\) and the proof is complete.    \(\square \)
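
As a closing observation (added here to connect the corollary with the \(1/e\) guarantee stated in the abstract): the factor \(Te^{-T}\) is maximized at \(T=1\), since

$$\begin{aligned} \frac{d}{dT}\left( Te^{-T}\right) = (1-T)e^{-T}, \qquad \text {so } \max _{T\ge 0} Te^{-T} = 1\cdot e^{-1} = \frac{1}{e}, \end{aligned}$$

and running the algorithm up to time \(T=1\) yields a \((1/e - o(1))\)-approximation, up to the additive privacy error term.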

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Sun, X., Li, G., Zhang, Y., Zhang, Z. (2021). Measured Continuous Greedy with Differential Privacy. In: Wu, W., Du, H. (eds.) Algorithmic Aspects in Information and Management. AAIM 2021. Lecture Notes in Computer Science, vol. 13153. Springer, Cham. https://doi.org/10.1007/978-3-030-93176-6_19

  • DOI: https://doi.org/10.1007/978-3-030-93176-6_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93175-9

  • Online ISBN: 978-3-030-93176-6
