
ProvDP: Differential Privacy for System Provenance Dataset

  • Conference paper
  • Applied Cryptography and Network Security (ACNS 2025)

Abstract

Provenance-based Intrusion Detection Systems (PIDSes) are being widely deployed to safeguard enterprises across various verticals against sophisticated cyberattacks such as Advanced Persistent Threat (APT) campaigns. Rich in detail, provenance data are essentially fine-grained records of user activity logged at endpoints. A security breach of provenance data can therefore leak private information about users and the enterprise they work for (e.g., the clients with which a user frequently communicates), raising significant privacy concerns.

In this work, we propose a novel privacy-preserving solution, ProvDP, specifically tailored to protect the privacy of provenance data, focusing on provenance graphs. Our approach introduces multiple technical contributions: (1) a novel method for converting provenance graphs to and from provenance trees, enabling us to harness properties of trees to develop privacy-preserving techniques while preserving the semantic value of provenance graphs, (2) a novel subtree differential privacy framework for providing privacy guarantees on these provenance trees, and (3) empirical evidence that applying differential privacy does not diminish the detection accuracy of PIDSes. Our evaluation demonstrates that PIDSes trained on differentially private data maintain utility while preserving privacy.


Notes

  1. https://github.com/provdp/prov-dp.

References

  1. The Linux audit framework (2015). https://github.com/linux-audit/

  2. Deep Graph Library: easy deep learning on graphs (2019). https://www.dgl.ai/

  3. Evasive attacker leverages SolarWinds supply chain compromises with SUNBURST backdoor (2019). https://tinyurl.com/bdz8s5yn

  4. Event Tracing for Windows (ETW) - Windows drivers | Microsoft Docs (2019). https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/event-tracing-for-windows--etw-

  5. North Korea's Lazarus APT leverages Windows Update client, GitHub in latest campaign (2019). https://tinyurl.com/mr4h7d35

  6. U.S. said to find North Korea ordered cyberattack on Sony (2019). https://tinyurl.com/5da2h9bx

  7. WildPressure targets industrial in the Middle East (2019). https://tinyurl.com/mr2n8hdu

  8. Extended detection and response (XDR) (2023). https://www.cybereason.com/platform/xdr

  9. Anderson, B., McGrew, D.: Machine learning for encrypted malware traffic classification: accounting for noisy labels and non-stationarity. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1723–1732 (2017)

  10. Bilge, L., Balzarotti, D., Robertson, W., Kirda, E., Kruegel, C.: Disclosure: detecting botnet command and control servers through large-scale NetFlow analysis. In: Proceedings of the 28th Annual Computer Security Applications Conference, pp. 129–138 (2012)

  11. Cantrill, B.: DTrace. In: Large Installation System Administration Conference (LISA) (2005)

  12. Cheng, Z., et al.: Kairos: practical intrusion detection and investigation using whole-system provenance. In: IEEE Symposium on Security and Privacy (SP) (2024)

  13. Dinur, I., Nissim, K.: Revealing information while preserving privacy. In: Proceedings of the Twenty-Second ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pp. 202–210 (2003)

  14. Divakaran, D.M., Fok, K.W., Nevat, I., Thing, V.L.: Evidence gathering for network security and forensics. Digit. Investig. 20, S56–S65 (2017)

  15. Dwork, C.: Differential privacy. In: International Colloquium on Automata, Languages, and Programming, pp. 1–12. Springer (2006)

  16. Dwork, C., Roth, A., et al.: The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9(3–4), 211–407 (2014)

  17. Goyal, A., Wang, G., Bates, A.: R-CAID: embedding root cause analysis within provenance-based intrusion detection. In: IEEE Symposium on Security and Privacy (SP) (2024)

  18. Griffith, J., et al.: Scalable transparency architecture for research collaboration (STARC) - DARPA transparent computing (TC) program. Technical report (2020)

  19. Gysel, P., Wüest, C., Nwafor, K., Jašek, O., Ustyuzhanin, A., Divakaran, D.M.: EagleEye: attention to unveil malicious event sequences from provenance graphs. arXiv preprint arXiv:2408.09217 (2024)

  20. Han, X., et al.: SIGL: securing software installations through deep graph learning. In: USENIX Security Symposium (SEC) (2021)

  21. Hassan, W.U., et al.: NoDoze: combatting threat alert fatigue with automated provenance triage. In: Network and Distributed System Security Symposium (NDSS) (2019)

  22. Hay, M., Miklau, G., Jensen, D., Towsley, D., Weis, P.: Accurate estimation of the degree distribution of private networks. In: 2009 Ninth IEEE International Conference on Data Mining, pp. 169–178. IEEE (2009)

  23. Inam, M.A., et al.: SoK: history is a vast early warning system: auditing the provenance of system intrusions. In: IEEE Symposium on Security and Privacy (SP) (2023)

  24. Karwa, V., Raskhodnikova, S., Smith, A., Yaroslavtsev, G.: Private analysis of graph structure. Proc. VLDB Endow. 4(11), 1146–1157 (2011)

  25. Kasiviswanathan, S.P., Nissim, K., Raskhodnikova, S., Smith, A.: Analyzing graphs with node differential privacy. In: Theory of Cryptography: 10th Theory of Cryptography Conference, TCC 2013, Tokyo, Japan, March 3–6, 2013, Proceedings, pp. 457–476. Springer (2013)

  26. King, S.T., Chen, P.M.: Backtracking intrusions. In: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles (2003)

  27. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016)

  28. Liu, Y., et al.: Towards a timely causality analysis for enterprise security. In: Network and Distributed System Security Symposium (NDSS) (2018)

  29. McSherry, F.D.: Privacy integrated queries: an extensible platform for privacy-preserving data analysis. In: Proceedings of the 2009 ACM SIGMOD International Conference on Management of Data, pp. 19–30 (2009)

  30. Miller, S., Childers, D.: Probability and Random Processes: With Applications to Signal Processing and Communications. Academic Press (2012)

  31. Mukherjee, K., Harrison, Z., Balaneshin, S.: Z-REx: human-interpretable GNN explanations for real estate recommendations. arXiv preprint arXiv:2503.18001 (2025)

  32. Mukherjee, K., et al.: Evading provenance-based ML detectors with adversarial system actions. In: USENIX Security Symposium (SEC) (2023)

  33. Mukherjee, K., et al.: ProvIoT: detecting stealthy attacks in IoT through federated edge-cloud security. In: International Conference on Applied Cryptography and Network Security, pp. 241–268. Springer (2024)

  34. Mukherjee, K., et al.: Interpreting GNN-based IDS detections using provenance graph structural features. arXiv preprint arXiv:2306.00934 (2023)

  35. Narayanan, A., Shmatikov, V.: Robust de-anonymization of large sparse datasets. In: 2008 IEEE Symposium on Security and Privacy (SP 2008), pp. 111–125. IEEE (2008)

  36. Narayanan, A., Shmatikov, V.: De-anonymizing social networks. In: 2009 30th IEEE Symposium on Security and Privacy, pp. 173–187. IEEE (2009)

  37. Nevat, I., et al.: Anomaly detection and attribution in networks with temporally correlated traffic. IEEE/ACM Trans. Networking 26(1), 131–144 (2017)

  38. Nguyen, H.H., Imine, A., Rusinowitch, M.: Differentially private publication of social graphs at linear cost. In: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2015), pp. 596–599. Association for Computing Machinery, New York (2015). https://doi.org/10.1145/2808797.2809385

  39. Nissim, K., Raskhodnikova, S., Smith, A.: Smooth sensitivity and sampling in private data analysis. In: Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, pp. 75–84 (2007)

  40. Rehman, M.U., Ahmadi, H., Hassan, W.U.: FLASH: a comprehensive approach to intrusion detection via provenance graph representation learning. In: IEEE Symposium on Security and Privacy (SP) (2024)

  41. Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., Bengio, Y.: Graph attention networks. arXiv preprint arXiv:1710.10903 (2017)

  42. Wang, Q., et al.: You are what you do: hunting stealthy malware via data provenance analysis. In: Network and Distributed System Security Symposium (NDSS) (2020)

  43. Wang, T., et al.: ProvCreator: synthesizing graph data with text attributes

  44. Yuan, Q., Zhang, Z., Du, L., Chen, M., Cheng, P., Sun, M.: PrivGraph: differentially private graph data publication by exploiting community information. In: 32nd USENIX Security Symposium (USENIX Security 2023), pp. 3241–3258 (2023)

  45. Zengy, J., et al.: ShadeWatcher: recommendation-guided cyber threat analysis using system audit records. In: IEEE Symposium on Security and Privacy (SP) (2022)

  46. Zhang, S., Ni, W.: Graph embedding matrix sharing with differential privacy. IEEE Access 7, 89390–89399 (2019)

  47. Zheng, X., Zhang, L., Li, K., Zeng, X.: Efficient publication of distributed and overlapping graph data under differential privacy. Tsinghua Sci. Technol. 27(2), 235–243 (2021)


Acknowledgments

We thank the Shepherd and anonymous reviewers for their helpful feedback.

Author information


Corresponding author

Correspondence to Kunal Mukherjee.


A Appendix


1.1 A.1 Extended Top-m Filter Algorithm

(Figure: pseudocode of the Extended Top-m Filter algorithm.)

1.2 A.2 Prune Subtree at Single Point Algorithm

(Figure: pseudocode of the Prune Subtree at Single Point algorithm.)

1.3 A.3 Prune Subtree at k Point Algorithm

(Figure: pseudocode of the Prune Subtree at k Point algorithm.)

1.4 A.4 Graft Subtree Algorithm

(Figure: pseudocode of the Graft Subtree algorithm.)

1.5 A.5 \(\epsilon \)-Edge-Differential Privacy (\(\epsilon \)-Edge-DP) Proof for Algorithm 1

The ETmF algorithm, which adapts TmF [38], is composed of two mechanisms that are \(\epsilon _1\)-Edge-DP and \(\epsilon _2\)-Edge-DP, respectively; by sequential composition (Theorem 2.1 in [38]), ETmF is \(\epsilon \)-Edge-DP with \(\epsilon = \epsilon _1 + \epsilon _2\).

The first mechanism computes the number of edges to release, \(\tilde{m}\), by perturbing the true edge count \(m = |E|\) with Laplace noise under budget \(\epsilon _2\):

$$ \tilde{m} = \lceil m + Lap(\varDelta f/\epsilon _2)\rceil $$

Based on the definition of neighboring graphs in [38], two graphs are neighboring if they differ in a single edge; thus the global sensitivity \(\varDelta f\) of the function f is 1.

$$ \tilde{m} = \lceil m + Lap(1/\epsilon _2)\rceil $$

Therefore, this step is \(\epsilon _2\)-Edge-DP (Theorem 2.1 in [38]), since the Laplace noise is scaled by \(\frac{\varDelta f}{\epsilon _2}\) with respect to the global sensitivity \(\varDelta f\).
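For concreteness, the first mechanism can be sketched in a few lines of Python. This is our own minimal sketch, not the authors' released code; the function names `laplace_noise` and `perturb_edge_count` are ours.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from Lap(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def perturb_edge_count(m: int, eps2: float, rng: random.Random) -> int:
    """First ETmF mechanism: release ceil(m + Lap(Δf/ε2)) with Δf = 1."""
    return math.ceil(m + laplace_noise(1.0 / eps2, rng))
```

With a large budget (little noise) the released count stays close to the true count; with a small budget it spreads out symmetrically around it.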

The second mechanism adds \(\tilde{m}\) edges to the graph: it first identifies candidate edges by adding Laplace noise under budget \(\epsilon _1\), and then passes the candidates through a high-pass filter with threshold \(\theta \). The global sensitivity is again 1, as in the first mechanism. Under the assumption of edge independence, parallel composition (Theorem 2.2 in [38]) applies at the edge level, so the second mechanism is \(\epsilon _1\)-Edge-DP. Therefore, ETmF is \(\epsilon \)-Edge-DP by sequential composition (Theorem 2.1 in [38]) with \(\epsilon = \epsilon _1 + \epsilon _2\).

In system provenance, the edges are public knowledge, and under that assumption TmF can be generalized to apply to any set of valid edges \(E_{\text {valid}}\). We can enumerate all possible edges by brute force and reject invalid ones to obtain \(E_{\text {valid}}\). To derive the new upper bound for \(\theta \), we calculate the theoretical maximum of \(|E_\text {valid}|\), since \(\theta \) is upper bounded by \(\epsilon _t\). There can be a directed edge between any two nodes, so the maximum number of edges is bounded by \(C^n_2\), i.e., \(n(n-1)/2\). Therefore, the upper bound of \(\theta \) is \(n(n-1)/2\).
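The second mechanism can likewise be sketched as follows. This is our own simplified rendering of the Top-m Filter idea under the stated assumptions; the function name `etmf_select_edges`, the 0/1 indicator scoring, and the brute-force candidate enumeration are ours.

```python
import itertools
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from Lap(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def etmf_select_edges(nodes, true_edges, m_tilde, eps1, theta, rng):
    """Second ETmF mechanism (sketch): noise every candidate edge's
    presence indicator with Lap(1/eps1), keep edges whose noisy score
    clears the high-pass threshold theta, and release at most m_tilde."""
    candidates = list(itertools.permutations(nodes, 2))  # all directed pairs
    scored = []
    for e in candidates:
        indicator = 1.0 if e in true_edges else 0.0
        scored.append((indicator + laplace_noise(1.0 / eps1, rng), e))
    scored.sort(reverse=True)  # strongest noisy scores first
    return {e for score, e in scored[:m_tilde] if score > theta}
```

When the budget is large, the noisy indicators stay near 0 or 1 and the filter releases essentially the true edge set; smaller budgets randomize which edges survive.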

1.6 A.6 \(\epsilon _1\)-Subtree-Differential Privacy (\(\epsilon _1\)-Subtree-DP) Proof for Algorithm 2

In Definition 1 we defined \(\epsilon _1\)-Subtree-Differential Privacy.

Definition 3

In Definition 1 we defined two trees \(T_1\) and \(T_2\) as neighboring if they differ by a single subtree t. We now define the global sensitivity of a function f as \(\varDelta f = \underset{T_1, T_2, t}{\max }\ || f(T_1, t) - f(T_2, t)||_1\).

Using Definition 1, consider a pair of neighboring trees \(T_1, T_2\) that differ by one subtree t. By the exponential mechanism [29, 30], \( \Pr [R(T_1) = x] \) is proportional to \( \exp \left( \frac{\epsilon _1 f(T_1, t)}{S}\right) \). Therefore, the ratio of \( \Pr [R(T_1) = x] \) to \( \Pr [R(T_2) = x] \) is:

$$ \frac{\exp \left( \frac{\epsilon _1 f(T_1, t)}{S}\right) }{\exp \left( \frac{\epsilon _1 f(T_2, t)}{S}\right) }. $$

This is bounded by \(e^{\epsilon _1}\), since the maximum value of \(f(T_1, t) - f(T_2, t)\) is S:

$$ \exp \left( \frac{\epsilon _1 (f(T_1, t) - f(T_2, t))}{S}\right) \le \exp \left( \frac{\epsilon _1 S}{S}\right) = e^{\epsilon _1} $$
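For intuition, the randomized selection R can be sketched as a standard exponential-mechanism draw. The sketch below is ours (the function name and the toy scores are assumptions); subtracting the maximum score rescales every weight by the same constant, so it leaves the sampling distribution unchanged while avoiding overflow.

```python
import math
import random

def choose_prune_node(scores, eps1, sensitivity, rng):
    """Exponential-mechanism-style choice: return index i with probability
    proportional to exp(eps1 * scores[i] / sensitivity)."""
    m = max(scores)
    # Max-subtraction trick: constant factor cancels in the normalization.
    weights = [math.exp(eps1 * (s - m) / sensitivity) for s in scores]
    return rng.choices(range(len(scores)), weights=weights, k=1)[0]
```

With a very large \(\epsilon_1\) the draw concentrates on the highest-scoring candidate; with \(\epsilon_1 = 0\) it is uniform over all candidates.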

For Algorithm 2, we describe the effect of the subtree size at a node, the degree of the node, the subtree height, and the subtree depth on the global sensitivity calculation. We define the global sensitivity of function f with respect to size as \(\varDelta f_{size} = \underset{v \in V \setminus V_{root}}{\max } \{|V| - \text {(number of nodes after pruning }T_v\text {)}\}\). This can be rewritten in terms of the pruned subtree \(T_v\): \(\varDelta f_{size} = \underset{v \in V}{\max }\ |T_v|\).

The height parameter does not contribute to the global sensitivity if there is more than one longest root-to-leaf path: if a node v on one of the longest root-to-leaf paths is pruned, the other longest paths survive, so the height of \(\tilde{T}\) remains the same as the height of T. If and only if there is a unique longest root-to-leaf path and the pruned node v lies on it, the sensitivity with respect to height becomes \(\varDelta f_{height} \leftarrow \max (height(T) - height(\tilde{T}), height(T_v))\). Therefore, the upper bound of \(\varDelta f_{height}\) is \(height(T) - 1\).

The depth does not contribute to the global sensitivity, since depth is not a property of the tree but of the node being pruned; it therefore contributes to the local sensitivity of the pruned subtree. Let \(L_T\) be the set of leaf nodes of T. The upper limit of the local sensitivity with respect to depth is then \(\varDelta f_{depth} \leftarrow \max _{v \in L_T} depth(v)\).

The branching factor \(b_T\) of a tree T is defined as the maximum degree over all nodes in the tree, \(b_T \leftarrow \max _{v \in T} {degree(v)}\). Let the least common ancestor be \(LCA_b \leftarrow lca(v_1, v_2, \ldots, v_k)\) where \(degree(v_i) = b_T\) for \(i = 1 \ldots k\). The branching factor of the tree decreases if and only if we prune \(LCA_b\) or one of its ancestors. Therefore, the sensitivity with respect to the branching factor is \(\varDelta f_{branching\_factor} \leftarrow b_T\). Note that this scenario occurs when \(LCA_b\) is at level 1 of the tree and is the only node at that level.

Finally, we compute the global sensitivity \(\varDelta f\) as a weighted sum of \(\varDelta f_{size}\), \(\varDelta f_{height}\), \(\varDelta f_{depth}\), and \(\varDelta f_{branching\_factor}\):

$$ \varDelta f \leftarrow \alpha \cdot \varDelta f_{size} + \beta \cdot \varDelta f_{height} + \gamma \cdot \varDelta f_{depth} + \eta \cdot \varDelta f_{branching\_factor} $$
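Under one natural reading of these definitions (height and depth counted in edges, degree taken as child count, and the height term given its upper bound \(height(T) - 1\)), the components can be computed on a toy tree as follows. This is our own illustrative sketch, not the paper's code.

```python
def subtree_sizes(children, root):
    """Number of nodes in the subtree rooted at each node."""
    size = {}
    def walk(v):
        size[v] = 1 + sum(walk(c) for c in children.get(v, []))
        return size[v]
    walk(root)
    return size

def height(children, v):
    """Longest root-to-leaf path length below v, counted in edges."""
    kids = children.get(v, [])
    return 0 if not kids else 1 + max(height(children, c) for c in kids)

def weighted_sensitivity(children, root, alpha, beta, gamma, eta):
    size = subtree_sizes(children, root)
    depth = {}
    def assign_depth(v, d):
        depth[v] = d
        for c in children.get(v, []):
            assign_depth(c, d + 1)
    assign_depth(root, 0)
    leaves = [v for v in size if not children.get(v)]
    df_size = max(s for v, s in size.items() if v != root)  # largest prunable subtree
    df_height = height(children, root) - 1                  # upper bound from above
    df_depth = max(depth[v] for v in leaves)                # deepest leaf
    df_bf = max(len(children.get(v, [])) for v in size)     # branching factor
    return alpha * df_size + beta * df_height + gamma * df_depth + eta * df_bf
```

On the tree r → {a, b}, a → {c, d}, c → {e}, the components are 4 (size), 2 (height), 3 (depth), and 2 (branching factor).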

Special Scenario: Let us assume that session behaviors have similar structures. Then a tree T contains user activities that are uniform across different sessions, and T contains multiple similar subtrees.

The subtree size strictly decreases as depth increases, so the node v that attains the global sensitivity must be located at depth 1. A subtree rooted at depth \(\ge \) 2 cannot be a candidate, since there always exists a subtree rooted at depth 1 that is larger by at least one node. Under the uniform-activity assumption, all subtrees at depth 1 are of equal size, so the size of any subtree rooted at depth 1 is \(\frac{|V|-1}{degree(v_{root})}\), where \(degree(v_{root})\) is the number of user behaviors contained in the tree. For height and branching factor, since the subtrees are not unique, their effect on the global sensitivity is nullified. The sensitivity due to depth is maximal when we prune at a leaf; in that case the depth is \(height - 1\), which equals \(\frac{|V|-1}{degree(v_{root})}\).

These two cases cannot occur together: we either prune at level 1 or prune at a leaf. Therefore, \(\varDelta f\) for this special scenario is:

$$ \varDelta f \leftarrow \max (\alpha , \gamma ) \cdot \frac{|V|-1}{degree(v_{root})} $$

1.7 A.7 \(\epsilon _{k}\)-Subtree-Differential Privacy (\(\epsilon _{k}\)-Subtree-DP) Proof for Algorithm 3

In Definition 2 we defined \(\epsilon _{k}\)-Subtree-Differential Privacy. We use the sequential composition theorem (Theorem 3 in [29]), which states that any sequence of computations, each providing differential privacy in isolation, also provides differential privacy in sequence. Mathematically, if a randomizer R provides \(\epsilon _i\)-differential privacy, then the sequence of invocations of R provides \(\sum _{i} \epsilon _i\)-differential privacy.

In the k-subtree pruning algorithm (Algorithm 3), we perform subtree pruning (Algorithm 2) in sequence k times on the tree T. In Sect. A.6 we showed that Algorithm 2 is \(\epsilon _1\)-Subtree-DP. Therefore, applying Algorithm 2 k times in sequence provides \(\sum _{i=1}^{k} \epsilon _{1}\)-Subtree-DP. We define \(\epsilon _{k} = \sum _{i=1}^{k} \epsilon _{1} = k\,\epsilon _1\). Therefore, Algorithm 3 is \(\epsilon _{k}\)-Subtree-DP by the sequential composition theorem of [29].
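Sequential composition can be made explicit with a small budget accountant. This is our own sketch; the class and function names are assumptions, not part of the paper's implementation.

```python
class PrivacyAccountant:
    """Tracks the total budget spent under sequential composition:
    running eps_i-DP steps in sequence costs sum(eps_i) overall."""
    def __init__(self):
        self.spent = 0.0

    def charge(self, eps: float) -> float:
        self.spent += eps
        return self.spent

def budget_for_k_prunes(k: int, eps1: float) -> float:
    """k invocations of the eps1-Subtree-DP pruning step cost k * eps1."""
    acct = PrivacyAccountant()
    for _ in range(k):
        acct.charge(eps1)
    return acct.spent
```

For example, five prunes at \(\epsilon_1 = 0.2\) consume a total budget of \(\epsilon_k = 1.0\).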

1.8 A.8 \(\epsilon _{k^\prime }\)-Subtree-Differential Privacy (\(\epsilon _{k^\prime }\)-Subtree-DP) Proof for Algorithm 4

In the grafting algorithm (Algorithm 4), a single graft is \(\epsilon _{2}\)-Subtree-DP: in line 6, we add Laplace noise (i.e., \(Lap(1/\epsilon _2)\)) to \(s_v\) to obtain \(\tilde{s}_v\), which perturbs the grafting probability. This step satisfies the guarantee of the Laplace mechanism (Theorem 2.1 in [38]), since the noise is scaled by \(\frac{\varDelta f}{\epsilon _2}\) with respect to the global sensitivity \(\varDelta f\).

We then perform subtree grafting in sequence \(k^\prime \) times on the tree \(\tilde{T}\). Since grafting one subtree is \(\epsilon _2\)-Subtree-DP, applying it \(k^\prime \) times in sequence provides \(\sum _{i=1}^{k^\prime } \epsilon _2\)-Subtree-DP. We define \(\epsilon _{k^\prime } = \sum _{i=1}^{k^\prime } \epsilon _2 = k^\prime \epsilon _2\). Therefore, Algorithm 4 is \(\epsilon _{k^\prime }\)-Subtree-DP by the sequential composition theorem of [29].

1.9 A.9 \(\epsilon \)-Subtree-Differential Privacy (\(\epsilon \)-Subtree-DP) Proof

We proved that Algorithm 3 is \(\epsilon _{k}\)-Subtree-DP in Sect. A.7 and that Algorithm 4 is \(\epsilon _{k^\prime }\)-Subtree-DP in Sect. A.8. Since the algorithms are invoked in sequence, ProvDP is \(\epsilon \)-Subtree-DP with \(\epsilon = \epsilon _k + \epsilon _{k^\prime }\), by the sequential composition theorem of [29].

1.10 A.10 Implementation

Our ProvDP tool is implemented in Python, comprising approximately 1K lines of code. We developed custom Python functions to construct provenance trees from graphs, filter extraneous information, and abstract node entities. This preprocessing ensures that the raw graph data is well prepared for our downstream task, GNN-based anomaly detection [32]. For the GNN model, we use the DGL [2] library, specifically its GAT layers; GAT was chosen because recent work [31, 34] demonstrates the explanation capability of this architecture, which increases user confidence. These layers aggregate features from neighboring nodes to generate refined node representations. The model architecture incorporates dropout and ReLU activation functions between layers to improve generalization.
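The graph-to-tree construction step can be sketched as a BFS unrolling from a root process. This is our own simplification of the preprocessing described above; the edge representation and the node-abstraction rule below are assumptions for illustration only.

```python
from collections import deque

def graph_to_tree(edges, root):
    """Unroll a provenance graph (list of directed (src, dst) edges) into a
    tree by BFS from `root`, visiting each node once; back and cross edges
    that would create cycles are dropped."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    tree, seen, queue = {}, {root}, deque([root])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                tree.setdefault(v, []).append(w)
                queue.append(w)
    return tree

def abstract_node(name: str) -> str:
    """Abstract concrete entities into coarse types (illustrative rule)."""
    if "/" in name or "\\" in name:
        return "FILE"
    if name.endswith(".exe"):
        return "PROCESS"
    return name
```

For instance, a cycle such as process → socket → process collapses into a tree rooted at the process, with the back edge discarded.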


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Mukherjee, K., Yu, J., De, P., Divakaran, D.M. (2025). ProvDP: Differential Privacy for System Provenance Dataset. In: Fischlin, M., Moonsamy, V. (eds) Applied Cryptography and Network Security. ACNS 2025. Lecture Notes in Computer Science, vol 15827. Springer, Cham. https://doi.org/10.1007/978-3-031-95767-3_8


  • DOI: https://doi.org/10.1007/978-3-031-95767-3_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-95766-6

  • Online ISBN: 978-3-031-95767-3

