Abstract
A quasi attribute (quasi-identifier) is a subset of attributes that, taken in combination, can adequately identify tuples in a table. Careless publication of quasi attributes leads to privacy leakage. Which attributes are treated as private is decided by the data publisher, and this choice inevitably varies from dataset to dataset. Dynamically identifying quasi and non-quasi attributes and informing the system of that classification therefore remains a challenging task. At present, there is no automated model for classifying attributes as quasi or non-quasi, which becomes a burden when a massive dataset must be classified or several datasets must be aggregated.
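To make the definition concrete, a quasi-identifier can be approximated by measuring how uniquely a combination of attributes pins down tuples. The sketch below is not the paper's method; it is a minimal illustration, with hypothetical attribute names and a hypothetical uniqueness threshold, of why some attribute combinations "adequately recognize" tuples.

```python
from itertools import combinations

def uniqueness_ratio(rows, attrs):
    """Fraction of tuples uniquely identified by the given attribute combination."""
    counts = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        counts[key] = counts.get(key, 0) + 1
    unique = sum(1 for c in counts.values() if c == 1)
    return unique / len(rows)

def candidate_quasi_sets(rows, attrs, threshold=0.9, max_size=2):
    """Flag attribute combinations whose uniqueness exceeds the threshold."""
    found = []
    for size in range(1, max_size + 1):
        for combo in combinations(attrs, size):
            if uniqueness_ratio(rows, combo) >= threshold:
                found.append(combo)
    return found

# Hypothetical toy table: no single attribute is identifying,
# but pairs of attributes single out every tuple.
rows = [
    {"zip": "560001", "age": 34, "sex": "F"},
    {"zip": "560001", "age": 29, "sex": "M"},
    {"zip": "560002", "age": 34, "sex": "M"},
    {"zip": "560003", "age": 51, "sex": "F"},
]
print(candidate_quasi_sets(rows, ["zip", "age", "sex"]))
```

On this toy table every pair of attributes reaches uniqueness 1.0 while each single attribute stays below the threshold, which is exactly the situation that makes manual quasi/non-quasi labeling error-prone.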
This research paper addresses the need to categorize quasi attributes on behalf of a non-expert exposed to a direct attack, and proposes a solution combining a game theory approach with a reinforcement machine learning model. For demonstration, a \(2 \times 2\) state matrix is considered. The results report case-wise time consumption and compare the steps required for accurate navigation between the various attribute arrangements. Among the cases studied, the arrangement with quasi attributes at positions \((0,0)\) and \((1,1)\) and non-quasi attributes at positions \((0,1)\) and \((1,0)\) obtained the best performance. This reinforcement-based solution helps automate the classification of quasi and non-quasi attributes.
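The flavor of the reinforcement step can be sketched with tabular Q-learning on a \(2 \times 2\) state matrix. This is an illustrative sketch only: the paper's actual reward design and game-theoretic coupling are not reproduced, and the reward function, learning rate, and starting cell below are assumptions. The grid uses the best-performing arrangement from the abstract, with quasi cells at (0,0) and (1,1).

```python
import random

random.seed(0)

# Hypothetical 2x2 state matrix: 'Q' marks a quasi cell, 'N' a non-quasi cell,
# matching the arrangement the abstract reports as best-performing.
grid = {(0, 0): "Q", (0, 1): "N", (1, 0): "N", (1, 1): "Q"}
actions = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

q_table = {(s, a): 0.0 for s in grid for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # assumed hyperparameters

def step(state, action):
    """Move within the 2x2 grid; reward landing on a quasi cell (assumed scheme)."""
    r, c = state
    dr, dc = actions[action]
    nxt = (max(0, min(1, r + dr)), max(0, min(1, c + dc)))
    reward = 1.0 if grid[nxt] == "Q" else -1.0
    return nxt, reward

for episode in range(500):
    state = (0, 1)                        # start on a non-quasi cell
    for _ in range(10):
        if random.random() < epsilon:     # epsilon-greedy exploration
            action = random.choice(list(actions))
        else:
            action = max(actions, key=lambda a: q_table[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q_table[(nxt, a)] for a in actions)
        q_table[(state, action)] += alpha * (
            reward + gamma * best_next - q_table[(state, action)]
        )
        state = nxt

# After training, the greedy action from the non-quasi start cell
# should navigate toward a quasi cell.
best = max(actions, key=lambda a: q_table[((0, 1), a)])
print(best, step((0, 1), best))
```

Under this assumed reward scheme the agent learns to navigate from non-quasi cells toward quasi cells, which mirrors the kind of step-wise navigation the paper measures across its matrix arrangements.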
Visvesvaraya Technological University, Belagavi.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Yaji, S., Bayyapu, N. (2023). Reinforcement Technique for Classifying Quasi and Non-quasi Attributes for Privacy Preservation and Data Protection. In: Prabhu, S., Pokhrel, S.R., Li, G. (eds) Applications and Techniques in Information Security . ATIS 2022. Communications in Computer and Information Science, vol 1804. Springer, Singapore. https://doi.org/10.1007/978-981-99-2264-2_1
Print ISBN: 978-981-99-2263-5
Online ISBN: 978-981-99-2264-2