Differential Evolution based bucket indexed data deduplication for big data storage
Article type: Research Article
Authors: Kumar, Naresh[a],* | Antwal, Shobha[a] | Jain, S.C.[b]
Affiliations: [a] Department of Computer Science and Engineering, UIET, Kurukshetra University, Kurukshetra, India | [b] Department of Computer Science and Engineering, Rajasthan Technical University, Kota, India
Correspondence: [*] Corresponding author. Naresh Kumar, Department of Computer Science and Engineering, UIET, Kurukshetra University, Kurukshetra-136119, India. Tel.: +91 9467012567; E-mails: nkumar2015@kuk.ac.in; naresh_duhan@rediffmail.com.
Abstract: This work focuses on optimizing a deduplication system along two axes: tuning the pertinent factors of content-defined chunking (CDC), identified as the key ingredients for declaring chunk cut-points, and making fingerprint lookup efficient through bucket-based index partitioning. For efficient chunking, the proposed Differential Evolution (DE) based approach optimizes the Two Thresholds Two Divisors CDC algorithm into TTTD-P, which significantly reduces the number of computing operations by replacing the multi-operation scheme of TTTD with a single dynamic optimal divisor D and an optimal threshold value. The proposed DE-based TTTD-P thus maximizes chunking throughput while increasing the deduplication ratio (DR), and the bucket indexing approach reduces the hash-value comparison time needed to identify and declare redundant chunks by about 16 times relative to Rabin CDC, 5 times relative to Asymmetric Extremum (AE) CDC, and 1.6 times relative to FastCDC. Comparative analysis of the experimental results reveals that TTTD-P using the fast BUZ rolling hash with bucket indexing on the Hadoop Distributed File System (HDFS) delivers the highest redundancy detection among the compared schemes, with higher throughput, a higher deduplication ratio, lower computation time, and very low hash-value comparison time, making it the best-performing distributed deduplication approach for big data storage systems.
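To make the chunking idea concrete, here is a minimal sketch of a content-defined chunker in the spirit of TTTD-P: a rolling hash over a sliding window, a single divisor D that declares cut-points, and minimum/maximum size thresholds. The polynomial hash, and the values of D, MIN_CHUNK, MAX_CHUNK, and WINDOW, are illustrative stand-ins, not the paper's BUZ hash or its DE-optimized parameters.

```python
from collections import deque

D = 4096           # single divisor: expected average chunk size (assumed value)
MIN_CHUNK = 2048   # lower threshold: no cut-points inside very small chunks
MAX_CHUNK = 16384  # upper threshold: force a cut to bound chunk size
WINDOW = 48        # sliding-window width for the rolling hash
B = 31             # rolling-hash base (toy choice, not BUZ)
MASK = (1 << 32) - 1
POW_W = pow(B, WINDOW, 1 << 32)  # B^WINDOW, used to remove the outgoing byte

def chunk(data: bytes):
    """Split `data` into content-defined chunks and return them as a list."""
    chunks, start, h = [], 0, 0
    win = deque()
    for i, b in enumerate(data):
        win.append(b)
        h = (h * B + b) & MASK            # roll the new byte into the hash
        if len(win) > WINDOW:
            old = win.popleft()
            h = (h - old * POW_W) & MASK  # roll the oldest byte out
        size = i - start + 1
        at_boundary = (h % D) == (D - 1)  # single-divisor test declares a cut-point
        if size >= MAX_CHUNK or (size >= MIN_CHUNK and at_boundary):
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
            win.clear()
    if start < len(data):                 # flush the final partial chunk
        chunks.append(data[start:])
    return chunks
```

Because cut-points depend only on local content (the hash window), an insertion or deletion shifts at most a few chunk boundaries, which is what lets deduplication find unchanged chunks again.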
Keywords: Big data, data deduplication, content defined chunking, Differential Evolution, TTTD, HDFS
DOI: 10.3233/JIFS-17593
Journal: Journal of Intelligent & Fuzzy Systems, vol. 34, no. 1, pp. 491-505, 2018
What is it about?
This work optimizes data deduplication at its two costliest points: chunking and fingerprint lookup. A Differential Evolution (DE) algorithm tunes the Two Thresholds Two Divisors (TTTD) content-defined chunking algorithm into TTTD-P, which declares chunk cut-points using a single dynamic optimal divisor D with an optimal threshold value, significantly reducing the number of computing operations, while bucket-based index partitioning makes fingerprint lookup efficient.
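A minimal sketch of the Differential Evolution machinery (classic rand/1/bin mutation, crossover, and greedy selection) used for this kind of parameter tuning is shown below. The fitness function in the usage example is a toy quadratic standing in for the real objective (the paper's fitness involves running TTTD-P and measuring deduplication ratio and throughput), and the population size, F, CR, and bounds are illustrative defaults, not the paper's settings.

```python
import random

def differential_evolution(fitness, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimize `fitness` over box `bounds` with DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [fitness(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct individuals other than the target vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)   # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)                      # clamp to bounds
                else:
                    v = pop[i][j]         # crossover keeps the parent gene
                trial.append(v)
            s = fitness(trial)
            if s < scores[i]:             # greedy selection (minimization)
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda k: scores[k])
    return pop[best], scores[best]

# Usage: recover a hypothetical "optimal divisor" of 4096 from a toy fitness.
best, score = differential_evolution(lambda x: (x[0] - 4096.0) ** 2,
                                     bounds=[(1024.0, 16384.0)])
```

Swapping the toy quadratic for a function that chunks a sample dataset and returns (negated) deduplication ratio or throughput gives the parameter-tuning loop the paper describes.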
Why is it important?
Comparative analysis of the experimental results shows that TTTD-P with the fast BUZ rolling hash and bucket indexing on the Hadoop Distributed File System (HDFS) achieves the highest redundancy detection among the compared schemes, with higher throughput, a higher deduplication ratio, lower computation time, and very low hash-value comparison time, making it the best-performing distributed deduplication approach for big data storage systems.
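The bucket-indexing idea can be sketched as follows: partition the fingerprint index so that a lookup consults only the one bucket selected by the fingerprint itself, rather than scanning a single flat index. SHA-1 fingerprints, 256 buckets keyed on the first fingerprint byte, and the `BucketIndex` class are assumptions for illustration, not the paper's exact design.

```python
import hashlib

NUM_BUCKETS = 256  # illustrative partition count

class BucketIndex:
    """Fingerprint index partitioned into buckets for fast duplicate checks."""

    def __init__(self):
        self.buckets = [dict() for _ in range(NUM_BUCKETS)]

    @staticmethod
    def fingerprint(chunk: bytes) -> bytes:
        return hashlib.sha1(chunk).digest()  # chunk fingerprint (assumed SHA-1)

    def _bucket(self, fp: bytes) -> dict:
        # The first fingerprint byte selects the bucket, so each lookup
        # compares against roughly 1/NUM_BUCKETS of the stored fingerprints.
        return self.buckets[fp[0]]

    def add(self, chunk: bytes) -> bool:
        """Store `chunk`; return True if it was already present (redundant)."""
        fp = self.fingerprint(chunk)
        bucket = self._bucket(fp)
        if fp in bucket:
            return True                      # duplicate found in one bucket
        bucket[fp] = chunk                   # new chunk: record its fingerprint
        return False
```

Confining each fingerprint judgment to one partition is what drives down the hash-value comparison time the abstract reports; in the distributed setting, buckets can also be spread across HDFS nodes so lookups proceed in parallel.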