ABSTRACT
With the rapid growth of data volumes, traditional single-machine processing methods struggle to handle massive datasets, particularly iterative clustering algorithms that require frequent read and write operations. Building on the Spark framework, this paper proposes a distributed possibilistic c-means algorithm based on in-memory computing, called Spark-PCM. The proposed method improves the handling of distributed matrix operations and is implemented on the Spark platform. Experimental results show that the running time of Spark-PCM decreases nearly linearly as the number of nodes increases, indicating good scalability and adaptability to large-scale data.
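As background (not detailed in the abstract itself), the core per-iteration updates of possibilistic c-means, which Spark-PCM distributes across nodes as matrix operations, can be sketched in plain NumPy. The function name `pcm_step` and the fixed scale parameters `eta` are illustrative assumptions; the paper's actual distributed implementation and parameter estimation may differ.

```python
import numpy as np

def pcm_step(X, V, eta, m=2.0):
    """One PCM iteration: update typicalities, then cluster centers.

    X   : (n, d) data points
    V   : (k, d) current cluster centers
    eta : (k,)   scale (bandwidth) parameter per cluster, assumed given
    m   : fuzzifier, m > 1
    """
    # Squared Euclidean distances between every center and every point: (k, n)
    d2 = ((V[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    # Typicality matrix: t_ij = 1 / (1 + (d_ij^2 / eta_i)^(1/(m-1)))
    T = 1.0 / (1.0 + (d2 / eta[:, None]) ** (1.0 / (m - 1.0)))
    # Center update: v_i = sum_j t_ij^m x_j / sum_j t_ij^m
    Tm = T ** m
    V_new = (Tm @ X) / Tm.sum(axis=1, keepdims=True)
    return T, V_new
```

In a Spark setting, the data matrix `X` would be partitioned as an RDD, the distance and typicality computations applied per partition in parallel, and the weighted sums for the center update combined with a reduce step; the small matrices `V` and `eta` stay in driver memory across iterations.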
A Distributed PCM Clustering Algorithm Based on Spark