Neurocomputing

Volume 209, 12 October 2016, Pages 67-74

Efficient auto-scaling scheme for rapid storage service using many-core of desktop storage virtualization based on IoT

https://doi.org/10.1016/j.neucom.2016.05.090

Abstract

Following the progressive development of IT technology, on-premise IT resources have shifted to cloud computing environments. The principal reason for this change is that cloud computing services allow IT resources to be used as and when necessary, that is, without purchasing hardware equipment. For this reason, studies on diverse aspects are being conducted to improve the security, rapidity, availability, reliability, and elasticity of cloud computing. Among the virtualization technologies that underpin cloud computing, desktop storage virtualization (DSV) is composed of distributed legacy desktop personal computers (PCs). In DSV environments, clustering by unavailable-state time and auto-scaling of storage provision as requested by users are considered very important. In addition, the deferred processing needed to analyze desktop PC performance states and select an appropriate desktop PC in DSV environments is directly connected to the quality of service (QoS). Although diverse algorithms and schemes for clustering and auto-scaling have been developed to this end, they have limited performance or were designed without considering DSV environments, and consequently incur large deferred processing times. In the present paper, an efficient auto-scaling scheme (EAS) is proposed that minimizes deferred processing time in Internet of Things (IoT) environments by using the many cores of the GPU for clustering and auto-scaling in DSV environments. The EAS provides higher QoS to storage users than CPU-based processing by mapping the information of numerous distributed desktop PCs onto individual threads of the GPU and processing the information in parallel.

Introduction

Following the rapid development of IT technology, on-premise IT resources have shifted to cloud computing environments. Clouds enable IT resources to be accessed as and when necessary. These clouds are divided into diverse delivery models according to the IT resource services provided and into deployment models depending on the cloud access configuration method. Representative services provided by cloud delivery models include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Representative access configurations of cloud deployment models include public clouds, private clouds, hybrid clouds, and community clouds. To configure these clouds, virtualization is utilized as a foundation technology [1], [2], [3], [4], [5], [6]. For these clouds, studies on diverse aspects are being conducted to improve security, rapidity, availability, resiliency, and elasticity [7], [8], [9], [10], [11], [12], [13], [14], [15], [16]. Virtualization, the basic technology of cloud computing, includes application virtualization, hardware virtualization, desktop virtualization, network virtualization, server virtualization, and storage virtualization. Among them, desktop storage virtualization (DSV) integrates and distributes the unavailable IT resources of distributed legacy desktop personal computers (PCs). Owing to the heterogeneous capacities of desktop PCs, clustering and auto-scaling of idle and unavailable storage resources are considered very important in DSV [1], [3], [6], [17], [18], [19], [20], [21], [22]. Such clustering and auto-scaling are directly connected to the quality of service (QoS) because of the deferred processing of on-demand storage requests from many users occurring at unspecified times. Therefore, from the viewpoint of clustering, a number of algorithms have been developed, including centroid-based clustering [23], distribution-based clustering [24], density-based clustering [25], and connectivity-based clustering [26]. In addition, although diverse studies and schemes have been developed for auto-scaling, these algorithms and schemes cause large delay times because DSV environments are not considered.
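
To make the clustering side concrete, the following is a minimal CUDA sketch of the assignment step of centroid-based clustering [23], with one GPU thread per data point; the kernel name `assign_clusters` and all sizes are illustrative assumptions on our part, not the clustering routine used in the EAS.

```cuda
// Hypothetical sketch: assignment step of centroid-based (k-means style)
// clustering, one GPU thread per data point.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void assign_clusters(const float* points, const float* centroids,
                                int* labels, int n, int k, int dim) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;                        // guard against excess threads
    float best = 1e30f;
    int best_c = 0;
    for (int c = 0; c < k; ++c) {              // compare against every centroid
        float dist = 0.0f;
        for (int d = 0; d < dim; ++d) {
            float diff = points[i * dim + d] - centroids[c * dim + d];
            dist += diff * diff;
        }
        if (dist < best) { best = dist; best_c = c; }
    }
    labels[i] = best_c;                        // nearest centroid index
}

int main() {
    const int n = 1 << 16, k = 8, dim = 2;     // placeholder sizes
    float *d_points, *d_centroids;
    int *d_labels;
    cudaMalloc(&d_points, n * dim * sizeof(float));
    cudaMalloc(&d_centroids, k * dim * sizeof(float));
    cudaMalloc(&d_labels, n * sizeof(int));
    // (In a real run the host would copy point/centroid data in first.)
    assign_clusters<<<(n + 255) / 256, 256>>>(d_points, d_centroids,
                                              d_labels, n, k, dim);
    cudaDeviceSynchronize();
    printf("assignment step finished\n");
    cudaFree(d_points); cudaFree(d_centroids); cudaFree(d_labels);
    return 0;
}
```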

In the present paper, an efficient auto-scaling scheme (EAS) is proposed that minimizes deferred processing time in Internet of Things (IoT) environments by using the many cores of the GPU. The EAS provides higher QoS to storage users than CPU-based processing by mapping the information of numerous distributed desktop PCs onto individual threads of the GPU and processing the information in parallel for clustering and auto-scaling in DSV environments.
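
As a rough illustration of the thread-per-desktop mapping described above, the following CUDA sketch assigns one GPU thread to one desktop metadata record and evaluates the desktops in parallel; the `DesktopMeta` fields, thresholds, and kernel name are hypothetical placeholders and do not reflect the paper's actual metadata layout.

```cuda
// Hypothetical sketch: one GPU thread inspects one desktop's metadata record
// and flags whether that desktop can currently serve storage requests.
#include <cstdio>
#include <cuda_runtime.h>

struct DesktopMeta {          // illustrative fields (assumed, not from the paper)
    float idle_storage_mb;    // unused storage capacity reported by the desktop
    float unavail_time;       // measured unavailable-state time (e.g., min/day)
    float cpu_load;           // current CPU load ratio 0..1
};

__global__ void evaluate_desktops(const DesktopMeta* meta, int* available,
                                  int n, float min_storage, float max_unavail) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per desktop
    if (i >= n) return;
    const DesktopMeta m = meta[i];
    // A desktop is usable if it has enough idle storage and is rarely unavailable.
    available[i] = (m.idle_storage_mb >= min_storage &&
                    m.unavail_time <= max_unavail) ? 1 : 0;
}

int main() {
    const int n = 501;                                // desktops, as in the evaluation
    DesktopMeta* d_meta;  int* d_avail;
    cudaMalloc(&d_meta, n * sizeof(DesktopMeta));
    cudaMalloc(&d_avail, n * sizeof(int));
    // (The host would upload the collected metadata here with cudaMemcpy.)
    evaluate_desktops<<<(n + 127) / 128, 128>>>(d_meta, d_avail, n,
                                                1024.0f, 30.0f);
    cudaDeviceSynchronize();
    printf("evaluated %d desktops in parallel\n", n);
    cudaFree(d_meta); cudaFree(d_avail);
    return 0;
}
```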

The present paper is composed of the following sections:

In Section 2, resource integration schemes for existing cloud storage services, schemes for auto-scaling, and CUDA for using a GPU are examined. In Section 3, the operation scheme for GPU-based auto-scaling of our proposed EAS is explained. In Sections 4 and 5, the design and the implementation of the EAS application in DSV environments are described, respectively. In Section 6, the processing speed of storage services when auto-scaling is performed in EAS-based DSV environments is evaluated. Finally, overall conclusions and suggestions for future studies are presented in Section 7.

Section snippets

Conventional resource integration schemes

The Hadoop Distributed File System (HDFS) [22] is composed of distributed desktops and is divided into a NameNode and DataNodes that can integrate the storage resources of several thousand desktops. Since these storage resources and users' storage requests are processed on the CPU, delays occur when the number of user requests is large. Although a Secondary NameNode was added to prevent the loss of data when a fault occurs in the NameNode, the HDFS involves a problem of being

EAS scheme

Our proposed EAS performs clustering for hierarchical integration of distributed desktop resources and performs auto-scaling to provide storage services to users based on the results of that clustering. In this section, the desktop metadata used for clustering and auto-scaling and the operation method using the GPU's many cores are explained. In addition, EAS clustering and EAS auto-scaling using GPU operations are explained.
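
The sketch below indicates, under our own assumptions, how clustering by unavailable-state time and a subsequent auto-scaling decision could be expressed with the GPU: each thread buckets one desktop into an availability tier, and the host then selects desktops tier by tier until a requested capacity is covered. The tier boundaries, synthetic values, and greedy selection are illustrative and are not the EAS algorithm itself.

```cuda
// Hypothetical sketch: bucket desktops into availability tiers by their
// unavailable-state time (clustering), then greedily pick desktops from the
// most reliable tier upward until a requested capacity is met (auto-scaling).
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void tier_by_unavail_time(const float* unavail_time, int* tier, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float t = unavail_time[i];
    // Illustrative tier boundaries (minutes of unavailability per day).
    tier[i] = (t < 10.0f) ? 0 : (t < 60.0f) ? 1 : 2;
}

int main() {
    const int n = 501;
    std::vector<float> h_time(n), h_storage(n);
    for (int i = 0; i < n; ++i) {                   // synthetic metadata for the sketch
        h_time[i] = (i * 7) % 120;
        h_storage[i] = 512.0f + (i % 4) * 256.0f;   // MB of idle storage
    }
    float* d_time; int* d_tier;
    cudaMalloc(&d_time, n * sizeof(float));
    cudaMalloc(&d_tier, n * sizeof(int));
    cudaMemcpy(d_time, h_time.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    tier_by_unavail_time<<<(n + 127) / 128, 128>>>(d_time, d_tier, n);
    std::vector<int> h_tier(n);
    cudaMemcpy(h_tier.data(), d_tier, n * sizeof(int), cudaMemcpyDeviceToHost);

    // Greedy auto-scaling on the host: satisfy a storage request tier by tier.
    float requested_mb = 50000.0f, granted = 0.0f;
    int used = 0;
    for (int t = 0; t < 3 && granted < requested_mb; ++t)
        for (int i = 0; i < n && granted < requested_mb; ++i)
            if (h_tier[i] == t) { granted += h_storage[i]; ++used; }
    printf("granted %.0f MB from %d desktops\n", granted, used);
    cudaFree(d_time); cudaFree(d_tier);
    return 0;
}
```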

Design of f-EAS

Frameworks to which our proposed EAS is applied are defined as framework-EAS (f-EAS). The user interface is used to enter the permitted resource allowance of each desktop and to receive server-operation inputs from the user. The Resource Integrated Manager contains functions for managing the desktops connected to the f-EAS and the stored files, and for auto-scaling. The Desktop Inspector analyzes the states of the desktops connected to the f-EAS. The Viewer plays the role of

Implementation of f-EAS

The initial screen of our proposed f-EAS is shown in Fig. 4. Fig. 4-① is the initial execution screen, which shows a list of desktops connected to the f-EAS and information on the connected desktops in the R-State view. Fig. 4-② shows the activated R-Statistics view, which shows the log of storage use as requested by users. Fig. 4-③ shows the activated clustering view, which visualizes the internally clustered desktops. Fig. 4-④ shows the basic settings for desktop connection for entries of IP,

Performance evaluation

For evaluation of the performance of the EAS, CPU operation processing and GPU operation processing for automatic extension in the f-EAS according to users' storage requests are measured and compared. Through this operation processing, clustering and auto-scaling are implemented. We used 501 actual desktops in a university computer center, together with one server, for the performance evaluation.
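
The spirit of this CPU-versus-GPU comparison can be reproduced with a small harness such as the one below, which times the same per-record scoring once as a sequential CPU loop (with std::chrono) and once as a CUDA kernel (with CUDA events); the scoring function and problem size are placeholders, not the measured f-EAS workload.

```cuda
// Hypothetical timing harness: the same per-record scoring done sequentially
// on the CPU and in parallel on the GPU, timed with std::chrono / CUDA events.
#include <chrono>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void score_gpu(const float* load, float* score, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) score[i] = 1.0f / (1.0f + load[i]);   // toy scoring function
}

int main() {
    const int n = 1 << 20;                           // placeholder problem size
    std::vector<float> load(n, 0.5f), score(n);

    auto t0 = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < n; ++i) score[i] = 1.0f / (1.0f + load[i]);
    auto t1 = std::chrono::high_resolution_clock::now();
    double cpu_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();

    float *d_load, *d_score;
    cudaMalloc(&d_load, n * sizeof(float));
    cudaMalloc(&d_score, n * sizeof(float));
    cudaMemcpy(d_load, load.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    cudaEvent_t beg, end;
    cudaEventCreate(&beg); cudaEventCreate(&end);
    cudaEventRecord(beg);
    score_gpu<<<(n + 255) / 256, 256>>>(d_load, d_score, n);
    cudaEventRecord(end);
    cudaEventSynchronize(end);
    float gpu_ms = 0.0f;
    cudaEventElapsedTime(&gpu_ms, beg, end);

    printf("CPU: %.3f ms (sample %.2f), GPU kernel: %.3f ms\n",
           cpu_ms, score[0], gpu_ms);
    cudaEventDestroy(beg); cudaEventDestroy(end);
    cudaFree(d_load); cudaFree(d_score);
    return 0;
}
```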

Fig. 6 shows a comparison of the lengths of time to implement clustering according

Conclusion

In this paper, an EAS for fast processing of IoT data in DSV-based storage services was proposed. The EAS integrated distributed desktop resources and implemented clustering and auto-scaling for provision of those resources to users through parallel processing, mapping the tasks to individual threads on the many cores of the GPU. Through this procedure, clustering and auto-scaling were processed faster than with the CPU. This procedure can be applied to various computing

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2014R1A1A2053564).

References (29)

  • Im-Yeong Lee, Sun-Ho Lee, A secure index management scheme for providing data sharing in cloud storage, J. Inf. Process. Syst. (2013).
  • Ganesh Kumar, Nitesh Shrivastava, A survey on cost effective multi-cloud storage in cloud computing, Int. J. Adv. Res. Comput. Eng. Technol. (2013).
  • Hiroyuki Takizawa et al., Hierarchical parallel processing of large scale data clustering on a PC cluster with GPU co-processing, J. Supercomput. (2006).
  • So-Yeon Kim, Hong-Chan Roh, Chi-Hyun Park, Sang-Hyun Park, Analysis of metadata server on clustered file systems, in: ...

Hyun-Woo Kim received the B.S. in Computer Engineering from Wonkwang University in 2012. He is currently enrolled in the combined M.S. and Ph.D. program in the Department of Multimedia Engineering, Dongguk University. His research interests include cloud computing, ubiquitous computing, virtualization, big data, and the Internet of Things.

Young-Sik Jeong is a professor in the Department of Multimedia Engineering at Dongguk University in Korea. His research interests include multimedia cloud computing, information security of cloud computing, mobile computing, the Internet of Things (IoT), and wireless sensor network applications. He received his B.S. degree in Mathematics and his M.S. and Ph.D. degrees in Computer Science and Engineering from Korea University in Seoul, Korea, in 1987, 1989, and 1993, respectively. He was a professor in the Department of Computer Engineering at Wonkwang University in Korea from 1993 to 2012, and was a visiting professor at Michigan State University and Wayne State University in 1997 and 2004, respectively. Since 2002, he has served as a member of the IEC/TC 100 Korean Technical Committee, as chairman of the IEC/TC 108 Korean Technical Committee, and as a member of the ISO/IEC JTC1 SC25 Korean Technical Committee. He is the Editor-in-Chief of the Journal of Information Processing Systems, an associate editor of the Journal of Supercomputing (JoS), the International Journal of Communication Systems (IJCS), and the Journal of Human-centric Computing (HCIS), and an editor of the Journal of Internet Technology (JIT). He has also served as a guest editor for international journals published by Springer, Elsevier, John Wiley, Oxford University Press, Hindawi, Emerald, Inderscience, and others. He is a member of the IEEE. http://ucloud-lab.dongguk.edu
