An intelligent cloud-based data processing broker for mobile e-health multimedia applications

https://doi.org/10.1016/j.future.2016.03.019

Highlights

  • We focus on the intelligent central cloud broker for single, mixed and multiple food object images.

  • Proposed scheduling and decision algorithms specifically for different food categories in the cloud.

  • Proposed Dynamic Cloud Allocation mechanism for evaluating and autonomously allocating/deallocating cloud resources.

  • Focus on the application of deep learning for food image recognition.

Abstract

Mobile e-health applications provide users and healthcare practitioners with an insightful way to check users'/patients' status, monitor their daily activities, and track their daily calorie intake. This paper proposes a cloud-based mobile e-health calorie system that can classify the food objects on a plate and further compute the overall calories of each food object with high accuracy. The novelty of our system is that we not only offload the heavy computational functions of the system to the cloud, but also employ an intelligent cloud-broker mechanism to strategically and efficiently utilize cloud instances and provide accurate results with improved response time. The broker uses a dynamic cloud allocation mechanism that decides on allocating and de-allocating cloud instances in real time to ensure that the average response time stays within a predefined threshold. In this paper, we further demonstrate various scenarios to explain the workflow of the cloud components of the proposed cloud broker model, including segmentation, deep learning, indexing of food images, decision-making algorithms, calorie computation, and scheduling management. The implementation results of our system show that the proposed cloud broker yields a 45% gain in the overall time taken to process the images in the cloud. With the dynamic cloud allocation mechanism, we were able to reduce the average time consumption by 77.21% when 60 images were processed in parallel.

Introduction

Mobile devices have become an indispensable gadget for many people, not just as a communication medium, but also as a platform for e-health applications that measure and process pulse rate, blood pressure, calorie intake, activity tracking, etc. The potential for mobile-technology-based health interventions to help populations is expanding in ways that were previously not possible [1]. These applications are in great demand, especially in countries where a shortage of qualified healthcare professionals is a challenge confronting healthcare providers. Mobile e-health applications provide users and healthcare practitioners with an insightful way to determine the users'/patients' status and their daily activities such as waking times, exercise, eating habits, sleeping patterns, etc. All of these personal activities can easily be tracked and processed with the use of mobile applications. However, there are several technical challenges that encumber the wide adoption of e-health mobile applications, as described in [2], [3]. Among those challenges is the limited processing power of the mobile device. Even with the enhanced capabilities of today's smartphones, such as more and improved mobile sensors, better processing capacity in the form of quad-core and octa-core chipsets, and more physical storage and RAM, data-intensive e-health applications can push the resources of a mobile device beyond its limits, making the application unfeasible or unacceptable, from a quality-of-experience perspective, to run on the mobile device. This is especially true for multimedia e-health applications that use images, video, computer vision, or other computationally intensive media. To overcome this limitation, certain heavy computational parts of the mobile application can be offloaded to the cloud, which has the necessary processing power and resources to provide satisfactory performance for the mobile application and its features. However, the decision to offload computationally intensive tasks to cloud instances is not trivial. Cloud resources must be allocated using an intelligent mechanism that dispatches the processing tasks in real time while ensuring the minimum processing time possible. Furthermore, the cost of allocating cloud resources increases as the number of instances increases; hence, the mechanism should be able to dynamically free resources as long as the average processing time stays within a predefined threshold.

In this paper, we propose an intelligent cloud-based data processing broker mechanism for mobile e-health multimedia applications, and we demonstrate its feasibility and performance by integrating it with our specific e-health mobile application, Eat Health Stay Health (EHSH). EHSH includes food image processing, deep learning for food recognition and classification, and calorie estimation. These data processing functions are computationally intensive and require resources that most current mobile devices cannot provide. To overcome this problem, offloading a portion of the application, or the application entirely, to the cloud takes the load away from the mobile device and improves the performance and resource consumption of the mobile device [4], [5]. However, in our mobile e-health application, the algorithm includes certain unique food characteristics which, if efficiently utilized, would improve the overall system performance, achieve scalability, and ensure optimal use of cloud resources. To incorporate food characteristics into the cloud model, it is important to understand why and how these characteristics can help improve the performance of our system. There are three main food categories that we incorporate into the cloud system: single, multiple, and mixed food objects. For single food objects, segmentation results in one food object, which is then processed by the cloud broker and followed by calorie computation; together these contribute to the overall time taken to process the single food image, $T_s$. Multiple food objects, on the other hand, can include two or more objects on the plate, so segmentation results in $X$ images, each of which must be classified and processed for calorie computation. Hence, the overall time to process multiple food objects is $T_m = \sum_{i=1}^{X} (t_{d,i} + t_{c,i})$, where $t_{d,i}$ is the time taken to classify food object $i$ using the deep learning algorithm and $t_{c,i}$ is the time taken to compute its calories. Since $X > 1$ for multiple food objects, $T_m > T_s$. In the case of mixed food objects, we propose a second-level ingredient algorithm in Section 4.2, which results in two levels of the deep learning algorithm. Hence, the overall time taken to process a mixed food object, $T_{mx}$, will be greater than $T_s$ due to the additional second level of processing. Taking the overall time into consideration, it is essential to include a scheduling and decision-making algorithm that places these food categories in independent queues and processes them accordingly.
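To make the timing model above concrete, the following Python sketch computes the per-category processing times from measured per-object classification and calorie-computation times. The type and function names are illustrative assumptions for exposition and are not part of the EHSH implementation.

```python
from dataclasses import dataclass

@dataclass
class FoodObject:
    t_classify: float   # t_d: deep-learning classification time (seconds)
    t_calorie: float    # t_c: calorie computation time (seconds)

def single_time(obj: FoodObject) -> float:
    """T_s: one segmented object, classified once, then calorie-computed."""
    return obj.t_classify + obj.t_calorie

def multiple_time(objects: list[FoodObject]) -> float:
    """T_m = sum over the X segmented objects of (t_d,i + t_c,i)."""
    return sum(o.t_classify + o.t_calorie for o in objects)

def mixed_time(obj: FoodObject, t_ingredient_pass: float) -> float:
    """T_mx: a mixed food adds a second-level ingredient classification pass."""
    return obj.t_classify + t_ingredient_pass + obj.t_calorie

# Example: three items on a plate take longer than a single item (T_m > T_s).
plate = [FoodObject(1.2, 0.3), FoodObject(1.1, 0.3), FoodObject(1.4, 0.4)]
assert multiple_time(plate) > single_time(plate[0])
```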

The intelligent decision-making algorithm of the proposed cloud-based broker decides how to distribute the processing of the aforementioned functions based on a number of factors, such as the number of active user connections, the broker's statistics (CPU utilization, memory utilization, queue status, etc.), and historical data such as the average time needed to run the deep learning algorithm on a single food object. The aim of the proposed broker is to significantly improve the scalability of the mobile application by introducing queue management, indexing of food images, allocation of cloud resources, and other components that smartly manage multiple food images at the same time and provide more accurate calorie estimation for the food image sent by a specific user.
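As an illustration of how such metrics could drive the offloading choice, the following sketch scores each candidate instance by an expected completion time and picks the least loaded one. The metric names, the load-penalty weighting, and the selection rule are assumptions made for exposition, not the broker's actual decision algorithm.

```python
from dataclasses import dataclass

@dataclass
class InstanceMetrics:
    cpu_util: float             # 0.0-1.0, sampled at the last polling interval
    mem_util: float             # 0.0-1.0
    queue_length: int           # images currently waiting on this instance
    avg_deep_learning_s: float  # historical mean time per single food object

def expected_wait(m: InstanceMetrics) -> float:
    """Rough expected completion time: queued work plus a utilization penalty."""
    load_penalty = 1.0 + m.cpu_util + m.mem_util
    return (m.queue_length + 1) * m.avg_deep_learning_s * load_penalty

def pick_instance(candidates: dict[str, InstanceMetrics]) -> str:
    """Offload to the instance with the lowest expected completion time."""
    return min(candidates, key=lambda name: expected_wait(candidates[name]))

# Usage: two instances, the less loaded one is selected.
metrics = {
    "gpu-1": InstanceMetrics(0.85, 0.70, 4, 1.3),
    "gpu-2": InstanceMetrics(0.30, 0.40, 1, 1.3),
}
print(pick_instance(metrics))  # -> "gpu-2"
```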

The main contributions of this paper are as follows:

  • We propose an efficient cloud-based broker model that incorporates an intelligent mechanism to decide where and when to offload the processing of food images to cloud instances. This is rather significant for our mobile e-health application given the complex nature of food images. The processing of these images involves computationally intensive tasks such as segmentation algorithms and deep learning for food object recognition. The broker uses decision-making algorithms that collect performance metrics at regular intervals, allowing the broker to choose the cloud instance to which the computation tasks are offloaded.

  • We propose an intelligent scheduling mechanism which allows the broker to provide optimal utilization of the cloud resources and improve the overall response time of processing the food images. The implementation results of our system show that the proposed mechanism results in a 45% gain in the overall time taken to process the images in the cloud.

  • We propose a dynamic cloud allocation mechanism to handle parallel image processing requests from different users. With this mechanism, the system is able to automatically add cloud instances in real time by gauging the system resources, namely the number of available cloud instances, the queue length, and the total number of incoming parallel food processing requests, so as to keep the response time within the predefined threshold. Also, by deallocating cloud instances when they are not in use, the system avoids under-utilization of cloud resources. This ensures that the system does not add resources exponentially, but rather gauges the requirement and accordingly decides to add or remove cloud resources (a minimal sketch of this allocation logic is given after this list). This is particularly important and unique because it ensures scalability when handling multiple mobile device application requests and different food category recognition requests. The proposed dynamic allocation mechanism, along with the queue management strategy, smartly manages multiple food images at the same time and maintains the consistency of the system. Our results show that we were able to reduce the average time consumption by 77.21% when 60 images were processed in parallel.

  • We implemented the system and tested scalability scenarios in terms of the number of users connecting to the cloud. Our system shows consistent calorie measurement results even when the number of food objects increases. From the user's perspective, it is important to provide a single correct result; otherwise, an incorrect ordering of the recognized food objects or of the calorie measurement results will adversely impact user acceptance of the application.
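The following is a minimal sketch of the kind of allocate/deallocate decision made by the dynamic cloud allocation mechanism described above. The threshold value, instance limit, and scaling step are assumed parameters chosen for illustration, not those of our deployment.

```python
def adjust_instances(active_instances: int,
                     queue_length: int,
                     incoming_requests: int,
                     avg_response_s: float,
                     threshold_s: float = 5.0,   # assumed predefined response-time threshold
                     max_instances: int = 10) -> int:
    """Return the new number of cloud instances to keep allocated.

    Scale out by one instance while the average response time exceeds the
    threshold and work is pending; scale in when the system is idle, so
    cloud resources are not left under-utilized.
    """
    pending = queue_length + incoming_requests
    if avg_response_s > threshold_s and pending > 0 and active_instances < max_instances:
        return active_instances + 1          # allocate one more instance
    if pending == 0 and active_instances > 1:
        return active_instances - 1          # deallocate an idle instance
    return active_instances                  # otherwise keep the pool as-is

# Usage: a backlog with slow responses triggers one additional instance.
print(adjust_instances(active_instances=2, queue_length=12,
                       incoming_requests=6, avg_response_s=8.4))  # -> 3
```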

The rest of the paper is organized as follows: Section  2 provides background information about our application EHSH followed by the related work in Section  3. In Section  4, we present the design details and the architecture of the proposed mechanism. In Section  5, we discuss the proposed cloud broker and its various components, followed by Implementation Results in Section  6 and lastly Section  7, which includes the conclusion and future work.

Section snippets

Background on EHSH

EHSH is able to recognize a food object, the image of which is captured by the user’s smartphone, and can estimate the total number of calories in the food object. For recognizing the food portions in the meal, we proposed two approaches in  [6], [7], [8]. In our first approach  [6], we implemented a Support Vector Machine (SVM) based parallel classifier, which was implemented in parallel in multiple cloud instances. In our second approach, we used deep learning  [8], which resulted in higher

Related work

Recently, because virtualization techniques enable cloud computing environments to remotely run services for mobile devices, computation offloading has attracted significant interest from researchers  [9]. Computation offloading extends a mobile device’s capabilities when running intensive computational services such as e-health apps. However, seamless computation offloading from mobile devices to the cloud is not trivial and requires optimal offloading and scheduling decisions. These decisions

EHSH design

To understand our cloud broker, it is imperative to first understand how our food classification and calorie measurement system works. In this section, we give a brief explanation of how EHSH detects food items and measures their calories.

Proposed cloud broker

In this section, we describe the proposed cloud-based broker mechanism for handling single, multiple, and mixed foods. We specifically describe how the broker handles multiple and concurrent requests from a large number of users. The decision making algorithm of the broker is also described in this section.

Implementation results

In this section, we report on the experimental setup and results. As part of the experimental setup, different cloud instances were configured for training and testing of food images. We will also explain the simulations performed on the central cloud broker and the setup we used in order to test parallel processing of the intelligent cloud broker.

Training setup

For GPU processing, we used the g2.2xlarge Amazon EC2 cloud instance, on which we installed the CUDA SDK version 6.5 on top of Ubuntu

Conclusions and future work

In this paper, we have introduced an intelligent cloud broker for EHSH, supporting different food categories of Single Food Object, Multiple Food Object and Mixed Food Object. We have further introduced a 2 level ingredient testing approach for the mixed food category. We then introduced two decision algorithms (Decision Algorithm 1 and Decision Algorithm 2) that enabled the cloud broker to make decisions of whether or not to offload to other cloud instances and if so to which cloud instance in


References (37)

  • Karthik Kumar et al., A survey of computation offloading for mobile systems, Mobile Netw. Appl. (2013)
  • S. Deng et al., Computation offloading for service workflow in mobile cloud computing, IEEE Trans. Parallel Distrib. Syst. (2015)
  • H. Wu, Q. Wang, K. Wolter, Mobile healthcare systems with multi-cloud offloading, in: Proc. of the 14th IEEE...
  • R. Kemp, Cuckoo: a computation offloading framework for smartphones
  • E. Cuervo, MAUI: making smartphones last longer with code offload
  • M. Satyanarayanan et al., The case for VM-based cloudlets in mobile computing, IEEE Pervasive Comput. (2009)
  • H.B.W. Heinzelman, C.A. Janssen, J. Shi, Mobile computing - a green computing resource, in: Wireless Communications and...
  • E.E. Marinelli, Hyrax: Cloud computing on mobile devices using MapReduce (2009)

    Sri Vijay Bharat Peddi is a researcher at the School of Electrical Engineering and Computer Science of the University of Ottawa, Canada. He completed his Master of Applied Science in Electrical and Computer Engineering at the University of Ottawa and is currently working in the Analytics division of IBM Canada. He has expertise in parallel computing, distributed cloud computing, mobile cloud computing, and image recognition. He previously worked as a database administrator and has several years of experience managing several production and development Oracle and Sybase databases.

    Pallavi Kuhad is a researcher at the School of Electrical Engineering and Computer Science of the University of Ottawa, Canada. She has completed her Master of Applied Science in Electrical and Computer Engineering from University of Ottawa and is currently working in the Scientific Research & Experimental Development Team at Ernst & Young (EY). She has several publications to her name, especially in the area of food recognition, deep learning, auto-calibration approaches for calorie measurement in food images and computer vision algorithms.

    Abdulsalam Yassine is a researcher at the School of Electrical Engineering and Computer Science of the University of Ottawa, Canada. He has extensive experience in the telecom and IT industry. He is able to leverage his deep knowledge in telecommunication and information technology systems, project management, and service solution architectures. In addition to having high qualification in engineering and project management, he is rather skilled at planning and managing IT/Telecom projects, including both development and validation, building and motivating teams, scheduling, prioritizing, coordinating, and problem solving.

    Parisa Pouladzadeh received her M.Sc. from University of Ottawa in 2012, where her thesis was nominated for a best thesis award. Currently she is a Ph.D. student in the School of Electrical Engineering and Computer Science at the University of Ottawa, working on food recognition systems. Her other research interests include image processing, artificial intelligence and classification.

    Shervin Shirmohammadi received his Ph.D. in Electrical Engineering from the University of Ottawa, Canada, where he is currently a Professor with the School of Electrical Engineering and Computer Science. He is Director of the Distributed and Collaborative Virtual Environment Research Laboratory, and an affiliate member with the Multimedia Communications Research Laboratory, doing research in multimedia systems and networks, specifically video systems, gaming systems, and multimedia-assisted healthcare systems. The results of his research, funded by more than $13 million from public and private sectors, have led to 300 publications, over 65 researchers trained at the postdoctoral, PhD, and Master’s levels, over 20 patents and technology transfers to the private sector, and a number of awards. He is the Associate Editor-in-Chief of IEEE Transactions on Instrumentation and Measurement, Senior Associate Editor of ACM Transactions on Multimedia Computing, Communications, and Applications, an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology, and was an Associate Editor of Springer’s Journal of Multimedia Tools and Applications from 2004 to 2012. Dr. Shirmohammadi is a University of Ottawa Gold Medalist, a licensed Professional Engineer in Ontario, a Senior Member of IEEE, and a Lifetime Professional Member of the ACM.

    Ali Asghar Nazari Shirehjini received the Ph.D. degree in computer science from the Technische Universitat Darmstadt, Darmstadt, Germany, in 2008. He is currently assistant professor at the Sharif University of Technology (SUT). He is a member of the editorial board of the Multimedia Tools and Applications journal, and the recently launched EIT endorsed Transaction of Collaborative Computing. Prior to joining the SUT, he was a research group leader at the KIT, Karlsruhe, Germany. From 2011 to 2012 he was co-director at the Distributed Artificial Intelligence Lab (DAI-Labor), TU Berlin, Germany. Between 2009 and 2011, he was one of the four Vision 2010 Postdoctoral Fellows at the University of Ottawa, Ottawa, ON, Canada. Between 2001 and 2008, he was with the Fraunhofer Institute for Computer Graphics and GMD-IPSI, Darmstadt. His research interests include Ambient Intelligence, Internet of Things, human factors, intelligent agents and multi-agent systems, pervasive and mobile games, game-based rehabilitation, massively multiplayer online gaming, and electronic commerce.
