
1 Introduction

Forrester Research estimates that the cloud computing market could reach $236 billion by 2020 [1]. This rapid adoption of cloud computing is likely to have a dramatic impact on how organizations develop and operate enterprise applications. Public cloud platforms (e.g. AWS [2], Microsoft Azure [3], etc.) offer highly elastic and practically unlimited compute power and storage capacity, allowing flexible acquisition of IT (Information Technology) resources in the form of cloud services and avoiding many of the limitations of on-premises IT solutions. This new technology environment is creating opportunities for innovative solutions at a fraction of the cost of traditional enterprise applications. However, to take full advantage of these developments, organizations involved in the production of enterprise applications must adopt a suitable enterprise architecture and application development frameworks. The architecture should reduce the complexity of application development and maintenance and facilitate effective reuse of application components and infrastructure services. These requirements demand a revision of existing architectural principles and application development methods. Enterprise IT architecture needs to reflect current technology trends and provide framework services that support cloud deployment of applications, mobile computing and integration with IoT devices. This allows application developers to concentrate on the functionality that directly supports business processes and adds value for end users. Framework services should enable single sign-on and user authentication regardless of the physical location of the user and the device used, and applications should run on different end user devices such as mobile phones, tablets, notebooks, etc. without the need for excessive modifications. Reusing standard framework services across all projects saves programming effort and improves the reliability of the applications.

Unlike traditional enterprise applications that store data on local on-premises servers, most mobile applications store data and application code in the cloud, allowing applications to be shared by a very large population of users. Furthermore, given the requirements of modern business environments, the architecture needs to facilitate fast incremental development of application components and ensure rapid cloud deployment. The need for continuous delivery and monitoring of application components impacts the structure and skills profile of IT teams, favoring small cross-functional teams and leading to the convergence of development and operations (DevOps).

There is now increasing empirical evidence that to effectively address such requirements, the architecture needs to support microservices and container-based virtualization. A major advantage of using containers to implement microservices is that the microservices architecture hides the technical details of the underlying hardware and operating systems, allowing developers to focus on managing applications using application-level APIs (Application Programming Interfaces). It also allows hardware replacement and operating system upgrades without impacting existing applications. Finally, as the unit of deployment is the application, monitoring information (e.g. metrics such as CPU and memory usage) is tied to applications rather than machines, which dramatically improves application monitoring and introspection [4]. It can also be argued that using containers for virtualization improves isolation in multi-tenant environments [5]. However, it is also becoming clear that the management of large-scale container-based environments has its own challenges and requires automation of application deployment, auto-scaling and predictability of resource usage. At the same time, there is a requirement for portability across different public and private clouds. While the use of public cloud platforms is economically compelling, an important function of the architecture is to ensure independence from individual cloud providers, avoiding provider lock-in.

Platforms and frameworks that support the development and deployment of container-based applications have become an active area of research and development, with a number of recently initiated open source projects, including Cloud Foundry [6], OpenShift [7], OpenStack [8] and Kubernetes [9]. These projects share many common concepts and in some cases common technologies. A key idea of such open source platforms is to abstract the complexity of the underlying cloud infrastructure and present a well-designed API that simplifies the management of container-based cloud environments. An important requirement that is often not fully addressed by these frameworks concerns the management of multi-tenancy. Management of multiple tenants across multiple public cloud platforms presents a number of significant challenges. Each tenant has to be allocated separate resources that are separately monitored and billed. Moreover, access to tenants’ resources must be controlled so that only authorized users are able to deploy applications on specific resources (i.e. physical or virtual servers) and access these applications.

In this paper, we describe the Unicorn Universe Cloud Framework (uuCloud), a framework designed to manage multi-tenant cloud environments. Unicorn is a group of companies based in Prague, Czech Republic, that specializes in providing information systems solutions for private and government organizations across a range of industry domains including banking, insurance, energy and utilities, telecommunications, manufacturing and trade (https://unicorn.com/). uuCloud is an integral part of the Unicorn Application Framework (UAF), a framework developed by Unicorn that comprises a reference architecture and fully documented methods and tools that support the collaboration of teams of developers during the entire development life-cycle of enterprise applications. A key UAF architectural objective is to support various types of mobile and IoT devices and to facilitate deployment of containerized cloud-based applications using standard framework services [10].

In the next section (Sect. 2) we review related literature, focusing on frameworks for the management of container-based cloud environments. Section 3 describes the main components of the uuCloud framework, and Sect. 4 presents our conclusions and directions for further work.

We note that all diagrams in this paper are drawn using uuBML (Unicorn Universe Business Modeling Language: https://unicornuniverse.eu/en/uubml.html), a UML-like graphical notation used throughout Unicorn organizations. Figure 1 illustrates the basic relationships and their graphical representation. The prefix uu abbreviates Unicorn Universe and is a naming convention used throughout the paper to indicate UAF objects.

Fig. 1. uuBML relationships

2 Related Work

Claus Pahl [11, 12] reviews container virtualization principles and investigates the relevance of container technology for PaaS clouds. The author argues that while VMs are ultimately the medium to provision a PaaS platform and application components at the infrastructure layer, containers appear to be more suitable for application packaging and management in PaaS clouds. The paper compares different container models and concludes that container technology has a huge potential to substantially advance PaaS technology, but that significant improvements are required to deal with data and network management aspects of container-based architectures.

Brewer [13] in his keynote “Kubernetes: The Path to Cloud Native” argues that we are in the middle of a great transition toward cloud native applications characterized by highly available, unlimited “ethereal” cloud resources. This environment consists of co-designed, but independent, microservices and APIs, abstracting away from machines and operating systems. Microservices resemble objects as they encapsulate state and interact via well-defined APIs. This allows independent evolution and scaling of individual microservices. The Kubernetes project, initiated by Google in 2014 as an open source cluster manager for Docker containers, has its origins in the earlier Google container management systems Borg [14] and Omega [15]. The Kubernetes project is hosted by the Cloud Native Computing Foundation (CNCF) [16], whose mission is “to create and drive the adoption of a new computing paradigm that is optimized for modern distributed systems environments capable of scaling to tens of thousands of self-healing multi-tenant nodes”. The objective is to facilitate cloud native systems that run applications and processes in isolated units of application deployment (i.e. software containers). Containers implement microservices which are dynamically managed to maximize resource utilization and minimize the cost associated with maintenance and operations. CNCF promotes well-defined APIs as the main mechanism for ensuring extensibility and portability. The state of objects in Kubernetes is accessed exclusively through a domain-specific REST API that supports versioning, validation, authentication and authorization for a diverse range of clients [4].

A basic Kubernetes building block is a Pod - a REST object that encapsulates a set of logically connected application containers together with storage resources (Volumes) and a unique IP address. Pods constitute a unit of deployment (and a unit of failure) and are deployed to Nodes (physical or logical machines). The lifetime of a Volume is the same as the lifetime of the enclosing Pod, allowing individual containers to be restarted without loss of data; restarting the Pod, however, results in the loss of Volume data. Pods are externalized as Services; a Kubernetes Service is an abstraction that defines a logical set of Pods and a policy for accessing them. A Replication Controller is used to create replica Pods to match the demand on the application and provide auto-scaling. Kubernetes uses Namespaces to avoid object name conflicts and to partition resources allocated to different groups of users. Although Namespaces can be used to enable multi-tenancy in Kubernetes, this approach appears to have limitations. The application of the Docker and Kubernetes container architecture to multi-tenant SaaS (Software as a Service) applications has been investigated and assessed using SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis and contrasted with developing multi-tenant SaaS applications using middleware services [17]. The authors conclude that more research is needed to understand the true potential and risks associated with container orchestration platforms such as Kubernetes. Other authors have considered multi-tenancy from the viewpoint of security challenges and proposed a model for identifying suspicious tenant activities [18].
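To make these concepts concrete, the following Python fragment is a minimal sketch using the official kubernetes client; the namespace, image and object names are purely illustrative assumptions. It creates a Namespace for a hypothetical tenant, deploys a single-container Pod into it, and exposes the Pod behind a Service that selects Pods by label:

# Minimal sketch: namespace-scoped deployment with the official kubernetes client.
from kubernetes import client, config

config.load_kube_config()                      # use the local kubeconfig
core = client.CoreV1Api()

tenant_ns = "tenant-a"                         # hypothetical tenant namespace

# Create a Namespace that isolates the tenant's objects by name scope.
core.create_namespace(client.V1Namespace(
    metadata=client.V1ObjectMeta(name=tenant_ns)))

# Deploy a single-container Pod into the tenant's Namespace.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-app", labels={"app": "demo"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="demo", image="nginx:1.25")]))
core.create_namespaced_pod(namespace=tenant_ns, body=pod)

# Expose the Pod(s) behind a Service that selects on the "app" label.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-svc"),
    spec=client.V1ServiceSpec(
        selector={"app": "demo"},
        ports=[client.V1ServicePort(port=80, target_port=80)]))
core.create_namespaced_service(namespace=tenant_ns, body=svc)

In practice a Replication Controller (or Deployment) rather than a bare Pod would be used, so that the number of replicas can be scaled to match demand.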

Other efforts in this area include OpenStack [19] - a cloud operating system designed to control large pools of compute, storage, and networking resources. Administrators manage resources using a dashboard, and users can provision resources through a web interface. In [20], the authors argue that many existing frameworks have a high degree of centralization and do not tolerate system component failures; they present the design, implementation and evaluation of a scalable and autonomic virtual machine (VM) management framework called Snooze.

Li et al. [21] describe a REST service framework based on the concept of a Resource-Oriented Network (RON). The authors discuss the advantages of container-based virtualization over VMs (Virtual Machines), which include lower CPU and memory consumption, faster reboot time, and significantly higher deployment density (6–12 times), requiring fewer physical servers for the same workload and resulting in significant savings in capital expenditure. They identify a trend towards RaaS (Resource-as-a-Service) that allows consumers to rent fine-grained resources such as memory, CPU and storage on demand for short periods of time. The authors point out advances in Network Functions Virtualization (NFV) that make it possible to control network elements, and advances in computer architecture that make it possible to disaggregate CPU cores from the internal memory. The authors argue that a new service abstraction layer is required to control and monitor fine-grained resources and to overcome the heterogeneity of the underlying Linux resource control models. They note that the RaaS cloud brings new challenges, in particular the ability to efficiently control a very large number of fine-grained resources that are rented for short periods of time. According to Ben-Yehuda et al. [22], the trend toward increasingly finer resource granularity is set to continue, resulting in increased flexibility and efficiency of cloud-based solutions. Resources such as compute, memory, I/O, storage, etc. will be charged for in dynamically changing amounts, not in fixed bundles. The authors draw an analogy between cloud providers and phone companies that have progressed from billing landlines per several minutes to billing cellphones by the minute, or even per second. They conclude that the RaaS cloud requires new mechanisms for allocating, metering, charging for, reclaiming, and redistributing CPU, memory, and I/O resources among multiple clients every few seconds.

In another publication [23] the authors explore the potential of vertical scalability and propose a system for autonomic vertical elasticity of Docker containers (ElasticDocker). In response to increases or decreases in workload, ElasticDocker scales the CPU and memory assigned to containers up and down by modifying resource limits directly in the Linux control groups (cgroups) associated with Docker containers. When the host resource limits are reached, the container is migrated to another host (i.e. the container image is transferred from a source to a target host). Experimental results show the improved efficiency of the live migration technique, with a gain of 37.63% when compared to Kubernetes. The authors argue that vertical scalability has many benefits when compared to horizontal scalability, including fine-grained scaling, avoiding the need for additional resources such as load balancers or replicated instances, and avoiding delays caused by starting up new instances. Furthermore, horizontal scalability is only applicable to applications that can be replicated and decomposed into independent components. The experimental verification was performed on a Kubernetes platform running CentOS Linux 7.2 on 4 nodes compared to running an identical workload on ElasticDocker, showing an improvement in the average total execution time of almost 40%.
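The cgroup-based vertical scaling that ElasticDocker automates can be illustrated with a short Python sketch using the Docker SDK; the container name, utilization threshold and scaling factor below are illustrative assumptions, not parameters of ElasticDocker itself:

# Illustrative vertical scaling of a running container via its cgroup limits.
import docker

docker_client = docker.from_env()
container = docker_client.containers.get("demo-app")   # hypothetical container name

stats = container.stats(stream=False)                   # one-off resource usage sample
used = stats["memory_stats"]["usage"]
limit = stats["memory_stats"]["limit"]

# Scale memory up by 50% when utilization exceeds 90% of the current limit;
# Docker applies the new limits to the container's cgroup without a restart.
if limit and used / limit > 0.9:
    new_limit = int(limit * 1.5)
    container.update(mem_limit=new_limit, memswap_limit=new_limit)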

While Kubernetes appears to be gaining momentum as a platform for the management of cloud resources, with support for major public cloud platforms including Google Cloud Platform, Microsoft Azure, and most recently Amazon AWS, there are a number of other projects that aim to address the need for a universal framework for the development and deployment of cloud applications. This rapidly evolving area is of research interest to both academia and industry practitioners, but there is currently a lack of agreement about a standard application framework designed specifically for cloud development and deployment. This is particularly true in the context of multi-tenant cloud applications, where the framework needs to support the management of resources allocated to a large number of individual tenants, potentially across multiple public cloud platforms. Moreover, some proposals lack empirical verification using large-scale real-life applications. Further complicating the situation is the rapid rate of innovation in this area, characterized by the current trend toward increasingly finer resource granularity with a corresponding impact on the complexity of cloud resource management frameworks.

3 Unicorn Universe Cloud (uuCloud)

The uuCloud framework facilitates the provisioning of elastic cloud services using containers and virtual servers, and consists of two basic components: the uuCloud Operation Registry (uuOR) - a database that maintains active information about uuCloud objects, and the uuCloud Control Center (uuC3) - a tool that is used to automate the deployment and operation of container-based microservices. To ensure portability and to reduce overheads, uuCloud uses Docker [24] container-based virtualization. Docker containers can be deployed either to a public cloud infrastructure (e.g. AWS or Microsoft Azure) or to a private (on-premises) infrastructure (e.g. Plus4U – the Unicorn Universe cloud infrastructure). uuCloud supports the deployment of virtualized Unicorn Universe Applications (uuApps). A uuApp implements a cohesive set of functions and is in general composed of sub-applications (uuSubApps); each uuSubApp is an independent unit of functionality that implements a specific business use case (e.g. booking a visit to a doctor’s surgery). We made an architectural decision to implement each uuSubApp as a containerized microservice using a Docker container.

Access to sub-applications is controlled using identities that are derived from user Roles (typically reflecting the organizational structure) and are assigned to users and system commands. The uuCloud security model is based on a combination of Application Profiles (collections of application functions that the application executes) and Application Workspaces (collections of application objects). Access to an Application Workspace is granted according to the identities associated with the corresponding Application Profile.
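The following Python sketch illustrates this access model in simplified form; the class and attribute names are our own illustrative assumptions rather than the actual UAF implementation:

# Simplified model of profile-based workspace access.
from dataclasses import dataclass
from typing import Set

@dataclass
class AppProfile:
    name: str
    functions: Set[str]      # application functions covered by the profile
    identities: Set[str]     # identities (derived from Roles) granted the profile

@dataclass
class AppWorkspace:
    name: str
    profile: AppProfile      # workspace access follows the profile's identities

def can_execute(identity: str, function: str, workspace: AppWorkspace) -> bool:
    # Grant access only if the identity holds the workspace's profile
    # and the requested function belongs to that profile.
    p = workspace.profile
    return identity in p.identities and function in p.functions

# Example: a "Doctor" identity invoking a booking function in a surgery workspace.
profile = AppProfile("SurgeryStaff", {"bookVisit", "listVisits"}, {"Doctor", "Receptionist"})
workspace = AppWorkspace("surgery-workspace", profile)
assert can_execute("Doctor", "bookVisit", workspace)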

In the following sections, we describe the uuCloud Operation Registry (Sect. 3.1) and the uuCloud Control Center (Sect. 3.2). In Sect. 3.3 we describe the process of application deployment in the context of a multi-tenant cloud environment.

3.1 uuCloud Operation Registry

The uuCloud Operation Registry (uuOR) is a component of the uuCloud environment that maintains active information about uuCloud objects (i.e. tenants, resource pools, regions, resource groups, hosts, nodes, etc.). Table 1 contains a list of uuCloud objects and their brief descriptions, and Fig. 2 shows the attributes of uuCloud objects and the relationships between these objects.

Table 1. uuCloud objects
Fig. 2. Structure of the uuCloud Operation Registry

Each Host is allocated to a single logical unit called a Resource Group. Resource Groups can be allocated additional Resources (e.g. MongoDB [25], gateways, etc.); Resource attributes are not indicated on the diagram as different attributes apply to different types of resources. Each Resource Group belongs to a Region – a separate IT infrastructure from a single provider with co-located compute, storage and network resources connected with low latency and high bandwidth (e.g. Azure North (EU-N-AZ)). Regions are grouped into a (hybrid) Cloud that can be composed of multiple public (e.g. AWS, Microsoft Azure) and private clouds (e.g. Plus4U).

uuCloud supports multi-tenant operation; each Tenant typically represents a separate organization (e.g. a doctor’s surgery). Tenants are assigned resources from Resource Pools using the mechanism of a Resource Lease that specifies usage constraints such as the contracted capacity, free capacity, etc. A Resource Pool defines the maximum amount of resources (i.e. number of vCPUs, RAM size, size of storage, etc.) available for the operation of a specific Tenant. A Tenant can be allocated several Resource Pools, for example separate Resource Pools for production and development, preventing application development from consuming production resources. Each Tenant consumes its own resources and is monitored and billed separately. Applications can be shared among multiple Tenants using the Share command (see Table 2), with the Tenant in whose Resource Pool the application is deployed being responsible for the consumed resources.
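The relationships described above can be summarized in a condensed, hypothetical form as Python dataclasses; the attribute names are illustrative assumptions and omit many of the attributes recorded in the actual Operation Registry:

# Condensed, illustrative rendering of the Operation Registry relationships.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Host:
    name: str
    vcpus: int
    ram_gb: float
    storage_gb: float

@dataclass
class ResourceGroup:
    name: str
    hosts: List[Host] = field(default_factory=list)    # each Host belongs to exactly one group

@dataclass
class Region:
    code: str                                           # e.g. "EU-N-AZ"
    provider: str                                       # e.g. "Azure", "AWS", "Plus4U"
    resource_groups: List[ResourceGroup] = field(default_factory=list)

@dataclass
class ResourcePool:
    name: str                                           # e.g. "production", "development"
    max_vcpus: int
    max_ram_gb: float
    max_storage_gb: float

@dataclass
class ResourceLease:
    pool: ResourcePool
    contracted_capacity: float
    free_capacity: float

@dataclass
class Tenant:
    name: str                                           # typically a single organization
    leases: List[ResourceLease] = field(default_factory=list)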

Table 2. uuC3 REST API

3.2 Cloud Control Center

The Operation Registry described in the previous section stores and manages uuCloud metadata and is used by the Cloud Control Center (uuC3) to automate the management and operation of container-based microservices deployed on public or private cloud infrastructure. The Operation Registry database is managed using REST commands that include commands for creating and deleting repository objects and commands for generating various repository reports. uuC3 defines an API that abstracts the complexity of the proprietary cloud infrastructures and supports node deployment and host management operations listed in Table 2.
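As an illustration of how such a REST API might be driven by an operator script, the following Python sketch issues registry and deployment commands over HTTP; the base URL, endpoint paths, payloads and credentials are hypothetical placeholders and do not reproduce the actual uuC3 command set listed in Table 2:

# Hypothetical REST interaction with the Operation Registry / uuC3.
import requests

BASE = "https://uuc3.example.com/api"                  # hypothetical uuC3 endpoint
session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"    # placeholder credential

# Create a registry object (e.g. a new Host record).
session.post(f"{BASE}/host/create",
             json={"name": "host-01", "resourceGroup": "rg-eu-n-az"}).raise_for_status()

# Generate a registry report (e.g. current capacity per Resource Pool).
report = session.get(f"{BASE}/resourcePool/report").json()

# Deploy a node to the cloud (see Sect. 3.3 for the full parameter set).
session.post(f"{BASE}/node/deploy",
             json={"appBoxUri": "uu:appbox:demo", "poolUri": "uu:pool:prod"}).raise_for_status()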

3.3 Application Deployment Process

The first step in the application deployment process is to create a Node Image. As illustrated in Fig. 3, a Node Image is created from a uuSubApp and a Runtime Stack that contains all the related archives and components needed to run the application (i.e. system tools, system libraries, etc.). During deployment, the uuC3 application deployment command (Deploy) reads the corresponding Deployment Descriptor (a JSON file that contains parameters that define the application runtime environment). The resulting Node Image constitutes a unit of deployment. Node Image metadata are recorded in the uuOR and the Node Image is published to the private Docker Image Registry.
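The image-building step can be illustrated with a short Python sketch using the Docker SDK; the build context path, image tag and registry address are illustrative assumptions, not the actual uuC3 implementation:

# Illustrative build-and-publish of a Node Image with the Docker SDK.
import docker

docker_client = docker.from_env()

# Build the Node Image from a build context containing the sub-application
# and its runtime stack (system tools, libraries, etc.).
repository = "registry.example.com/uuapp/demo-subapp"   # hypothetical private registry
image, _build_logs = docker_client.images.build(path="./demo-subapp", tag=f"{repository}:1.0.0")

# Publish the image to the private Docker Image Registry; its metadata would
# then be recorded in the Operation Registry (uuOR).
docker_client.images.push(repository, tag="1.0.0")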

Fig. 3. Process of creating a Node Image

Fig. 4. Deployment descriptor

The JSON code fragment in Fig. 4 shows the structure of the deployment descriptor, and Table 3 below describes the Deployment Descriptor attributes.

Table 3. Deployment descriptor attributes

Figure 5 shows the relationships between the objects used during the deployment of containerized applications. A Node (i.e. a Docker container) is a unit of deployment with characteristics that include virtual CPU (vCPU) count, RAM size and the size of ephemeral storage. A Node can be deployed to an available Host (a physical or virtual server with computational resources, i.e. CPU, RAM, and storage). Nodes are classified according to NodeSize (Small: 1x vCPU, 1 GB of RAM, 0.5 GB of disk storage; Medium: 1x vCPU, 1 GB of RAM, 1 GB of disk storage; Large: 2x vCPU, 2 GB of RAM, 1 GB of disk storage). uuCloud Nodes are stateless and use external resources (e.g. MongoDB) to implement persistent storage. Nodes are further classified as synchronous or asynchronous depending on the behavior of the application that the node virtualizes. Nodes are grouped into NodeSets - sets of Nodes with identical functionality (i.e. nodes that virtualize the same applications). NodeSets support elasticity by increasing or decreasing the number of available nodes. At runtime, a gateway (uuGateway) forwards client requests to a router that passes each request to a load balancer. The load balancer selects a Node from a NodeSet of functionally identical nodes, optimizing the use of the hardware infrastructure and providing a failover capability (if a Node is not responsive, the request is redirected to an alternative Node).
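A simplified Python sketch of the node sizing and failover behavior described above follows; the NodeSize values come from the text, while the health flag and round-robin selection are illustrative assumptions standing in for the actual uuGateway and load balancer logic:

# Node sizes as defined for uuCloud, and a toy NodeSet with failover selection.
from dataclasses import dataclass, field
from typing import List, Optional

NODE_SIZES = {
    "Small":  {"vcpu": 1, "ram_gb": 1, "disk_gb": 0.5},
    "Medium": {"vcpu": 1, "ram_gb": 1, "disk_gb": 1.0},
    "Large":  {"vcpu": 2, "ram_gb": 2, "disk_gb": 1.0},
}

@dataclass
class Node:
    url: str
    healthy: bool = True

@dataclass
class NodeSet:
    # Nodes that virtualize the same application; scaling adds or removes Nodes.
    nodes: List[Node] = field(default_factory=list)
    _cursor: int = 0

    def pick(self) -> Optional[Node]:
        # Select the next healthy Node; unresponsive Nodes are skipped (failover).
        for _ in range(len(self.nodes)):
            node = self.nodes[self._cursor % len(self.nodes)]
            self._cursor += 1
            if node.healthy:
                return node
        return None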

Fig. 5. Application deployment scenario

The deployment of the application is performed using the uuC3 Deploy command. The Deploy command searches for an existing application deployment object in the uuCloud Operation Registry, identified by a composite key (ResourcePool URI, ASID, uuAppBox URI). If the deployment already exists, it is updated; otherwise a new application deployment is created for each instance of a particular sub-application (uuSubApp).
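Schematically, this lookup-and-update behavior of the Deploy command can be expressed as an upsert keyed by the composite key; the registry interface used below is hypothetical:

# Schematic upsert of an application deployment record in the Operation Registry.
def upsert_deployment(registry, pool_uri: str, asid: str, app_box_uri: str, spec: dict) -> dict:
    key = (pool_uri, asid, app_box_uri)            # composite key in the Operation Registry
    deployment = registry.find_deployment(key)     # hypothetical registry lookup
    if deployment is not None:
        deployment.update(spec)                    # existing deployment is updated
    else:
        deployment = registry.create_deployment(key, spec)   # one deployment per uuSubApp instance
    return deployment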

The input parameters of the uuC3 Deploy command in the code fragment illustrated in Fig. 6 include: DEPLOY_POOL_URI (URI of the Resource Pool that the application is to be deployed into), appBoxUri (URI of the application distribution package), ASID (Application Server Identifier) – an application identifier generated by uuCloud to uniquely identify each sub-application, uuEEs (list of identities that the application uses), TID (Tenant Identifier of the Tenant whose resources the application uses), uuSubAppDataStoreMap (mapping between the logical SubApplication Data Store and the physical Data Store identifier values), privilegedUserMap (mapping between logical identity names and the actual user identities that are associated with access rights), and uuSubAppInstanceSysOwner (identity of the system user who is authorized to deploy the application).
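For illustration, the Deploy parameters listed above can be assembled into the following Python dictionary; the parameter names come from the text, while all values are placeholder assumptions:

# Illustrative Deploy parameter payload (values are placeholders only).
deploy_params = {
    "DEPLOY_POOL_URI": "uu:resource-pool:production",        # target Resource Pool
    "appBoxUri": "uu:appbox:demo-subapp-1.0.0",              # application distribution package
    "ASID": "asid-0001",                                      # generated sub-application identifier
    "uuEEs": ["uu-ee-scheduler", "uu-ee-worker"],             # identities the application uses
    "TID": "tenant-surgery",                                  # Tenant whose resources are consumed
    "uuSubAppDataStoreMap": {"primary": "mongodb-prod-01"},   # logical -> physical data store
    "privilegedUserMap": {"admin": "user-4711"},              # logical identity -> actual user
    "uuSubAppInstanceSysOwner": "sys-deployer",               # system user authorized to deploy
}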

Fig. 6. uuC3 Deploy command

4 Conclusions

Widespread adoption of cloud computing and the extensive use of mobile and IoT devices have impacted the architecture of enterprise applications and the corresponding application development frameworks [26]. The microservices architecture has evolved from SOA (Service Oriented Architecture) and is currently the most popular approach to implementing cloud-based applications. However, the management of large-scale container-based microservices requires automation to ensure fast and predictable application deployment, auto-scaling and control of resource usage. Platforms and frameworks that support the development and deployment of container-based applications have become an active area of research and development, and a number of open source projects have recently been initiated. The Kubernetes project, in particular, appears to be gaining acceptance by major public cloud providers, including Google, Microsoft and most recently Amazon AWS, as well as other important industry players. However, there is currently a lack of agreement about a standard application development framework designed specifically for cloud development and deployment. This situation is further complicated by the high rate of innovation in this area. It is quite likely that the present approach of implementing applications as containerized microservices will evolve to take advantage of much finer grained cloud resources, with a dramatic impact on the design, implementation and operation of cloud-based applications.

In this paper, we have described the uuCloud framework, which is designed to facilitate the management of large-scale container-based microservices with specific extensions to handle multi-tenancy. An important benefit of the uuCloud framework is its ability to manage complex multi-tenant environments deployed across multiple (hybrid) cloud platforms, hiding the heterogeneity of the underlying infrastructures (i.e. hardware, operating systems, virtualization technologies, etc.). uuCloud manages cloud metadata, and supports automatic application deployment and autonomic scaling of applications.

We are continuously monitoring the rapidly evolving landscape of cloud management platforms and frameworks, and we may consider alignment of the uuCloud framework with Kubernetes in the future as both frameworks mature and the direction of cloud standardization becomes clearer.