Abstract
By deploying virtual machines (VMs) on shared infrastructure in the cloud, users gain flexibility, increase scalability, and decrease their operational costs compared to on-premises infrastructure. However, a cloud environment introduces new vulnerabilities, particularly from untrusted users sharing the same physical hardware. In 2009, Ristenpart et al. demonstrated that an attacker could place a VM on the same physical hardware as a target and extract confidential information from it using a side-channel attack. We replicated this seminal work on cloud cartography and network-based co-residency tests on Amazon Web Services (AWS) and OpenStack cloud infrastructures. Although the Elastic Compute Cloud (EC2) cloud cartography remains similar to prior work, current mitigations deter the network-based co-residency tests. OpenStack’s cloud cartography differs from EC2’s, and we found that OpenStack was vulnerable to one network-based co-residency test. Our results indicate that co-residency threats remain a concern more than a decade after their initial description.
1 Introduction
Cloud providers leverage virtualization to run multiple workloads concurrently on a physical server. These workloads may use different operating systems or the same operating systems, but at different patch levels [1, 2]. The hypervisor ensures that the virtual machines (VMs) are isolated from each other and that one VM cannot access information from another VM. Sharing resources allows cloud providers to maximize the utilization of each physical machine and reduces the cost to cloud consumers, but co-residency also introduces risks and privacy concerns for customers. For example, an attacker may exploit co-residency by extracting sensitive data, such as cryptographic keys, from other VMs running on the same host.
In seminal work, Ristenpart et al. [3] demonstrated that the use of shared physical infrastructure in cloud environments introduces unique vulnerabilities. Their work used cloud cartography to map the placement of VMs in the cloud. Armed with this information, an adversary can achieve a 40% chance of placing their own VM on the same physical machine as a target VM, which opens the door for cross-VM attacks. Ristenpart et al. recommend that cloud providers obfuscate the internal structure of their services to inhibit simple co-residency checks. Other researchers [4, 5] echo these recommendations.
Later research examined additional types of co-resident attacks. Zhang et al. [6] detailed the construction of an access-driven side-channel attack to extract fine-grained information from a victim VM running on the same physical computer. Irazoqui et al. [7] demonstrated a side-channel attack for multi-processor systems even when the VMs do not share the same physical CPU. Xu et al. [8] investigated how Virtual Private Cloud (VPC) networking mitigates earlier attacks [3] but developed new placement strategies to sidestep those defenses. Even though Amazon Web Services (AWS) Elastic Compute Cloud (EC2), Google Compute Engine, and Microsoft Azure all implement mitigations against simple co-residency tests, they remain vulnerable to memory probing, and co-residency is correlated with the launch time of VMs in data centers [9].
In addition to extracting sensitive information using side channels, researchers have also studied attacks that degrade the performance of target VMs through denial-of-service (DoS) attacks. For example, a malicious cloud customer can mount low-cost attacks that cause severe performance degradation for a Hadoop application and a 38x delay in response time for an e-commerce website hosted in AWS EC2 [10]. Because attacks that affect the performance or security of a VM remain feasible, ongoing research into co-location is valuable.
In this paper, we investigate whether the co-location techniques tested by Ristenpart et al. [3] can be replicated a decade after the original research. We re-created a cloud cartography map for EC2 instances on the same VPC network. Amazon VPC prevented network-based co-residency tests for instances created from different AWS accounts. The cloud cartography map for OpenStack differed from EC2’s, but several patterns were discovered in the IP address assignment for OpenStack instances. Additionally, network-based tests on OpenStack instances successfully determined co-residency for several different network and site configurations. Our results indicate that co-residency threats remain a concern more than a decade after their initial description, and we offer recommendations to mitigate such risks.
The remainder of this paper is organized as follows. Section 2 provides an overview of the two services that we evaluated, AWS EC2 and OpenStack, and describes how we used them in our experiments. In Sects. 3 and 4, we replicate the cloud cartography and network-based co-residency check on both services. In Sect. 5, we discuss our results, and we conclude in Sect. 6.
2 Experiment Platforms and Methodology
This section provides an overview of AWS EC2 and OpenStack, the two cloud platforms that we evaluated. We chose EC2 to replicate Ristenpart et al.’s original study [3] and OpenStack due to its position as a leading open source cloud platform.
2.1 AWS Elastic Compute Cloud (EC2)
Amazon Web Services (AWS) is a public cloud platform that is used by startups, enterprises, and government agencies. Its Elastic Compute Cloud (EC2) service allows users to manage VMs, referred to as “instances,” with virtualized hardware (CPU, memory, storage, etc.) derived from a user-selected “instance type.” EC2 is subdivided into 22 geographic regions and 69 availability zones around the world. Each region is separated geographically, and each region consists of isolated availability zones that are connected to each other through low-latency links. Users may select an availability zone within a region, or EC2 will select one automatically when launching an instance.
A Virtual Private Cloud (VPC) is an AWS service that allows users to provision a logically isolated section of AWS to launch resources. A VPC provides networking capabilities similar to a private data center, but comes with the benefits of availability, reliability, and scalability. Additional benefits and features include having complete control over the virtual networking environment, including selection of the IP address range, creation of subnets, and configuration of route tables and network gateways.
Originally, EC2 used a platform that is now referred to as EC2-classic. With this platform, instances ran in a single, flat network shared with other customers. AWS accounts created before December 2013 still support this platform; newer accounts support only the EC2-VPC platform. The EC2-VPC platform provides capabilities such as the ability to assign static private IPv4 addresses to instances and to assign multiple IP addresses to an instance. The EC2-VPC platform also creates a default VPC in each AWS region so that the user can start quickly without having to configure a custom VPC. The default VPC receives a /16 IPv4 CIDR block and a /20 default subnet in each availability zone. These defaults provide up to 65,536 private IPv4 addresses in the VPC, with up to 4,096 addresses per subnet. The default VPC also configures an Internet gateway and security groups. Figure 1 illustrates the architecture of a default VPC that is set up by AWS.
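The address arithmetic behind these defaults can be verified with Python’s standard ipaddress module. The sketch below assumes the 172.31.0.0/16 CIDR block that AWS assigns to default VPCs; any /16 block would give the same counts:

```python
import ipaddress

# Default VPC: a /16 block carved into /20 subnets (one per availability zone).
vpc = ipaddress.ip_network("172.31.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))

print(vpc.num_addresses)         # 65536 private IPv4 addresses in the VPC
print(subnets[0].num_addresses)  # 4096 addresses per /20 subnet
```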
With the introduction of the EC2-VPC platform, the internal IP addresses of instances are private to that cloud tenant. This configuration differs from how internal IP addresses were assigned in EC2-classic. To understand the security implications of this change, we created two cloud experiments with the EC2 service to determine the cloud cartography and co-residency between VM instances.
2.2 OpenStack
OpenStack is an open source project that provides cloud infrastructure management and control capabilities. For our experiments, OpenStack (Rocky release) was deployed using CloudLab [11], which allows researchers to run cloud software stacks in an isolated environment. CloudLab is a testbed for the design of new cloud architectures, testing for vulnerabilities, and learning about deploying clouds. The service is unique because it gives researchers control over their experimental cloud stack, a capability not provided when using commercial cloud offerings.
We created three custom cloud experiments with OpenStack on CloudLab. In the descriptions that follow, we use AWS terminology for consistency across the two cloud platforms.
- Simple: This configuration had two compute nodes in the same geographic region, connected on the same LAN. This environment was designed for basic co-residency tests.
- Multi-site: This configuration had four compute nodes, two in each of two geographic regions, connected on the same LAN. This environment was used to test whether instances on different sites can be distinguished from co-resident instances by analyzing network traffic.
- Multi-LAN: This configuration had two compute nodes in the same geographic region, but each compute node was connected to a different LAN. This environment was used to compare co-resident instances on the same LAN with co-resident instances on different LANs.
OpenStack distinguishes two types of network traffic: north-south and east-west. North-south traffic travels between an instance and an external network such as the Internet. East-west traffic travels between instances on the same or different networks. Co-residency detection is only concerned with analyzing east-west traffic. East-west traffic starts at a VM instance, passes through a bridge that maps the virtual Ethernet interface to a physical one and enforces network security rules, passes through a switch, and finally arrives at the second compute node. Since the traffic passes through a Layer 2 switch, a traceroute will not measure any hops between the two compute nodes. This configuration applies to the simple and multi-site networks.
The multi-LAN experiment places co-resident instances on different LANs. Traffic is routed from one instance to a switch, a router, and back through the switch and to the other instance. Because the traffic passes through a router, a traceroute will measure a single hop between the two instances. Even though these two instances are co-resident, network traffic will take longer to travel between the two instances than two co-resident instances on the same network.
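This hop-count distinction suggests a simple probe. Below is a minimal sketch, assuming a standard traceroute binary on the probing instance and its usual one-line-per-hop output; the peer IP is a hypothetical placeholder:

```python
import subprocess

def hop_count(target_ip: str) -> int:
    """Count routers between this instance and target_ip.

    0 suggests the same L2 segment (simple and multi-site setups);
    1 matches the routed multi-LAN configuration described above.
    """
    out = subprocess.run(
        ["traceroute", "-n", "-m", "5", target_ip],
        capture_output=True, text=True,
    ).stdout
    # Skip the header line; each remaining line is one hop, the last
    # being the target itself.
    hop_lines = [line for line in out.splitlines()[1:] if line.strip()]
    return max(len(hop_lines) - 1, 0)

print(hop_count("10.254.0.33"))  # hypothetical peer on the same LAN: expect 0
```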
3 Cloud Cartography
Cloud cartography maps a cloud provider’s infrastructure (i.e., the location of hardware and physical servers) to identify where a particular VM may reside. In this paper, we replicate the maps previously created for EC2 and create new ones for OpenStack.
3.1 AWS EC2
Before replicating the methodology used by Ristenpart et al. for cloud cartography on EC2, we hypothesized that due to the increase in regions, availability zones, instance types, and IP ranges, an updated map could look significantly different from the original study [3]. To create the map, we collected one data set that fulfilled the requirements of two experiments. The first experiment inspected the internal IP addresses associated with EC2 instances that were launched in specific availability zones. The second experiment examined the internal IP addresses assigned to EC2 instances of varying instance types.
We used two regions, us-east-1 (Northern Virginia) and us-west-2 (Oregon), and three availability zones within those regions (see Table 1). Because we used the default VPC, three subnets, each associated with its own availability zone, were used to provision the instances. The IPv4 CIDR blocks associated with the different subnets are listed in Table 1. We also used four different instance types: t2.micro, t2.medium, t2.large, and m4.large. AWS sets limits on the number of concurrent EC2 instances and the number of concurrent instances of each type for each AWS account. By default, AWS limits users to 20 concurrent instances, but not all instance types are allowed to reach this limit. The instance types that we used are just a sample of all the instance types that AWS offers, but each of them is allowed the maximum number of concurrent EC2 instances. In addition, not all availability zones within a region can provision all the instance types. We selected the specific availability zones because other availability zones in the two regions could not launch an instance of one or more of the aforementioned instance types.
For both experiments, 20 instances of 4 different types were launched in 3 availability zones, for a total of 240 instances. For example, 20 t2.micro instances were launched in the us-east-1a availability zone, and the same was repeated for the other two zones in that region. Figure 2 shows the IP addresses assigned to the 80 EC2 instances in each availability zone for the us-east-1 and us-west-2 regions, where each colored marker represents an instance in a different availability zone.
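The collection loop can be scripted with boto3, AWS’s Python SDK. The sketch below covers one region and is only illustrative: the AMI ID and zone list are placeholders, and each batch is terminated before the next launches to respect the 20-instance account limit mentioned above:

```python
import boto3

AMI_ID = "ami-0abcdef1234567890"  # hypothetical Amazon Linux AMI
INSTANCE_TYPES = ["t2.micro", "t2.medium", "t2.large", "m4.large"]
ZONES = ["us-east-1a", "us-east-1c", "us-east-1d"]  # placeholder zone choices

ec2 = boto3.client("ec2", region_name="us-east-1")
for zone in ZONES:
    for itype in INSTANCE_TYPES:
        resp = ec2.run_instances(
            ImageId=AMI_ID, InstanceType=itype,
            MinCount=20, MaxCount=20,
            Placement={"AvailabilityZone": zone},
        )
        for inst in resp["Instances"]:
            print(zone, itype, inst["PrivateIpAddress"])
        # Terminate the batch to stay within the concurrent-instance limit.
        ec2.terminate_instances(
            InstanceIds=[inst["InstanceId"] for inst in resp["Instances"]]
        )
```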
Patterns emerge when looking at the IP address assignment in each region and availability zone. Regardless of the instance type, all the instances launched in one availability zone had IP addresses clustered together. As a result, if the internal IP address of a VM were exposed, an attacker would be able to launch VMs in the same availability zone as their target to increase the probability of obtaining co-residency. The results shown here exhibit similar patterns to the cloud cartography map from Ristenpart et al. In both cases, VMs were assigned IP addresses clustered with other VMs in the same availability zone. However, the range of IP addresses assigned to each availability zone was approximately 32 times larger in Ristenpart et al. than the ranges shown in Fig. 2.
Our second experiment focused on the internal IP addresses assigned to instances of varying instance types. Figure 3 indicates a lack of any clear pattern for where different instance types are launched in each respective region. The four instance types are randomly distributed, which suggests that each instance type may be placed anywhere within an availability zone. The results do not display any of the clustered patterns found in Ristenpart et al., and it appears that AWS has since added some randomization to internal IP address assignments.
3.2 OpenStack
We used our multi-site experiment on OpenStack for its cloud cartography. This experiment created two compute nodes in the Clemson region and two compute nodes in the Wisconsin region. We launched 232 m1.small instances, which were placed on a compute node determined by nova, OpenStack’s compute service (roughly analogous to EC2).
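Instance creation was driven through OpenStack’s APIs; a minimal sketch using the openstacksdk library follows, with the cloud profile, image, flavor, and network names all hypothetical placeholders:

```python
import openstack

conn = openstack.connect(cloud="cloudlab")  # credentials from clouds.yaml
image = conn.compute.find_image("ubuntu-18.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("flat-lan-1-net")

# Launch probe instances; nova chooses the compute node for each one.
for i in range(232):
    conn.compute.create_server(
        name=f"probe-{i}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
```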
Each instance receives a private IP address on the local network using Dynamic Host Configuration Protocol (DHCP). The local network was configured to use the 10.254.0.0/16 IP address space. Figure 4 shows the IP addresses assigned to each instance type with the colored markers indicating the two regions. Unfortunately, networking errors prevented any instances from being assigned to the second compute node in the Wisconsin region.
There are several patterns in the IP address allocation process that could potentially be used to determine co-residency. First, the internal IP addresses were assigned somewhat sequentially. For example, six instances were launched with 147, 148, 151, 153, 154, and 178 as the last octet of the internal IP address. An attacker might be able to infer that the instances with numerically-close IP addresses were launched around the same time. Second, 158 of the 232 instances (68%) have a co-resident instance that was assigned a neighboring IP address. For example, the instance assigned 10.254.0.32 is co-resident with the instance assigned 10.254.0.33. Nevertheless, the probability of obtaining co-residency with a neighboring IP address decreases as more compute nodes are added.
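With administrator access to the controller, the placement of every instance is known, so the neighboring-IP statistic above can be computed directly. A minimal sketch, where the placement mapping and its contents are illustrative:

```python
from ipaddress import ip_address

def neighbor_coresidency_rate(placement: dict[str, str]) -> float:
    """Fraction of instances that share a compute node with the
    instance holding the numerically adjacent private IP address."""
    ips = sorted(placement, key=lambda ip: int(ip_address(ip)))
    flagged = set()
    for a, b in zip(ips, ips[1:]):
        adjacent = int(ip_address(b)) - int(ip_address(a)) == 1
        if adjacent and placement[a] == placement[b]:
            flagged.update((a, b))
    return len(flagged) / len(placement)

# e.g. {"10.254.0.32": "node-1", "10.254.0.33": "node-1", ...}
# yielded 158/232 = 68% on our multi-site deployment.
```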
Next, we launched 62 instances with four different flavors (comparable to EC2 instance types) on the multi-site configuration to determine if flavors could be used to determine co-residency. The results are shown in Fig. 5. There do not appear to be any patterns for the different flavors. The instances were assigned internal IPs in a somewhat sequential order, but that order is not dependent on the flavor, because the same pattern was observed previously with all m1.small-flavored instances.
The OpenStack cloud cartography shown in this section exhibits none of the patterns observed in the AWS cloud cartography of Ristenpart et al. OpenStack assigned IP addresses across the two regions as resources became available on each node, rather than in separate clusters of IP addresses, and instance flavors were likewise assigned without any clustering pattern.
4 Co-residency Experiments
Ristenpart et al. [3] used two tests to determine co-residency: network-based and cross-VM covert-channel based. We replicate the network-based co-residency checks to determine if EC2 and OpenStack have any mitigations in place to deter using them to determine co-residency. The network-based co-residency test asserts that two instances are co-resident if they satisfy any of the following conditions (a combined check is sketched after the list):
- Matching Dom0 IP addresses
- Small packet round-trip times (RTTs)
- Numerically close internal IP addresses (e.g., within 7)
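A minimal sketch of the combined predicate is shown below; the helper name and the default RTT threshold (the value we later estimate for our simple OpenStack configuration in Sect. 4.2) are ours, not part of the original tests:

```python
from ipaddress import ip_address

def likely_coresident(dom0_a, dom0_b, rtt_ms, ip_a, ip_b,
                      rtt_threshold_ms=0.476, max_ip_distance=7):
    """Flag two instances as co-resident if any condition holds.

    The RTT threshold is deployment-specific, not universal; 0.476 ms
    is the midpoint threshold derived in Sect. 4.2.
    """
    same_dom0 = dom0_a is not None and dom0_a == dom0_b
    small_rtt = rtt_ms < rtt_threshold_ms
    close_ips = abs(int(ip_address(ip_a)) - int(ip_address(ip_b))) <= max_ip_distance
    return same_dom0 or small_rtt or close_ips
```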
The remainder of this section explores these checks on EC2 and OpenStack.
4.1 AWS EC2
The first check of matching Dom0 IP addresses did not yield valid results. A TCP SYN traceroute was not able to determine the IP address of the target instance hypervisor. The traceroute produced information for just one hop, which indicates that the two instances are on the same network. There were also multiple IP addresses for that one hop, which indicates that there are multiple routes through which the instance can reach its destination.
For the second network-based co-residence check (small packet RTTs), we collected data from the 20 t2.micro instances in each availability zone with each instance pinging the other 19 instances in that availability zone 10 times. We used hping3, a network tool that is able to send custom TCP/IP packets and display target replies, to ping the other instances. We collected 190 ping times for each instance. Table 2 shows the mean and median RTTs for each availability zone. The mean and median RTTs between instances provides no clear indication that any two instances are co-resident. That is, no two instances had a significantly lower mean RTT than the mean of the entire availability zone.
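A sketch of how such a collection loop might be scripted is shown below, assuming hping3 is installed and run with root privileges; the target IP and TCP port are placeholders:

```python
import re
import statistics
import subprocess

def ping_rtts(target_ip: str, count: int = 10, port: int = 22) -> list[float]:
    """Send TCP SYN probes with hping3 and return the reported RTTs (ms)."""
    out = subprocess.run(
        ["hping3", "-S", "-p", str(port), "-c", str(count), target_ip],
        capture_output=True, text=True,
    ).stdout
    # hping3 prints one reply line per probe, e.g. "... rtt=0.4 ms".
    return [float(m) for m in re.findall(r"rtt=([\d.]+)", out)]

rtts = ping_rtts("172.31.5.17")  # hypothetical peer instance
print(statistics.mean(rtts), statistics.median(rtts))
```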
Last, the third co-residency check, numerically close internal IP addresses, proved unreliable. As seen in the cloud cartography (Figs. 2 and 3), instances in EC2-VPC are assigned to default subnets within fixed IPv4 CIDR blocks. This assignment results in instances in one availability zone having very similar IP addresses to each other.
4.2 OpenStack
For the OpenStack experiments, the IP address of the hypervisor (Dom0) could not be determined for the target instance using traceroute or other network analysis. Thus, OpenStack is not vulnerable to the first network-based co-residency check. Additionally, OpenStack is not vulnerable to the third network-based co-residency check based on the prior results. OpenStack assigns IP addresses in a somewhat sequential order, and the assignment is not correlated with co-residency. The second network-based co-residency test is the only one that might divulge if two VM instances are co-resident.
To understand how network traffic differs for co-resident and non-coresident instances on OpenStack, RTTs were measured between instances using the simple configuration (see Sect. 2.2). With administrator access to the OpenStack controller instance, co-residency can be determined a priori. This information allows a threshold RTT to be estimated, and this threshold serves as a predictor of co-residency on instances where we do not have administrator access to the controller node. To collect data, 1000 packets were sent between two co-resident instances and then repeated for two non-coresident instances.
A histogram of the round trip times for each measurement is shown in Fig. 6. The mean RTT between co-resident instances was 0.437 ms, and the mean RTT for non-coresident instances was 0.515 ms. From the histogram, we observe some separation between the two RTT distributions, which implies that an RTT network-based co-residency check on OpenStack works for this cloud configuration. If the two distributions completely overlapped, it would not be possible to determine co-residency from RTTs. From this data, we set a threshold RTT of 0.476 ms to serve as our co-residency check. This threshold is the midpoint between the two mean RTTs from co-resident and non-coresident instances.
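The threshold is simply the midpoint of the two sample means, and classification is a comparison against it. A small sketch with this configuration’s numbers plugged in:

```python
import statistics

def midpoint_threshold(coresident_rtts, noncoresident_rtts):
    """Midpoint between the mean RTTs (ms) of the two labeled samples."""
    return (statistics.mean(coresident_rtts)
            + statistics.mean(noncoresident_rtts)) / 2

def predict_coresident(rtt_ms: float, threshold_ms: float) -> bool:
    return rtt_ms < threshold_ms

print((0.437 + 0.515) / 2)  # 0.476 ms, the threshold used here
```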
To determine if this RTT co-residency check could be used on other OpenStack experiment configurations, the same tests were performed on the multi-site and multi-LAN configurations. The results are summarized in Table 3. The first row summarizes the mean RTTs for the simple cloud configuration, where VMs were created on the same LAN in the same region. The second row summarizes the mean RTTs for the multi-LAN configuration, where VMs were created on different LANs in the same region. Here, RTTs are larger than in the simple configuration, but there remains a significant difference between the co-resident and non-coresident instances. This difference implies that an RTT co-residency check would still work, as long as the different LANs could be identified. Finally, the last row summarizes the mean RTTs for the multi-site experiment, where VMs were created on the same LAN, but in different regions. VMs in different regions are always located on different hardware. Therefore, it is not possible to have co-resident instances, and the co-resident entry in the table is listed as “N/A.” The largest mean RTT occurs when two instances are located in different regions (25.9 ms). This latency is expected because the traffic is routed through an external network from the Clemson to the Wisconsin region.
5 Discussion
In this section, we discuss the results from our cloud cartography and co-residency experiments.
5.1 AWS EC2
The cloud cartography maps produced from the data set collected across the different availability zones and instance types (Figs. 2 and 3) are very similar to the maps in the original work [3]. There is a clear pattern in the assignment of IP addresses: instances launched in one availability zone have IP addresses very close to each other, and instances in other availability zones are clustered in different IP ranges. Similarly, the second map, which highlights the IP addresses of the different instance types launched in a region, also resembles the original work.
The cartography maps may have produced similar results, but the data sets from which the maps were produced were very different. Ristenpart et al. [3] collected IP addresses from EC2-classic instances, which run in a single, flat network shared with other customers. This configuration differs from the one that we used to produce our maps. We collected data from instances on the EC2-VPC platform, meaning instances run in a VPC that is logically isolated to a single AWS account. Therefore, the cartography maps created and illustrated in this paper only apply to the AWS account where the data was collected and not to the entire cloud network. Due to this relatively small sample size, we cannot necessarily generalize our claims and results to the entire AWS cloud network. However, replicating these experiments across more VPCs and AWS accounts, and producing the corresponding cartography maps, would provide the data needed to generalize our claims to the larger AWS network with a reasonable degree of confidence.
None of the network-based co-residency checks yielded useful results. With the adoption of new technologies such as Amazon VPC, Ristenpart et al.’s network-based co-residency tests are no longer valid. VPC allows EC2 instances in the same or different VPCs to communicate with each other through several means, including inter-region VPC peering, public IP addresses, NAT gateways, NAT instances, VPN connections, and Direct Connect connections. Requiring instances to communicate using public IP addresses causes the routing to occur within the data center network rather than through the hypervisor. As a result, the prior techniques can no longer determine whether two instances are co-resident.
5.2 OpenStack
The only network-based co-residency test that worked for OpenStack was measuring small packet RTT. We conducted an additional experiment to perform a blind co-residency test across 8 compute nodes. We measured the RTT across instance pairs and used a threshold RTT of 0.437 ms to determine co-residency. This co-residency test was 86% accurate with no false positives and a false negative rate of 33%. For an attacker, these probabilities are very good, especially when considering how quickly and inexpensively one can perform a side-channel attack.
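For reference, the quoted figures follow from standard confusion-matrix bookkeeping over (predicted, actual) pairs; this small sketch is our own tallying code, not part of the original tests:

```python
def evaluate(pairs):
    """pairs: iterable of (predicted_coresident, actually_coresident)."""
    tp = fp = tn = fn = 0
    for predicted, actual in pairs:
        if predicted and actual:
            tp += 1
        elif predicted and not actual:
            fp += 1
        elif not predicted and actual:
            fn += 1
        else:
            tn += 1
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    false_negative_rate = fn / (fn + tp) if (fn + tp) else 0.0
    return accuracy, fp, false_negative_rate
```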
Performance-wise, network-based co-residency tests consume minimal processor, memory, and network resources, making them difficult for a cloud service to detect and restrict. The only significant limitation of these tests is that they consume considerable time for a large number of instances. For large cloud services, there may be thousands of customers using compute nodes in a given availability zone and it might take many days to locate and identify a specific target. For small cloud infrastructures, like the ones that we created with OpenStack on CloudLab, the time investment required to determine co-location is much smaller.
To mitigate these network-based co-residency checks, OpenStack administrators should design their networks to route traffic through a centralized switch regardless of co-residency. Such a configuration increases internal network latency, but for most cases, it would not impact performance significantly, since the additional round trip overhead is only 0.22 ms on average. Nevertheless, when large amounts of data are being transferred internally, this overhead adds up, and administrators must balance the security and performance trade-offs of re-routing network traffic through a centralized switch.
5.3 Legal Issues
Since the experiments were only conducted on AWS accounts that the authors created, we did not violate any terms of AWS’s acceptable use policy. The policy prohibits unauthorized access and traffic monitoring, but since we had permission to access instances that we created, these terms were not violated. The OpenStack experiments were run on CloudLab infrastructure, which was explicitly built to test for vulnerabilities in a controlled and isolated environment, and no terms of service were violated during these experiments.
6 Conclusion
Since Ristenpart et al. disclosed the vulnerabilities in AWS EC2 a decade ago, Amazon has implemented new security practices and systems, notably Amazon VPC, to prevent network-based co-residency tests. The introduction and continuous development of VPC are the primary reasons that replicating the prior results is not possible. Repeating the cloud cartography methodology produced similar results even though the data sets were different: the original paper created a map representing a subset of the entire cloud network, whereas the data collected in this paper covers only one logically isolated section within one AWS account. The network-based co-residency checks also did not work because VPCs changed how two instances communicate with each other.
OpenStack is currently vulnerable to network-based co-residency detection when instances are connected to the same or different internal networks. IP addresses for new OpenStack instances are assigned in a somewhat sequential pattern. These vulnerabilities could allow an attacker to co-locate their instance on a target and then conduct side-channel attacks. To mitigate these vulnerabilities, the DHCP server should assign IP addresses randomly, and network traffic for all instances should be sent through a centralized router.
Covert channel co-residency checks are always a potential vulnerability when sharing resources with malicious users. To improve security, further research could examine different co-residency tests based on the shared memory, processor, or storage resources. It is also likely that cloud instances are not allocated to machines completely randomly and future research could try to determine if there are predictable allocation trends that could be used to increase the probability of co-residency. Additional research could also investigate ways to defend against these attacks to either decrease the probability of detection for the attacker or to increase the amount of time and resources required to determine co-residency.
References
1. Smith, J.E., Nair, R.: The architecture of virtual machines. Computer 38(5), 32–38 (2005)
2. Kotsovinos, E.: Virtualization: blessing or curse? Commun. ACM 54(1), 61–65 (2011)
3. Ristenpart, T., Tromer, E., Shacham, H., Savage, S.: Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds. In: Proceedings of the 16th ACM Conference on Computer and Communications Security (CCS 2009), New York, NY, USA, pp. 199–212. ACM (2009)
4. Vaquero, L.M., Rodero-Merino, L., Morán, D.: Locking the sky: a survey on IaaS cloud security. Computing 91(1), 93–118 (2011)
5. Hashizume, K., Rosado, D.G., Fernández-Medina, E., Fernandez, E.B.: An analysis of security issues for cloud computing. J. Internet Serv. Appl. 4(1), 25 (2013)
6. Zhang, Y., Juels, A., Reiter, M.K., Ristenpart, T.: Cross-VM side channels and their use to extract private keys. In: Proceedings of the 2012 ACM Conference on Computer and Communications Security (CCS 2012), New York, NY, USA, pp. 305–316. ACM (2012)
7. Irazoqui, G., Eisenbarth, T., Sunar, B.: Cross processor cache attacks. In: Proceedings of the 11th ACM Asia Conference on Computer and Communications Security (ASIA CCS 2016), Xi’an, China, pp. 353–364. ACM (2016)
8. Xu, Z., Wang, H., Wu, Z.: A measurement study on co-residence threat inside the cloud. In: Proceedings of the 24th USENIX Security Symposium, Washington, D.C., pp. 929–944. USENIX Association (2015)
9. Varadarajan, V., Zhang, Y., Ristenpart, T., Swift, M.: A placement vulnerability study in multi-tenant public clouds. In: Proceedings of the 24th USENIX Security Symposium, Washington, D.C., pp. 913–928. USENIX Association (2015)
10. Zhang, T., Zhang, Y., Lee, R.B.: Memory DoS attacks in multi-tenant clouds: severity and mitigation. arXiv:1603.03404 [cs] (2016)
11. Duplyakin, D., et al.: The design and operation of CloudLab. In: Proceedings of the USENIX Annual Technical Conference (ATC 2019), pp. 1–14 (2019)