16.1 Introduction

In wireless networks, signals are transmitted via open and shared media. Without protection, anyone in the transmission range of the sender can intercept the sender's signal. Therefore, wireless communications are inherently less secure than their wired counterparts. Furthermore, wireless (mobile) devices usually have limited bandwidth, storage space, and processing capacity. Consequently, it is harder to enforce security in wireless networks than in wired networks.

There are two types of wireless networks: wireless LANs (WLANs) and wireless ad hoc networks. The former requires one or more access points (or base stations). These access points connect wireless users that are one hop away and centrally control their access to the Internet and to other WLANs. The ad hoc form of communication is based on radio-to-radio multi-hopping. Wireless ad hoc networks have been evolving to serve a growing number of applications, including military communications, emergency rescue operations, and disaster recovery efforts. Benefiting from their ease of deployment, wireless ad hoc networks show great potential. Compared with WLANs, security management in wireless ad hoc networks is much harder due to the following characteristics.

  1. Resource Constraints: The wireless devices usually have limited bandwidth, memory, and processing power. This means costly security solutions may not be affordable in wireless ad hoc networks.

  2. Unreliable Communications: The shared-medium nature and unstable channel quality of wireless links may result in high packet-loss rates and re-routing instability, a common phenomenon that leads to throughput drops in multi-hop networks. This implies that security solutions for wireless ad hoc networks cannot rely on reliable communication.

  3. Node Mobility and Dynamic Topology: The topology of a wireless ad hoc network may change rapidly and unpredictably over time, since connectivity among the nodes varies with node departures, node arrivals, and node mobility. This emphasizes the need for security solutions that adapt to a dynamic topology.

  4. Scalability: Due to the limited memory and processing power of mobile devices, scalability is a key problem for large networks. Networks of 10,000 or even 100,000 nodes are envisioned, and scalability is one of the major design concerns.

Performance in wireless ad hoc networks is strongly related to the strength of security; at the same time, without satisfactory network performance, security is meaningless. Therefore, in this chapter, we address the network-performance perspective of security protocol design rather than cryptanalysis or formal verification of security protocols. The following requirements need to be considered for secure real-time communications.

  • Authentication: Authentication is the process of verifying the identity of the sender of a communication. Without authentication, malicious attackers can easily access resources, gain sensitive information, and interfere with the operation of other nodes.

  • Confidentiality: Confidentiality means certain information is accessible only to authorized recipients. Parties cooperating to handle an emergency event must be able to work with each other while keeping the traffic traversing the network confidential.

  • Non-repudiation: Non-repudiation ensures that the origin of a message cannot deny having sent the message. It is useful for detection and isolation of compromised nodes.

  • Integrity: The integrity of a message is the property that the message cannot be modified without detection. Without integrity, attackers can easily corrupt and modify the data and therefore cause mobile devices to make wrong decisions based on the corrupted data.

  • Availability: Availability ensures the survivability of network services despite denial of service attacks. In unreliable wireless communications with highly dynamic topology, availability affects network performance greatly.

From a security point of view, multiple lines of defense against attacks are desired. A complete security solution for wireless ad hoc networks should contain three components: prevention, detection, and reaction. In this chapter, we focus on preventive protection in mobile ad hoc wireless networks. The topics described include key management and broadcast authentication.

16.2 Background

16.2.1 Key Management in Wireless Networks

Security solutions in wireless ad hoc networks rely on key management mechanisms. In this section, we briefly introduce symmetric key management and asymmetric (public) key management.

16.2.1.1 Symmetric Key Management

Symmetric key systems, such as DES, AES, and keyed hash functions, are based on key information shared between the two communicating parties. In this case, if the sender uses the secret key to encrypt a message, the receiver uses the same secret key to decrypt it. Symmetric key techniques are attractive due to their energy efficiency. Therefore, a number of techniques have been developed for a specific type of ad hoc network—wireless sensor networks—since sensors are inexpensive and low-power devices.

In symmetric key cryptography, a sender and a receiver must establish a shared key before communication. In the context of sensor networks, shared keys are distributed to sensors before their deployment. It is challenging to design key distribution schemes that satisfy the following two requirements in a large-scale sensor network with limited memory resources:

  • Connectivity: A high percentage of neighboring sensor nodes should share at least one secret key.

  • Resilience: When some nodes are compromised by an adversary, other sensors are still able to maintain secure communications.

16.2.1.1.1 Random Key Distribution

In [10], key distribution consists of three phases: (1) key pre-distribution, (2) shared-key discovery, and (3) path-key establishment. In the pre-distribution phase, a large key-pool of \(K\) keys and their corresponding identities are generated. For each sensor within the sensor network, \(k\) keys are randomly drawn from the key-pool. These \(k\) keys form a key ring for a sensor node. During the key-discovery phase, each sensor node finds out which neighbors share a common key with itself by exchanging discovery messages. If two neighboring nodes share a common key, then there is a secure link between two nodes. In the path-key establishment phase, a path-key is negotiated for each pair of neighboring sensor nodes who do not share a common key but can be connected by two or more multi-hop secure links at the end of the shared-key discovery phase. In the random key distribution mechanism mentioned above, the probability that any pair of nodes possesses at least one common key is:

$$p = 1 - \frac{{((K - k)!)^2 }}{{(K - 2\,k)!K!}} = 1 - \frac{{(1 - \frac{k}{K})^{2(K - k + \frac{1}{2})} }}{{(1 - \frac{{2\,k}}{K})^{(K - 2\,k + \frac{1}{2})} }}.$$
(16.1)
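
For concreteness, the connectivity probability in Equation (16.1) can be computed directly from binomial coefficients, since \(\frac{((K-k)!)^2}{(K-2k)!\,K!} = \frac{C(K-k,\,k)}{C(K,\,k)}\). The following Python sketch evaluates it; the parameter values are illustrative and not taken from [10]:

```python
from math import comb

def share_probability(K: int, k: int) -> float:
    """Probability that two nodes with k keys drawn from a pool of K
    share at least one key (Eq. 16.1): p = 1 - C(K - k, k) / C(K, k)."""
    return 1.0 - comb(K - k, k) / comb(K, k)

# Illustrative parameters: a pool of 10,000 keys and 75-key rings per sensor.
print(round(share_probability(10_000, 75), 3))
```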

A modification of the basic random key distribution scheme has been made in [4], where multiple common keys are needed to establish a secure link in the key-setup phase, instead of one common key. Such a modification increases the network resilience against node compromise. Random key distribution schemes can be further improved if the location of a sensor after deployment is predictable. Deployment information is able to reduce the memory requirements and increase the resilience against node compromise [7, 22].

16.2.1.1.2 Combinatorial Design on Key Distribution

The combinatorial design [2] supports \(q^2 + q + 1\) nodes in the network. The size of the key-pool is \(q^2 + q + 1\), and each node holds \(q + 1\) keys. The scheme uses a finite projective plane of order \(q\), where \(q\) is a prime number, to generate a symmetric design with parameters \((q^2 + q + 1,q + 1,1)\). Every pair of nodes therefore has exactly one key in common, and every key is owned by exactly \(q + 1\) nodes. Thus, the probability of key sharing between a pair of sensor nodes is 1. When a sensor node is captured by an adversary, the probability that a given link is compromised is about \(1/q\).
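
As an illustration of this design, the following Python sketch (an illustrative construction, not code from [2]) builds the \((q^2+q+1, q+1, 1)\) design from the standard affine construction of a projective plane of prime order \(q\), and checks that every pair of key rings shares exactly one key:

```python
from itertools import combinations

def projective_plane_key_rings(q: int):
    """Sketch: build the symmetric (q^2+q+1, q+1, 1) design for prime q.
    Keys are points of the projective plane; each node gets the q+1
    points of one line as its key ring."""
    # Points: affine points (x, y), one point at infinity per slope, plus one extra.
    points = [('A', x, y) for x in range(q) for y in range(q)]
    points += [('I', m) for m in range(q)] + [('I', 'inf')]
    key_id = {p: i for i, p in enumerate(points)}

    lines = []
    for m in range(q):                        # lines y = m*x + b, closed by the slope-m infinity point
        for b in range(q):
            lines.append([key_id[('A', x, (m * x + b) % q)] for x in range(q)]
                         + [key_id[('I', m)]])
    for c in range(q):                        # vertical lines x = c
        lines.append([key_id[('A', c, y)] for y in range(q)] + [key_id[('I', 'inf')]])
    lines.append([key_id[p] for p in points if p[0] == 'I'])   # line at infinity
    return lines                              # one key ring (size q+1) per node

rings = projective_plane_key_rings(7)          # supports 7^2 + 7 + 1 = 57 nodes
assert all(len(set(r1) & set(r2)) == 1 for r1, r2 in combinations(rings, 2))
```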

The disadvantage of the combinatorial design is that the parameter \(q\) must be a prime number; hence not all network sizes can be supported. To support arbitrary network sizes, the combinatorial design techniques and random key distribution approaches can be used together. Assuming the target network size is \(n\), we can use the combinatorial design to determine the key distribution for \(M\) nodes, where \(M<n\), and then employ the random key distribution approach to assign keys to the remaining \(n - M\) nodes. Such a hybrid design improves the scalability and resilience of the combinatorial design solution at the cost of degraded connectivity (the key-sharing probability between neighboring nodes).

16.2.1.1.3 Schemes Based on Blom’s \(\lambda\)-secure Key Pre-distribution

Blom proposed a key pre-distribution method with the \(\lambda\)-secure property [1]: as long as no more than \(\lambda\) nodes are compromised, the communications in the network remain secure. Blom's scheme guarantees that any given pair of nodes in the network shares a secret key. If we draw an edge between every two nodes sharing a secret key, the resulting graph is a complete graph, i.e., we get full connectivity. To achieve better resilience against node capture, we can sacrifice connectivity moderately and let each sensor node carry fewer keys. Along this direction, multiple key-space variants of Blom's scheme have been proposed in [8].
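
The following Python sketch illustrates the idea behind Blom's scheme over a prime field; the modulus, the value of \(\lambda\), and the matrix shapes are illustrative assumptions rather than parameters from [1] or [8]:

```python
import random

P = 2_147_483_647            # an illustrative public prime; all arithmetic is mod P
LAMBDA = 2                   # secure as long as at most LAMBDA nodes are compromised

def setup(num_nodes: int, lam: int = LAMBDA):
    """Trusted-authority side: secret symmetric matrix D ((lam+1) x (lam+1))
    and a public Vandermonde-style matrix G with one column per node."""
    D = [[0] * (lam + 1) for _ in range(lam + 1)]
    for i in range(lam + 1):
        for j in range(i, lam + 1):
            D[i][j] = D[j][i] = random.randrange(P)                 # D is symmetric
    seeds = random.sample(range(2, P), num_nodes)                    # distinct, nonzero seeds
    G = [[pow(s, r, P) for s in seeds] for r in range(lam + 1)]      # column j = (1, s_j, s_j^2, ...)
    # Node j stores row j of A = (D G)^T, i.e. a_j[r] = sum_i D[r][i] * G[i][j].
    rows = [[sum(D[r][i] * G[i][j] for i in range(lam + 1)) % P
             for r in range(lam + 1)] for j in range(num_nodes)]
    return G, rows

def pairwise_key(row_i, G, j):
    """Node i computes K_ij = a_i . G[:, j]; symmetry of D gives K_ij == K_ji."""
    return sum(row_i[r] * G[r][j] for r in range(len(row_i))) % P

G, rows = setup(num_nodes=5)
assert pairwise_key(rows[0], G, 1) == pairwise_key(rows[1], G, 0)
```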

There are many other efforts addressing symmetric key distribution in recent literature, including [5, 21, 29, 33]. We observe that these schemes trade off conflicting design requirements among memory usage, connectivity, scalability, and resilience. A brief summary of performance issues in symmetric-key algorithms is given below:

  1. Speed: Symmetric-key algorithms are generally much less computationally intensive than asymmetric-key algorithms. Therefore, symmetric-key algorithms are very popular in resource-limited wireless sensor networks.

  2. Scalability: With symmetric-key algorithms, ensuring secure communication between every pair of nodes in a network of size \(n\) requires a total of \(n \times (n - 1)/2\) keys. However, current key management designs for wireless sensor networks do not require each pair of nodes to share a unique secret key.

  3. Management: Symmetric-key algorithms require a shared secret key known to both sides of a communication. To prevent adversaries from discovering the cryptographic keys, keys should be changed regularly. It is difficult to keep shared keys secure during key distribution.

16.2.1.2 Public Key Management

Unlike symmetric-key algorithms, asymmetric (or public) key algorithms (e.g., RSA, ECC) use two different keys, namely a private key and a public key, for encryption, decryption, authentication, and verification. For instance, a user who knows the receiver's public key can encrypt messages destined for that receiver. Nodes other than the receiver do not know the receiver's private key and thus cannot decrypt the encrypted messages. Compared to symmetric-key algorithms, public-key cryptography increases security and convenience, because private keys never need to be transmitted or revealed to anyone. Another advantage is that public-key cryptography can provide digital signatures that cannot be repudiated.

Because both asymmetric and symmetric key algorithms have their advantages and disadvantages, we wish to exploit the virtues of both. Therefore, asymmetric keys are used to negotiate symmetric keys, and the symmetric keys are then used to secure communications in the wireless ad hoc network.
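
A minimal sketch of this hybrid pattern is shown below, assuming the third-party Python cryptography package (the chapter itself does not prescribe a particular library or cipher suite); RSA-OAEP wraps a fresh AES-GCM session key, which then protects the bulk traffic:

```python
from os import urandom
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's long-term key pair (its public key would be pre-distributed or certified).
receiver_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_pub = receiver_priv.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: pick a fresh symmetric session key, wrap it with the receiver's public key,
# then protect the bulk data with the (cheap) symmetric cipher.
session_key = AESGCM.generate_key(bit_length=128)
wrapped_key = receiver_pub.encrypt(session_key, oaep)
nonce = urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"sensor reading 42", None)

# Receiver: unwrap the session key once, then decrypt traffic symmetrically.
key = receiver_priv.decrypt(wrapped_key, oaep)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"sensor reading 42"
```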

Public-key approaches were originally targeted at the Internet [19]. Recently, Elliptic Curve Cryptography (ECC) has emerged as an attractive public key cryptography scheme for mobile/wireless environments [12, 17, 25]. In public key cryptography, any two nodes can establish a secure channel between them without necessarily carrying pre-distributed keys. However, if nodes do not carry pre-distributed public keys, one or more trusted certificate authorities (CAs) are needed, and in wireless ad hoc networks the authentication process through CAs is very costly in terms of wireless communication overhead. In order to tailor public-key approaches to ad hoc networks, Zhou and Haas proposed a distributed public-key management scheme for ad hoc networks [32] that uses multiple distributed certificate authorities: to sign a certificate, each authority generates a partial signature and submits it to a coordinator that computes the signature from the partial signatures. Kong et al. described a fully distributed scheme [20], where every node carries a share of the private key of the service; this scheme increases the availability of authentication but also increases the communication overhead of authentication. Capkun, Buttyan, and Hubaux proposed a self-organized public key management system [3], where users issue certificates based on their personal acquaintances. Each user maintains a local certificate repository, and when two users want to verify each other's public keys, they merge their local certificate repositories and try to find, within the merged repository, appropriate certificate chains that make the verification possible. We note that in these certificate-based schemes, the communication overhead of transferring certificates is large for bandwidth-restricted wireless ad hoc networks. Next, we present a public key management scheme for resource-limited environments in detail.

16.2.2 Broadcast Packet Authentication

In a large-scale wireless environment, data packets are often forwarded over multiple hops until they arrive at the intended receivers. Operating in an open or hostile environment, the packets can easily be modified or impersonated. Injected false packets not only relay misleading information, which can result in malfunction or catastrophe, but also consume excessive communication and computational resources and energy if they are not dropped early. Therefore, it is critical to prevent packet manipulation, carefully grant authorized access, and consistently assure resource availability. A pessimistic approach is to authenticate every packet in a hop-by-hop fashion so that only packets from a legitimate device are forwarded. Hop-by-hop authentication is acceptable for unicast traffic, thanks to efficient symmetric key schemes and fast one-way hash function schemes [34]. It is also acceptable for routing control packets, which may be either unicast or broadcast, because of their low rate and criticality; example works can be found in secure routing protocols [14, 15]. However, hop-by-hop authentication imposes a significant penalty on the end-to-end delay of legitimate broadcast data due to the authentication delay at each intermediate hop. The accumulated delay postpones packet delivery to nodes far away from the sources, and the maximal delay is proportional to the network diameter in hops.

Hence, the following requirements are desired for broadcast authentication schemes.

  • Containment: Faked messages are dropped near their initiators so that unnecessary bandwidth and CPU consumption is avoided. The ideal case is to stop false packets within one-hop range of their initiators.

  • Timeliness: Authentic messages are delivered to the majority of nodes quickly.

  • No Single Point of Failure: Security schemes should embody randomness and distribution to avoid a single point of failure or targeted attacks.

  • Load Balancing: Nodes receive approximately equal workload to avoid battery depletion, channel congestion and resulting network partition. Exceptions are nodes in one-hop range of attackers because they must authenticate every packet in order to properly filter out false packets.

With limited resources at mobile devices, the above requirements cannot all be achieved simultaneously, and tradeoffs exist. Next, we summarize common assumptions, authentication primitives, and a classification of broadcast authentication protocols.

16.2.2.1 Assumptions

The design of an authentication protocol relies heavily on the assumptions made about the network and the attackers. If the deployed mobile devices are powerful enough, advanced cryptography can be applied without much degradation in service performance, similar to wired networks. If attackers are powerful enough to perform physical compromise, the authentication protocol must take compromise and Sybil attacks into account. We list common assumptions in prior work below.

16.2.2.1.1 Network Model

It is reasonable to assume that all the devices are dispatched from a single administrative domain, in which case the integrity and authenticity of packets are valued most. Before being deployed in the field, all devices are loaded with the necessary public and private keys or certificates from trusted authorities. After the mission starts, they are able to establish pair-wise trust relationships without help from the trusted authorities.

Assumptions on the network model cover the following aspects: (a) node mobility and the resulting dynamics of the network topology; (b) network lifetime, ranging from hours and days to years; (c) hardware and software trustworthiness, which correlates with the attackers' capability for physical compromise, secret key exposure, and software turnover; (d) traffic pattern, such as the number of senders and the traffic rate; and (e) clock synchronization, which is required by TESLA [26] and its variations.

16.2.2.1.2 Attacker Model

Attackers can target various resources, such as energy, CPU, bandwidth, and memory. Assumptions about attackers' capabilities, resources, and typical behaviors largely determine the level of protection demanded of authentication protocols.

They could (a) physically compromise legitimate devices and subsequently extract private cryptographic information; (b) eavesdrop on channels, possibly across a large network region; (c) understand protocol details, send semantically compatible faked messages, and replay overheard packets (they may occasionally forward authentic messages to confuse intrusion detection mechanisms); (d) manipulate control fields if they are not authenticated; and (e) collude with each other.

16.2.2.2 Authentication Primitives

There are three cryptographic candidates for broadcast authentication: public key cryptography (PKC), symmetric keys, and one-way hash function.

16.2.2.2.1 Public Key Cryptography

In PKC, either private/public key pre-distribution or a certificate approach is used. Receivers use the sender's public key to validate signed packets immediately after reception. There is no need for online trust maintenance. However, for low-end mobile devices or sensors, public key signature generation and verification have heavy computation and memory overhead, which limits the aggregate traffic rate each node can process.

16.2.2.2.2 Symmetric Keys

Symmetric keys [9], widely used in low-end sensor networks to secure one-hop transmissions, are suitable for pairwise trust establishment. However, they are inappropriate for broadcast authentication. If a unique secret key is assigned to each pair of nodes, broadcast traffic needs to carry separate signatures for all destinations, incurring tremendous bandwidth, computational, and memory overhead. On the other hand, if one secret key is shared by a group of nodes, any compromised node can forge packets and impersonate others, and revocation of compromised keys is difficult.

16.2.2.2.3 One-way Hash Function

One-way hash functions, represented by TESLA [26] and its variations, have low computation overhead (several orders of magnitude smaller than that of PKC) and low memory overhead, which makes them a superior alternative for defending against false data injection in a lightweight way [27, 35].

In TESLA, a sender splits time into even intervals, called rounds, and generates a one-way chain using a predetermined one-way hash function \(H\) and a seed \(S\). A hash value is assigned to each round and used to authenticate packets generated in that round. Each hash value also has a release time, several intervals after its assigned round. After negotiating the key disclosure schedule, \(S\), and \(H\) with the sender, receivers are ready to validate packets sent by the sender. When sending a message, the sender attaches a message authentication code (MAC), generated with the key associated with the current round, plus the most recent key that it can disclose. When receiving a message, the receiver checks that the message is not replayed and that the disclosed key is legitimate, and then buffers the message. Finally, it removes all buffered packets whose authentication keys have been released and authenticates them. In summary, authentication by TESLA features low communication and computation overhead, scalability to a large number of receivers, and tolerance to packet loss.

Due to delayed key disclosure, authentication is postponed until the authentication keys are delivered to receivers; we call this delay the keying delay. Even though immediate key disclosure schemes have been proposed, they are vulnerable to online replay attacks. In addition, TESLA requires trust negotiation and periodic trust maintenance. Therefore, it is not well suited for high-mobility scenarios.
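
The following Python sketch illustrates the TESLA-style mechanics described above (one-way key chain, per-round MAC, and verification against the chain commitment once the key is disclosed); the chain length, seed, and disclosure lag are illustrative assumptions:

```python
import hashlib, hmac

def make_chain(seed: bytes, rounds: int):
    """Generate a one-way key chain with K_{i-1} = H(K_i); K_0 is the
    public commitment and keys are used in the order K_1, K_2, ..."""
    chain = [seed]
    for _ in range(rounds):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return list(reversed(chain))              # chain[i] is the key of round i

keys = make_chain(b"secret seed S", rounds=4)
commitment = keys[0]                          # receivers learn this during trust negotiation

# Sender in round i: attach MAC(K_i, message); K_i itself is disclosed some rounds later.
i, msg = 2, b"broadcast payload"
mac = hmac.new(keys[i], msg, hashlib.sha256).digest()

# Receiver, once K_i is disclosed: check the key against the commitment, then the MAC.
disclosed = keys[i]
check = disclosed
for _ in range(i):
    check = hashlib.sha256(check).digest()
assert check == commitment                    # the disclosed key belongs to the chain
assert hmac.compare_digest(hmac.new(disclosed, msg, hashlib.sha256).digest(), mac)
```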

16.2.2.3 Classification of Broadcast Authentication Protocols

We can classify existing broadcast authentication protocols according to node roles (flat vs. hierarchical) and adaptability (static vs. adaptive), as summarized below.

              Flat                                  Hierarchical
  Static      Authenticate first / Forward first    Trusted base-station [6, 11]
  Adaptive    Dynamic Window Scheme [30]            —

When every node has equal responsibility, a static scheme adopts either an authenticate-first or a forward-first policy. The former suffers from significant end-to-end delay for legitimate traffic, while the latter wastes computational and communication resources network-wide on authenticating faked messages and suffers from a low delivery ratio due to buffer overflow and network partition.

Drissi and Gu considered a large sensor network with a few predetermined trusted and better-secured nodes, which divide the whole network into subsets [6, 11]. Trusted nodes use TESLA to broadcast messages to their subsets, and each ordinary node, upon receiving a broadcast message from its subset, rebroadcasts it before validation. Trusted nodes exchange broadcast messages among themselves. A source other than the trusted nodes sends broadcast messages to a nearby trusted node. However, the predetermined set of trusted nodes may attract attacks, and the heavy authentication and coordination overhead quickly drains their batteries and the batteries of nodes around them. Besides, the way messages are sent from sources to trusted nodes, among trusted nodes, and from trusted nodes to subsets may introduce additional vulnerabilities, such as DoS attacks and jamming.

Wang, Du, and Ning proposed a dynamic window scheme to contain bogus data using public key cryptography (PKC) authentication [30], where sensor nodes determine whether to verify a message first or forward it first by estimating their distance from the malicious attackers and how many hops the incoming message has traveled without authentication. Although the Dynamic Window scheme shows good potential to improve the end-to-end delay of authentic traffic, it is not robust against smart attackers who maliciously manipulate the AIMD mechanism. Additionally, during the decreasing phase of AIMD, a large number of false packets are broadcast to the whole network.

16.3 Public Key Management with Resource Constraints

16.3.1 Introduction

The characteristics of mission-critical ad hoc networks pose the following challenges for the design of public key management schemes that would support secure communication over wireless ad hoc networks:

  (1) Communication Overhead for Authentication: As mentioned before, certificate-based schemes incur communication overhead during authentication.

  (2) Unreliable Communications and Network Dynamics: Due to the shared-medium nature of wireless links, flows may frequently interfere with each other. Moreover, a network may be partitioned frequently due to node mobility and poor channel conditions. Mobile nodes may leave and join the ad hoc network frequently, and new legitimate nodes may join the network after other nodes have already been deployed in the field. Mobility increases the complexity of trust management.

  (3) Large Scale: The number of ad hoc wireless devices deployed at an incident scene depends on the specific nature of the incident. In general, the network size can be very large. In addition, an ad hoc network should be able to accommodate more mobile devices if necessary. Therefore, it is necessary for newly deployed devices and previously deployed devices to trust each other without introducing too much overhead.

  (4) Resource Constraints: The wireless devices usually have limited bandwidth, memory, and processing power. Among these constraints, communication bandwidth and memory consumption are the two primary concerns for key management schemes. Wireless bandwidth is the scarcest resource in a wireless network, and the memory required for key storage becomes increasingly significant as the required network scalability (network size) grows.

Given challenges (1) and (2) above, a node in a network may face unreliable communication and large communication overhead for authentication. Therefore, we need a self-contained key management scheme. We will present a self-contained public-key management scheme in which all necessary cryptographic keys (certificates) are stored at individual nodes before deployment. As a result, we can expect almost zero communication overhead for authentication. In contrast to traditional certificate-based schemes, our authentication procedure does not require the transmission of a certificate that binds the node's ID to its public key and is signed by an off-line trusted authority. The storage space required by traditional self-contained public key management schemes is of order O(n). With challenges (3) and (4), the storage space at individual nodes may be too small to accommodate a self-contained security service when the network size n is large. Hence, we discuss a Scalable Method Of Cryptographic Key (SMOCK) management scheme, whose storage requirement scales logarithmically with network size, \(O(\log n)\).

In order for SMOCK to use a smaller set of cryptographic keys, a sender uses multiple keys to encrypt a message, and a receiver needs multiple keys to decrypt it. Public key cryptography is then used as follows. Each node possesses a unique combination of private keys and knows all public keys. The private key combination pattern is unambiguously associated with the node ID. That is, if a sender A wants to send a message to receiver B, A first acquires B's ID to infer the set of private keys owned by B. Then A encrypts the message with the public keys that correspond to the private keys owned by B. We have evaluated SMOCK with respect to the communication overhead for key management, memory footprint, and resilience to node break-ins by adversaries. Note that adversaries may eventually break into a limited number of nodes over a certain period before the network detects the break-in and revokes the compromised keys. However, before the system detects break-ins, the majority of network nodes under SMOCK still operate securely even when a small number of nodes are compromised.

16.3.2 Overview

In SMOCK, network nodes want to exchange messages securely in a pair-wise fashion. Symbols and terms used throughout this section are shown in Table 16.1. The key pool \({\cal K}\) of such a group consists of a set of private–public key pairs and is maintained by an off-line trusted server. Each key pair consists of two mathematically related keys. The ith key pair in the key pool is represented by \((k_{priv}^{i}, k_{pub}^{i})\). To support secure communication in the group, each member is loaded with all public keys of the group and assigned a distinct subset of private keys. Let \({\cal K}_{Alice}^{priv}\) denote the subset of private keys held by Alice, and \({\cal K}_{Alice}^{pub}\) the corresponding public key subset. If Bob wants to send a secret message to Alice, he needs to know \({\cal K}_{Alice}^{pub}\), where \({\cal K}_{Alice}^{priv} \not\subseteq {\cal K}_{anybody\_else}^{priv}\). Bob then passes the secret message to Alice by encrypting it with the public keys \({\cal K}_{Alice}^{pub}\). The message can be opened only by Alice, who holds the private key set \({\cal K}_{Alice}^{priv}\); nobody else can.

Table 16.1 Notations and symbols

Consider an example of a small group with 10 users. In SMOCK, we need five distinct public–private key pairs to build pair-wise secure communication channels among 10 users. They are \((k_{priv}^{1}, k_{pub}^{1})\), \((k_{priv}^{2}, k_{pub}^{2})\), \((k_{priv}^{3}, k_{pub}^{3})\), \((k_{priv}^{4}, k_{pub}^{4})\), \((k_{priv}^{5}, k_{pub}^{5})\). Each user keeps five public keys and two private keys. The unique private key set allocation for each user is then shown in Table 16.2.

Table 16.2 An example private key allocation

In this scenario, we know that

  • Each node keeps a predetermined subset of private keys, and no one else has all the private keys in that subset.

  • For a public–private key pair, multiple copies of the private key can be held by different users. In the given scenario, each private key has four copies.

  • A message is encrypted by multiple public keys, and it can only be read by a user who has the corresponding private keys. For example, if user 1 encrypts a message m by public keys \(k_{pub}^{2}\) and \(k_{pub}^{5}\) as \(Enc(Enc(m,k_{pub}^{2}),k_{pub}^{5})\), then only user 7 can decrypt it with private keys \(k_{priv}^{2}\) and \(k_{priv}^{5}\).

In traditional public key management schemes, each user holds one public–private key pair. Therefore, a user should store n public keys and 1 private key to achieve self-contained key management in a network of size n. In SMOCK, the storage for public and private keys is much smaller. In the above 10-user example, a user needs to store only 7 keys (5 public keys and 2 private keys), compared with 11 keys (10 public keys and 1 private key) in traditional schemes. We will show that in SMOCK the total number of keys held by each user is approximately \(O(\log n)\), whereas it is O(n) under traditional key management schemes.
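
The 10-user allocation of Table 16.2 can be generated mechanically by enumerating the \(C(5,2)=10\) two-key subsets of the five key pairs; the Python sketch below does this and checks the validity condition (no user's private-key set contains another's). The lexicographic ordering used here happens to give user 7 the set {2, 5} used in the example above:

```python
from itertools import combinations

a, b = 5, 2                               # 5 public-private key pairs, 2 private keys per node
key_pairs = list(range(1, a + 1))

# Every b-subset of the key pool is a valid, non-conflicting private-key set,
# so a key pairs support C(a, b) = 10 users.
allocation = {user: set(subset)
              for user, subset in enumerate(combinations(key_pairs, b), start=1)}

# Validity check: no user's private-key set is contained in another's.
for i in allocation:
    for j in allocation:
        if i != j:
            assert not allocation[i] <= allocation[j]

print(allocation[7])                      # the private keys held by user 7
```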

16.3.3 Design Objectives

Before we elaborate SMOCK, we introduce several definitions and design objectives.

Definition 1:

A key allocation KA: \(V \rightarrow 2^{{\cal K}}\) maps each user \(\nu \in V\) to a subset of key pairs \({\cal K}_{i} \subset {\cal K}\). To guarantee secure communication between each pair of nodes i and j, we require \(\forall i\, \forall j\, {\cal K}_{i}\not\subseteq{\cal K}_{j}\) (equivalently \({\cal K}_{i}^{priv} \not\subseteq{\cal K}_{j}^{priv}\)) and \({\cal K}_{j} \not\subseteq {\cal K}_{i}\) (equivalently \({\cal K}_{j}^{priv} \not\subseteq{\cal K}_{i}^{priv}\)) for all \(i \neq j\). If this property holds, the key allocation is valid.

Definition 2:

We say that a key allocation is isometric , if \(|{\cal K}_{1}|= | {\cal K}_{2}| = \ldots=\) \(|{\cal K}_{n}|=b\); otherwise, the key allocation is non-isometric.

Definition 3:

We say that the key assignments to users i and j conflict if either \({\cal K}_{i}^{priv} \subseteq{\cal K}_{j}^{priv}\) or \({\cal K}_{j}^{priv} \subseteq{\cal K}_{i}^{priv}\). In a valid key allocation, no pair of users has conflicting key assignments.

Generally, we desire the key management to be memory efficient for key storage, computationally efficient during encryption and decryption, and resilient to break-ins. Therefore, we define multiple objectives of the SMOCK key allocation mechanism as follows:

Objective 1:

Memory Efficiency: Given a network of size n , we need to find a key pool \({\cal K}\) and a key allocation KA to achieve

$$\left\{\begin{array}{lll}\qquad min\quad|{\cal K}| +\max\limits_{i \in V} |{\cal K}_{i}^{priv}|\\s.t.\quad{\cal K}_{i}\not\subseteq {\cal K}_{j}\ and\ {\cal K}_{i}\not\supseteq {\cal K}_{j} \quad \forall i \neq j\end{array}\right.$$
(16.2)

where \(|{\cal K}_{i}^{priv}|= |{\cal K}_{i}|\) is the total number of private keys stored at node i, and \(|{\cal K}|\) is the total number of public keys stored at each node. Note that each node stores all public keys but only a small subset of private keys \({\cal K}_{i}^{priv}\). Therefore, \(|{\cal K}|+|{\cal K}_{i}^{priv}|\) is the number of memory slots needed at node i to store the public and private keys for secure communication.

Objective 2:

Computational Complexity: To simplify security operation, each user wants to use a small number of public keys to encrypt the outgoing messages, and a small number of private keys to decrypt incoming messages. Therefore, we have the following objective

$$\left\{\begin{array}{lll}\qquad\quad min\quad\max\limits_{i \in V}|{\cal K}_{i}^{priv}|\\s.t.\quad {\cal K}_{i}\not\subseteq{\cal K}_{j},{\cal K}_{i}\not\supseteq {\cal K}_{j}(\forall i \neq j)\ and\, |{\cal K} |\leq M \end{array} \right.$$
(16.3)

where M is the total number of memory slots for key storage at each node.

We have proved that isometric allocation of keys performs better than non-isometric allocation in terms of Objective 1 and Objective 2 in [13]. Therefore, we assume isometric key allocation throughout the section.

Objective 3:

Resilience Requirement: Under an isometric key allocation scheme, we denote \(a=|{\cal K}|\) and \(b=|{\cal K}_{i}|=|{\cal K}_{i}^{priv}|\). Each user carries only b private keys and a public keys, where \(b \le a\) and \(a+b\ll n\). Clearly, if a node is compromised, all its keys are compromised, regardless of the number of private keys it carries. Therefore, on average \(C(k_{c}(x),b)\) distinct key sets are compromised when adversaries break into x nodes, where \(k_{c}(x)=\lfloor a-(a-b)\left(\frac{a-b}{a}\right)^{x-1}\rfloor\) is the number of disclosed keys [13].

We denote by \(V_{x}(a,b)\) a vulnerability metric: the percentage of communications compromised when x nodes are broken into. On average, \(V_{x}(a,b)\) equals \(\frac{C(k_{c}(x),b)}{C(a,b)}\). We define the resilience requirement as

$$V_{x}(a,b)=\frac{C(k_{c}(x),b)}{C(a,b)}\leq{\cal P},$$
(16.4)

where \({\cal P}\) is the resilience bound representing the upper-bound of the compromised communications when x nodes are randomly compromised, each with equal likelihood.

We observe that \(V_{x}(a,b)=\frac{C(k_{c}(x),b)}{C(a,b)}\) does not always compare favorably with \(x/n\). But by increasing the value of a, we can make \(C(a,b) \gg n\), thereby making \(V_{x}(a,b)\) compare favorably with \(x/n\), which we refer to as the benchmark resilience. There is a trade-off between memory usage and resilience against break-ins: with a larger number of public–private key pairs, we can get better resilience against break-ins at the cost of a larger memory footprint.
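
The following Python sketch (with illustrative parameter values) computes \(k_c(x)\) and the vulnerability metric \(V_x(a,b)\) of Equation (16.4), and shows how enlarging the key pool \(a\) for a fixed \(b\) improves resilience:

```python
from math import comb, floor

def keys_disclosed(a: int, b: int, x: int) -> int:
    """Expected number of distinct private keys exposed after x node break-ins:
    k_c(x) = floor(a - (a-b) * ((a-b)/a)^(x-1))."""
    return floor(a - (a - b) * ((a - b) / a) ** (x - 1))

def vulnerability(a: int, b: int, x: int) -> float:
    """V_x(a, b): average fraction of key combinations (pairwise channels)
    compromised when x nodes are broken into (Eq. 16.4)."""
    return comb(keys_disclosed(a, b, x), b) / comb(a, b)

# Illustrative trade-off: with b = 4 fixed, a larger key pool a improves resilience.
for a in (14, 20, 30):
    print(a, round(vulnerability(a, 4, x=3), 4))
```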

16.3.4 Key Allocation Algorithm

In this section we show (1) how to determine a and b for a given network, and (2) how to allocate distinct private key sets to users to achieve secure communication between each pair of users. To determine the values of a and b, we first specify an algorithm that obtains the optimal key allocation in terms of both Objective 1 and Objective 2, subject to the resilience constraint of Objective 3. Observing the trade-off between memory usage and resilience against break-ins, we then present an algorithm that fully utilizes the available memory space to achieve better resilience by slightly relaxing the optimality of Objectives 1 and 2. With the values of a and b fixed, we then discuss the key allocation details of SMOCK.

16.3.4.1 Optimization of Design Objectives

The value of b affects the complexity of encryption and decryption. Therefore, we would like to relax a to allow b to be small. The extreme case is a=n and b=1, where each user keeps one key and every key has only a single copy. The following algorithm determines a and b so as to achieve the design objectives. Assume the network size is n.

Objective 1 requires \( {\frac{a}{n}}\) to be small for key storage efficiency, while Objective 3 requires \( \frac{a}{b}\) to be large for good resilience. These two objectives conflict; Algorithm 16.1 trades off memory efficiency against resilience.

Algorithm 16.1 Determine value a and b

(1) Initialize \(l = 2\); while \((C(l,\left\lfloor {\frac{l}{2}} \right\rfloor )<n)\) do \(\{ l = l + 1\}\); \(a = l,\quad b = \left\lfloor {\frac{l}{2}}\right\rfloor\);

(2) While \((C(a,b - 1)>n)\) do \(\{ b = b - 1\}\);

(3) While \((C(a + 1,b - 1)>n)\) do \(\{ a = a + 1,\quad b = b - 1\}\);

(4) While (Equation (16.4) is not satisfied) do {
        if \((C(a + 1,b - 1)>n)\) then \(\{ a = a + 1,\quad b = b - 1\}\)
        else \(\{ a = a + 1\}\)
    };

(5) \(\left| {\cal K} \right| = a\ and\ \left|{\cal K}_{i} \right| = b\).

Step (1) of Algorithm 16.1 computes the minimum number of memory slots needed to store public keys in order to support secure communication among n nodes. Step (2) minimizes Objective 1. Step (3) further optimizes Objective 2 while keeping Objective 1 unchanged. Step (4) ensures that the key allocation meets Objective 3: if the resulting a and b do not satisfy the resilience requirement, we either increase a, or simultaneously increase a and decrease b. Each such iteration increases \( \frac{a}{n}\) by \( \frac{1}{n}\) and \( \frac{a}{b}\) by \( \frac{1}{b}\) or \( \frac{b+1}{b(b-1)}\). For \(n \gg b\), this is a reasonable trade-off of memory slots for better resilience.
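
A direct Python transcription of Algorithm 16.1 is sketched below; the resilience check reuses \(V_x(a,b)\) from Equation (16.4), and the values of \(n\), \(x\), and \({\cal P}\) in the example call are illustrative assumptions:

```python
from math import comb, floor

def vulnerability(a: int, b: int, x: int) -> float:
    # Average fraction of compromised channels after x break-ins (Eq. 16.4).
    k_c = floor(a - (a - b) * ((a - b) / a) ** (x - 1))
    return comb(k_c, b) / comb(a, b)

def determine_a_b(n: int, x: int, P: float):
    """Algorithm 16.1: choose (a, b) for a network of n nodes so that memory
    (Objective 1) and decryption cost (Objective 2) stay small while the
    resilience bound V_x(a, b) <= P (Objective 3) holds."""
    l = 2                                          # (1) smallest pool supporting n nodes
    while comb(l, l // 2) < n:
        l += 1
    a, b = l, l // 2
    while comb(a, b - 1) > n:                      # (2) shrink b while n is still supported
        b -= 1
    while comb(a + 1, b - 1) > n:                  # (3) trade one more public key for one
        a, b = a + 1, b - 1                        #     fewer private key
    while vulnerability(a, b, x) > P:              # (4) enlarge the pool until Eq. (16.4) holds
        if comb(a + 1, b - 1) > n:
            a, b = a + 1, b - 1
        else:
            a += 1
    return a, b                                    # (5) |K| = a public keys, b private keys

print(determine_a_b(n=1000, x=3, P=0.01))          # illustrative parameters
```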

16.3.4.2 Meeting Key Storage Constraint

The total number of memory slots for key storage is often limited to \(M\), where \(M\) is assumed large enough to support \(n\) nodes. In this case, we should fully utilize the memory slots to optimize Objective 2 and achieve the best resilience \(\displaystyle\frac{{C(k_c (x),b)}}{{C(a,b)}}\) given by Equation (16.4). This leads to Algorithm 16.2.

Algorithm 16.2 Determine value \(a\) and \(b\) with storage constraint

(1)  Let \(a = \lceil\frac{{2\,M}}{3}\rceil, b =\lfloor\frac{M}{3}\rfloor\);

(2)  While \((C(a + 1,b - 1)>n)\) do \(\{ a = a + 1, b = b - 1\}\);

(3)  Then \(\vert{\cal K}\vert = a\ and \ \vert {\cal K}_i\vert =b.\)
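
A corresponding Python sketch of Algorithm 16.2 is given below (the values of \(n\) and \(M\) in the example call are illustrative):

```python
from math import comb

def determine_a_b_with_memory(n: int, M: int):
    """Algorithm 16.2: spend all M key-storage slots, shifting slots from
    private to public keys while the network size n stays supported."""
    a, b = -(-2 * M // 3), M // 3          # a = ceil(2M/3), b = floor(M/3), so a + b = M
    while comb(a + 1, b - 1) > n:
        a, b = a + 1, b - 1
    return a, b

print(determine_a_b_with_memory(n=1000, M=20))     # illustrative parameters
```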

16.3.5 Key Allocation

For a given network size \(n\), we have determined \(a\) and \(b\). The key assignment should satisfy \({\cal K}_i \not\subseteq {\cal K}_j\) and \({\cal K}_i \not\supseteq {\cal K}_j\), so that the key allocation can support pair-wise secure communication for a network of size up to \(n = C(a,b)\). Assuming a single private key can be assigned to at most \(y\) nodes, we have \(b \times n = a \times y\) (both sides count the total copies of private keys in the system); therefore \(y = {\frac{b}{a}}\, n = { \frac{b}{a}}\,C(a,b)\). In the key allocation, we randomly assign \(b\) private keys to each node, where a single key may be assigned to at most \( \frac{b}{a}\,C(a,b)\) nodes; otherwise, we cannot obtain a valid key allocation. In the example of Table 16.2, each key is assigned four times, where \(a = 5\) and \(b = 2\). For a key assignment, we just need to assign a random unused private key combination to a node (in total, there are \(C(a,b)\) possible combinations). Algorithm 16.3 illustrates the procedure for assigning a subset of private keys to a node. Note that very small \(a\) and \(b\) can support a very large network: for example, ignoring the resilience requirement, \(a = 20\) and \(b = 4\) can support a network as large as 4845 nodes.

Algorithm 16.3 Key allocation

(1)  For the \(i\)th node \((i \leq C(a, b))\), randomly select \(b\) distinct private keys to generate a subset of keys, where none of these \(b\) private keys has already been assigned \( \frac{b}{a}\,C(a, b)\) times;

(2)  If the generated key set equals an already assigned key set, adjust keys one by one in the generated key set until an unassigned key set is obtained;

(3)  Assign the generated key set to node \(i\).

16.3.6 Secure Communication Protocols

We have shown how to determine \(a\) and \(b\) and how to assign a private-key set to a node when the network size \(n\) is given. In this section we specify the detailed protocols used for initialization, communication, and bootstrapping when new nodes are deployed. The initialization phase is performed before deployment. Since communication and bootstrapping are on-line procedures, they have to be very efficient in terms of communication overhead (using a small number of messages).

16.3.6.1 Initialization

The initialization phase assigns keys and identities to each node. A node's identity (ID) indicates which subset of private keys the node carries. If two nodes want to exchange a secure message, each needs to know the ID of the other. Node IDs do not have to form a contiguous range. After key allocation, each node knows the private keys assigned to it and all the public keys. We label the keys by numbers \(0,1,2,\ldots\). Let \(keyID_i^{\,j}\) be the ID of the \(i\)-th private key held by node \(j\); for each node \(j\), we have \(keyID_1^{\,j}<keyID_2^{\,j}<\cdots<keyID_b^{\,j}\). The ID field spans \(b \times \left\lceil {\log_2 a} \right\rceil\) bits, as shown in Fig. 16.1, with each \(keyID_i^{\,j}\) taking \(\left\lceil \log_2 a\right\rceil\) bits. The node ID is unique as long as each node is assigned a unique subset of private keys.

Fig. 16.1 ID field of node \(j\)

In the example of Table 16.2, user 7's private key set is \({\cal K}_7^{priv} = \{k_{priv}^2, {k}_{priv}^5\}\). Correspondingly, the ID of user 7 is “010|101”, where \(a = 5\) and \(b = 2\). A node thus automatically obtains an ID once it has been assigned a private-key set. If other peer nodes know user 7's ID, they can infer that user 7 holds private key number 2 \((k^2_{priv})\) and private key number 5 \((k^5_{priv})\). If user 7 claims a fake identity, other nodes will use the public keys represented by that fake identity to encrypt messages, which user 7 cannot decrypt. In this way, the SMOCK scheme is able to resist the Sybil attack.
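
The ID encoding of Fig. 16.1 and the reverse mapping used by a sender can be sketched in a few lines of Python; the helper names are illustrative:

```python
from math import ceil, log2

def node_id(private_key_ids, a: int) -> str:
    """Encode a node ID as the concatenation of its sorted private-key
    indices, each in ceil(log2 a) bits (Fig. 16.1); "|" is only a visual separator."""
    width = ceil(log2(a))
    return "|".join(format(k, f"0{width}b") for k in sorted(private_key_ids))

def private_keys_from_id(node_id_str: str):
    """Reverse mapping used by a sender to pick the encryption keys."""
    return [int(bits, 2) for bits in node_id_str.split("|")]

# User 7 of Table 16.2 holds private keys 2 and 5 out of a = 5 key pairs.
assert node_id({2, 5}, a=5) == "010|101"
assert private_keys_from_id("010|101") == [2, 5]
```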

16.3.6.2 Secure Communication

Figure 16.2 shows the protocol for secure communication between Alice and Bob, where Alice and Bob establish a secure channel. If Alice already knows Bob's ID, she can send an encrypted message (EncMsg) directly to Bob. Otherwise, she sends an ID request message to Bob, and Bob replies with his ID (possibly in plain text). After Alice receives Bob's ID, she can determine which private keys Bob holds, and she encrypts the message accordingly before sending it.

Fig. 16.2 Secure communication protocol between Alice and Bob

16.3.6.3 Bootstrapping to Accommodate New Nodes

In some cases, we need to deploy new nodes into an existing ad hoc network. In SMOCK, it is easy to make previously deployed nodes trust newly deployed nodes. Assume that n nodes are already deployed in a network with a public keys, each node stores b private keys, and m new nodes are to be added to the network. If \(n+m \le C(a,b)\) and the resilience requirement (Equation (16.4)) is still satisfied after we deploy the m additional nodes, then no bootstrapping is necessary: the newly deployed nodes can be assigned unused combinations of private keys from the existing key pool, held by the off-line trusted server, before they are deployed. However, if \(n+m>C(a,b)\) or the resilience requirement is violated after the incremental deployment, then the system needs to generate more key pairs, say \(a^{\prime}\) new key pairs. We can still assign b private keys to the additional nodes before their deployment. In addition, the newly generated public keys and \(a'\) are broadcast to the previously deployed nodes. Since b is fixed, the previously deployed nodes can adjust the existing ID field to span \(b\times\lceil\log_{2}(a+a^{\prime})\rceil\) bits.

It can be verified that, given C(a,b), incrementing a by 1 yields \(C(a,b-1)\) new valid key sets for new nodes. Therefore, with \(a^{\prime}\) new key pairs, the network is able to accommodate \(\sum\limits_{i=0}^{a^{\prime}-1}C(a+i,b-1)\) additional nodes. Note that keeping b unchanged and increasing a does not violate the resilience bound \({\cal P}\) given in Objective 3.
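
The extra capacity provided by \(a'\) new key pairs can be computed directly; the small example below (illustrative values) extends the \(a=5\), \(b=2\) network of Table 16.2:

```python
from math import comb

def extra_capacity(a: int, b: int, a_new: int) -> int:
    """Number of additional nodes that a_new extra key pairs can accommodate:
    sum_{i=0}^{a_new-1} C(a + i, b - 1), since raising a by one adds
    C(a, b-1) new valid key sets (because C(a+1, b) = C(a, b) + C(a, b-1))."""
    return sum(comb(a + i, b - 1) for i in range(a_new))

# Example: the a = 5, b = 2 network of Table 16.2 (10 nodes) gains room for
# 5 + 6 = 11 further nodes if two new key pairs are generated.
assert extra_capacity(5, 2, 2) == comb(5, 1) + comb(6, 1) == 11
```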

16.3.7 Evaluations

Next, we will show the performance evaluation of SMOCK based on key storage usage, communication overhead and resilience to break-ins.

16.3.7.1 Small Memory Footprint

In SMOCK, a few key pairs can support secure communication in a very large network. According to Algorithm 16.1, storing 18 keys at each node can support end-to-end secure communication among up to 1000 nodes without resilience consideration. In Fig. 16.3, we show the minimum number of keys needed at each node for typical mission-critical network sizes. Therefore, we can achieve a very small memory footprint under the SMOCK scheme.

Fig. 16.3 The minimum number of keys needed

A total of \(a\) public keys can support at most \(C(a, \lfloor\frac{a}{2}\rfloor)\) nodes in the network. By Stirling's approximation, the total number of key pairs required is of order \(\Theta(\log n)\), which is confirmed by Fig. 16.3. We conclude that the SMOCK scheme yields a very small memory footprint.

If we relax the storage limitation, the number of private keys needed decreases, and the computational complexity is reduced accordingly. Figure 16.4 shows the tradeoff between computational complexity and key storage space for different network scales, where the computational complexity is indicated by the number of private keys needed. The larger the storage space, the fewer private keys each node needs to keep, and thus the lower the computational complexity.

Fig. 16.4 Tradeoff between storage space and computational complexity (M is the total number of memory slots for key storage)

16.3.7.2 Communication Overhead for Key Management

Since SMOCK is a self-contained public-key management scheme, a node does not need to contact or trust other nodes for certificate verification. Communication for key management is needed only during the bootstrapping phase, when new nodes join the network, and during key revocation. Therefore, SMOCK has little communication overhead for key management.

16.3.7.3 Resilience to Break-ins

The break-in of any single node by an adversary does not release enough information to the adversary to break the secure communication of any other pair of nodes. However, break-ins of multiple nodes may compromise a set of other nodes. Assume \(x\) nodes are compromised and \(k_c(x)\) is the expected number of keys disclosed; as shown before, \(k_c(x)= \lfloor a-(a-b) \left(\frac{a-b}{a}\right)^{x-1}\rfloor\). Then a fraction \(\frac{C(k_c(x),b)}{C(a,b)}\) of the nodes will be compromised. Assuming \(n=100\), Fig. 16.5 shows the average percentage of compromised nodes when a small portion of the nodes is controlled by adversaries.

Fig. 16.5 Percentage of compromised nodes with break-ins \((b=4)\)

16.4 Broadcast Authentication with Resource Constraints

Public key schemes and one-way hash function schemes (TESLA) are two common cryptographic primitives for broadcast authentication (other alternatives are surveyed extensively in [24]). TESLA is an efficient protocol that utilizes one-way hash chains and delayed key disclosure to authenticate broadcast traffic. However, TESLA introduces a security vulnerability if used together with the “authenticate-first” policy: by the time the intermediate nodes are able to authenticate a packet Msg, the secret hash key hKey used to sign Msg has already been released by the source. Afterwards, no node can trust any newly received packet signed by hKey. Hence, an intermediate hop has to re-sign Msg under its own identity before forwarding it. By manipulating the re-signing process, a compromised node could flood as many packets as it wants and viciously claim that those packets were sent by other innocent sources.

In this section, we present a novel broadcast authentication scheme, called DREAM [16]. It effectively limits false data injection by having most nodes use the “authenticate-first” policy, based on public-key authentication. It also reduces end-to-end delay by allowing a small percentage of unverified packets to be forwarded probabilistically under the “forward-first” policy, so that remote nodes obtain broadcast messages quickly. The assumptions we make are: (a) devices are static or have moderate mobility; (b) free-to-move attackers want to spread as many false packets throughout the network as possible.

16.4.1 Main Idea

DREAM addresses the end-to-end delay issue by relaxing the containment requirement. The main idea is to allow a small and controlled number of packets to be transmitted to remote locations quickly, without authentication, in a probabilistic manner. A path segment over which packet \(P\) is not validated before forwarding is referred to as an unverified forwarding path for \(P\). Nodes along the unverified forwarding paths forward packets before authenticating them, while the other nodes, representing the majority of the network, authenticate packets before forwarding. Those unverified transmissions virtually reduce the network radius from the broadcast sources and thus decrease end-to-end delay.

DREAM is integrated with the underlying broadcast protocol (because suppression policies differ across broadcast protocols) and runs independently at each device. The DREAM architecture contains three modules, as shown in Fig. 16.6.

  • Packet Authentication Module is responsible for signing and validating packets. There are two operating modes yielding different delay and containment trade-offs.

  • Risk Management Module continuously monitors the contextual threat. When the evidence of false packet injection shows up, the module adjusts the operating mode in Packet Authentication Module to a more defensive and secure mode.

  • Neighbor Management Module periodically exchanges hello messages with one-hop neighbors. Hello messages flag both a node's liveness and its one-hop neighborhood size. In order to prevent malicious tampering, hello messages are signed and verified.

Fig. 16.6 DREAM architecture

The next two sections describe the Packet Authentication and Risk Management modules, respectively.

16.4.2 Packet Authentication

The Packet Authentication Module comprises two parts: (a) signing at sources and (b) verification and forwarding at receivers. Whenever a source sends out a broadcast packet, it signs the packet. Upon receiving a broadcast message, a node probabilistically decides whether to forward it first or authenticate it first. When a node authenticates a packet first, we call it an authenticating node. No matter which choice is made, each node will (a) forward a unique broadcast packet at most once (and not forward identified false packets); and (b) validate the packet and deliver the authentic one to applications. The message format is:

$$ID_{src},\ Seqno,\ Msg,\ PubKeySign_{src},\ HT,\ ID_{fwder}$$

wherein

$$PubKeySign_{src}=Sign_{PubKey}^{src}(ID_{src},Seqno, Msg)$$

\(ID_{src}\) and \(ID_{fwder}\) are the IDs of the source and the last forwarder, respectively. When the source signs Msg, \(ID_{fwder}\) is set to \(ID_{src}\), since the source is the last forwarder. \(PubKeySign_{src}\) is the public key signature generated by the source and is never changed during the forwarding process. HT is the number of hops traversed since the last authenticating node; it is reset to 0 at every authenticating node, and sources always set HT to 0. Each forwarder that forwards an unverified copy increases HT by 1. Here, we assume that receivers know the public key of the source; otherwise, the source's certificate should be broadcast as well.

Next, we describe the packet verification and forwarding algorithm at the receiver side, based on flooding.

We reduce average end-to-end delay at the cost of imperfect containment. Therefore, it is crucial to carefully control unverified forwarding. We have the following requirements:

  • The decision is independently made to avoid message negotiation.

  • The decision is probabilistically made to achieve load balancing and to prevent a single point of failure.

  • The number of hops that an unverified message is allowed to travel is controlled so that, on one hand, broadcast packets spread out in space fast and, on the other hand, false injected packets are contained near their originators.

  • The number of “forward-first” nodes is kept small to reduce the cost of communication and public key computation wasted on false packets.

Public key signature verification is an expensive operation and usually takes tens of milliseconds to seconds on resource-constrained wireless mobile devices. Hence, a verification queue is placed in DREAM to buffer the packets waiting for signature verification. We assume for simplicity that, at any moment, only one signature verification is performed, so that spare CPU resources are reserved for other tasks. A verification daemon process, whenever free, continuously monitors the verification queue. Once a packet is found at the head of the queue, the verification process removes the packet from the queue, verifies its public key signature, and delivers the authentic packet to applications. In addition, if the authentic packet has not been forwarded yet and is not under suppression, it rebroadcasts the packet. When the daemon process completes processing a packet, it becomes free again.

Upon receiving a broadcast message \(m\), a node \(\nu\) probabilistically decides whether to forward \(m\) first or authenticate \(m\) first, according to Algorithm 16.4. Rand is a random number drawn uniformly from [0, 1]. \(b\), \(c\), and \(K\) are system parameters: \(b\) and \(c\) are the expected numbers of neighbors in the one-hop neighborhood of the source and of the last forwarder \(m.ID_{fwder}\) (other than the source), respectively, that forward \(m\) first, and \(K\) is the maximum number of hops that \(m\) is allowed to travel without verification.

Algorithm 16.4 Verification and forwarding algorithm

input: an overheard broadcast message m

 1  if (overheard a message with the same (m.ID_src, m.Seqno) before) then return;
 2  if (m.HT == 0 and m.ID_fwder is an unknown neighbor) then return;
 3  if (I am a one-hop neighbor of m.ID_src) then
 4      prob = b / |Nbr(m.ID_fwder)|;
 5  else
 6      prob = 2 * c / |Nbr(m.ID_fwder)|;
 7  end
 8  if (Rand > prob or m.HT == K) then
 9      // authenticate m first
10      m.HT = 0;
11      place m into the verification queue;
12  else
13      // forward m first
14      m.HT++;
15      rebroadcast m;
16      place m into the verification queue;
17  end

Algorithm 16.4 incorporates two steps:

  (i) Suppression and Filtering (Lines 1–2): If \(\nu\) has already received a message with the same sequence number from the source, message \(m\) is dropped; thereby, a node forwards each broadcast message at most once. If \(\nu\) is one hop away from the last authenticator and that authenticator is an unknown neighbor, message \(m\) is ignored due to lack of trust.

  (ii) Probabilistic Pruning (Lines 3–17): \(\nu\) decides probabilistically whether to forward \(m\) first or authenticate \(m\) first. \(|Nbr(m.ID_{fwder})|\) is the neighborhood size of the last forwarder \(m.ID_{fwder}\); this value is available via the Neighbor Management module. \(b\) is usually set to a value (4, for instance) such that unverified broadcast messages can cover most directions. \(c\) is selected from [1, 2] to make it unlikely that all one-hop neighbors are pruned. Neither \(b\) nor \(c\) can be too large, because we need to control the number of unverified packets. The constant 2 before \(c\) in line 6 accounts for the fact that, on average, half of the neighbors of the last forwarder have already heard \(m\): considering the case where \(m\) is forwarded from node \(A\) to \(\nu\) via \(B\), on average half of \(B\)'s neighbors have already heard \(m\) broadcast by \(A\) and therefore suppress \(m\) according to line 1.

If \(m\) has already traversed \(K\) hops without verification, it is better to authenticate \(m\) first so as to confine its propagation in case it is a false packet. If \(\nu\) decides to authenticate \(m\) first, it resets the HT field and places \(m\) in the verification queue. If \(\nu\) decides to forward \(m\) first, it increments the HT field, rebroadcasts \(m\), and then places \(m\) in the verification queue. The verification process is responsible for rebroadcasting after signature verification; if \(m.HT>0\), the verification process knows that \(m\) has already been forwarded once.

Since we use \(|Nbr(m.ID_{fwder})|\) to make the probabilistic forwarding decision (lines 4 and 6), the network topology should be relatively stable so that the neighbor information collected during the Hello period remains accurate. Otherwise, inaccurate or outdated neighborhood sizes may cause more or fewer nodes than intended to forward the packet first.
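For concreteness, the following Python sketch mirrors the decision logic of Algorithm 16.4; the packet fields and the `state` object (duplicate cache, neighbor table, rebroadcast and queue handles) are simplified placeholders rather than the protocol's actual data structures.

```python
import random

B, C, K = 4, 1.5, 3          # illustrative values for the system parameters b, c, K

def on_broadcast(m, state):
    """Simplified sketch of the decision logic in Algorithm 16.4."""
    # Lines 1-2: suppression and filtering
    if (m.ID_src, m.seqno) in state.seen:
        return                                     # already handled this broadcast
    state.seen.add((m.ID_src, m.seqno))
    if m.HT == 0 and m.ID_fwder not in state.known_neighbors:
        return                                     # last authenticator is not a trusted neighbor

    # Lines 3-7: forwarding probability from the last forwarder's neighborhood size
    nbr_size = state.neighbor_size(m.ID_fwder)
    if m.ID_src in state.known_neighbors:          # this node is one hop from the source
        prob = B / nbr_size
    else:
        prob = 2 * C / nbr_size

    # Lines 8-17: authenticate first or forward first
    if random.random() > prob or m.HT == K:
        m.HT = 0                                   # authenticate first
        state.verification_queue.put(m)
    else:
        m.HT += 1                                  # forward first
        state.rebroadcast(m)
        state.verification_queue.put(m)
```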

16.4.3 Risk Management

Nodes operating as in the previous section are said to be in Normal Mode. Via signature verification, they can detect the emergence of a false packet attack when the number of packets received with invalid signatures in a time interval exceeds a predetermined threshold. They then switch to the hop-by-hop authentication scheme and disable the "forward first" policy entirely to contain false packet injection in a defensive way; they are then said to be in Alert Mode. Conversely, when the false packet attack subsides, nodes switch back to Normal Mode to regain the improved end-to-end delay. The transition between the two modes is shown in Fig. 16.7.

Fig. 16.7
figure 7

Adaptation to contextual threat

Each node continuously monitors the number of detected false packets in every riskWindow interval. The node switches to Alert Mode if this number reaches \(\alpha\) and switches back to Normal Mode if this number drops to \(\beta\). \(\alpha\) and riskWindow together determine the tolerable percentage of CPU resources spent evaluating false packets before switching to Alert Mode. Suppose the processing time to validate one public key signature is \(T_{val}\) seconds; then a node can tolerate

$$\frac{T_{val} \cdot \alpha}{riskWindow}$$

of its CPU time wasted on validating false packets. For mission critical networks, this percentage can be set to a small value. \(\beta\) is introduced to avoid frequent mode switches and instability.

Switching to Alert Mode adversely increases end-to-end delay. However, it isolates the rest of the network from infection and protects those nodes' computation and communication resources. It is expected that, after convergence, only the nodes around the attackers remain in Alert Mode; other nodes can still receive packets quickly through alternative broadcast paths.
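A rough sketch of this mode-switching logic, using the \(\alpha\), \(\beta\), and riskWindow values from the grid-topology evaluation, is shown below; the class structure is our own illustration.

```python
ALPHA, BETA = 6, 1           # switch-up / switch-down thresholds (example values)
RISK_WINDOW = 6.0            # length of the monitoring interval in seconds

class RiskMonitor:
    """Tracks invalid signatures per riskWindow and toggles Normal/Alert Mode."""
    def __init__(self):
        self.mode = "NORMAL"
        self.false_count = 0             # invalid signatures seen in this window

    def on_invalid_signature(self):
        self.false_count += 1

    def on_window_end(self):
        """Invoked once every RISK_WINDOW seconds."""
        if self.mode == "NORMAL" and self.false_count >= ALPHA:
            self.mode = "ALERT"          # disable forward-first, authenticate hop by hop
        elif self.mode == "ALERT" and self.false_count <= BETA:
            self.mode = "NORMAL"         # resume probabilistic forward-first
        self.false_count = 0             # start a fresh riskWindow
```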

16.4.4 Evaluation

In this section, we evaluate the performance of DREAM in the ns-2 network simulator by comparing it with the hop-by-hop authentication scheme and the Dynamic Window (DW) scheme proposed in [30]. In the Dynamic Window scheme, the forwarding decision is made by comparing the size of a locally maintained dynamic window on each sensor node with the number of hops the incoming message has traversed since its last authentication: if the window size is larger, the node forwards first; otherwise, it authenticates first. An Additive Increase Multiplicative Decrease (AIMD) technique is used to manage the window dynamically: if an authentic message is received, the window size increases; otherwise, it decreases.
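The window management can be paraphrased as in the sketch below; it reflects our reading of the scheme in [30] with the additive-increase and multiplicative-decrease constants used later in the evaluation, and is not the authors' reference code.

```python
class DynamicWindow:
    """AIMD-managed window used to decide forward-first vs. authenticate-first."""
    def __init__(self, initial=64, add=1, mult=2):
        self.size = initial              # current window size
        self.add = add                   # additive increase on an authentic message
        self.mult = mult                 # multiplicative decrease on a false message

    def forward_first(self, hops_since_last_auth):
        # Forward first when the window exceeds the hops traversed since
        # the last authentication; otherwise authenticate first.
        return self.size > hops_since_last_auth

    def on_verified(self, authentic):
        if authentic:
            self.size += self.add
        else:
            self.size = max(1, self.size // self.mult)
```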

Our evaluation criteria capture two aspects: the penalty imposed on authentic messages and the capability to contain false messages. We use the following metric for legitimate packets:

  • The average end-to-end delay: the end-to-end (authentication) delay of packet \(m\) from broadcast source \(src\) at node \(\nu\) is defined as the interval between the moment \(src\) broadcasts \(m\) into the wireless network and the moment \(\nu\) finishes verifying \(m\).

We have the following metrics for false packets:

  • The number of nodes forwarding the false packets.

  • The number of nodes receiving the false packets.

  • The number of public-key signature validations.

The default public-key signature validation time is 0.5 s. At any moment, a device can perform only a single public key validation; all other validation requests (up to 50) are queued. The public-key signature is 40 bytes. Each node is equipped with an omni-directional antenna operating on a single channel, and the channel model used in the simulation is the two-ray path-loss propagation model. The broadcast traffic is sent as CBR (constant bit rate) traffic over UDP with a packet size of 64 bytes. The 802.11 DCF medium access control protocol is used with its default configuration, and the transmission range is 250 m.

We investigate two network topologies, i.e., grid topology and random topology.

16.4.5 Grid Topology

The first scenario we study is the grid topology, and each data point is averaged over three simulation runs. Since the delay penalty is critical in large-scale networks with a long diameter, we place 400 nodes in a grid: each row (and column) contains 20 nodes equally spaced 240 m apart. A single broadcast source located at the top left corner continuously sends CBR traffic at the rate of one packet every 2 s. Under this setting, the average network radius from the broadcast source is 10 hops.

First, we study the end-to-end authentication delay for legitimate traffic in the absence of attackers. We vary K from 3 to 5 and c from 1.0 to 2.0. Due to space limitations, we do not evaluate the sensitivity to b; b is set equal to c. The length of riskWindow is 6 s, \(\alpha = 6\), and \(\beta = 1\). For the Dynamic Window scheme, the initial window size is set to 64, and the additive-increase and multiplicative-decrease values are 1 and 2, respectively, the defaults in the authors' paper.

As shown in Fig. 16.8, the average end-to-end delay under hop-by-hop authentication is the worst, above 4.6 s, because the average path length from the broadcast source to destinations is 10 hops and the public-key validation delay is 0.5 s. The average end-to-end delay under the Dynamic Window scheme is clearly the best, at about half a second: since no false packets are detected, the window size is at least 64, which is always greater than HT, the number of hops traversed since the last authenticator. The performance of DREAM is shown by the middle four lines. As c increases, the end-to-end delay decreases, since more nodes are inclined to forward packets first, which virtually shortens the end-to-end path. However, once c exceeds 1.5, the improvement levels off. When K increases, the end-to-end delay also decreases, because the distance between two successive authenticators is lengthened and the entire network can be covered quickly.

Fig. 16.8
figure 8

End-to-End delay

Next, we study the containment capability of DREAM and its response to malicious injection in Figs. 16.9 and 16.10. Clearly, if attackers always flood fewer than \(\alpha\) false packets per riskWindow interval, DREAM never switches to Alert Mode; however, this is not in the attackers' best interest, so we focus on high-rate false injection. We use the same configuration as before, except that an attacker near the source at the top left corner of the grid floods packets at the rate of one packet per second. During the false injection attack, the attacker's neighboring nodes unavoidably receive the false packets and validate them, but our scheme effectively confines the false packets to a small number of nodes. The numbers of false packets are normalized by the total number of false packets sent by the attackers; the normalized values stay below 45 for both DREAM and the Dynamic Window scheme, since both switch to a defensive mode. However, before the dynamic window size in the DW scheme is adjusted down, many false packets are broadcast to the whole network; furthermore, the mixture of good and false traffic hinders the shrinking of the dynamic window. For every packet sent by the attacker, the Dynamic Window scheme has 12 nodes forwarding, 43 nodes receiving, and 19 nodes verifying the message. In contrast, the containment performance of DREAM is close to that of hop-by-hop authentication. The impact of false injection increases as c or K increases, but the transmission and computational resources wasted on false packets remain under control.

Fig. 16.9
figure 9

Resilience against false injection

Fig. 16.10
figure 10

Normalized number of false packets validated

The end-to-end authentication delay in the presence of the attacker is shown in Fig. 16.11. The delay for both DREAM and the Dynamic Window scheme increases compared with the case without attackers.

Fig. 16.11
figure 11

End-to-End delay

From the above measurements, we see a clear tradeoff between end-to-end authentication delay and containment capability, i.e., the public-key signature computation and forwarding overhead spent on false packets.

16.4.6 Random Topology

The second scenario we study is a random topology with 400 nodes randomly placed in a 4000 m × 4000 m area; we make sure that the resulting topology is connected. One broadcast source is randomly selected to send CBR traffic at a rate of 2 packets per 3 s, and two attackers are randomly selected to send CBR traffic at a rate of 1 packet per 2 s each. Nodes switch to Alert Mode whenever the number of false packets in a 6-second interval reaches 3 and fall back to Normal Mode when this number drops to 1. Since the processing time for one public-key signature validation is 0.5 s, a node tolerates at most \(3 \times 0.5/6 = 25\%\) of its CPU time spent evaluating false signatures.

The end-to-end delay for two example configurations of DREAM is compared with that of the Dynamic Window and hop-by-hop authentication schemes in Fig. 16.12. The x-axis shows three random scenarios with different node placements. As usual, hop-by-hop authentication has the worst end-to-end delay for authentic traffic. DREAM with c = 1.3 halves the end-to-end delay, with further improvement at c = 1.5. Even though the Dynamic Window scheme has the best end-to-end delay, it is not resilient to false packet injection mixed with the legitimate traffic. As shown in Table 16.3, for the 330 false packets injected in total, the Dynamic Window scheme forwards false packets more than 5000 times in order to achieve its delay, whereas DREAM forwards only tens of false packets before the nodes around the attackers switch to Alert Mode. Due to space limitations, we omit the numbers of false packets received and validated; those results are similar to the number of false packets forwarded.

Fig. 16.12
figure 12

End-to-End delay

Table 16.3 Normalized number of false packets forwarded

16.5 Thoughts for Practitioners

We implemented SMOCK in C on Linux in the context of the Trustworthy Cyber Infrastructure for Power grid (TCIP) project. In our implementation, all SMOCK public keys are sent over an SSL channel from the trusted authority before secure communication begins. We measured the time taken to encrypt and decrypt a message on a standard IBM T60 laptop. With a pair of private keys held per node, the measured encryption and decryption times are 9.82 ms and 4.88 ms, respectively, for a 1024-bit key and a 32-byte data packet. However, on low-end sensors or hand-held devices, encryption and decryption could take much longer, on the order of seconds. SMOCK can then be used to negotiate lightweight session keys, such as symmetric keys.
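As a rough illustration of this kind of micro-benchmark, the sketch below times a single 1024-bit RSA encryption and decryption of a 32-byte message using the Python `cryptography` package; it does not reproduce SMOCK's multi-key construction or our C implementation.

```python
import os
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Hypothetical micro-benchmark: 1024-bit key, 32-byte payload (as in the text).
key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
pub = key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
msg = os.urandom(32)

t0 = time.perf_counter()
ct = pub.encrypt(msg, oaep)          # public-key encryption
t1 = time.perf_counter()
pt = key.decrypt(ct, oaep)           # private-key decryption
t2 = time.perf_counter()

assert pt == msg
print(f"encrypt: {(t1 - t0) * 1000:.2f} ms, decrypt: {(t2 - t1) * 1000:.2f} ms")
```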

DREAM shares a deficiency with the dynamic window scheme: it relies on public key cryptography. This is acceptable for low broadcast traffic rates and infrequent communication; however, efficient broadcast authentication built on lightweight cryptographic primitives is still needed.

16.6 Direction for Future Research

SMOCK represents our initial effort to make the memory overhead of public key pre-distribution scalable to a large number of devices. Future improvements can be made in the following directions: (1) performing private key combination updates upon compromise detection; (2) further reducing memory consumption through location-aware combination assignment (for example, a group of nodes that are likely to communicate with each other is assigned private-key combinations that share many keys).

The idea behind DREAM of reducing end-to-end authentication delay is applicable to broadcast protocols other than flooding. Further effort can be made to integrate DREAM with advanced broadcast forms [28, 31] and test their performance. DREAM relies on probabilistic decisions to push unverified packets away from the source, but the unverified forwarding path may not be optimal for quick coverage; the GPSR routing protocol [18] could be used to deterministically forward unverified packets farther along the best direction. Additionally, countermeasures against attacks on broadcast suppression are worth investigating, wherein attackers pretend to be the broadcast source and send packets with higher or equal sequence numbers before the true source does.

16.7 Conclusions

Security is a challenging and important issue for wireless ad hoc networks. To design better security solutions, we need to understand the network model and attacker model correctly. Moreover, security incurs overhead: the resource constraints of mobile devices, such as memory, computation, communication, and energy, need to be carefully considered in any solution. To balance resource constraints, security, and real-time performance requirements, adaptivity is a promising approach for mission critical wireless networks.

16.8 Terminologies

  1. 1.

    Self-contained key management scheme

    A key management scheme where all necessary cryptographic keys (certificates) are stored at individual nodes before deployment.

  2. 2.

    Key Pool

    A set of public–private key pairs generated at the trusted server in the initialization phase.

  3. 3.

    Private key set

    A set of private keys held by a user.

  4. 4.

    A valid key allocation

    A key allocation satisfying the following property (see the sketch after the terminology list):

    $$\forall i\ \forall j,\ i \neq j:\ \lnot\left(K_i \subseteq K_j \ \text{or}\ K_j \subseteq K_i\right)$$
  5. 5.

    Isometric key allocation

    A key allocation is isometric, if \(|{\rm K}_1|=|{\rm K}_2|=\cdots =|{\rm K}_{{\rm n}}|={\rm b}.\)

  6. 6.

    Vulnerability metrics

    The percentage of communications compromised when x nodes are broken into.

  7. 7.

    False injection containment

    Fake messages are detected and dropped near the injector.

  8. 8.

    Hop-by-Hop Message Authentication

    An authentic packet is forwarded to next-hop after being validated. False packets are dropped after validation failure.

  9. 9.

    Normal Mode

    Nodes probabilistically forward some packets without authentication so as to achieve small end-to-end delay, at the cost of imperfect false packet containment.

  10. 10.

    Alert Mode

    Upon the emergence of false packet attacks, nodes switch to hop-by-hop authentication and operate in a defensive way.
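To make the validity property in Terminology 4 concrete, here is a minimal Python sketch that checks whether a key allocation is valid; the function name and data representation (one set of key indices per node) are our own illustration, not part of SMOCK.

```python
from itertools import combinations

def is_valid_allocation(key_sets):
    """Validity property of a key allocation (Terminology 4):
    no node's private key set may be a subset of another node's set."""
    for a, b in combinations(key_sets, 2):
        if a <= b or b <= a:        # set containment in either direction
            return False
    return True

# Example: three nodes each holding b = 2 keys out of the pool {1, 2, 3}
print(is_valid_allocation([{1, 2}, {1, 3}, {2, 3}]))   # True
print(is_valid_allocation([{1, 2}, {1, 2, 3}]))        # False: first set is contained
```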

16.9 Questions

  1.  (1)

    Give a formal definition of vulnerability to compromise. For the different key management schemes mentioned in Sections 16.2 and 16.3, compare their vulnerability metrics.

  2.  (2)

    How can collisions between the private key sets assigned to a pair of users be eliminated?

  3.  (3)

    Which key management schemes do not support full connectivity?

  4.  (4)

    Compare SMOCK with the certificate-based approach.

  5.  (5)

    In the random key distribution mechanism, the probability that any pair of nodes possesses at least one common key is

    $$1-\frac{\left((K-k)!\right)^{2}}{(K-2k)!\,K!}$$

    In the modification of the basic random key distribution scheme, multiple common keys are required to establish a secure link in the key-setup phase so that resilience against node compromise improves. What is the probability that any pair of nodes possesses at least i common keys? K and k are defined as before.

  6.  (6)

    Suppose b = 3; derive the required size of the key pool for a network of 10,000 nodes, without considering memory and resilience objectives. In this setting, what is the size of the ID field? What is the ID of Node 300 if it has been assigned the private key set \(\{{\rm k}_{{\rm priv}}^{30},\,{\rm k}_{{\rm priv}}^{2},\,{\rm k}_{\rm priv}^{15}\}\)?

  7.  (7)

    Explain why hop-by-hop TESLA does not work in broadcast authentication.

  8.  (8)

    In DREAM, why does a packet include the HT field?

  9.  (9)

    Design a packet authentication scheme with containment capability for unicast data traffic.

  10. (10)

    Is DREAM resilient to compromise?