
1 Introduction

Location-based services (LBS) have provided exciting opportunities to use position-related information such as location, proximity, or distance to improve system security, control access, and personalize service delivery [14, 16, 29]. One of the earliest applications of user location is securing authentication systems [9] against the man-in-the-middle (MiM) attack. Secure authentication protocols are challenge-response protocols between a prover and a verifier. In an MiM attack against these protocols, an attacker runs two simultaneous sessions of the protocol, one with an honest prover and one with the (honest) verifier, and by relaying the prover's responses to the verifier, succeeds in making the protocol accept its claim. Desmedt et al. [9] showed that this attack, which has no purely cryptographic solution, can be prevented if the verifier uses an estimate of the prover's location (e.g., distance to the verifier) as a second factor in authentication. Distance (Upper) Bounding (DUB) protocols [5] are challenge-response authentication protocols that provide cryptographic authentication security with the extra guarantee that the user is within a distance bound of the verifier. These protocols have been widely studied, their security has been formalized, and protocols with provable properties have been proposed [4, 5, 10, 22]. Successful authentication allows the user to perform privileged actions, e.g., open a car door [12] or access a special system resource.

In this paper we consider the problem of controlling access with respect to a region \(\mathcal {R}\), called the policy region. The user has to “prove” to the verifier(s) that (i) they have the secret key \(k_u\), and (ii) they are within the region \(\mathcal {R}\). This setting arises naturally when a privileged service is offered in a region \(\mathcal {R}\). For example, a project team in a software development company may access proprietary project information only while within their work area. In this setting the authentication protocol must prove the conjunction,

$$\begin{aligned} User \text { has the shared secret }k_u \wedge (User \text { is in } \mathcal {R}). \end{aligned}$$
(1)

We propose a new authentication system, called in-Region Authentication (inRA), that proves the above conjunction holds.

A simplistic solution for proving the conjunction (1) is to use a secure cryptographic authentication protocol to let the user prove that they know the secret \(k_u\), and then a secure location verification protocol to prove their location. This solution, however, is insecure for two reasons: first, proving the two clauses separately enables new attacks, for example the prover changing location between the two steps; second, secure location verification protocols [8, 19] start with the prover claiming a location, which requires access to a GPS signal. This not only limits the protocol to locations where a GPS signal is available, but also opens the possibility of GPS spoofing attacks [24].

One can combine the two steps when \(\mathcal {R}\) is a circular region by employing a secure DUB protocol: the verifier is placed at the center of the region and the distance bound is chosen as the radius of \(\mathcal {R}\) (Fig. 1a). This works perfectly because \(\mathcal {R}\) coincides exactly with the circle associated with the bound of the DUB protocol. For arbitrary \(\mathcal {R}\), one can use an approximate cover with one or more verifiers (see Fig. 1b and c): the prover must prove its distance to the verifier corresponding to each part of the region. This is the approach taken in [17] to solve the closely related problem of in-region location verification, where the goal is to verify that the prover is within the region, without requiring secure authentication of the user or quantifying the error.

Using multiple verifiers to cover \(\mathcal {R}\) requires determining the verifier configuration, which is specified by (i) the number of verifiers, (ii) their locations, and (iii) their associated distance bounds. Note that the error associated with a configuration does not have a closed algebraic form, so one cannot use traditional optimization methods to find the optimal configuration; this holds even when the number and locations of the verifiers are fixed.

Fig. 1. Location verification for (a) a circular region \(\mathcal {R}\) that is perfectly covered by a single verifier placed at its center, (b) an arbitrarily shaped region \(\mathcal {R}\), where a single verifier does not give perfect coverage, and (c) an arbitrarily shaped region \(\mathcal {R}\) with multiple verifiers placed inside \(\mathcal {R}\) ([17]’s approach), which also does not give perfect coverage.

Our Work

Model. Our goal is to design provably secure authentication protocols that allow the prover to prove the conjunction (1) while minimizing protocol error. In Sect. 3 we formally define an inRA system for a set of registered users (provers), a set of unregistered users, and a set of collaborating verifiers, together with an inRA protocol whose correctness is defined using FA (false acceptance) and FR (false rejection) with respect to the policy region \(\mathcal {R}\). Our security definition formalizes attacks that involve a malicious prover outside \(\mathcal {R}\), an unregistered user inside \(\mathcal {R}\), and a collusion between a malicious prover and a helper who is inside \(\mathcal {R}\). A significant challenge in modelling and achieving security is that the prover may move between its interactions with different verifiers. Our security model uses ITMs (Interactive Turing Machines) to model the prover and verifiers and does not formalize time (prover movement). We assume prover movement is detected through other mechanisms; our protocol introduces such a mechanism, allowing us to use our security model.

Construction. Armed with this model and definition, we propose a systematic approach to designing inRA protocols that prove the conjunction (1) with quantifiable correctness error, and give an efficient algorithm for minimizing the error (see below). In its basic form the approach uses two verifiers \(V_0\) and \(V_1\), and covers the region \(\mathcal {R}\) with a pseudo-rectangle (P-rect) \(R'(V_0,V_1)\) formed by two rings centered at the two verifiers (see Fig. 2 and Sect. 3). A ring is formed by a verifier running with the prover a DUB protocol followed by a Distance Lower Bounding (DLB) protocol [28] (which guarantees a lower bound on the prover's distance to the verifier; see Sect. 2). The two verifiers work in tandem, the second immediately following the first. Verifiers use omnidirectional antennas during protocol initialization, and directional antennas for the challenge-response phase.

This basic inRA protocol approximates \(\mathcal {R}\) with a P-rect, which results in FA and FR errors. We define the total error as the sum of the FA and FR errors and aim to minimize it. Our approach can be easily extended to the case where the two types of error have different significance; see Sect. 6.

Minimizing Error. For fixed locations of \(V_0\) and \(V_1\), the total error is a function of the distance bounds of the two verifiers. One could minimize the error by brute force, computing the error for every possible pair of distance bounds and selecting the minimum, but this is infeasible in practice. We give a novel approach that uses an algorithm for the maximum subarray problem [13] to find a P-rect that provably minimizes the total error in the approximate coverage of \(\mathcal {R}\). The algorithm has complexity \(O(n^3)\), where n is the size of \(\mathcal {R}\) represented as a point set. The basic algorithm can be applied multiple times with more verifiers to increase accuracy. In Sect. 6 we show that using two P-rects to cover the region reduces the total error by up to 15%. We leave the problem of optimizing the number and locations of the verifiers as an interesting direction for future work.

Security Proof. In our basic protocol (Sect. 4) we use a novel approach to detecting prover movement during protocol execution: each verifier plays the role of an observer for the other verifier's interaction with the prover. We then use our security model to prove security against attacks. We also show how a new attack, called the key splitting attack, which arises from using a pair of DUB and DLB protocols with two verifiers, can be prevented by using the keys shared with both \(V_0\) and \(V_1\) to generate the fast-phase responses to each verifier.

Implementation and Experimental Results. We implemented the optimization algorithm for two verifiers and applied it to four policy regions corresponding to buildings at our university (Sect. 6). We started with a \(640 \times 640\) Google Maps image of each policy region and converted it into a binary image for a point-set representation. To achieve higher accuracy, we used two P-rects to cover the policy region. Table 1 summarizes our results. The highest accuracy is obtained for the most regularly shaped (rectangular) region. In all cases FA and FR range from \(0.81\%\) to \(5.16\%\) and from \(3.89\%\) to \(5.58\%\), respectively.

We compared our approach with the scheme of Sastry et al. [17], the only system with comparable security goals and assumptions. The comparison (Sect. 6) clearly shows the superior performance of our approach: [17] uses 5 verifiers to achieve \(93\%\) accuracy with an informal security analysis, while we use 2 verifiers, achieve \(96.4\%\) accuracy, and provide a formal security proof.

Extensions. One can define weights for each type of FA and FR error depending on the application, and apply the optimization approach to the weighted error function. This raises numerous interesting open questions, such as optimizing the total error when there are more than two verifiers and one needs to select their locations and distance bounds. We leave these for future work.

Organization. Section 2 is preliminaries. Section 3 describes our inRA model. Section 4 details the inRA protocol \(\varPi _{rect}\) and its security analysis. Section 5 presents our approach to minimizing error. Section 6 includes our experimental results. Section 7 presents related work and Sect. 8 concludes the paper.

2 Preliminaries

Distance Bounding. Secure distance bounding protocols have three phases: (i) initialization phase, (ii) challenge-response phase, and (iii) verification phase. The round-trip time of a challenge and response is used to estimate distance. The goal of a Distance Upper Bounding (DUB) protocol is to ensure that a prover P located at distance \(d_{PV}\) from the verifier satisfies \(d_{PV} \le \mathcal {B}_U \), where \(\mathcal {B}_U\) is a fixed upper bound.
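Concretely, a verifier converts a measured round-trip time into a distance estimate and compares it with the bound. A minimal sketch (the function name and the processing-delay parameter are illustrative, not from the paper):

```python
# Verifier-side distance upper-bound check (illustrative sketch).
C = 299_792_458.0  # speed of light in m/s

def dub_accept(rtt_seconds: float, upper_bound_m: float,
               processing_delay_s: float = 0.0) -> bool:
    """Accept iff the distance implied by the round-trip time is <= B_U."""
    # The signal covers the prover-verifier distance twice during one RTT.
    d_pv = C * (rtt_seconds - processing_delay_s) / 2.0
    return d_pv <= upper_bound_m
```

Note that a dishonest prover can always delay its response (enlarging the apparent distance) but cannot respond earlier than the signal propagation allows, which is why DUB resists distance reduction but not enlargement.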

The main attacks on distance bounding protocols are: (i) distance fraud attack, where a far-away dishonest prover tries to claim a shorter \(d_{PV}\) and be accepted by V; (ii) mafia fraud attack, where an external attacker uses the communication of an honest prover to get accepted by the verifier; and (iii) terrorist fraud attack (also known as collusion attack), where a dishonest prover gets help from a helper close to the verifier in order to be accepted. A number of formal security models that capture the above attacks, and protocols with provable security, have been proposed [10, 22]. Secure DUB protocols are vulnerable to distance enlargement attacks but not to distance reduction attacks [6].

The goal of distance lower bounding (DLB) protocols [28] is the converse: a prover wants to prove that their distance to the verifier is larger than a given bound. Zheng et al. [28] showed that one cannot simply use DUB protocols to guarantee a lower bound on the distance of the prover. They proposed a security model for DLB that is in line with the DUB security model, and constructed a DLB protocol with provable security in their model. Our construction of the inRA protocol \(\varPi _{rect}\) uses a DLB protocol together with a DUB protocol.

Maximum Subarray Problem. Optimizing the P-rect uses the maximum subarray problem (MSP), first proposed in [13]. The problem is to select a contiguous segment of an array that has the largest sum over all possible segments. Efficient MSP algorithms have applications in computer vision, data mining, and genomic sequence analysis [11, 21, 25]. For a 2D array \(a[1{\dots }m][1{\dots }n]\), the maximum subarray M is given by [3],

$$\begin{aligned} M = \max \left\{ \sum _{x=i}^{j} \sum _{y=g}^{h} a[x][y] \;\Bigg |\; 1 \le i \le j \le m,\ 1 \le g \le h \le n \right\} \end{aligned}$$
(2)

Known solutions have cubic or sub-cubic complexity [20]. To find the P-rect with the lowest total error, or equivalently the maximum accuracy, we use the FindOptimalBounds algorithm (Sect. 4), which builds on the extended Kadane's algorithm [3].
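The extended (2D) Kadane's algorithm can be sketched as follows; this is a generic implementation of the standard technique, not the paper's FindOptimalBounds itself:

```python
def max_subarray_1d(a):
    """Kadane's algorithm: largest sum of a contiguous segment of `a`."""
    best = cur = a[0]
    for v in a[1:]:
        cur = max(v, cur + v)      # extend the current segment or restart
        best = max(best, cur)
    return best

def max_subarray_2d(grid):
    """Extended Kadane: largest sum over all axis-aligned subrectangles.

    For each pair of rows (top, bottom), collapse the rows into column sums
    and solve the resulting 1D problem; overall complexity O(m^2 * n)."""
    m, n = len(grid), len(grid[0])
    best = grid[0][0]
    for top in range(m):
        col = [0] * n
        for bottom in range(top, m):
            for j in range(n):
                col[j] += grid[bottom][j]   # sums of rows top..bottom per column
            best = max(best, max_subarray_1d(col))
    return best
```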

3 In-Region Authentication Systems

Consider a two-dimensional planar connected (path-connected) geographic area represented by an array of points, each point representing a geolocation. Let \(\mathcal {U}\) denote the universe of all points of interest, and let \(\mathcal {R} \subset \mathcal {U}\) be the policy region. There are multiple parties, each represented by a polynomially bounded Interactive Turing Machine (ITM) and associated with a location loc.

A protocol instance between two honest parties P and V is modelled by a probabilistic experiment in which each party runs its algorithm on its input and random coins. This is denoted by \(P(x;r_P) \leftrightarrow V(y;r_V)\), where x and y are the inputs and \(r_P\) and \( r_V\) are the random coins of the two participants, respectively. We can “enlarge” the experiment to include an adversary's algorithm, written \(P(x;r_P) \leftrightarrow A(r_A) \leftrightarrow V(y;r_V)\), meaning that an adversary A interferes with the communication between the honest participants.

inRA Protocols. Let \(\mathcal {R}\) be a connected policy region (Fig. 2). The verifying system consists of a set of verifiers \(\mathcal {V}= \{ V_0\cdots V_{m-1}\}\) with publicly known locations. Verifiers are trusted to follow the protocol and can communicate among themselves through secure channels to exchange information and coordinate their actions. Verifiers are equipped with directional antennas whose signals can be received in a conic region of space that covers \(\mathcal {R}\). A prover P, with location \(loc_P\), has shared keys with the verifier set \(\mathcal {V}\). The prover is not trusted.

Fig. 2. (a) The policy region \(\mathcal {R}\) is the yellow arbitrarily shaped region. The blue (almost) rectangular area is the P-rect \(\mathcal {R}_\varPi \) for the inRA protocol in Sect. 4. The dark blue area of \(\mathcal {R}\) is correctly covered; the remaining yellow and blue areas are \(FR_{\varPi ,\mathcal {R}}\) and \(FA_{\varPi ,\mathcal {R}}\), respectively. (b) The upper intersection forms \(R_{rect}\) to cover \(\mathcal {R}\) (blue). The lower intersection forms \(R'_{rect}\) (red), an ambiguous region. (Color figure online)

An in-region authentication protocol is a protocol \(\varPi \) between P and \(\mathcal {V}\), at the end of which \(\mathcal {V}\) outputs \(Out_\mathcal {V} = 0\) or 1, denoting rejection or acceptance of the prover's claim, respectively. The prover does not produce an output, so \(Out_\mathcal {V}\) is the protocol output. The prover's claim is stated as the conjunction in (1).

DUB protocols can be seen as inRA protocols where the second proposition is, P is within a distance bound from the verifier.

Error and Accuracy in inRA Protocols. Consider an instance of a protocol \(\varPi \) between an honest prover P and the verifier set, in the absence of an adversary. Let \(\mathcal {R}_\varPi \subset \mathcal {U}\) denote the set of points \(u \in \mathcal {U}\) for which \(\varPi \) outputs \(Out_\mathcal {V} = 1\). We define two types of errors for the protocol \(\varPi \) with respect to the region \( \mathcal {R}\): \(FA_{\varPi ,\mathcal {R}}\) and \(FR_{\varPi ,\mathcal {R}}\), denoting false acceptance and false rejection, respectively, where \(FA_{\varPi ,\mathcal {R}}\) is the set of locations in \(\mathcal {R}_\varPi \setminus \mathcal {R}\), and \(FR_{\varPi ,\mathcal {R}}\) is the set of locations in \(\mathcal {R} \setminus \mathcal {R}_\varPi \). The accuracy ratio can be defined as follows [15]:

$$\begin{aligned} Accuracy \ ratio = \frac{TA_{\varPi ,\mathcal {R}} + TR_{\varPi ,\mathcal {R}}}{TA_{\varPi ,\mathcal {R}} + TR_{\varPi ,\mathcal {R}} + FA_{\varPi ,\mathcal {R}} + FR_{\varPi ,\mathcal {R}}} \end{aligned}$$
(3)

where \(TA_{\varPi ,\mathcal {R}}\) and \(TR_{\varPi ,\mathcal {R}}\) denote the true acceptance and true rejection sets: \(TA_{\varPi ,\mathcal {R}}\) is the set of points in \( \mathcal {R} \cap \mathcal {R}_\varPi \) (accepted correctly), and \(TR_{\varPi ,\mathcal {R}}\) is the set of points in \(\mathcal {U} \setminus ( \mathcal {R} \cup \mathcal {R}_\varPi )\) (rejected correctly). Now \(Error\ ratio =1 - Accuracy\ ratio\), which can be expressed as,

$$\begin{aligned} Error\ ratio = \frac{FA_{\varPi ,\mathcal {R}} + FR_{\varPi ,\mathcal {R}}}{TA_{\varPi ,\mathcal {R}} + TR_{\varPi ,\mathcal {R}} + FA_{\varPi ,\mathcal {R}} + FR_{\varPi ,\mathcal {R}}} \end{aligned}$$
(4)
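As a concrete illustration of these definitions, the four sets and the accuracy ratio can be computed directly from point sets; the grid, region, and acceptance set below are toy values chosen only for the example:

```python
def error_terms(universe, region, accepted):
    """Split the universe into TA, TR, FA, FR for the accept set `accepted`."""
    R, A = set(region), set(accepted)
    TA = R & A                      # in R and accepted
    FA = A - R                      # accepted but outside R
    FR = R - A                      # in R but rejected
    TR = set(universe) - (R | A)    # outside R and rejected
    return TA, TR, FA, FR

# Toy 4x4 universe: a 2x2 policy region and an acceptance set R_Pi that
# misses one region point and wrongly accepts one outside point.
universe = {(x, y) for x in range(4) for y in range(4)}
region = {(1, 1), (1, 2), (2, 1), (2, 2)}
accepted = {(1, 1), (1, 2), (2, 1), (3, 1)}
TA, TR, FA, FR = error_terms(universe, region, accepted)
accuracy_ratio = (len(TA) + len(TR)) / len(universe)
```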

Since \(\mathcal {U}= (TA_{\varPi ,\mathcal {R}}+ TR_{\varPi ,\mathcal {R}}+ FA_{\varPi ,\mathcal {R}}+ FR_{\varPi ,\mathcal {R}})\) is constant, minimizing the error amounts to minimizing \((FA_{\varPi ,\mathcal {R}}+ FR_{\varPi ,\mathcal {R}})\). In our work we use the error \(E_{\varPi , \mathcal {R}}\) given by,

$$\begin{aligned} E_{\varPi , \mathcal {R}} = FA_{\varPi ,\mathcal {R}} + FR_{\varPi ,\mathcal {R}}. \end{aligned}$$
(5)

Note that one can attach weights to points in \(FA_{\varPi ,\mathcal {R}}\) or \(FR_{\varPi ,\mathcal {R}}\) to reflect their importance in a particular application. In this paper we assume the two types of errors have the same significance. For \(E_{\varPi , \mathcal {R}}\), we can write,

$$\begin{aligned} FA_{\varPi ,\mathcal {R}} + FR_{\varPi ,\mathcal {R}}&=FA_{\varPi ,\mathcal {R}} + (\mathcal {R} - TA_{\varPi ,\mathcal {R}}) \\&=\mathcal {R} - (TA_{\varPi ,\mathcal {R}} - FA_{\varPi ,\mathcal {R}}). \end{aligned}$$

\(\mathcal {R}\) is fixed, so minimizing \((FA_{\varPi ,\mathcal {R}}+ FR_{\varPi ,\mathcal {R}})\) is equivalent to maximizing \((TA_{\varPi ,\mathcal {R}}- FA_{\varPi ,\mathcal {R}})\). That is, in our \(\mathcal {R}\)-coverage problem, error is minimized by minimizing \((FA_{\varPi ,\mathcal {R}}+ FR_{\varPi ,\mathcal {R}})\), or equivalently, accuracy is maximized by maximizing \((TA_{\varPi ,\mathcal {R}}- FA_{\varPi ,\mathcal {R}})\). We therefore define the accuracy \(A_{\varPi , \mathcal {R}}\) as:

$$\begin{aligned} A_{\varPi , \mathcal {R}} = TA_{\varPi ,\mathcal {R}} - FA_{\varPi ,\mathcal {R}}. \end{aligned}$$
(6)
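This equivalence is what connects the coverage problem to the maximum subarray machinery: weighting each grid point +1 inside \(\mathcal {R}\) and −1 outside makes the weight sum over any candidate acceptance region equal \(TA - FA\) for that region. A toy sketch, with an axis-aligned rectangle standing in for a P-rect (the grid and helper names are illustrative):

```python
def weight_grid(m, n, region):
    """+1 for points inside the policy region R, -1 outside."""
    return [[1 if (i, j) in region else -1 for j in range(n)] for i in range(m)]

def score(grid, top, left, bottom, right):
    """Weight sum over a candidate rectangle = |TA| - |FA| for that rectangle."""
    return sum(grid[i][j]
               for i in range(top, bottom + 1)
               for j in range(left, right + 1))

# A 2x2 policy region inside a 4x4 grid.
region = {(1, 1), (1, 2), (2, 1), (2, 2)}
g = weight_grid(4, 4, region)
```

Maximizing this score over all candidate rectangles is exactly a 2D maximum subarray instance, which is why an extended Kadane's algorithm applies.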

Definition 1

(in-Region Authentication). An in-region authentication (inRA) protocol \(\varPi \) is a tuple \(\varPi =(Gen, P, \mathcal {V} = \{V_0\cdots V_{m-1}\},\mathcal {R})\) where:

1. \(\mathcal {X} \leftarrow Gen(1^{s}, r_{k})\) is a randomized key generation algorithm that generates a vector \(\mathcal {X} = \lbrace x_0,\dots ,x_{m-1} \rbrace \) of m secret keys, where \(x_i\) is the prover's shared secret key with \(V_i\), \(r_{k}\) denotes the random coins of Gen, and s is the security parameter.

2. \(P(\mathcal {X}; r_{P})\) is a ppt. (probabilistic polynomial time) ITM (Interactive Turing Machine) running the prover algorithm with random input \(r_{P}\) and the secret key vector \(\mathcal {X} = \lbrace x_0,\dots ,x_{m-1} \rbrace \).

3. \(\mathcal {V} = (V_0, \dots , V_{m-1})\) is a set of verifiers; each verifier \(V_i(x_i; r_{V_i}) \in \mathcal {V}\) is a ppt. ITM running the verifier algorithm with random input \(r_{V_i}\) and shared secret \(x_i\). We write \(\mathcal {V}(\mathcal {X}, r_{\mathcal {V}})\) to denote the set of the verifiers' algorithms.

4. \(\mathcal {R}\) is a set of points corresponding to a contiguous region. This is the policy region.

The protocol satisfies the following properties:

  • Termination: \((\forall s)\ (\forall \mathcal {Z})\ (\forall (r_{k}, r_{\mathcal {V} }))\ (\forall loc_{\mathcal {V}})\) if \(\mathcal {X} \leftarrow Gen(1^{s}, r_{k})\) and \((\mathcal {Z} \longleftrightarrow \mathcal {V}(\mathcal {X}; r_{\mathcal {V}}))\) is the execution where \(\mathcal {Z}\) is any set of prover algorithms, then \(\mathcal {V}\) halts in polynomial number of computational steps (Poly(s));

  • p-Completeness: \((\forall s)\ (\forall (loc_{\mathcal {V}}, loc_{P}))\) such that \(loc_P \in \mathcal {R}\) we have

    $$\begin{aligned} \underset{{r_{k}, r_{P}, r_{\mathcal {V}}}}{Pr} \left[ Out_{\mathcal {V}} = 1 : \begin{array}{ll} \mathcal {X} \leftarrow Gen(1^{s}, r_{k})\\ P(\mathcal {X}; r_{P}) \leftrightarrow \mathcal {V}(\mathcal {X}; r_{\mathcal {V}})\\ \end{array} \right] \ge p. \end{aligned}$$
    (7)

Similar definitions of termination and completeness are used for DB protocols [1, 22, 28].

3.1 inRA Security

We consider a prover, possibly malicious, who may receive help from a helper who is in \(\mathcal {R}\) but does not have a secret key.

The adversary attempts to prove that their location is inside \(\mathcal {R}\) while they are actually outside, and their success chance must be negligible even if they know the shared key. We use a game-based approach to defining security, in terms of the success probability of an adversary in the following security games against a challenger. Each game starts with a setup phase in which the challenger sets the keys and locations of the participants; this is followed by the adversary corrupting some of the participants (depending on the game), a learning phase, and finally the attack phase. We omit the details due to space limitations and outline the steps of each game in the definition of each attack. In the following, a dishonest prover is denoted by \(P^*\).

in-Region Fraud (inF). In this attack, a corrupted prover \(P^*\) who has the secret key and is in \(\mathcal {U} \setminus \mathcal {R}\) wants to prove that they are inside \(\mathcal {R}\).

Definition 2

(inF-resistance). An inRA protocol \(\varPi \) is \(\alpha \)-resistant to in-region fraud if \((\forall s)(\forall P^{*})(\forall loc_{\mathcal {V}})\) such that \(loc_{P^{*}} \notin \mathcal {R}\), and \((\forall r_{k})\), we have,

$$\begin{aligned} \underset{{r_{\mathcal {V}}}}{Pr} \left[ Out_{\mathcal {V}} = 1 : \begin{array}{ll} \mathcal {X} \leftarrow Gen(1^{s}, r_{k}) \\ P^{*}(\mathcal {X}) \leftrightarrow \mathcal {V}(\mathcal {X}; r_{\mathcal {V}})\\ \end{array} \right] \le \alpha . \end{aligned}$$
(8)

The above definition also captures a special type of attack, in-region hijacking (analogous to distance hijacking in DB protocols): a dishonest prover \(P^{*}\) located outside \(\mathcal {R}\) uses the inRA communications of unaware honest provers (inside \(\mathcal {R}\)) to be authenticated as an honest prover.

in-Region Man-in-the-Middle (inMiM). A corrupted participant who does not have a key but is inside \(\mathcal {R}\) interacts with multiple provers and the verifier set \(\mathcal {V}\), and uses the transcripts of these protocols to succeed in the inRA protocol.

Definition 3

(inMiM-resistance). An inRA protocol \(\varPi \) is \(\beta \)-resistant to inMiM attacks if \((\forall s)(\forall q, t, z)\) that are polynomially bounded, \((\forall \mathcal {A}_{1}, \mathcal {A}_{2})\) that are polynomially bounded, and for all locations such that \(loc_{P_{j}} \notin \mathcal {R}\), where \(j \in \lbrace q+1,\dots ,t \rbrace \), we have

$$\begin{aligned} Pr \left[ Out_{\mathcal {V}} = 1 : \begin{array}{ll} \mathcal {X} \longleftarrow Gen(1^{s}, r_{k})\\ P_{1}(\mathcal {X}),\dots ,P_{q}(\mathcal {X}) \longleftrightarrow \mathcal {A}_{1} \longleftrightarrow \mathcal {V}_{1}(\mathcal {X}),\dots , \mathcal {V}_{z}(\mathcal {X})\\ P_{q+1}(\mathcal {X}),\dots ,P_{t}(\mathcal {X}) \longleftrightarrow \mathcal {A}_{2}(View_{\mathcal {A}_{1}}) \longleftrightarrow \mathcal {V}(\mathcal {X}) \end{array} \right] \le \beta . \end{aligned}$$
(9)

The attacker is a pair of algorithms \((\mathcal {A}_{1}, \mathcal {A}_{2})\), where \(\mathcal {A}_{1}\) denotes the learning phase during which the attacker interacts with protocol runs of q provers that can be anywhere, and provides this view to \(\mathcal {A}_{2}\) in the second stage of the attack. Definition 3 is general and captures other attack settings traditionally referred to as mafia fraud and impersonation attacks in DB protocols. Mafia fraud is an MiM attack as defined above but without a learning phase. In an impersonation attack, the attacker uses multiple, possibly concurrent, interactions with the verifiers to make the verifier output 1.

in-Region Collusion Fraud (inCF). This is arguably the strongest attack; it involves the collusion of a corrupted prover who is in \(\mathcal {U} \setminus \mathcal {R} \) with a helper who is inside \(\mathcal {R}\). In collusion fraud the assumption is that the corrupted prover does not want the helper to learn their long-term secret key, as otherwise the helper would have a better chance of succeeding in other attacks individually. The prover nevertheless attempts to use the helper's location to succeed in the attack. In the following definition of inCF-resistance, success of the attacker in inCF implies that a MiM attacker as defined above (realized by the helper) will also succeed. \(P^{(*)}(\mathcal {X})\) denotes an honest or dishonest prover.

Definition 4

(inCF-resistance). An inRA protocol \(\varPi \) is \((\gamma , \eta )\)-resistant to collusion fraud if \((\forall s)\) \((\forall P^{*})\) \((\forall loc_{\mathcal {V}_{0}})\) such that \(loc_{P^{*}} \notin \mathcal {R}\), and \((\forall \mathcal {A}^{CF}\ ppt.)\) such that

$$\begin{aligned} Pr \left[ Out_{\mathcal {V}_{0}} = 1 : \begin{array}{ll} \mathcal {X} \longleftarrow Gen(1^{s})\\ P^{(*)}(\mathcal {X}) \longleftrightarrow \mathcal {A}^{CF} \longleftrightarrow \mathcal {V}_{0}(\mathcal {X}) \end{array} \right] \ge \gamma , \end{aligned}$$
(10)

over all random coins, there is a two-stage attacker \((\mathcal {A}_{1}, \mathcal {A}_{2})\) as defined for inMiM, with the additional relaxation that in the learning phase the attacker can also interact with the malicious prover, such that,

$$\begin{aligned} Pr \left[ Out_{\mathcal {V}} = 1 : \begin{array}{ll} \mathcal {X} \leftarrow Gen(1^{s})\\ P_{1}^{(*)}(\mathcal {X}),\dots ,P_{q}^{(*)}(\mathcal {X}) \longleftrightarrow \mathcal {A}_{1} \longleftrightarrow \mathcal {V}_{1}(\mathcal {X}),\dots ,\mathcal {V}_{z}(\mathcal {X})\\ P_{q+1}(\mathcal {X}),\dots ,P_{r}(\mathcal {X}) \longleftrightarrow \mathcal {A}_{2}(View_{\mathcal {A}_{1}}) \longleftrightarrow \mathcal {V}(\mathcal {X}) \end{array} \right] \ge \eta . \end{aligned}$$
(11)

The above definition of inCF captures a widely used attack model for DB protocols, which we call in-region terrorist fraud (inTF): \(P^{*}\), with \(loc_{P^{*}} \notin \mathcal {R}\), uses a helper who does not have the secret key to succeed in an instance of the protocol.

We do not consider jamming attacks that block all communication. A secure inRA protocol provides security against inF, inMiM and inCF attacks.

4 Pseudo-rectangle (P-rect) Cover Approach to inRA

We assume the setting of Sect. 3 and describe our approach using a basic inRA protocol with two verifiers \(V_0, V_1\) at publicly known locations \(loc_{V_0}\) and \(loc_{V_1}\). The prover P shares the secret keys \(x_0\) and \( x_1\) with \(V_0\) and \(V_1\), respectively.

4.1 Basic (Two-Verifier) P-rect Approach

Protocol Communication. We assume radio signals travel at the speed of light, so the round-trip time of a challenge and response provides a reliable estimate of distance. The two collaborating verifiers interact with the prover using slow communication, for time-insensitive messages over reliable channels, and fast communication, for time-sensitive messages that are used to estimate distance and are sent over the noisy physical channel. For simplicity we do not consider noise; our results can be easily extended to noisy channels by modifying the protocol parameters (thresholds). Verifiers are equipped with omnidirectional and directional antennas, although in each run of the protocol only one of them needs to use its directional antenna for communication with the prover. Communication between the verifiers takes place over a secure and reliable channel and is not time sensitive.

P-rectangle. For a fixed pair of verifiers, \(V_0\) and \(V_1\), with lower and upper bound pairs, \(\{ \ell _{V_0}, u_{V_0}\}, \{ \ell _{V_1}, u_{V_1}\} \), respectively, a P-rect is defined as the set of points \(x\in \mathcal {U} \) that satisfy the following inequalities:

$$\begin{aligned} d(x, loc_{V_0}) \le u_{V_0}, \; d(x, loc_{V_0}) \ge \ell _{V_0}, \; d(x, loc_{V_1}) \le u_{V_1}, \; d(x, loc_{V_1}) \ge \ell _{V_1}\end{aligned}$$

where d(., .) is the Euclidean distance. Consider the two pairs of concentric circles, centered at \(loc_{V_0}\) with radii \(\{ \ell _{V_0}, u_{V_0}\}\) and at \(loc_{V_1}\) with radii \(\{ \ell _{V_1}, u_{V_1}\}\), respectively. The intersection of the four circles defines two P-rects (Fig. 2b).

We denote the two mirrored rectangles by \(R_{rect}(loc_{V_0}, loc_{V_1}, \ell _{V_0}, u_{V_0}, \ell _{V_1}, u_{V_1})\) and \(R'_{rect}(loc_{V_0}, loc_{V_1}, \ell _{V_0}, u_{V_0}, \ell _{V_1}, u_{V_1})\), writing \(R_{rect}\) and \(R'_{rect}\) when the parameters are clear from the context. These P-rects are formed when \(V_0\) and \(V_1\) each execute a pair of DUB and DLB protocols with the corresponding upper and lower bounds. To distinguish between the two, one of the verifiers uses a directional challenge towards the target region \(\mathcal {R}\). The inRA protocol \(\varPi _{rect}\) below uses a P-rect to cover \(\mathcal {R}\). We quantify the error and prove the security of this protocol.
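Membership in a P-rect is simply the conjunction of the four ring inequalities above; a minimal sketch (function name and coordinates are illustrative):

```python
import math

def in_p_rect(x, y, v0, v1, l0, u0, l1, u1):
    """Point (x, y) is in the P-rect iff all four ring inequalities hold:
    l0 <= d(x, V0) <= u0 and l1 <= d(x, V1) <= u1."""
    d0 = math.dist((x, y), v0)   # Euclidean distance to V0
    d1 = math.dist((x, y), v1)   # Euclidean distance to V1
    return l0 <= d0 <= u0 and l1 <= d1 <= u1
```

Note that this predicate is satisfied by both mirrored intersections \(R_{rect}\) and \(R'_{rect}\); in the protocol it is the directional challenge of one verifier that disambiguates the two.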

Protocol \(\varPi _{rect}\). For given values of \(loc_{V_0}, loc_{V_1}, \ell _{V_0}, u_{V_0}, \ell _{V_1}, u_{V_1}\), the protocol bounds the prover within- \(R_{rect}(loc_{V_0}, loc_{V_1}, \ell _{V_0}, u_{V_0}, \ell _{V_1}, u_{V_1})\) (see Fig. 3).

Fig. 3. inRA protocol \(\varPi _{rect}\) between a prover and two verifiers. In the initialization phase the prover and verifiers generate and exchange nonces \(N_{v_i}^l, N_{v_i}^u\). The fast-exchange phase consists of 2n rounds of challenges (\(c^u_{i_\tau }, c^l_{i_\omega }\)) and responses (\(r^u_{i_\tau },r^l_{i_\omega }\)) for DUB and DLB. The responses are calculated using a pseudo-random function with special properties. In the verification phase, verifiers check the round-trip times and the correctness of the responses.

Initialization Phase. The prover P and verifiers \(V_0, V_1\) have shared secrets \(x_i, i = 0,1\), and security parameter k at the start of the protocol. The prover picks four independently generated nonces \(N_{p_i}^l, N_{p_i}^u, i \in \lbrace 0, 1 \rbrace \), each of length k, and sends a pair of nonces to each verifier \(V_i\). Each verifier \(V_i\) picks two independently generated nonces of the same length, \(N_{v_i}^l, N_{v_i}^u\), and two random strings \(A_i^u, A_i^l\), each of length 2n (2n is the number of rounds in the fast-exchange phase), and calculates \(M_i^u = A_i^u \oplus f_{x_i}(N_{p_i}^u, N_{v_i}^u)\) and \(M_i^l = A_i^l \oplus f_{x_i}(N_{p_i}^l, N_{v_i}^l)\), where f is a Pseudo Random Function (PRF). \(N_{v_i}^l, N_{v_i}^u, M_i^u, M_i^l\) are sent to the prover, who recovers and stores \(A_i^u, A_i^l\). These are the response tables for the distance upper and lower bound challenges of the respective verifiers in the fast-exchange phase. All communication between the prover and the verifiers in the initialization phase uses omnidirectional antennas.
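The masking of the response tables is plain XOR with a PRF output, so the prover can invert it using the shared key and the exchanged nonces. A sketch using HMAC-SHA256 as a generic PRF stand-in (the paper's f has additional special properties that this simplification does not model; all names here are illustrative):

```python
import hmac, hashlib

def prf(key: bytes, *nonces: bytes, out_len: int) -> bytes:
    """Generic PRF stand-in: HMAC-SHA256 extended with a counter."""
    out, ctr = b"", 0
    while len(out) < out_len:
        msg = b"".join(nonces) + ctr.to_bytes(4, "big")
        out += hmac.new(key, msg, hashlib.sha256).digest()
        ctr += 1
    return out[:out_len]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def mask_table(key, n_p, n_v, table):
    """Verifier side: M = A XOR f_x(N_p, N_v)."""
    return xor(table, prf(key, n_p, n_v, out_len=len(table)))

# Prover side: XOR masking is its own inverse, so recovery reuses the same function.
recover_table = mask_table
```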

Fast-Exchange (FE) Phase. WLOG assume \(V_0\) starts the FE phase and notifies \(V_1\) to start its FE phase right after sending its last challenge.

\(V_0\) will use an omnidirectional antenna to send its challenges, while \(V_1\) will use a directional antenna with the direction and the angle of the beam chosen to cover only one of the two mirrored P-rects \(R_{rect}\) and \(R'_{rect}\) (See Fig. 2b). This means that only the points in \(R_{rect}\) will receive the challenge from \(V_1\).

The FE phase of each verifier consists of 2n consecutive rounds of challenge-response (\(n \in \varOmega (k) \)), where the first n rounds are used for distance upper bounding and the last n rounds for distance lower bounding. In each distance upper bounding round \(\tau \in \lbrace 1,\dots ,n \rbrace \), verifier \(V_i\) picks a challenge value \(c_{i_\tau }^u \in \lbrace 1, 2, 3, 4 \rbrace \) and sends it to the prover, who must respond immediately with \(r_{i_\tau }^u\), as shown in Fig. 3.

Fig. 4. (a) Prover movement attack: the prover responds to the DUB and DLB challenges of verifier \(V_0\) while in Region 1 (R1), and to the DUB and DLB challenges of verifier \(V_1\) while in Region 2 (R2). (b) Key splitting attack: the prover responds to the DUB and DLB challenges of \(V_1\), and also to the DUB challenges of \(V_0\), while asking the helper to respond to the DLB challenges of \(V_0\).

Note that the prover’s response, when the challenge value is in the set \(\{ 1,2\}\), depends on the nonces of the verifier that sent the challenge, but when the challenge value is in the set \(\{ 3,4 \}\), the response depends on both verifiers’ nonces. This is to prevent the key-splitting attack, in which a malicious prover located in specific parts of the plane (outside \(\mathcal {R}\)) can combine parts of the secret keys of the two verifiers to succeed in their attack (more in Sect. 4.2). The verifiers verify the responses at the end of the protocol, after sharing their nonces. To estimate the distance, each verifier measures the round-trip time \(RTT_{i_\tau }^u\) of each round, from sending \(c_{i_\tau }^u\) to receiving \(r_{i_\tau }^u\).

Rounds \(\omega \in \lbrace n+1,\dots ,2n \rbrace \) are for the DLB protocol. In each such round, verifier \(V_i\) picks a random challenge \(c_{i_\omega }^l \in \lbrace 1, 2, 3, 4 \rbrace \), together with an erasure sequence \(RS_{i_\omega }\) of length \(z_{i_\omega }\) (Footnote 4), which is used to prevent the prover from delaying the response and claiming a farther distance. The prover must send a response as shown in Fig. 3, together with a proof of receiving the erasure sequence. The verifier also measures and stores the round-trip time \(RTT_{i_\omega }^l\), from sending \(c_{i_\omega }^l\) to receiving \(r_{i_\omega }^l\), in each round.

Verification Phase. First, the verifiers check the correctness of the responses (\(r_{i_\tau }^u, r_{i_\omega }^l\)), as well as the proofs of erasure, \(h_{i_\omega }\). Then each verifier checks that the round-trip time of the FE challenge-response in each of the first n rounds satisfies \(\frac{RTT_{i_\tau }^u}{2} \le u_{V_i}\), and in each of the last n rounds satisfies \(\frac{RTT_{i_\omega }^l}{2} \ge \ell _{V_i} + T(z_{i_\omega } -1)\), where \(T(z_{i_\omega } -1)\) is the maximum processing time required by the prover to store the erasure and compute the proof of erasure. If the above checks succeed, verifier \(V_i\) outputs \(Out_{V_i} =1\). If both verifiers output 1, then P is accepted; otherwise P is rejected.
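The verification-phase decision reduces to a few comparisons per round. The sketch below assumes the bounds \(u_{V_i}, \ell _{V_i}\) are expressed as one-way times (same units as the RTTs); the function name and arguments are illustrative, not from the paper.

```python
def verifier_accepts(rtt_u, rtt_l, u_bound, l_bound, z, T,
                     responses_ok, erasure_proof_ok):
    """Verification-phase checks of one verifier V_i (a sketch).
    rtt_u, rtt_l: round-trip times of the n DUB and n DLB rounds;
    u_bound, l_bound: the bounds u_{V_i}, l_{V_i} as one-way times;
    z: erasure-sequence lengths z_{i_omega}; T: per-unit processing time."""
    if not (responses_ok and erasure_proof_ok):
        return 0
    if any(rtt / 2 > u_bound for rtt in rtt_u):      # DUB check: too far
        return 0
    if any(rtt / 2 < l_bound + T * (zi - 1)          # DLB check: too close
           for rtt, zi in zip(rtt_l, z)):
        return 0
    return 1

# P is accepted only if both verifiers output 1; one verifier's output:
out = verifier_accepts([1.0, 1.2], [6.5, 7.0], 2.0, 2.0, [3, 3], 0.5,
                       responses_ok=True, erasure_proof_ok=True)
```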

4.2 Security Analysis

\(\varPi _{rect} \) uses a pair of DUB and DLB protocols with two verifiers. To prove the security of the protocol, we first eliminate attacks that exploit the prover’s ability to change its location between its interactions with the two verifiers, or to leak part of its key to a helper so that it succeeds in lying about its location without enabling the helper to succeed in an individual attack.

Prover Movement. Location verification protocols in which the prover communicates with multiple verifiers are vulnerable to attacks that involve movement of the prover. Figure 4a shows such a scenario. A malicious prover located outside the P-rect attempts to get accepted by moving from one place to another. Consider two regions: Region 1 (R1) contains all the points that are within the ring centered at \(V_0\) and inside the lower bound of \(V_1\), and Region 2 (R2) contains all the points that are within the ring centered at \(V_1\) and inside the lower bound of \(V_0\). The prover changes its location, and can succeed by responding to the DUB and DLB challenges of verifier \(V_0\) while in Region 1, and to the DUB and DLB challenges of verifier \(V_1\) while in Region 2. A similar attack can take place by the prover moving between Regions 2 and 3, Regions 3 and 4, or Regions 4 and 1.

Chiang et al. proposed a solution to prover movement [8] that uses simultaneous challenges from the verifiers. However, this requires the prover to claim a location first, which needs a GPS signal (or other location-determination infrastructure) and so is not directly applicable to indoor areas. We propose a novel approach to detecting prover movement in which each verifier acts as an observer for the other verifier. More details below.

Let \(V_0\) be an observer who passively records the timing of the signals exchanged between the prover and verifier \(V_1\), and let \(V_1\) play a similar role for \(V_0\). Let us revisit the prover movement scenario in Fig. 4a. First, we consider prover movement between Regions 1 and 2. In this case, we only consider the communication in the fast-exchange phase of the DLB protocols. Notice that \(P^*\) must be in Region 1 (R1) while responding to the DLB challenge from \(V_0\), and in Region 2 (R2) while responding to the DLB challenge from \(V_1\). Consider the following time-stamps (all challenges are DLB challenges): \(t_0\): \(V_0\) sends its challenge to \(P^*\) in Region 1; \(t_1\): \(V_1\) sends its challenge to \(P^*\) in Region 2; \(T_0\): \(V_0\) receives the response from \(P^*\) sent from Region 1; \(T_1\): \(V_1\) receives the response from \(P^*\) sent from Region 2; \(T'_0\): \(V_0\) overhears the response of \(P^*\) sent from Region 2; \(T'_1\): \(V_1\) overhears the response of \(P^*\) sent from Region 1.

We assume the prover’s processing time is known and public. \(V_0\), from its DLB communication, computes the distance between itself and \(P^*\) using the challenge-response round-trip time as \(d_{V_0P^*} = \frac{(T_0 - t_0) \times C}{2}\), where C is the propagation speed of radio waves. Similarly, \(V_1\) computes its distance to \(P^*\) as \(d_{V_1P^*} = \frac{(T_1 - t_1) \times C}{2}\). By listening to the other DLB communication, \(V_0\) computes the distance between itself and \(P^*\), based on the response times of \(P^*\), as \(d'_{V_0P^*} = \left( T'_0 - t_1 - \frac{T_1 - t_1}{2}\right) \times C\). This is because the response from \(P^*\) at Region 2 leaves \(P^*\) at time \(t_1 + (T_1 - t_1)/2\), and reaches \(V_0\) at time \(T'_0\). Similarly, \(V_1\) computes the distance between itself and \(P^*\) at Region 1, using its listening time for the response of \(P^*\), as \(d'_{V_1P^*} = \left( T'_1 - t_0 - \frac{T_0 - t_0}{2}\right) \times C\), because the response from \(P^*\) at Region 1 leaves \(P^*\) at time \(t_0 + (T_0 - t_0)/2\) and reaches \(V_1\) at time \(T'_1\). The system detects movement of the prover if any of the following checks fails:

$$\begin{aligned} d_{V_0P^*} = d'_{V_0P^*}, d_{V_1P^*} = d'_{V_1P^*}. \end{aligned}$$
(12)

The protocol immediately rejects and aborts when movement of the prover (equivalently, multiple prover locations) is detected.

A similar approach, applied to each type of communication (DLB or DUB), can detect prover movement between Regions 2 and 3, Regions 3 and 4, or Regions 4 and 1.
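The cross-checks of Eq. 12 can be sketched with absolute timestamps. This sketch assumes the overheard response departs \(P^*\) at \(t_1 + (T_1 - t_1)/2\) (i.e., half the round-trip after the corresponding challenge was sent), that the prover's processing time is negligible, and the tolerance parameter is an assumption, not from the paper.

```python
C = 3e8  # propagation speed of radio waves (m/s)

def movement_check(t0, T0, Tp0, t1, T1, Tp1, tol=1e-9 * C):
    """Cross-check direct DLB distance estimates against the distances
    inferred by the passively listening verifiers (Eq. 12, sketch).
    t_i / T_i: absolute send / receive times at V_i; Tp_i (T'_i): the
    time V_i overhears the response P* sent to the other verifier."""
    d_v0 = (T0 - t0) * C / 2                      # V_0's direct estimate
    d_v1 = (T1 - t1) * C / 2                      # V_1's direct estimate
    d_v0_obs = (Tp0 - t1 - (T1 - t1) / 2) * C     # V_0 overhearing V_1's round
    d_v1_obs = (Tp1 - t0 - (T0 - t0) / 2) * C     # V_1 overhearing V_0's round
    # Both pairs must agree (within tol) for a stationary prover.
    return abs(d_v0 - d_v0_obs) <= tol and abs(d_v1 - d_v1_obs) <= tol
```

For a stationary prover both estimates of each distance coincide; a prover that answers the two verifiers from different locations makes at least one pair disagree.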

Key Splitting Attack. This attack is a result of using a pair of DUB and DLB protocols with two verifiers. In a key splitting attack, the prover leaks part of its key information to a helper so that the prover succeeds in its attack, without giving the helper a better chance to succeed on its own. Figure 4b shows a scenario for such an attack. Here, a malicious prover \(P^*\) is located within the ring centered at \(V_1\) and inside the lower bound of verifier \(V_0\). \(P^*\) shares keys \(x_0, x_1\) with \(V_0, V_1\), respectively. A helper H is located inside the P-rect. \(P^*\) gives \(x_0\) to H. The prover then succeeds by correctly responding to the DUB and DLB challenges of \(V_1\), and also to the DUB challenges of \(V_0\), while asking the helper to respond to the DLB challenges of \(V_0\). Note that the attack is successful because this key leakage does not directly result in a successful inMiM (H requires both keys \((x_0, x_1)\) to succeed in inMiM), and so according to the inCF definition (Definition 4), the protocol is not secure.

We thwart this attack by including both keys \((x_0, x_1)\) in generating the responses to the challenges of each verifier. As shown in Fig. 3, upon receiving a challenge \(c_{i_\tau }^u = 3\) from verifier \(V_i\) (\(i \in \{0,1\}\)), generating the response \(r_{i_\tau }^u\) requires the key and response table shared with verifier \(V_i\). If \(c_{i_\tau }^u = 4\), it requires the key and response table shared with verifier \(V_{i+1}\) (index taken mod 2).

Revisiting the above key splitting scenario: to get accepted in \(\varPi _{rect}\), \(P^*\) must share both keys \(x_0, x_1\) with the helper, since otherwise the helper would not be able to generate the responses to the DLB challenges \(c_i^l = 4\) from \(V_0\). This would constitute a successful inMiM by H, which guarantees the security (Definition 4) of our protocol.

Security Against inF, inMiM and inCF. Having removed the threats described above, we are ready to analyze the security of \(\varPi _{rect}\) against the three attacks defined in Sect. 3.1: inF, inMiM and inCF.

Let \(\varPi _{rect}^{DUB}\) and \(\varPi _{rect}^{DLB}\) denote the DUB and DLB protocols used in \(\varPi _{rect}\). The detailed inRA protocol is presented in Fig. 3. We use the constructions of [22] and [28] for the DUB and DLB protocols, respectively. These protocols are provably secure against the three main attacks on distance bounding protocols (distance fraud, man-in-the-middle and collusion fraud), which have been defined consistently with the corresponding inRA attacks in Sect. 3. Security of these component protocols does not directly imply the security of inRA with respect to the P-rect formed by them; i.e., we need to consider attack scenarios that arise from a single verifier running two different protocols (DUB and DLB).

For each verifier \(V_i \in \mathcal {V}\), the response table \(a_u\) of the DUB protocol \(\varPi _{rect}^{DUB}\) and the table \(a_l\) of the DLB protocol \(\varPi _{rect}^{DLB}\) are generated independently of each other, and independently across verifiers. This holds because the verifiers are honest and each response table is constructed using the randomness of the prover and the corresponding verifier.

Because of space limitations, we defer the security models for \(\varPi _{rect}^{DUB}\) and \(\varPi _{rect}^{DLB}\) (including Definitions 5–10), as well as the proof of the following theorem, to the full version of this paper [2].

Theorem 1

For a region \(\mathcal {R}\), the protocol \(\varPi _{rect}\) satisfies the following:

  1.

    If \(\varPi _{rect}^{DUB}\) and \(\varPi _{rect}^{DLB}\) are secure against the distance fraud attack with probabilities \(\alpha _u, \alpha _\ell \) in Definition 5 and Definition 8, respectively, then \(\varPi _{rect}\) is secure against the in-region fraud attack with probability \(\alpha \ge \max (\alpha _u, \alpha _\ell )\) in Definition 2.

  2.

    If \(\varPi _{rect}^{DUB}\) and \(\varPi _{rect}^{DLB}\) are secure against the man-in-the-middle attack with probabilities \(\beta _u, \beta _\ell \) in Definition 6 and Definition 9, respectively, then \(\varPi _{rect}\) is secure against in-region man-in-the-middle with probability \(\beta \ge \max (\beta _u, \beta _\ell )\) in Definition 3.

  3.

    If \(\varPi _{rect}^{DUB}\) and \(\varPi _{rect}^{DLB}\) are secure against collusion fraud with probabilities \((\gamma _u, \eta _u)\) in Definition 7 and \((\gamma _\ell , \eta _\ell )\) in Definition 10, respectively, then \(\varPi _{rect}\) is secure against in-region collusion fraud with probability \((\gamma , \eta )\), where \(\gamma \ge \max (\gamma _u, \gamma _\ell )\) and \(\eta \ge \max (\eta _u, \eta _\ell )\), in Definition 4.

5 Optimizing Error

The basic \(\varPi _{rect}\) protocol covers \(\mathcal {R}\) with a P-rect. For given locations of verifiers \(loc_{V_0}, loc_{V_1}\) and distance bounds \( \{ \ell _{V_0}, u_{V_0}\}, \{ \ell _{V_1}, u_{V_1}\} \), the error in the coverage can be computed. In this paper we consider the total error, \(FA+FR\). To minimize this error, one can use a two-step algorithm: (i) for fixed \(loc_{V_0}, loc_{V_1}\), find \( \{ \ell _{V_0}, u_{V_0}\}, \{ \ell _{V_1}, u_{V_1}\} \) that minimize the error; denote the minimum by \(E_{min}( loc_{V_0}, loc_{V_1})\); (ii) find \(loc_{V_0}, loc_{V_1}\) that minimize \(E_{min}( loc_{V_0}, loc_{V_1})\). Both minimizations can be solved by exhaustive search, which for an \(n\times n\) universe \(\mathcal {U}\) costs \(O(n^4)\) each.

In the following we provide an efficient algorithm, FindOptimalBounds (FOB for short, Algorithm 1), to solve (i). Let the size of a P-rectangle be the number of points in the rectangle. The algorithm works as follows.

Algorithm 1. FindOptimalBounds (FOB)
  (i)

    Selects an initial \(R_{rect}\) (Line 1). This rectangle satisfies \(\mathcal {R} \subset R_{rect}\) and is constructed by choosing the radii to touch the region \(\mathcal {R}\);

  (ii)

    \(R_{rect}\) is subdivided into P-squares (with equal-size sides) of size \(\varDelta \) (Line 2). P-squares are used as measuring units to quantify the accuracy (given by Expression 6 in Sect. 3) of \(R_{rect}\) in covering \(\mathcal {R}\);

  (iii)

    The P-rect that maximizes the accuracy (and therefore minimizes the total error; see Sect. 3) for this \(\varDelta \) is found by formulating the accuracy as the objective function of a maximum-sum sub-array problem and using an algorithm (presented in Algorithm 7, page 18 of [3]) to solve the problem efficiently (Lines 3–14).

The output of FOB is \(OptR_{rect}\), a contiguous 2D sub-array (P-rect) with maximum sum (Line 15), which is the optimal P-rect for P-squares of size \(\varDelta \).
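The maximum-sum sub-array step can be realized with the standard 2D extension of Kadane's algorithm, running in \(O(m^2 n)\) time. This is a sketch under the assumption that the grid already holds \(+1\) for P-squares inside \(\mathcal {R}\) and \(-1\) for those outside; it is not the exact algorithm of [3], which may differ in details.

```python
def max_sum_subrect(grid):
    """Contiguous 2D sub-array of maximum sum (the MaxSubArray step).
    grid[x][y] is +1 for a P-square inside R (true accept) and -1 for
    one outside R (false accept); the best rectangle maximizes TA - FA.
    Returns (best_sum, (top, bottom, left, right))."""
    m, n = len(grid), len(grid[0])
    best = float("-inf")
    best_rect = None
    for top in range(m):
        col = [0] * n                 # column sums over rows top..bottom
        for bottom in range(top, m):
            for y in range(n):
                col[y] += grid[bottom][y]
            # 1D Kadane over col, tracking the column interval.
            cur, start = 0, 0
            for y in range(n):
                if cur <= 0:
                    cur, start = col[y], y
                else:
                    cur += col[y]
                if cur > best:
                    best, best_rect = cur, (top, bottom, start, y)
    return best, best_rect

# A toy 3x3 universe: the middle-right 2x2 block is the best P-rect.
best, rect = max_sum_subrect([[-1, 1, 1],
                              [-1, 1, 1],
                              [-1, -1, -1]])
```

The returned row and column intervals correspond to the distance bounds \( \ell _{V_0}, u_{V_0}, \ell _{V_1}, u_{V_1}\) in units of \(\varDelta \).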

Lemma 1

For fixed values of \(loc_{V_0}, loc_{V_1}\), the initial P-rect in the FindOptimalBounds algorithm achieves higher accuracy than any larger P-rect.

Proof

Let the initial P-rectangle be denoted by \(initR_{rect}\). This rectangle is chosen to be the smallest P-rectangle that contains all points in \(\mathcal {R}\); that is, \(initR_{rect}\) has maximum TA. Let the false acceptance associated with this P-rectangle be \(FA_{initR_{rect}}\). The accuracy of \(initR_{rect}\) is given by \(A_{initR_{rect}} = TA_{max} - FA_{initR_{rect}}\). Let \(R_{rect}\) be a P-rectangle that is larger than \(initR_{rect}\) and fully covers \(\mathcal {R}\). The accuracy of \(R_{rect}\) is \(A_{R_{rect}} = TA_{R_{rect}} - FA_{R_{rect}}\). Because \(initR_{rect}\) is the smallest P-rectangle that covers \(\mathcal {R}\), \(R_{rect}\) must have larger false acceptance; that is, \(FA_{R_{rect}} > FA_{initR_{rect}}\).

Because \(TA_{max} \ge TA_{R_{rect}}\), we conclude that, \(A_{initR_{rect}} > A_{R_{rect}}\).

Theorem 2

(Optimality). Let the maximum sub-array algorithm return a contiguous 2D sub-array with the largest sum. Then the FindOptimalBounds algorithm returns the P-rectangle with maximum accuracy, for \(loc_{V_0}, loc_{V_1}\) and P-square size \(\varDelta \).

Proof

A P-rectangle can be expressed as a 2D array, with each point an element of that array. The FOB algorithm is initialized with a 2D array \(initR_{rect}\) of size \(m \times n\) (in units of \(\varDelta \)). For maximum accuracy, by Lemma 1 we need not consider larger P-rectangles that contain \(\mathcal {R}\). The accuracy is given by \(A_{initR_{rect}} = TA_{max} - FA_{initR_{rect}} = |\mathcal {R} \cap initR_{rect}| - |initR_{rect} \setminus \mathcal {R}|\). Thus the contribution of a point \(initR_{rect}[x][y]\) to the accuracy is 1 if it is in \(\mathcal {R} \cap initR_{rect} \), and \(-1\) if it is in \(initR_{rect} \setminus \mathcal {R}\).Footnote 5

Let \(OptR_{rect}\) denote the 2D sub-array with maximum sum that is returned by MaxSubArray(). Using Expression 2 for maximum sum sub-array (see Sect. 2), the 2D array \(OptR_{rect}\) can be written as:

$$\begin{aligned} OptR_{rect}&= \max \left\{ \sum _{x=i, y = g}^{j, h}{R_{rect}[x][y] | 1 \le i \le j \le m, 1 \le g \le h \le n} \right\} \\&= \max \left\{ \sum _{x=i, y = g}^{j, h}{\big (TA_{R_{rect}[x][y]} - FA_{R_{rect}[x][y] }\big )} \right\} \end{aligned}$$

The right hand side of this equation is the 2D sub-array of maximum accuracy, and this concludes the proof.

Location of the Verifiers. Algorithm 1 assumes that the verifiers’ locations are outside \(\mathcal {R}\) and satisfy the following restriction: the initial rings centered at the verifiers \(V_0\) and \(V_1\) must intersect. This ensures that a well-formed P-rectangle is constructed. The restriction discards many candidate locations for the verifiers. We leave the problem of efficiently finding the verifier locations that result in the smallest error for future work. One can remove the restriction on the location of verifiers, including the requirement of being outside region \(\mathcal {R}\), by subdividing the region into smaller regions; see Sect. 6.

Higher Accuracy. One can increase the accuracy of the algorithm by subdividing \(\mathcal {R}\) into sub-regions and, for each, choosing the verifiers’ locations and finding the upper and lower bounds (using FOB). We show this in Sect. 6.

6 Experimental Evaluation

The error in covering \(\mathcal {R}\) with a P-rect depends on the shape of \(\mathcal {R}\), the number of subregions and the distance bounds. We consider the following cases for four policy regions shown in Fig. 5.

  • Direct approach: \(\mathcal {R}\) is completely covered by the P-rect formed by the narrowest rings centered at \(V_0 \) and \( V_1\) that contain all locations of \(\mathcal {R}\). The resulting P-rect is the smallest P-rectangle covering \(\mathcal {R}\) completely (Fig. 6a).

  • Basic FindOptimalBounds algorithm (FOB): Fig. 6b shows the implementation of the basic error-optimization algorithm presented in Sect. 5.

  • FindOptimalBounds with adjusted verifiers’ location (\(FOB_{loc}\)): We have adjusted the verifiers’ locations heuristically to observe the impact on accuracy.

  • FindOptimalBounds algorithm with partitioned regions (\(FOB_{part}\)): We partitioned each policy region into two smaller regions and applied the FindOptimalBounds algorithm to each independently. Figure 6c, d show these settings.

Experimental Setup. We take images from Google Maps for a point-set representation of the policy region, where pixels represent points. We use “road-map” images with zoom level 17 and dimension \(640 \times 640\), containing the policy region \(\mathcal {R}\). Each pixel represents 0.7503 m, obtained using the formula for “ground resolution” [18]: the distance on the ground represented by a single pixel of the map. We convert each image into a binary image containing only the policy region and store the values of all pixels in a binary matrix. Measurements, including locations, distances, areas and errors, are all in pixels.

Fig. 5. Policy regions (from left to right): buildings B1, B2, B3, B4 in binary images. We considered both a regular-shaped region (B1) and relatively irregularly shaped regions (B2, B3, B4) to provide diversity in the experiment.

Fig. 6. (a) B4 is covered using the direct approach; each ring touches two sides of the region. (b) FOB approach: to reduce the total error, a small amount of false-rejection area is introduced. (c, d) Partitioning B4 into two separate regions and applying FOB to each.

Error and Coverage Comparison. Table 1 compares the four approaches when applied to B1, B2, B3, B4. Notice that comparatively “regular” shaped policy regions (e.g., B1 in Fig. 5) can be covered more accurately than other regions: comparing the best found errors, B1 has an (FA, FR) error of only (0.81, 3.89)%, against (4.16, 5.58)% (B2), (3.34, 4.53)% (B3) and (5.16, 4.4)% (B4). The \(FOB_{part}\) algorithm compensates for this irregularity to some extent: it reduces the total error of FOB by 7.82% (B2), 15.48% (B3) and 12.78% (B4). Our algorithm also performs much better than naively covering a region: FOB reduces the total error of the direct approach by 10.84% (B1), 12.68% (B2), 10.78% (B3) and 4.31% (B4).

Comparison to Existing Approaches. Computing optimal bounds for the verifiers so that the two types of errors are jointly optimized has been attempted only once in the existing literature on in-region verification and localization, by Sastry et al. [17]. They placed 5 verifiers inside a 100 m by 100 m room and achieved a coverage (true acceptance) of \(93\%\) with \(7\%\) total error. We compare by considering a policy region of \(100 \times 100\) resolution in a universe of \(640 \times 640\) pixels, with each pixel representing 1 m, thus replicating the scenario of covering a 100 m by 100 m room. Using two verifiers, we achieved \(96.4\%\) coverage (TA) and \(4.1\%\) total error. An illustration of the two approaches is given in the full version of this paper [2].

FA, FR Weight Analysis. In some applications FA is more tolerable, while in others FR is. A notable advantage of our error formulation (Eq. 5) is that it can be adjusted to capture the requirements of different applications, by assigning separate weights to FA and FR in the error metric. An increased weight for FA reduces the FA error. For this analysis, we considered policy region B2 (Fig. 5b) and the \(FOB_{Loc}\) approach, and found that for FA weights \(\{1, 2, 3, 4, 5\}\), the resulting FA errors are \(\{7.09, 3.14, 1.43, 0.95, 0.75\} \%\) (Fig. 7).
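One plausible way to realize the weighted metric (the exact form is given by Eq. 5 of the paper; this instantiation is an assumption) is to change only the per-point contributions fed to the maximum-sum optimization step:

```python
def weighted_grid(in_region, w_fa=1.0):
    """Per-point contributions for a weighted error metric (illustrative):
    a point of the universe inside R contributes +1 when covered (true
    accept); a point outside R contributes -w_fa when covered (false
    accept). Raising w_fa makes the optimizer avoid FA area."""
    return [[1.0 if inside else -w_fa for inside in row]
            for row in in_region]

# Increasing w_fa biases the optimal P-rect toward lower FA.
grid = weighted_grid([[True, True, False],
                      [True, False, False]], w_fa=3.0)
```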

Table 1. The four coverage approaches applied to B1, B2, B3, B4. E = FA + FR is the total error. The best found FA, FR and E range from 0.81 to 5.16%, 3.89 to 5.58% and 4.71 to 9.74%, respectively. The best found total coverage ranges from 94.41 to 96.1%.
Fig. 7. FA and FR error comparison among the different approaches when applied to B1, B2, B3, B4. The regularly shaped region B1 has the lowest FA and FR errors compared to the other, relatively irregularly shaped regions, in all four approaches.

7 Related Work

There are hundreds of papers on location-aware security and services. Because of space limitations, we only consider those that are directly relevant and that consider location verification with respect to a region. As noted earlier, our goal, namely to provide provably secure authentication for users inside \(\mathcal {R}\) together with quantifiable error for an arbitrary region, is novel and not shared by any existing work. The system in [23] provides location verification for a region without using a secret key and without requiring user authentication.

Secure positioning in a multiple-verifier setting is considered in [7], which proved that security against multiple adversaries (adversaries at multiple locations) is achievable only in the bounded retrieval model. The authors of [27] use the bounded retrieval model and, like us, take advantage of directional antennas to provide in-region security. However, they cannot provide security against adversaries that reside inside the region.

The authors of [26] proposed an in-region location verification scheme that uses inconsistencies between the claimed location of a sensor (prover) and the observations of its neighbor sensors to detect a false location claim. However, its security depends on trusting the other sensors, which is often not desirable.

Numerous distance upper bounding protocols have been proposed to date [4, 5, 10, 22]. However, the only distance lower bounding protocol with provable security against the three main kinds of attacks is [28]. inRA uses the formal models and protocol constructions of [22] and [28] for its DUB and DLB components.

8 Concluding Remarks

We motivated and defined the problem of in-region authentication, and defined correctness and security of inRA protocols for a region \(\mathcal {R}\). We proposed an approach to constructing secure inRA protocols that uses distance bounding protocols to cover \(\mathcal {R}\) with a P-rect, and gave an efficient algorithm to optimize the P-rect by minimizing the total error. We also proposed a basic two-verifier protocol with provable properties. Our approach provides the flexibility to define error functions suited to particular applications, and to increase accuracy by choosing more verifiers.

We showed the error performance of our optimization algorithm on differently shaped policy regions and verified the improved accuracy when a region is subdivided into two. Optimizing the error under real-life constraints on the locations of the verifiers, the number of verifiers and the particular error function, as well as optimization in three-dimensional spaces, are challenging directions for future research.