
1 Introduction

The discrete logarithm problem (DLP) is at the foundation of a series of public key cryptosystems. Over a generic group of cardinality N, the best known algorithm to solve the DLP has an exponential running time of \(O(\sqrt{N})\). However, if the group has a special structure one can design better algorithms, as is the case for the multiplicative group of finite fields \({\mathbb F}_Q = {\mathbb F}_{p^n}\), where the DLP can be solved much more efficiently than in exponential time. For example, when the characteristic p is small compared to the extension degree n, the best known algorithms have quasi-polynomial time complexity [6, 21].

DLP Over Fields of Medium and Large Characteristic. Recall the usual \(L_Q\)-notation,

$$\begin{aligned} L_Q ( \ell , c ) = \exp \big ( (c+o(1)) (\log Q)^\ell (\log \log Q)^{1-\ell } \big ), \end{aligned}$$

for some constants \(0 \le \ell \le 1\) and \(c >0\). We call the characteristic \(p = L_Q( \ell _p , c_p )\) medium when \(1/3< \ell _p < 2/3 \) and large when \( 2/3 <\ell _p \le 1\). We say that a field \({\mathbb F}_{p^n}\) is in the boundary case if \(\ell _p=2/3\).

For medium and large characteristic, in particular when Q is prime, all the state-of-the-art attacks are variants of the number field sieve (NFS) algorithm. Initially used for factoring, NFS was rapidly introduced in the context of the DLP [20, 32] to target prime fields. One had to wait almost a decade before the first constructions for \({\mathbb F}_{p^n}\) with \(n>1\) were proposed [33], known today [7] as the tower number field sieve (TNFS). This case is important because it is used to choose the key sizes for pairing-based cryptosystems. Since 2006 one can cover the complete range of large and medium characteristic finite fields [22]. This latter approach, which we denote by JLSV, has the advantage of being very similar to the variant used to target prime fields, except for the first step, called polynomial selection, where two new methods were proposed: JLSV\(_1\) and JLSV\(_2\).

In recent years, NFS in fields \({\mathbb F}_{p^n}\) with \(n>1\) has become a laboratory where one can push NFS to its limits and test new ideas which are ineffective or impossible in the factorization variant of NFS. Firstly, the polynomial selection methods were supplemented with the generalized Joux-Lercier (GJL) method [5, 27], the Conjugation (Conj) method [5] and the Sarkar-Singh (SS) method [31]. See Table 1 for a summary of the consequences of these methods on the asymptotic complexity. In particular, in all these algorithms the complexity for the medium prime case is slightly larger than that of the large prime case.

Table 1. The complexity of each algorithm in the medium and large prime cases. Each cell indicates c if the complexity is \(L_Q(1/3,(c/9)^\frac{1}{3})\).

Secondly, a classical idea which was introduced in the context of factorization is to replace the two polynomials f and g used in NFS by a polynomial f and several polynomials \(g_i\), \(i=1,2,\ldots \), which play the role of g. All the currently known variants of NFS admit such variants with multiple number fields (MNFS), which have a slightly better asymptotic complexity, as shown in Table 2. The discrete logarithm problem admits a case with no equivalent in the factorization context: instead of having a distinguished polynomial f and many sides \(g_i\), all the polynomials are interchangeable [8].

Table 2. The complexity of each algorithm using multiple number fields. Each cell indicates an approximation of c if the complexity is \(L_Q(1/3,(c/9)^\frac{1}{3})\).

Thirdly, when the characteristic p has a special form, as is the case for fields in several pairing-based cryptosystems, one might speed up the computations by variants called the special number field sieve (SNFS). In Table 3 we list the asymptotic complexity of each algorithm. Once again, the medium characteristic case has been harder than the large characteristic one.

Table 3. The complexity of each algorithm used when the characteristic has a special form (SNFS). Each cell indicates an approximation of c if the complexity is \(L_Q(1/3,(c/9)^\frac{1}{3})\).

Our Contributions. Let us place ourselves in the case when the extension degree is composite with relatively prime factors, \(n=\eta \kappa \) with \(\gcd (\eta , \kappa )=1\). In the particular cases \(\eta =1\) and \(\kappa =1\) we obtain known algorithms, but we don’t exclude these cases from our presentation. The basic idea is to use the trivial equality

$$\begin{aligned} {\mathbb F}_{p^n}={\mathbb F}_{ (p^\eta )^\kappa }. \end{aligned}$$

In the JLSV algorithm, \({\mathbb F}_{p^n}\) is constructed as \({\mathbb F}_p[x]/k(x)\) for an irreducible polynomial k(x) of degree n. In the TNFS algorithm \({\mathbb F}_{p^n}\) is obtained as R/pR, where R is the ring of integers of a number field in which p is inert. In our construction \({\mathbb F}_{p^\eta }= R/pR\) as in TNFS and \({\mathbb F}_{p^n}=(R/pR)[x]/(k(x))\) where k is a degree \(\kappa \) irreducible polynomial over \({\mathbb F}_{p^\eta }\).

Interestingly, this construction can be integrated into an algorithm, which we call the extended tower number field sieve (exTNFS), in which we can target \({\mathbb F}_{p^{\eta \kappa }}\) with the same complexity as \({\mathbb F}_{P^\kappa }\) for a prime P of the same bitsize as \(p^\eta \). Hence, for composite extension degrees, we obtain complexities in the medium characteristic case which are similar to those in the large characteristic case. This is because our construction lets us consider the norm of an element from a number field \(K_f\) that is ‘doubly’ extended by h(t) and f(x), i.e. \(K_f :={\mathbb Q}(\iota , \alpha _f)\), where \(\iota \) and \(\alpha _f\) denote roots of h and f, respectively. This yields smaller norm sizes, which play an important role in the complexity analysis, than when we work with an absolute extension of the same degree.

Since the previous algorithms have an “anomaly” in the boundary case \(\ell _p=2/3\), where the complexity is better than in the large prime case, we obtain, when n is composite, a better complexity in the medium prime case than in the large prime case.

Overview. We introduce the new algorithm in Sect. 2 and analyse its complexity in Sect. 3. The multiple number field variant and the one dedicated to fields of SNFS characteristic are discussed in Sect. 4. In Sect. 5 we make a precise comparison to the state-of-the-art algorithms at cryptographic sizes before deriving new key sizes for pairings in Sect. 6. We conclude with cryptographic implications of our result in Sect. 7.

2 Extended TNFS

2.1 Setting

Throughout this paper, we target fields \({\mathbb F}_Q\) with \(Q = p^n\) where \(n = \eta \kappa \) such that \(\eta ,\kappa \ne 1\), \(\gcd (\eta , \kappa )=1\) and the characteristic p is medium or large, i.e. \(\ell _p>1/3\).

First we select a polynomial \(h(t) \in {\mathbb Z}[t]\) of degree \(\eta \) which is irreducible modulo p. We put \(R := {\mathbb Z}[t]/h(t)\) and note that \(R/pR\simeq {\mathbb F}_{p^\eta }\). Then we select two polynomials f and g with integer coefficients whose reductions modulo p have a common factor k(x) of degree \(\kappa \) which is irreducible over \({\mathbb F}_{p^\eta }\). Our algorithm is unchanged if f and g have coefficients in R, because in all cases we use the number fields \(K_f\) (resp. \(K_g\)) defined by f (resp. g) above the fraction field of R, but this generalization is not needed for the purposes of this paper, except in an MNFS variant.

The conditions on f, g and h yield two ring homomorphisms from R[x]/f(x) (resp. R[x]/g(x)) to \((R/pR)/ k(x) = {\mathbb F}_{p^{\eta \kappa }}\): in order to compute the reduction of a polynomial in R[x] modulo p and then modulo k(x), one can start by reducing modulo f (resp. g) and continue by reducing modulo p and then modulo k(x). The result is the same whether we use f or g. Thus one has the commutative diagram in Fig. 1, which is a generalization of the classical diagram of NFS.

Fig. 1. Commutative diagram of exTNFS. When \(R = {\mathbb Z}\) this is the diagram of NFS for non-prime fields. When \(k(x)=x-m\) for some \(m\in R\) this is the diagram of TNFS. When both \(R={\mathbb Z}\) and \(k(x)=x-m\) this is the diagram of NFS.

After the polynomial selection, the exTNFS algorithm proceeds as all the variants of NFS, following the same steps: relation collection, linear algebra and individual logarithm. Most of these steps are very similar to those of TNFS, as we explain below.

2.2 Detailed Descriptions

2.2.1 Polynomial Selection.

Choice of h. We have to select a polynomial \(h(t)\in {\mathbb Z}[t]\) of degree \(\eta \) which is irreducible modulo p and whose coefficients are as small as possible. As in TNFS, we try random polynomials h with small coefficients and factor them in \({\mathbb F}_p[t]\) to test irreducibility. Heuristically, one succeeds after about \(\eta \) trials, and since \(\eta \le 3^\eta \) we expect to find h such that \(||{ h }||_\infty =1\). For a more rigorous discussion of the existence of such polynomials one can refer to [7].
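To make this search concrete, here is a minimal Python/SymPy sketch of the trial procedure; the function name find_h and the trial bound are illustrative choices, not part of the original description.

```python
import random
from sympy import Poly, symbols

t = symbols('t')

def find_h(eta, p, trials=1000):
    # Try random monic h of degree eta with coefficients in {-1, 0, 1}
    # until one is irreducible modulo p, so that R/pR = F_{p^eta}.
    for _ in range(trials):
        coeffs = [1] + [random.choice([-1, 0, 1]) for _ in range(eta)]
        if Poly(coeffs, t, modulus=p).is_irreducible:
            return Poly(coeffs, t)  # h with integer coefficients, ||h||_inf = 1
    return None
```

Heuristically about one candidate in \(\eta \) is irreducible, so a handful of trials suffices for the small values of \(\eta \) used in practice.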

Next we select f and g in \({\mathbb Z}[x]\) which have a common factor k(x) modulo p of degree \(\kappa \) which remains irreducible over \({\mathbb F}_{p^\eta }\). It is here that we use the condition \(\gcd (\eta ,\kappa )=1\), because an irreducible polynomial \(k(x)\in {\mathbb F}_p[x]\) of degree \(\kappa \) remains irreducible over \({\mathbb F}_{p^\eta }\) if and only if \(\gcd (\eta , \kappa )=1\). If one had an algorithm to select f and g in R[x] one might drop this condition, but in this paper f and g have integer coefficients. Thus it is enough to test the irreducibility of k(x) over \({\mathbb F}_p\) and we have the same situation as in the classical variant of NFS for non-prime fields (JLSV): JLSV\(_1\), JLSV\(_2\), the Conjugation method, GJL and Sarkar-Singh. Let us present two of these methods which are important for the asymptotic complexity results.

JLSV \(_2\) Method. We briefly describe the polynomial selection introduced in Sect. 3.2 of [22]. One first chooses a monic polynomial \(f_0(x)\) of degree \(\kappa \) with small coefficients, which is irreducible over \({\mathbb F}_{p}\) (and automatically over \({\mathbb F}_{p^\eta }\) because \(\gcd (\eta ,\kappa )=1\)). Set an integer \(W \approx p^{1/(D+1)}\), where D is a parameter determined later subject to the condition \(D \ge \kappa \). Then we define \(f(x):= f_0(x+W)\). Take the coefficients of g(x) as the shortest vector of an LLL-reduced basis of the lattice L defined by the columns:

$$\begin{aligned} L := ( p \cdot \mathbf {x^0}, \dots , p \cdot \mathbf {x^\kappa }, \mathbf {f(x)}, \mathbf {x f(x)}, \dots , \mathbf {x^{D+1-\kappa } f(x)}). \end{aligned}$$

Here, \(\mathbf {f(x)} \) denotes the vector formed by the coefficients of a polynomial f. Finally, we set \(k = f\); then we have

  • \(\deg (f) = \kappa \) and \( ||f ||_\infty = O(p^{\frac{\kappa }{D+1}})\);

  • \(\deg (g) = D \ge \kappa \) and \( ||g ||_\infty = O(p^{\frac{\kappa }{D+1}})\).

Conjugation Method. We recall the polynomial selection method given in Algorithm 4 of [5]. First, one chooses two polynomials \(g_1(x)\) and \(g_0(x)\) with small coefficients such that \(\deg {g_1} < \deg {g_0} = \kappa \). Next one chooses a quadratic, monic, irreducible polynomial \(\mu (x) \in {\mathbb Z}[x]\) with small coefficients. If \(\mu (x)\) has a root \(\delta \) in \({\mathbb F}_p\) and \(g_0 + \delta g_1\) is irreducible over \({\mathbb F}_{p}\) (and automatically over \({\mathbb F}_{p^\eta }\) because \(\gcd (\eta ,\kappa )=1\)), then set \(k= g_0 + \delta g_1\). Otherwise, one repeats the above steps until such \(g_1\), \(g_0\), and \(\delta \) are found. Once this is done, use rational reconstruction to find u and v such that \( \delta \equiv u/v \pmod p\) and \(u,v = O(\sqrt{p})\). Finally, we set \(f = {\text {Res}}_{Y} ( \mu (Y) , g_0(x) + Y g_1(x) )\) and \(g = v g_0 + u g_1\). By construction we have

  • \(\deg (f)=2 \kappa \) and \(\Vert f\Vert _\infty =O(1)\);

  • \(\deg (g)= \kappa \) and \(\Vert g\Vert _\infty =O(\sqrt{p})=O(Q^{\frac{1}{2\eta \kappa }})\).

The bound on \(||{ f }||_\infty \) depends on the number of polynomials \(g_0 + \delta g_1\) tested before we find one which is irreducible over \({\mathbb F}_{p}\). Heuristically this happens on average after \(2\kappa \) trials. Since there are \(3^{2\kappa }>2\kappa \) choices of \(g_0\) and \(g_1\) of norm 1 we have \(||{ f }||_\infty =O(1)\).
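As an illustration, the following Python/SymPy sketch carries out the Conjugation method for the specific choice \(\mu (Y)=Y^2+1\) (which requires \(p\equiv 1 \pmod 4\)); the rational reconstruction uses the standard half-extended Euclidean algorithm. The function names are ours, and a full implementation would retry with other \(g_0\), \(g_1\), \(\mu \) when the irreducibility conditions fail.

```python
import math
from sympy import symbols, Poly, resultant
from sympy.ntheory import sqrt_mod

x, Y = symbols('x Y')

def rational_reconstruction(delta, p):
    # Half-extended Euclid: return (u, v) with delta = u/v (mod p)
    # and |u|, |v| of the order of sqrt(p).
    bound = math.isqrt(p)
    r0, r1, t0, t1 = p, delta % p, 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return r1, t1

def conjugation(p, g0, g1):
    # Fixed (hypothetical) choice mu(Y) = Y^2 + 1; it has a root mod p only
    # when p = 1 (mod 4), and we assume g0 + delta*g1 is irreducible mod p.
    delta = sqrt_mod(-1, p)                  # root of Y^2 + 1 modulo p
    u, v = rational_reconstruction(delta, p)
    f = resultant(Y**2 + 1, g0 + Y * g1, Y)  # deg f = 2*kappa, small coefficients
    g = v * g0 + u * g1                      # coefficients of size O(sqrt(p))
    k = Poly(g0 + delta * g1, x, modulus=p)  # common factor of f and g modulo p
    return f, g, k
```

For instance, with \(g_0=x^3+1\) and \(g_1=x\) (so \(\kappa =3\)), k is the common degree-3 factor of f and g modulo p.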

Relation Collection. The elements of \(R={\mathbb Z}[t]/h(t)\) can be represented uniquely as polynomials of \({\mathbb Z}[t]\) of degree less than \(\deg h\).

We proceed as in TNFS and enumerate all the pairs \((a,b)\in {\mathbb Z}[t]^2\) of degree \(\le \eta -1\) such that \(||{ a }||_\infty \), \(||{ b }||_\infty \le A\) for a parameter A to be determined. We say that we obtain a relation for the pair (a, b) if

$$\begin{aligned} \begin{array}{l} N_f(a,b):={{\text {Res}}}_t({{\text {Res}}}_x(a(t)-b(t)x,f(x)),h(t))\text { and }\\ N_g(a,b):={{\text {Res}}}_t({{\text {Res}}}_x(a(t)-b(t)x,g(x)),h(t)) \end{array} \end{aligned}$$

are B-smooth for a parameter B to be determined (an integer is B-smooth if all its prime factors are less than B). If \(\iota \) denotes a root of h in R, our enumeration is equivalent to putting linear polynomials \(a(\iota )-b(\iota )x\) at the top of the diagram of Fig. 1.
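For concreteness, here is a small Python/SymPy sketch of the norm computation and of a naive trial-division smoothness test used to decide whether a pair (a, b) gives a relation; the helper names are ours and the smoothness test is only meant for illustration (real implementations sieve or use ECM).

```python
from sympy import symbols, resultant

t, x = symbols('t x')

def norm(a, b, f, h):
    # N_f(a,b) = Res_t( Res_x( a(t) - b(t)*x, f(x) ), h(t) )
    return int(resultant(resultant(a - b * x, f, x), h, t))

def is_B_smooth(n, B):
    # naive trial division up to B, for illustration only
    n = abs(n)
    if n == 0:
        return False
    for q in range(2, B + 1):
        while n % q == 0:
            n //= q
    return n == 1

def gives_relation(a, b, f, g, h, B):
    return is_B_smooth(norm(a, b, f, h), B) and is_B_smooth(norm(a, b, g, h), B)
```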

One can put non-linear polynomials \(r(x)\in R[x]\) of degree \(\tau -1\) in the diagram for any \(\tau \ge 2\), but this is not necessary in this paper. Indeed, we enumerate polynomials r to attack \({\mathbb F}_{p^{\kappa \eta }}\) of the same degree as those that one would use to attack \({\mathbb F}_{P^\kappa }\) for a prime \(P\approx p^\eta \). It happens that in the large prime case and for the best parameters of the boundary case the optimal value of \(\tau \) is 2. This leads us to state Lemma 1 only in the case \(\tau =2\) and to write everywhere \(r=a(\iota )-b(\iota )x\), but we bear in mind that r could have a larger degree; we prove Lemma 2 in Appendix A, use it in the last paragraph of Sect. 4 and write Table 5 for arbitrary values of \(\tau \) before observing that the optimal value is again \(\tau =2\).

Remark 1

The choice of the polynomials r at the top of the diagram is such that the norm sizes are as small as possible. If one had an algorithm to pinpoint the principal ideals of a number field which have small norms, then one would use this algorithm to generate the polynomials r.

As one of the referees noted, the advantage of exTNFS when compared to the classical version of NFS is that our enumeration is less naive. Indeed, since the norms are computed as an iteration of resultants, i.e. \(N_f(r(t,x))={{\text {Res}}}_t({{\text {Res}}}_x(r(t,x),f(x)),h(t))\), we can enumerate polynomials r which make the relative norm \({{\text {Res}}}_x(r(t,x),f(x))\) small in some sense, for example by restricting to linear polynomials r.

For each pair (a, b), i.e. \(r=a-bx\), one obtains a linear equation where the unknowns are logarithms of elements of the factor base, as in the classical variant of NFS for discrete logarithms. Let us now define the factor base in our particular case.

Factor Base. Let \(\alpha _f\) (resp. \(\alpha _g\)) be a root of f in \(K_f\) (resp. of g in \(K_g\)), the number field it defines over the fraction field of R. Then the norm of \(a(\iota )-b(\iota )\alpha _f\) (resp. \(a(\iota )-b(\iota )\alpha _g\)) over \({\mathbb Q}\) is \({{\text {Res}}}_t({{\text {Res}}}_x(a(t)-b(t)x,f(x)),h(t))\) (resp. \({{\text {Res}}}_t({{\text {Res}}}_x(a(t)-b(t)x,g(x)),h(t))\)), up to a power of l(f) (resp. l(g)), the leading coefficient of f (resp. g). We call the factor base the set of prime ideals of \(K_f\) and \(K_g\) which can occur in the factorization of \(a(\iota )-b(\iota )\alpha _f\) and \(a(\iota )-b(\iota )\alpha _g\) when both norms are B-smooth. By Proposition 1 in [7] we can give an explicit description of the factor base as \(\mathcal {F}(B):=\mathcal {F}_f(B)\bigcup \mathcal {F}_g(B)\) where

$$\begin{aligned} \begin{array}{ll} \mathcal {F}_f(B) =&{} \left\{ \langle { \mathfrak {q}, \alpha - \gamma } \rangle : \begin{array}{l} \mathfrak {q} \text{ is } \text{ a } \text{ prime } \text{ in } {\mathbb Q}(\iota ) \text{ lying } \text{ over } \text{ a } \text{ prime } \\ p \le B \text{ and } f(\gamma ) \equiv 0 \pmod {\mathfrak q} \end{array} \right\} \\ &{}\bigcup \left\{ \text {prime ideals of }K_f\text { dividing }l(f){{\text {Disc}}}(f)\right\} . \end{array} \end{aligned}$$

and similarly for \(\mathcal {F}_g(B)\).

Schirokauer Maps. If \(\langle a(\iota )-b(\iota )\alpha _f\rangle =\prod _{{\mathfrak q}\in \mathcal {F}_f(B) } {\mathfrak q}^{{{\text {val}}}_{\mathfrak q}(a(\iota )-b(\iota )\alpha _f)}\) and \(\langle a(\iota )-b(\iota )\alpha _g\rangle =\prod _{{\mathfrak q}\in \mathcal {F}_g(B) } {\mathfrak q}^{{{\text {val}}}_{\mathfrak q}(a(\iota )-b(\iota )\alpha _g)}\) we write

$$\begin{aligned} \sum _{{\mathfrak q}\in \mathcal {F}_f(B)}{{\text {val}}}_{\mathfrak q}(a(\iota )-b(\iota )\alpha _f) \log {\mathfrak q}+\epsilon _f(a,b) = \sum _{{\mathfrak q}\in \mathcal {F}_g(B)}{{\text {val}}}_{\mathfrak q}(a(\iota )-b(\iota )\alpha _g) \log {\mathfrak q}+\epsilon _g(a,b) \end{aligned}$$

where the log sign denotes virtual logarithms in the sense of [22, 32] and \(\epsilon _f\) and \(\epsilon _g\) are correction terms called Schirokauer maps which were first introduced in [32].

The novelty of TNFS and exTNFS with respect to JLSV is that \(K_f\) and \(K_g\) are constructed as tower extensions instead of absolute extensions. On the other hand, it is more convenient to work with absolute extensions when we compute Schirokauer maps. We solve this problem by computing primitive elements \(\theta _f\) (resp. \(\theta _g\)) of \(K_f/{\mathbb Q}\) (resp. \(K_g/{\mathbb Q}\)). For a proof we refer to Sect. 4.3 in [22].

Linear Algebra and Individual Logarithm. These two steps are unchanged with respect to the classical variant of NFS. The linear algebra step comes after relation collection and consists in solving a linear system over \({\mathbb F}_l\) for some prime factor l of the order of \({\mathbb F}_Q^*\). Using Wiedemann’s algorithm this has a quasi-quadratic complexity in the size of the linear system, which is equal to the cardinality of the factor base. In [7] it is shown that the factor base has \((2+o(1))B/\log B\) elements, so the cost of the linear algebra is \(B^{2+o(1)}\).

In the individual logarithm step one writes any desired discrete logarithm as a sum of virtual logarithms of elements in the factor base. Since this step is very similar to the corresponding step in NFS, we defer its description to the Appendix.

3 Complexity

The complexity analysis of exTNFS follows the steps of the analysis of NFS in the case of prime fields. It is expected that the stages of the algorithm other than the relation collection and the linear algebra are negligible, hence we select parameters to minimize the cost of these two stages and afterwards check that the other stages are indeed negligible.

Let us call T the time spent on average for each polynomial \(r\in R[x]\) enumerated in the relation collection stage (in this paper \(r=a(\iota )-b(\iota )x\)), and let \(P_f\) (resp. \(P_g\)) be the probability that the norm \(N_f\) (resp. \(N_g\)) of r with respect to f (resp. g) is B-smooth. The number of polynomials that we test before finding each new relation is on average \(1/(P_fP_g)\), so the cost of the relation collection is \(\#\mathcal {F}(B)T/(P_fP_g)\).

We make the usual heuristic that the proportion of smooth norms is the same as the proportion of arbitrary positive integers of the same size which are smooth, so \(P_f={{\text {Prob}}}(N_f,B)\) (resp. \(P_g={{\text {Prob}}}(N_g,B)\)), where \({{\text {Prob}}}(x,y)\) is the probability that an arbitrary integer less than x is y-smooth. The value of T depends on whether we use a sieving technique or consider each value and test smoothness with ECM [26]; if we use the latter variant we obtain \(T=L_{B}(1/2,\sqrt{2})(\log Q)^{O(1)}\), so \(T=B^{o(1)}\). Using the algorithm of Wiedemann [34] the cost of the linear algebra is \((\#\mathcal F(B))^{2+o(1)}=B^{2+o(1)}\). Hence, up to an exponent \(1+o(1)\), we have

$$\begin{aligned} \text {complexity(exTNFS)}=\frac{B}{{{\text {Prob}}}(N_f,B){{\text {Prob}}}(N_g,B)}+B^2. \end{aligned}$$
(1)

This equation is the same for NFS, TNFS, exTNFS and the corresponding SNFS variants. The differences begin when we look at the size of \(N_f\) and \(N_g\) which depend on the polynomial selection method. In what follows we instantiate Eq. (1) with various cases and obtain equations which have already been analyzed in the literature.
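As a rough numerical companion to Eq. (1), one can compare parameter choices using the first-order approximation \({{\text {Prob}}}(x,y)\approx u^{-u}\) with \(u=\log x/\log y\), in the spirit of the Canfield-Erdős-Pomerance theorem recalled below; everything in this sketch is on a \(\log _2\) scale and the function names are ours.

```python
import math

def log2_smooth_prob(log2_x, log2_B):
    # log2 of the first-order u^{-u} approximation of the smoothness probability
    u = log2_x / log2_B
    return -u * math.log2(u)

def log2_cost(log2_Nf, log2_Ng, log2_B):
    # Eq. (1): relation collection plus linear algebra, both on a log2 scale
    relation_collection = log2_B - (log2_smooth_prob(log2_Nf, log2_B) +
                                    log2_smooth_prob(log2_Ng, log2_B))
    return max(relation_collection, 2 * log2_B)
```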

Lemma 1

Let h and f be irreducible polynomials over \({\mathbb Z}\) and call \(\eta :=\deg h\) and \(\kappa :=\deg (f)\). Let \(a(t), b(t) \in {\mathbb Z}[t]\) be polynomials of degree at most \(\eta -1\) with \(||a ||_\infty , ||b ||_\infty \le A\). We put \(N_f(a,b) := {{\text {Res}}}_t({{\text {Res}}}_x(a(t)-b(t)x,f(x)),h(t))\). Then we have

  1. 1.
    $$\begin{aligned} |{ N_f(a,b) }|< A^{\eta \cdot \kappa } ||{ f }||_\infty ^{\eta } ||{ h }||_\infty ^{ \kappa \cdot (\eta - 1)} C(\eta , \kappa ), \end{aligned}$$
    (2)

    where \(C( \eta , \kappa ) = ( \eta +1 )^{ (3 \kappa +1)\eta /2 } ( \kappa +1 )^{ 3\eta /2 }\).

  2. 2.

    Assume in addition that \(||{ h }||_\infty \) is bounded by an absolute constant H and that \(p= L_{Q}(\ell _p,c)\) for some \(\ell _p>1/3\) and \(c>0\). Then

    $$\begin{aligned} N_f(a,b)\le E^\kappa ||{ f }||_\infty ^\eta L_{Q}(2/3,o(1)), \end{aligned}$$
    (3)

    where \(E=A^\eta \).

Proof

  1. 1.

    This is proven in Theorem 3 in [7].

  2. 2.

    The overhead is bounded as follows

    $$\begin{aligned} \log (||{ h }||_\infty ^{\kappa (\eta -1)}C(\eta ,\kappa ))\le & {} \kappa \eta \log H+ 3\kappa \eta \log \eta +3\eta \log \kappa \\= & {} O(\log (Q)^{1-\ell _p}(\log \log Q)^{\ell _p}) \\= & {} o(1) \log (Q)^{2/3}(\log \log Q)^{1/3}. \end{aligned}$$

       \(\square \)

If \(N_f=L_{Q}(2/3)\) then we can forget the overhead \(L_{Q}(2/3,o(1))\), as the Canfield-Erdős-Pomerance theorem states that the smoothness probability satisfies, uniformly in x and y in the validity domain,

$$\begin{aligned} {{\text {Prob}}}(x^{1+o(1)},y)={{\text {Prob}}}(x,y)^{1+o(1)}. \end{aligned}$$

The next statement summarizes our results.

Theorem 1

(under the classical NFS heuristics) If \(Q = p^n\) is a prime power such that

  • \(p = L_Q( \ell _p , c_p )\) with \(1/3 < \ell _p \);

  • \(n = \eta \kappa \) such that \(\eta ,\kappa \ne 1\) and \(\gcd (\eta , \kappa )=1\)

then the discrete logarithm over \({\mathbb F}_Q\) can be solved in \(L_Q(1/3,C)\) where C and the additional conditions are listed in Table 4.

Table 4. Complexity of exTNFS variants.
Table 5. Comparison of norm sizes. \(\tau \) is the number of coefficients of r(x), while D and K are integer parameters subject to the conditions in the last column.

In the rest of this section we prove this statement. All cases in the table share one of the conditions \(\kappa =o\left( (\frac{\log Q}{\log \log Q})^\frac{1}{3}\right) \) or \(\kappa \le c (\frac{\log Q}{\log \log Q})^\frac{1}{3} \) for some constant \(c>0\). These are equivalent to saying that \(P=p^\eta = L_Q( \ell _P )\) for some \(\ell _P \ge 2/3\).

3.1 exTNFS-JLSV\(_2\)

In this section we assume that n has a factor \(\kappa \) such that

$$\begin{aligned} \kappa =o\left( \left( \frac{\log (Q)}{\log \log (Q)} \right) ^{1/3} \right) . \end{aligned}$$

Let us plug \(||h ||_\infty = O(1)\) and the values \(||f ||_\infty , ||g ||_\infty \approx p^{\kappa /(D+1)} \) coming from the JLSV\(_2\) method (Sect. 2.2) into Eq. (2). Then we get

$$\begin{aligned} |{ N_f(a,b) }|< & {} \left( A^{\eta \kappa } (p^{ \frac{\kappa }{ D+1} })^\eta \right) ^{1+o(1)} = \left( E^\kappa P^{ \frac{ \kappa }{ D+1 } } \right) ^{1+o(1)},\end{aligned}$$
(4)
$$\begin{aligned} |{ N_g(a,b) }|< & {} \left( A^{\eta D} (p^{ \frac{\kappa }{ D+1} })^\eta \right) ^{1+o(1)} = \left( E^D P^{ \frac{ \kappa }{ D+1 } } \right) ^{1+o(1)}, \end{aligned}$$
(5)

where we set \(E := A^{\eta }\) and \(P:= |R/pR|= p^\eta \).

One recognizes the expressions for the norms in the large prime case [22, Appendix A.3.], where \(P=p\) and \(\kappa =n\). We conclude that we have the same complexity:

$$\begin{aligned} \text {complexity(exTNFS with JLSV}_2\text {)}=L_{Q}(1/3,\root 3 \of {64/9}). \end{aligned}$$

3.2 exTNFS-GJL

We relax a bit the condition from the previous section: we assume that n has a factor \(\kappa \) such that

$$\begin{aligned} \kappa \le (8/3)^{-\frac{1}{3}} \left( \frac{\log (Q)}{\log \log (Q)} \right) ^{1/3}. \end{aligned}$$

Recall the characteristics of our polynomials: \(||h ||_\infty = O(1)\) and \(\deg h=\eta \); \(||f ||_\infty =O(1)\) and \(\deg f=D+1\) for a parameter \(D\ge \kappa \); \(||g ||_\infty \approx p^{\kappa /(D+1)}\) and \(\deg g=D\). We inject these values in Eq. (2) and we get

$$\begin{aligned} |{ N_f(a,b) }|< & {} E^{D+1} L_{Q}(2/3,o(1)),\end{aligned}$$
(6)
$$\begin{aligned} |{ N_g(a,b) }|< & {} E^D Q^{1/(D+1)} L_{Q}(2/3,o(1)), \end{aligned}$$
(7)

where we set \(E := A^{\eta }\) and \(P:= |R/pR|= p^\eta \). We recognize the expression in the first equation of Sect. 4.2 in [5], so

$$\begin{aligned} \text {complexity(exTNFS with GJL)}=L_{Q}(1/3,\root 3 \of {64/9}). \end{aligned}$$

3.3 exTNFS-Conj

We propose here a variant of NFS which combines exTNFS with the Conjugation method of polynomial selection.

Let us consider the case when \(n=\eta \kappa \) with

$$\begin{aligned} \kappa =\left( \frac{1}{12^{1/3}}+o(1)\right) \left( \frac{\log (Q)}{\log \log (Q)}\right) ^{1/3}. \end{aligned}$$

Note that this implies \(\ell _p\le 2/3\) so that we are in the medium characteristic or boundary case.

As before, plugging the values coming from the Conjugation method (Sect. 2.2) into Eq. (2), we have

$$\begin{aligned} |N_f(a,b)|< & {} E^{2 \kappa } L_{Q}(2/3,o(1)),\end{aligned}$$
(8)
$$\begin{aligned} |N_g(a,b)|< & {} E^{ \kappa } (p^{\kappa \eta })^{1/(2 \kappa )}L_{Q}(2/3,o(1)). \end{aligned}$$
(9)

When we combine Eqs. (8) and (9) we obtain

$$\begin{aligned} |N_f(a,b)|\cdot |N_g(a,b)|< E^{3 \kappa }Q^{(1+o(1))/(2\kappa )}. \end{aligned}$$

But this is Eq. (5) in [5] when \(\tau =2\) (the parameter \(\tau \), the number of coefficients of the sieving polynomial r, is written as t in [5]). The rest of the computations is identical to point 3 of Theorem 1 in [5], so

$$\begin{aligned} \text {complexity(exTNFS-Conj)}=L_Q(1/3,(48/9)^{1/3}). \end{aligned}$$

4 Variants

4.1 The Case When p has a Special Form (SexTNFS)

In some pairing-based constructions p has a special form, e.g. in the Barreto-Naehrig curves [9] \(p=36u^4+36u^3+24u^2+6u+1\) of embedding degree 12 and in the Freeman pairing-friendly constructions of embedding degree 10 [18, Sect. 5.3] \(p= 25u^4 + 25u^3 + 25u^2 + 10u + 3\). For a given integer d, an integer p is d-SNFS if there exists an integer u and a polynomial \(\varPi (x)\) with integer coefficients so that

$$\begin{aligned} p = \varPi (u), \end{aligned}$$

\(\deg \varPi =d\) and \(||{ \varPi }||_\infty \) is bounded by an absolute constant.

We consider the case when \(n=\eta \kappa \), \(\gcd (\eta ,\kappa )=1\) with \(\kappa = o\left( \left( \frac{ \log Q}{ \log \log Q} \right) ^{1/3} \right) \) and p is d-SNFS. In this case exTNFS is unchanged: we select h, f and g three polynomials with integer coefficients so that

  • h is irreducible modulo p, \(\deg h=\eta \) and \(||{ h }||_\infty =O(1)\);

  • f and g have a common factor k(x) modulo p which is irreducible of degree \(\kappa \).

Choice of f and g Using the Method of Joux and Pierrot (as in SNFS-JP). Find a polynomial S of degree at most \(\kappa -1\) with coefficients in \(\{-1,0,1\}\) so that \(k(x)=x^\kappa +S(x)-u\) is irreducible modulo p. Since the proportion of irreducible polynomials of degree \(\kappa \) over \({\mathbb F}_p\) is \(1/\kappa \) and there are \(3^\kappa \) choices, we expect this step to succeed. Then we set

$$\begin{aligned} \left\{ \begin{array}{lll} g&{}=&{}x^\kappa +S(x)-u\\ f&{}=&{}\varPi (x^\kappa +S(x)). \end{array} \right. \end{aligned}$$

If f is not irreducible over \({\mathbb Z}[x]\), which happens with small probability, start over. Note that g is irreducible modulo p and that f is a multiple of g modulo p. Precisely, as in [23], we choose S(x) so that it is of degree \(O(\log \kappa / \log 3)\). Since \(3^{O(\log \kappa / \log 3)}>\kappa \), we still have a good chance of finding an irreducible g; a sketch of this selection is given after the list below. By construction we have:

  • \(\deg (g)=\kappa \) and \(||{ g }||_\infty =u=p^{1/d}\);

  • \(\deg (f)=\kappa d\) and \(||{ f }||_\infty =O\big ( (\log \kappa )^d \big )\).
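Below is the sketch of this selection announced above (Python/SymPy); the function name and the search order are ours, and \(\varPi \) is passed as a SymPy expression in the symbol X with \(p=\varPi (u)\).

```python
import itertools
from sympy import symbols, Poly, expand

x, X = symbols('x X')

def joux_pierrot(p, u, Pi, kappa):
    # Search S(x) with coefficients in {-1, 0, 1}, lowest degree first,
    # such that g = x^kappa + S(x) - u is irreducible modulo p; then
    # f = Pi(x^kappa + S(x)) is a multiple of g modulo p.
    for deg_S in range(kappa):
        for coeffs in itertools.product([-1, 0, 1], repeat=deg_S + 1):
            S = sum(c * x**i for i, c in enumerate(coeffs))
            g = x**kappa + S - u
            if not Poly(g, x, modulus=p).is_irreducible:
                continue
            f = expand(Pi.subs(X, x**kappa + S))
            if Poly(f, x).is_irreducible:   # over Z; otherwise keep searching
                return f, g
    return None

# Example 2 data below would be Pi = 36*X**4 + 36*X**3 + 24*X**2 + 6*X + 1,
# u = 2**158 - 2**128 - 2**68 + 1 and kappa = 3.
```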

Let us carry out the analysis of this particular case of exTNFS. We inject these values in Eq. (2) and obtain

$$\begin{aligned} |N_f(a,b) |\le & {} E^{\kappa d} L_{Q}(2/3, o(1) ) \\ |N_g(a,b) |\le & {} E^{\kappa }P^{1/d} L_{Q}(2/3, o(1)), \end{aligned}$$

where \(E := A^\eta \) and \(P:= |R/pR| = p^\eta \). We recognize the size of the norms in the analysis by Joux and Pierrot [23, Sect. 6.3], so we obtain the same complexity as in their paper:

$$\begin{aligned} \text {complexity(SexTNFS)}=L_Q(1/3,(32/9)^{1/3}). \end{aligned}$$

4.2 The Multiple Polynomial Variants (MexTNFS)

Virtually every variant of NFS can be accelerated using multiple polynomials, and exTNFS is no exception. The multiple variant of exTNFS is as follows: choose f and g which have a common factor k(x) modulo p which is irreducible of degree \(\kappa \), using any of the methods given in Sect. 2.2. Next we set \(f_1=f\) and \(f_2=g\) and select \(V-2\) other irreducible polynomials \(f_i:=\mu _i f_1+\nu _if_2\), where \(\mu _i=\sum _{j=0}^{\eta -1}\mu _{i,j}\iota ^j\) and \(\nu _i=\sum _{j=0}^{\eta -1}\nu _{i,j}\iota ^j \) are elements of \(R={\mathbb Z}[t]/h{\mathbb Z}[t]\) such that \(||{ \mu _i }||_\infty ,||{ \nu _i }||_\infty \le V^\frac{1}{2\eta }\), and where \(V=L_{Q}(1/3,c_v)\) is a parameter which will be selected later. Denote by \(\alpha _i\) a root of \(f_i\) for \(i=1, 2, \dots , V\).

Once again the complexity depends on the manner in which the polynomials f and g are selected.

MexTNFS-JLSV \(_\mathbf{2}\) . Barbulescu and Pierrot [8, Sect. 5.3] analysed the complexity of MNFS with JLSV\(_2\), so we only need to check that the size of the norm is the same for NFS and exTNFS for each polynomial \(f_i\) with \(1\le i\le V\). By construction we have:

  • \(\deg (f_1)=\kappa \) and \(||{ f_1 }||_\infty =p^\frac{\kappa }{D+1}\);

  • \(\deg (f_i)=D \ge \kappa \) and \(||{ f_i }||_\infty =V^\frac{1}{2\eta }p^\frac{\kappa }{D+1}\) for \(2 \le i \le V\).

As before, we inject these values in Eq. (2) and obtain

$$\begin{aligned} |N_{f_1}(a,b) |<&E^\kappa (p^{\kappa \eta })^\frac{1}{D+1} L_{Q}(2/3, o(1)) \\ |N_{f_i}(a,b) |< & {} E^D (p^{\kappa \eta })^\frac{1}{D+1} L_{Q}(2/3, o(1)) \text{ for } 2 \le i \le V. \end{aligned}$$

We emphasize that \((V^\frac{1}{2\eta })^\eta =V^\frac{1}{2}=L_Q(1/3,c_v/2)=L_{Q}(2/3,o(1))\) which is true without any condition on \(\eta \). Hence we obtain

$$\begin{aligned} \text {complexity(MexTNFS-JLSV}_2)=L_Q \left( 1/3, \Big ( \frac{ 92+26 \sqrt{13} }{ 27 } \Big )^{1/3} \right) . \end{aligned}$$

MexTNFS-Conj and GJL. Pierrot [30] studied the multiple polynomial variant of NFS when the Conjugation method or GJL is used. To show that we obtain the same complexities we need to show that the norm with respect to each polynomial is the same as in the classical NFS, up to a factor \(L_{Q}(2/3,o(1))\), which boils down to testing again that \((V^\frac{1}{2\eta })^\eta =L_{Q}(2/3,o(1))\), which is always true. When \(P=p^\eta =L_Q(2/3 , c_P)\) with \( c_P > (\frac{7+2\sqrt{13}}{6})^{1/3}\) and \(\tau \) is the number of coefficients of the enumerated polynomials r, the complexity obtained is \(L_Q(1/3,C(\tau ,c_P))\) where

$$\begin{aligned} C(\tau ,c_P)=\frac{2}{c_P \tau }+\sqrt{\frac{20}{9(c_P \tau )^2}+\frac{2}{3}c_P(\tau -1)}. \end{aligned}$$

The best case is when \(c_P=(\frac{56+24\sqrt{6}}{12})^{1/3}\) and \(\tau =2\) (linear polynomials):

$$\begin{aligned} \text {complexity(best case of MexTNFS-Conj)}=L_Q \left( 1/3,\frac{3+\sqrt{3(11+4\sqrt{6})}}{\big ( 18(7+3\sqrt{6}) \big )^{1/3}} \right) , \end{aligned}$$

where the second constant is approximately 1.71.

5 Comparison and Examples

NFS, TNFS and exTNFS have the same main lines:

  • we compute a large number of integers;

  • we factor these numbers to test if they are B-smooth for some parameter B;

  • we solve a linear system depending on the previous steps.

If we reduce the size of the integers computed in the algorithm, we reduce the work needed to find a subset of integers which are B-smooth, which further allows us to adapt the other parameters so that the linear algebra is also cheap. A precise analysis is complex because in some variants one tests smoothness using ECM while in others one can sieve (which is faster). Nevertheless, as a first comparison we use the criterion of minimizing the bitsize of the product of the norms.

5.1 Precise Comparison When p is Arbitrary

Each method of polynomial selection has a different expression for the norm bitsize, which depends on the number \(\tau \) of coefficients of the polynomials r(x) that are enumerated during the relation collection. Let us reproduce Table 2 in [31], which we extend with TNFS and exTNFS (see Table 5):

Note that the method of Sarkar and Singh requires that n is composite. The settings based on TNFS (TNFS, exTNFS-GJL etc.) have an overhead due to the combinatorial factor which is not written in this table, so we add the condition that the degree of the intermediate number field must be small. Finally, exTNFS requires the additional condition that \(\kappa \) and \(\eta \) are relatively prime.

Extrapolation of E. The parameter E depends on the implementation of NFS and might differ from one variant to another. Let us take for example three computations with NFS which tackle various problems of the same bitsize:

  • Danilov and Popovyan [16] factored a 180-digit RSA modulus using \(\log _2E\approx 30\) (although the size of the pairs (a, b) in their computations is not written explicitly, one can compute E using the range of special-q’s and the default cardinality of the sieving space per special-q, which is \(2^{30}\));

  • Bouvier et al. [12] computed discrete logarithms in a 180-digit field \({\mathbb F}_p\) using \(\log _2E\approx 30\) (computed from other parameters).

  • Barbulescu et al. [5] computed discrete logarithms in a 180-digit field \({\mathbb F}_{p^2}\) using \(\log _2E\approx 29\).

We see that, to a first approximation, E depends only on the bitsize of the field that we target and has the same value as in the factoring variant of NFS. Let us extrapolate E from the pair \((\log _2Q=600,\log _2 E=30)\) using the formula

$$\begin{aligned} E=cL_Q(1/3,(8/9)^{1/3}). \end{aligned}$$
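A minimal numerical sketch of this extrapolation (the o(1) term inside \(L_Q\) is dropped and the constant c is fitted to the calibration point above; the second call shows the SNFS calibration used in Sect. 5.2):

```python
import math

def log2_L(log2_Q, l, c):
    # log2 of L_Q(l, c), natural-log convention, o(1) term dropped
    lnQ = log2_Q * math.log(2)
    return c * lnQ**l * math.log(lnQ)**(1 - l) / math.log(2)

def extrapolate_log2_E(log2_Q, c=(8 / 9)**(1 / 3), calibration=(600, 30)):
    # shift the curve so that it passes through the calibration point
    log2_Q0, log2_E0 = calibration
    return log2_E0 + log2_L(log2_Q, 1 / 3, c) - log2_L(log2_Q0, 1 / 3, c)

print(extrapolate_log2_E(608))                               # about 30.2
print(extrapolate_log2_E(7647, c=(4 / 9)**(1 / 3),
                         calibration=(1039, 30.38)))         # about 76.2
```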

Since exTNFS requires that \(\gcd (\eta ,\kappa )=1\), the first case to study is \(n=6\).

The Case of Fields \({\mathbb F}_{p^6}\). When \(n=6\) we can use the general methods

  • NFS-JLSV\(_1\) (bitsize \(E^\frac{24}{\tau }Q^\frac{\tau -1}{6}\), best values of \(\tau \) are 3 and 2)

  • NFS-GJL with D equal to its optimal value, 6 (bitsize \(E^\frac{26}{\tau }Q^\frac{\tau -1}{7}\), best values of \(\tau \) are 3 and 2 )

  • TNFS with \(\deg f=5\), its optimal value for this range of fields (bitsize \(E^\frac{12}{\tau }Q^\frac{\tau -1}{3}\), best value of \(\tau \) is 2)

as well as the methods which exploit the fact that n is composite

  • Sarkar-Singh (NFS-SS) with \(\eta =2\) and \(K=3\), the best value so that \(K\ge n/\eta \) for this range of fields (bitsize \(E^\frac{28}{\tau }Q^\frac{\tau -1}{8}\)), respectively with \(\eta =3\) and \(K=2\), the best value so that \(K\ge n/\eta \) for this range of fields (bitsize \(E^\frac{30}{\tau }Q^\frac{\tau -1}{9}\), best \(\tau \) are 4 and 3)

  • exTNFS with \(\eta =2\) or \(\eta =3\) and one of two methods for selecting f and g

    • exTNFS-GJL with \(\eta =3\), \(D=2\) its best value so that \(D\ge n/\eta \) (bitsize \(E^\frac{10}{\tau }Q^\frac{\tau -1}{3}\), best value of \(\tau \) is 2)

    • exTNFS-GJL with \(\eta =2\), \(D=3\) its best value so that \(D\ge n/\eta \) (bitsize \(E^\frac{14}{\tau }Q^\frac{\tau -1}{4}\), best values of \(\tau \) are 3 and 2)

    • exTNFS-Conj with \(\eta =2\) (bitsize \(E^\frac{18}{\tau }Q^\frac{\tau -1}{6}\), best value of \(\tau \) is 2).

    • exTNFS-Conj with \(\eta =3\) (bitsize \(E^\frac{12}{\tau }Q^\frac{\tau -1}{4}\), best values of \(\tau \) are 3 and 2).

Fig. 2. Plot of the norm bitsizes for several variants of NFS. The horizontal axis indicates the bitsize of \(p^n\) and the vertical axis the bitsize of the product of the norms.

We plot the values of the norms product in Fig. 2. Note that exTNFS with the Conjugation method seems to be the best choice for fields between 300 and 1000 bits.

For even more insight we enter into details on a specific field.

Example 1: Let us consider the field \({\mathbb F}_{p^6}\) when

$$\begin{aligned} p=3141592653589793238462643383589. \end{aligned}$$

The bitsize of \(Q=p^6\) is 608 and its number of decimal digits is 182. Since the parameter E can only be chosen after an effective computation, we are bound to make the hypothesis that it will have a value similar to that in a series of record computations with NFS having the same input size.

In the following \(\log _2E=30\). Let us make a list with the norm sizes obtained with each version of NFS:

  1. 1.

    NFS-JLSV\(_1\). We take for example \(f=x^6 - 1772453850905518\) and \(g=1772453850905514x^6 + 96769484157337\). The sieving space contains polynomials of degree two \(r(x)=a+bx+cx^2 \in {\mathbb Z}[x]\), i.e. \(\tau =3\), and the absolute value of the coefficients is bounded by \(E^{2/3}\). The upper bound on the norms’ product is

    $$\begin{aligned} \text {norms bitsize(NFS-JLSV}_1)=8\log _2E+\frac{1}{3}\log _2Q\approx 440. \end{aligned}$$
  2. 2.

    NFS-Conj. We take \(f=x^{12} +3\) and \(g=1016344366092854 x^6 - 206700367981621\). We sieve polynomials \(r \in {\mathbb Z}[x]\) of degree 4, i.e. \(\tau =5\), and the absolute value of the coefficients is bounded by \(E^{2/5}\). Then we obtain

    $$\begin{aligned} \text {norms bitsize(NFS-Conj)}=\frac{36}{5}\log _2E+\frac{1}{3}\log _2Q\approx 418. \end{aligned}$$
  3. 3.

    TNFS. We take \(f=x^5 + 727139x^3 + 538962x^2 + 513716x + 691133\), \(g=x-1257274\) and \(h=t^6 + t^4 + t + 1\). Here, h is chosen so that \({\mathbb F}_{p^6}=({\mathbb Z}[t]/h(t))/p({\mathbb Z}[t]/h(t))\). The sieving polynomials are of the form \(r(x)=a(\iota )-b(\iota )x\), i.e. \(\tau =2\). Here, \(a = \sum _{i=0}^5 a_i \iota ^i\) and \(b= \sum _{i=0}^5 b_i \iota ^i\) are elements of \({\mathbb Z}[\iota ]={\mathbb Z}[t]/h(t)\) whose coefficients have absolute values bounded by \(A =E^{1/\deg (h)}= E^{1/6}\). Note that the parameter \(d=\deg f\) is equal to 5, so that we have

    $$\begin{aligned} \text {norms bitsize(TNFS)}=6\log _2E+\frac{1}{3}\log _2Q \approx 380. \end{aligned}$$
  4. 4.

    exTNFS-Conj with \(\eta =2\) and \(\kappa =3\). We take \(f=x^6-3\), \(g=309331385734750x^3 - 1851661516636217\) and \(h=t^2+2\). We sieve polynomials of the form \(a(\iota )-b(\iota )x\), i.e. \(\tau =2\), where a and b are linear in \(\iota \) with their coefficients bounded by \(A=E^{1/2}\). Hence we obtain

    $$\begin{aligned} \text {norms bitsize(exTNFS} \eta =2\text {)}=9\log _2E+\frac{1}{6}\log _2Q\approx 370. \end{aligned}$$
  5. 5.

    exTNFS-Conj with \(\eta =3\) and \(\kappa =2\). We take \(f=x^4 - 2x^3 + x^2 - 3\), \(g=1542330130901467x^2 - 1542330130901467x - 923667359431967\) and \(h=t^3+t+1\). Again we sieve polynomials of the form \(a(\iota )-b(\iota )x\), i.e. \(\tau =2\), where a and b are quadratic in \(\iota \) with coefficients bounded by \(A=E^{1/3}\). This leads to

    $$\begin{aligned} \text {norms bitsize(exTNFS } \kappa =2\text {)}=6\log _2E+\frac{1}{4}\log _2Q\approx 330. \end{aligned}$$

We conclude that in this example the best choice is exTNFS with \(\kappa =2\).
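As a sanity check, the bitsizes listed above follow by plain arithmetic from the formulas of Sect. 5.1; the coefficients (a, b) below are read off the five items (the small differences from the rounded values in the text are due to rounding).

```python
log2_E, log2_Q = 30, 608
variants = {                            # bitsize = a * log2(E) + log2(Q) / b
    'NFS-JLSV1    (tau=3)': (8, 3),
    'NFS-Conj     (tau=5)': (36 / 5, 3),
    'TNFS         (tau=2)': (6, 3),
    'exTNFS eta=2 (tau=2)': (9, 6),
    'exTNFS eta=3 (tau=2)': (6, 4),
}
for name, (a, b) in variants.items():
    print(f'{name}: {a * log2_E + log2_Q / b:.0f} bits')
```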

The condition \(\gcd (\eta ,\kappa )=1\) is also satisfied by \(n=10\), 12, 14, 18, 20, 24 etc., but we do not discuss these cases in detail.

5.2 Precise Comparison When p is SNFS

To compare precise norm sizes when p is a d-SNFS prime, let us consider Table 6.

Table 6. Comparison of norm sizes when p is d-SNFS prime.

Note that SexTNFS encompasses SNFS-JP when \(\eta =1\) and STNFS when \(\eta =n\), so we only call it SexTNFS when \(2\le \eta <n\).

As in the case when p is arbitrary, we do not have precise estimations of E, especially in the large range of fields \(\log _2Q\in [1000,10000]\). We are going to extrapolate from the pair \((\log _2Q=1039,\log _2 E=30.38)\), due to the record of [1], using the formula

$$\begin{aligned} E=cL_Q(1/3,(4/9)^\frac{1}{3}). \end{aligned}$$

Let us introduce a notation for the norm bitsize in SexTNFS, for any integers \(\kappa \ge 1\) and \(\tau \ge 2\):

$$\begin{aligned} C_{norm}(\tau , \kappa ) = \frac{2\kappa (d+1)}{\tau } \log E + \frac{\tau -1}{\kappa d} \log Q. \end{aligned}$$

For each \(\kappa \), \(C_{norm}(\tau , \kappa )\) has a minimum at the integer \(\tau \ge 2\) which best approximates \( \left( \frac{2 \kappa ^2 d(d+1) \log E}{ \log Q} \right) ^{1/2}\).

The Case of 4-SNFS Primes. To fix ideas, we restrict to the case \(d=4\). When \(\kappa =1\), i.e. STNFS, the norm size has its minimum at \(\tau =2\) as soon as \(\frac{\log Q}{\log E} \ge 40/2^2 = 10\). In our range of interest (\(300 \le \log _2 Q \le 10000\)), the ratio \(\log Q/\log E\) is always larger than 19, so we only need to consider sieving linear polynomials in the case of STNFS with \(d=4\). Similarly, it suffices to consider sieving linear polynomials in the case of SexTNFS with \(\kappa =2\) (resp. \(\kappa =3\)) whenever \(\log Q/\log E \ge 40\) (resp. \(\log Q/\log E \ge 90\)). This is satisfied when Q has at least 1450 bits (resp. 6300 bits).

Let us compare the norm sizes of STNFS and SexTNFS when we sieve only linear polynomials (\(\tau =2\)) in both cases. The value \(C_{norm}(2, \kappa )\) has its minimum at \(\kappa = \left( \frac{ \log Q}{ d(d+1)\log E} \right) ^{1/2}\). In the case \(d=4\), this minimum is attained at \(\kappa =2\) or \(\kappa =3\) whenever \(20 \le \log Q/\log E \le 180=20\cdot 3^2\). Thus, for fields of large size, SexTNFS with \(\kappa =2\) or \(\kappa =3\) is better than STNFS.
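These optimizations can be checked by a brute-force search over the admissible parameters; the short Python sketch below does this for the SexTNFS norm formula, restricting \(\kappa \) to divisors of n with \(\gcd (n/\kappa ,\kappa )=1\) (\(\kappa =1\) corresponds to STNFS and \(\kappa =n\) to SNFS-JP). The values at the end correspond to Example 2 below (\(n=12\), \(d=4\), \(\log _2 Q=7647\), extrapolated \(\log _2 E=76.15\)).

```python
import math

def C_norm(tau, kappa, d, log2_E, log2_Q):
    # norm bitsize for SexTNFS, as in the formula above
    return 2 * kappa * (d + 1) / tau * log2_E + (tau - 1) / (kappa * d) * log2_Q

def best_parameters(n, d, log2_E, log2_Q, tau_max=12):
    # admissible kappa: kappa | n and gcd(n/kappa, kappa) = 1
    kappas = [k for k in range(1, n + 1) if n % k == 0 and math.gcd(n // k, k) == 1]
    return min((C_norm(t, k, d, log2_E, log2_Q), k, t)
               for k in kappas for t in range(2, tau_max + 1))

print(best_parameters(12, 4, 76.15, 7647))   # about (1779, 3, 2): kappa=3, tau=2
```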

In Fig. 3 we plot the norm sizes of SNFS-JP, STNFS, and SexTNFS for \(n=12\) and \(d=4\), for Q from 300 bits to 5000 bits. We also compare these values with the best choice for general primes (exTNFS with Conjugation and \(\kappa =3\)). From the plots we remark that STNFS could be the best choice for small Q; otherwise, SexTNFS with small \(\kappa \) becomes an important challenger to all other methods as the size of Q grows.

Fig. 3. Comparison when \(n=12\) and \(d=4\) for \(300 \le \log _2 Q \le 5000\). The horizontal axis indicates the bitsize of \(p^n\) and the vertical axis the bitsize of the product of the norms.

To get a better intuition, let us see in detail a specific field.

Example 2: We consider the prime \(p=P_4(u_4)\) where

$$\begin{aligned} P_4(x)=36x^4+36x^3+24x^2+6x+1 \text { and } u_4=2^{158}-2^{128}-2^{68}+1 \end{aligned}$$

(Sect. 6 in [2]), and note that p is 4-SNFS. The bitsize of \(p^{12}\) is 7647, for which we predict by extrapolation that \(\log _2E=76.15\).

Let us make a list with the norm sizes obtained with each version of NFS:

  1. 1.

    STNFS. The size of the norms is \(E^{2(d+1)/\tau }Q^{(\tau -1)/d}\) and has its minimum for \(\tau =2\). Take for example \(h=x^{12}+x^{10}+x^9-x^6-1\), \(f=P_4\) and \(g=x-u_4\).

    $$\begin{aligned} \text {norms bitsize(STNFS)}=5\log _2E+\frac{1}{4}\log _2Q \approx 2292. \end{aligned}$$
  2. 2.

    SNFS-JP. The size of the norms is \(E^{2n(d+1)/\tau }Q^{(\tau -1)/(nd)}\) and has its minimum when \(\tau =8\). Take for example \(f=P_4(x^{12}+x^6+x^3+1)\) and \(g= (x^{12}+x^6+x^3+1)-u_4\).

    $$\begin{aligned} \text {norms bitsize(SNFS-JP)}=15\log _2E+\frac{7}{48}\log _2Q \approx 2257. \end{aligned}$$
  3. 3.

    SexTNFS-JP \(\eta =4\). In this case the norm size is \(E^{2\kappa (d+1)/\tau }Q^\frac{(\tau -1)}{\kappa d}\) and has its minimum when \(\tau =2\). Take for example \(h=x^4-x-1\), \(f=P_4(x^3-x^2)\) and \(g=x^3-x^2-u_4\).

    $$\begin{aligned} \text {norms bitsize(SexTNFS)}=15\log _2E+\frac{1}{12}\log _2Q \approx 1779. \end{aligned}$$

One can do a similar analysis in the cases \(d=5\), \(d=6\) etc., but we do not present the details here.

6 On the Necessity to Update Key Sizes

Pairings are not included in the 2012 report of NIST [28] but they are included in the 2013 report of ENISA [17, Table 3.6] where pairings and RSA have the same recommended key sizes. This is in accordance with a general belief stated for example by Lenstra [25, Sect. 5.1]:

‘An RSA modulus n and a finite field \({\mathbb F}_{p^k}\) therefore offer about the same level of security if n and \(p^k\) are of the same order of magnitude.’

Freeman et al. [19] compiled key size recommendations from different sources in Table 1.1, all of which make, or are consistent with, the above supposition.

The currently recommended key sizes are derived from the complexity \(L[c]:=L_{p^n}(1/3, (c/9)^{1/3})\) with \(c=64\), which corresponds to NFS over fields whose characteristic is large and doesn’t have a special form. This complexity has been a safe choice until recently because the constant \(c=64\) has been the smallest among the variants of NFS over fields of non-small characteristic.

The Case of Primes of General Form. However, exTNFS has a constant \(c=48\) for a vast range of fields, so the safe choice becomes to derive key sizes using L[48]. A more precise evaluation would require determining which embedding degrees are large enough to be in the medium prime case, i.e. \(c=48\), and which are small enough that \(c=64\) applies. This seems to be hard to tell, especially after the record computation presented in [5, Sect. 7] showed that the attack in \({\mathbb F}_{p^2}\) was 260 times faster than the attack in \({\mathbb F}_{p'}\), where p and \(p'\) are primes such that \(2\log _2(p)\approx \log _2(p')\).

A crude and naive estimation, when a constant \(c_\text {old}\) is replaced by \(c_\text {new}\), is to write

$$\begin{aligned} L_{Q_\text {new}}(1/3,c_\text {new})=L_{Q_\text {old}}(1/3,c_\text {old}) \end{aligned}$$

which is equivalent to

$$\begin{aligned} \frac{\log Q_\text {new}}{\log Q_\text {old}}=\frac{c_\text {old}}{c_\text {new}}+o(1). \end{aligned}$$
(10)

Overall, we might say that the key size should be increased by a factor of \(64/48\approx 1.33\) in an asymptotic sense (simply ignoring the o(1) term), which gives an idea of what a change in the second constant of NFS means. We avoid deriving a table of key sizes using the methods in [29, Appendix H] and [25], not because the formulae are difficult, but because we lack the experience with record computations needed to validate them.
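Purely as an illustration of this crude scaling (and not as a key size recommendation), Eq. (10) with the o(1) term ignored gives, taking a 3072-bit field as an example input:

```python
def rescaled_bitsize(bits_old, c_old, c_new):
    # Eq. (10) with the o(1) term ignored
    return bits_old * c_old / c_new

print(rescaled_bitsize(3072, 64, 48))   # 4096.0  (general p, composite n)
print(rescaled_bitsize(3072, 64, 32))   # 6144.0  (SNFS characteristic)
```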

The Special Prime Case. When the characteristic has a special form, the constant c changed twice in three years, and there are some subtle points to understand about how the key sizes were computed. Before the algorithm of Joux and Pierrot there was no variant of NFS for \({\mathbb F}_{p^n}\) with \(n>1\) and p of special form. Hence, the recommended values correspond to \(c=64\). Their SNFS algorithm updated the constants to 32 in large characteristic and 64 in the medium prime case. A pessimistic choice would have been to update the key sizes using \(c=32\). Nevertheless, the very important example of Barreto-Naehrig pairings has an embedding degree \(n=12\) which seems to be considered as medium sized (the difference between large and medium characteristic is asymptotic and is hard to translate in practice). Due to SexTNFS the constant is now \(c=32\) for all fields of non-small characteristic, so we no longer need a precise examination, as long as n has a factor \(\ge 2\). We conclude that the key sizes of pairings where p has a special form, given by a polynomial of degree \(\ge 3\), should increase roughly by a factor \(c_\text {old}/c_\text {new}=2\).

7 Cryptologic Consequences

Our work comes in a context of recent progress on the DLP in finite fields \({\mathbb F}_{p^n}\) of extension degree \(n\ge 2\). The case \(n=2\) has been the object of precise estimations and real-life computations and is now known to be weaker than the case of prime fields. In contrast, the cases \(n=6\) and \(n=12\) remained difficult according to precise practical estimations.

In this paper we proposed exTNFS, which allowed us to apply the polynomials constructed in the case \(n=2\), which have good properties, to the highly important case \(n=6\), where the previously available polynomials had worse properties. A precise estimation showed that this invalidates the key sizes currently used, and we recommend that they be updated (see Sect. 6). When p is of special form, as in the Barreto-Naehrig construction, one needed to update the key sizes for large characteristic because of the algorithm proposed by Joux and Pierrot in 2013, but it was not clear whether the Barreto-Naehrig key sizes had to be updated. Due to exTNFS, the key sizes of all pairings of SNFS characteristic need to be updated.

It is interesting to remark that the new variants of NFS exploit precisely those properties of some pairings which make them fast:

  • Special form characteristic. The advantage of using a special form characteristic is that it eliminates the cost of modular reductions (see for example [10, Algorithm 4]). It is the same special form of p which allows the use of the fastest variant of exTNFS, i.e. SexTNFS, rather than the general case algorithm.

  • Composite embedding degree. In this case the pairing computations are done using tower extension field arithmetic, as explained for example in [10, Sect. 3.1]. The same tower extension field structure is a main ingredient of exTNFS, as explained in Remark 1.

A large number of pairings have either a special form characteristic or an embedding degree divisible by 2 or 3; for example, the Barreto-Naehrig curves have both properties. In a recent preprint Chatterjee et al. [13] discussed the pairing constructions which are not affected by our algorithms, in particular the pairings of embedding degree one, which are as secure as DSA and RSA. This shows that, regardless of the progress on the DLP in \({\mathbb F}_{p^n}\) with \(n>1\), pairings are a secure tool for cryptography. Nevertheless, safe pairings might be very slow and lead cryptographers to use alternatives, as Chillotti et al. did in [14] for an e-voting protocol. We conclude with the question asked by our referee: “Is this the beginning of the end for pairing-based cryptography?”