Efficient multiple-precision integer division algorithm

https://doi.org/10.1016/j.ipl.2013.10.005

Highlights

  • A multi-precision integer division algorithm is proposed.

  • It fixes a bug in the fastest existing algorithm for this problem in the literature.

  • Its performance remains the same as that of the fastest existing algorithm mentioned above.

Abstract

The design and implementation of a division algorithm is one of the most complicated problems in multi-precision arithmetic. Huang et al. [1] proposed an efficient multi-precision integer division algorithm and experimentally showed that it is about three times faster than the most popular algorithms proposed by Knuth [2] and Smith [3]. This paper reports a bug in the algorithm of Huang et al. [1] and suggests the necessary corrections. A theoretical correctness proof of the corrected algorithm is also given. The resulting algorithm remains as fast as that of [1].

Introduction

Arithmetic operations on large integers are often used in cryptographic algorithms. Ordinary arithmetic operations are performed by the machine using its built-in instructions. Each machine has a base B in its number system and can store unsigned integers with values in {0, 1, 2, …, B−1} in its built-in integer locations. The time complexity of a single such arithmetic operation is assumed to be O(1).
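
As a concrete illustration (ours, not from the paper), the following C fragment decomposes a machine integer into its base-B words, each lying in {0, 1, 2, …, B−1}. The choice B = 256, the array bound, and all identifiers are assumptions made only for this sketch.

#include <stdio.h>

/* Minimal sketch: split a machine integer into base-B words.
   Repeated division by B yields the least significant word first,
   so words[0] plays the role of a_1 in the notation of Section 2. */
int main(void) {
    const unsigned long long B = 256;      /* assumed base */
    unsigned long long x = 123456789ULL;   /* value to decompose */
    unsigned words[16];                    /* enough for a 64-bit value */
    int m = 0;

    while (x > 0) {
        words[m++] = (unsigned)(x % B);    /* word in {0, ..., B-1} */
        x /= B;
    }

    /* Each word fits in a built-in integer, so a single arithmetic
       operation on it costs O(1) machine time. */
    for (int i = 0; i < m; i++)
        printf("word %d = %u\n", i + 1, words[i]);
    return 0;
}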

A large integer cannot be stored in the machine-dependent built-in integer size, and arithmetic operations on such integers cannot be performed using the built-in routines for those operations. Operations on large integers are called multi-precision arithmetic operations. Multi-precision division is the hardest of the four basic multi-precision arithmetic operations. It plays a crucial role in cryptographic research [4] and primality testing [5]. The most commonly used multi-precision division algorithm was proposed by Knuth [2].

Normalization is one of the key steps of multi-precision division; it is the act of restoring the individual digits or words to the range [0, B−1]. Since each word or digit of the quotient is guessed at each step of the division, it is difficult to skip normalization. The division algorithm proposed by Smith [3] reduces the number of intermediate normalization steps. Huang et al. [1] proposed an efficient algorithm for multi-precision integer division that reduces the number of normalizations to a single normalization step. A unique feature of the algorithm is that, when applied to long integer division, both the quotient and the remainder are computed simultaneously at the end; no correction step or extra multiplication or subtraction is needed to obtain the remainder. It is experimentally shown that the algorithm in [1] is about three times faster than the algorithm by Knuth [2].
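
To make the notion concrete, here is a minimal sketch of a single normalization pass (our illustration, not code from [1], [2], or [3]); the function name, the 64-bit word type, and the guard-word convention are assumptions of the sketch.

#include <stdint.h>

/* One normalization pass: restore every word of w to [0, B-1] by
   pushing the excess of each word into the next one as a carry.
   Assumes all entries are nonnegative and that the top entry is a
   guard word with enough room to absorb the final carry. */
static void normalize(int64_t *w, int len, int64_t B) {
    int64_t carry = 0;
    for (int i = 0; i < len; i++) {
        w[i] += carry;
        carry = w[i] / B;    /* excess beyond one word */
        w[i] -= carry * B;   /* now 0 <= w[i] <= B-1 */
    }
}

Because C division truncates toward zero, this simple pass is wrong for negative entries; a variant that handles them is sketched after Algorithm 3 below.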

We have identified a bug in the algorithm of Huang et al. [1] and propose the necessary corrections. We theoretically justify the correctness of our algorithm. Detailed experiments confirm that our corrected version runs with the same efficiency as reported in [1].

The paper is organized as follows. Section 2 briefly describes the algorithm of Huang et al. [1]. In Section 3, we report the bug by providing an example. We describe the corrected algorithm in Section 4, and also provide the correctness proof. Finally, the concluding remarks appear in Section 5.

Overview of the algorithm of [1]

Let B be the base of the multi-precision integers under consideration. For two multi-precision positive integers a and b (a > b > 0), we need to find multi-precision integers q and r such that a = b·q + r with 0 ≤ r < b. Let m and n denote the number of words required to store a and b, respectively. Thus a = a_1 + a_2·B + ⋯ + a_m·B^(m−1) and b = b_1 + b_2·B + ⋯ + b_n·B^(n−1). The algorithm starts by copying a into a work array W of size m + 1, each element of which can hold integers that require no more than 4 bytes, i.e., values ranging from −2147483648 to 2147483647.
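
A sketch of this set-up step under the stated constraints (the identifiers and the choice B = 256 are ours) is as follows.

#include <stdint.h>

/* Copy the m base-256 words of a into a signed 32-bit work array W
   of size m + 1.  Signed 4-byte cells give each word headroom on
   both sides of [0, B-1], so intermediate words can drift out of
   range and be repaired later by a single normalization pass. */
static void init_work_array(const uint8_t *a, int m, int32_t *W) {
    for (int i = 0; i < m; i++)
        W[i] = (int32_t)a[i];   /* word a_{i+1} */
    W[m] = 0;                   /* guard word for the final carry */
}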

Description of the bug

In [1], the correctness of the algorithm (i.e., whether a = b·q + r holds) is not established. We have identified a pathological instance showing that the algorithm of Huang et al. [1] is not correct. We choose B = 256, a = (94, 6, 142, 2, 78, 236, 223, 88, 169, 92, 101) and b = (10, 183, 116, 36, 218, 189), the words being listed as a_1, …, a_m and b_1, …, b_n. Thus we have m = 11 and n = 6. The contents of the work array at the end of each iteration are given in Table 2.
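
Since a occupies only 11 base-256 words (88 bits), the reference quotient and remainder for this instance can be checked directly with 128-bit machine arithmetic. The sketch below is ours; it assumes the words are listed least significant first, as a_1, …, a_11 and b_1, …, b_6, and uses GCC/Clang's unsigned __int128 extension.

#include <stdio.h>

/* Reference check for the instance above: rebuild a and b from their
   base-256 words and verify the identity a = b*q + r with 0 <= r < b
   that any correct division algorithm must satisfy. */
int main(void) {
    const unsigned aw[11] = {94, 6, 142, 2, 78, 236, 223, 88, 169, 92, 101};
    const unsigned bw[6]  = {10, 183, 116, 36, 218, 189};
    unsigned __int128 a = 0, b = 0;

    for (int i = 10; i >= 0; i--) a = a * 256 + aw[i];  /* a = sum a_i * 256^(i-1) */
    for (int i = 5;  i >= 0; i--) b = b * 256 + bw[i];

    unsigned __int128 q = a / b, r = a % b;
    printf("identity %s\n", (a == b * q + r && r < b) ? "holds" : "violated");
    return 0;
}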

Thus, the algorithm of Huang et al. [1] fails to produce the correct quotient and remainder on this instance.

Corrected algorithm

Based on the discussion in Section 3, the pseudo-code of the revised normalization procedure is given in Algorithm 3. We also justify the correctness of our algorithm.

Algorithm 3

Corrected Normalize (W).
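
The pseudo-code of Algorithm 3 itself appears in the full text. As a stand-in, the following sketch (ours, not the authors' procedure) shows the standard way to normalize a signed work array: words of B or more shed a carry, and negative words borrow from the next word, so every word ends up in [0, B−1].

#include <stdint.h>

/* Normalization for a signed work array (our sketch, not Algorithm 3).
   C's division truncates toward zero, so the quotient/remainder pair
   is adjusted to floor semantics by hand before being stored back. */
static void normalize_signed(int32_t *W, int len, int32_t B) {
    int64_t carry = 0;
    for (int i = 0; i < len; i++) {
        int64_t t = (int64_t)W[i] + carry;
        carry = t / B;      /* truncated quotient */
        t -= carry * B;     /* remainder; negative when t < 0 above */
        if (t < 0) {
            t += B;         /* bring the word into [0, B-1] ... */
            carry -= 1;     /* ... by borrowing from the next word */
        }
        W[i] = (int32_t)t;
    }
}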

Conclusion

The time complexity analysis of the corrected version of the algorithm remains the same as that given in the original paper [1]. We have implemented our algorithm and compared its performance against the BigDigits [6] implementation of Knuth's algorithm [2]. The BigDigits library includes the classical multiple-precision arithmetic algorithms from Knuth [2] to carry out large natural-number calculations as required in cryptographic algorithms. We have conducted our experiments in an

Acknowledgement

We thank the referee of this paper for the helpful comments, which improved the presentation of the paper.

References (6)
