An efficient algorithm for the smallest enclosing ball problem in high dimensions

https://doi.org/10.1016/j.amc.2005.01.127

Abstract

Consider the problem of computing the smallest enclosing ball of a set of m balls in Rn. This problem arises in many applications, such as location analysis, military operations, and pattern recognition. In this paper, we reformulate this problem as an unconstrained convex optimization problem involving the maximum function max{0, t}, and then develop a simple algorithm particularly suitable for problems in high dimensions. This algorithm can efficiently handle problems of dimension n up to 10,000 under a moderately large m, as well as problems with m up to 10,000 balls under a moderately large n. Numerical results are given to show the efficiency of the algorithm.

Introduction

A ball Bi in Rn with center ci and radius ri > 0 is the closed set Bi = {x ∈ Rn : ‖x − ci‖ ≤ ri}. Given a set of balls B = {B1, B2, …, Bm} in Rn, the smallest enclosing ball problem (SEB problem for short) is to find the ball of smallest radius that encloses all balls in B. This problem can be formulated as the nonsmooth convex optimization problem

(1)  min_{x∈Rn} f(x) := max_{1≤i≤m} fi(x),  where fi(x) = ‖x − ci‖ + ri, i = 1, 2, …, m.

Note that the function f(x) is coercive, so problem (1) has a solution; moreover, the solution is unique. Otherwise there would exist two different balls C1 and C2 of the same radius such that ∪_{j=1}^m Bj ⊆ Ci, i = 1, 2, and one could then construct a smaller ball containing C1 ∩ C2, and thus B as well.
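To make formulation (1) concrete, the following sketch (our own illustration; the names `seb_objective`, `centers`, and `radii` are not from the paper) evaluates f(x) = max_i (‖x − ci‖ + ri) for a small planar instance:

```python
import math

def seb_objective(x, centers, radii):
    """f(x) = max_i (||x - c_i|| + r_i): the radius of the smallest ball
    centered at x that encloses every ball B_i = B(c_i, r_i)."""
    return max(math.dist(x, c) + r for c, r in zip(centers, radii))

# Two unit balls on the x-axis; by symmetry the best center is the origin.
centers = [(-2.0, 0.0), (2.0, 0.0)]
radii = [1.0, 1.0]
print(seb_objective((0.0, 0.0), centers, radii))  # 3.0
print(seb_objective((1.0, 0.0), centers, radii))  # 4.0 (a worse center)
```

Minimizing this pointwise maximum of norms over x is exactly problem (1); the max makes f nonsmooth wherever two balls tie for the largest value, which is what motivates the smoothing developed in the paper.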

The SEB problem arises in a large number of applications such as location analysis and military operations, and it is also of independent interest as a problem in computational geometry. See [1], [2], [3], [4], [5] for details. Many algorithms have been developed for problem (1), particularly for its special case in which all ri degenerate to zero, i.e. the smallest enclosing ball of points. To the best of our knowledge, if the points lie in a low dimension n (n ≤ 30), methods [3], [6], [7], [8], [9] from computational geometry yield quite satisfactory solutions in theory and practice. Nevertheless, these approaches cannot handle many very recent applications [10], [11], [12] in connection with support vector machines that require the problem to be solved in higher dimensions, because they become inefficient already for moderately high values of n. In addition, though the quadratic programming approach of Gärtner and Schönherr [9] is polynomial in practice, its reliance on arbitrary-precision linear algebra to avoid robustness issues restricts the tractable dimension to n ≤ 300.

Recently, taking the special structure of the SEB problem into account, Zhou et al. [13] exploited the log-exponential smooth approximation [14] for the maximum function f(x) to develop an approximate algorithm, which can efficiently handle problems with a large n, and solves problems of dimension n up to 10,000 within hours. Kumar et al. [15] used techniques of second order cone programming and “core-set” to present a polynomial-time (1 + ε)-approximation algorithm for the SEB problem in high dimensions, but their test results are only given up to n = 1400. Following a different direction from these, Fischer et al. [16] proposed a combinatorial algorithm for the smallest enclosing ball of points, which can efficiently deal with point sets in dimensions up to 2000, and solve instances of dimension 10,000 within hours. This algorithm is actually a pivoting scheme resembling the simplex method for linear programming.

The main goal of this paper is to develop an approximate algorithm that can efficiently handle SEB problems in high dimensions, based on a smooth approximation of problem (1) completely different from that of [13]. Specifically, we first transform problem (1), exploiting a certain combinatorial property, into a deterministic nonsmooth problem involving the max function ϕ(t) = max{0, t}, and then use the Chen–Harker–Kanzow–Smale (CHKS) smoothing function [17], [18], [19] to approximate ϕ(t), thereby establishing a globally convergent quasi-Newton algorithm. This algorithm is able to solve problems of dimension n up to 10,000 under a moderately large m, as well as problems with m up to 10,000 balls under a moderately large n. Numerical results are given for randomly generated test problems, which demonstrate the efficiency of the algorithm.
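The CHKS smoothing function mentioned above is usually written φε(t) = (√(t² + 4ε²) + t)/2; it is smooth for every ε > 0 and satisfies 0 ≤ φε(t) − max{0, t} ≤ ε, with the worst gap at t = 0. A minimal numerical check of this bound (our own sketch, not the paper's code):

```python
import math

def chks(t, eps):
    """CHKS smoothing of phi(t) = max{0, t}: smooth for eps > 0,
    and within eps of phi(t) everywhere."""
    return (math.hypot(t, 2.0 * eps) + t) / 2.0

# The gap phi_eps(t) - max{0, t} is largest at t = 0, where it equals eps.
for eps in (1.0, 0.1, 0.01):
    gap = max(abs(chks(t, eps) - max(0.0, t))
              for t in (x / 10.0 for x in range(-50, 51)))
    print(eps, gap)  # gap equals eps, attained at t = 0
```

Driving ε to zero thus recovers max{0, t} uniformly, which is what allows a smooth unconstrained solver to approximate the nonsmooth reformulation.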

This paper is organized as follows. In Section 2, we reformulate problem (1) as a deterministic unconstrained problem involving the maximum function ϕ(t), and then present a smooth approximation by virtue of the CHKS smoothing function. Based on the resulting smooth problem, we propose in Section 3 a globally convergent algorithm for problem (1) and give its convergence analysis. Section 4 reports numerical results for randomly generated test problems, which show that the algorithm can deal with problems with large n efficiently. Finally, some conclusions are drawn in Section 5.

In this paper, all vectors are column vectors unless otherwise stated. Let Id denote the d × d identity matrix, and co(S) the convex hull of a set S ⊆ Rn. For a convex function f : Rn → R, ∂f(x) denotes the subdifferential of f at x.

Section snippets

A new smooth approximation for nondifferentiable problem (1)

Note that f(x) in (1) can be rewritten as the linear optimization problem

f(x) = max_{λ∈Rm} { Σ_{i=1}^m λi fi(x) : Σ_{i=1}^m λi = 1, 0 ≤ λi ≤ 1, i = 1, 2, …, m }.

Since the linear program on the right-hand side has a strictly feasible point λ0 = (1/m, …, 1/m)T, it follows from the strong duality theorem that, for any x ∈ Rn, its optimal value equals that of its dual linear program. This implies that

f(x) = min_{ω∈R, u∈Rm} { −ω + Σ_{i=1}^m ui : ui ≥ fi(x) + ω, ui ≥ 0, i = 1, 2, …, m },

where ω and ui are Lagrange multipliers associated with the
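The duality step above can be checked numerically: eliminating ui at its optimal value ui = max{0, fi(x) + ω} gives max_i fi(x) = min_ω [ −ω + Σ_i max{0, fi(x) + ω} ], which is exactly where ϕ(t) = max{0, t} enters. In the sketch below (our own illustration), a grid search over ω stands in for the exact one-dimensional minimization:

```python
def max_via_duality(values, omegas):
    """max_i v_i  ==  min over omega of  -omega + sum_i max(0, v_i + omega);
    the minimum is approximated by scanning the supplied grid of omega values."""
    return min(-w + sum(max(0.0, v + w) for v in values) for w in omegas)

vals = [1.0, 3.0, 2.0]
grid = [x / 100.0 for x in range(-500, 501)]  # omega in [-5, 5], step 0.01
print(max_via_duality(vals, grid))  # approximately max(vals) = 3.0
```

The dual objective is flat in ω on a whole interval once every term max{0, vi + ω} with vi < max vals has vanished, so the grid search recovers the maximum without needing the exact minimizer.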

A globally convergent algorithm for the SEB problem

In what follows, we present a specific algorithm for solving the SEB problem based on the smooth unconstrained problem (9), followed by a global convergence analysis.

Algorithm 1

  • Let σ ∈ (0, 1), (ω0, x0) ∈ R × Rn and ε0 > 0 be given. Set k ← 0.

  • For k = 0, 1, 2, …, do

    • S1. Use an unconstrained minimization method to solve min_{ω∈R, x∈Rn} Φ(ω, x; εk), and let (ωk, xk) denote its minimizer.

    • S2. Set εk+1 = σεk, let k ← k + 1 and go to step S1.

  • End
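Algorithm 1 can be sketched end-to-end using the CHKS-smoothed objective Φ(ω, x; ε) = −ω + Σ_i φε(‖x − ci‖ + ri + ω). The paper solves S1 with a (limited-memory) BFGS method; the sketch below (our own) substitutes plain gradient descent, and the centroid start, step sizes, and iteration counts are illustrative choices, not the paper's:

```python
import math

def seb_smoothing(centers, radii, sigma=0.1, eps0=1.0, rounds=3, inner=6000):
    """Smoothing scheme of Algorithm 1 for the SEB problem: for a decreasing
    sequence eps_k, approximately minimize
        Phi(w, x; eps) = -w + sum_i phi_eps(||x - c_i|| + r_i + w)
    over (w, x), then shrink eps (S2) and repeat."""
    n = len(centers[0])
    x = [sum(c[j] for c in centers) / len(centers) for j in range(n)]  # centroid
    w, eps = 0.0, eps0
    for _ in range(rounds):
        step = min(0.05, 0.5 * eps)          # crude step size keeping descent stable
        for _ in range(inner):               # S1: gradient descent on Phi(., .; eps)
            gw, gx = -1.0, [0.0] * n
            for c, r in zip(centers, radii):
                d = max(math.dist(x, c), 1e-12)
                t = d + r + w
                dphi = 0.5 * (t / math.hypot(t, 2.0 * eps) + 1.0)  # phi_eps'(t)
                gw += dphi
                for j in range(n):
                    gx[j] += dphi * (x[j] - c[j]) / d
            w -= step * gw
            x = [xj - step * gj for xj, gj in zip(x, gx)]
        eps *= sigma                         # S2: eps_{k+1} = sigma * eps_k
    radius = max(math.dist(x, c) + r for c, r in zip(centers, radii))
    return x, radius

center, radius = seb_smoothing([(-2.0, 0.0), (2.0, 0.0)], [1.0, 1.0])
print(center, radius)  # center near (0, 0), radius near 3
```

At the minimizer, −ω approaches the optimal radius f(x*) and x approaches the SEB center; since each φε overestimates max{0, ·} by at most ε, the smoothing error in the objective is at most mε per round, which is why the εk → 0 schedule drives the iterates to the exact solution.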

Lemma 2

Let {ωk, xk} be the sequence of points produced by Algorithm 1. Then, any limit points of {xk} are

The implementation of algorithm and computational results

We implemented Algorithm 2 described in Section 3 with our own code, and the numerical experiments were carried out on a Pentium IV 2.4 GHz personal computer. When n ≤ 1000, we use a BFGS algorithm to solve the unconstrained optimization problem (13), and the parameters in Algorithm 2 are set as follows: σ = 0.1, tol1 = 1.0e−6, tol2 = 1.0e−2, ε0 = 1. When n > 1000, we choose a limited-memory BFGS algorithm with 7 limited-memory vector updates to solve (13). For this case, we use the following parameters: σ = 0.08, tol1 = 1.0e−

Conclusions

In this paper, by reformulating problem (1) as a deterministic optimization problem with the nondifferentiable function ϕ(t), we develop an approximate algorithm that can efficiently deal with smallest enclosing ball problems in which the dimension n is large. Preliminary numerical experiments show that our algorithm is efficient both in the accuracy of the numerical results and in computation speed.

References (19)

  • J. Elzinga et al., The minimum covering sphere problem, Management Science (1972).

  • D. Hearn et al., Efficient algorithms for the minimum circle problem, Operations Research (1982).

  • N. Megiddo, Linear-time algorithms for linear programming in R3 and related problems, SIAM Journal on Computing (1983).

  • F.P. Preparata et al.

  • M.I. Shamos et al., Closest-point problems.

  • M.E. Dyer, A class of convex programs with applications to computational geometry, in: Proceedings of 8th Annual ACM...

  • E. Welzl, Smallest enclosing disks (balls and ellipsoids).

  • B. Gärtner, Fast and robust smallest enclosing balls.

  • B. Gärtner, S. Schönherr, An efficient, exact and generic quadratic programming solver for computational geometric...


This work is supported by Natural Science Foundation of South China University of Technology.
