
Weiszfeld’s Method: Old and New Results


Abstract

In 1937, the 16-year-old Hungarian mathematician Endre Weiszfeld, in a seminal paper, devised a method for solving the Fermat–Weber location problem—a problem whose origins can be traced back to the seventeenth century. Weiszfeld’s method stirred up an enormous amount of research in the optimization and location communities, and it is still discussed and used to this day. In this paper, we review both the past and the ongoing research on Weiszfeld’s method. The existing results are presented in a self-contained and concise manner—some are derived by new and simplified techniques. We also establish two new results using modern tools of optimization. First, we establish a non-asymptotic sublinear rate of convergence of Weiszfeld’s method, and second, using an exact smoothing technique, we present a modification of the method with a proven better rate of convergence.



Acknowledgments

We would like to thank an anonymous reviewer and the editor in chief for their useful comments which helped to improve the presentation of the paper. This work was partially supported by ISF grant #25312.

Author information

Correspondence to Amir Beck.

Appendices

Appendix A: Notations

The following is a list of the notations used throughout the paper.

  • \({\mathcal {A}}= \left\{ \mathbf{a}_{1} , \mathbf{a}_{2} , \ldots , \mathbf{a}_{m} \right\} \)—the set of anchors.

  • \(\omega _{1} , \omega _{2} , \ldots , \omega _{m}\)—given positive weights.

  • \(\omega = \sum _{i = 1}^{m} \omega _{i}\)—sum of weights.

  • \(f\left( \mathbf{x}\right) = \sum _{i = 1}^{m} \omega _{i}\left\| {\mathbf{x}-\mathbf{a}_{i}} \right\| \)—the Fermat–Weber objective function.

  • \(\mathbf{x}^{*}\)—an optimal solution of the Fermat–Weber problem. If the anchors are not collinear, then \(\mathbf{x}^{*}\) is the unique optimal solution.

  • \(X^{*}\)—the optimal solution set of the Fermat–Weber problem. When the anchors are not collinear, \(X^{*}\) is the singleton \(\{ \mathbf{x}^{*} \}.\)

  • \(f^{*}\)—the optimal value of the Fermat–Weber problem.

  • \(T(\mathbf{x}) = \frac{1}{\sum _{i = 1}^{m} \frac{\omega _{i}}{\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| }}\sum _{i = 1}^{m} \frac{\omega _{i}\mathbf{a}_{i}}{\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| }\)—the operator defining Weiszfeld’s method (a numerical sketch of the resulting iteration follows this list).

  • \(h(\mathbf{x}, \mathbf{y}) := \sum _{i = 1}^{m} \omega _{i}\frac{\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| ^{2}}{\left\| {\mathbf{y}- \mathbf{a}_{i}} \right\| }\)—an auxiliary function used to analyze Weiszfeld’s method.

  • \(L(\mathbf{x}) = \sum _{i = 1}^{m} \frac{\omega _{i}}{\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| }\)—serves as a kind of “Lipschitz” constant, and the operator \(T\) can be written as taking a gradient step with stepsize \(1/L(\mathbf{x})\): \(T(\mathbf{x}) = \mathbf{x}- 1/L(\mathbf{x})\nabla f(\mathbf{x})\).

  • \(\mathbf{R}_{j} = \sum _{i = 1 , i \ne j}^{m} \omega _{i} \left( \mathbf{a}_{j} - \mathbf{a}_{i}\right) /\left\| {\mathbf{a}_{i} - \mathbf{a}_{j}} \right\| \), \(j = 1 , 2 , \ldots , m\). An important property related to \(\mathbf{R}_{j}\) is that \(\mathbf{a}_{j}\) is optimal if and only if \(\left\| {\mathbf{R}_{j}} \right\| \le \omega _{j}\).

  • \(\mathbf{d}_{j} = -\mathbf{R}_{j}/\left\| {\mathbf{R}_{j}} \right\| \)—the steepest descent direction of \(f\) at \(\mathbf{a}_{j}\).
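
To make the iteration concrete, here is a minimal numerical sketch of Weiszfeld’s method \(\mathbf{x}_{k+1} = T(\mathbf{x}_{k})\) in Python/NumPy. The function name, the tolerance, and the early stop at an anchor point are our own illustrative choices, not prescribed by the paper.

```python
import numpy as np

def weiszfeld(anchors, weights, x0, tol=1e-8, max_iter=1000):
    """Iterate x_{k+1} = T(x_k) for the Fermat-Weber problem.

    anchors : (m, n) array whose rows are a_1, ..., a_m
    weights : (m,) array of positive weights omega_1, ..., omega_m
    x0      : (n,) starting point, assumed not to coincide with an anchor
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dist = np.linalg.norm(anchors - x, axis=1)   # ||x - a_i||
        if np.any(dist == 0.0):
            break                 # T is undefined at an anchor; stop there
        coef = weights / dist                        # omega_i / ||x - a_i||
        x_next = coef @ anchors / coef.sum()         # T(x)
        if np.linalg.norm(x_next - x) <= tol:
            return x_next
        x = x_next
    return x
```

A natural starting point is, for example, the weighted average of the anchors, \(\mathbf{x}_{0} = \frac{1}{\omega }\sum _{i = 1}^{m} \omega _{i}\mathbf{a}_{i}\).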

Appendix B: Proof of Lemma 7.1

Let \(\mathbf{x}:= \mathbf{a}_{j} + t_{j}\mathbf{d}_{j}\). Then, from the definition of \(\mathbf{x}\), we have that \(\mathbf{d}_{j} = (1/t_{j})\left( \mathbf{x}- \mathbf{a}_{j}\right) \) and hence

$$\begin{aligned} -\frac{1}{t_{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2}&= \frac{1}{t_{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2} - 2\left\langle {\mathbf{x}- \mathbf{a}_{j} , \frac{\mathbf{x}- \mathbf{a}_{j}}{t_{j}}} \right\rangle \nonumber \\&= \frac{1}{t_{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2} - 2\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{d}_{j}} \right\rangle \nonumber \\&= \frac{1}{t_{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2} - 2\left\langle {\mathbf{x}- \mathbf{a}_{j} , -\frac{\mathbf{R}_{j}}{\left\| {\mathbf{R}_{j}} \right\| }} \right\rangle \nonumber \\&= \frac{1}{t_{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2} + \frac{2}{\left\| {\mathbf{R}_{j}} \right\| }\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{R}_j} \right\rangle . \end{aligned}$$
(33)

We now expand the first term on the right-hand side of (33)

$$\begin{aligned} \frac{1}{t_{j}}\left\| {\mathbf{x}- \mathbf{a}_j} \right\| ^{2}&= \frac{L\left( \mathbf{a}_{j}\right) }{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2} \\&= \frac{1}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \frac{\omega _{i}}{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| }\right) \left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2} \\&= \frac{1}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\sum _{i = 1 , i \ne j}^{m} \omega _{i}\frac{\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2}}{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } \\&= \frac{1}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\sum _{i = 1 , i \ne j}^{m} \omega _{i}\left( \frac{\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| ^{2}}{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } + 2\frac{\left\langle {\mathbf{x}- \mathbf{a}_{i} , \mathbf{a}_{i} - \mathbf{a}_{j}} \right\rangle }{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } + \frac{\left\| {\mathbf{a}_{i} - \mathbf{a}_{j}} \right\| ^{2}}{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| }\right) \\&= \frac{1}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\sum _{i = 1 , i \ne j}^{m} \omega _{i}\left( \frac{\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| ^{2}}{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } + 2\frac{\left\langle {\mathbf{x}- \mathbf{a}_{i} , \mathbf{a}_{i} - \mathbf{a}_{j}} \right\rangle }{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } + \left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| \right) . \end{aligned}$$

The middle term can also be written as follows

$$\begin{aligned} 2\frac{\left\langle {\mathbf{x}- \mathbf{a}_{i} , \mathbf{a}_{i} - \mathbf{a}_{j}} \right\rangle }{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| }&= 2\frac{\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{a}_{i} - \mathbf{a}_{j}} \right\rangle }{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } + 2\frac{\left\langle {\mathbf{a}_{j} - \mathbf{a}_{i} , \mathbf{a}_{i} - \mathbf{a}_{j}} \right\rangle }{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } \\&= 2\frac{\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{a}_{i} - \mathbf{a}_{j}} \right\rangle }{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } - 2\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| . \end{aligned}$$

Thus, from the definition of \(\mathbf{R}_{j}\), we have

$$\begin{aligned} \frac{1}{t_{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2}&= \frac{1}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\sum _{i = 1 , i \ne j}^{m} \omega _{i}\left( \frac{\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| ^{2}}{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } + 2\frac{\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{a}_{i} - \mathbf{a}_{j}} \right\rangle }{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } - \left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| \right) \\&= \frac{1}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \omega _{i}\frac{\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| ^{2}}{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } + 2\left\langle {\mathbf{x}- \mathbf{a}_{j} , \sum _{i = 1 , i \ne j}^{m} \omega _{i}\frac{\mathbf{a}_{i} - \mathbf{a}_{j}}{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| }} \right\rangle - f\left( \mathbf{a}_{j}\right) \right) \\&= \frac{1}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \omega _{i}\frac{\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| ^{2}}{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } - 2\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{R}_{j}} \right\rangle - f\left( \mathbf{a}_{j}\right) \right) . \end{aligned}$$

Note that \(\frac{a^{2}}{b} \ge 2a - b\) for any \(a \in \mathbb {R}\) and \(b \in \mathbb {R}_{++}\) (this is just the inequality \(\left( a - b\right) ^{2} \ge 0\) divided by \(b\)), and hence we have

$$\begin{aligned} \frac{\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| ^{2}}{\left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| } \ge 2\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| - \left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| . \end{aligned}$$

Hence

$$\begin{aligned} \frac{1}{t_{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2}&\ge \frac{1}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \omega _{i}\left( 2\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| - \left\| {\mathbf{a}_{j} - \mathbf{a}_{i}} \right\| \right) \right. \\&\quad -\left. 2\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{R}_{j}} \right\rangle - f\left( \mathbf{a}_{j}\right) \right) \\&\quad = \frac{1}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( 2\sum _{i = 1 , i \ne j}^{m} \omega _{i}\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| - 2\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{R}_{j}} \right\rangle - 2f\left( \mathbf{a}_{j}\right) \right) \\&\quad = \frac{2}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \omega _{i}\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| - \left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{R}_{j}} \right\rangle - f\left( \mathbf{a}_{j}\right) \right) . \end{aligned}$$

Plugging the last inequality into (33) yields

$$\begin{aligned} -\frac{1}{t_{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2}&= \frac{1}{t_{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2} + \frac{2}{\left\| {\mathbf{R}_{j}} \right\| }\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{R}_{j}} \right\rangle \\&\ge \frac{2}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \omega _{i}\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| - \left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{R}_{j}} \right\rangle - f\left( \mathbf{a}_{j}\right) \right) \\&\quad + \frac{2}{\left\| {\mathbf{R}_{j}} \right\| }\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{R}_{j}} \right\rangle \\&= \frac{2}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \omega _{i}\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| - f\left( \mathbf{a}_{j}\right) \right) \\&\quad - \left( \frac{2}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}} - \frac{2}{\left\| {\mathbf{R}_{j}} \right\| }\right) \left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{R}_{j}} \right\rangle \\&= \frac{2}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \omega _{i}\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| - f\left( \mathbf{a}_{j}\right) \right) \\&- \frac{2\omega _{j}}{\left\| {\mathbf{R}_{j}} \right\| \left( \left\| {\mathbf{R}_{j}} \right\| - \omega _{j}\right) }\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{R}_{j}} \right\rangle \\&= \frac{2}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \omega _{i}\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| - f\left( \mathbf{a}_{j}\right) \right) \\&+ \frac{2\omega _{j}}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{d}_{j}} \right\rangle \\&= \frac{2}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \omega _{i}\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| - f\left( \mathbf{a}_{j}\right) + \omega _{j}\left\langle {\mathbf{x}- \mathbf{a}_{j} , \mathbf{d}_{j}} \right\rangle \right) \\&= \frac{2}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( \sum _{i = 1 , i \ne j}^{m} \omega _{i}\left\| {\mathbf{x}- \mathbf{a}_{i}} \right\| - f\left( \mathbf{a}_{j}\right) + \omega _{j}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| \right) \\&= \frac{2}{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}\left( f\left( \mathbf{x}\right) - f\left( \mathbf{a}_{j}\right) \right) , \end{aligned}$$

where the second-to-last equality follows from the fact that \(1 = \left\| {\mathbf{d}_{j}} \right\| = \left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| /t_{j}\). Hence,

$$\begin{aligned} f\left( \mathbf{a}_j\right) - f\left( \mathbf{x}\right) \ge \frac{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}{2t_{j}}\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| ^{2} = t_{j}\frac{\left\| {\mathbf{R}_{j}} \right\| - \omega _{j}}{2} = \frac{\left( \left\| {\mathbf{R}_{j}} \right\| - \omega _{j}\right) ^{2}}{2L\left( \mathbf{a}_{j}\right) }, \end{aligned}$$

where the first equality follows from the fact that \(\left\| {\mathbf{x}- \mathbf{a}_{j}} \right\| = t_{j}\), and the last equality follows from the definition of \(t_{j}\). \(\square \)
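
As a numerical sanity check of this final bound, the following sketch evaluates both of its sides on a random instance. The instance and all variable names are ours; the step size \(t_{j} = \left( \left\| {\mathbf{R}_{j}} \right\| - \omega _{j}\right) /L\left( \mathbf{a}_{j}\right) \) used below is read off from the first and last equalities of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, j = 7, 3, 0
A = rng.normal(size=(m, n))            # anchors a_1, ..., a_m (rows)
w = rng.uniform(1.0, 2.0, size=m)      # positive weights omega_i

def f(x):
    # Fermat-Weber objective f(x) = sum_i omega_i ||x - a_i||
    return np.sum(w * np.linalg.norm(A - x, axis=1))

others = np.delete(np.arange(m), j)
diff = A[j] - A[others]                        # a_j - a_i for i != j
norms = np.linalg.norm(diff, axis=1)           # ||a_i - a_j||
R_j = np.sum(w[others, None] * diff / norms[:, None], axis=0)
L_j = np.sum(w[others] / norms)                # L(a_j)
assert np.linalg.norm(R_j) > w[j], "a_j is optimal; the lemma does not apply"

d_j = -R_j / np.linalg.norm(R_j)               # steepest descent direction at a_j
t_j = (np.linalg.norm(R_j) - w[j]) / L_j       # step size from the proof
x = A[j] + t_j * d_j                           # x := a_j + t_j d_j

lhs = f(A[j]) - f(x)
rhs = (np.linalg.norm(R_j) - w[j]) ** 2 / (2 * L_j)
print(f"f(a_j) - f(x) = {lhs:.6f} >= {rhs:.6f} = (||R_j|| - omega_j)^2 / (2 L(a_j))")
```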


Cite this article

Beck, A., Sabach, S. Weiszfeld’s Method: Old and New Results. J Optim Theory Appl 164, 1–40 (2015). https://doi.org/10.1007/s10957-014-0586-7

