Inexact overlapped block Broyden methods for solving nonlinear equations

https://doi.org/10.1016/S0096-3003(02)00026-7

Abstract

In this paper a parallelizable overlapped block Broyden method is presented for solving large systems of nonlinear equations. The basic idea is to perform the block Broyden iteration described in [SIAM J. Sci. Comput. 18 (1997) 1367] on the overlapped blocks and then to assemble the overlapping solutions in a weighted-average manner at each iteration. A family of overlapped nonlinear solvers can be generated by combining it with iterative or direct linear solvers. The conditions under which the algorithm is locally convergent are studied, and several useful implementation techniques are also considered.

Introduction

In this paper we are concerned with the problem of solving the large system of nonlinear equations
$$F(x)=0,$$
where $F(x)=(f_1,\ldots,f_N)^T$ is a nonlinear operator from $\mathbb{R}^N$ to $\mathbb{R}^N$. Such systems often arise when solving initial or boundary value problems in ordinary or partial differential equations. As is well known, Newton's method and its variations [1], [2], coupled with a direct solution technique such as Gaussian elimination, are powerful solvers for these nonlinear systems provided one has a sufficiently good initial guess $x_0$ and $N$ is not too large. When the Jacobian is large and sparse, inexact Newton methods [3], [4], [5], [6], [7] or some kind of nonlinear block-iterative method [8], [9], [10] may be used.

An inexact Newton method is actually a two-stage iterative method with the following general form [3]:

  • For $k=0$ step 1 until convergence do

    • Find some $s_k$ which satisfies

      • $F'(x_k)s_k=-F(x_k)+r_k$, where $\|r_k\|/\|F(x_k)\|\leqslant\eta_k$.

    • Set $x_{k+1}=x_k+s_k$.

Here $\{\eta_k\}$ is a sequence of forcing terms such that $0\leqslant\eta_k<1$. The inner iteration is some iterative method for solving the Newton equation $F'(x_k)s_k=-F(x_k)$ approximately, with residual $r_k$. The relative residual stopping control $\|r_k\|/\|F(x_k)\|\leqslant\eta_k$ guarantees the local convergence of the method under the usual assumptions for Newton's method [3]. In [6] the author considered a scaled relative residual control which assures linear convergence of inexact methods for forcing terms uniformly less than one and for an arbitrary norm on $\mathbb{R}^N$. Furthermore, it should be pointed out that the choice of the forcing terms is very important for achieving desirably fast local convergence and for avoiding oversolving in the inexact method. Some choices have been presented in [12].
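
As a concrete illustration of this two-stage structure and of the relative residual control, the following Python sketch runs an inexact Newton iteration whose inner stage is a simple damped Richardson sweep; the inner solver, the constant forcing term and the small test function are illustrative assumptions, not the setup analyzed in [3].

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-8, max_outer=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        Jx = J(x)
        s = np.zeros_like(x)
        # Inner stage: solve J(x_k) s_k = -F(x_k) only approximately; the damped
        # Richardson sweeps below stand in for any inner solver that satisfies
        # the relative residual test.
        for _ in range(100):
            r = -Fx - Jx @ s                       # residual of the Newton equation
            if np.linalg.norm(r) <= eta * np.linalg.norm(Fx):
                break                              # forcing-term test ||r_k|| <= eta_k ||F(x_k)||
            s = s + r / np.max(np.abs(np.diag(Jx)))
        x = x + s                                  # outer update x_{k+1} = x_k + s_k
    return x

# Tiny usage example: F(x) = x - cos(x) componentwise, Jacobian I + diag(sin x).
F = lambda x: x - np.cos(x)
J = lambda x: np.eye(len(x)) + np.diag(np.sin(x))
print(inexact_newton(F, J, np.zeros(3)))           # approaches [0.739..., 0.739..., 0.739...]
```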

Recently, with the development of Krylov subspace projection methods, this class of methods, such as Arnoldi's method [13] and the generalized minimum residual method (GMRES) [14], has been widely used as the inner iteration of inexact Newton methods [4], [5]. The combined methods are called inexact Newton–Krylov methods or nonlinear Krylov subspace projection methods. The Krylov methods have the virtue of requiring almost no matrix storage, which gives them a distinct advantage over direct methods for solving the large Newton equations. In particular, a Krylov method for solving $F'(x_k)s_k=-F(x_k)$ uses the Jacobian only through its products with given vectors, $F'(x_k)v$, and such a product can be approximated by the difference quotient
$$F'(x_k)v\approx\frac{F(x_k+\sigma v)-F(x_k)}{\sigma},$$
where $\sigma$ is a scalar. Hence the Jacobian need not be computed explicitly. In [4] Brown gave local convergence results for inexact Newton–Krylov methods with this difference approximation of the Jacobian.
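
The sketch below shows how such a difference quotient yields a matrix-free Jacobian-vector product that can be handed to GMRES through a linear operator; the step-size heuristic for $\sigma$, the helper name jacobian_free_matvec and the test function are illustrative choices, not the scheme analyzed in [4].

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jacobian_free_matvec(F, x, Fx, eps=1e-7):
    """Return v -> (F(x + sigma*v) - F(x)) / sigma, an approximation of F'(x) v."""
    def matvec(v):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        sigma = eps * max(1.0, np.linalg.norm(x)) / nv   # one common step-size heuristic
        return (F(x + sigma * v) - Fx) / sigma
    return matvec

# Usage: one approximate Newton-GMRES step for F(x) = x - cos(x),
# without ever forming the Jacobian.
F = lambda x: x - np.cos(x)
x = np.zeros(3)
Fx = F(x)
J_op = LinearOperator((3, 3), matvec=jacobian_free_matvec(F, x, Fx))
s, info = gmres(J_op, -Fx)
print(x + s, info)                                 # info == 0 means GMRES converged
```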

Parallel nonlinear block-iterative methods form another class of solvers for large and sparse nonlinear equations, consisting chiefly of block Newton-type and block quasi-Newton methods. The classical nonlinear block-Jacobi and nonlinear block-Gauss–Seidel methods [1] are two original versions. A block-parallel Newton method via overlapping epsilon decompositions [15] was presented by Zecevic and Siljak [8]. In [9] the authors described a parallelizable Jacobi-type block Broyden method, and more recently a partially asynchronous block Broyden method was proposed by Xu [10].

In this paper we consider an inexact block Broyden method with partial overlap, which is a generalization of the parallelizable Jacobi-type block Broyden method. The basic idea is to perform the block Broyden iteration directly on the overlapped blocks and to assemble the overlapping parts of the results in a weighted-average manner at each iteration, as pictured in the sketch below. The goal is to accelerate the block Broyden method through the overlapping parts. In Section 2 the new methods are presented. Section 3 describes sufficient conditions under which the new methods are convergent. In Section 4 some useful implementation techniques are considered, and numerical results for solving a function used as a test problem in [11] are also given. In Section 5 we draw conclusions and discuss future work on this subject.
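
The assembly step can be pictured as follows. This is only a schematic sketch of the weighted-average combination of overlapping block updates, not the paper's Algorithm 1; the index sets, local updates and the weight alpha_0 = 0.6 are made-up illustrative data.

```python
import numpy as np

def assemble_overlapped(N, blocks, local_updates, alphas):
    """blocks[i] is the index set S_i; local_updates[i] is the block-i update on S_i.

    An index shared by blocks i and i+1 is combined as
    alpha_i * (block i value) + (1 - alpha_i) * (block i+1 value).
    """
    y = np.zeros(N)
    weight = np.zeros(N)
    for i, (S, u) in enumerate(zip(blocks, local_updates)):
        w = np.ones(len(S))
        if i + 1 < len(blocks):                      # overlap with the next block
            w[np.isin(S, blocks[i + 1])] = alphas[i]
        if i > 0:                                    # overlap with the previous block
            w[np.isin(S, blocks[i - 1])] = 1.0 - alphas[i - 1]
        y[S] += w * u
        weight[S] += w
    return y / weight    # normalize; for two-block overlaps the weights already sum to 1

# Two blocks of a length-5 vector sharing index 2, combined with alpha_0 = 0.6.
blocks = [np.array([0, 1, 2]), np.array([2, 3, 4])]
updates = [np.array([1.0, 1.0, 1.0]), np.array([3.0, 3.0, 3.0])]
print(assemble_overlapped(5, blocks, updates, alphas=[0.6]))
# -> index 2 becomes 0.6*1.0 + 0.4*3.0 = 1.8
```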

Section snippets

The new algorithm

In the following discussion, $y^*\in\mathbb{R}^N$ is an exact solution of system (1), i.e., $F(y^*)=0$. Let $y_0$ be an initial guess of $y^*$, and suppose that it is possible to generate a new approximation $y_k$ of $y^*$ for $k=0,1,\ldots$ Suppose the components of $y$ and $F$ are conformally partitioned as follows:
$$\{F\}=\{F_1,\ldots,F_M\},\qquad \{y\}=\{y_1,\ldots,y_M\},$$
where
$$F_i:\mathbb{R}^N\to\mathbb{R}^{n_i},\quad F_i=(f_{i_1},\ldots,f_{i_{n_i}})^T,\quad y_i\in\mathbb{R}^{n_i},\quad y_i=(y_{i_1},\ldots,y_{i_{n_i}})^T,\quad i=1,\ldots,M.$$
Let $S_i=\{i_1,\ldots,i_{n_i}\}$; the partition is chosen such that $\bigcup_{i=1}^M S_i=\{1,2,\ldots,N\}$ and $S_i\cap S_{i+1}=\Omega_{i,i+1}\neq\emptyset$, $i=1,\ldots,M-1$. This partition may be
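
One simple way to construct such an overlapping partition is sketched below; the uniform block sizes and the fixed overlap width are illustrative assumptions, not a strategy prescribed by the paper (partitioning strategies are discussed in Section 4).

```python
import numpy as np

def overlapping_partition(N, M, overlap=1):
    """Split {1, ..., N} into M index sets S_i with nonempty consecutive overlaps."""
    edges = np.linspace(0, N, M + 1, dtype=int)       # boundaries of a disjoint partition
    blocks = []
    for i in range(M):
        lo = max(edges[i] - (overlap if i > 0 else 0), 0)
        hi = min(edges[i + 1] + (overlap if i + 1 < M else 0), N)
        blocks.append(np.arange(lo, hi) + 1)           # 1-based indices as in the text
    return blocks

for S in overlapping_partition(N=10, M=3, overlap=1):
    print(S)
# S_1 = {1,...,4}, S_2 = {3,...,7}, S_3 = {6,...,10}; consecutive sets share two indices
```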

Local convergence

In this section, we give the conditions under which the new methods presented in the previous section are locally convergent.

Let $B_\alpha(B_1,\ldots,B_M)=\mathrm{diag}(B_1I_1^{-1},0)+\cdots+\mathrm{diag}(0,B_iI_i^{-1},0)+\cdots+\mathrm{diag}(0,B_MI_M^{-1})$, where $I_i=\mathrm{diag}(d_{i_1},\ldots,d_{i_{n_i}})$ denotes a diagonal matrix of order $n_i$ for $i=1,\ldots,M$, with
$$d_{i_j}=\begin{cases}\alpha_i, & i_j\in\Omega_{i,i+1},\\ 1-\alpha_{i-1}, & i_j\in\Omega_{i-1,i},\\ 1, & i_j\in S_i,\ i_j\notin\Omega_{i-1,i}\cup\Omega_{i,i+1}.\end{cases}$$
A nonlinear function $\Phi:\mathbb{R}^N\times D\to\mathbb{R}^N$ could be defined by
$$\Phi(y,B)=y-B^{-1}F(y),$$
where $D=\{B=B_\alpha(B_1,\ldots,B_M)\mid B_i\in\mathbb{R}^{n_i\times n_i}\ \text{is nonsingular},\ i=1,\ldots,M\}$. Obviously, if there exist $y^*\in\mathbb{R}^N$ and $E^*\in D$
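
To make the weighting concrete, the sketch below evaluates the diagonal entries $d_{i_j}$ for a small three-block partition; the index sets and the choice $\alpha_1=\alpha_2=0.5$ are hypothetical, chosen only to show that the two weights attached to an index shared by consecutive blocks sum to one.

```python
import numpy as np

def block_weights(S, S_prev, S_next, alpha_prev, alpha_i):
    """Diagonal weights d_{i_j} for one block: alpha_i on the overlap with the next
    block, 1 - alpha_{i-1} on the overlap with the previous block, 1 elsewhere."""
    d = np.ones(len(S))
    if S_next is not None:
        d[np.isin(S, S_next)] = alpha_i              # i_j in Omega_{i,i+1}
    if S_prev is not None:
        d[np.isin(S, S_prev)] = 1.0 - alpha_prev     # i_j in Omega_{i-1,i}
    return d

# Three blocks of {1,...,8} with single-index overlaps and alpha_1 = alpha_2 = 0.5.
S1, S2, S3 = np.array([1, 2, 3]), np.array([3, 4, 5, 6]), np.array([6, 7, 8])
print(block_weights(S1, None, S2, None, 0.5))   # [1.  1.  0.5]
print(block_weights(S2, S1, S3, 0.5, 0.5))      # [0.5 1.  1.  0.5]
print(block_weights(S3, S2, None, 0.5, None))   # [0.5 1.  1. ]
```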

Implementation and examples

In this section, several strategies for partitioning the Jacobian into weakly coupled, partially overlapped blocks are briefly described first, followed by the restarting technique; finally, a test function is described and some numerical results for solving it are presented.

As is well known, the zero–nonzero structure of an $N\times N$ symmetric matrix $A=(A_{ij})$ can be associated with an undirected graph $G(A)=\langle V,E\rangle$, where the set $V$ has $N$ vertices $\{1,\ldots,N\}$ and $E$ is the collection of unordered pairs $\{i,j\}$, $i\neq j$, such that $A_{ij}\neq 0$.
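
A minimal sketch of this association is given below; the small tridiagonal matrix is only an illustrative example.

```python
import numpy as np

def structure_graph(A):
    """Undirected graph G(A) = <V, E> of the zero-nonzero structure of a symmetric A:
    vertices {1, ..., N} and an edge {i, j} whenever i != j and A_ij != 0."""
    N = A.shape[0]
    V = set(range(1, N + 1))
    E = {frozenset((i + 1, j + 1))
         for i in range(N) for j in range(i + 1, N) if A[i, j] != 0}
    return V, E

A = np.array([[4, 1, 0, 0],
              [1, 4, 1, 0],
              [0, 1, 4, 1],
              [0, 0, 1, 4]])
V, E = structure_graph(A)
print(V, sorted(tuple(sorted(e)) for e in E))
# {1, 2, 3, 4} [(1, 2), (2, 3), (3, 4)]
```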

Conclusions and discussion

In this paper a partially overlapped block Broyden method (Algorithm 1) is presented for solving large nonlinear systems. The basic idea is to perform the block Broyden iteration directly on the overlapped blocks and to assemble the overlapping parts of the iterative results in an appropriate manner at each iteration. By combining it with iterative or direct linear solvers, it is possible to obtain a family of nonlinear solvers which can be easily parallelized. In particular, an inexact version

References (22)

  • I.K. Argyros, Convergence rates for inexact Newton-like methods at singular points and applications, Appl. Math. Comput. (1999)
  • J.-J. Xu, Convergence of partially asynchronous block quasi-Newton methods for nonlinear systems of equations, J. Appl. Math. Comput. (1999)
  • J.M. Ortega et al., Iterative Solution of Nonlinear Equations in Several Variables (1970)
  • W.C. Rheinboldt, Methods for Solving Systems of Nonlinear Equations (1998)
  • R.S. Dembo et al., Inexact Newton methods, SIAM J. Numer. Anal. (1982)
  • P.N. Brown, A local convergence theory for combined Inexact-Newton/Finite-Difference projection methods, SIAM J. Numer. Anal. (1987)
  • P.N. Brown et al., Hybrid Krylov methods for nonlinear systems of equations, SIAM J. Sci. Statist. Comput. (1990)
  • B. Morini, Convergence behaviour of inexact Newton methods, Math. Comp. (1999)
  • A.I. Zecevic et al., A block-parallel Newton method via overlapping epsilon decompositions, SIAM J. Matrix Anal. Appl. (1994)
  • G. Yang et al., Inexact block Jacobi–Broyden methods for solving nonlinear systems of equations, SIAM J. Sci. Comput. (1997)
  • M.A. Gomes-Ruggiero et al., Comparing algorithms for solving sparse nonlinear systems of equations, SIAM J. Sci. Statist. Comput. (1992)

This work has been partly supported by the National Key Basic Research Special Fund (No. 1998020306) and by CNSF (No. 19871047).
