A numerical method for finding positive solutions of an elliptic equation with a Neumann boundary condition

https://doi.org/10.1016/j.amc.2006.11.070

Abstract

Using the Gradient Newton Galerkin algorithm (GNGA), we obtain positive solutions to boundary value problems of the form $-\Delta u = f(u)$ for $x \in \Omega$, with Neumann boundary conditions.

Introduction

Consider the semilinear elliptic BVP
$$\Delta u(x) + f(x,u(x)) = 0, \quad x \in \Omega, \qquad \frac{\partial u}{\partial n}(x) = 0, \quad x \in \partial\Omega, \tag{1}$$
where Ω is a bounded domain in $\mathbb{R}^N$ with smooth boundary, Δ is the standard Laplacian operator, and $\frac{\partial u}{\partial n}$ is the outward normal derivative of u on ∂Ω.

Let H be the Sobolev space $H^{1,2}(\Omega)$ (see [1]). We define the action functional $J : H \to \mathbb{R}$ by
$$J(u) = \int_\Omega \left( \tfrac{1}{2}|\nabla u|^2 - F(u) \right) dx,$$
where $F$ is a primitive of $f$, i.e. $F(x,s) = \int_0^s f(x,t)\,dt$. One easily sees that critical points of J are weak solutions to (1). In fact, by regularity theory for elliptic boundary value problems, u is a (classical) solution to (1) if and only if u is a critical point of the functional J (see [3]).
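For the reader's convenience, the directional-derivative computation behind this claim (a standard step, written out here and consistent with the formulas used below) is
$$J'(u)(v) = \int_\Omega \left( \nabla u \cdot \nabla v - f(u)\,v \right) dx = \int_\Omega \left( -\Delta u - f(u) \right) v \, dx + \int_{\partial\Omega} \frac{\partial u}{\partial n}\, v \, ds,$$
so requiring $J'(u)(v) = 0$ for all $v \in H$ forces both $\Delta u + f(u) = 0$ in Ω and $\partial u/\partial n = 0$ on ∂Ω; the Neumann condition is natural for this variational formulation.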

In the computation of numerical solutions of semilinear elliptic boundary value problems (BVPs) of the form (1), several algorithms have proven effective [2]. Each algorithm has its advantages and disadvantages and should be chosen according to a priori knowledge of the solution manifold structure, if any such knowledge is available. For example, the direct iteration algorithm (DIA) and the monotone iteration algorithm (MIA) are useful for capturing the so-called stable solutions, while the mountain pass algorithm (MPA) and the scaling iteration algorithm (SIA) are useful for capturing the so-called mountain pass solutions, which are unstable.

Obviously, MPA and MIA and their numerical implementations are not the only feasible methods for computing solutions of (1). We may mention Newton’s method, which is a generic method for solving nonlinear problems and can be applied in a variational or other setting in order to treat (1). In the case of nonlinear elliptic PDEs, the application of Newton’s method to the corresponding variational functional appears quite straightforward at first. However, a basic assumption in using Newton’s method to find u satisfying J′(u) = 0, i.e. a critical point u of a functional J on a Banach space, is that J″(u) be invertible as a bounded linear operator.

For the PDE (1) on the square region Ω = (0, 1) × (0, 1), it is well known that the (doubly indexed) eigenvalues and eigenfunctions of −Δ with Neumann boundary condition are
$$\lambda_{m,n} = (m^2 + n^2)\pi^2 \quad\text{and}\quad \psi_{m,n}(x,y) = \cos(m\pi x)\cos(n\pi y), \qquad m, n \ge 0.$$
We will use
$$\psi_{m,n}(x,y) = \begin{cases} 1 & \text{for } m = n = 0,\\ \sqrt{2}\,\cos(m\pi x)\cos(n\pi y) & \text{for } m = 0,\ n \ne 0 \text{ or } n = 0,\ m \ne 0,\\ 2\cos(m\pi x)\cos(n\pi y) & \text{for } m \ne 0 \text{ and } n \ne 0, \end{cases}$$
which are normalized in $L^2 = L^2(\Omega)$ and form a basis for $L^2$ and H. For convenience, we use $\{\psi_i\}_{i=1}^{M}$ to denote the first M elements of this basis. Thus, for u in the Galerkin subspace G defined below there exist Fourier coefficients $\{a_i\}_{i=1}^{M} \in \mathbb{R}^M$ such that $u = \sum_{i=1}^{M} a_i \psi_i$.
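Purely as an illustration (not taken from the article), a minimal Python sketch of this singly indexed, L²-normalized cosine basis might read as follows; the helper name build_basis and the cut-off on the mode indices are our own choices.

import numpy as np

def build_basis(max_index):
    """Eigenvalues and L2-normalized eigenfunctions of -Laplacian with Neumann BC on (0,1)^2,
    singly indexed by sorting the double index (m, n) by eigenvalue."""
    modes = []
    for m in range(max_index + 1):
        for n in range(max_index + 1):
            lam = (m**2 + n**2) * np.pi**2
            # Normalization: 1 if m = n = 0, sqrt(2) if exactly one index is 0, 2 otherwise.
            if m == 0 and n == 0:
                c = 1.0
            elif m == 0 or n == 0:
                c = np.sqrt(2.0)
            else:
                c = 2.0
            psi = lambda x, y, m=m, n=n, c=c: c * np.cos(m * np.pi * x) * np.cos(n * np.pi * y)
            modes.append((lam, psi))
    modes.sort(key=lambda t: t[0])   # single index i runs over the modes ordered by eigenvalue
    return [lam for lam, _ in modes], [psi for _, psi in modes]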

To apply Newton’s method to our problem, we truncate to the finite-dimensional Galerkin space $G = \operatorname{span}\{\psi_i\}_{i=1}^{M}$ and restrict all of our computations to this M-dimensional subspace $G \subset H$. Recall that we have normalized our eigenfunctions in $L^2$ so that $\int_\Omega \psi_i \psi_j \, dx = \delta_{ij}$ and $\int_\Omega \nabla\psi_i \cdot \nabla\psi_j \, dx = \delta_{ij}\lambda_j$, where $\delta_{ij}$ is the Kronecker delta.
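The orthonormality relation in $L^2$ is easy to check numerically; the short sketch below (again our own illustration, using a midpoint quadrature rule on an arbitrary grid) prints an approximate identity matrix for the first few basis functions built above.

def l2_inner(u, v, N=256):
    """Approximate the integral of u*v over (0,1)^2 by the midpoint rule on an N x N grid."""
    h = 1.0 / N
    x = (np.arange(N) + 0.5) * h
    X, Y = np.meshgrid(x, x, indexing="ij")
    return np.sum(u(X, Y) * v(X, Y)) * h * h

lams, psis = build_basis(4)
gram = np.array([[l2_inner(pi, pj) for pj in psis[:6]] for pi in psis[:6]])
print(np.round(gram, 3))   # expect the 6 x 6 identity matrix (Kronecker delta)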

Thus, we identify G with $\mathbb{R}^M$ and define $\hat{J} : \mathbb{R}^M \to \mathbb{R}$ by $\hat{J}(a) = J(u)$, where $u = \sum_{i=1}^{M} a_i \psi_i \in G$. The $L^2$ gradient of $\hat{J}$ is the vector function
$$g(a) = \left( \frac{\partial \hat{J}(a)}{\partial a_1}, \ldots, \frac{\partial \hat{J}(a)}{\partial a_M} \right) = \left( J'(u)(\psi_1), \ldots, J'(u)(\psi_M) \right) \in \mathbb{R}^M,$$
and the Hessian matrix is
$$A(a) = \left( \frac{\partial^2 \hat{J}(a)}{\partial a_i \partial a_j} \right)_{i,j=1}^{M} = \left( J''(u)(\psi_i, \psi_j) \right)_{i,j=1}^{M}.$$

Using integration by parts, we see that we can compute the entries
$$g_k(a) = \frac{\partial \hat{J}(a)}{\partial a_k} = -\int_\Omega (\Delta u + f(u)) \psi_k \, dx = a_k \lambda_k - \int_\Omega f(u) \psi_k \, dx$$
and
$$A_{jk}(a) = \delta_{jk}\lambda_j - \int_\Omega f'(u) \psi_j \psi_k \, dx,$$
where $u = \sum_{i=1}^{M} a_i \psi_i$. For informational purposes, one might also want to compute $\hat{J}(a) = J(u)$ in a similar fashion. Note that we only need to do numerical integration on the nonlinear terms; this greatly increases the accuracy of our approximations. With the gradient and Hessian in hand, the Newton iteration becomes
$$a^{n+1} = a^n - (A(a^n))^{-1} g(a^n).$$
In practice, one can introduce stability and remain in a desired basin of attraction by imitating a continuous Newton flow, i.e. by taking smaller steps: choose δ small and iterate
$$a^{n+1} = a^n - \delta\,(A(a^n))^{-1} g(a^n).$$
When the sequence of coefficient vectors $a^n$ converges to a vector a, our approximate solution is given by $u = \sum_{i=1}^{M} a_i \psi_i$.
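The following sketch shows how these entries and the damped Newton step could be assembled in practice (our own illustration; the midpoint quadrature and the helper names assemble_gradient_hessian and newton_step are assumptions, and build_basis refers to the earlier sketch). As noted above, only the nonlinear terms are integrated numerically.

def assemble_gradient_hessian(a, lams, psis, f, fprime, N=256):
    """g_k(a) = a_k*lam_k - int f(u) psi_k dx,   A_jk(a) = delta_jk*lam_j - int f'(u) psi_j psi_k dx."""
    M = len(a)
    h = 1.0 / N
    x = (np.arange(N) + 0.5) * h                       # midpoint quadrature nodes on (0,1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    psi_vals = [psis[k](X, Y) for k in range(M)]
    U = sum(a[k] * psi_vals[k] for k in range(M))      # u = sum_k a_k psi_k on the grid
    integrate = lambda vals: np.sum(vals) * h * h      # numerical integration (nonlinear terms only)
    g = np.array([a[k] * lams[k] - integrate(f(U) * psi_vals[k]) for k in range(M)])
    A = np.diag(lams[:M]) - np.array(
        [[integrate(fprime(U) * psi_vals[j] * psi_vals[k]) for k in range(M)]
         for j in range(M)])
    return g, A

def newton_step(a, lams, psis, f, fprime, delta=1.0):
    """One damped Newton update: a <- a - delta * A(a)^{-1} g(a)."""
    g, A = assemble_gradient_hessian(a, lams, psis, f, fprime)
    return a - delta * np.linalg.solve(A, g)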


Numerical results

The algorithm, GNGA (see [5]), consists of performing Newton’s method iterations on the Fourier coefficients to find zeros of ∇J. To summarize the GNGA for our semilinear elliptic Neumann problem, one performs the following steps (an illustrative code sketch follows the list):

  • 1.

    Define region Ω, nonlinearity f, and step size δ.

  • 2.

    Obtain the orthonormal basis $\{\psi_k\}_{k=1}^{M}$ for a sufficiently large subspace $G \subset H$.

  • 3.

    Choose initial coefficients $a = a^0 = \{a_k\}_{k=1}^{M}$, set $u = u^0 = \sum_{k=1}^{M} a_k \psi_k$, and set n = 0.

  • 4.

    Loop

    • (a)

      Calculate $g = g^{n+1} = \left(J'(u)(\psi_k)\right)_{k=1}^{M} \in \mathbb{R}^M$ (gradient vector).

    • (b)

      Calculate $A = A^{n+1} = \left(J''(u)(\psi_j,\psi_k)\right)_{j,k=1}^{M} \in \mathbb{R}^{M\times M}$ (Hessian matrix).

    • (c)

      Set $a = a^{n+1} = a^n - \delta\,A^{-1}g$ and $u = \sum_{k=1}^{M} a_k \psi_k$, increment n, and repeat until $\|g\|$ is sufficiently small.
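As mentioned before the list, here is a compact end-to-end sketch of these steps. It is our illustration only, reusing build_basis and assemble_gradient_hessian from the earlier sketches; the nonlinearity f(u) = u³ − u, the damping δ, and the other parameter values are arbitrary choices rather than the article's.

def gnga(f, fprime, M=6, delta=0.5, tol=1e-8, max_iter=200, a0=None):
    """Gradient Newton-Galerkin iteration on the Fourier coefficients (seeks zeros of the gradient g)."""
    lams, psis = build_basis(6)                      # step 2: basis for a subspace G (cut-off arbitrary)
    lams, psis = lams[:M], psis[:M]
    a = np.zeros(M) if a0 is None else np.array(a0, dtype=float)   # step 3: initial coefficients
    for n in range(max_iter):                        # step 4: Newton loop
        g, A = assemble_gradient_hessian(a, lams, psis, f, fprime)
        if np.linalg.norm(g) < tol:                  # stop once the gradient is (numerically) zero
            break
        a = a - delta * np.linalg.solve(A, g)        # damped Newton step
    return a, psis

# Example with a sample nonlinearity f(u) = u^3 - u: starting from the constant initial guess
# u = 0.8, the iteration approaches the constant positive solution u = 1 of Delta u + f(u) = 0
# with Neumann boundary conditions.
f = lambda u: u**3 - u
fp = lambda u: 3 * u**2 - 1
a0 = np.zeros(6); a0[0] = 0.8                        # psis[0] is the constant eigenfunction
a, psis = gnga(f, fp, M=6, a0=a0)
print(np.round(a, 4))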

References (5)

[1] R.A. Adams, Sobolev Spaces, 1975.

[2] G. Chen et al., Algorithms and visualization for solution of nonlinear equations, International Journal of Bifurcation and Chaos, 2000.
