A numerical method for finding positive solutions of an elliptic equation with Neumann boundary condition
Introduction
Consider the semilinear elliptic BVP
$$\begin{cases}\Delta u + f(u) = 0 & \text{in } \Omega,\\ \dfrac{\partial u}{\partial n} = 0 & \text{on } \partial\Omega,\end{cases}\qquad(1)$$
where Ω is a bounded domain in ℝᴺ with smooth boundary, Δ is the standard Laplacian operator, and ∂u/∂n is the outward normal derivative of u on ∂Ω.
Let H be the Sobolev space H¹(Ω) (see [1]). We define the action functional by
$$J(u) = \int_\Omega \Big(\tfrac{1}{2}|\nabla u|^2 - F(u)\Big)\,dx, \qquad F(s) = \int_0^s f(t)\,dt.$$
One easily sees that critical points of J are weak solutions to (1). In fact, by regularity theory for elliptic boundary value problems, u is a (classical) solution to (1) if and only if u is a critical point of the functional J (see [3]).
In the computation of numerical solutions of semilinear elliptic boundary value problems (BVPs) in the form (1), several algorithms have been effective ([2]). Each algorithm has its advantages and disadvantages and should be chosen according to a priori knowledge of the solution manifold structure, if any such knowledge is available. For example, the direct iteration algorithm (DIA) and the monotone iteration algorithm (MIA) are useful for capturing the so-called stable solutions, while the mountain pass algorithm (MPA) and the scaling iteration algorithm (SIA) are useful for capturing the so-called mountain pass solutions, which are unstable.
Obviously, these algorithms and their numerical implementations are not the only feasible methods for computing solutions of (1). We may mention Newton's method, which is a generic method for solving nonlinear problems and can be applied in variational or other settings in order to treat (1). In the case of nonlinear elliptic PDEs, the application of Newton's method to the corresponding variational functional appears quite straightforward at first. However, a basic assumption for using Newton's method to find u satisfying J′(u) = 0, i.e. a critical point u of a functional J in a Banach space, is that J″(u) be invertible as a bounded linear operator.
For the PDE (1) on the square region Ω = (0, 1) × (0, 1), it is well known that the (doubly indexed) eigenvalues and eigenfunctions of −Δ with Neumann boundary condition are
$$\lambda_{ij} = (i^2 + j^2)\pi^2, \qquad i, j = 0, 1, 2, \ldots,$$
and
$$\varphi_{ij}(x, y) = \cos(i\pi x)\cos(j\pi y).$$
We will use
$$\psi_{ij} = c_i c_j\,\varphi_{ij}, \qquad c_0 = 1,\quad c_i = \sqrt{2}\ \text{for } i \ge 1,$$
which is normalized in L² = L²(Ω) and forms a basis for L² and H. For convenience, we use {ψₘ} to denote this basis with a single index. Thus, for u ∈ H there exist Fourier coefficients {aₘ} such that u = Σₘ aₘψₘ.
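To make the normalization concrete, the following sketch (an illustration, not code from the paper) builds these normalized eigenfunctions on the unit square and checks their L² orthonormality with midpoint quadrature:

```python
import numpy as np

# A minimal sketch (not code from the paper): the L^2-normalized Neumann
# eigenfunctions of -Laplacian on the unit square, checked for orthonormality
# with midpoint quadrature on a uniform grid.
def psi(i, j, x, y):
    """Normalized eigenfunction c_i * c_j * cos(i*pi*x) * cos(j*pi*y)."""
    ci = 1.0 if i == 0 else np.sqrt(2.0)
    cj = 1.0 if j == 0 else np.sqrt(2.0)
    return ci * cj * np.cos(i * np.pi * x) * np.cos(j * np.pi * y)

def eigenvalue(i, j):
    """Corresponding Neumann eigenvalue (i^2 + j^2) * pi^2."""
    return (i * i + j * j) * np.pi ** 2

n = 200                                   # quadrature points per direction
t = (np.arange(n) + 0.5) / n              # midpoint nodes on (0, 1)
X, Y = np.meshgrid(t, t)
ip = lambda f, g: np.sum(f * g) / n ** 2  # L^2 inner product via midpoint rule

print(ip(psi(1, 0, X, Y), psi(1, 0, X, Y)))  # ≈ 1 (normalized)
print(ip(psi(1, 0, X, Y), psi(2, 1, X, Y)))  # ≈ 0 (orthogonal)
```

The midpoint rule is exact for these trigonometric products when the mode numbers are small relative to the grid, so the checks hold essentially to machine precision.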
To apply Newton's method to our problem, we truncate the infinite-dimensional Galerkin space: we restrict all of our computations to the M-dimensional subspace G = span{ψ₁, …, ψ_M} ⊂ H. Recall that we have normalized our eigenfunctions in L² so that ∫_Ω ψᵢψⱼ dx = δᵢⱼ and ∫_Ω ∇ψᵢ·∇ψⱼ dx = λᵢδᵢⱼ, where δᵢⱼ is the Kronecker delta.
Thus, we identify G with ℝᴹ and define Ĵ: ℝᴹ → ℝ by Ĵ(a) = J(u), where u = Σₘ₌₁ᴹ aₘψₘ. The L² gradient of Ĵ is the vector function g(a) = ∇Ĵ(a), and the Hessian matrix is h(a) = D²Ĵ(a).
Using integration by parts, we see that we can compute the entries
$$g_m(a) = \lambda_m a_m - \int_\Omega f(u)\,\psi_m\,dx$$
and
$$h_{ml}(a) = \lambda_m \delta_{ml} - \int_\Omega f'(u)\,\psi_m\psi_l\,dx,$$
where u = Σₖ₌₁ᴹ aₖψₖ. For informational purposes, one might also want to compute Ĵ(a) = J(u) in a similar fashion. Note that we only need to do numerical integration on the nonlinear terms; this greatly increases the accuracy of our approximations. With a gradient and Hessian in hand, the Newton iteration becomes
$$a^{n+1} = a^n - h(a^n)^{-1} g(a^n).$$
In practice, one can introduce stability and remain in a desired basin of attraction by imitating a continuous Newton flow, i.e. by choosing a small step size δ and iterating
$$a^{n+1} = a^n - \delta\, h(a^n)^{-1} g(a^n).$$
When the sequence of coefficients aⁿ converges to a vector a, our approximate solution is given by u = Σₘ₌₁ᴹ aₘψₘ.
Numerical results
The algorithm, GNGA (see [5]), consists of performing Newton's method iterations on the Fourier coefficients to find zeros of ∇J. To summarize the GNGA for our semilinear elliptic Neumann problem (1), one performs the following steps:
- 1.
Define region Ω, nonlinearity f, and step size δ.
- 2.
Obtain orthonormal basis for a sufficiently large subspace G ⊂ H.
- 3.
Choose initial coefficients a⁰ = (a⁰₁, …, a⁰ₘ), set u⁰ = Σₘ a⁰ₘψₘ, and set n = 0.
- 4.
Loop
- (a)
Calculate g(aⁿ) = ∇Ĵ(aⁿ) (gradient vector).
- (b)
Calculate h(aⁿ) = D²Ĵ(aⁿ) (Hessian matrix).
- (c)
Set aⁿ⁺¹ = aⁿ − δ h(aⁿ)⁻¹ g(aⁿ), increment n, and repeat until ‖g(aⁿ)‖ is sufficiently small.
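The steps above can be sketched end to end as follows. This is a hedged illustration, not the authors' code: the nonlinearity f(u) = u³ − u, the truncation, the grid, and the starting guess are assumptions, chosen so that the positive constant solution u ≡ 1 of Δu + f(u) = 0 with Neumann conditions is a checkable target.

```python
import numpy as np

# Hedged end-to-end sketch of the GNGA loop on the unit square. f(u) = u^3 - u,
# the truncation K, grid size n, and starting guess are illustrative assumptions
# (not from the paper); the constant solution u = 1 is the expected limit.
K, n = 4, 40                                       # modes 0..K-1 per direction; M = K^2
t = (np.arange(n) + 0.5) / n                       # midpoint quadrature nodes
X, Y = np.meshgrid(t, t)
w = 1.0 / n ** 2                                   # quadrature weight

pairs = [(i, j) for i in range(K) for j in range(K)]
lam = np.array([(i * i + j * j) * np.pi ** 2 for i, j in pairs])
c = lambda i, s: (1.0 if i == 0 else np.sqrt(2.0)) * np.cos(i * np.pi * s)
Psi = np.array([(c(i, X) * c(j, Y)).ravel() for i, j in pairs])  # step 2: basis

f = lambda u: u ** 3 - u                           # step 1: nonlinearity (illustrative)
fp = lambda u: 3 * u ** 2 - 1
delta, tol = 1.0, 1e-10                            # step 1: step size; stopping tolerance

a = np.zeros(len(pairs))
a[0] = 1.2                                         # step 3: initial coefficients, near u = 1
for it in range(100):                              # step 4: loop
    u = a @ Psi
    g = lam * a - Psi @ f(u) * w                   # (a) gradient vector
    if np.linalg.norm(g) < tol:
        break
    h = np.diag(lam) - (Psi * fp(u)) @ Psi.T * w   # (b) Hessian matrix
    a = a - delta * np.linalg.solve(h, g)          # (c) Newton update
u = a @ Psi                                        # approximate solution on the grid
```

Starting from a perturbation of the constant state, the iteration converges in a handful of steps to the coefficient vector of u ≡ 1, i.e. a₁ → 1 for the constant mode with the remaining coefficients negligible.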
References (5)
- Sobolev Spaces (1975)
- et al., Algorithms and visualization for solution of nonlinear equations, International Journal of Bifurcation and Chaos (2000)