Sinc-Galerkin method for solving nonlinear boundary-value problems

Computers and Mathematics with Applications 48 (2004) 1285-1298, www.elsevier.com/locate/camwa, doi:10.1016/j.camwa.2004.10.021

M. EL-GAMEL
Department of Mathematical Sciences, Faculty of Engineering, Mansoura University, Egypt
gamel_eg@yahoo.com

A. I. ZAYED
Department of Mathematical Sciences, DePaul University, Chicago, IL 60614, U.S.A.
azayed@math.depaul.edu

Abstract—The sinc-Galerkin method is used to approximate solutions of nonlinear problems involving nonlinear second-, fourth-, and sixth-order differential equations with homogeneous and nonhomogeneous boundary conditions. The scheme is tested on four nonlinear problems. The results demonstrate the reliability and efficiency of the algorithm developed. © 2004 Elsevier Ltd. All rights reserved.

Keywords—Sinc-Galerkin, Sinc function, Nonlinear differential equations, Numerical solutions, Newton's method.

1. INTRODUCTION

Linear two-point boundary-value problems can be readily solved by many methods, e.g., shooting, band matrix, parallel shooting, collocation, Ritz-Galerkin [1], and sinc-Galerkin [2-5]. Even singular linear two-point boundary-value problems can be handled by the Ritz-Galerkin method, as was shown by Jespersen [6]. Nonlinear problems result in a nonlinear system of equations to solve, and the typical suggestion [1] is that this system of equations be solved by a quasi-Newton method, or by embedding if a good initial approximation is not known. Broyden's, Newton's, and Steffensen's methods for solving a nonlinear system of equations are local in nature and may fail if the starting point is not close to the solution. Embedding is an attempt to overcome this difficulty, but unfortunately embedding also fails frequently due to "singular points" [1]. There are conditions, somewhat restrictive though, which preclude the existence of "singular points" [7].

Accurate and fast numerical solution of two-point boundary-value ordinary differential equations is necessary in many important scientific and engineering applications, e.g., boundary-layer theory, the study of stellar interiors, control and optimization theory, and flow networks in biology. The sinc-Galerkin methods for ordinary differential equations have many salient features due to the properties of the basis functions and the manner in which the problem is discretized. Of


equal practical significance is the fact that the method's implementation requires no modification in the presence of singularities. The approximating discrete system depends only on parameters of the differential equation, regardless of whether it is singular or nonsingular.

In this paper, we consider nonlinear differential equations of order 2m, m = 1, 2, 3,

Lu ≡ u^{(2m)} + τ(x) u u' + κ(x) H(u) = f(x),   0 < x < 1,   (1.1)

subject to the boundary conditions

u^{(j)}(0) = 0,   u^{(j)}(1) = 0,   0 ≤ j ≤ m - 1,   (1.2)

where H(u) may be a polynomial, a rational function, or an exponential. Due to the large number of different possibilities, our work will focus mainly on the following forms of H(u):

• H(u) = u^n, n ≥ 1,
• H(u) = exp(±u), cos(u), sin(u), sinh(u), cosh(u), ...,
• H(u) = 1/(1 ± u)^n, 1/(1 ± u²)^n, 1/(u² ± 1)^n, n ≠ 0,

or any analytic function of u which has a power series expansion. Agarwal and Akrivis [8] have discussed in detail the existence and uniqueness of (1.1),(1.2). Throughout this paper, in keeping with Stenger [9], we shall assume that u(x), τ(x), κ(x), and f(x) are analytic with respect to x in a neighborhood of [0,1].

The sinc-Galerkin method utilizes a modified Galerkin scheme to discretize (1.1),(1.2). The basis elements used in this approach are the sinc functions composed with a suitable conformal map. A thorough description of the properties of the sinc function may be found in [9].

The outline of the paper is as follows. In Section 2, we review some of the main properties of the sinc-Galerkin method that are necessary for the formulation of the discrete system. In Section 3, we illustrate how the sinc-Galerkin method may be used to replace equation (1.1) by an explicit system of nonlinear algebraic equations that is solved by Newton's method. Section 4 presents appropriate techniques to treat nonhomogeneous boundary conditions. Finally, some numerical examples are presented in Section 5, where the scheme is tested on four nonlinear problems. The results demonstrate the reliability and efficiency of the algorithm developed.

2. SINC FUNCTION PRELIMINARIES

The sinc function is defined on the whole real line by

sinc(x) = sin(πx)/(πx),   -∞ < x < ∞.   (2.1)

For h > 0, the translated sinc functions with evenly spaced nodes are given as

S(k,h)(x) = sinc( (x - kh)/h ),   k = 0, ±1, ±2, ....   (2.2)

If f is defined on the real line, then for h > 0 the series

C(f,h)(x) = Σ_{k=-∞}^{∞} f(kh) sinc( (x - kh)/h )   (2.3)

is called the Whittaker cardinal expansion of f, whenever this series converges. The properties of (2.3) have been extensively studied; a comprehensive survey of these approximation properties is found in [10]. To construct approximations on the interval (0,1), which are used in this paper, consider the conformal map

φ(z) = ln( z/(1 - z) ).   (2.4)


The map φ carries the eye-shaped region

D_E = { z = x + iy : |arg( z/(1 - z) )| < d ≤ π/2 }   (2.5)

onto the infinite strip

D_d = { ζ = ξ + iη : |η| < d ≤ π/2 }.   (2.6)

The composition

S_j(x) = S(j,h) ∘ φ(x) = sinc( (φ(x) - jh)/h )   (2.7)

defines the basis elements for equation (1.1) on the interval (0,1). The mesh size h in D_d corresponds to the uniform grid {kh}, -∞ < k < ∞. The sinc grid points z_k ∈ (0,1) in D_E will be denoted by x_k because they are real. The inverse images of the equispaced grid points are

x_k = φ^{-1}(kh) = e^{kh}/(1 + e^{kh}),   k = 0, ±1, ±2, ....   (2.8)
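For concreteness, the following MATLAB fragment is a minimal sketch (not the authors' code, which is not published) of the map (2.4), the grid points (2.8), and the composed basis functions (2.7); the helper sincf is an assumed local implementation of sinc(t) = sin(πt)/(πt) with sinc(0) = 1.

```matlab
% Minimal sketch (illustrative only): conformal map, sinc grid points (2.8),
% and composed basis functions (2.7) on the interval (0,1).
phi    = @(x) log(x ./ (1 - x));                        % phi(z) = ln(z/(1-z))
phiinv = @(t) exp(t) ./ (1 + exp(t));                   % inverse map
sincf  = @(t) sin(pi*t) ./ (pi*t + (t==0)) + (t==0);    % sinc with sinc(0) = 1
Sj     = @(j, h, x) sincf((phi(x) - j*h) / h);          % S_j(x) of (2.7)

M = 8; N = 8; h = 0.5;
xk = phiinv((-M:N).' * h);                              % grid points x_k in (0,1)
disp(Sj(0, h, xk(M+1)))                                 % S_0(x_0) = 1 (delta property)
```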

DEFINITION 2.1. Let D_E be a simply connected domain in the complex plane C, and let ∂D_E denote the boundary of D_E. Let a, b (a ≠ b) be points on ∂D_E, and let φ be a conformal map of D_E onto D_d such that φ(a) = -∞ and φ(b) = ∞. If the inverse map of φ is denoted by ψ, define

Γ = { ψ(u) ∈ D_E : -∞ < u < ∞ }

and z_k = ψ(kh), k = 0, ±1, ±2, ....

DEFINITION 2.2. Let B(D_E) be the class of functions F that are analytic in D_E and satisfy

∫_{ψ(L+u)} |F(z) dz| → 0,   as u → ±∞,   (2.9)

where

L = { iη : |η| < d },   (2.10)

and, on the boundary of D_E (denoted ∂D_E), satisfy

T(F) = ∫_{∂D_E} |F(z) dz| < ∞.   (2.11)

The importance of the class B(D_E) with regard to numerical integration is summarized in the following theorems [10].

THEOREM 2.1. Let Γ = (0,1). If F ∈ B(D_E), then for h > 0 sufficiently small,

∫_Γ F(z) dz - h Σ_{j=-∞}^{∞} F(z_j)/φ'(z_j) = (i/2) ∫_{∂D_E} [ F(z) k(φ,h)(z) / sin(πφ(z)/h) ] dz ≡ I_F,   (2.12)

where

k(φ,h)(z)|_{z∈∂D_E} = exp[ (iπφ(z)/h) sgn(Im φ(z)) ],   (2.13)

so that |k(φ,h)(z)| = e^{-πd/h} on ∂D_E.

For the sinc-Galerkin method, the infinite quadrature rule must be truncated to a finite sum. The following theorem indicates the conditions under which exponential convergence results.


THEOREM 2.2. If there exist positive constants α, β, and C such that

| F(x)/φ'(x) | ≤ C exp(-α|φ(x)|) for x ∈ ψ((-∞, 0)),   | F(x)/φ'(x) | ≤ C exp(-β|φ(x)|) for x ∈ ψ((0, ∞)),   (2.14)

then the error bound for the quadrature rule (2.12) is

| ∫_Γ F(x) dx - h Σ_{j=-M}^{N} F(x_j)/φ'(x_j) | ≤ C ( e^{-αMh} + e^{-βNh} ) + |I_F|.   (2.15)

The infinite sum in (2.12) is truncated with the use of (2.14) to arrive at inequality (2.15). Making the selections

h = ( πd/(αM) )^{1/2}   (2.16)

and

N = [| (α/β) M |] + 1,   (2.17)

where [|x|] is the integer part of x, then

∫_Γ F(x) dx - h Σ_{j=-M}^{N} F(x_j)/φ'(x_j) = O( e^{-(πdαM)^{1/2}} ).   (2.18)

Theorems 2.1 and 2.2 are used to approximate the integrals that arise in the formulation of the discrete systems corresponding to equations (1.1),(1.2).
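To illustrate how the truncated rule and the selections (2.16),(2.17) are used in practice, here is a minimal MATLAB sketch; the integrand is an illustrative test function chosen by us (not taken from the paper), with exact integral 1/6 over (0,1).

```matlab
% Minimal sketch of the truncated sinc quadrature with selections (2.16),(2.17).
phi    = @(x) log(x ./ (1 - x));
dphi   = @(x) 1 ./ (x .* (1 - x));            % phi'(x)
phiinv = @(t) exp(t) ./ (1 + exp(t));

F = @(x) x .* (1 - x);                        % illustrative integrand, integral = 1/6
alpha = 0.5; beta = 0.5; d = pi/2; M = 32;
h = sqrt(pi * d / (alpha * M));               % selection (2.16)
N = floor(alpha * M / beta) + 1;              % selection (2.17)

xj = phiinv((-M:N).' * h);                    % sinc grid points
Q  = h * sum(F(xj) ./ dphi(xj));              % h * sum F(x_j)/phi'(x_j)
fprintf('quadrature error = %.2e\n', abs(Q - 1/6));
```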

3. SINC-GALERKIN METHOD

We start with the case H(u) = u^n, where n is a nonnegative integer, and assume an approximate solution of the form

u_Q(x) = Σ_{j=-M}^{N} c_j S_j(x),   Q = M + N + 1,   (3.1)

where S_j(x) is the function S(j,h) ∘ φ(x) for some fixed step size h. The unknown coefficients {c_j}_{j=-M}^{N} in (3.1) are determined by orthogonalizing the residual Lu_Q - f with respect to the functions {S_k}_{k=-M}^{N}. This yields the discrete system

⟨Lu_Q - f, S_k⟩ = 0,   (3.2)

for k = -M, -M+1, ..., N. The weighted inner product ⟨·,·⟩ is taken to be

⟨g, f⟩ = ∫_0^1 g(x) f(x) w(x) dx.   (3.3)

Here, w(x) plays the role of a weight function, which is chosen depending on the boundary conditions, the domain, and the differential equation. For the case of 2m-order boundary value problems, it is convenient to take

w(x) = 1/( φ'(x) )^m.   (3.4)

A complete discussion on the choice of the weight function can be found in [3,9]. The most direct development of the discrete system for equation (3.1) is obtained by substituting (3.1) into (1.1). The system can then be expressed in integral form via (3.3). This approach, however, obscures


the analysis which is necessary for applying sinc quadrature formulae to (3.2). An alternative approach is to analyze instead

⟨u^{(2m)}, S_k⟩ + ⟨τ u u', S_k⟩ + ⟨κ u^n, S_k⟩ = ⟨f, S_k⟩,   k = -M, ..., N.   (3.5)

The method of approximating the integrals in (3.5) begins by integrating by parts to transfer all derivatives from u to S_k. The approximation of the inner product on the right-hand side of (3.5) is

⟨f, S_k⟩ ≈ h f(x_k) w(x_k) / φ'(x_k).   (3.6)

We need the following two theorems.

THEOREM 3.1. The following relation holds:

⟨u^{(2m)}, S_k⟩ ≈ h Σ_{j=-M}^{N} [ u(x_j)/φ'(x_j) ] Σ_{i=0}^{2m} [ δ^{(i)}_{kj}/h^i ] g_{2m,i}(x_j),   (3.7)

for some functions g_{2m,i} to be determined.

PROOF. The inner product with the sinc basis element is given by

⟨u^{(2m)}, S_k⟩ = ∫_0^1 u^{(2m)}(x) S_k(x) w(x) dx.   (3.8)

This expression contains the 2m-th derivative of u, but the desired result is the variable u with no derivatives. Integrating by parts 2m times to remove the derivatives from the dependent variable u leads to the equality

⟨u^{(2m)}, S_k⟩ = B_{2m} + ∫_0^1 u(x) ( S_k(x) w(x) )^{(2m)} dx,   (3.9)

where the boundary term

B_{2m} = [ Σ_{i=0}^{2m-1} (-1)^i u^{(2m-1-i)}(x) (S_k w)^{(i)}(x) ]_{x=0}^{1} = 0.   (3.10)

Setting

d^n/dφ^n [ S_k ∘ φ(x) ] = S_k^{(n)}(x),   0 ≤ n ≤ 2m,

and noting that

d/dx [ S_k(x) ] = S_k^{(1)}(x) φ'(x),

we obtain, by expanding the derivatives under the integral in (3.9),

⟨u^{(2m)}, S_k⟩ = ∫_0^1 u(x) Σ_{i=0}^{2m} S_k^{(i)}(x) g_{2m,i}(x) dx,   (3.11)

where the functions g_{2m,i} are given as follows.

CASE m = 1.

g_{2,2}(x) = w(φ')²,   g_{2,1}(x) = wφ'' + 2w'φ',   g_{2,0}(x) = w''.   (3.12)

CASE m = 2.

g_{4,0}(x) = w^{(4)},   g_{4,4}(x) = w(φ')⁴,   (3.13)

g_{4,3}(x) = 6w(φ')²φ'' + 4w'(φ')³,   (3.14)

g_{4,2}(x) = 3w(φ'')² + 4wφ'φ''' + 12w'φ'φ'' + 6w''(φ')²,   g_{4,1}(x) = wφ^{(4)} + 4w'φ''' + 6w''φ'' + 4w'''φ'.   (3.15)

CASE m = 3.

g_{6,0} = w^{(6)},   g_{6,6} = w(φ')⁶,   (3.16)

g_{6,5} = 15w(φ')⁴φ'' + 6w'(φ')⁵,   g_{6,4} = 20wφ'''(φ')³ + 45w(φ')²(φ'')² + 60w'(φ')³φ'' + 15w''(φ')⁴,   (3.17)

g_{6,3} = 15w(φ'')³ + 15w(φ')²φ^{(4)} + 60wφ'φ''φ''' + 60w'(φ')²φ''' + 90w'φ'(φ'')² + 90w''(φ')²φ'' + 20w'''(φ')³,   (3.18)

g_{6,2} = 10w(φ''')² + 6wφ'φ^{(5)} + 15wφ''φ^{(4)} + 30w'φ'φ^{(4)} + 60w'φ''φ''' + 60w''φ'φ''' + 45w''(φ'')² + 60w'''φ'φ'' + 15w^{(4)}(φ')²,   (3.19)

g_{6,1} = wφ^{(6)} + 6w'φ^{(5)} + 15w''φ^{(4)} + 20w'''φ^{(3)} + 15w^{(4)}φ'' + 6w^{(5)}φ'.   (3.20)

Applying the sinc quadrature rule to the right-hand side of (3.11) and deleting the error terms yields (3.7).

THEOREM 3.2. The following relations hold:

⟨τ u u', S_k⟩ ≈ -(1/2) Σ_{j=-M}^{N} δ^{(1)}_{kj} (τw)(x_j) u²(x_j) - (h/2) (τw)'(x_k) u²(x_k) / φ'(x_k),   (3.21)

⟨κ u^n, S_k⟩ ≈ h κ(x_k) w(x_k) u^n(x_k) / φ'(x_k).   (3.22)

PROOF. For τ(x)uu', the inner product with the sinc basis elements is given by

⟨τ u u', S_k⟩ = ∫_0^1 τ(x) u(x) u'(x) S_k(x) w(x) dx.   (3.23)

Integrating by parts to remove the first derivative from the dependent variable u leads to the equality

⟨τ u u', S_k⟩ = B_1 - (1/2) ∫_0^1 u²(x) ( S_k τ w )'(x) dx,   (3.24)

where the boundary term

B_1 = (1/2) [ u²(x) S_k(x) τ(x) w(x) ]_{x=0}^{1} = 0,   (3.25)

and expanding the derivative under the integral in (3.24) yields

⟨τ u u', S_k⟩ = -(1/2) ∫_0^1 u²(x) [ S_k^{(1)}(x) φ'(x) (τw)(x) + S_k^{(0)}(x) (τw)'(x) ] dx.   (3.26)

Applying the sinc quadrature rule to the right-hand side of (3.26) and deleting the error term yields (3.21). For κ(x)u^n, the inner product with the sinc basis elements can be evaluated directly by application of (2.18); deleting the error term yields (3.22).

Replacing each term of (3.5) with the approximations defined in (3.7), (3.21), (3.22), and (3.6), respectively, replacing u(x_j) by c_j, and dividing by h, we obtain the following theorem.

THEOREM 3.3. If the assumed approximate solution of the boundary-value problem (1.1),(1.2) is (3.1), then the discrete sinc-Galerkin system for the determination of the unknown coefficients {c_j}_{j=-M}^{N} is given, for k = -M, ..., N, by

Σ_{j=-M}^{N} Σ_{i=0}^{2m} (1/h^i) δ^{(i)}_{kj} [ g_{2m,i}(x_j)/φ'(x_j) ] c_j - (1/2) [ (1/h) Σ_{j=-M}^{N} δ^{(1)}_{kj} (τw)(x_j) c_j² + (τw)'(x_k) c_k² / φ'(x_k) ] + κ(x_k) w(x_k) c_k^n / φ'(x_k) = f(x_k) w(x_k) / φ'(x_k).   (3.27)


The following notation will be necessary for writing down the system. Let D(g) be the Q × Q diagonal matrix

D(g) = diag( g(x_{-M}), g(x_{-M+1}), ..., g(x_N) ).   (3.28)

We need the following two lemmas.

LEMMA 3.1. (See [5].) Let φ be the conformal one-to-one mapping of the simply connected domain D_E onto D_d, given by (2.4). Then,

δ^{(0)}_{jk} = [ S(j,h) ∘ φ(x) ]|_{x=x_k} = { 1, j = k;  0, j ≠ k },   (3.29)

δ^{(1)}_{jk} = h (d/dφ)[ S(j,h) ∘ φ(x) ]|_{x=x_k} = { 0, j = k;  (-1)^{k-j}/(k-j), j ≠ k },   (3.30)

δ^{(2)}_{jk} = h² (d²/dφ²)[ S(j,h) ∘ φ(x) ]|_{x=x_k} = { -π²/3, j = k;  -2(-1)^{k-j}/(k-j)², j ≠ k },   (3.31)

δ^{(3)}_{jk} = h³ (d³/dφ³)[ S(j,h) ∘ φ(x) ]|_{x=x_k} = { 0, j = k;  (-1)^{k-j}[6 - π²(k-j)²]/(k-j)³, j ≠ k },   (3.32)

and

δ^{(4)}_{jk} = h⁴ (d⁴/dφ⁴)[ S(j,h) ∘ φ(x) ]|_{x=x_k} = { π⁴/5, j = k;  -4(-1)^{k-j}[6 - π²(k-j)²]/(k-j)⁴, j ≠ k }.   (3.33)

With some computations, one can prove the following lemma.

LEMMA 3.2. Let φ be the conformal one-to-one mapping of the simply connected domain D_E onto D_d, given by (2.4). Then,

δ^{(5)}_{jk} = h⁵ (d⁵/dφ⁵)[ S(j,h) ∘ φ(x) ]|_{x=x_k} = { 0, j = k;  σ_{jk}, j ≠ k },   (3.34)

where σ_{jk} = ( (-1)^{k-j}/(k-j)⁵ ) [ 120 - 20π²(k-j)² + π⁴(k-j)⁴ ], and

δ^{(6)}_{jk} = h⁶ (d⁶/dφ⁶)[ S(j,h) ∘ φ(x) ]|_{x=x_k} = { -π⁶/7, j = k;  μ_{jk}, j ≠ k },   (3.35)

where μ_{jk} = ( -6(-1)^{k-j}/(k-j)⁶ ) [ 120 - 20π²(k-j)² + π⁴(k-j)⁴ ].

Define the Q × Q matrices I^{(p)} (see [11]), for 0 ≤ p ≤ 2m, by

I^{(p)} = [ δ^{(p)}_{jk} ],   j, k = -M, ..., N.   (3.36)
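Since the entries of Lemma 3.1 depend only on k - j, the I^{(p)} are Toeplitz matrices. The following MATLAB fragment is a minimal sketch (not the authors' code) of I^{(0)}, I^{(1)}, I^{(2)}; the higher-order matrices follow the same pattern from (3.32)-(3.35).

```matlab
% Minimal sketch: the Toeplitz matrices I^(p) of (3.36) for p = 0,1,2,
% built from the entries delta^(p)_{jk} of Lemma 3.1 (rows indexed by j).
M = 10; N = 10; Q = M + N + 1;
r = (1:Q-1).';                                % off-diagonal offsets |k - j|

I0  = eye(Q);                                 % delta^(0)_{jk}
col = [0; -(-1).^r ./ r];                     % entries for k - j = -1, -2, ...
I1  = toeplitz(col, -col);                    % skew-symmetric, (-1)^(k-j)/(k-j)
I2  = toeplitz([-pi^2/3; -2*(-1).^r ./ r.^2]);% symmetric, -2(-1)^(k-j)/(k-j)^2
```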


Let c be the Q-vector with jth component c_j, let c_{[2]} and c_{[n]} be the Q-vectors with jth components c_j² and c_j^n, respectively, and let 1 be the Q-vector each of whose components is 1. In this notation, the system in (3.27) takes the matrix form

A c + B c_{[2]} + E c_{[n]} = Θ,   (3.37)

where

B = -(1/2) [ (1/h) I^{(1)} D(τw) + D( (τw)'/φ' ) ],   (3.38)

E = D( κw/φ' ),   (3.39)

Θ = D( fw/φ' ) 1,   (3.40)

and

A = Σ_{j=0}^{2m} (1/h^j) I^{(j)} D( g_{2m,j}/φ' ).   (3.41)
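As a concrete illustration of (3.41), the following MATLAB fragment is a minimal sketch (not the authors' code) of the assembly of A for a second-order problem (m = 1), under the assumptions w = 1/φ' from (3.4) and the map (2.4); the g_{2,i} are those of (3.12).

```matlab
% Minimal sketch (second-order case, m = 1): assembling A of (3.41).
phiinv = @(t) exp(t)./(1 + exp(t));
dphi   = @(x) 1./(x.*(1 - x));                   % phi'
d2phi  = @(x) (2*x - 1)./(x.*(1 - x)).^2;        % phi''
w   = @(x) x.*(1 - x);  dw = @(x) 1 - 2*x;  d2w = @(x) -2*ones(size(x));

g22 = @(x) w(x).*dphi(x).^2;                     % g_{2,2} = w (phi')^2
g21 = @(x) w(x).*d2phi(x) + 2*dw(x).*dphi(x);    % g_{2,1} = w phi'' + 2 w' phi'
g20 = @(x) d2w(x);                               % g_{2,0} = w''

alpha = 0.5; d = pi/2; M = 16; N = 16; Q = M + N + 1;
h  = sqrt(pi*d/(alpha*M));
xk = phiinv((-M:N).'*h);
D  = @(g) diag(g(xk)./dphi(xk));                 % D(g/phi') of (3.28)

r  = (1:Q-1).';                                  % Toeplitz matrices of (3.36)
I0 = eye(Q);
I1 = toeplitz([0; -(-1).^r./r], [0; (-1).^r./r]);
I2 = toeplitz([-pi^2/3; -2*(-1).^r./r.^2]);

A = I2*D(g22)/h^2 + I1*D(g21)/h + I0*D(g20);     % A = sum_j I^(j) D(g_{2,j}/phi')/h^j
```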

Now, we have a nonlinear system of Q = M + N + 1 equations in the Q unknown coefficients {c_j}_{j=-M}^{N}. We can obtain the coefficients of the approximate solution by solving this nonlinear system by Newton's method [12-17]. The solution c = (c_{-M}, ..., c_N)^T gives the coefficients of the approximate sinc-Galerkin solution u_Q(x) of u(x).

Newton's Method

To solve the system of equations (3.37), we write it in the form

F(c) = ( F_{-M}(c_{-M}, c_{-M+1}, ..., c_N), F_{-M+1}(c_{-M}, c_{-M+1}, ..., c_N), ..., F_N(c_{-M}, c_{-M+1}, ..., c_N) )^T = 0,   (3.42)

where c is the column vector of independent variables and F is the column vector of the functions F_j, with F_j(c) = F_j(c_{-M}, c_{-M+1}, ..., c_N), -M ≤ j ≤ N. The number of functions that are set equal to zero is equal to the number of independent variables. A very good method for solving equation (3.42) is Newton's method. Let c^{(i)} be the guess at the solution at iteration i, and let F^{(i)} denote the value of F at the ith iteration. Assuming that ||F^{(i)}|| is not sufficiently small, we seek an update vector Δc^{(i)},

c^{(i+1)} = c^{(i)} + Δc^{(i)},   i.e.,   c_j^{(i+1)} = c_j^{(i)} + Δc_j^{(i)},   j = -M, ..., N,   (3.43)

such that F(c^{(i+1)}) = 0. Using the multidimensional extension of Taylor's theorem to approximate the variation of F(c) in the neighborhood of c^{(i)} gives

F( c^{(i)} + Δc^{(i)} ) = F( c^{(i)} ) + F'( c^{(i)} ) Δc^{(i)} + O( ||Δc^{(i)}||² ),   (3.44)


where F'(c^{(i)}) is the Jacobian of the system of equations,

F'(c) ≡ J(c) = [ ∂F_j/∂c_k ],   j, k = -M, ..., N,

i.e.,

        [ ∂F_{-M}/∂c_{-M}     ∂F_{-M}/∂c_{-M+1}     ...   ∂F_{-M}/∂c_N   ]
J(c) =  [ ∂F_{-M+1}/∂c_{-M}   ∂F_{-M+1}/∂c_{-M+1}   ...   ∂F_{-M+1}/∂c_N ]
        [        ...                  ...           ...         ...      ]
        [ ∂F_N/∂c_{-M}        ∂F_N/∂c_{-M+1}        ...   ∂F_N/∂c_N     ].   (3.45)

Neglecting higher-order terms and denoting by J^{(i)} the Jacobian evaluated at c^{(i)}, we can rearrange equation (3.44) as

F( c^{(i)} + Δc^{(i)} ) ≈ F( c^{(i)} ) + J^{(i)} Δc^{(i)}.   (3.46)

The goal of the Newton iteration is to make F( c^{(i)} + Δc^{(i)} ) = 0, so setting that term to zero in the preceding equation gives

J^{(i)} Δc^{(i)} = -F( c^{(i)} ).   (3.47)

Equation (3.47) is a system of Q linear equations in the Q unknowns Δc^{(i)}. Each Newton iteration step involves evaluation of the vector F^{(i)}, the matrix J^{(i)}, and the solution of equation (3.47). A common numerical practice is to stop the Newton iteration whenever the distance between two iterates is less than a given tolerance, i.e., when ||c^{(i+1)} - c^{(i)}|| ≤ ε.

Algorithm.

• Initialize c = c^{(0)}.
• For i = 0, 1, 2, ...:
• Compute F^{(i)} = A c^{(i)} + B c_{[2]}^{(i)} + E c_{[n]}^{(i)} - Θ.
• If ||F^{(i)}|| is small enough, stop.
• Compute J^{(i)}.
• Solve J^{(i)} Δc^{(i)} = -F( c^{(i)} ).
• Set c^{(i+1)} = c^{(i)} + Δc^{(i)}.
• End.
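A minimal MATLAB sketch of the iteration just outlined is given below; the residual F here is a small placeholder system chosen by us (not the sinc-Galerkin system (3.37)), and the Jacobian is approximated by forward differences rather than assembled analytically.

```matlab
% Minimal sketch of the Newton loop above, with a forward-difference Jacobian.
F   = @(c) [exp(c(1)) - 2;  c(1) + c(2) - 1];   % illustrative 2x2 placeholder
c   = zeros(2, 1);                              % zero vector as initial guess
tol = 1e-8;
for i = 0:50
    Fc = F(c);
    if norm(Fc) < tol, break, end               % stop when ||F^(i)|| is small
    Q  = numel(c);  J = zeros(Q);  dh = 1e-7;
    for p = 1:Q                                 % forward-difference Jacobian J^(i)
        e = zeros(Q, 1);  e(p) = dh;
        J(:, p) = (F(c + e) - Fc) / dh;
    end
    dc = -J \ Fc;                               % solve J^(i) * dc = -F(c^(i))
    c  = c + dc;
    if norm(dc) < tol, break, end               % ||c^(i+1) - c^(i)|| < tolerance
end
% converges to c = [log(2); 1 - log(2)]
```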

Also, some other well-known techniques that can be used to solve equation (3.37) are quasi-Newton and secant methods; for more detail, see [18-21].

4. TREATMENT OF THE BOUNDARY CONDITIONS

In the previous section, the development of the sinc-Galerkin technique for homogeneous boundary conditions provided a practical approach, since the sinc functions composed with the conformal mapping, S(j,h) ∘ φ, are zero at the endpoints of the interval. If the boundary conditions are nonhomogeneous, then these conditions need to be converted to homogeneous ones via interpolation by a known function. For example, consider

u^{(2m)} + τ(x) u u' + κ(x) u^n = f(x),   0 < x < 1,   (4.1)

subject to the boundary conditions

u^{(i)}(0) = R_i,   u^{(i)}(1) = T_i,   0 ≤ i ≤ m - 1.   (4.2)

The nonhomogeneous boundary conditions in (4.2) can be transformed to homogeneous boundary conditions by the change of dependent variable

W(x) = u(x) - A(x),   (4.3)


where A(x) is the interpolating polynomial that satisfies A^{(i)}(0) = R_i and A^{(i)}(1) = T_i, 0 ≤ i ≤ m - 1,

A(x) = Σ_{i=0}^{2m-1} μ_i x^i.   (4.4)

It is easy to see the following.

CASE m = 1.  μ_0 = R_0,  μ_1 = T_0 - R_0.

CASE m = 2.  μ_0 = R_0,  μ_1 = R_1,  μ_2 = 3T_0 - T_1 - 2R_1 - 3R_0,  μ_3 = T_1 - 2T_0 + R_1 + 2R_0.

CASE m = 3.  μ_0 = R_0,  μ_1 = R_1,  μ_2 = R_2/2,
μ_3 = (1/2)[ (20T_0 - 8T_1 + T_2) - (20R_0 + 12R_1 + 3R_2) ],
μ_4 = (-15T_0 + 7T_1 - T_2) + (15R_0 + 8R_1 + (3/2)R_2),
μ_5 = (1/2)[ (12T_0 - 6T_1 + T_2) - (12R_0 + 6R_1 + R_2) ].

The new problem with homogeneous boundary conditions is then

W^{(2m)} + τ(x)[ W W' + A W' + A' W ] + κ(x) Σ_{k=0}^{n-1} C(n,k) A^k W^{n-k} = f̃(x),   0 < x < 1,   (4.5)

subject to the boundary conditions

W^{(i)}(0) = 0,   W^{(i)}(1) = 0,   0 ≤ i ≤ m - 1,   (4.6)

where

f̃(x) = f(x) - τ(x) A(x) A'(x) - κ(x) A^n(x).   (4.7)

Now, apply the standard sinc-Galerkin method to (4.5). We define an approximate solution of (4.5) via the formula

W_Q(x) = Σ_{j=-M}^{N} c_j S_j(x),   Q = M + N + 1.   (4.8)

Then, the approximate solution of (4.1) is

u_Q(x) = Σ_{j=-M}^{N} c_j S_j(x) + A(x).   (4.9)
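As a small MATLAB sketch of this transformation for the fourth-order case (m = 2), the cubic A(x) of (4.4) can be formed from the Case m = 2 coefficients and checked against the prescribed data; the boundary values below are those of Example 1 in the next section, and the check simply confirms that W = u - A satisfies (4.6).

```matlab
% Minimal sketch: the interpolating cubic A(x) of (4.4) for m = 2,
% built from the Case m = 2 coefficients; data taken from Example 1.
R0 = 0;  R1 = 1;  T0 = log(2);  T1 = 0.5;       % u(0), u'(0), u(1), u'(1)

mu = [R0;
      R1;
      3*T0 - T1 - 2*R1 - 3*R0;
      T1 - 2*T0 + R1 + 2*R0];

A  = @(x) mu(1) + mu(2)*x + mu(3)*x.^2 + mu(4)*x.^3;
dA = @(x) mu(2) + 2*mu(3)*x + 3*mu(4)*x.^2;

% A reproduces the boundary data, so W = u - A satisfies (4.6):
fprintf('%g %g %g %g\n', A(0)-R0, dA(0)-R1, A(1)-T0, dA(1)-T1)   % all zero
```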

5. NUMERICAL RESULTS

In this section, four nonlinear problems are solved using the sinc-Galerkin method discussed above. For comparison purposes, the problems have homogeneous and nonhomogeneous boundary conditions and known solutions. As will be demonstrated by the numerical results, boundary singularities have no adverse effect on the performance of the method. All the experiments were performed in MATLAB. In our tests, the zero vector is the initial guess and the stopping criterion is ||c^{(j+1)} - c^{(j)}|| < 10^{-8}. In all the examples we take d = π/2. Once M is chosen, the step size and the remaining summation limit are determined by

h = ( πd/(αM) )^{1/2},   N = [| (α/β) M |] + 1,

where [|x|] is the integer part of x. Note that if α/β is an integer, it suffices to choose N = (α/β)M.

We use the absolute relative error, defined as

absolute relative error = | u_exact - u_sinc-Galerkin | / | u_exact |.   (5.1)

For the sake of comparison only, we first discuss an example that was investigated by Chawla and Katti [22], Agarwal and Akrivis [8], and Twizell and Tirmizi [23].

EXAMPLE 1. (See [8,22,23].) Consider the boundary value problem

u^{(4)} = 6 exp(-4u) - 12(1 + x)^{-4},   0 < x < 1,   (5.2)

subject to the boundary conditions

u(0) = 0,   u(1) = ln 2,   u'(0) = 1,   u'(1) = 0.5,   (5.3)

which has the exact solution u(x) = ln(1 + x).

The parameters are selected so that α = β = 1/2 and M = 60. The exact and approximate solutions and the absolute relative error are displayed in Table 1. In Table 2, we compare the results obtained by the sinc-Galerkin method with those obtained by Chawla and Katti [22], using a fourth-order finite difference method, by Agarwal and Akrivis [8], using a finite difference method, and by Twizell and Tirmizi [23], using a fourth-order multiderivative method.

EXAMPLE 2. Consider the boundary value problem

u'' + u u' + u³ = 1/x + x ln x (1 + ln x) + (x ln x)³,   0 < x < 1,   (5.4)

subject to the boundary conditions

u(0) = 0,   u(1) = 0,   (5.5)

which has the exact solution u(x) = x ln x.

Table 1.

x        Exact Solution     Sinc-Galerkin      Relative Error (1.0e-10)
0.0      0.0                0.0                -
0.08065  0.077568262040     0.077568262046     0.06
0.16488  0.152623517296     0.152623517297     0.1
0.22851  0.205803507218     0.205803507212     0.04
0.39997  0.336452906454     0.336452906455     0.01
0.5      0.405465108108     0.405465108103     0.04
0.69235  0.526121481267     0.526121481263     0.03
0.77148  0.571819991855     0.571819991858     0.06
0.88369  0.633234913798     0.633234913793     0.04
0.94474  0.665133248137     0.665133248135     0.02
1.0      0.693147180559     0.693147180559     0.0

Table 2. Error norms.

Sinc-Galerkin    Chawla and Katti [22]    Agarwal and Akrivis [8]    Twizell and Tirmizi [23]
0.5E-8           2.9E-7                   5.4E-8                     0.26E-7

Table 3.

x        Exact Solution    Sinc-Galerkin    Absolute Relative Error (1.0e-06)
0.0      0.0               0.0              -
0.07701  -0.19744378       -0.19744377      0.06
0.12058  -0.25508370       -0.25508365      0.20
0.27022  -0.35359087       -0.35359081      0.15
0.37830  -0.36773296       -0.36773296      0.02
0.5      -0.34657359       -0.34657353      0.14
0.62169  -0.29549755       -0.29549756      0.02
0.72977  -0.22989603       -0.22989600      0.16
0.87941  -0.11300194       -0.11300192      0.20
0.97002  -0.02951702       -0.02951703      0.23
1.0      0.0               0.00             -

In this problem, the function f(x) has a singularity at x = 0. The parameters M = 40 and α = β = 1/2 are used. The exact and approximate solutions and the absolute relative error are displayed in Table 3.

EXAMPLE 3. Consider the boundary value problem

u^{(4)} + x²/(1 + u²) = -72(1 - 5x + 5x²) + x²/(1 + (x - x²)⁶),   0 < x < 1,   (5.6)

subject to the boundary conditions

u(0) = 0,   u(1) = 0,   u'(0) = 0,   u'(1) = 0,   (5.7)

which has the exact solution u(x) = x³(1 - x)³.

By writing

1/(1 + u²) = 1 - u² + u⁴ - u⁶ + u⁸ - ⋯ ,

equation (5.6) becomes

u^{(4)} + x² ( 1 - u² + u⁴ - u⁶ + u⁸ - ⋯ ) = -72(1 - 5x + 5x²) + x²/(1 + (x - x²)⁶),   0 < x < 1.   (5.8)

When equation (5.8) is solved by the sinc-Galerkin method, we get a discrete system of the form

A c - E c_{[2]} + E c_{[4]} - E c_{[6]} + ⋯ = Θ,   (5.9)

where A, E, and Θ are defined by equations (3.39)-(3.41), but f changes to

f(x) = -x² - 72(1 - 5x + 5x²) + x²/(1 + (x - x²)⁶).
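For illustration, a minimal MATLAB sketch of the residual for the truncated system (5.9) is shown below; it assumes the matrices A, E and the vector Theta have already been assembled as in Section 3 (these are assumed workspace variables, not code from the paper), and it keeps the series through the u⁸ term, as in (5.8). The resulting F can be passed to the Newton loop sketched earlier.

```matlab
% Minimal sketch: residual of the truncated system (5.9), assuming A, E,
% and Theta are already assembled as in Section 3 (series kept through u^8).
F = @(c) A*c - E*(c.^2) + E*(c.^4) - E*(c.^6) + E*(c.^8) - Theta;
```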

The parameters selected are α = β = 1/2 and M = 30. The exact and approximate solutions and the absolute relative error are displayed in Table 4.

In the following example, we have nonhomogeneous boundary conditions.

EXAMPLE 4. Consider the boundary value problem

u^{(6)} + e^{-x} u² = e^{-x} + e^{-3x},   0 < x < 1,   (5.10)


Table 4.

x       Exact Solution    Sinc-Galerkin    Relative Error (1.0e-03)
0.0     0.0               0.0              -
0.0537  0.0001316         0.0001316        0.0
0.0915  0.0005760         0.0005759        0.21
0.2410  0.0061203         0.0061173        0.48
0.3604  0.0122489         0.0122509        0.20
0.5     0.0156250         0.0156245        0.02
0.7589  0.0061203         0.0061222        0.30
0.9084  0.0005760         0.0005759        0.21
0.9462  0.0001316         0.0001316        0.0
0.9822  0.0000052         0.0000052        -
1.0     0.0               0.00             0.0

Table 5.

x       Exact Solution    Sinc-Galerkin    Relative Error (1.0e-03)
0.0     1.0               1.0              0.0
0.0089  0.99113           0.99113          0.0
0.0414  0.95942           0.95942          0.0
0.1721  0.84189           0.84189          0.0
0.3131  0.73113           0.73114          0.01
0.5     0.60653           0.60655          0.04
0.6868  0.50316           0.50320          0.08
0.8278  0.43696           0.43701          0.09
0.9134  0.40114           0.40118          0.1
0.9585  0.38343           0.38347          0.1
1.0     0.36787           0.36787          0.0

subject to the boundary conditions

u(0) = 1,   u(1) = 1/e,   u'(0) = -1,   u'(1) = -1/e,   u''(0) = 1,   u''(1) = 1/e,

which has the exact solution u(x) = e^{-x}. The parameters selected are α = β = 1/2 and M = 16. The exact and approximate solutions and the absolute relative error are displayed in Table 5.

6. CONCLUSION

The results of the previous section indicate that our procedure can be used to obtain accurate numerical solutions of nonlinear boundary-value problems with very little computational effort. The accuracy of our method depends on the magnitude of M. The results of Example 2 clearly indicate that our method is accurate even when singularities occur at the boundaries.

REFERENCES

1. G. Dahlquist, A. Bjorck and N. Anderson, Numerical Methods, Prentice-Hall, Englewood Cliffs, NJ, (1974).
2. C. Anne and K. Bowers, The Schwarz alternating sinc domain decomposition method, Appl. Numer. Math. 25, 461-483, (1997).
3. J. Lund and K. Bowers, Sinc Methods for Quadrature and Differential Equations, SIAM, Philadelphia, (1992).
4. K. Michael, Fast iterative methods for symmetric sinc-Galerkin systems, IMA J. Numer. Anal. 19, 357-373, (1999).


5. C. Ralph and K. Bowers, The sinc-Galerkin method for fourth-order differential equations, SIAM J. Numer. Anal. 28, 760-788, (1991).
6. D. Jespersen, Ritz-Galerkin methods for singular boundary value problems, Math. Research Center Tech. Rep. 1972, Univ. of Wisconsin, Madison, (1977).
7. J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (1970).
8. R. Agarwal and C. Akrivis, Boundary value problems occurring in plate deflection theory, J. Comp. Appl. Math. 8, 145-154, (1982).
9. F. Stenger, Numerical Methods Based on Sinc and Analytic Functions, Springer, New York, (1993).
10. F. Stenger, Matrices of sinc methods, J. Comput. Appl. Math. 86, 297-310, (1997).
11. U. Grenander and G. Szegő, Toeplitz Forms and Their Applications, Second edition, Chelsea, Orlando, (1985).
12. G. Adomian, A new approach to nonlinear partial differential equations, J. Math. Anal. Appl. 102, 420-434, (1984).
13. C.G. Broyden, A class of methods for solving nonlinear simultaneous equations, Math. Comp. 19, 577-593, (1965).
14. C.G. Broyden, The convergence of an algorithm for solving sparse nonlinear systems, Math. Comp. 25, 285-294, (1971).
15. R.L. Burden and J.D. Faires, Numerical Analysis, PWS, Boston, (1993).
16. M.E. Herniter, Programming in MATLAB, Bill Stenquist, (2001).
17. C.T. Kelley, Iterative Methods for Linear and Nonlinear Equations, SIAM, (1995).
18. L. Bergamaschi et al., Inexact quasi-Newton methods for sparse systems of nonlinear equations, FGCS 18, 41-53, (2001).
19. O. Ibidapo-Obe et al., A new method for the numerical solution of simultaneous nonlinear equations, Appl. Math. Comp. 125, 133-140, (2002).
20. G. Li, The secant/finite difference algorithm for solving sparse nonlinear systems of equations, SIAM J. Numer. Anal. 25, 1181-1196, (1988).
21. J.M. Martinez, Practical quasi-Newton methods for solving nonlinear systems, J. Comp. Appl. Math. 124, 97-121, (2000).
22. M. Chawla and C. Katti, Finite difference methods for two-point boundary value problems involving high order differential equations, BIT 19, 27-33, (1979).
23. E. Twizell and S. Tirmizi, Multiderivative methods for nonlinear beam problems, Comm. Appl. Numer. Meth. 4, 43-50, (1988).