
Numerical Differentiation With Wavelets Methods

Posted on: 2009-08-23 | Degree: Master | Type: Thesis
Country: China | Candidate: Y Y Ma | Full Text: PDF
GTID: 2120360242981255 | Subject: Computational Mathematics
Abstract/Summary:
Numerical differentiation is a classical ill-posed problem in the sense of Hadamard: small perturbations in the measured data may cause large errors in the numerical results. Stable numerical differentiation methods are therefore important both for scientific research and for practical applications, and a variety of methods have been proposed for the problem. In this paper we recast the differentiation problem as a Fredholm integral equation of the first kind and then solve it by wavelet methods developed for such equations.

Consider the wavelet space

    V_J = span({φ} ∪ {ψ_{j,k} | j = 0, 1, …, J-1, k ∈ Δ_j}),

where j ≥ 0, k ∈ Δ_j, Δ_j is a finite set of translation indices at level j, φ is the scaling function, and ψ is the mother wavelet, satisfying the usual orthogonality relations. Define the orthogonal projection operator P_J : L²[0,1] → V_J by

    P_J f = ⟨f, φ⟩φ + Σ_{j=0}^{J-1} Σ_{k∈Δ_j} ⟨f, ψ_{j,k}⟩ ψ_{j,k},

where {φ} ∪ {ψ_{j,k} | j = 0, 1, …, J-1, k ∈ Δ_j} is the orthonormal wavelet basis of V_J, denoted by Φ_J.

In this paper we discuss two methods for solving the Fredholm integral equation of the first kind

    Lx = y,    (1)

where L and its adjoint L* are bounded, linear, injective, compact operators, and x̄ denotes the solution of equation (1). The perturbed equation is Lx = y^δ, where δ is the perturbation level and ‖y - y^δ‖ ≤ δ.

The first method is based on the wavelet decomposition of the reconstructed function. Define the projection operator Q_J : L²[0,1] → LV_J. Projecting equation (1) onto LV_J gives the numerical solution x̂_J of (1) in V_J. In particular, taking Φ_J as the basis of V_J and {ω} ∪ {ω_{j,k} | j = 0, …, J-1, k ∈ Δ_j} as the basis of LV_J, this method is equivalent to the least squares Galerkin method for the integral equation: determine x_n ∈ V_n such that

    ⟨Lx_n, Lz_n⟩ = ⟨y, Lz_n⟩    for all z_n ∈ V_n.

The second method is based on the wavelet decomposition of the right-hand side. Define the projection operator T_J : L²[0,1] → L*V_J. Projecting the data accordingly gives the numerical solution x̂_J of (1) in V_J. In particular, taking Φ_J as the basis of V_J and {v} ∪ {v_{j,k} | j = 0, …, J-1, k ∈ Δ_j} as the basis of L*V_J, this method is equivalent to the dual least squares Galerkin method: determine u_n ∈ V_n such that

    ⟨LL*u_n, z_n⟩ = ⟨y, z_n⟩    for all z_n ∈ V_n;

then x_n = L*u_n is the dual least squares solution.

Computing the numerical solution thus reduces to solving a linear algebraic system

    G_n β_n = b_n.    (7)

In the least squares Galerkin method the entries are (G_n)_{il} = ⟨Lφ_l, Lφ_i⟩ and (b_n)_i = ⟨y, Lφ_i⟩, where the φ_i run through the basis Φ_n; in the dual least squares Galerkin method they are (G_n)_{il} = ⟨L*φ_l, L*φ_i⟩ and (b_n)_i = ⟨y, φ_i⟩.

We also discuss the existence, uniqueness, and convergence of the numerical solutions of these methods under wavelet bases, and propose a sufficient condition for the convergence of the least squares Galerkin method.

Theorem 1. If a condition involving λ_min(n) is satisfied, the least squares Galerkin method is convergent, where λ_min(n) denotes the smallest singular value of the matrix G_n.
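As a concrete illustration of how the least squares Galerkin system (7) can be assembled in practice, the following Python/NumPy sketch constructs the Haar scaling function and wavelets ψ_{j,k} on a uniform grid over [0,1] and forms G_n and b_n by midpoint quadrature. The Haar family, the grid size, the quadrature rule, and the generic callables L_apply and y_delta are assumptions made only for this example; the thesis works with general orthogonal wavelet bases.

```python
import numpy as np

def haar_basis(J, m):
    """Haar scaling function and wavelets psi_{j,k}, j = 0..J-1, k = 0..2^j-1,
    sampled at the midpoints of a uniform grid with m cells on [0,1]."""
    t = (np.arange(m) + 0.5) / m            # midpoint sample points
    basis = [np.ones(m)]                    # scaling function phi = 1 on [0,1]
    for j in range(J):
        for k in range(2 ** j):
            psi = np.zeros(m)
            a, b, c = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
            psi[(t >= a) & (t < b)] = 2 ** (j / 2)
            psi[(t >= b) & (t < c)] = -2 ** (j / 2)
            basis.append(psi)
    return t, np.array(basis)               # shape (n, m), n = 2^J functions

def least_squares_galerkin_system(L_apply, y_delta, J, m=1024):
    """Assemble G_n, b_n of eq. (7) for the least squares Galerkin method:
    G[i, l] = <L phi_l, L phi_i>, b[i] = <y_delta, L phi_i> (midpoint rule).
    L_apply(v, t) must return the samples of L v on the grid t."""
    t, Phi = haar_basis(J, m)
    h = 1.0 / m
    LPhi = np.array([L_apply(phi, t) for phi in Phi])   # images of the basis functions
    G = LPhi @ LPhi.T * h
    b = LPhi @ y_delta(t) * h
    return t, Phi, G, b
```

The dual least squares system is assembled in the same way, with L_apply replaced by a numerical realization of the adjoint L*.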
Because the problem is ill-posed, the condition number of the coefficient matrix grows as the subspace dimension increases, so we give regularization methods for solving equation (7), together with error estimates.

The first is classical Tikhonov regularization: we solve

    (αI + G_n^* G_n) β_n^α = G_n^* b_n,    (8)

where α is the regularization parameter.

Theorem 2. Let β_n be the solution of (7), β_n^α the solution of (8), and β_n^{α,δ} the solution of the perturbed equation (αI + G_n^* G_n) β_n^{α,δ} = G_n^* b_n^δ, where ‖b_n - b_n^δ‖_{l²} ≤ δ, λ_1 > 0 denotes the smallest singular value of the matrix G_n, and M = ‖b_n‖_{l²}. Choosing α(n,δ) ∼ λ_1²(δ/M)^{2/3}, the stated estimate holds, and from it the full error estimate follows.

Corollary 1. Let x ∈ L²[0,1] be the solution of Lx = y. Then the corresponding error estimate holds for

    x_n^{α,δ} = β_{n,0}^{α,δ} φ + Σ_{j=0}^{n-1} Σ_{k∈Δ_j} β_{n,j,k}^{α,δ} ψ_{j,k},

the least squares solution obtained from (8).

Corollary 2. Let x ∈ L²[0,1] be the solution of Lx = y with x ∈ R(L*), i.e. there exists u ∈ L²[0,1] such that x = L*u. Then the corresponding error estimate holds for

    u_n^{α,δ} = β_{n,0}^{α,δ} φ + Σ_{j=0}^{n-1} Σ_{k∈Δ_j} β_{n,j,k}^{α,δ} ψ_{j,k},    x_n^{α,δ} = L* u_n^{α,δ},

the dual least squares solution obtained from (8).

The second regularization method solves

    (αI + G_n) β_n^α = b_n,    (9)

where α is the regularization parameter.

Theorem 3. Let β_n be the solution of (7), β_n^α the solution of (9), and β_n^{α,δ} the solution of the perturbed equation (αI + G_n) β_n^{α,δ} = b_n^δ, where ‖b_n - b_n^δ‖_{l²} ≤ δ, λ_1 > 0 denotes the smallest singular value of G_n, and M = ‖b_n‖_{l²}. Choosing α(n,δ) ∼ λ_1 Mδ, the stated estimate holds, and again the full error estimate follows.

Corollary 3. Let x ∈ L²[0,1] be the solution of Lx = y. Then the corresponding error estimate holds for

    x_n^{α,δ} = β_{n,0}^{α,δ} φ + Σ_{j=0}^{n-1} Σ_{k∈Δ_j} β_{n,j,k}^{α,δ} ψ_{j,k},

the least squares solution obtained from (9).

Corollary 4. Let x ∈ L²[0,1] be the solution of Lx = y with x ∈ R(L*), i.e. there exists u ∈ L²[0,1] such that x = L*u. Then the corresponding error estimate holds for

    u_n^{α,δ} = β_{n,0}^{α,δ} φ + Σ_{j=0}^{n-1} Σ_{k∈Δ_j} β_{n,j,k}^{α,δ} ψ_{j,k},    x_n^{α,δ} = L* u_n^{α,δ},

the dual least squares solution obtained from (9).

Under orthogonal wavelets there is another interpretation of (9): first transform the Fredholm integral equation of the first kind into a Fredholm integral equation of the second kind, and then solve the second-kind equation by a Galerkin method.

For the least squares Galerkin method, we first transform (1) into the Fredholm integral equation of the second kind

    αx + L*Lx = L*y,    (10)

where α is the regularization parameter. Then, taking X_n = V_n and Y_n = V_n, we solve (10) by the Galerkin method: determine x_n ∈ V_n such that ⟨αx_n + L*Lx_n, z_n⟩ = ⟨L*y, z_n⟩ for all z_n ∈ V_n. Choosing the z_n in Φ_n recovers (9).

Let x ∈ R(L*) be the solution of (1). For the dual least squares Galerkin method, we first transform (1) into the Fredholm integral equation of the second kind

    αu + LL*u = y,    (11)

where α is the regularization parameter. Then, taking X_n = V_n and Y_n = V_n, we solve (11) by the Galerkin method: determine u_n ∈ V_n such that ⟨αu_n + LL*u_n, z_n⟩ = ⟨y, z_n⟩ for all z_n ∈ V_n. Choosing the z_n in Φ_n again recovers (9), and x_n = L*u_n is the dual least squares solution.

The full error estimate for this second interpretation is also given, where x̂_n^{α,δ} denotes the numerical solution obtained in this way.
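Once G_n and the perturbed right-hand side b_n^δ are available, for instance from a routine like the one sketched above, the regularized systems (8) and (9) are ordinary finite-dimensional solves. Below is a minimal sketch, assuming real NumPy arrays; the function names are illustrative, and since b_n itself is unknown in practice, M = ‖b_n‖ is approximated here by ‖b_n^δ‖. The Theorem 3 parameter choice can be coded analogously.

```python
import numpy as np

def tikhonov_solve(G, b_delta, alpha):
    """Classical Tikhonov regularization, eq. (8): (alpha I + G^T G) beta = G^T b_delta."""
    n = G.shape[0]
    return np.linalg.solve(alpha * np.eye(n) + G.T @ G, G.T @ b_delta)

def solve_eq9(G, b_delta, alpha):
    """Second regularization method, eq. (9): (alpha I + G) beta = b_delta."""
    n = G.shape[0]
    return np.linalg.solve(alpha * np.eye(n) + G, b_delta)

def alpha_theorem2(G, b_delta, delta):
    """A-priori parameter choice in the spirit of Theorem 2:
    alpha ~ lambda_1^2 (delta / M)^(2/3), with lambda_1 the smallest
    singular value of G and M approximated by ||b_delta||."""
    lam1 = np.linalg.svd(G, compute_uv=False).min()
    M = np.linalg.norm(b_delta)
    return lam1 ** 2 * (delta / M) ** (2.0 / 3.0)
```

Given the coefficient vector beta, the reconstruction on the sample grid is Phi.T @ beta, with Phi the sampled basis from the assembly step.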
These error estimates show that the two methods in this paper are more flexible in the choice of the regularization parameter.

To apply them to numerical differentiation, we only need to replace L by the operator A, A : L²[0,1] → L²[0,1], associated with first-order differentiation, or by the operator B, B : L²[0,1] → L²[0,1], associated with second-order differentiation; the first-order and second-order numerical differentiation problems can then be solved by the methods above.

Error estimates for the two methods applied to first-order numerical differentiation are given under the Haar wavelets. Let λ_min(n) denote the smallest singular value of the matrix G_n in the least squares Galerkin method and λ*_min(n) the smallest singular value of G_n in the dual least squares Galerkin method. We have the following error estimates.

Theorem 4. Let x ∈ Lip_C γ be the solution of Lx = y and take the subspace V_N. If the required condition on N holds and α(N,δ) ∼ (λ_min(N))²(δ/M)^{2/3}, then the corresponding error estimate holds for

    x_N^{α,δ} = β_{N,0}^{α,δ} φ + Σ_{j=0}^{N-1} Σ_{k∈Δ_j} β_{N,j,k}^{α,δ} ψ_{j,k},

the least squares solution obtained from (αI + G_N^* G_N) β_N^{α,δ} = G_N^* b_N^δ.

Theorem 5. Let x ∈ Lip_C γ be the solution of Lx = y and take the subspace V_N. If the required condition on N holds and α(N,δ) ∼ λ_min(N) Mδ, then the corresponding error estimate holds for

    x_N^{α,δ} = β_{N,0}^{α,δ} φ + Σ_{j=0}^{N-1} Σ_{k∈Δ_j} β_{N,j,k}^{α,δ} ψ_{j,k},

the least squares solution obtained from (αI + G_N) β_N^{α,δ} = b_N^δ.

Theorem 6. Let x ∈ R(A*) and x ∈ Lip_C γ be the solution of Lx = y and take the subspace V_N. If the required condition on N holds and α(N,δ) ∼ (λ*_min(N))²(δ/M)^{2/3}, then the corresponding error estimate holds for

    u_N^{α,δ} = β_{N,0}^{α,δ} φ + Σ_{j=0}^{N-1} Σ_{k∈Δ_j} β_{N,j,k}^{α,δ} ψ_{j,k},    x_N^{α,δ} = A* u_N^{α,δ},

the dual least squares solution obtained from (αI + G_N^* G_N) β_N^{α,δ} = G_N^* b_N^δ.

Theorem 7. Let x ∈ R(A*) and x ∈ Lip_C γ be the solution of Lx = y and take the subspace V_N. If the required condition on N holds and α(N,δ) ∼ λ*_min(N) Mδ, then the corresponding error estimate holds for

    u_N^{α,δ} = β_{N,0}^{α,δ} φ + Σ_{j=0}^{N-1} Σ_{k∈Δ_j} β_{N,j,k}^{α,δ} ψ_{j,k},    x_N^{α,δ} = A* u_N^{α,δ},

the dual least squares solution obtained from (αI + G_N) β_N^{α,δ} = b_N^δ.

Similar error estimates hold for second-order numerical differentiation.

Finally, we present numerical examples and applications of both first-order and second-order numerical differentiation with Haar wavelets. The numerical results show that our algorithms are simple and stable.
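To make the first-order application concrete, the self-contained sketch below runs one such numerical experiment. It assumes the operator A is the Volterra integration operator (Ax)(t) = ∫₀ᵗ x(s) ds, so that Ax = y recovers x = y′ for data with y(0) = 0; the exact definitions of A and B used in the thesis are not reproduced in this abstract, and the test function, noise model, Haar level J, and the value of α are example choices only.

```python
import numpy as np

m, J, delta = 2048, 5, 1e-3                 # grid size, Haar level, noise level (example values)
t = (np.arange(m) + 0.5) / m
h = 1.0 / m

# Haar basis: scaling function plus wavelets psi_{j,k}, sampled at the grid midpoints.
basis = [np.ones(m)]
for j in range(J):
    for k in range(2 ** j):
        psi = np.zeros(m)
        left, middle, right = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
        psi[(t >= left) & (t < middle)] = 2 ** (j / 2)
        psi[(t >= middle) & (t < right)] = -2 ** (j / 2)
        basis.append(psi)
Phi = np.array(basis)                       # 2^J basis functions

# Assumed operator A: (A x)(t) = integral_0^t x(s) ds, midpoint rule.
def A_apply(x):
    return np.cumsum(x) * h

# Test problem: y(t) = sin(2 pi t), exact derivative x(t) = 2 pi cos(2 pi t).
y = np.sin(2 * np.pi * t)
x_true = 2 * np.pi * np.cos(2 * np.pi * t)
y_delta = y + delta * np.random.default_rng(0).standard_normal(m)   # noisy data

# Least squares Galerkin system (7) for A x = y_delta.
APhi = np.array([A_apply(phi) for phi in Phi])
G = APhi @ APhi.T * h
b = APhi @ y_delta * h

# Regularized solve via the second method (9): (alpha I + G) beta = b.
alpha = 1e-4                                # example value; the thesis uses alpha(N, delta)
beta = np.linalg.solve(alpha * np.eye(G.shape[0]) + G, b)
x_rec = Phi.T @ beta                        # reconstructed derivative on the grid

print("relative L2 error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```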
Keywords/Search Tags: Differentiation