A Kind of Fast Iterative Methods with the Application Based on Diagonal Matrix Splitting

Xu Qiuyan

Journal of Ningxia University (Natural Science Edition, Chinese and English), 2026, Vol. 47, Issue 1: 1-13. DOI: 10.20176/j.cnki.nxdz.20251215
Mathematical and Physical Sciences


A Kind of Fast Iterative Methods with the Application Based on Diagonal Matrix Splitting



Abstract

The fast solution of linear systems has long been one of the hot topics in scientific computing. A class of diagonal matrix splitting iteration methods is proposed, which differs from the classical matrix splitting methods. Taking the decomposition of the diagonal elements of the coefficient matrix as the key point, some new preconditioners are constructed. Taking a tridiagonal matrix as the coefficient matrix, the convergence domains and optimal relaxation factors of the new methods are analyzed theoretically. The presented iteration methods are applied to solve linear algebraic equations as well as 2D and 3D diffusion problems under fully implicit discretization. The numerical results match the theoretical analysis and show that the number of iterations is reduced greatly while the accuracy is improved. The presented iteration methods dramatically outperform some classical iteration methods.



Key words

iteration / matrix splitting / diffusion equation / convergence / optimal relaxation factor

Cite this article

XU Qiuyan. A kind of fast iterative methods with the application based on diagonal matrix splitting[J]. Journal of Ningxia University (Natural Science Edition), 2026, 47(1): 1-13. DOI: 10.20176/j.cnki.nxdz.20251215.


1 Introduction

In the field of scientific and engineering computing, the solution of the algebraic equations obtained by discretizing mathematical models has always been a hot topic, especially with the progress of computers. Efficient numerical methods have always been a goal pursued by researchers, and iterative methods have become the mainstream solvers. References [1-2] are two classic books, which systematically introduced various iterative methods for solving large-scale linear algebraic systems. From the Gauss-Seidel iteration [3] to the Richardson method, the Chebyshev semi-iteration method and the successive over-relaxation (SOR) method between the 1950s and 1970s, iteration methods have a long history. Hageman et al. [4] provided many applicable iterative methods. Axelsson [5] introduced iterative solution methods comprehensively, including preconditioning techniques. Saad [6] gave iterative methods for sparse linear systems. Bai et al. [7-8] provided Hermitian/skew-Hermitian splitting (HSS) methods for non-Hermitian positive definite linear systems, and accelerated HSS methods for saddle-point problems. Reference [9] provided a comprehensive survey of preconditioning techniques for large linear systems. Reference [10] gave the application of the HSS method to the Sylvester equation. Li et al. [11] proposed modified Gauss-Seidel type and Jacobi type iterative methods for Z-matrices. Interpolating preconditioners for the solution of sequences of linear systems were provided in reference [12]. A generalized successive over-relaxation iterative method for a class of complex symmetric linear systems was given in reference [13]. A new matrix splitting iteration method for generalized absolute value equations was provided in reference [14], which proposed a relaxed Newton-type matrix splitting iteration. Reference [15] introduced a two-parameter modified matrix splitting iteration method, established the asymptotic convergence theory, and demonstrated its convergence under certain conditions. Reference [16] constructed a paradigm of two-step matrix splitting iteration methods and analyzed its convergence for nonsingular and positive-definite matrices.

Under certain conditions, the classical Jacobi and Gauss-Seidel iteration methods can solve many practical problems. In particular, the SOR method [17-19], which adds an optimal relaxation factor, greatly accelerates the iteration. However, we find that in the Jacobi and Gauss-Seidel iteration methods the most primitive information is not fully utilized: both use only the previous iterate of the other points in each update. Xu et al. [20-21] developed a new kind of iteration methods based on the discretization schemes with some iterative operators, which are more efficient than the Jacobi, GS and SOR methods. To make more use of the initial information in constructing iterations, we split the main diagonal elements, that is, the diagonal matrix constructed from the main diagonal elements, to obtain a new kind of iterative methods. Therefore, we propose diagonal matrix splitting iteration methods; the classical Jacobi, Gauss-Seidel and SOR iteration methods are in fact special cases of the presented methods. We introduce the concept of the diagonal splitter and provide its range to ensure the convergence of the generalized methods. Finally, the new methods are applied to solve linear algebraic equations and 2D and 3D diffusion problems. The experimental results show that the generalized iterative methods greatly improve computational efficiency, far surpassing the classical iterative methods.

The remainder of the paper is organized as follows: In Sec. 2, we present new generalized iterative methods for solving linear algebraic equations; Sec. 3 provides the range of the diagonal splitter and the optimal relaxation factors of the presented iteration methods for tridiagonal linear equations; the new generalized iteration methods are applied to solve linear equations and 2D and 3D diffusion problems in Sec. 4; finally, Sec. 5 gives the conclusions.

2 Generalized iteration methods

In this section, we provide a class of new iterative methods based on a splitting strategy, which is in fact an extension of the classical Jacobi, Gauss-Seidel and SOR methods. So we call the new methods generalized iteration methods.

2.1 Generalized Jacobi, GS, SOR methods

Consider the following linear equations

$$\begin{cases}
a_{11}x_1+a_{12}x_2+a_{13}x_3+\cdots+a_{1,l-1}x_{l-1}=b_1,\\
a_{21}x_1+a_{22}x_2+a_{23}x_3+\cdots+a_{2,l-1}x_{l-1}=b_2,\\
\qquad\vdots\\
a_{l-1,1}x_1+a_{l-1,2}x_2+a_{l-1,3}x_3+\cdots+a_{l-1,l-1}x_{l-1}=b_{l-1},
\end{cases}\tag{1}$$

which can often be written as

$$Ax=b,\tag{2}$$

where $A=(a_{ij})_{(l-1)\times(l-1)}$, $x=(x_j)_{(l-1)\times 1}$, $b=(b_j)_{(l-1)\times 1}$, $i,j=1,2,\dots,l-1$. There are many methods to solve Eq. (1) or (2), such as the classical Jacobi, Gauss-Seidel (GS) and SOR iterative methods. These iterative methods are often introduced through matrix splitting: the coefficient matrix is divided as $A=D-L-U$, where $D=\mathrm{diag}(A)$, and

$$L=\begin{pmatrix}0&&&\\-a_{21}&0&&\\\vdots&\ddots&\ddots&\\-a_{l-1,1}&\cdots&-a_{l-1,l-2}&0\end{pmatrix}_{(l-1)\times(l-1)},\quad
U=\begin{pmatrix}0&-a_{12}&\cdots&-a_{1,l-1}\\&0&\ddots&\vdots\\&&\ddots&-a_{l-2,l-1}\\&&&0\end{pmatrix}_{(l-1)\times(l-1)}.$$

We will provide a new splitting iterative method, which can accelerate the iteration dramatically. In fact, the classical Jacobi, GS and SOR iterative methods can be extended.

First, we divide the elements of diagonal matrix D into two parts,

$$D=B+C,$$

namely, $a_{ii}=b_{ii}+c_{ii}$, $i=1,2,\dots,l-1$, where

$$B=\begin{pmatrix}b_{11}&&&\\&b_{22}&&\\&&\ddots&\\&&&b_{l-1,l-1}\end{pmatrix}_{(l-1)\times(l-1)},\quad
C=\begin{pmatrix}c_{11}&&&\\&c_{22}&&\\&&\ddots&\\&&&c_{l-1,l-1}\end{pmatrix}_{(l-1)\times(l-1)}.$$

The matrix $B$ and its elements $b_{ii}$, $i=1,2,\dots,l-1$, are called the diagonal retention matrix and the diagonal retention elements; the matrix $C$ and its elements $c_{ii}$, $i=1,2,\dots,l-1$, are called the diagonal splitting matrix and the diagonal splitters.

Second, we construct the following iterative methods:

$$\text{Ⅰ:}\quad\begin{cases}
b_{11}x_1^{(k+1)}+c_{11}x_1^{(k)}+a_{12}x_2^{(k)}+a_{13}x_3^{(k)}+\cdots+a_{1,l-1}x_{l-1}^{(k)}=b_1,\\
a_{21}x_1^{(k)}+b_{22}x_2^{(k+1)}+c_{22}x_2^{(k)}+a_{23}x_3^{(k)}+\cdots+a_{2,l-1}x_{l-1}^{(k)}=b_2,\\
\qquad\vdots\\
a_{l-1,1}x_1^{(k)}+a_{l-1,2}x_2^{(k)}+a_{l-1,3}x_3^{(k)}+\cdots+b_{l-1,l-1}x_{l-1}^{(k+1)}+c_{l-1,l-1}x_{l-1}^{(k)}=b_{l-1},
\end{cases}\tag{6}$$

which is called the generalized Jacobi (G-Jacobi) iterative method. When $C=0$, namely $c_{ii}=0$, $i=1,2,\dots,l-1$, Method Ⅰ is just the classical Jacobi method.

From Eq. (6) we can see that once the vector $x^{(k)}$ is given, the new vector $x^{(k+1)}$ can be computed. The G-Jacobi iteration method has the same inherent parallelism as the Jacobi method. Eq. (6) can be written in element form to show exactly how the approximate solution vector is updated, namely,

$$x_i^{(k+1)}=\frac{1}{b_{ii}}\Bigl(b_i-\sum_{j\neq i}a_{ij}x_j^{(k)}-c_{ii}x_i^{(k)}\Bigr),\quad i=1,2,\dots,l-1.$$
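As a concrete illustration, the element form above can be sketched in a few lines of NumPy (a minimal sketch, assuming a constant splitter $c_{ii}=\gamma$ as used later in Section 3; the name `g_jacobi` and the test problem are our own choices):

```python
import numpy as np

def g_jacobi(A, b, gamma, tol=1e-10, max_iter=10000):
    """G-Jacobi sweep: b_ii * x_i^(k+1) = b_i - sum_{j!=i} a_ij x_j^(k) - c_ii x_i^(k),
    with the constant splitting b_ii = a_ii - gamma, c_ii = gamma."""
    n = len(b)
    x = np.zeros(n)
    diag = np.diag(A)                 # diagonal elements a_ii
    R = A - np.diag(diag)             # off-diagonal part of A
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x - gamma * x) / (diag - gamma)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# a small instance of the tridiagonal example used later: A = tridiag(-1, 3, -1)
n = 10
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = A @ np.ones(n)                    # exact solution is the all-ones vector
x, iters = g_jacobi(A, b, gamma=0.3)
```

With `gamma=0` the sweep reduces to the classical Jacobi method.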

Next, we can construct another new iterative method,

$$\text{Ⅱ:}\quad\begin{cases}
b_{11}x_1^{(k+1)}+c_{11}x_1^{(k)}+a_{12}x_2^{(k)}+a_{13}x_3^{(k)}+\cdots+a_{1,l-1}x_{l-1}^{(k)}=b_1,\\
a_{21}x_1^{(k+1)}+b_{22}x_2^{(k+1)}+c_{22}x_2^{(k)}+a_{23}x_3^{(k)}+\cdots+a_{2,l-1}x_{l-1}^{(k)}=b_2,\\
\qquad\vdots\\
a_{l-1,1}x_1^{(k+1)}+a_{l-1,2}x_2^{(k+1)}+a_{l-1,3}x_3^{(k+1)}+\cdots+b_{l-1,l-1}x_{l-1}^{(k+1)}+c_{l-1,l-1}x_{l-1}^{(k)}=b_{l-1},
\end{cases}$$

which is called the generalized Gauss-Seidel (G-GS) iterative method. Similarly, when $C=0$, namely $c_{ii}=0$, $i=1,2,\dots,l-1$, Method Ⅱ is the classical Gauss-Seidel method. The G-GS iterative method can be written in the form

$$x_i^{(k+1)}=\frac{1}{b_{ii}}\Bigl(b_i-\sum_{j<i}a_{ij}x_j^{(k+1)}-\sum_{j>i}a_{ij}x_j^{(k)}-c_{ii}x_i^{(k)}\Bigr),\quad i=1,2,\dots,l-1.\tag{7}$$

It can be seen that the latest approximations to the components of $x$ are used in the update of subsequent components: the old components of $x^{(k)}$ are overwritten with those of $x^{(k+1)}$ as soon as they are computed.
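A corresponding sketch of the G-GS sweep (again assuming a constant splitter $\gamma$, a convention borrowed from Section 3; `g_gauss_seidel` is our own name):

```python
import numpy as np

def g_gauss_seidel(A, b, gamma, tol=1e-10, max_iter=10000):
    """G-GS sweep: components x_j^(k+1), j < i, are used as soon as they are
    available; a share c_ii = gamma of the diagonal stays on the old iterate."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s - gamma * x_old[i]) / (A[i, i] - gamma)
        if np.max(np.abs(x - x_old)) < tol:
            return x, k
    return x, max_iter

n = 10
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = A @ np.ones(n)                    # exact solution is the all-ones vector
x, iters = g_gauss_seidel(A, b, gamma=0.3)
```

Setting `gamma=0` recovers the classical Gauss-Seidel sweep.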

An improvement of the G-GS method can be obtained by introducing a relaxation parameter $\omega$. We add the relaxation factor $\omega$ to construct the G-SOR iterative method as follows:

$$x^{(k+1)}=x^{(k)}+\omega\,\Delta x^{(k)},$$

where $\Delta x^{(k)}=x^{(k+1)}-x^{(k)}$, and $x^{(k+1)}$ is obtained by Eq. (7).

The G-SOR method is defined by

$$x_i^{(k+1)}=\frac{\omega}{b_{ii}}\Bigl(b_i-\sum_{j<i}a_{ij}x_j^{(k+1)}-\sum_{j>i}a_{ij}x_j^{(k)}-c_{ii}x_i^{(k)}\Bigr)+(1-\omega)x_i^{(k)},\quad i=1,2,\dots,l-1.\tag{8}$$

When $\omega=1$, the G-SOR method reduces to the G-GS method; and when $c_{ii}=0$, $i=1,2,\dots,l-1$, the G-SOR method is just the classical SOR method.
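The G-SOR update above is simply a weighted average of the G-GS value and the old iterate, which a short sketch makes explicit (`g_sor` is our own name, and the parameter values below are only illustrative):

```python
import numpy as np

def g_sor(A, b, gamma, omega, tol=1e-10, max_iter=10000):
    """G-SOR sweep: omega-weighted average of the G-GS update and the old iterate."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            gs = (b[i] - s - gamma * x_old[i]) / (A[i, i] - gamma)  # G-GS value
            x[i] = omega * gs + (1 - omega) * x_old[i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, k
    return x, max_iter

n = 10
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = A @ np.ones(n)                    # exact solution is the all-ones vector
x, iters = g_sor(A, b, gamma=0.3, omega=1.1)
```

With `omega=1.0` this is exactly the G-GS sweep, and with `gamma=0.0` it is the classical SOR method.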

For the above three generalized iterative methods, the iteration matrices are

$$\begin{aligned}
M_{\mathrm{I}}&=B^{-1}(L+U-C)=(D-C)^{-1}(L+U-C),\\
M_{\mathrm{II}}&=(B-L)^{-1}(U-C)=(D-C-L)^{-1}(U-C),\\
M_{\mathrm{III}}&=(B-\omega L)^{-1}\bigl[(1-\omega)B+\omega(U-C)\bigr].
\end{aligned}$$

Therefore, the classical Jacobi, GS and SOR iteration methods are special cases of the presented generalized iteration methods, and the theoretical analyses can be carried over.
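These three iteration matrices are easy to form explicitly for a small example, which lets one check the convergence condition $\rho(M)<1$ numerically (a sketch; the test matrix tridiag(−1, 3, −1) and the parameter values are our own choices):

```python
import numpy as np

def iteration_matrices(A, gamma, omega):
    """Build M_I, M_II, M_III from the splitting A = D - L - U, D = B + C, C = gamma*I."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)               # strictly lower part of A, negated
    U = -np.triu(A, 1)                # strictly upper part of A, negated
    C = gamma * np.eye(len(A))
    B = D - C
    M1 = np.linalg.solve(B, L + U - C)
    M2 = np.linalg.solve(B - L, U - C)
    M3 = np.linalg.solve(B - omega * L, (1 - omega) * B + omega * (U - C))
    return M1, M2, M3

def rho(M):
    """Spectral radius."""
    return max(abs(np.linalg.eigvals(M)))

n = 10
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M1, M2, M3 = iteration_matrices(A, gamma=0.3, omega=1.1)
```

With $\gamma=0$ these reduce to the classical Jacobi, GS and SOR iteration matrices, and with $\omega=1$, $M_{\mathrm{III}}$ coincides with $M_{\mathrm{II}}$.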

2.2 Convergence analysis

Theorem 1 [2] Given the initial vector $x^{(0)}$, the G-Jacobi iterative method is convergent if and only if the spectral radius of the iteration matrix $M_{\mathrm{I}}$ satisfies $\rho(M_{\mathrm{I}})<1$.

Lemma 1 [2] If the matrix $A$ is strictly diagonally dominant or irreducibly diagonally dominant, then the matrix $A$ is nonsingular.

Definition 1 [2] If the matrix $A$ is a strictly diagonally dominant or irreducibly diagonally dominant symmetric matrix and its diagonal elements are all positive, then $A$ is called a positive definite matrix.

Theorem 2 If the coefficient matrix $A$ in Eq. (2) is symmetric and the diagonal retention matrix $B>0$, then the G-Jacobi iterative method is convergent if and only if $A$ and $2B-A$ are both positive definite.

Proof Because the matrix $A$ is symmetric and $B>0$, we get

$$M_{\mathrm{I}}=B^{-1}(L+U-C)=I-B^{-1}A=B^{-1/2}\bigl(I-B^{-1/2}AB^{-1/2}\bigr)B^{1/2}.$$

It shows that $M_{\mathrm{I}}$ is similar to $I-B^{-1/2}AB^{-1/2}$, and both of them have the same eigenvalues. Moreover, $I-B^{-1/2}AB^{-1/2}$ is symmetric, so it has real eigenvalues.

(ⅰ) Necessity. If the iterative method is convergent, then $\rho(M_{\mathrm{I}})<1$, so all of the eigenvalues of the matrix $I-B^{-1/2}AB^{-1/2}$ belong to $(-1,1)$, which means

$$\bigl|\lambda\bigl(I-B^{-1/2}AB^{-1/2}\bigr)\bigr|<1,$$

then

$$0<\lambda\bigl(B^{-1/2}AB^{-1/2}\bigr)<2.$$

So the matrix $B^{-1/2}AB^{-1/2}$ is positive definite, which indicates that $A$ is positive definite. Furthermore, due to

$$B^{-1/2}(2B-A)B^{-1/2}=2I-B^{-1/2}AB^{-1/2},$$

then the matrix 2 B - A is positive definite.

(ⅱ) Sufficiency. Since $B^{1/2}(I-M_{\mathrm{I}})B^{-1/2}=B^{-1/2}AB^{-1/2}$ is positive definite, the eigenvalues of $I-M_{\mathrm{I}}$ are all greater than 0, and hence

$$\lambda(M_{\mathrm{I}})<1.\tag{9}$$

On the other hand, due to $A=B(I-M_{\mathrm{I}})$, we have $B^{-1/2}(2B-A)B^{-1/2}=B^{1/2}(I+M_{\mathrm{I}})B^{-1/2}$.

Since $2B-A$ is positive definite, $\lambda(I+M_{\mathrm{I}})>0$, and we obtain

$$\lambda(M_{\mathrm{I}})>-1.\tag{10}$$

From (9) and (10), $\rho(M_{\mathrm{I}})<1$, so the G-Jacobi iterative method is convergent.

Theorem 3 If the coefficient matrix $A$ in Eq. (2) is strictly diagonally dominant or irreducibly diagonally dominant, and the diagonal retention matrix $B=\mathrm{diag}(b_{ii})_{(l-1)\times(l-1)}$ satisfies $b_{ii}>0$, $i=1,2,\dots,l-1$, then the G-Jacobi iterative method is convergent.

Proof Due to $b_{ii}>0$, $i=1,2,\dots,l-1$, the diagonal retention matrix $B$ is invertible. Assume $\tilde\lambda$ is an eigenvalue of the iteration matrix $M_{\mathrm{I}}$ satisfying $|\tilde\lambda|\ge 1$. Then the matrix $\tilde\lambda B-L-U+C$ is also strictly diagonally dominant or irreducibly diagonally dominant because of the strict or irreducible diagonal dominance of the matrix $A$. From Lemma 1, we have

$$\det(\tilde\lambda I-M_{\mathrm{I}})=\det\bigl[\tilde\lambda I-B^{-1}(L+U-C)\bigr]=\det(B^{-1})\det(\tilde\lambda B-L-U+C)\neq 0,$$

which is a contradiction. Therefore, the spectral radius $\rho(M_{\mathrm{I}})<1$. By Theorem 1, the G-Jacobi iterative method is convergent.

Theorem 4 If the coefficient matrix $A$ in Eq. (2) is a real symmetric positive definite matrix and the diagonal retention matrix $B>0$, then the G-SOR iterative method is convergent when

$$0<\omega<\frac{2\|B\|_2}{\|B\|_2+\|C\|_2}.$$

Here $\|\cdot\|_2$ means the 2-norm. ① If $\|B\|_2\le\|C\|_2$, then $0<\omega<1$. ② If $\|B\|_2>\|C\|_2$, then $\frac{2\|B\|_2}{\|B\|_2+\|C\|_2}>1$, which means that for $\omega=1$ the G-GS iterative method is convergent. ③ If $\|C\|_2=0$, the G-SOR method is just the SOR method, which is convergent when $0<\omega<2$.

Proof Assume $\lambda$ is any eigenvalue of the iteration matrix $M_{\mathrm{III}}$ and $\nu$ is the corresponding eigenvector; then we have

$$(B-\omega L)^{-1}\bigl[(1-\omega)B+\omega(U-C)\bigr]\nu=\lambda\nu.\tag{11}$$

Let $\nu^{\mathrm H}$ be the conjugate transpose of $\nu$. Denote $\nu^{\mathrm H}B\nu=\delta_1$, $\nu^{\mathrm H}C\nu=\delta_2$, $\nu^{\mathrm H}U\nu=\alpha+\mathrm i\beta$, $\mathrm i=\sqrt{-1}$, where $\delta_1,\delta_2,\alpha,\beta$ are all real numbers. Since $L=U^{\mathrm T}$, we get

$$\nu^{\mathrm H}L\nu=\alpha-\mathrm i\beta.\tag{12}$$

Substituting the above relations into (11), we obtain

$$\lambda=\frac{\delta_1-\omega(\delta_1+\delta_2-\alpha)+\mathrm i\omega\beta}{\delta_1-\omega\alpha+\mathrm i\omega\beta}.\tag{13}$$

Computing the squared modulus of both sides of Eq. (13), we get

$$|\lambda|^2=\frac{[\delta_1-\omega(\delta_1+\delta_2-\alpha)]^2+\omega^2\beta^2}{(\delta_1-\omega\alpha)^2+\omega^2\beta^2}.$$

Note that

$$d=[\delta_1-\omega(\delta_1+\delta_2-\alpha)]^2-(\delta_1-\omega\alpha)^2=\omega(2\delta_1-\omega\delta_1-\omega\delta_2)(2\alpha-\delta_1-\delta_2),$$

and since $A$ is real symmetric positive definite and $B>0$, we have

$$\delta_1>0,\quad \delta_1+\delta_2>0,\quad \nu^{\mathrm H}A\nu=\delta_1+\delta_2-2\alpha>0,$$

so we obtain $d<0$, namely $|\lambda|<1$, when

$$0<\omega<\frac{2\delta_1}{\delta_1+\delta_2}.$$

Theorem 5 If the coefficient matrix $A$ in Eq. (2) is strictly diagonally dominant or irreducibly diagonally dominant and satisfies $|b_{ii}|>\sum_{j\neq i}|a_{ij}|+|c_{ii}|$, then the G-SOR iterative method is convergent for $\omega\in(0,1]$.

Proof Assume the iteration matrix $M_{\mathrm{III}}=(B-\omega L)^{-1}\bigl[(1-\omega)B+\omega(U-C)\bigr]$ has an eigenvalue $\tilde\lambda$ satisfying $|\tilde\lambda|\ge 1$; then

$$\det(\tilde\lambda I-M_{\mathrm{III}})=\det\bigl[(B-\omega L)^{-1}\bigr]\det\bigl[\tilde\lambda(B-\omega L)-(1-\omega)B-\omega(U-C)\bigr]=\det\bigl[(B-\omega L)^{-1}\bigr]\det\bigl[(\tilde\lambda-1+\omega)B-\omega(\tilde\lambda L+U-C)\bigr].$$

Since $\omega\in(0,1]$ and $|\tilde\lambda|\ge 1$, we have $|\tilde\lambda-1+\omega|\ge\omega|\tilde\lambda|$, and hence

$$|\tilde\lambda-1+\omega|\,|b_{ii}|>\omega|\tilde\lambda|\Bigl(\sum_{j<i}|a_{ij}|+\sum_{j>i}|a_{ij}|+|c_{ii}|\Bigr)\ge\omega\Bigl(|\tilde\lambda|\sum_{j<i}|a_{ij}|+\sum_{j>i}|a_{ij}|+|c_{ii}|\Bigr),$$

so the matrix $(\tilde\lambda-1+\omega)B-\omega(\tilde\lambda L+U-C)$ is strictly diagonally dominant or irreducibly diagonally dominant.

From Lemma 1, we have

$$\det\bigl[(\tilde\lambda-1+\omega)B-\omega(\tilde\lambda L+U-C)\bigr]\neq 0,$$

which is a contradiction. Therefore, the spectral radius of the G-SOR iteration matrix satisfies $\rho(M_{\mathrm{III}})<1$, so the G-SOR iterative method is convergent for $\omega\in(0,1]$.

3 Tridiagonal linear equations

In this section, we take tridiagonal linear equations for example to provide the eigenvalues, eigenvectors, the range of the diagonal splitter, and the optimal relaxation factors of the presented new iterative methods.

Consider the linear equations (2) with the coefficient matrix

$$\bar A=\begin{pmatrix}d&\beta&&\\\beta&d&\ddots&\\&\ddots&\ddots&\beta\\&&\beta&d\end{pmatrix}_{(l-1)\times(l-1)},\tag{14}$$

we split the diagonal matrix of $\bar A$ into two parts, namely $\mathrm{diag}(\bar A)=\bar B+\bar C$, where

$$\bar B=\begin{pmatrix}d-\gamma&&\\&\ddots&\\&&d-\gamma\end{pmatrix}_{(l-1)\times(l-1)},\quad
\bar C=\begin{pmatrix}\gamma&&\\&\ddots&\\&&\gamma\end{pmatrix}_{(l-1)\times(l-1)},\tag{15}$$

where γ is defined as the diagonal splitter.

Theorem 6 Let $\bar A_{(l-1)\times(l-1)}$ in (14) be the coefficient matrix of the linear equations $\bar Ax=\bar b$, and let the diagonal retention matrix $\bar B$ and the diagonal splitting matrix $\bar C$ be defined by (15). Then the eigenvalues of the G-Jacobi iteration matrix are

$$\lambda_k=\frac{\gamma}{d-\gamma}+\frac{2\beta}{d-\gamma}\cos\frac{k\pi}{l},\quad k=1,2,\dots,l-1.\tag{16}$$

Theorem 7 Let $\bar A_{(l-1)\times(l-1)}$ in (14) be the coefficient matrix of the linear equations $\bar Ax=\bar b$, and let the diagonal retention matrix $\bar B$ and the diagonal splitting matrix $\bar C$ be defined by (15). Then the eigenvalues of the G-SOR iteration matrix are

$$\lambda_k=\frac{\omega\beta}{d-\gamma}\cos\frac{k\pi}{l}+\sqrt{\frac{\omega^2\beta^2}{(d-\gamma)^2}\cos^2\frac{k\pi}{l}-\Bigl(\frac{\omega d}{d-\gamma}-1\Bigr)^2},\quad k=1,2,\dots,l-1.\tag{17}$$

The proofs of Theorem 6 and Theorem 7 can be obtained following reference [6]. From Theorem 6 and Theorem 7, we can derive the convergence domains with respect to the diagonal splitter $\gamma$ for the G-Jacobi and G-GS methods with the coefficient matrix in (14).

Theorem 8 Let $\bar A_{(l-1)\times(l-1)}$ in (14) be the coefficient matrix of the linear equations $\bar Ax=\bar b$, which is diagonally dominant with $d>0$, and let $\mathrm{diag}(\bar A)=\bar B+\bar C$ with $\bar B>0$, which means the diagonal splitter satisfies $\gamma<d$ in (15). Then the G-Jacobi method is convergent when

$$\gamma<\frac{d}{2}(1+\mu_{\min}),$$

where $\mu_{\min}$ is the minimum of the eigenvalues of the classical Jacobi iteration matrix.

Proof Let $\lambda$ and $\mu$ denote the eigenvalues of the G-Jacobi and Jacobi iteration matrices, respectively. From Theorem 6, we have

$$\lambda=\frac{\mu d-\gamma}{d-\gamma},$$

and when $|\lambda|<1$, namely,

$$-1<\frac{\mu d-\gamma}{d-\gamma}<1$$

is satisfied, the G-Jacobi method is convergent. So the convergence domain is

$$\gamma<\frac{d}{2}(1+\mu_{\min}).$$
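Theorem 8 can be checked numerically: for tridiag(−1, 3, −1) the spectral radius of the G-Jacobi iteration matrix crosses 1 right at the predicted threshold $\gamma=\frac{d}{2}(1+\mu_{\min})$ (a small sketch; the matrix size and the eigenvalue formula for this standard tridiagonal matrix are our own choices):

```python
import numpy as np

n = 10                                 # l - 1 = 10, so l = 11
l = n + 1
d, beta = 3.0, -1.0
A = d * np.eye(n) + beta * (np.eye(n, k=1) + np.eye(n, k=-1))

def rho_gjacobi(gamma):
    """Spectral radius of M_I = (D - C)^{-1}(L + U - C) with C = gamma*I."""
    off = A - np.diag(np.diag(A))      # off-diagonal part of A
    M = np.linalg.solve(np.diag(np.diag(A)) - gamma * np.eye(n),
                        -off - gamma * np.eye(n))
    return max(abs(np.linalg.eigvals(M)))

# classical Jacobi eigenvalues for this matrix: mu_k = -(2*beta/d)*cos(k*pi/l)
mu = np.array([-(2 * beta / d) * np.cos(k * np.pi / l) for k in range(1, l)])
threshold = d / 2 * (1 + mu.min())     # Theorem 8 bound on the splitter gamma
```

Just below the threshold the G-Jacobi iteration still converges; just above it the spectral radius exceeds 1.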

Similarly, we provide the following theorem.

Theorem 9 Let $\bar A_{(l-1)\times(l-1)}$ in (14) be the coefficient matrix of the linear equations $\bar Ax=\bar b$, which is diagonally dominant with $d>0$. Then the G-GS method is convergent when

$$\gamma<\frac{d}{2}\bigl(2-\rho(J)\bigr),$$

where $\rho(J)$ is the spectral radius of the corresponding Jacobi iteration matrix.

Proof Let $\mu$ denote an eigenvalue of the Jacobi iteration matrix. From Eq. (17), to ensure the convergence of the G-GS method it is needed that

$$\left|\frac{\mu d}{2(d-\gamma)}+\sqrt{\frac{\mu^2d^2}{4(d-\gamma)^2}-\frac{\gamma}{d-\gamma}}\right|<1,$$

namely,

$$\frac{\mu d}{2(d-\gamma)}+\sqrt{\frac{\mu^2d^2}{4(d-\gamma)^2}-\frac{\gamma}{d-\gamma}}<1,\tag{22}$$

and

$$\frac{\mu d}{2(d-\gamma)}+\sqrt{\frac{\mu^2d^2}{4(d-\gamma)^2}-\frac{\gamma}{d-\gamma}}>-1.\tag{23}$$

For (22), we can get

$$\gamma<\frac{d}{2}(2-\mu).$$

For (23), we get

$$\gamma<\frac{d}{2}(2+\mu).$$

Therefore, when

$$\gamma<\frac{d}{2}\bigl(2-\rho(J)\bigr),$$

the G-GS iteration method is convergent.

Theorem 10 Let $\bar A_{(l-1)\times(l-1)}$ in (14) be the coefficient matrix of the linear equations $\bar Ax=\bar b$, and let the diagonal retention matrix $\bar B>0$ and the diagonal splitting matrix $\bar C$ be defined by (15). Then the G-SOR iterative method is convergent when

$$0<\omega<\frac{2(d-\gamma)}{d}.$$

This theorem can be obtained from Theorem 4.

Theorem 11 Let $\bar A_{(l-1)\times(l-1)}$ in (14) be the coefficient matrix of the linear equations $\bar Ax=\bar b$, and let the diagonal retention matrix $\bar B>0$ and the diagonal splitting matrix $\bar C$ be defined by (15). The optimal relaxation factor of the G-SOR method is

$$\omega_{\mathrm{opt}}=\frac{2(d-\gamma)}{d+\sqrt{d^2-\bigl(\gamma+d\rho(M_{\mathrm{I}})-\gamma\rho(M_{\mathrm{I}})\bigr)^2}},\tag{24}$$

and

$$\rho(M_{\mathrm{III}})=\frac{1-\sqrt{1-\Bigl(\dfrac{\gamma+(d-\gamma)\rho(M_{\mathrm{I}})}{d}\Bigr)^2}}{1+\sqrt{1-\Bigl(\dfrac{\gamma+(d-\gamma)\rho(M_{\mathrm{I}})}{d}\Bigr)^2}},$$

where $\rho(M_{\mathrm{I}})$ is the spectral radius of the G-Jacobi iteration matrix.
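A quick sanity check on the optimal relaxation factor: for $\gamma=0$ the expression above must collapse to the classical optimal-SOR formula $\omega_{\mathrm{opt}}=2/\bigl(1+\sqrt{1-\rho(J)^2}\bigr)$, since then $\rho(M_{\mathrm{I}})$ is just the classical Jacobi radius (a sketch that merely evaluates the formula symbol by symbol; `omega_opt` and the sample value of the spectral radius are our own choices):

```python
import numpy as np

def omega_opt(d, gamma, rho_m1):
    """Optimal G-SOR relaxation factor from rho(M_I), as stated above."""
    s = gamma + d * rho_m1 - gamma * rho_m1   # = gamma + (d - gamma) * rho(M_I)
    return 2 * (d - gamma) / (d + np.sqrt(d * d - s * s))

# gamma = 0: should agree with the classical SOR optimum for rho(J) = 0.6
rho_J = 0.6
w = omega_opt(d=3.0, gamma=0.0, rho_m1=rho_J)
w_classical = 2 / (1 + np.sqrt(1 - rho_J ** 2))

# a nonzero splitter just shifts the effective Jacobi radius (illustrative value only)
w_gamma = omega_opt(d=3.0, gamma=0.3, rho_m1=0.55)
```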

4 Applications

In this section, we use the presented G-Jacobi, G-GS and G-SOR iteration methods to solve linear algebraic equations and 2D and 3D diffusion problems to examine their superiority and accuracy. For time-dependent problems, we can choose more appropriate splittings of the diagonal matrices to establish the algorithms. Due to the strict diagonal dominance of the coefficient matrix obtained by the fully implicit discretization of parabolic problems, the choice of splitting is more intuitive. Denote $E=\max_i\bigl|x_i^{(s)}-x_i^*\bigr|$ as the absolute error, where $x^{(s)}$ and $x^*$ are the $s$-th iterative solution and the exact solution. A laptop with a 2.8 GHz Intel Core i7 processor and MATLAB running in Windows 10 is used for the computations.

4.1 Linear equations

(Ⅰ) Consider the linear equations Ax = b with

$$A=\begin{pmatrix}3&-1&&\\-1&3&\ddots&\\&\ddots&\ddots&-1\\&&-1&3\end{pmatrix}_{(l-1)\times(l-1)},\quad
b=\begin{pmatrix}2\\1\\\vdots\\1\\2\end{pmatrix}_{(l-1)}.$$

The exact solution is $x^*=[1,1,\dots,1]^{\mathrm T}$. Taking the linear equations of order $l=101$ as an example, Fig. 1 provides the iteration numbers of the G-Jacobi and G-GS iteration methods for different values of the diagonal splitter $\gamma$. It is shown that the G-Jacobi iteration method is convergent when $\gamma<0.5$ approximately, which matches the result obtained by the theoretical analysis

$$\gamma<\frac{3}{2}(1+\mu_{\min})\approx 0.500\,5$$

in Theorem 8; the G-GS iteration method is convergent when $\gamma<2$ approximately, which matches the result obtained by the theoretical analysis

$$\gamma<\frac{3}{2}\bigl(2-\rho(J)\bigr)\approx 2.000\,5$$

in Theorem 9. Table 1 shows the iteration numbers of the G-Jacobi and G-GS iteration methods when $l=101$, $E\le 1\times10^{-4}$, which match the theoretical analysis. Moreover, when $\gamma=0$, the iteration numbers 34 and 14 correspond to the classical Jacobi and GS methods, which are not the smallest. We find that the iteration number of the G-Jacobi method drops to about 22 for $\gamma$ around $-0.4$ to $0.3$, and the G-GS method needs only 13 iterations when $\gamma$ is about $0.1$ to $0.2$. In fact, the presented G-GS method already achieves the same iteration numbers as the classical SOR method.
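The two thresholds quoted here can be recomputed directly from Theorems 8 and 9 with $d=3$, $\beta=-1$ and $l=101$ (a quick arithmetic check; the eigenvalue formula for this standard tridiagonal matrix is the usual one):

```python
import numpy as np

d, l = 3.0, 101
# classical Jacobi eigenvalues for tridiag(-1, 3, -1): mu_k = (2/3) * cos(k*pi/l)
mu = (2 / d) * np.cos(np.arange(1, l) * np.pi / l)

gamma_jacobi = d / 2 * (1 + mu.min())        # Theorem 8 bound for G-Jacobi
gamma_ggs = d / 2 * (2 - np.abs(mu).max())   # Theorem 9 bound, rho(J) = max |mu_k|
```

Both values round to the figures reported above (0.500 5 and 2.000 5).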


(Ⅱ) Consider the linear equations Ax = b with

$$A=\begin{pmatrix}3&-1&&\\-1&3&\ddots&\\&\ddots&\ddots&-1\\&&-1&3\end{pmatrix}_{(l-1)\times(l-1)},\quad
b=\begin{pmatrix}2\\1\\\vdots\\1\\2\end{pmatrix}_{(l-1)}.$$

The exact solution is $x^*=[1,1,\dots,1]^{\mathrm T}$. Taking the linear equations of order $l=101$ as an example, Fig. 2 provides the iteration numbers of the G-Jacobi and G-GS iteration methods for different values of the diagonal splitter $\gamma$. The eigenvalues of the coefficient matrix in problem (Ⅱ) are the same as those of the matrix in problem (Ⅰ), so they have the same range of the diagonal splitter for convergence.

Table 2 shows the iteration numbers of the G-Jacobi and G-GS iteration methods when $l=101$, $E\le 1\times10^{-4}$, which match the theoretical analysis. When $\gamma=0$, the iteration numbers 34 and 20 correspond to the classical Jacobi and GS methods, which are not the smallest. We find that the iteration number of the G-Jacobi method drops to about 32 at $\gamma\approx 0.2$, and the G-GS method needs only 13 iterations when $\gamma$ is about $0.5$. These results indicate that the classical Jacobi and GS matrix splittings are not optimal compared with the diagonal element splittings of the G-Jacobi and G-GS methods, as the new methods use more of the initial information in each approximation.

4.2 2D diffusion problem

Consider the following diffusion problem

$$C_v\frac{\partial T}{\partial t}-K\Delta T(x,y)=f(x,y,t),\quad (x,y)\in\Omega=(0,L)^2,\ t>0,\tag{25}$$

with the initial and boundary conditions

$$\begin{cases}
T(x,y,0)=\sin(x+y),\\
T(0,y,t)=\sin(y+t),\quad T(1,y,t)=\sin(1+y+t),\\
T(x,0,t)=\sin(x+t),\quad T(x,1,t)=\sin(x+1+t),
\end{cases}$$

where $C_v=1$ is the specific heat, $K=1$, and $f(x,y,t)=2\sin(x+y+t)+\cos(x+y+t)$. The exact solution is $T=\sin(x+y+t)$. We divide the domain $\Omega$ with the space step $h_x=L/l$ in the $x$ direction and $h_y=L/m$ in the $y$ direction (for simplicity, we only consider the case $h=h_x=h_y$) and the time step $\tau$, where $l$, $m$ and $J_1$ are positive integers, $x_i=ih$, $y_j=jh$, $i,j=0,1,\dots,l$; $t_n=n\tau$, $n=0,1,2,\dots,J_1$. Denote $(x_i,y_j,t_n)$ as $(ih,jh,n\tau)$ and $T_{i,j}^{n,(s)}$ as the numerical solution on the $n$-th time level at the $s$-th iteration at the node $(x_i,y_j)$, $s=0,1,\dots,J_2$. We use the 2D fully implicit (2D F-I) scheme to discretize Eq. (25) because of its unconditional stability. The discretized linear equations can be written as $A_2T^{n+1}=b_2^n$, where $A_2$ and $b_2^n$ are the coefficient matrix and the right-hand side [21]. For the presented generalized iteration methods, we choose the diagonal splitter $\gamma=2rK$ and the optimal relaxation factor in Eq. (24) to examine their superiority and accuracy. Denote $E^n$ as the absolute error,

$$E^n=\max_{i,j}\bigl|T(x_i,y_j,t_n)-T_{i,j}^n\bigr|.$$

Table 3 provides the absolute errors of the 2D Jacobi, 2D G-Jacobi, 2D GS, 2D G-GS, 2D SOR and 2D G-SOR methods when $\tau=0.001$, $J_1=10$, $J_2=10$, which shows that the errors of the 2D G-SOR method are the smallest; the accuracy of the 2D G-GS method even exceeds that of the 2D SOR method. Fig. 3 gives the iteration numbers of the 2D Jacobi, 2D G-Jacobi, 2D GS, 2D G-GS, 2D SOR and 2D G-SOR iteration methods under the error control $E\le 1\times10^{-4}$ within 200 time steps. From Fig. 3 we observe that the iteration number of the 2D G-SOR method is the smallest, and the 2D G-GS method is also efficient, even more so than the 2D SOR method.
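To make the setup concrete, here is a minimal sketch of one possible G-SOR solver for the 2D F-I scheme (our own helper names and grid sizes; we take $r=\tau/h^2$, the splitter $\gamma=2rK$ from the text, and a fixed $\omega=1.2$ instead of the optimal factor of Eq. (24)):

```python
import numpy as np

def solve_2d_heat_gsor(l=10, tau=1e-3, steps=5, K=1.0, omega=1.2, tol=1e-10):
    """Solve C_v*T_t - K*Lap(T) = f on (0,1)^2 with the 2D fully implicit
    scheme and G-SOR sweeps, splitter gamma = 2*r*K (r = tau/h**2).
    Returns the max-norm error against the exact solution sin(x+y+t)."""
    h = 1.0 / l
    r = tau / h ** 2
    gamma = 2 * r * K                      # diagonal splitter suggested in the text
    diag = 1 + 4 * r * K                   # diagonal entry of the implicit operator
    x = np.linspace(0.0, 1.0, l + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    exact = lambda t: np.sin(X + Y + t)
    source = lambda t: 2 * np.sin(X + Y + t) + np.cos(X + Y + t)
    T = exact(0.0)
    for n in range(1, steps + 1):
        t = n * tau
        rhs = T + tau * source(t)          # right-hand side of the implicit system
        Tnew = exact(t)                    # carries the new boundary values ...
        Tnew[1:-1, 1:-1] = T[1:-1, 1:-1]   # ... and the old interior as initial guess
        for _ in range(500):               # G-SOR sweeps over the interior nodes
            Told = Tnew.copy()
            for i in range(1, l):
                for j in range(1, l):
                    nb = Tnew[i-1, j] + Tnew[i, j-1] + Told[i+1, j] + Told[i, j+1]
                    gs = (rhs[i, j] + r * K * nb - gamma * Told[i, j]) / (diag - gamma)
                    Tnew[i, j] = omega * gs + (1 - omega) * Told[i, j]
            if np.max(np.abs(Tnew - Told)) < tol:
                break
        T = Tnew
    return np.max(np.abs(T - exact(steps * tau)))

err = solve_2d_heat_gsor()
```

On this coarse grid the error after a few time steps stays at the level of the discretization error.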

4.3 3D diffusion problem

Consider the following diffusion problem

$$C_v\frac{\partial T}{\partial t}-K\Delta T(x,y,z)=f(x,y,z,t),\quad (x,y,z)\in\Omega=(0,L)^3,\ t>0,\tag{26}$$

with the initial and boundary conditions

$$\begin{cases}
T(x,y,z,0)=\sin(x+y+z),\\
T(0,y,z,t)=\sin(y+z+t),\quad T(1,y,z,t)=\sin(1+y+z+t),\\
T(x,0,z,t)=\sin(x+z+t),\quad T(x,1,z,t)=\sin(x+1+z+t),\\
T(x,y,0,t)=\sin(x+y+t),\quad T(x,y,1,t)=\sin(x+y+1+t),
\end{cases}$$

where $C_v=1$ is the specific heat, $K=1$, and $f(x,y,z,t)=3\sin(x+y+z+t)+\cos(x+y+z+t)$. The exact solution is $T=\sin(x+y+z+t)$. We divide the domain $\Omega$ with the space step $h_x=L/l$ in the $x$ direction, $h_y=L/m$ in the $y$ direction and $h_z=L/q$ in the $z$ direction (for simplicity, we only consider the case $h=h_x=h_y=h_z$) and the time step $\tau$, where $l$, $m$, $q$ and $J_1$ are positive integers, $x_i=ih$, $i=0,1,\dots,l$; $y_j=jh$, $j=0,1,\dots,m$; $z_k=kh$, $k=0,1,\dots,q$; $t_n=n\tau$, $n=0,1,2,\dots,J_1$. Denote $(x_i,y_j,z_k,t_n)$ as $(ih,jh,kh,n\tau)$ and $T_{i,j,k}^n$ as the numerical solution on the $n$-th time level at the node $(x_i,y_j,z_k)$. We use the 3D fully implicit (3D F-I) scheme to discretize Eq. (26) since it is unconditionally stable. The discretization can be written as $A_3T^{n+1}=b_3^n$, where $A_3$ and $b_3^n$ are the coefficient matrix and the right-hand side [21]. For the presented generalized iteration methods, we choose the diagonal splitter $\gamma=3rK$ and the optimal relaxation factor in Eq. (24) to examine their superiority and accuracy.

Table 4 gives the absolute errors of the 3D Jacobi, 3D G-Jacobi, 3D GS, 3D G-GS, 3D SOR and 3D G-SOR methods when $\tau=0.001$, $J_1=10$, $J_2=5$, which shows that the errors of the 3D G-SOR method are the smallest among these iteration methods; the 3D G-GS and 3D G-SOR methods even exceed the 3D SOR method in accuracy. Fig. 4 provides the iteration numbers of the 3D Jacobi, 3D G-Jacobi, 3D GS, 3D G-GS, 3D SOR and 3D G-SOR methods within 200 time levels when $\tau=0.000\,5$. The results show that the presented generalized iteration methods perform very efficiently, as the smaller iteration numbers reduce the computational cost and speed up the solver on each time level.

5 Conclusions

A class of diagonal matrix splitting iteration methods is provided, and the classical Jacobi, Gauss-Seidel and SOR methods are special cases of these new methods. By selecting an appropriate diagonal element splitting, better iteration methods can be constructed. We give the eigenvalues, convergence domains and optimal relaxation factors of the presented methods for a tridiagonal coefficient matrix. This kind of generalized method is applied to solve linear algebraic equations and 2D and 3D diffusion problems with fully implicit discretization. It is found that fewer iterations are needed, improving the calculation efficiency and saving computational cost compared with the classical Jacobi, Gauss-Seidel and SOR methods. The new methods can also be applied to ill-conditioned problems, such as saddle point problems, and to nonlinear equations.

References

[1] VARGA R S. Matrix iterative analysis[M]. 2nd ed. Springer Series in Computational Mathematics, Vol. 127. Berlin: Springer-Verlag, 2000.

[2] YOUNG D M. Iterative solution of large linear systems[M]. New York: Academic Press, 1971.

[3] GAUSS C F. Letter to Gerling, December 26, 1823[J]. Werke, 1903, 9: 278-281. English translation by FORSYTHE G E. Mathematical Tables and Other Aids to Computation, 1951, 5(36): 255-258.

[4] HAGEMAN L A, YOUNG D M. Applied iterative methods[M]. New York: Academic Press, 1981.

[5] AXELSSON O. Iterative solution methods[M]. Cambridge: Cambridge University Press, 1994.

[6] SAAD Y. Iterative methods for sparse linear systems[M]. 2nd ed. Philadelphia: SIAM, 2003.

[7] BAI Zhongzhi, GOLUB G H, NG M K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems[J]. SIAM Journal on Matrix Analysis and Applications, 2003, 24(3): 603-626.

[8] BAI Zhongzhi, GOLUB G H. Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems[J]. IMA Journal of Numerical Analysis, 2007, 27(1): 1-23. DOI: 10.1093/imanum/drl017.

[9] BENZI M. Preconditioning techniques for large linear systems: A survey[J]. Journal of Computational Physics, 2002, 182(2): 418-477.

[10] BAI Zhongzhi. On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations[J]. Journal of Computational Mathematics, 2010, 28(1): 39-50.

[11] LI Wen, SUN Weiwei. Modified Gauss-Seidel type methods and Jacobi type methods for Z-matrices[J]. Linear Algebra and its Applications, 2000, 317(1/2/3): 227-240.

[12] BERTACCINI D, DURASTANTE F. Interpolating preconditioners for the solution of sequence of linear systems[J]. Computers & Mathematics with Applications, 2016, 72(4): 1118-1130.

[13] SALKUYEH D K, HEZARI D, EDALATPOUR V. Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations[J]. International Journal of Computer Mathematics, 2015, 92(4): 802-815.

[14] ZHAO Wanchen, SHAO Xinhui. New matrix splitting iteration method for generalized absolute value equations[J]. AIMS Mathematics, 2023, 8(5): 10558-10578.

[15] LI Tianyi, CHEN Fang, FANG Zhiwei, et al. Two-parameter modified matrix splitting iteration method for Helmholtz equation[J]. International Journal of Computer Mathematics, 2024, 101(9/10): 1205-1218.

[16] BAI Zhongzhi. A two-step matrix splitting iteration paradigm based on one single splitting for solving systems of linear equations[J]. Numerical Linear Algebra with Applications, 2024, 31(3): e2510. DOI: 10.1002/nla.2510.

[17] HADJIDIMOS A. Successive overrelaxation (SOR) and related methods[J]. Journal of Computational and Applied Mathematics, 1991, 20: 75-89.

[18] WOZNICKI Z I, JEDRZEJEC H A. A new class of modified line-SOR algorithms[J]. Journal of Computational and Applied Mathematics, 2001, 131(1/2): 89-142.

[19] YOUSSEF I K, FARID M M. On the accelerated overrelaxation method[J]. Pure and Applied Mathematics Journal, 2015, 4(1): 26-31.

[20] PAN Yunming, XU Qiuyan, LIU Zhiyong, et al. A class of new successive permutation iterative algorithms with the relaxation factor for radiation diffusion equation[J]. Computational and Applied Mathematics, 2023, 42(5): Article 203. DOI: 10.1007/s40314-023-02341-7.

[21] XU Qiuyan, LIU Zhiyong. A class of new successive permutation iterative algorithms for diffusion equation[J]. Journal of Difference Equations and Applications, 2021, 27(9): 1355-1372.

Funding

The National Natural Science Foundation of China (12202219)

the Natural Science Foundation of Ningxia (2024AAC02009)

the Natural Science Foundation of Ningxia (2023AAC05001)

the Ningxia Youth Top Talents Training Project
