School of Mathematics and Statistics,Ningxia University,Yinchuan 750021,China
XU Qiuyan (1983—), female, associate professor, doctor of science, research fields: Numerical methods for partial differential equations, E-mail: qiuyanxu@nxu.edu.cn.
The fast solution of linear equations has always been one of the hot spots in scientific computing. A class of diagonal matrix splitting iteration methods is provided, which is different from the classical matrix splitting methods. Taking the decomposition of the diagonal elements of the coefficient matrix as the key point, some new preconditioners are constructed. Taking the tridiagonal matrix as the coefficient matrix, the convergence domains and the optimal relaxation factor of the new methods are analyzed theoretically. The presented iteration methods are applied to solve linear algebraic equations as well as 2D and 3D diffusion problems with fully implicit discretization. The results of the numerical experiments match the theoretical analysis, and show that the iteration numbers are reduced greatly and the accuracy is improved; the presented iteration methods outperform some classical iteration methods dramatically.
In the field of scientific and engineering computing, the solution of the algebraic equations obtained by discretizing mathematical models has always been a hot topic, especially with the progress of computers. Efficient numerical methods have always been a pursued goal, and iterative methods have become the mainstream solution methods. References [1-2] are two classic books, which systematically introduced various iterative methods for solving large-scale linear algebraic equations. From the Gauss-Seidel[3] iteration to the Richardson method, the Chebyshev semi-iteration method and the successive over-relaxation (SOR) method between the 1950s and 1970s, iteration methods have a long history. Hageman et al[4] provided many applicable iterative methods. Axelsson[5] introduced iterative solution methods comprehensively, including preconditioning techniques. Saad[6] gave iterative methods for sparse linear systems. Bai et al[7-8] provided Hermitian/skew-Hermitian splitting (HSS) methods for non-Hermitian positive definite linear systems, and accelerated HSS methods for saddle-point problems. Reference [9] provided a comprehensive survey of preconditioning techniques for large linear systems. Reference [10] gave the application of the HSS method to the continuous Sylvester equation. Li et al[11] proposed modified Gauss-Seidel type and Jacobi type iterative methods for Z-matrices. Interpolating preconditioners for the solution of sequences of linear systems were provided in reference [12]. A generalized successive over-relaxation iterative method for a class of complex symmetric linear systems of equations was given in reference [13]. A new matrix splitting iteration method for generalized absolute value equations was provided in reference [14], which proposed a relaxed Newton-type matrix splitting iteration method.
Reference [15] introduced a two-parameter modified matrix splitting iteration method, established its asymptotic convergence theory and demonstrated its convergence under certain conditions. Reference [16] constructed a paradigm of two-step matrix splitting iteration methods and analyzed its convergence property for nonsingular and positive-definite matrices.
Under certain conditions, the classical Jacobi and Gauss-Seidel iteration methods can solve many practical problems. Especially the SOR[17-19] method, which adds an optimal relaxation factor, accelerates the iteration greatly. However, we find that the Jacobi and Gauss-Seidel iteration methods do not fully utilize the most primitive information: every update uses only the previous iteration information of the other points. Xu et al[20-21] developed a new kind of iteration methods based on the discretization schemes with some iterative operators, which are more efficient than the Jacobi, GS and SOR methods. To make more use of the initial information when constructing iterations, we split the main diagonal elements, that is, the diagonal matrix constructed by the main diagonal elements, to obtain a new kind of iterative methods. Therefore, we propose a class of diagonal matrix splitting iteration methods. The classical Jacobi, Gauss-Seidel and SOR iteration methods are actually special cases of the presented methods. We introduce the concept of the diagonal splitter and provide the range of the diagonal splitter to ensure the convergence of the generalized methods. Finally, the new methods are applied to solve linear algebraic equations and 2D and 3D diffusion problems. The experimental results show that the generalized iterative methods greatly improve the computational efficiency, far surpassing the classical iterative methods.
The remainder of the paper is organized as follows: in Sec. 2, we present new generalized iterative methods for solving linear algebraic equations; Sec. 3 provides the range of the diagonal splitter and the optimal relaxation factors of the presented iteration methods for tridiagonal linear equations. The new generalized iteration methods are applied to solve linear equations and 2D and 3D diffusion problems in Sec. 4. Finally, Sec. 5 gives the conclusions.
2 Generalized iteration methods
In this section, we provide a class of new iterative methods based on the diagonal splitting strategy, which is in fact an extension of the classical Jacobi, Gauss-Seidel and SOR methods. So we call the new methods generalized iteration methods.
2.1 Generalized Jacobi, GS, SOR methods
Consider the following linear equations
a_{i1}x_1 + a_{i2}x_2 + … + a_{il}x_l = b_i, i = 1, 2, …, l, (1)
which can often be written in matrix form as
Ax = b. (2)
where A = (a_{ij}) is the l×l coefficient matrix, x is the unknown vector and b is the right-hand side vector. There are many methods to solve Eq. (1) or (2), like the classical Jacobi, Gauss-Seidel (GS) and SOR iterative methods. We often introduce these iterative methods by the matrix splitting method, namely, the coefficient matrix can be divided into A = D − L − U, here D = diag(A), and −L, −U are the strictly lower and upper triangular parts of A, respectively.
We will provide a new splitting iterative method, which can accelerate the iteration dramatically. In fact, it extends the classical Jacobi, GS and SOR iterative methods.
First, we divide the elements of the diagonal matrix D into two parts,
D = B + C,
namely, B = diag(b_{11}, b_{22}, …, b_{ll}) and C = diag(c_{11}, c_{22}, …, c_{ll}), here b_{ii} + c_{ii} = a_{ii}, i = 1, 2, …, l.
The matrix B and its elements b_{ii} are called the diagonal retention matrix and the diagonal retention elements; the matrix C and its elements c_{ii} are called the diagonal splitting matrix and the diagonal splitters.
Second, we construct the following iterative methods:
Ⅰ: Bx^{(k+1)} = (L + U − C)x^{(k)} + b, (6)
which is called the generalized Jacobi (G-Jacobi) iterative method. When C = 0, namely, c_{ii} = 0, i = 1, 2, …, l, method Ⅰ is just the classical Jacobi method.
In Eq. (6), we can see that once the vector x^{(k)} is given, the new vector x^{(k+1)} can be computed. The G-Jacobi iteration method has the same essential parallelism as the Jacobi method. Eq. (6) can be written in element form to show exactly how the approximate solution vector is updated, namely,
x_i^{(k+1)} = ( b_i − ∑_{j≠i} a_{ij} x_j^{(k)} − c_{ii} x_i^{(k)} ) / b_{ii}, i = 1, 2, …, l.
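A minimal sketch of this element-wise update follows (an illustration only, not the authors' MATLAB code; it assumes the simple splitter choice C = γI):

```python
import numpy as np

def g_jacobi(A, b, gamma=0.0, tol=1e-10, max_iter=5000):
    """Generalized Jacobi iteration with diag(A) = B + C, where C = gamma*I
    is one simple choice of diagonal splitting matrix and B = diag(A) - C
    is the diagonal retention matrix; gamma = 0 recovers classical Jacobi."""
    n = len(b)
    d = np.diag(A)                     # a_ii
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        off = A @ x - d * x            # sum over j != i of a_ij * x_j
        x_new = (b - off - gamma * x) / (d - gamma)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# strictly diagonally dominant tridiagonal test system with exact solution 1
A = np.diag(4.0 * np.ones(10)) + np.diag(-np.ones(9), 1) + np.diag(-np.ones(9), -1)
b = A @ np.ones(10)
x, iters = g_jacobi(A, b, gamma=0.5)
```

Because B is diagonal, each sweep costs one matrix-vector product and the component updates are independent, which preserves the parallelism noted above.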
Next, we can construct another new iterative method,
Ⅱ: (B − L)x^{(k+1)} = (U − C)x^{(k)} + b, (7)
which is called the generalized Gauss-Seidel (G-GS) iterative method. Similarly, when C = 0, namely, c_{ii} = 0, i = 1, 2, …, l, method Ⅱ is the classical Gauss-Seidel method. The G-GS iterative method can be written in the element form
x_i^{(k+1)} = ( b_i − ∑_{j<i} a_{ij} x_j^{(k+1)} − ∑_{j>i} a_{ij} x_j^{(k)} − c_{ii} x_i^{(k)} ) / b_{ii}, i = 1, 2, …, l. (8)
It can be seen that the latest approximations to the components of x are used in the update of subsequent components. The old components of x^{(k)} are overwritten with those of x^{(k+1)} as soon as they are computed.
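The G-GS sweep can be sketched in the same spirit (again a hedged illustration with the assumed splitter C = γI):

```python
import numpy as np

def g_gs(A, b, gamma=0.0, tol=1e-10, max_iter=5000):
    """Generalized Gauss-Seidel: components x_j^{(k+1)} with j < i are used
    immediately, while the splitter term gamma * x_i^{(k)} retains part of
    the old diagonal information (C = gamma*I assumed; gamma = 0 is GS)."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s - gamma * x_old[i]) / (A[i, i] - gamma)
        if np.max(np.abs(x - x_old)) < tol:
            return x, k
    return x, max_iter

A = np.diag(4.0 * np.ones(10)) + np.diag(-np.ones(9), 1) + np.diag(-np.ones(9), -1)
b = A @ np.ones(10)
x, iters = g_gs(A, b, gamma=0.5)
```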
A further improvement of the G-GS method can be obtained by introducing a relaxation parameter ω. We add the relaxation factor ω to construct the G-SOR iterative method as follows:
Ⅲ: x^{(k+1)} = (1 − ω)x^{(k)} + ωx̃^{(k+1)},
here 0 < ω < 2, while x̃^{(k+1)} is obtained by Eq. (7).
The G-SOR method is defined in matrix form by
(B − ωL)x^{(k+1)} = [(1 − ω)B + ω(U − C)]x^{(k)} + ωb.
When ω = 1, the G-SOR method reduces to the G-GS method; and when c_{ii} = 0, i = 1, 2, …, l, the G-SOR method is just the classical SOR method.
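Combining the relaxed update with the G-GS sweep gives the following sketch (assumptions as before: C = γI, not the authors' implementation):

```python
import numpy as np

def g_sor(A, b, omega=1.0, gamma=0.0, tol=1e-10, max_iter=5000):
    """Generalized SOR: relax a G-GS sweep by omega,
    x^{(k+1)} = (1 - omega) * x^{(k)} + omega * x_tilde^{(k+1)}.
    omega = 1 gives G-GS; gamma = 0 gives the classical SOR method."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x_tilde = (b[i] - s - gamma * x_old[i]) / (A[i, i] - gamma)
            x[i] = (1 - omega) * x_old[i] + omega * x_tilde
        if np.max(np.abs(x - x_old)) < tol:
            return x, k
    return x, max_iter

A = np.diag(4.0 * np.ones(10)) + np.diag(-np.ones(9), 1) + np.diag(-np.ones(9), -1)
b = A @ np.ones(10)
x, iters = g_sor(A, b, omega=1.05, gamma=0.2)
```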
For the above three generalized iterative methods, the iteration matrices are
MⅠ = B^{-1}(L + U − C),
MⅡ = (B − L)^{-1}(U − C),
MⅢ = (B − ωL)^{-1}[(1 − ω)B + ω(U − C)].
Therefore, the classical Jacobi, GS and SOR iteration methods are special cases of the presented generalized iteration methods. The theoretical analyses can also be provided.
2.2 Convergence analysis
Theorem 1[2] Given any initial vector x^{(0)}, the G-Jacobi iterative method is convergent if and only if the spectral radius of the iteration matrix MⅠ satisfies ρ(MⅠ) < 1.
Lemma 1[2] If the matrix A is strictly diagonally dominant or irreducibly diagonally dominant, then the matrix A is nonsingular.
Definition 1[2] When the matrix A is a strictly diagonally dominant or irreducibly diagonally dominant symmetric matrix and its diagonal elements are all positive, A is called a positive definite matrix.
Theorem 2 If the coefficient matrix A in Eq. (2) is symmetric and the diagonal retention matrix B > 0, then the G-Jacobi iterative method is convergent if and only if A and 2B − A are both positive definite.
Proof Because the matrix A is symmetric and B > 0, we get
MⅠ = B^{-1}(L + U − C) = I − B^{-1}A = B^{-1/2}(I − B^{-1/2}AB^{-1/2})B^{1/2}.
It shows that MⅠ is similar to I − B^{-1/2}AB^{-1/2}, and both of them have the same eigenvalues. Moreover, I − B^{-1/2}AB^{-1/2} is symmetric, and hence has real eigenvalues.
(ⅰ) The necessity. If the iterative method is convergent, then ρ(MⅠ) < 1. We can obtain that all of the eigenvalues of the matrix I − B^{-1/2}AB^{-1/2} belong to (−1, 1), which means
−1 < 1 − λ(B^{-1/2}AB^{-1/2}) < 1,
then
0 < λ(B^{-1/2}AB^{-1/2}) < 2.
So the matrix B^{-1/2}AB^{-1/2} is positive definite, which indicates that A is positive definite. Furthermore, due to
B^{-1/2}(2B − A)B^{-1/2} = 2I − B^{-1/2}AB^{-1/2},
the matrix 2B − A is also positive definite.
(ⅱ) The sufficiency. Since B^{1/2}(I − MⅠ)B^{-1/2} = B^{-1/2}AB^{-1/2} and A is positive definite, the eigenvalues of I − MⅠ are all greater than 0, then we have
λ(MⅠ) < 1. (9)
On the other hand, due to A = B(I − MⅠ), it has B^{-1/2}(2B − A)B^{-1/2} = B^{1/2}(I + MⅠ)B^{-1/2}.
Since 2B − A is positive definite, the eigenvalues of I + MⅠ are all greater than 0, and we obtain
λ(MⅠ) > −1. (10)
From (9) and (10), ρ(MⅠ) < 1. So the G-Jacobi iterative method is convergent.
Theorem 3 If the coefficient matrix A in Eq. (2) is strictly diagonally dominant or irreducibly diagonally dominant, and the diagonal retention matrix B satisfies , then the G-Jacobi iterative method is convergent.
Proof Due to , the diagonal retention matrix B is invertible. Assume λ is an eigenvalue of the iteration matrix MⅠ which satisfies |λ| ≥ 1. Then we can obtain that the matrix λB − (L + U − C) is also strictly diagonally dominant or irreducibly diagonally dominant because of the strict diagonal dominance or irreducible diagonal dominance of the matrix A. From Lemma 1, we have
det(λB − (L + U − C)) ≠ 0,
which is contradictory. Therefore, the spectral radius ρ(MⅠ) < 1. By Theorem 1, the G-Jacobi iterative method is convergent.
Theorem 4 If the coefficient matrix A in Eq. (2) is a real symmetric positive definite matrix and the diagonal retention matrix B > 0, then the G-SOR iterative method is convergent when
Here ‖·‖ means the 2-norm, and ① if , then ; ② if , then , which means the G-GS iterative method is convergent; ③ if , the G-SOR method is just the SOR method, which is convergent when 0 < ω < 2.
Proof Assume λ is any eigenvalue of the iteration matrix MⅢ and ν is the corresponding eigenvector, then we have
[(1 − ω)B + ω(U − C)]ν = λ(B − ωL)ν. (11)
Let ν^H be the conjugate transpose of ν. Denote , which are all real numbers. Since L = U^T, we get
Substituting the above relationships into (11), we obtain
Computing the square of the modulus on both sides of Eq. (13), we get
Note that
due to the fact that A is real symmetric positive definite and B > 0, then
we can obtain that d < 0, namely, |λ| < 1, when
Theorem 5 If the coefficient matrix A in Eq. (2) is strictly diagonally dominant or irreducibly diagonally dominant and satisfies , then the G-SOR iterative method is convergent when .
Proof Assume the iteration matrix MⅢ = (B − ωL)^{-1}[(1 − ω)B + ω(U − C)] has an eigenvalue λ which satisfies |λ| ≥ 1, then
Since |λ| ≥ 1 and B is strictly diagonally dominant or irreducibly diagonally dominant, we have
From Lemma 1, we have
det(λ(B − ωL) − (1 − ω)B − ω(U − C)) ≠ 0,
which is contradictory. Therefore, the spectral radius of the G-SOR iteration matrix satisfies ρ(MⅢ) < 1. So the G-SOR iterative method is convergent when .
3 Tridiagonal linear equations
In this section, we take tridiagonal linear equations for example to provide the eigenvalues, eigenvectors, the range of the diagonal splitter, and the optimal relaxation factors of the presented new iterative methods.
Consider the linear equations (2) with the coefficient matrix
We split the diagonal matrix of A into two parts, namely, diag(A) = B + C, here
where γ is defined as the diagonal splitter.
Theorem 6 Let A in (14) be the coefficient matrix of the linear equations Ax = b, and let the diagonal retention matrix and the diagonal splitting matrix be defined by (15); then the eigenvalues of the G-Jacobi iteration matrix are
Theorem 7 Let A in (14) be the coefficient matrix of the linear equations Ax = b, and let the diagonal retention matrix and the diagonal splitting matrix be defined by (15); then the eigenvalues of the G-SOR iteration matrix are
The proofs of Theorem 6 and Theorem 7 can be obtained following reference [6]. From Theorem 6 and Theorem 7, we can get the convergence domains with respect to the diagonal splitter γ for the G-Jacobi and G-GS methods with the coefficient matrix A in (14).
Theorem 8 Let A be the coefficient matrix in (14) for the linear equations Ax = b, which is diagonally dominant with d > 0, and let the diagonal splitter satisfy γ < d as in (15). Then the G-Jacobi method is convergent when
γ < d(1 + μmin)/2,
here μmin is the minimum of the eigenvalues of the classical Jacobi iteration matrix.
Proof Note that λ and μ are the eigenvalues of the G-Jacobi and Jacobi iteration matrices, respectively. From Theorem 6, we have
λ = (dμ − γ)/(d − γ).
When |λ| < 1, namely,
−(d − γ) < dμ − γ < d − γ
is satisfied, the G-Jacobi method is convergent. So the convergence domain is
γ < d(1 + μmin)/2.
Similarly, we provide the following theorem.
Theorem 9 Let A be the coefficient matrix in (14) for the linear equations Ax = b, which is diagonally dominant with d > 0. Then the G-GS method is convergent when
here is the spectral radius of the corresponding Jacobi iteration matrix.
Proof Note that μ is an eigenvalue of the Jacobi iteration matrix. From Eq. (17), to ensure the convergence of the G-GS method, it needs that
namely,
and
From (22), we can get
From (23), we get
Therefore, when
the G-GS iteration method is convergent.
Theorem 10 Let A in (14) be the coefficient matrix for the linear equations Ax = b, and let the diagonal retention matrix and the diagonal splitting matrix be defined by (15); then the G-SOR iterative method is convergent when
The theorem can be obtained from Theorem 4.
Theorem 11 Let A in (14) be the coefficient matrix for the linear equations Ax = b, and let the diagonal retention matrix and the diagonal splitting matrix be defined by (15); then the optimal relaxation factor of the G-SOR method is
and
here ρ(MⅠ) is the spectral radius of the G-Jacobi iteration matrix.
4 Applications
In this section, we use the presented G-Jacobi, G-GS and G-SOR iteration methods to solve linear algebraic equations and 2D and 3D diffusion problems to examine their superiority and accuracy. For time-dependent problems, we can choose more appropriate splittings of the diagonal matrices to establish the algorithms. Due to the strict diagonal dominance of the coefficient matrix obtained by the fully implicit discretization of parabolic problems, the choice of the splitting is more intuitive. Denote E_k as the absolute errors, while x^{(k)} and x* are the kth iterative solution and the exact solution. A laptop with a 2.8 GHz Intel Core i7 processor and MATLAB running in Windows 10 is used for the computations.
4.1 Linear equations
(Ⅰ) Consider the linear equations Ax = b with
The exact solution is . Taking the linear equations with the order of l = 101 for example, Fig. 1 provides the iteration numbers of the G-Jacobi and G-GS iteration methods with different values of the diagonal splitter γ. It is shown that the G-Jacobi iteration method is convergent when γ < 0.5, which matches the theoretical bound (≈ 0.5) in Theorem 8; the G-GS iteration method is convergent when γ < 2 nearly, which matches the theoretical bound (≈ 2) in Theorem 9. Table 1 shows the iteration numbers of the G-Jacobi and G-GS iteration methods when , , which match the theoretical analysis. Moreover, when γ = 0, the iteration numbers 34 and 14 correspond to the classical Jacobi and GS methods, which are not the smallest. We find that the iteration numbers of the G-Jacobi method reach nearly 22 at γ = −0.4/0.3, and the G-GS method needs 13 when γ is about 0.1/0.2. In fact, the presented G-GS method already achieves the same iteration numbers as the classical SOR method.
(Ⅱ) Consider the linear equations Ax = b with
The exact solution is . Taking the linear equations with the order of l = 101 for example, Fig. 2 provides the iteration numbers of the G-Jacobi and G-GS iteration methods with different values of the diagonal splitter γ. The eigenvalues of the coefficient matrix in problem (Ⅱ) are the same as those of the matrix in problem (Ⅰ), so they have the same range of the diagonal splitter for convergence.
Table 2 shows the iteration numbers of the G-Jacobi and G-GS iteration methods when , , which match the theoretical analysis. When γ = 0, the iteration numbers 34 and 20 correspond to the classical Jacobi and GS methods, which are not the smallest. We find that the iteration numbers of the G-Jacobi method reach nearly 32 at γ = 0.2, and the G-GS method needs 13 when γ is about 0.5. These results indicate that the classical Jacobi and GS matrix splittings are not optimal compared to the diagonal element splittings of the G-Jacobi and G-GS methods, as the new methods use more initial information for the approximation.
4.2 2D diffusion problem
Consider the following diffusion problem
with the initial and boundary conditions
where K is the specific heat. The exact solution is . We divide the domain Ω with a space step in the x direction and in the y direction (for simplicity, we only consider the case of equal space steps) and a time step τ, while the numbers of space and time subintervals are positive integers. Denote the numerical solution on the nth time level with the sth iteration at each grid node accordingly. We use the 2D fully implicit (2D F-I) scheme to discretize Eq. (25), due to its unconditional stability. The discretized linear equations can be written as , while A2, b2 are the coefficient matrix and the right-hand term[21]. For the presented generalized iteration methods, we choose the optimal diagonal splitter γ = 2rK and the optimal relaxation factors in Eq. (24) to examine their superiority and accuracy. Denote En as the absolute errors.
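One fully implicit time step with the stated splitter can be sketched as follows. This is a simplified illustration (unit square, K = 1 absorbed into r, homogeneous Dirichlet boundaries, standard five-point Laplacian, C = γI with γ = 2rK), not the authors' exact assembly of A2:

```python
import numpy as np

def laplacian_2d(m):
    """Five-point negative Laplacian (times h^2) on an m-by-m interior grid."""
    T = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    return np.kron(np.eye(m), T) + np.kron(T, np.eye(m))

def implicit_step_g_jacobi(u, r, tol=1e-10, max_iter=5000):
    """Solve the 2D fully implicit step A2 u_new = u by G-Jacobi with the
    diagonal splitter gamma = 2*r (so b_ii = 1 + 2*r while a_ii = 1 + 4*r)."""
    m = round(np.sqrt(u.size))
    A2 = np.eye(u.size) + r * laplacian_2d(m)
    gamma = 2 * r
    d = np.diag(A2)
    x = u.copy()
    for _ in range(max_iter):
        x_new = (u - (A2 @ x - d * x) - gamma * x) / (d - gamma)
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x_new

# one step from a smooth initial field on a 10x10 interior grid, r = 0.1
m, r = 10, 0.1
g = np.linspace(0.05, 0.95, m)
u0 = np.outer(np.sin(np.pi * g), np.sin(np.pi * g)).ravel()
u1 = implicit_step_g_jacobi(u0, r)
```

For this plain G-Jacobi sketch the sweep converges when rK is moderate; the paper additionally combines the splitter with the optimal relaxation factors of Eq. (24) in the G-SOR variant.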
Table 3 provides the absolute errors of the 2D Jacobi, 2D G-Jacobi, 2D GS, 2D G-GS, 2D SOR and 2D G-SOR methods when , which shows that the errors of the 2D G-SOR method are the smallest. The accuracy of the 2D G-GS method even exceeds that of the 2D SOR method. Fig. 3 gives the iteration numbers of the 2D Jacobi, 2D G-Jacobi, 2D GS, 2D G-GS, 2D SOR and 2D G-SOR iteration methods under the error control within 200 time steps. From Fig. 3, we observe that the iteration number of the 2D G-SOR method is the least, and the 2D G-GS method is also efficient, even more so than the 2D SOR method.
4.3 3D diffusion problem
Consider the following diffusion problem
with the initial and boundary conditions
where K is the specific heat. The exact solution is . We divide the domain Ω with a space step in the x direction, in the y direction and in the z direction (for simplicity, we only consider the case of equal space steps) and a time step τ, while the numbers of space and time subintervals are positive integers. Denote the numerical solution on the nth time level at each grid node accordingly. We use the 3D fully implicit (3D F-I) scheme to discretize Eq. (26) since it is unconditionally stable. The discretization can be written as , while the coefficient matrix and the right-hand term are given in reference [21]. For the presented generalized iteration methods, we choose the optimal diagonal splitter γ = 3rK and the optimal relaxation factors in Eq. (24) to examine their superiority and accuracy.
Table 4 gives the absolute errors of the 3D Jacobi, 3D G-Jacobi, 3D GS, 3D G-GS, 3D SOR and 3D G-SOR methods when , which shows that the errors of the 3D G-SOR method are the smallest among these iteration methods. The accuracy of the 3D G-GS and 3D G-SOR methods even exceeds that of the 3D SOR method. Fig. 4 provides the iteration numbers of the 3D Jacobi, 3D G-Jacobi, 3D GS, 3D G-GS, 3D SOR and 3D G-SOR methods within 200 time levels when . The results show that the presented generalized iteration methods perform very efficiently, because the smaller iteration numbers save the computational cost and speed up the solver at each iteration.
5 Conclusions
A class of diagonal matrix splitting iteration methods is provided, and the classical Jacobi, Gauss-Seidel and SOR methods are special cases of these new methods. By selecting an appropriate splitting of the diagonal elements, better iteration methods can be constructed. We give the eigenvalues, convergence domains and the optimal relaxation factors of the presented methods for tridiagonal coefficient matrices. This kind of generalized methods is applied to solve linear algebraic equations and 2D and 3D diffusion problems with fully implicit discretization. It is found that fewer iteration numbers are needed, which improves the calculation efficiency and saves the calculation cost compared to the classical Jacobi, Gauss-Seidel and SOR methods. The new methods can also be used to solve ill-conditioned equations, such as saddle-point problems and nonlinear equations.
[1]
VARGA R S. Matrix iterative analysis[M]. 2nd ed. Springer Series in Computational Mathematics, Vol. 27. Berlin: Springer-Verlag, 2000.
[2]
YOUNG D M. Iterative solution of large linear systems[M]. New York: Academic Press, 1971.
[3]
GAUSS C F. Letter to Gerling, December 26, 1823[J]. Werke, 1903, 9: 278-281. English translation by FORSYTHE G E. Mathematical Tables and Other Aids to Computation, 1951, 5(36): 255-258.
[4]
HAGEMAN L A, YOUNG D M. Applied iterative methods[M]. New York: Academic Press, 1981.
[5]
AXELSSON O. Iterative solution methods[M]. Cambridge: Cambridge University Press, 1994.
[6]
SAAD Y. Iterative methods for sparse linear systems[M]. 2nd ed. Philadelphia: SIAM, 2003.
[7]
BAI Zhongzhi, GOLUB G H, NG M K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems[J]. SIAM Journal on Matrix Analysis and Applications, 2003, 24(3): 603-626.
[8]
BAI Zhongzhi, GOLUB G H. Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems[J]. IMA Journal of Numerical Analysis, 2007, 27(1): 1-23. DOI: 10.1093/imanum/drl017.
[9]
BENZI M. Preconditioning techniques for large linear systems: a survey[J]. Journal of Computational Physics, 2002, 182(2): 418-477.
[10]
BAI Zhongzhi. On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations[J]. Journal of Computational Mathematics, 2010, 28(1): 39-50.
[11]
LI Wen, SUN Weiwei. Modified Gauss-Seidel type methods and Jacobi type methods for Z-matrices[J]. Linear Algebra and its Applications, 2000, 317(1/2/3): 227-240.
[12]
BERTACCINI D, DURASTANTE F. Interpolating preconditioners for the solution of sequence of linear systems[J]. Computers & Mathematics with Applications, 2016, 72(4): 1118-1130.
[13]
SALKUYEH D K, HEZARI D, EDALATPOUR V. Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations[J]. International Journal of Computer Mathematics, 2015, 92(4): 802-815.
[14]
ZHAO Wanchen, SHAO Xinhui. New matrix splitting iteration method for generalized absolute value equations[J]. AIMS Mathematics, 2023, 8(5): 10558-10578.
[15]
LI Tianyi, CHEN Fang, FANG Zhiwei, et al. Two-parameter modified matrix splitting iteration method for Helmholtz equation[J]. International Journal of Computer Mathematics, 2024, 101(9/10): 1205-1218.
[16]
BAI Zhongzhi. A two-step matrix splitting iteration paradigm based on one single splitting for solving systems of linear equations[J]. Numerical Linear Algebra with Applications, 2024, 31(3): e2510. DOI: 10.1002/nla.2510.
[17]
HADJIDIMOS A. Successive overrelaxation (SOR) and related methods[J]. Journal of Computational and Applied Mathematics, 2000, 123(1/2): 177-199.
[18]
WOZNICKI Z I, JEDRZEJEC H A. A new class of modified line-SOR algorithms[J]. Journal of Computational and Applied Mathematics, 2001, 131(1/2): 89-142.
[19]
YOUSSEF I K, FARID M M. On the accelerated overrelaxation method[J]. Pure and Applied Mathematics Journal, 2015, 4(1): 26-31.
[20]
PAN Yunming, XU Qiuyan, LIU Zhiyong, et al. A class of new successive permutation iterative algorithms with the relaxation factor for radiation diffusion equation[J]. Computational and Applied Mathematics, 2023, 42(5): Article 203. DOI: 10.1007/s40314-023-02341-7.
[21]
XU Qiuyan, LIU Zhiyong. A class of new successive permutation iterative algorithms for diffusion equation[J]. Journal of Difference Equations and Applications, 2021, 27(9): 1355-1372.
Funding
The National Natural Science Foundation of China (12202219);
the Natural Science Foundation of Ningxia (2024AAC02009);
the Natural Science Foundation of Ningxia (2023AAC05001).