Hierarchical Matrices: Algorithms and Analysis

Author: Wolfgang Hackbusch
Publisher: Springer
Total Pages: 532
Release: 2015-12-21
Genre: Mathematics
ISBN: 3662473240

This self-contained monograph presents matrix algorithms and their analysis. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store such matrices and to perform matrix operations approximately with almost linear cost and a controllable approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include matrix inversion and LU decomposition. The technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential, and the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists in computational mathematics, physics, chemistry and engineering.
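The low-rank principle behind these complexity bounds can be illustrated in a few lines. The following sketch is not from the book; the kernel, the point sets, and the rank k are arbitrary choices for the demonstration. A matrix block generated by a smooth kernel on well-separated point sets has rapidly decaying singular values, so a truncated SVD stores it accurately in O(kn) instead of O(n²) entries:

```python
import numpy as np

# Illustrative example: off-diagonal block of a kernel matrix for the
# kernel log|x - y|, with sources and targets well separated. Such blocks
# are numerically low-rank, which hierarchical matrices exploit.
n = 200
x = np.linspace(0.0, 1.0, n)                 # target points
y = np.linspace(2.0, 3.0, n)                 # source points, separated from x
A = np.log(np.abs(x[:, None] - y[None, :]))  # dense n-by-n kernel block

# Truncated SVD: keep only the leading k singular triplets.
U, s, Vt = np.linalg.svd(A)
k = 12
A_k = (U[:, :k] * s[:k]) @ Vt[:k]            # rank-k approximation, 2*k*n entries

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
```

The singular values of such admissible blocks decay exponentially, so even a small rank k gives an error far below discretization accuracy; storing U, s, Vt for many blocks of this kind yields the almost linear overall cost.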

Hierarchical Matrices

Author: Mario Bebendorf
Publisher: Springer Science & Business Media
Total Pages: 303
Release: 2008-06-25
Genre: Mathematics
ISBN: 3540771476

Hierarchical matrices are an efficient framework for large-scale fully populated matrices arising, e.g., from the finite element discretization of solution operators of elliptic boundary value problems. In addition to storing such matrices, approximations of the usual matrix operations can be computed with logarithmic-linear complexity, which can be exploited to set up approximate preconditioners in an efficient and convenient way. Besides the algorithmic aspects of hierarchical matrices, the main aim of this book is to present their theoretical background. The book contains the existing approximation theory for elliptic problems, including partial differential operators with nonsmooth coefficients. Furthermore, it presents in full detail the adaptive cross approximation method for the efficient treatment of integral operators with non-local kernel functions. The theory is supported by many numerical experiments from real applications.
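The adaptive cross approximation mentioned above can be sketched as follows. This is a minimal illustrative implementation with partial pivoting, not the book's code; the `entry` callback, tolerance, and stopping heuristic are assumptions made for the demo:

```python
import numpy as np

def aca(entry, m, n, tol=1e-8, max_rank=50):
    """Adaptive cross approximation with partial pivoting (sketch).
    Builds A ~= U @ V from single entries entry(i, j) without ever
    forming the full m-by-n matrix."""
    U_cols, V_rows = [], []
    used_rows = {0}
    i = 0                                    # current pivot row
    norm2 = 0.0                              # running ||U V||_F^2 estimate
    for _ in range(max_rank):
        # Residual of row i: a_i - sum_k u_k[i] * v_k.
        row = np.array([entry(i, j) for j in range(n)], dtype=float)
        for u, v in zip(U_cols, V_rows):
            row -= u[i] * v
        j = int(np.argmax(np.abs(row)))      # pivot column
        if abs(row[j]) < 1e-14:              # row already well approximated
            break
        v = row / row[j]
        # Residual of column j.
        col = np.array([entry(k, j) for k in range(m)], dtype=float)
        for u, w in zip(U_cols, V_rows):
            col -= u * w[j]
        u = col
        U_cols.append(u)
        V_rows.append(v)
        cross = np.linalg.norm(u) * np.linalg.norm(v)
        norm2 += cross**2
        if cross < tol * np.sqrt(norm2):     # new cross negligible: stop
            break
        # Next pivot row: largest entry of u among unused rows.
        cand = np.abs(u)
        cand[list(used_rows)] = -1.0
        i = int(np.argmax(cand))
        used_rows.add(i)
    return np.column_stack(U_cols), np.vstack(V_rows)
```

Only O(k(m + n)) entries are evaluated, which is what makes the method attractive for integral operators whose kernel evaluations are expensive.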

Eigenvalue Algorithms for Symmetric Hierarchical Matrices

Author: Thomas Mach
Publisher: Thomas Mach
Total Pages: 173
Release: 2012
Genre: Mathematics
ISBN:

This thesis is on the numerical computation of eigenvalues of symmetric hierarchical matrices. The numerical algorithms used for this computation are derivations of the LR Cholesky algorithm, the preconditioned inverse iteration, and a bisection method based on LDLT factorizations. The investigation of QR decompositions for H-matrices leads to a new QR decomposition with some properties superior to the existing ones, as experiments show. However, using the new HQR decomposition to build a QR (eigenvalue) algorithm for H-matrices does not lead to a more efficient algorithm than the LR Cholesky algorithm. The implementation of the LR Cholesky algorithm for hierarchical matrices, together with deflation and shift strategies, yields an algorithm that requires O(n) iterations to find all eigenvalues. Unfortunately, the local ranks of the iterates show a strong growth in the first steps. These H-fill-ins make the computation expensive, so that O(n³) flops and O(n²) storage are required. Theorem 4.3.1 explains this behavior and shows that the LR Cholesky algorithm is efficient for the simply structured Hl-matrices. There is an exact LDLT factorization for Hl-matrices and an approximate LDLT factorization for H-matrices in linear-polylogarithmic complexity. These factorizations can be used to compute the inertia of an H-matrix. With the knowledge of the inertia for arbitrary shifts, one can compute an eigenvalue by bisection. The slicing-the-spectrum algorithm can compute all eigenvalues of an Hl-matrix in linear-polylogarithmic complexity; a single eigenvalue can be computed in O(k²n log⁴ n). Since the LDLT factorization for general H-matrices is only approximate, the accuracy of the LDLT slicing algorithm is limited. The local ranks of the LDLT factorization for indefinite matrices are generally unknown, so there is no statement on the complexity of the algorithm beyond the numerical results in Table 5.7.
The preconditioned inverse iteration (PINVIT) computes the smallest eigenvalue and the corresponding eigenvector. This method is efficient, since the number of iterations is independent of the matrix dimension. If eigenvalues other than the smallest are sought, preconditioned inverse iteration cannot simply be applied to the shifted matrix, since positive definiteness is necessary. The squared and shifted matrix (M - mu I)², however, is positive definite, so inner eigenvalues can be computed by combining the folded spectrum method with PINVIT. Numerical experiments show that the approximate inversion of (M - mu I)² is more expensive than the approximate inversion of M, so the computation of inner eigenvalues is more expensive, too. Comparing the different eigenvalue algorithms: the preconditioned inverse iteration for hierarchical matrices is better than the LDLT slicing algorithm for the computation of the smallest eigenvalues, especially if the inverse is already available. The computation of inner eigenvalues with the folded spectrum method and preconditioned inverse iteration is more expensive; here the LDLT slicing algorithm is competitive with H-PINVIT. In the case of large, sparse matrices, specially tailored algorithms for sparse matrices, like the MATLAB function eigs, are more efficient. If one wants to compute all eigenvalues, the LDLT slicing algorithm seems to be better than the LR Cholesky algorithm. If the matrix is small enough to be handled in dense arithmetic (and is not an Hl(1)-matrix), dense eigensolvers, like the LAPACK function dsyev, are superior. The H-PINVIT and the LDLT slicing algorithm require only an almost linear amount of storage, so they can handle larger matrices than eigenvalue algorithms for dense matrices. For Hl-matrices of local rank 1, the LDLT slicing algorithm and the LR Cholesky algorithm need almost the same time to compute all eigenvalues.
For large matrices, both algorithms are faster than the dense LAPACK function dsyev.
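The slicing-the-spectrum idea can be illustrated with dense linear algebra standing in for the hierarchical LDLT factorization (a sketch under that substitution, not the thesis code): by Sylvester's law of inertia, the number of negative eigenvalues of the D factor of M - mu·I equals the number of eigenvalues of M below mu, so bisection on mu isolates any single eigenvalue.

```python
import numpy as np
from scipy.linalg import ldl

def count_eigs_below(M, mu):
    """Number of eigenvalues of the symmetric matrix M below mu, read off
    from the inertia of an LDL^T factorization of M - mu*I."""
    _, D, _ = ldl(M - mu * np.eye(M.shape[0]))
    # D is symmetric block diagonal (1x1 and 2x2 blocks); by Sylvester's
    # law of inertia its negative eigenvalues count those of M below mu.
    return int(np.sum(np.linalg.eigvalsh(D) < 0))

def kth_smallest_eigenvalue(M, k, lo, hi, tol=1e-10):
    """k-th smallest eigenvalue (k = 1, 2, ...) of symmetric M by
    bisection, assuming all eigenvalues lie in (lo, hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(M, mid) >= k:
            hi = mid                 # at least k eigenvalues below mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

In the thesis the dense factorization is replaced by the (exact or approximate) hierarchical LDLT factorization, which is what brings the cost per slice down to linear-polylogarithmic complexity.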

Numerical Mathematics and Advanced Applications

Author: Karl Kunisch
Publisher: Springer Science & Business Media
Total Pages: 825
Release: 2008-09-19
Genre: Mathematics
ISBN: 3540697772

The European Conference on Numerical Mathematics and Advanced Applications (ENUMATH) is a series of conferences held every two years to provide a forum for discussion on recent aspects of numerical mathematics and their applications. The first ENUMATH conference was held in Paris (1995), and the series continued with Heidelberg (1997), Jyväskylä (1999), Ischia (2001), Prague (2003), and Santiago de Compostela (2005). This volume contains a selection of invited plenary lectures, papers presented in minisymposia, and contributed papers of ENUMATH 2007, held in Graz, Austria, September 10–14, 2007. We are happy that so many people have shown their interest in this conference. In addition to the ten invited presentations and the public lecture, we had more than 240 talks in nine minisymposia and fifty-four sessions of contributed talks, and about 316 participants from all over the world, especially from Europe. A total of 98 contributions appear in these proceedings. Topics include theoretical aspects of new numerical techniques and algorithms, as well as applications in engineering and science. The book will be useful for a wide range of readers, giving them an excellent overview of the most modern methods, techniques, algorithms and results in numerical mathematics, scientific computing and their applications. We would like to thank all the participants for their attendance and for their valuable contributions and discussions during the conference. Special thanks go to the minisymposium organizers, who made a large contribution to the conference, the chairpersons, and all speakers.

Supercomputing Frontiers

Author: Rio Yokota
Publisher: Springer
Total Pages: 301
Release: 2018-03-20
Genre: Computers
ISBN: 3319699539

This book constitutes the refereed proceedings of the 4th Asian Supercomputing Conference, SCFA 2018, held in Singapore in March 2018. Supercomputing Frontiers will be rebranded as Supercomputing Frontiers Asia (SCFA), which serves as the technical programme for SCA18. The technical programme for SCA18 consists of four tracks: Application, Algorithms & Libraries; Programming & System Software; Architecture, Network/Communications & Management; and Data, Storage & Visualisation. The 20 papers presented in this volume were carefully reviewed and selected from 60 submissions.

Matrix Computations and Semiseparable Matrices

Author: Raf Vandebril
Publisher: JHU Press
Total Pages: 594
Release: 2008-01-14
Genre: Mathematics
ISBN: 0801896797

In recent years several new classes of matrices have been discovered and their structure exploited to design fast and accurate algorithms. In this new reference work, Raf Vandebril, Marc Van Barel, and Nicola Mastronardi present the first comprehensive overview of the mathematical and numerical properties of the family's newest member: semiseparable matrices. The text is divided into three parts. The first provides some historical background and introduces concepts and definitions concerning structured rank matrices. The second offers some traditional methods for solving systems of equations involving the basic subclasses of these matrices. The third section discusses structured rank matrices in a broader context, presents algorithms for solving higher-order structured rank matrices, and examines hybrid variants such as block quasiseparable matrices. An accessible case study clearly demonstrates the general topic of each new concept discussed. Many of the routines featured are implemented in Matlab and can be downloaded from the Web for further exploration.
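A brief sketch of why structured rank pays off (an illustration, not from the book, using the generator-representable subclass of semiseparable matrices): a matrix defined by two vectors u and v admits a matrix-vector product in O(n) via two prefix sums, instead of the O(n²) dense product.

```python
import numpy as np

def semisep_matvec(u, v, x):
    """O(n) product S @ x for the generator-representable semiseparable
    matrix with tril(S) = tril(u v^T) and triu(S, 1) = triu(v u^T, 1)."""
    lower = u * np.cumsum(v * x)            # u_i * sum_{j<=i} v_j x_j
    tail = np.cumsum((u * x)[::-1])[::-1]   # sum_{j>=i} u_j x_j
    upper = v * (tail - u * x)              # v_i * sum_{j>i}  u_j x_j
    return lower + upper
```

The same representation underlies the fast system solvers and eigenvalue algorithms for the broader structured rank classes (quasiseparable, block quasiseparable) discussed in the book.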

Iterative Solution of Large Sparse Systems of Equations

Author: Wolfgang Hackbusch
Publisher: Springer
Total Pages: 528
Release: 2016-06-21
Genre: Mathematics
ISBN: 3319284835

In the second edition of this classic monograph, complete with four new chapters and updated references, readers will now have access to content describing and analysing classical and modern methods with emphasis on the algebraic structure of linear iteration, which is usually ignored in other literature. The necessary amount of work increases dramatically with the size of systems, so one has to search for algorithms that most efficiently and accurately solve systems of, e.g., several million equations. The choice of algorithms depends on the special properties the matrices in practice have. An important class of large systems arises from the discretization of partial differential equations. In this case, the matrices are sparse (i.e., they contain mostly zeroes) and well-suited to iterative algorithms. The first edition of this book grew out of a series of lectures given by the author at the Christian-Albrecht University of Kiel to students of mathematics. The second edition includes quite novel approaches.
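The setting can be illustrated with a standard model problem (a hypothetical example, not the book's code): the finite-difference discretization of a 1D Poisson equation yields a sparse tridiagonal system, which an iterative method such as the conjugate gradient method solves using only matrix-vector products.

```python
import numpy as np
from scipy.sparse import diags

# Discretize -u'' = 1 on (0, 1), u(0) = u(1) = 0, with n interior points.
n = 200
h = 1.0 / (n + 1)
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)) / h**2
b = np.ones(n)                               # right-hand side f = 1

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    """Plain (unpreconditioned) CG for a symmetric positive definite
    system; A is only used through matrix-vector products."""
    x = np.zeros_like(b)
    r = b - A @ x                            # residual
    p = r.copy()                             # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = conjugate_gradient(A, b)
```

Because A has only O(n) nonzero entries, each iteration costs O(n) work; the book's subject is the analysis and acceleration (e.g. by preconditioning and multigrid) of exactly such iterations.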

Exploiting Hidden Structure in Matrix Computations: Algorithms and Applications

Author: Michele Benzi
Publisher: Springer
Total Pages: 413
Release: 2017-01-24
Genre: Mathematics
ISBN: 3319498878

Focusing on special matrices and matrices which are in some sense 'near' to structured matrices, this volume covers a broad range of topics of current interest in numerical linear algebra. Exploitation of these less obvious structural properties can be of great importance in the design of efficient numerical methods, for example algorithms for matrices with low-rank block structure, matrices with decay, and structured tensor computations. Applications range from quantum chemistry to queuing theory. Structured matrices arise frequently in applications. Examples include banded and sparse matrices, Toeplitz-type matrices, and matrices with semi-separable or quasi-separable structure, as well as Hamiltonian and symplectic matrices. The associated literature is enormous, and many efficient algorithms have been developed for solving problems involving such matrices. The text arose from a C.I.M.E. course held in Cetraro (Italy) in June 2015 which aimed to present this fast-growing field to young researchers, exploiting the expertise of five leading lecturers with different theoretical and application perspectives.

Multiscale Modeling and Simulation in Science

Author: Björn Engquist
Publisher: Springer Science & Business Media
Total Pages: 332
Release: 2009-02-11
Genre: Computers
ISBN: 3540888578

Most problems in science involve many scales in time and space. An example is turbulent flow, where the important large-scale quantities of lift and drag of a wing depend on the behavior of the small vortices in the boundary layer. Another example is chemical reactions with concentrations of the species varying over seconds and hours, while the time scale of the oscillations of the chemical bonds is of the order of femtoseconds. A third example, from structural mechanics, is the stress and strain in a solid beam, which is well described by macroscopic equations, but at the tip of a crack modeling details on a microscale are needed. A common difficulty with the simulation of these problems, and many others in physics, chemistry and biology, is that an attempt to represent all scales will lead to an enormous computational problem with unacceptably long computation times and large memory requirements. On the other hand, if the discretization at a coarse level ignores the fine-scale information, then the solution will not be physically meaningful. The influence of the fine scales must be incorporated into the model. This volume is the result of a Summer School on Multiscale Modeling and Simulation in Science held at Bosön, Lidingö, outside Stockholm, Sweden, in June 2007. Sixty PhD students from applied mathematics, the sciences and engineering participated in the summer school.