
Part I: The reduction of matrices

The first and second parts of this book focus on the translation of traditional eigenvalue algorithms, based on sparse matrices, to a structured rank approach. Let us make this precise. A traditional method for computing the eigenvalues of a symmetric matrix, for example, first transforms the matrix, by similarity, to tridiagonal form, and then computes the eigenvalues of this tridiagonal matrix via the well-known QR-algorithm. The translation of the reduction to a simpler form (tridiagonal in the traditional setting) is the topic of the first part, and the translation of the QR-algorithm (applied to tridiagonal matrices in the traditional setting) is the topic of the second part of this book.

Let us provide some more information on the first part. It is devoted to the development and study of new reduction algorithms, which reduce matrices not to a sparse form such as Hessenberg, bidiagonal or tridiagonal form, but to their analogues in the structured rank setting, i.e., Hessenberg-like, upper triangular semiseparable and semiseparable form. The algorithms are proposed, their properties relative to the traditional reduction algorithms are investigated, and their implementations, computational complexities and some numerical examples are discussed. Chapter 2 introduces the different reduction algorithms. The traditional algorithms for reducing matrices to sparse form are also revisited, to clarify the correspondences and differences with the newly proposed methods. First we discuss transformations of symmetric matrices to two specific forms, namely semiseparable and semiseparable plus diagonal. It will be shown that the reduction to semiseparable plus diagonal form is the most general one, covering in fact also the reductions to tridiagonal and semiseparable form.
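To fix ideas, here is a minimal numpy sketch of the traditional two-phase pipeline mentioned above: an orthogonal similarity reduction of a symmetric matrix to tridiagonal form via Householder transformations (the eigenvalues of the tridiagonal matrix would then be computed by the QR-algorithm). This illustrates only the classical sparse-form reduction, not the book's structured rank variants; the function name is my own.

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by Householder
    similarity transformations: Q.T @ A @ Q is tridiagonal and has
    the same eigenvalues as A."""
    A = A.astype(float).copy()
    n = A.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        x = A[k + 1:, k].copy()
        v = x.copy()
        # Householder vector annihilating all but the first entry of x
        v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        nv = np.linalg.norm(v)
        if nv < 1e-15:          # column already in the desired form
            continue
        v /= nv
        H = np.eye(n)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)
        A = H @ A @ H           # orthogonal similarity: eigenvalues preserved
        Q = Q @ H
    return Q, A
```

The second phase would then apply the QR-algorithm to the returned tridiagonal matrix; each QR step costs only O(n) there, which is why the reduction phase pays off.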
Applying the same reduction algorithm to nonsymmetric matrices leads to an orthogonal similarity transformation of arbitrary square matrices to Hessenberg-like form. Similarity transformations are not necessary if one wants to compute singular values; hence, a reduction method transforming arbitrary rectangular matrices into upper triangular semiseparable form is also proposed. To conclude the chapter, two extra sections propose methods to go from sparse to structured rank form and vice versa. For example, efficient algorithms for reducing a semiseparable plus diagonal matrix to a tridiagonal matrix are given. It was already mentioned that the reductions to structured rank form are slightly more costly than the reductions to sparse form. In Chapter 3, we discuss in more detail the effect of these extra transformations. We start by developing a general theory characterizing which similarity transformations yield the Lanczos-Ritz values as eigenvalues of the already reduced part. The theoretical results give necessary and sufficient conditions that a similarity transformation must satisfy to obtain this convergence behavior. We then investigate the extra convergence behavior due to the extra operations performed in the reduction procedure. This results in a kind of nested subspace iteration. Moreover, in the case of the reduction to semiseparable plus diagonal form, it is a kind of nested multishift iteration, in which the shifts depend on the diagonal initially chosen for the reduction procedure. Of course these two types of convergence behavior interact. It is proven that the subspace iteration starts working on the already reduced part once the Lanczos-Ritz values approximate particular values in the spectrum well enough.
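The Lanczos-Ritz values mentioned above are the eigenvalues of the small tridiagonal matrix produced by a Lanczos process; they approximate eigenvalues of the original matrix and, by the theory of Chapter 3, appear in the already reduced part of the matrix. The following is a plain Lanczos sketch (with full reorthogonalization, a choice made here purely for numerical robustness), not the book's reduction algorithm itself:

```python
import numpy as np

def lanczos_ritz(A, v0, m):
    """Run m steps of the symmetric Lanczos process on A, starting from v0,
    and return the Ritz values: the eigenvalues of the m-by-m tridiagonal
    matrix built from the recurrence coefficients."""
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        V[:, j] = v
        w = A @ v - b * v_prev
        alpha[j] = v @ w
        w -= alpha[j] * v
        # full reorthogonalization against all previous Lanczos vectors
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            v_prev = v
            v = w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)
```

After m = n steps (barring breakdown) the Ritz values coincide with the spectrum of A; for m < n they typically approximate the extreme eigenvalues first, which is the interaction the convergence theory of Chapter 3 analyzes.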
A tight upper bound is computed that predicts rather well the speed of convergence of the multishift subspace iteration, taking into account the interaction with the Lanczos-Ritz values. The last chapter of this first part gives some mathematical details of implementing these methods and shows some numerical experiments. First, some tools for operating with Givens transformations are provided. Secondly, the mathematical details of implementing some of the reduction procedures, based on the Givens-vector representation, are shown. The chapter concludes with numerical experiments illustrating the convergence behavior of the proposed reduction procedures. The main achievement of this part is the development of several reduction algorithms and the study of their (convergence) behavior. Chapter 2 proposes the different reduction algorithms that will be used throughout this book. The reductions to structured rank forms are the essence of this chapter; they are discussed in Theorems 2.6, 2.9, 2.13 and 2.17. Chapter 3 discusses the convergence properties of the designed reduction algorithms. Two...
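The basic Givens tool referred to above is the computation of a plane rotation that annihilates one entry of a 2-vector; such rotations are the building blocks of both the reduction procedures and the Givens-vector representation. A minimal sketch, using one common sign convention (conventions vary between texts and libraries):

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that the rotation [[c, s], [-s, c]] maps the
    vector (a, b) to (r, 0) with r = hypot(a, b)."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)  # robust against overflow, unlike sqrt(a*a + b*b)
    return a / r, b / r
```

Applied to a pair of adjacent rows or columns, such a rotation zeros a chosen entry while preserving norms, which is exactly how the reduction algorithms chase unwanted entries out of the matrix.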
