
Chapter 5
Introduction: traditional sparse QR-algorithms

This chapter reviews the theory of QR-algorithms for sparse matrices. All the ingredients needed for designing such algorithms will be investigated. In the first section we reconsider what is meant by a QR-factorization and a QR-algorithm. It turns out that applying the QR-algorithm directly to dense matrices is very expensive. We know from Part I, however, that we can reduce matrices via orthogonal similarity transformations to sparse form, such as Hessenberg or tridiagonal form. Therefore, in Section 5.2 we investigate how to efficiently compute the QR-factorization of a Hessenberg matrix. Moreover, it will be shown that the structure of a Hessenberg matrix is preserved after performing a step of the QR-method. This means that we can apply the QR-algorithm to Hessenberg matrices with a reduced computational complexity (a small numerical check of this preservation follows this overview). Rewriting the formulas that determine the QR-algorithm, we see that we can also apply a similarity transformation to the matrix without computing the upper triangular factor R. In Section 5.3 we will see that, based on the first Givens transformation of the QR-factorization, we are able to perform a complete step of the QR-method without having to compute the QR-factorization itself. This technique is called implicit. To prove the correctness of the presented approach, an implicit Q-theorem is needed. The last section of this chapter pays attention to the QR-algorithm for computing the singular values. That algorithm is an adaptation of the QR-method for tridiagonal matrices. The global algorithm also consists of two steps. First, the matrix is reduced to upper bidiagonal form, to reduce the complexity of the second step. In the second step, an adaptation of the traditional QR-algorithm is applied to the upper bidiagonal matrix to make it converge to a diagonal matrix containing the singular values on its diagonal.

✎ This chapter repeats well-known results related to QR-algorithms (for sparse matrices). The sections and subsections in this chapter are ordered similarly to those in the forthcoming two chapters, so that the similarities and differences between the approaches can be seen clearly. The chapter is unessential for the reader familiar with the notions of QR-algorithm, implicit QR-algorithm, QR-factorization, chasing technique, implicit Q-theorem, unreducedness, preservation of the Hessenberg structure under a step of the QR-method, and the implicit QR-algorithm for bidiagonal matrices for computing the singular values. For those interested in a brief repetition of these results, we present a concise summary of the most important ones. The computation of the QR-factorization is described in Subsection 5.1.1. The QR-algorithm itself is presented in Subsection 5.1.2. Both subsections illustrate the heavy computational complexity of the QR-algorithm when no structure of the matrices is exploited. Adaptations towards sparse matrices are discussed in Section 5.2: successively, the QR-factorization and the preservation of the structure are treated. How to obtain an implicit QR-algorithm is discussed in Section 5.3; the idea behind an implicit QR-algorithm, the chasing technique and the implicit Q-theorem, which proves the correctness of the followed approach, are discussed in this order. Especially the definition of unreducedness, discussed in Subsection 5.3.2, and the implicit Q-theorem (Theorem 5.5) are important. To conclude, the ideas behind the QR-algorithm for computing the singular values are also included (Section 5.4).
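To make the preceding overview concrete, here is a minimal NumPy sketch, not taken from the book: the function names givens and hessenberg_qr_step are ours. It performs one explicit shifted QR step on a Hessenberg matrix using n-1 Givens rotations and checks that the Hessenberg pattern indeed survives the step.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def hessenberg_qr_step(H, mu):
    """One explicit shifted QR step H - mu*I = QR, returning RQ + mu*I.

    Only n-1 Givens rotations are needed to annihilate the subdiagonal
    of a Hessenberg matrix, so one step costs O(n^2) instead of O(n^3).
    """
    n = H.shape[0]
    R = H - mu * np.eye(n)
    rotations = []
    for k in range(n - 1):                      # QR-factorization phase
        c, s = givens(R[k, k], R[k + 1, k])
        G = np.array([[c, s], [-s, c]])
        R[k:k + 2, k:] = G @ R[k:k + 2, k:]     # zeroes out R[k+1, k]
        rotations.append((c, s))
    for k, (c, s) in enumerate(rotations):      # form RQ from the right
        G = np.array([[c, s], [-s, c]])
        R[:k + 2, k:k + 2] = R[:k + 2, k:k + 2] @ G.T
    return R + mu * np.eye(n)

# The Hessenberg pattern survives the step (up to rounding).
rng = np.random.default_rng(0)
H = np.triu(rng.standard_normal((5, 5)), -1)    # random Hessenberg matrix
H1 = hessenberg_qr_step(H, mu=H[-1, -1])        # simple Rayleigh-type shift
print(np.allclose(np.tril(H1, -2), 0.0))        # True
```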
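The relation exploited by the bidiagonal variant of Section 5.4 can also be checked numerically: for an upper bidiagonal matrix B, the product B^T B is symmetric tridiagonal, and its eigenvalues are the squares of the singular values of B, which is why that method is an adaptation of the tridiagonal QR-method. A minimal check (again ours, not the book's):

```python
import numpy as np

B = np.diag([3.0, 2.0, 1.0]) + np.diag([0.5, 0.4], 1)    # upper bidiagonal
singular_values = np.linalg.svd(B, compute_uv=False)      # descending order
eigenvalues = np.sort(np.linalg.eigvalsh(B.T @ B))[::-1]  # B^T B is tridiagonal
print(np.allclose(singular_values**2, eigenvalues))       # True
```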
5.1 On the QR-algorithm

In this section we briefly explain the QR-factorization and the QR-algorithm (with shift).

5.1.1 The QR-factorization

Let us start by briefly recalling what is meant by a QR-factorization. Suppose we have a matrix A and a factorization of the form

A = QR,

with Q a matrix having orthonormal columns and R an upper triangular matrix. If Q is a square matrix, and hence orthogonal, this is called the full QR-factorization (in this case R is rectangular). Otherwise, if Q has the same number of columns as A, this is called the thin (or reduced) QR-factorization (in this case R is square). In fact this means that, for A of full column rank, the first k columns of A span the same space as the first k columns of Q. This leads to the following equation, which we already met in Part I: A[e1, e2, . . . , ek] = [q1...
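The excerpt breaks off mid-equation here. As a hedged illustration of the full versus thin factorizations and of the span property just stated, the following sketch is ours and uses NumPy's two QR modes rather than the book's notation.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))            # tall matrix with full column rank

Qf, Rf = np.linalg.qr(A, mode="complete")  # full: Q is 6x6, R is 6x3 (rectangular)
Qt, Rt = np.linalg.qr(A, mode="reduced")   # thin: Q is 6x3, R is 3x3 (square)

# Span property: the first k columns of A lie in span{q1, ..., qk},
# so projecting them onto that span reproduces them exactly.
k = 2
Qk = Qt[:, :k]
print(np.allclose(Qk @ (Qk.T @ A[:, :k]), A[:, :k]))   # True
```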
