
Chapter 2

Algorithms for reducing matrices

It is known from the introductory chapter that semiseparable and tridiagonal matrices are closely related to each other: invertible semiseparable matrices have a tridiagonal matrix as inverse and vice versa. One might wonder whether this close relation between these two classes of matrices can be extended, for example, whether one can also reduce any symmetric matrix via orthogonal similarity transformations into a similar semiseparable one. In this chapter we construct algorithms for reducing matrices to different structured rank forms. We propose algorithms for reducing matrices to upper triangular semiseparable, semiseparable and semiseparable plus diagonal form. The aim of these reductions is to exploit the rank structure in the development of eigenvalue/singular value algorithms.

In the first, introductory section we briefly discuss the nomenclature of transformations. In the second section we discuss three types of orthogonal similarity transformations applied to symmetric matrices. Firstly we introduce the well-known tridiagonalization of a symmetric matrix. Secondly we show that the semiseparabilization of a symmetric matrix is closely related to the previously defined tridiagonalization process. Finally we explore the semiseparabilization further and adapt this procedure to arrive at an orthogonal similarity transformation reducing a symmetric matrix to semiseparable plus diagonal form.

The third section focuses on applying this reduction scheme to nonsymmetric matrices. We show the similarity transformation of matrices into Hessenberg and Hessenberg-like form. A Hessenberg-like matrix can be considered the analogue of a Hessenberg matrix for the structured rank case.

The second and the third section focus attention on similarity transformations. Similarity transformations are useful if the aim is to compute the eigenvalues of the original matrix.
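The inverse relation mentioned above is easy to verify numerically. The following sketch (a didactic NumPy check, not taken from the text) builds a symmetric, diagonally dominant tridiagonal matrix and confirms that every 2 × 2 submatrix taken out of the lower triangular part of its inverse is singular, which is exactly the semiseparable rank structure:

```python
import numpy as np

# Didactic check: the inverse of an invertible symmetric tridiagonal matrix
# is semiseparable, i.e. every submatrix taken out of its lower (or upper)
# triangular part has rank at most 1.
n = 6
rng = np.random.default_rng(0)
off = rng.uniform(0.1, 1.0, n - 1)
T = np.diag(rng.uniform(2.0, 3.0, n)) + np.diag(off, 1) + np.diag(off, -1)
S = np.linalg.inv(T)           # T is diagonally dominant, hence invertible

# A 2x2 submatrix with rows i < j and columns k < l lies entirely in the
# lower triangular part of S precisely when i >= l; its determinant (a 2x2
# minor) must then vanish if S is semiseparable.
max_minor = max(
    abs(S[i, k] * S[j, l] - S[i, l] * S[j, k])
    for i in range(n) for j in range(i + 1, n)
    for k in range(n) for l in range(k + 1, n)
    if i >= l
)
assert max_minor < 1e-10       # rank-1 structure holds up to rounding
```

The same check succeeds on the upper triangular part by symmetry; for an unsymmetric tridiagonal matrix the two triangular parts of the inverse carry different rank-1 generators.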
If one wants to compute the singular values, however, there is no reason to stick to similarity transformations. Therefore, we propose in this section orthogonal transformations leading to a simple form, useful for computing the singular values. In correspondence with the reduction of an arbitrary matrix to a bidiagonal one by orthogonal transformations, we construct an algorithm that reduces a matrix to an upper triangular semiseparable one.

In Sections 2.5 and 2.6 algorithms are presented for transforming matrices from sparse form to structured rank form and vice versa.¹ Both similarity transformations and the other type of transformations are discussed, for transforming tridiagonal to semiseparable (plus diagonal) form and for transforming bidiagonal to upper triangular semiseparable form, and vice versa.

✎ The reader familiar with the standard reductions to sparse matrix format, such as to tridiagonal, Hessenberg and bidiagonal form (described in Theorems 2.5, 2.12 and 2.15), can omit the discussion of these reduction algorithms and proceed immediately to the new reduction methods to structured rank format. The following reductions are the key algorithms for this chapter: the reduction to semiseparable form in Theorem 2.6; the reduction to semiseparable plus diagonal form in Theorem 2.9; the reduction to Hessenberg-like form in Theorem 2.13; and the reduction to upper triangular semiseparable form in Theorem 2.17. These algorithms form the basis of the forthcoming chapters. For completeness, reductions from sparse to structured rank form and from structured rank to sparse form are also included (Sections 2.5 and 2.6). These are, however, not essential for the remainder of the book.

2.1 Introduction

Before focusing attention on the various reduction algorithms, we briefly explain some tools and notations used throughout this chapter.
We start by introducing the different types of transformations that will be used, such as similarity and equivalence transformations. Secondly we introduce two popular types of orthogonal transformations, namely the Givens and the Householder transformations.

2.1.1 Equivalence transformations

We briefly repeat here the standard nomenclature related to the transformations discussed in this book.

Definition 2.1. A matrix² A ∈ R^{n×n} is said to be equivalent with a matrix B ∈ R^{n×n} if there exist two nonsingular matrices C and D such that

C^{-1} A D = B.

This transformation is called an equivalence transformation.

Equivalence transformations come in handy when computing the singular values of matrices, as orthogonally equivalent matrices (with C and D orthogonal) have the same singular values.

¹ Remember the convention from the preceding chapter. By sparse form we mean tridiagonal, bidiagonal and Hessenberg form; by structured rank form we mean semiseparable, upper triangular semiseparable and Hessenberg-like form.
² Even though in this book we will mainly focus on real matrices, most of...
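As a small illustration of these notions (a minimal NumPy sketch, not from the book): a Givens rotation is an orthogonal transformation chosen to annihilate a single entry; embedding it in an identity matrix and applying it from the left is an orthogonal equivalence transformation (C orthogonal, D = I), which leaves the singular values untouched:

```python
import numpy as np

def givens(a, b):
    """Return c, s such that [[c, s], [-s, c]] maps (a, b) to (r, 0)."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# Rotation acting on rows 1 and 2, chosen to annihilate A[2, 0].
c, s = givens(A[1, 0], A[2, 0])
G = np.eye(4)
G[1, 1], G[1, 2] = c, s
G[2, 1], G[2, 2] = -s, c

B = G @ A                      # orthogonal equivalence: C = G^T, D = I
assert abs(B[2, 0]) < 1e-14    # the targeted entry is zeroed
assert np.allclose(np.linalg.svd(A, compute_uv=False),
                   np.linalg.svd(B, compute_uv=False))
```

A Householder reflector I − 2vv^T plays the same role but annihilates a whole column segment at once; a Givens rotation is preferred when only a single entry must be removed.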
