
Appendix B
Linear Algebra Background

B.1 Norms

Vector norms. The most commonly used norms for vectors are the $l_1$-, $l_2$-, and $l_\infty$-norms, denoted by $\|\cdot\|_1$, $\|\cdot\|_2$, and $\|\cdot\|_\infty$, respectively. These norms are defined by:

• $\|v\|_1 = \sum_{i=1}^{n} |v_i|$.
• $\|v\|_2 = \left( \sum_{i=1}^{n} |v_i|^2 \right)^{1/2} = (v^T v)^{1/2}$ (Euclidean norm).
• $\|v\|_\infty = \max_{1 \le i \le n} |v_i|$ (Chebyshev norm).

A useful relationship between an inner product and the $l_2$-norms of its factors is the Cauchy-Schwarz inequality: $|x^T y| \le \|x\|_2 \, \|y\|_2$.

Norms are continuous functions of the entries of their arguments. It follows that a sequence of vectors $x_0, x_1, \ldots$ converges to a vector $x$ if and only if $\lim_{k \to \infty} \|x_k - x\| = 0$ for any norm.

Matrix norms. A natural definition of a matrix norm is the so-called induced or operator norm, which, starting from a vector norm $\|\cdot\|$, defines the matrix norm as the maximum amount by which the matrix $A$ can stretch a unit vector, or more formally: $\|A\| = \max_{\|v\|=1} \|Av\|$. Thus, the induced norms associated with the usual vector norms are:

• $\|A\|_1 = \max_j \sum_{i=1}^{n} |a_{ij}|$ (maximum absolute column sum).
• $\|A\|_2 = \left[ \max \lambda(A^T A) \right]^{1/2}$, the square root of the largest eigenvalue of $A^T A$.
• $\|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{ij}|$ (maximum absolute row sum).

In addition, the so-called Frobenius norm (the Euclidean length of $A$ considered as an $nm$-vector) is

• $\|A\|_F = \left( \sum_{j=1}^{n} \sum_{i=1}^{m} |a_{ij}|^2 \right)^{1/2} = \left[ \mathrm{trace}(A^T A) \right]^{1/2}$.

For square orthogonal matrices $Q \in \mathbb{R}^{n \times n}$ we have $\|Q\|_2 = 1$ and $\|Q\|_F = \sqrt{n}$. Both the Frobenius and the matrix $l_2$-norms are compatible with the Euclidean vector norm. This means that $\|Ax\| \le \|A\| \, \|x\|$ holds when the $l_2$-norm is used for the vectors and either the $l_2$- or the Frobenius norm is used for the matrix. Both are also invariant with respect to orthogonal transformations $Q$: $\|QA\|_2 = \|A\|_2$ and $\|QA\|_F = \|A\|_F$.

In terms of the singular values of $A$, the $l_2$-norm can be expressed as $\|A\|_2 = \max_i \sigma_i = \sigma_1$, where $\sigma_i$, $i = 1, \ldots, \min(m, n)$, are the singular values of $A$ in descending order of size. In the special case of symmetric matrices the $l_2$-norm reduces to $\|A\|_2 = \max_i |\lambda_i|$, with $\lambda_i$ an eigenvalue of $A$. This quantity is also called the spectral radius of the matrix $A$.
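The vector norms above can be sketched in a few lines of plain Python; the function names (`norm1`, `norm2`, `norm_inf`) and the sample vectors are illustrative choices, not part of the text.

```python
import math

def norm1(v):
    # l1-norm: sum of absolute values
    return sum(abs(x) for x in v)

def norm2(v):
    # l2 (Euclidean) norm: (v^T v)^(1/2)
    return math.sqrt(sum(x * x for x in v))

def norm_inf(v):
    # l-infinity (Chebyshev) norm: largest absolute entry
    return max(abs(x) for x in v)

v = [3.0, -4.0]
print(norm1(v))     # 7.0
print(norm2(v))     # 5.0
print(norm_inf(v))  # 4.0

# Cauchy-Schwarz: |x^T y| <= ||x||_2 * ||y||_2
x, y = [1.0, 2.0, 3.0], [-1.0, 0.5, 2.0]
dot = sum(a * b for a, b in zip(x, y))
assert abs(dot) <= norm2(x) * norm2(y)
```

Note that for the same vector the three norms can differ, but they are equivalent in the sense used above: convergence in one implies convergence in all of them.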
B.2 Condition number

The condition number of a general matrix $A$ in the norm $\|\cdot\|_p$ is $\kappa_p(A) = \|A\|_p \, \|A^\dagger\|_p$. For a vector-induced norm, the condition number of $A$ is the ratio of the maximum to the minimum stretch produced by the linear transformation represented by this matrix, and it is therefore greater than or equal to 1. In the $l_2$-norm, $\kappa_2(A) = \sigma_1 / \sigma_r$, where $\sigma_r$ is the smallest nonzero singular value of $A$ and $r$ is the rank of $A$. In finite-precision arithmetic, a large condition number can be an indication that the "exact" matrix is close to singular, as some of the zero singular values may be represented by very small numbers.

B.3 Orthogonality

The notation used for the inner product of vectors is $v^T w = \sum_i v_i w_i$. Note that $\|v\|_2^2 = v^T v$. If $v, w \ne 0$ and $v^T w = 0$, then these vectors are orthogonal; they are orthonormal if in addition they have unit length.

Orthogonal matrices. A square matrix $Q$ is orthogonal if $Q^T Q = I$ or $Q Q^T = I$, i.e., the columns or rows are orthonormal vectors, and thus $\|Q\|_2 = 1$. It follows that orthogonal matrices represent isometric transformations that can only change the direction of a vector (rotation, reflection), but not its Euclidean norm, a reason for their practical importance: $\|Qv\|_2 = \|v\|_2$ and $\|QA\|_2 = \|AQ\|_2 = \|A\|_2$.

Permutation matrix. A permutation matrix is an identity matrix with permuted rows or columns. It is orthogonal, and products of permutation matrices are again permutation matrices.

Orthogonal projection onto a subspace of an inner product space. Given an orthonormal basis $\{u_1, u_2, \ldots, u_n\}$ of a subspace $S \subseteq X$, where $X$ is an inner product space, the orthogonal projection $P : X \to S$ satisfies $Px = \sum_{i=1}^{n} (x^T u_i) \, u_i$. The operator $P$ is linear and satisfies $Px = x$ if $x \in S$ (idempotent) and $\|Px\|_2 \le \|x\|_2$ for all $x \in X$. Therefore, the associated square matrix $P$ is an orthogonal projection matrix if it is...
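As a sketch of the $l_2$ condition number, one can compute $\kappa_2(A) = \sigma_1 / \sigma_r$ directly from the singular values; the matrix below and the tolerance used to discard numerically zero singular values are assumptions for illustration, and NumPy's SVD is used in place of any particular routine from the text.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Singular values are returned in descending order: sigma_1 >= sigma_2 >= ...
sigma = np.linalg.svd(A, compute_uv=False)

# Keep only the nonzero singular values (relative tolerance is a choice)
nonzero = sigma[sigma > 1e-12 * sigma[0]]
kappa2 = nonzero[0] / nonzero[-1]  # sigma_1 / sigma_r

print(kappa2)  # approximately 14.93 for this A

# Cross-check against NumPy's built-in 2-norm condition number
assert np.isclose(kappa2, np.linalg.cond(A, 2))
```

A value of $\kappa_2$ near 1 indicates a well-conditioned matrix; very large values warn that small perturbations of the data can produce large changes in the solution of $Ax = b$.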

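The projection formula $Px = \sum_i (x^T u_i)\,u_i$ and its two stated properties (idempotence and $\|Px\|_2 \le \|x\|_2$) can be checked numerically; the particular orthonormal basis and test vector below are illustrative assumptions.

```python
import numpy as np

# Orthonormal basis of a 2-D subspace S of R^3 (chosen for illustration)
u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
u2 = np.array([0.0, 0.0, 1.0])
basis = [u1, u2]

def project(x, basis):
    # Px = sum over basis vectors of (x^T u_i) u_i
    return sum((x @ u) * u for u in basis)

x = np.array([1.0, 2.0, 3.0])
Px = project(x, basis)

# Idempotent: projecting a vector already in S changes nothing
assert np.allclose(project(Px, basis), Px)

# Non-expansive: ||Px||_2 <= ||x||_2
assert np.linalg.norm(Px) <= np.linalg.norm(x)
```

Both assertions rely on the basis being orthonormal; with a non-orthonormal basis the same formula would not yield an orthogonal projection.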