
Chapter 7

Additional Topics in Least Squares

In this chapter we collect some more specialized topics, such as problems with constraints, sensitivity analysis, total least squares and compressed sensing.

7.1 Constrained linear least squares problems

The inclusion of constraints in the linear least squares problem is often a convenient way to incorporate a priori knowledge about the problem. Linear equality constraints are the easiest to deal with, and their solution is an important part of solving problems with inequality constraints. Bound constraints and quadratic constraints for linear least squares are also considered in this chapter.

Least squares with linear constraints

Linear equality constraints (LSE)

The general form of a linear least squares problem with linear equality constraints is as follows: find a vector $x \in \mathbb{R}^n$ that minimizes $\|b - Ax\|_2$ subject to the constraints $C^T x = d$, where $C$ is a given $n \times p$ matrix with $p \le n$ and $d$ is a given vector of length $p$. This results in the LSE problem:

Problem LSE:
$$\min_x \|b - Ax\|_2 \quad \text{subject to} \quad C^T x = d. \tag{7.1.1}$$

A solution exists if $C^T x = d$ is consistent, which is the case if $\operatorname{rank}(C) = p$. For simplicity, we assume that this is satisfied; Björck ([20], p. 188) explains ways to proceed when there is no a priori knowledge about consistency. Furthermore, the minimizing solution is unique if the augmented matrix
$$A_{\mathrm{aug}} = \begin{pmatrix} C^T \\ A \end{pmatrix} \tag{7.1.2}$$
has full rank $n$.

The idea behind the different algorithms for the LSE problem is to reduce it to an unconstrained (if possible lower-dimensional) LSQ problem. Lagrange multipliers are of course an option, but we will instead describe two more direct methods based on orthogonal transformations, each with different advantages.

One option is to reformulate the LSE problem as a weighted LSQ problem, assigning large weights (or penalties) to the constraints, thus enforcing that they are almost satisfied:
$$\min_x \left\| \begin{pmatrix} \lambda C^T \\ A \end{pmatrix} x - \begin{pmatrix} \lambda d \\ b \end{pmatrix} \right\|_2, \qquad \lambda \ \text{large.} \tag{7.1.3}$$
Using the generalized singular value decomposition (GSVD), it can be proved that if $x(\lambda)$ is the solution to (7.1.3), then $\|x(\lambda) - x\|_2 = O(\lambda^{-2})$, so that in fact $x(\lambda) \to x$ as $\lambda \to \infty$. The LU factorization algorithm from Section 4.2 is well suited to solve this problem, because $p$ steps of Gaussian elimination will usually produce a well-conditioned $L$ matrix. Although in principle only a general LSQ solver is needed, there are numerical difficulties because the matrix becomes poorly conditioned for increasing values of $\lambda$. However, a strategy described in [150], based on Householder transformations combined with appropriate row and column interchanges, has been found to give satisfactory results. In practice, as [20] mentions, if one uses the LSE equations in the form (7.1.3), it is sufficient to apply a QR factorization with column permutations.

To get an idea of the size of the weight $\lambda$, we refer to an example by Van Loan described in [20], p. 193. One can obtain almost 14 digits of accuracy with a weight of $\lambda = 10^7$ using a standard QR factorization with permutations (e.g., MATLAB's function "qr"), provided the constraints are placed in the first rows. Reversing the order of the equations, though, gives only 10 digits for the same weight $\lambda$, and increasing $\lambda$ further only degrades the computed solution.
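To make the weighting approach concrete, here is a minimal sketch in Python/NumPy of solving (7.1.3) as an ordinary unconstrained LSQ problem. The matrices and vectors (A, b, C, d) and the dimensions (m, n, p) are random stand-ins, not data from the text, and NumPy's generic SVD-based solver lstsq is used in place of the permuted QR factorization discussed above; the weighted constraint rows are stacked first, as in Van Loan's example.

```python
import numpy as np

# Sketch of the weighting approach (7.1.3): stack the weighted constraints
# lambda * C^T x = lambda * d on top of A x = b and solve the resulting
# unconstrained least squares problem with a generic solver.
rng = np.random.default_rng(0)
m, n, p = 8, 5, 2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
C = rng.standard_normal((n, p))   # constraints C^T x = d, rank(C) = p
d = rng.standard_normal(p)

lam = 1e7                          # large weight, cf. Van Loan's example
A_stacked = np.vstack([lam * C.T, A])     # constraints in the FIRST rows
rhs = np.concatenate([lam * d, b])
x_weighted, *_ = np.linalg.lstsq(A_stacked, rhs, rcond=None)

# The constraints are only approximately satisfied: the error behaves
# like O(lambda^{-2}) as lambda grows, until ill-conditioning takes over.
print("constraint residual:", np.linalg.norm(C.T @ x_weighted - d))
```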
In addition, Björck [20] mentions a QR decomposition based on self-scaling Givens rotations that can be used without the "risk of overshooting the optimal weights."

Another commonly used algorithm eliminates some of the variables directly by means of the constraints. The actual steps to solve problem (7.1.1) are as follows (see the sketch after the list):

1. Compute the QR factorization of $C$ to obtain
$$C = Q \begin{pmatrix} R_1 \\ 0 \end{pmatrix} \quad \Longleftrightarrow \quad C^T = \begin{pmatrix} R_1^T & 0 \end{pmatrix} Q^T.$$

2. Use the orthogonal matrix to define new variables
$$\begin{pmatrix} u \\ v \end{pmatrix} = Q^T x \quad \Longleftrightarrow \quad x = Q \begin{pmatrix} u \\ v \end{pmatrix} \tag{7.1.4}$$
and also to compute the value of the unknown $p$-vector $u$ by solving the lower triangular system $R_1^T u = d$.

3. Introduce the new variables into the residual norm:
$$\|b - Ax\|_2 = \|b - AQQ^T x\|_2 \equiv \left\| b - \widetilde{A} \begin{pmatrix} u \\ v \end{pmatrix} \right\|_2,$$
where the matrix $\widetilde{A} = AQ$ is partitioned according to the dimensions of $\begin{pmatrix} u \\ v \end{pmatrix}$ as $\widetilde{A} = (\widetilde{A}_1 \ \ \widetilde{A}_2)$, allowing the reduced $(n-p)$-dimensional LSQ problem to be written as
$$\min_v \left\| (b - \widetilde{A}_1 u) - \widetilde{A}_2 v \right\|_2.$$
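The following is a minimal sketch of this direct-elimination algorithm, following steps 1-3 above, again in Python with the same random stand-in data; the use of scipy.linalg.qr and solve_triangular is a convenience of this sketch, not the book's own code.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

# Sketch of direct elimination for problem (7.1.1).
rng = np.random.default_rng(0)
m, n, p = 8, 5, 2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
C = rng.standard_normal((n, p))   # constraints C^T x = d, rank(C) = p
d = rng.standard_normal(p)

# Step 1: QR factorization C = Q [R1; 0], with R1 upper triangular (p x p).
Q, R = qr(C)                      # Q is n x n, R is n x p
R1 = R[:p, :]

# Step 2: solve the lower triangular system R1^T u = d for the p-vector u.
u = solve_triangular(R1.T, d, lower=True)

# Step 3: form A~ = A Q, partition it as (A1~  A2~), and solve the reduced
# (n - p)-dimensional problem  min_v || (b - A1~ u) - A2~ v ||_2.
At = A @ Q
A1, A2 = At[:, :p], At[:, p:]
v, *_ = np.linalg.lstsq(A2, b - A1 @ u, rcond=None)

# Recover x = Q [u; v].
x = Q @ np.concatenate([u, v])
print("constraint residual:", np.linalg.norm(C.T @ x - d))
```

In contrast to the weighting approach, here the constraints are satisfied to roughly machine precision regardless of any weight parameter, since $u$ is determined directly from $R_1^T u = d$ and only the unconstrained part $v$ is fitted.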
