Appendix A

Sensitivity Analysis

A.1 Floating-point arithmetic

The classic reference for finite precision arithmetic is Wilkinson's monograph "Rounding Errors in Algebraic Processes" [255], while a more recent treatment is Higham's "Accuracy and Stability of Numerical Algorithms" [128]. Almost any numerical analysis book has an introductory chapter on this topic. Here we list some of the basic ideas used in our text.

Digital computers use a floating-point representation for real and complex numbers based on the binary system, i.e., the base is 2. Real numbers are rewritten in a special normalized form, where the mantissa is less than 1. Usually there is the option to use single (t-digit) or double (2t-digit) length mantissa representation and arithmetic. If we denote by fl(x) the floating-point computer representation of a real number x, and by ⊕ the floating-point addition, then the unit round-off μ (for a given computer) is defined as the smallest ε such that, in floating-point arithmetic,

fl(1) ⊕ ε > fl(1).

For a binary t-digit floating-point system, μ = 2^(−t). The machine epsilon ε_M = 2μ is the gap between 1 and the next larger floating-point number (and thus, in a relative sense, gives an indication of the gap between the floating-point numbers). Several of the bounds in this book contain the unit round-off or the machine precision; it is therefore advisable to check the size of ε_M for a particular machine and word length. A small Fortran program to compute the machine precision in double precision is available from Netlib and can easily be adapted to single precision.

Representation error. The relative error in the computer representation fl(x) of a real number x ≠ 0 satisfies

|fl(x) − x| / |x| ≤ μ,

implying that fl(x) ∈ [x(1 − ε_M), x(1 + ε_M)].

Rounding error. The error in a given floating-point operation ⊛, corresponding to the real operation ∗, satisfies

fl(x) ⊛ fl(y) = (x ∗ y)(1 + ε), with |ε| ≤ μ.
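For instance, the machine epsilon can be computed in a few lines of Python, in the same spirit as that Netlib routine (a minimal sketch; the function name machine_epsilon is ours):

import sys

def machine_epsilon():
    # Halve eps until adding half of it to 1 no longer changes the result;
    # the last eps that still made a difference is the gap between 1 and
    # the next larger floating-point number, i.e., eps_M = 2 * mu.
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    return eps

print(machine_epsilon())       # 2.220446049250313e-16 = 2^(-52) for IEEE double
print(sys.float_info.epsilon)  # Python's built-in value, for comparison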
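The rounding-error model can likewise be checked empirically. The sketch below (again an illustration of ours, not the book's code) uses exact rational arithmetic as the reference and confirms that the relative error of floating-point addition stays within the unit round-off:

from fractions import Fraction
import random
import sys

mu = sys.float_info.epsilon / 2.0  # unit round-off for IEEE double precision

worst = 0.0
for _ in range(10000):
    x = random.uniform(-1e6, 1e6)
    y = random.uniform(-1e6, 1e6)
    exact = Fraction(x) + Fraction(y)  # exact sum of the two stored values
    if exact == 0:
        continue
    # Relative error of the floating-point sum fl(x) (+) fl(y):
    err = abs((Fraction(x + y) - exact) / exact)
    worst = max(worst, float(err))

print(worst <= mu)  # True: |eps| <= mu in fl(x) (+) fl(y) = (x + y)(1 + eps)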
To measure the cost of the different algorithms described in the book we use the flop as the unit. A word of warning though: its definition differs from one author to another; here we follow the one used in [105, 128], which is also common in many articles in the literature.

Definition 1. A flop is roughly the work associated with a floating-point operation (addition, subtraction, multiplication, or division).

In March 2011 the cheapest cost per Gigaflop (10^9 flops) was $1.80, achieved on the computer cluster HPU4Science, made of six dual Core 2 Quad off-the-shelf machines at a cost of $30,000, with performance enhanced by combining the CPUs with the graphics processing units. In comparison, the cost in 1984 was $15 million on a Cray X-MP.

A.2 Stability, conditioning and accuracy

A clear and concise review of these topics can be found in [57, 128, 237]. One general comment first: given a t-digit arithmetic, there is a limit to the attainable accuracy of any computation, because even the data themselves may not be representable by a t-digit number. Additionally, in practical applications one should not lose sight of the fact that the data, usually derived from observations, often carry a physical error much larger than the one produced by the floating-point representation.

Let us formally define a mathematical problem by a function that relates a data space X with a solution space Y, i.e., P : X (data) → Y (solutions). Let us also define a specific algorithm for this problem as a function P̃ : X → Y. One is interested in evaluating how close the solution computed by the algorithm is to the exact solution of the mathematical problem. The accuracy will depend on the sensitivity of the mathematical problem P to perturbations of its data, the condition of the problem, and on the sensitivity of the algorithm P̃ to perturbations of the input data, the stability of the algorithm.

The condition of the mathematical problem is commonly measured by the condition number κ(x). We emphasize the problem dependency; so, for example, the same matrix A may give rise to an ill-conditioned least squares problem and a well-conditioned eigenvector problem. A formal definition of the condition number follows.

Definition 2. The condition number is defined by

κ(x) = sup_{δx} ( ‖P(x + δx) − P(x)‖₂ / ‖P(x)‖₂ ) / ( ‖δx‖₂ / ‖x‖₂ ) …
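To make Definition 2 concrete, the following Python sketch (ours; the matrix, perturbation size, and sample count are arbitrary choices) estimates κ for the linear problem P(b) = A⁻¹b with a fixed, notoriously ill-conditioned Hilbert matrix, by sampling random perturbations δb. The sampled maximum is a lower bound on the supremum; for this problem it can never exceed cond(A) = ‖A‖₂ ‖A⁻¹‖₂, which NumPy reports for comparison:

import numpy as np

n = 8
# Hilbert matrix, a classic ill-conditioned example: A[i, j] = 1 / (i + j + 1).
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)

x_true = np.ones(n)      # choose the data b so that the exact solution is known
b = A @ x_true
x = np.linalg.solve(A, b)

rng = np.random.default_rng(0)
kappa_est = 0.0
for _ in range(2000):
    # Random data perturbation; since P is linear, the size of db is immaterial.
    db = 1e-8 * rng.standard_normal(n)
    dx = np.linalg.solve(A, b + db) - x
    ratio = ((np.linalg.norm(dx) / np.linalg.norm(x))
             / (np.linalg.norm(db) / np.linalg.norm(b)))
    kappa_est = max(kappa_est, ratio)  # empirical sup over the sampled db

print("sampled estimate of kappa:", kappa_est)  # lower bound on the sup
print("cond(A) = ||A|| ||A^-1|| :", np.linalg.cond(A))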
