Accuracy and Stability of Numerical Algorithms

Accuracy and Stability of Numerical Algorithms is a print (photographic reprint) edition published by Tsinghua University Press, written by Nicholas J. Higham. The book is devoted to the accuracy and stability of numerical algorithms.

Overview

  • Title: Accuracy and Stability of Numerical Algorithms
  • Pages: 680
  • Publication date: 1st edition (February 1, 2011)
  • Binding: Paperback

Book Information

Publisher: Tsinghua University Press
Original title: Accuracy and Stability of Numerical Algorithms, 2nd edition
Series: 國際著名數學圖書
Language: English
Format: 16-mo
ISBN: 9787302244936, 7302244936
Barcode: 9787302244936
Dimensions: 24.4 x 17.4 x 3.4 cm
Weight: 939 g

About the Author

Author: Nicholas J. Higham
Nicholas J. Higham is Richardson Professor of Applied Mathematics at the University of Manchester, England. He is the author of more than 80 publications and is a member of the editorial boards of Foundations of Computational Mathematics, the IMA Journal of Numerical Analysis, Linear Algebra and Its Applications, and the SIAM Journal on Matrix Analysis and Applications.

Synopsis

Synopsis of Accuracy and Stability of Numerical Algorithms (2nd edition, photographic reprint): Accuracy and Stability of Numerical Algorithms gives a thorough, up-to-date treatment of the behavior of numerical algorithms in finite precision arithmetic. It combines algorithmic derivations, perturbation theory, and rounding error analysis, all enlivened by historical perspective and informative quotations.
This second edition expands and updates the coverage of the first edition (1996) and includes numerous improvements to the original material. Two new chapters treat symmetric indefinite systems and skew-symmetric systems, and nonlinear systems and Newton's method. Twelve new sections include coverage of additional error bounds for Gaussian elimination, rank-revealing LU factorizations, weighted and constrained least squares problems, and the fused multiply-add operation found on some modern computer architectures. Although not designed specifically as a textbook, this new edition is a suitable reference for an advanced course. It can also be used by instructors at all levels as a supplementary text from which to draw examples, historical perspective, statements of results, and exercises.
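As a rough illustration of the finite-precision behaviour the book analyses, the short Python sketch below (not an excerpt from the book; the test value x = 1e-9 is chosen here only for demonstration) contrasts a naive evaluation of (e^x - 1)/x, which suffers catastrophic cancellation for small x, with a stable evaluation via the standard library function math.expm1; compare Section 1.14.1 in the contents below.

import math

def f_naive(x):
    # For small x, exp(x) is very close to 1, so the subtraction cancels
    # almost all significant digits before the division.
    return (math.exp(x) - 1.0) / x

def f_stable(x):
    # math.expm1 computes exp(x) - 1 directly, avoiding the cancellation.
    return math.expm1(x) / x

x = 1e-9
print(f_naive(x))   # about 1.00000008274, only ~8 digits correct
print(f_stable(x))  # about 1.0000000005, accurate to full double precision

Chapter 1 of the book works through this and many similar examples in detail, together with the corresponding rounding error analysis.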

Reviews

"This definitive source on the accuracy and stability of numerical algorithms is quite a bargain and a worthwhile addition to the library of any statistician heavily involved in computing."
——Robert L. Strawderman, Journal of the American Statistical Association, March 1999.
"This text may become the new 'Bible' about accuracy and stability for the solution of system of linear equations. It covers 688 pages carefully collected, investigated, and written.. One will find that this book is a very suitable and comprehensive reference for research in numerical linear algebra, software usage and development, and for numerical linear algebra courses."
—— N. Kockler, Zentrablatt for Mathematik, Band 847/96.
"Nick Higham has assembled an enormous amount of important and useful material in a coherent, readable form. His book belongs on the shelf of anyone who has more than a casual interest in rounding error and matrix computation."
—— G.W. Stewart, SIAM Review, March 1997.

Contents

list of figures
list of tables
preface to second edition
preface to first edition
about the dedication
1 principles of finite precision computation
1.1 notation and background
1.2 relative error and significant digits
1.3 sources of errors
1.4 precision versus accuracy
1.5 backward and forward errors
1.6 conditioning
1.7 cancellation
1.8 solving a quadratic equation
1.9 computing the sample variance
1.10 solving linear equations
1.10.1 gepp versus cramer's rule
1.11 accumulation of rounding errors
1.12 instability without cancellation
1.12.1 the need for pivoting
1.12.2 an innocuous calculation?
1.12.3 an infinite sum
1.13 increasing the precision
1.14 cancellation of rounding errors
1.14.1 computing (e^x - 1)/x
1.14.2 qr factorization
1.15 rounding errors can be beneficial
1.16 stability of an algorithm depends on the problem
1.17 rounding errors are not random
1.18 designing stable algorithms
1.19 misconceptions
1.20 rounding errors in numerical analysis
1.21 notes and references
problems
2 floating point arithmetic
2.1 floating point number system
2.2 model of arithmetic
2.3 ieee arithmetic
2.4 aberrant arithmetics
2.5 exact subtraction
2.6 fused multiply-add operation
2.7 choice of base and distribution of numbers
2.8 statistical distribution of rounding errors
2.9 alternative number systems
2.10 elementary functions
2.11 accuracy tests
2.12 notes and references
problems
3 basics
3.1 inner and outer products
3.2 the purpose of rounding error analysis
3.3 running error analysis
3.4 notation for error analysis
3.5 matrix multiplication
3.6 complex arithmetic
3.7 miscellany
3.8 error analysis demystified
3.9 other approaches
3.10 notes and references
problems
4 summation
4.1 summation methods
4.2 error analysis
4.3 compensated summation
4.4 other summation methods
4.5 statistical estimates of accuracy
4.6 choice of method
4.7 notes and references
problems
5 polynomials
5.1 horner's method
5.2 evaluating derivatives
5.3 the newton form and polynomial interpolation
5.4 matrix polynomials
5.5 notes and references
problems
6 norms
6.1 vector norms
6.2 matrix norms
6.3 the matrix p-norm
6.4 singular value decomposition
6.5 notes and references
problems
7 perturbation theory for linear systems
7.1 normwise analysis
7.2 componentwise analysis
7.3 scaling to minimize the condition number
7.4 the matrix inverse
7.5 extensions
7.6 numerical stability
7.7 practical error bounds
7.8 perturbation theory by calculus
7.9 notes and references
problems
8 triangular systems
8.1 backward error analysis
8.2 forward error analysis
8.3 bounds for the inverse
8.4 a parallel fan-in algorithm
8.5 notes and references
8.5.1 lapack
problems
9 lu factorization and linear equations
9.1 gaussian elimination and pivoting strategies
9.2 lu factorization
9.3 error analysis
9.4 the growth factor
9.5 diagonally dominant and banded matrices
9.6 tridiagonal matrices
9.7 more error bounds
9.8 scaling and choice of pivoting strategy
9.9 variants of gaussian elimination
9.10 a posteriori stability tests
9.11 sensitivity of the lu factorization
9.12 rank-revealing lu factorizations
9.13 historical perspective
9.14 notes and references
9.14.1 lapack
problems
10 cholesky factorization
10.1 symmetric positive definite matrices
10.1.1 error analysis
10.2 sensitivity of the cholesky factorization
10.3 positive semidefinite matrices
10.3.1 perturbation theory
10.3.2 error analysis
10.4 matrices with positive definite symmetric part
10.5 notes and references
10.5.1 lapack
problems
11 symmetric indefinite and skew-symmetric systems
11.1 block ldlt factorization for symmetric matrices
11.1.1 complete pivoting
11.1.2 partial pivoting
11.1.3 rook pivoting
11.1.4 tridiagonal matrices
11.2 aasen's method
11.2.1 aasen's method versus block ldlt factorization
11.3 block ldlt factorization for skew-symmetric matrices
11.4 notes and references
11.4.1 lapack
problems
12 iterative refinement
12.1 behaviour of the forward error
12.2 iterative refinement implies stability
12.3 notes and references
12.3.1 lapack
problems
13 block lu factorization
13.1 block versus partitioned lu factorization
13.2 error analysis of partitioned lu factorization
13.3 error analysis of block lu factorization
13.3.1 block diagonal dominance
13.3.2 symmetric positive definite matrices
13.4 notes and references
13.4.1 lapack
problems
14 matrix inversion
14.1 use and abuse of the matrix inverse
14.2 inverting a triangular matrix
14.2.1 unblocked methods
14.2.2 block methods
14.3 inverting a full matrix by lu factorization
14.3.1 method a
14.3.2 method b
14.3.3 method c
14.3.4 method d
14.3.5 summary
14.4 gauss-jordan elimination
14.5 parallel inversion methods
14.6 the determinant
14.6.1 hyman's method
14.7 notes and references
14.7.1 lapack
problems
15 condition number estimation
15.1 how to estimate componentwise condition numbers
15.2 the p-norm power method
15.3 lapack 1-norm estimator
15.4 block 1-norm estimator
15.5 other condition estimators
15.6 condition numbers of tridiagonal matrices
15.7 notes and references
15.7.1 lapack
problems
16 the sylvester equation
16.1 solving the sylvester equation
16.2 backward error
16.2.1 the lyapunov equation
16.3 perturbation result
16.4 practical error bounds
16.5 extensions
16.6 notes and references
16.6.1 lapack
problems
17 stationary iterative methods
17.1 survey of error analysis
17.2 forward error analysis
17.2.1 jacobi's method
17.2.2 successive overrelaxation
17.3 backward error analysis
17.4 singular systems
17.4.1 theoretical background
17.4.2 forward error analysis
17.5 stopping an iterative method
17.6 notes and references
problems
18 matrix powers
18.1 matrix powers in exact arithmetic
18.2 bounds for finite precision arithmetic
18.3 application to stationary iteration
18.4 notes and references
problems
19 qr factorization
19.1 householder transformations
19.2 qr factorization
19.3 error analysis of householder computations
19.4 pivoting and row-wise stability
19.5 aggregated householder transformations
19.6 givens rotations
19.7 iterative refinement
19.8 gram-schmidt orthogonalization
19.9 sensitivity of the qr factorization
19.10 notes and references
19.10.1 lapack
problems
20 the least squares problem
20.1 perturbation theory
20.2 solution by qr factorization
20.3 solution by the modified gram-schmidt method
20.4 the normal equations
20.5 iterative refinement
20.6 the seminormal equations
20.7 backward error
20.8 weighted least squares problems
20.9 the equality constrained least squares problem
20.9.1 perturbation theory
20.9.2 methods
20.10 proof of wedin's theorem
20.11 notes and references
20.11.1 lapack
problems
21 underdetermined systems
21.1 solution methods
21.2 perturbation theory and backward error
21.3 error analysis
21.4 notes and references
21.4.1 lapack
problems
22 vandermonde systems
22.1 matrix inversion
22.2 primal and dual systems
22.3 stability
22.3.1 forward error
22.3.2 residual
22.3.3 dealing with instability
22.4 notes and references
problems
23 fast matrix multiplication
23.1 methods
23.2 error analysis
23.2.1 winograd's method
23.2.2 strassen's method
23.2.3 bilinear noncommutative algorithms
23.2.4 the 3m method
23.3 notes and references
problems
24 the fast fourier transform and applications
24.1 the fast fourier transform
24.2 circulant linear systems
24.3 notes and references
problems
25 nonlinear systems and newton's method
25.1 newton's method
25.2 error analysis
25.3 special cases and experiments
25.4 conditioning
25.5 stopping an iterative method
25.6 notes and references
problems
26 automatic error analysis
26.1 exploiting direct search optimization
26.2 direct search methods
26.3 examples of direct search
26.3.1 condition estimation
26.3.2 fast matrix inversion
26.3.3 roots of a cubic
26.4 interval analysis
26.5 other work
26.6 notes and references
problems
27 software issues in floating point arithmetic
27.1 exploiting ieee arithmetic
27.2 subtleties of floating point arithmetic
27.3 cray peculiarities
27.4 compilers
27.5 determining properties of floating point arithmetic
27.6 testing a floating point arithmetic
27.7 portability
27.7.1 arithmetic parameters
27.7.2 2 x 2 problems in lapack
27.7.3 numerical constants
27.7.4 models of floating point arithmetic
27.8 avoiding underflow and overflow
27.9 multiple precision arithmetic
27.10 extended and mixed precision blas
27.11 patriot missile software problem
27.12 notes and references
problems
28 a gallery of test matrices
28.1 the hilbert and cauchy matrices
28.2 random matrices
28.3 "randsvd" matrices
28.4 the pascal matrix
28.5 tridiagonal toeplitz matrices
28.6 companion matrices
28.7 notes and references
28.7.1 lapack
problems
a solutions to problems
b acquiring software
b.1 internet
b.2 netlib
b.3 matlab
b.4 nag library and nagware f95 compiler
c program libraries
c.1 basic linear algebra subprograms
c.2 eispack
c.3 linpack
c.4 lapack
c.4.1 structure of lapack
d the matrix computation toolbox
bibliography
name index
subject index
