Numerical Analysis: A Graduate Course

This book introduces graduate students to the many applications of numerical computation, explaining in detail both how and why the included methods work in practice. The text treats numerical analysis as a middle ground between practice and theory, covering both the abstract mathematical analysis and the applied computation and programming models instrumental to the field. While the text uses pseudocode, MATLAB and Julia code is available online for students to use and to demonstrate implementation techniques. The textbook also emphasizes multivariate problems alongside single-variable problems, and treats topics in randomness, including stochastic differential equations and randomized algorithms, as well as topics in optimization and approximation relevant to machine learning. Ultimately, it seeks to clarify issues in numerical analysis in the context of applications and to present accessible methods to students in mathematics and data science.

Author(s): David E. Stewart
Series: CMS/CAIMS Books in Mathematics, 4
Publisher: Springer
Year: 2022

Language: English
Pages: 644
City: Cham

Preface
Contents
1 Basics of Numerical Computation
1.1 How Computers Work
1.1.1 The Central Processing Unit
1.1.2 Code and Data
1.1.3 On Being Correct
1.1.4 On Being Efficient
1.1.5 Recursive Algorithms and Induction
1.1.6 Working in Groups: Parallel Computing
1.1.7 BLAS and LAPACK
Exercises
1.2 Programming Languages
1.2.1 MATLAB™
1.2.2 Julia
1.2.3 Python
1.2.4 C/C++ and Java
1.2.5 Fortran
Exercises
1.3 Floating Point Arithmetic
1.3.1 The IEEE Standards
1.3.2 Correctly Rounded Arithmetic
1.3.3 Future of Floating Point Arithmetic
Exercises
1.4 When Things Go Wrong
1.4.1 Underflow and Overflow
1.4.2 Subtracting Nearly Equal Quantities
1.4.3 Numerical Instability
1.4.4 Adding Many Numbers
Exercises
1.5 Measuring: Norms
1.5.1 What Is a Norm?
1.5.2 Norms of Functions
Exercises
1.6 Taylor Series and Taylor Polynomials
1.6.1 Taylor Series in One Variable
1.6.2 Taylor Series and Polynomials in More than One Variable
1.6.3 Vector-Valued Functions
Exercises
Project
2 Computing with Matrices and Vectors
2.1 Solving Linear Systems
2.1.1 Gaussian Elimination
2.1.2 LU Factorization
2.1.3 Errors in Solving Linear Systems
2.1.4 Pivoting and PA=LU
2.1.5 Variants of LU Factorization
Exercises
2.2 Least Squares Problems
2.2.1 The Normal Equations
2.2.2 QR Factorization
Exercises
2.3 Sparse Matrices
2.3.1 Tridiagonal Matrices
2.3.2 Data Structures for Sparse Matrices
2.3.3 Graph Models of Sparse Factorization
2.3.4 Unsymmetric Factorizations
Exercises
2.4 Iterations
2.4.1 Classical Iterations
2.4.2 Conjugate Gradients and Krylov Subspaces
2.4.3 Non-symmetric Krylov Subspace Methods
Exercises
2.5 Eigenvalues and Eigenvectors
2.5.1 The Power Method & Google
2.5.2 Schur Decomposition
2.5.3 The QR Algorithm
2.5.4 Singular Value Decomposition
2.5.5 The Lanczos and Arnoldi Methods
Exercises
3 Solving Nonlinear Equations
3.1 Bisection Method
3.1.1 Convergence
3.1.2 Robustness and Reliability
Exercises
3.2 Fixed-Point Iteration
3.2.1 Convergence
3.2.2 Robustness and Reliability
3.2.3 Multivariate Fixed-Point Iterations
Exercises
3.3 Newton's Method
3.3.1 Convergence of Newton's Method
3.3.2 Reliability of Newton's Method
3.3.3 Variant: Guarded Newton Method
3.3.4 Variant: Multivariate Newton Method
Exercises
3.4 Secant and Hybrid Methods
3.4.1 Convenience: Secant Method
3.4.2 Regula Falsi
3.4.3 Hybrid Methods: Dekker's and Brent's Methods
Exercises
3.5 Continuation Methods
3.5.1 Following Paths
3.5.2 Numerical Methods to Follow Paths
Exercises
Project
4 Approximations and Interpolation
4.1 Interpolation—Polynomials
4.1.1 Polynomial Interpolation in One Variable
4.1.2 Lebesgue Numbers and Reliability
Exercises
4.2 Interpolation—Splines
4.2.1 Cubic Splines
4.2.2 Higher Order Splines in One Variable
Exercises
4.3 Interpolation—Triangles and Triangulations
4.3.1 Interpolation over Triangles
4.3.2 Interpolation over Triangulations
4.3.3 Approximation Error over Triangulations
4.3.4 Creating Triangulations
Exercises
4.4 Interpolation—Radial Basis Functions
Exercises
4.5 Approximating Functions by Polynomials
4.5.1 Weierstrass' Theorem
4.5.2 Jackson's Theorem
4.5.3 Approximating Functions on Rectangles and Cubes
4.6 Seeking the Best—Minimax Approximation
4.6.1 Chebyshev's Equi-oscillation Theorem
4.6.2 Chebyshev Polynomials and Interpolation
4.6.3 Remez Algorithm
4.6.4 Minimax Approximation in Higher Dimensions
Exercises
4.7 Seeking the Best—Least Squares
4.7.1 Solving Least Squares
4.7.2 Orthogonal Polynomials
4.7.3 Trigonometric Polynomials and Fourier Series
4.7.4 Chebyshev Expansions
Exercises
Project
5 Integration and Differentiation
5.1 Integration via Interpolation
5.1.1 Rectangle, Trapezoidal, and Simpson's Rules
5.1.2 Newton–Cotes Methods
5.1.3 Product Integration Methods
5.1.4 Extrapolation
5.2 Gaussian Quadrature
5.2.1 Orthogonal Polynomials Reprise
5.2.2 Orthogonal Polynomials and Integration
5.2.3 Why the Weights Are Positive
5.3 Multidimensional Integration
5.3.1 Tensor Product Methods
5.3.2 Lagrange Integration Methods
5.3.3 Symmetries and Integration
5.3.4 Triangles and Tetrahedra
5.4 High-Dimensional Integration
5.4.1 Monte Carlo Integration
5.4.2 Quasi-Monte Carlo Methods
5.5 Numerical Differentiation
5.5.1 Discrete Derivative Approximations
5.5.2 Automatic Differentiation
6 Differential Equations
6.1 Ordinary Differential Equations—Initial Value Problems
6.1.1 Basic Theory
6.1.2 Euler's Method and Its Analysis
6.1.3 Improving on Euler: Trapezoidal, Midpoint, and Heun
6.1.4 Runge–Kutta Methods
6.1.5 Multistep Methods
6.1.6 Stability and Implicit Methods
6.1.7 Practical Aspects of Implicit Methods
6.1.8 Error Estimates and Adaptive Methods
6.1.9 Differential Algebraic Equations (DAEs)
Exercises
6.2 Ordinary Differential Equations—Boundary Value Problems
6.2.1 Shooting Methods
6.2.2 Multiple Shooting
6.2.3 Finite Difference Approximations
Exercises
6.3 Partial Differential Equations—Elliptic Problems
6.3.1 Finite Difference Approximations
6.3.2 Galerkin Method
6.3.3 Handling Boundary Conditions
6.3.4 Convection—Going with the Flow
6.3.5 Higher Order Problems
Exercises
6.4 Partial Differential Equations—Diffusion and Waves
6.4.1 Method of Lines
Exercises
Projects
7 Randomness
7.1 Probabilities and Expectations
7.1.1 Random Events and Random Variables
7.1.2 Expectation and Variance
7.1.3 Averages
Exercises
7.2 Pseudo-Random Number Generators
7.2.1 The Arithmetical Generation of Random Digits
7.2.2 Modern Pseudo-Random Number Generators
7.2.3 Generating Samples from Other Distributions
7.2.4 Parallel Generators
Exercises
7.3 Statistics
7.3.1 Averages and Variances
7.3.2 Regression and Curve Fitting
7.3.3 Hypothesis Testing
Exercises
7.4 Random Algorithms
7.4.1 Random Choices
7.4.2 Monte Carlo Algorithms and Markov Chains
Exercises
7.5 Stochastic Differential Equations
7.5.1 Wiener Processes
7.5.2 Itô Stochastic Differential Equations
7.5.3 Stratonovich Integrals and Differential Equations
7.5.4 Euler–Maruyama Method
7.5.5 Higher Order Methods for Stochastic Differential Equations
Exercises
Project
8 Optimization
8.1 Basics of Optimization
8.1.1 Existence of Minimizers
8.1.2 Necessary Conditions for Local Minimizers
8.1.3 Lagrange Multipliers and Equality-Constrained Optimization
Exercises
8.2 Convex and Non-convex
8.2.1 Convex Functions
8.2.2 Convex Sets
Exercises
8.3 Gradient Descent and Variants
8.3.1 Gradient Descent
8.3.2 Line Searches
8.3.3 Convergence
8.3.4 Stochastic Gradient Method
8.3.5 Simulated Annealing
Exercises
8.4 Second Derivatives and Newton's Method
Exercises
8.5 Conjugate Gradient and Quasi-Newton Methods
8.5.1 Conjugate Gradients for Optimization
8.5.2 Variants on the Conjugate Gradient Method
8.5.3 Quasi-Newton Methods
Exercises
8.6 Constrained Optimization
8.6.1 Equality Constrained Optimization
8.6.2 Inequality Constrained Optimization
Exercises
Project
Appendix A What You Need from Analysis
A.1 Banach and Hilbert Spaces
A.1.1 Normed Spaces and Completeness
A.1.2 Inner Products
A.1.3 Dual Spaces and Weak Convergence
A.2 Distributions and Fourier Transforms
A.2.1 Distributions and Measures
A.2.2 Fourier Transforms
A.3 Sobolev Spaces
Appendix References
Index