Introduction to Scientific Computing and Data Analysis

This textbook provides an introduction to numerical computing and its applications in science and engineering. The topics covered include those usually found in an introductory course, as well as those that arise in data analysis, including optimization and regression-based methods that use the singular value decomposition. The emphasis is on problem solving, and there are numerous exercises throughout the text on applications in engineering and science. The essential role of the mathematical theory underlying the methods is also considered, both for understanding how a method works and for determining how the error in the computation depends on the method used. The codes used for most of the computational examples in the text are available on GitHub. This new edition includes material necessary for an upper-division course in computational linear algebra.

Author(s): Mark H. Holmes
Series: Texts in Computational Science and Engineering, 13
Edition: 2
Publisher: Springer
Year: 2023

Language: English
Pages: 562
City: Cham

Preface
Preface to Second Edition
Contents
1 Introduction to Scientific Computing
1.1 Unexpected Results
1.2 Floating-Point Number System
1.2.1 Normal Floats
1.2.2 Rounding
1.2.3 Non-Normal Floats
1.2.4 Significance
1.2.5 Flops
1.2.6 Functions
1.3 Arbitrary-Precision Arithmetic
1.4 Explaining, and Possibly Fixing, the Unexpected Results
1.5 Error and Accuracy
1.5.1 Over-computing?
1.6 Multicore Computing
2 Solving A Nonlinear Equation
2.1 The Problem to Solve
2.2 Bisection Method
2.2.1 Convergence Theory
2.3 Newton's Method
2.3.1 Picking x0
2.3.2 Order of Convergence
2.3.3 Failure
2.3.4 Examples
2.3.5 Convergence Theorem
2.4 Secant Method
2.4.1 Convergence Theorem
2.5 Other Ideas
2.5.1 Is Newton's Method Really Newton's Method?
3 Matrix Equations
3.1 An Example
3.2 Finding L and U
3.2.1 What Matrices Have a LU Factorization?
3.3 Implementing a LU Solver
3.3.1 Pivoting Strategies
3.3.2 LU and Gaussian Elimination
3.4 LU Method: Summary
3.4.1 Flops and Multicore Processors
3.5 Vector and Matrix Norms
3.5.1 Matrix Norm
3.6 Error and Residual
3.6.1 Correct Significant Digits
3.6.2 The Condition Number
3.6.3 A Heuristic for the Error
3.7 Positive Definite Matrices
3.7.1 Cholesky Factorization
3.8 Tri-Diagonal Matrices
3.9 Sparse Matrices
3.10 Nonlinear Systems
3.11 Some Additional Ideas
3.11.1 Yogi Berra and Perturbation Theory
3.11.2 Fixing an Ill-Conditioned Matrix
3.11.3 Insightful Observations about the Condition Number
3.11.4 Faster than LU?
3.11.5 Historical Comparisons
4 Eigenvalue Problems
4.1 Review of Eigenvalue Problems
4.2 Power Method
4.2.1 General Formulation
4.3 Extensions of the Power Method
4.3.1 Inverse Iteration
4.3.2 Rayleigh Quotient Iteration
4.4 Calculating Multiple Eigenvalues
4.4.1 Orthogonal Iteration
4.4.2 Regular and Modified Gram-Schmidt
4.4.3 QR Factorization
4.4.4 The QR Method
4.4.5 QR Method versus Orthogonal Iteration
4.4.6 Are the Computed Values Correct?
4.5 Applications
4.5.1 Natural Frequencies
4.5.2 Graphs and Adjacency Matrices
4.5.3 Markov Chain
4.6 Singular Value Decomposition
4.6.1 Derivation of the SVD
4.6.2 Interpretations of the SVD
4.6.3 Summary of the SVD
4.6.4 Consequences of a SVD
4.6.5 Computing a SVD
4.6.6 Low-Rank Approximations
4.6.7 Application: Image Compression
5 Interpolation
5.1 Information from Data
5.2 Global Polynomial Interpolation
5.2.1 Direct Approach
5.2.2 Lagrange Approach
5.2.3 Runge's Function
5.3 Piecewise Linear Interpolation
5.4 Piecewise Cubic Interpolation
5.4.1 Cubic B-splines
5.5 Function Approximation
5.5.1 Global Polynomial Interpolation
5.5.2 Piecewise Linear Interpolation
5.5.3 Cubic Splines
5.5.4 Chebyshev Interpolation
5.5.5 Chebyshev versus Cubic Splines
5.5.6 Other Ideas
5.6 Questions and Additional Comments
6 Numerical Integration
6.1 The Definition from Calculus
6.1.1 Midpoint Rule
6.2 Methods Based on Polynomial Interpolation
6.2.1 Trapezoidal Rule
6.2.2 Simpson's Rule
6.2.3 Cubic Splines
6.2.4 Other Interpolation Ideas
6.3 Romberg Integration
6.3.1 Computing Using Romberg
6.4 Methods for Discrete Data
6.5 Methods Based on Precision
6.5.1 1-Point Gaussian Rule
6.5.2 2-Point Gaussian Rule
6.5.3 m-Point Gaussian Rule
6.5.4 Error
6.5.5 Parting Comments
6.6 Adaptive Quadrature
6.7 Other Ideas
6.8 Epilogue
7 Initial Value Problems
7.1 Solution of an IVP
7.2 Numerical Differentiation
7.2.1 Higher Derivatives
7.2.2 Interpolation
7.3 IVP Methods Using Numerical Differentiation
7.3.1 The Five Steps
7.3.2 Additional Difference Methods
7.3.3 Error and Convergence
7.3.4 Computing the Solution
7.4 IVP Methods Using Numerical Integration
7.5 Runge-Kutta Methods
7.5.1 RK2
7.5.2 RK4
7.5.3 Stability
7.5.4 RK-n
7.6 Solving Systems of IVPs
7.6.1 Examples
7.6.2 Simple Approach
7.6.3 Component Approach and Symplectic Methods
7.7 Some Additional Questions and Ideas
7.7.1 RK4 Order Conditions
8 Optimization: Regression
8.1 Model Function
8.2 Error Function
8.2.1 Vertical Distance
8.3 Linear Least Squares
8.3.1 Two Parameters
8.3.2 General Case
8.3.3 QR Approach
8.3.4 Moore-Penrose Pseudo-Inverse
8.3.5 Overdetermined System
8.3.6 Regularization
8.3.7 Over-fitting the Data
8.4 QR Factorization
8.4.1 Finding Q and R
8.4.2 Accuracy
8.4.3 Parting Comments
8.5 Other Error Functions
8.6 Nonlinear Regression
8.7 Fitting IVPs to Data
8.7.1 Logistic Equation
8.7.2 FitzHugh-Nagumo Equations
8.7.3 Parting Comments
9 Optimization: Descent Methods
9.1 Introduction
9.2 Descent Methods
9.2.1 Descent Directions
9.3 Solving Linear Systems
9.3.1 Basic Descent Algorithm for Ax = b
9.3.2 Method of Steepest Descents for Ax = b
9.3.3 Conjugate Gradient Method for Ax = b
9.3.4 Large Sparse Matrices
9.3.5 Rate of Convergence
9.3.6 Preconditioning
9.3.7 Parting Comments
9.4 General Nonlinear Problem
9.4.1 Descent Direction
9.4.2 Line Search Problem
9.4.3 Examples
9.5 Levenberg-Marquardt Method
9.5.1 Parting Comments
9.6 Minimization Without Differentiation
9.7 Variational Problems
9.7.1 Example: Minimum Potential Energy
9.7.2 Example: Brachistochrone Problem
9.7.3 Parting Comments
10 Data Analysis
10.1 Introduction
10.2 Principal Component Analysis
10.2.1 Example: Word Length
10.2.2 Derivation of Principal Component Approximation
10.2.3 Summary of the PCA
10.2.4 Scaling Factors
10.2.5 Geometry and Data Approximation
10.2.6 Application: Crime Data
10.2.7 Practical Issues
10.2.8 Parting Comments
10.3 Independent Component Analysis
10.3.1 Derivation of the Method
10.3.2 Reduced Problem
10.3.3 Contrast Function
10.3.4 Summary of ICA
10.3.5 Application: Image Steganography
10.4 Data Based Modeling
10.4.1 Dynamic Modes
10.4.2 Example: COVID-19
10.4.3 Propagation Paths
10.4.4 Parting Comments
Appendix A Taylor's Theorem
A.1 Useful Examples for x Near Zero
A.2 Order Symbol and Truncation Error
Appendix B Vector and Matrix Summary
Appendix References
Index