Computational Mathematics: An introduction to Numerical Analysis and Scientific Computing with Python

This textbook is a comprehensive introduction to computational mathematics and scientific computing suitable for undergraduate and postgraduate courses. It presents both practical and theoretical aspects of the subject, including the advantages and pitfalls of classical numerical methods, alongside computer code and experiments in Python. Each chapter closes with modern applications in physics, engineering, and computer science.

Features:
• No previous experience in Python is required.
• Includes simplified computer code for fast-paced learning and transferable skills development.
• Includes practical problems ideal for project assignments and distance learning.
• Presents both intuitive and rigorous faces of modern scientific computing.
• Provides an introduction to neural networks and machine learning.
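
As a taste of the simplified Python code the book pairs with each method, here is a minimal sketch of the bisection method that opens Chapter 5. The sketch is illustrative only and is not taken from the book; it assumes nothing beyond the standard library:

    import math

    def bisection(f, a, b, tol=1e-10, max_iter=100):
        """Approximate a root of f in [a, b] by repeated interval halving.

        Requires f(a) and f(b) to have opposite signs, so that Bolzano's
        intermediate value theorem guarantees a root in [a, b].
        """
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            c = (a + b) / 2.0          # midpoint of the current bracket
            fc = f(c)
            if fc == 0.0 or (b - a) / 2.0 < tol:
                return c               # converged to the requested tolerance
            if fa * fc < 0:            # root lies in the left half
                b, fb = c, fc
            else:                      # root lies in the right half
                a, fa = c, fc
        return (a + b) / 2.0

    # Example: the root of cos(x) - x near 0.739
    root = bisection(lambda x: math.cos(x) - x, 0.0, 1.0)
    print(root)

The result can be cross-checked against scipy.optimize.bisect, the SciPy routine covered in Section 5.6.1.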

Author(s): Dimitrios Mitsotakis
Series: Advances in Applied Mathematics
Edition: 1
Publisher: Chapman and Hall/CRC
Year: 2023

Language: English
Commentary: Publisher's PDF
Pages: 508
City: Boca Raton, FL
Tags: Programming; Python; Ordinary Differential Equations; Numerical Methods; Optimization; Partial Differential Equations; scikit-learn; NumPy; matplotlib; Linear Algebra; SciPy; Interpolation; Equation Solving; Calculus; Splines; Approximation Algorithms

Cover
Half Title
Series Page
Title Page
Copyright Page
Dedication
Contents
Preface
I: Introduction to Scientific Computing with Python
1. Introduction to Python
1.1. Basic Python
1.1.1. Python, numbers and variables
1.1.2. Basic mathematical operations
1.1.3. Mathematical functions and constants
1.1.4. Powers of 10 and scientific notation
1.2. Assignment Operators
1.3. Strings, Tuples, Dictionaries and Lists
1.3.1. Strings
1.3.2. Tuples
1.3.3. Dictionaries
1.3.4. Lists
1.3.5. Lists of lists
1.4. Flow Control
1.4.1. for loops
1.4.2. Useful applications
1.4.3. Boolean type of variables
1.4.4. while loops
1.4.5. Decisions and the if statement
1.4.6. Error control and exceptions
1.4.7. The command break
1.5. Functions
1.6. Classes
1.7. Accessing Data Files
1.8. Further Reading
Chapter Highlights
Exercises
2. Matrices and Python
2.1. Review of Matrices
2.1.1. Matrix properties
2.1.2. Special matrices
2.2. The NumPy Module
2.2.1. Matrices and arrays
2.2.2. Slicing
2.2.3. Special matrices, methods and properties
2.2.4. Matrix assignment
2.2.5. Matrix addition and scalar multiplication
2.2.6. Vectorized algorithms
2.2.7. Performance of algorithms
2.2.8. Matrix multiplication
2.2.9. Inverse of a matrix
2.2.10. Determinant of a matrix
2.2.11. Linear systems
2.2.12. Eigenvalues and eigenvectors
2.3. The SciPy Module
2.3.1. Systems with banded matrices
2.3.2. Systems with positive definite, banded matrices
2.3.3. Systems with other special matrices
2.4. Drawing Graphs with Matplotlib
2.4.1. Plotting one-dimensional arrays
2.5. Further Reading
Chapter Highlights
Exercises
3. Scientific Computing
3.1. Computer Arithmetic and Sources of Error
3.1.1. Catastrophic cancellation
3.1.2. The machine precision
3.2. Normalized Scientific Notation
3.2.1. Floats in Python
3.3. Rounding and Chopping
3.3.1. Absolute and relative errors
3.3.2. Loss of significance (again)
3.3.3. Algorithms and stability
3.3.4. Approximations of π
3.4. Further Reading
Chapter Highlights
Exercises
4. Calculus Facts
4.1. Sequences and Convergence
4.2. Continuous Functions
4.2.1. Bolzano's intermediate value theorem
4.3. Differentiation
4.3.1. Important theorems from differential calculus
4.3.2. Taylor polynomials
4.3.3. The big-O notation
4.4. Integration
4.4.1. Approximating integrals
4.4.2. Important theorems of integral calculus
4.4.3. The remainder in Taylor polynomials
4.5. Further Reading
Chapter Highlights
Exercises
II: Introduction to Computational Mathematics
5. Roots of Equations
5.1. Bisection Method
5.1.1. Derivation and implementation
5.1.2. Proof of convergence
5.1.3. Rate of convergence
5.2. Fixed-Point Methods
5.2.1. Derivation and implementation
5.2.2. Theoretical considerations
5.2.3. Convergence of fixed-point methods
5.3. Newton's Method
5.3.1. Derivation and implementation
5.3.2. Rate of convergence
5.4. Secant Method
5.4.1. Derivation and implementation
5.4.2. Rate of convergence
5.5. Other Methods and Generalizations
5.5.1. Stopping criteria
5.5.2. Steffensen's method
5.5.3. Aitken's delta-squared process
5.5.4. Multiple roots
5.5.5. High-order methods
5.5.6. Systems of equations
5.6. The Module scipy.optimize
5.6.1. Bisection method
5.6.2. Fixed-point methods
5.6.3. Newton and secant methods
5.6.4. A hybrid method
5.6.5. Application in astrophysics
5.7. Further Reading
Chapter Highlights
Exercises
6. Interpolation and Approximation
6.1. Introduction
6.2. Lagrange Interpolation
6.2.1. Naive construction of the interpolant
6.2.2. Lagrange polynomials
6.2.3. Failure of Lagrange interpolation
6.2.4. Error analysis
6.2.5. Newton's representation with divided differences
6.2.6. More on divided differences
6.3. Hermite Interpolation
6.3.1. Computation of the Hermite interpolant
6.3.2. Error analysis
6.3.3. Implementation details
6.4. Spline Interpolation
6.4.1. Continuous piecewise linear interpolation
6.4.2. Some theoretical considerations
6.4.3. Error analysis of piecewise linear interpolation
6.4.4. Cubic spline interpolation
6.4.5. An algorithm for cubic splines
6.4.6. Theoretical considerations and error analysis
6.4.7. B-splines
6.5. Method of Least Squares
6.5.1. Linear least squares
6.5.2. Polynomial fit
6.5.3. Non-polynomial fit
6.6. The Module scipy.interpolate
6.6.1. Lagrange interpolation
6.6.2. Cubic spline interpolation
6.6.3. Computations with B-splines
6.6.4. General purpose interpolation and the function interp1d
6.6.5. Interpolation in two dimensions
6.6.6. Application in image processing
6.7. Further Reading
Chapter Highlights
Exercises
7. Numerical Integration
7.1. Introduction to Numerical Quadrature and Midpoint Rule
7.2. Newton-Cotes Quadrature Rules
7.2.1. Trapezoidal rule
7.2.2. Error estimate of the trapezoidal rule
7.2.3. Composite trapezoidal rule
7.2.4. Error estimate of the composite trapezoidal rule
7.2.5. Simpson's rule
7.2.6. Composite Simpson's rule
7.2.7. Error of Simpson's rule
7.2.8. Degree of exactness
7.3. Gaussian Quadrature Rules
7.3.1. Choice of nodes and weights
7.3.2. Gauss-Legendre quadrature rules
7.3.3. Gaussian quadrature on general intervals
7.3.4. Error analysis of the Gaussian quadrature
7.4. The Module scipy.integrate
7.4.1. Trapezoidal rule
7.4.2. Simpson's rule
7.4.3. Gaussian quadrature
7.4.4. Application in classical mechanics
7.5. Further Reading
Chapter Highlights
Exercises
8. Numerical Differentiation and Applications to Differential Equations
8.1. Numerical Differentiation
8.1.1. First-order accurate finite difference approximations
8.1.2. Second-order accurate finite difference approximations
8.1.3. Error estimation
8.1.4. Approximation of second-order derivatives
8.1.5. Richardson's extrapolation
8.2. Applications to Ordinary Differential Equations
8.2.1. First-order ordinary differential equations
8.2.2. Euler method
8.2.3. Alternative derivation and error estimates
8.2.4. Implicit variants of Euler's method
8.2.5. Improved Euler method
8.2.6. The notion of stability
8.3. Runge-Kutta Methods
8.3.1. Explicit Runge-Kutta methods
8.3.2. Implicit and diagonally implicit Runge-Kutta methods
8.3.3. Adaptive Runge-Kutta methods
8.4. The Module scipy.integrate Again
8.4.1. Generic integration of initial value problems
8.4.2. Systems of ordinary differential equations
8.4.3. Application in epidemiology
8.5. Further Reading
Chapter Highlights
Exercises
9. Numerical Linear Algebra
9.1. Numerical Solution of Linear Systems
9.1.1. Algorithm for the naive Gaussian elimination
9.1.2. LU factorization
9.1.3. Implementation of LU factorization
9.2. Pivoting Strategies
9.2.1. Gaussian elimination with partial pivoting
9.2.2. LU factorization with pivoting
9.3. Condition Number of a Matrix
9.3.1. Vector and matrix norms
9.3.2. Theoretical properties of matrix norms
9.3.3. Condition number
9.4. Other Matrix Computations
9.4.1. Computation of inverse matrix
9.4.2. Computation of determinants
9.5. Symmetric Matrices
9.5.1. LDLT factorization
9.5.2. Cholesky factorization
9.6. The Module scipy.linalg Again
9.6.1. LU factorization
9.6.2. Cholesky factorization
9.7. Iterative Methods
9.7.1. Derivation of iterative methods
9.7.2. Classical iterative methods - Jacobi and Gauss-Seidel methods
9.7.3. Convergence of iterative methods
9.7.4. Sparse matrices in Python and the module scipy.sparse
9.7.5. Sparse Gauss-Seidel implementation
9.7.6. Sparse linear systems and the module scipy.sparse
9.7.7. Application in electric circuits
9.8. Further Reading
Chapter Highlights
Exercises
III: Advanced Topics
10. Best Approximations
10.1. Vector Spaces, Norms and Approximations
10.1.1. Vector spaces
10.1.2. Inner products and norms
10.1.3. Best approximations
10.2. Linear Least Squares
10.2.1. Theoretical properties of linear least squares
10.2.2. Solution of linear least squares problem
10.2.3. Linear least squares problem in Python
10.3. Gram-Schmidt Orthonormalization
10.3.1. Numerical implementation
10.4. QR Factorization
10.4.1. Linear least squares using QR factorization
10.4.2. Computation of range
10.4.3. Householder matrices and QR factorization
10.4.4. Python implementation and orthonormalization
10.5. Singular Value Decomposition
10.5.1. Derivation of SVD
10.5.2. Theoretical considerations
10.5.3. Pseudoinverse and SVD
10.5.4. Linear least squares problem and SVD
10.5.5. Singular value decomposition in Python
10.5.6. Application in image compression
10.6. Further Reading
Chapter Highlights
Exercises
11. Unconstrained Optimization and Neural Networks
11.1. Gradient Methods for Unconstrained Optimization
11.1.1. Estimation of convergence
11.1.2. Method of steepest descent
11.1.3. Solving linear systems using optimization
11.1.4. Theoretical considerations
11.2. Conjugate Gradient Method
11.2.1. Conjugate gradient method and linear systems
11.2.2. Convergence and preconditioning
11.2.3. Extensions to nonlinear systems
11.3. Newton's Method in Optimization
11.3.1. Newton's iteration
11.3.2. Quasi-Newton methods
11.3.3. Nonlinear optimization and the module scipy.optimize
11.3.4. The Gauss-Newton approximation
11.3.5. The Levenberg-Marquardt modification
11.4. Introduction to Neural Networks
11.4.1. A simple neural network
11.4.2. Activation functions
11.4.3. General feedforward neural networks
11.4.4. Machine learning and the module sklearn
11.4.5. Application in image recognition
11.5. Further Reading
Chapter Highlights
Exercises
12. Eigenvalue Problems
12.1. Eigenvalues and Eigenvectors
12.1.1. Basic properties
12.1.2. Diagonalization of a matrix and the Jordan normal form
12.1.3. Other properties
12.1.4. First estimation of eigenvalues using Gerschgorin disks
12.2. Numerical Approximation of Eigenvalues and Eigenvectors
12.2.1. Power method
12.2.2. Inverse power method
12.2.3. Shifted inverse power method
12.2.4. Basic QR iteration
12.2.5. Application in computer networks
12.3. Further Reading
Chapter Highlights
Exercises
Appendix A Computing Jordan Normal Forms
Bibliography
Index