This book provides an introduction to modern topics in scientific computing and machine learning, using Julia to illustrate the efficient implementation of algorithms. In addition to covering fundamental topics such as optimization and solving systems of equations, it extends the usual canon of computational science with more advanced topics of practical importance. In particular, there is a focus on partial differential equations and systems thereof, which form the basis of many engineering applications. Several chapters also include material on machine learning (artificial neural networks and Bayesian estimation).
Julia is a relatively new programming language that was developed with scientific and technical computing in mind. Its syntax is similar to that of other languages in this area, but it was designed to embrace modern programming concepts. It is open source, and it comes with a compiler and an easy-to-use package system.
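As a small illustration of these points (a sketch for this listing, not an excerpt from the book), a function definition, the broadcasting dot syntax, and a call to the standard-library package manager look like this in Julia:

    # Minimal Julia sketch (illustrative only, not from the book).
    # A one-line function definition; the compiler specializes it
    # for the argument types it is called with.
    square(x) = x^2

    # The dot syntax broadcasts a function elementwise over a collection.
    println(square.([1, 2, 3]))    # prints [1, 4, 9]

    # Packages are managed with the built-in Pkg module, e.g.:
    # using Pkg; Pkg.add("Plots")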
Aimed at students of applied mathematics, computer science, engineering, and bioinformatics, the book assumes only a basic knowledge of linear algebra and programming.
Author: Clemens Heitzinger
Publisher: Springer
Year: 2022
Language: English
Pages: 446
City: Cham
Foreword
Preface
Contents
Part I The Julia Language
Chapter 1 An Introduction to the Julia Language
1.1 Brief Historic Overview
1.2 An Overview of Julia
1.2.1 The Reproducibility of Science and Open Source
1.2.2 Compiler
1.2.3 Libraries and Numerical Linear Algebra
1.2.4 Interactivity
1.2.5 High-Level Programming Concepts
1.2.6 Interoperability
1.2.7 Package System
1.2.8 Parallel and Distributed Computing
1.2.9 Availability on Common Operating Systems
1.3 Using Julia and Accessing Documentation
1.3.1 Starting Julia
1.3.2 The Read-Eval-Print Loop
1.3.3 Help and Documentation
1.3.4 Handling Packages
1.3.5 Developing Julia Programs
Problems
References
Chapter 2 Functions
2.1 Defining Functions
2.2 Argument Passing Behavior
2.3 Multiple Return Values
2.4 Functions as First-Class Objects
2.5 Anonymous Functions
2.6 Optional Arguments
2.7 Keyword Arguments
2.8 Functions with a Variable Number of Arguments
2.9 do blocks
Problems
Chapter 3 Variables, Constants, Scopes, and Modules
3.1 Modules and Global Scopes
3.2 Dynamic and Lexical Scoping
3.3 Local Scope Blocks
3.3.1 Hard Local Scopes
3.3.2 Soft Local Scopes
3.4 let Blocks and Closures
3.5 for Loops and Array Comprehensions
3.6 Constants
3.7 Global and Local Variables in This Book
Problems
Chapter 4 Built-in Data Structures
4.1 Characters
4.2 Strings
4.2.1 Creating and Accessing
4.2.2 String Interpolation
4.2.3 String Operations
4.2.4 String Literals
4.2.5 Regular Expressions
4.3 Symbols
4.4 Expressions
4.5 Collections
4.5.1 General Collections
4.5.2 Iterable Collections
4.5.3 Indexable Collections
4.5.4 Associative Collections
4.5.5 Sets
4.5.6 Deques (Double-Ended Queues)
Problems
Chapter 5 User-Defined Data Structures and the Type System
5.1 Introduction
5.2 Type Annotations
5.2.1 Annotations of Expressions
5.2.2 Declarations of Variables and Return Values
5.3 Abstract Types, Concrete Types, and the Type Hierarchy
5.4 Composite Types
5.5 Constructors
5.6 Type Unions
5.7 Parametric Types
5.7.1 Parametric Composite Types
5.7.2 Parametric Abstract Types
5.8 Tuple Types
5.9 Pretty Printing
5.10 Operations on Types
5.11 Bibliographical Remarks
Problems
References
Chapter 6 Control Flow
6.1 Compound Expressions
6.2 Conditional Evaluation
6.3 Short-Circuit Evaluation
6.4 Repeated Evaluation
6.5 Exception Handling
6.5.1 Built-in Exceptions and Defining Exceptions
6.5.2 Throwing and Catching Exceptions
6.5.3 Messages, Warnings, and Errors
6.5.4 Assertions
6.6 Tasks, Channels, and Events
6.7 Parallel Computing
6.7.1 Starting Processes
6.7.2 Data Movement and Processes
6.7.3 Parallel Loops and Parallel Mapping
Problems
Chapter 7 Macros
7.1 Introduction
7.2 Macros in Common Lisp
7.3 Macro Definition
7.4 Two Examples: Repeating and Collecting
7.5 Memoization
7.6 Built-in Macros
7.7 Bibliographical Remarks
Problems
References
Chapter 8 Arrays and Linear Algebra
8.1 Dense Arrays
8.1.1 Introduction
8.1.2 Construction, Initialization, and Concatenation
8.1.3 Comprehensions and Generator Expressions
8.1.4 Indexing and Assignment
8.1.5 Iteration and Linear Indexing
8.1.6 Operators
8.1.7 Broadcasting and Vectorizing Functions
8.2 Sparse Vectors and Matrices
8.3 Array Types
8.4 Linear Algebra
8.4.1 Vector Spaces and Linear Functions
8.4.2 Basis Change
8.4.3 Inner-Product Spaces
8.4.4 The Rank-Nullity Theorem
8.4.5 Matrix Types
8.4.6 The Cross Product
8.4.7 The Determinant
8.4.8 Linear Systems
8.4.9 Eigenvalues and Eigenvectors
8.4.10 Singular-Value Decomposition
8.4.11 Summary of Matrix Operations and Factorizations
Problems
References
Part II Algorithms for Differential Equations
Chapter 9 Ordinary Differential Equations
9.1 Introduction
9.2 Existence and Uniqueness of Solutions *
9.3 Systems of Ordinary Differential Equations
9.4 Euler Methods
9.4.1 The Forward and the Backward Euler Methods
9.4.2 Truncation Errors of the Forward Euler Method
9.4.3 Improved Euler Method
9.5 Variation of Step Size
9.6 Runge–Kutta Methods
9.7 Butcher Tableaux
9.8 Adaptive Runge–Kutta Methods
9.9 Implementation of Runge–Kutta Methods
9.10 Julia Packages
9.11 Bibliographical Remarks
Problems
References
Chapter 10 Partial-Differential Equations
10.1 Introduction
10.2 Elliptic Equations
10.2.1 Three Physical Phenomena
10.2.2 Boundary Conditions
10.2.3 Existence, Uniqueness, and a Pointwise Estimate *
10.3 Parabolic Equations
10.4 Hyperbolic Equations
10.5 Finite Differences
10.5.1 One-Dimensional Second-Order Discretization
10.5.2 Compact Fourth-Order Finite-Difference Discretizations
10.6 Finite Volumes
10.7 Finite Elements
10.8 Julia Packages
10.9 Bibliographical Remarks
Problems
References
Part III Algorithms for Optimization
Chapter 11 Global Optimization
11.1 Introduction
11.2 No Free Lunch
11.3 Simulated Annealing
11.3.1 The Metropolis Monte Carlo Algorithm
11.3.2 The Simulated-Annealing Algorithm
11.3.3 Cooling Strategies
11.4 Particle-Swarm Optimization
11.5 Genetic Algorithms
11.5.1 The Algorithm
11.5.2 Genotypes and Phenotypes
11.5.3 Fitness
11.5.4 Selection
11.5.5 Reproduction
11.6 Ablation Studies
11.7 Random Restarting and Hybrid Algorithms
11.8 Benchmark Problems
11.9 Julia Packages
11.10 Bibliographical Remarks
Problems
References
Chapter 12 Local Optimization
12.1 Introduction
12.2 The Hessian Matrix
12.3 Convexity
12.4 Gradient Descent
12.5 Accelerated Gradient Descent *
12.6 Line Search and the Wolfe Conditions
12.7 The Newton Method
12.8 The BFGS Method
12.9 The L-BFGS (Limited-Memory BFGS) Method
12.10 Julia Packages
12.11 Bibliographical Remarks
Problems
References
Part IV Algorithms for Machine Learning
Chapter 13 Neural Networks
13.1 Introduction
13.2 Feeding Forward
13.3 The Approximation Property
13.4 Handwriting Recognition
13.5 Cost Functions
13.6 Stochastic Gradient Descent
13.7 Backpropagation
13.8 Hyperparameters and Overfitting
13.9 Improving Training
13.9.1 Regularization
13.9.2 Cost Functions
13.10 Julia Packages
13.11 Bibliographical Remarks
Problems
References
Chapter 14 Bayesian Estimation
14.1 Introduction
14.2 The Riemann–Stieltjes Integral
14.3 Bayes’ Theorem
14.4 Frequentist and Bayesian Inference
14.5 Parameter Estimation and Inverse Problems
14.5.1 Problem Statement
14.5.2 The Logistic Equation as an Example
14.5.3 The Likelihood
14.5.4 Markov-Chain Monte Carlo
14.5.5 The Metropolis–Hastings Algorithm
14.5.6 Implementation of the Metropolis–Hastings Algorithm
14.5.7 Maximum-a-Posteriori Estimate and Maximum-Likelihood Estimate
14.5.8 Convergence
14.5.9 The Delayed-Rejection Adaptive-Metropolis (DRAM) Algorithm
14.6 Julia Packages
14.7 Bibliographical Remarks
Problems
References
Index