Applications of Linear and Nonlinear Models: Fixed Effects, Random Effects, and Total Least Squares

This book provides numerous examples of linear and nonlinear model applications. Here, we present a nearly complete treatment of the Grand Universe of linear and weakly nonlinear regression models within the first eight chapters. Our point of view is both algebraic and stochastic. For example, there is an equivalence lemma between a best linear uniformly unbiased estimation (BLUUE) in a Gauss–Markov model and a least squares solution (LESS) in a system of linear equations: while BLUUE is a stochastic regression concept, LESS is an algebraic one (a small numeric illustration of this equivalence follows the list of new chapters below). In the first six chapters, we concentrate on underdetermined and overdetermined linear systems as well as systems with a datum defect. We review estimators/algebraic solutions of type MINOLESS, BLIMBE, BLUMBE, BLUUE, BIQUUE, BLE, BIQE, and total least squares. The highlight is the simultaneous determination of the first moment and the second central moment of a probability distribution in an inhomogeneous multilinear estimation by the so-called E–D correspondence, as well as its Bayes design. In addition, we discuss continuous networks versus discrete networks, the use of Grassmann–Plücker coordinates, criterion matrices of type Taylor–Karman, as well as fuzzy sets. Chapter seven is a speciality: in the treatment of an overdetermined system of nonlinear equations on curved manifolds, the von Mises–Fisher distribution is characteristic for circular or (hyper)spherical data. This second edition adds three new chapters:

(1) A chapter on integer least squares that covers: (i) the model for positioning as a mixed integer linear model, which includes integer parameters; (ii) the formulation of the general integer least squares problem and the optimality of the least squares solution; (iii) the relation to the closest vector problem and the notion of a reduced lattice basis; and (iv) the famous LLL algorithm for generating a Lovász-reduced basis (a rounding-versus-search sketch follows this list).

(2) A chapter on Bayes methods that covers: (i) the general principle of Bayesian modeling, explaining the notions of prior and posterior distributions and choosing a pragmatic approach for exploring the advantages of iterative Bayesian calculations and hierarchical modeling; (ii) Bayes methods for linear models with normally distributed errors, including noninformative priors, conjugate priors, and normal-gamma distributions (a conjugate-prior sketch follows this list); and (iii) a short outlook on modern applications of Bayesian modeling, useful in the case of nonlinear models or linear models with non-normal errors: Monte Carlo (MC), Markov chain Monte Carlo (MCMC), and approximate Bayesian computation (ABC) methods.

(3) A chapter on errors-in-variables models that covers: (i) the errors-in-variables (EIV) model and its difference from least squares estimators (LSE); (ii) the total least squares (TLS) estimator and a summary of its properties (as sketched below); (iii) the idea of simulation extrapolation (SIMEX) estimators; (iv) the symmetrized SIMEX (SYMEX) estimator and its relation to TLS; and (v) a short outlook on nonlinear EIV models.
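
As a small numeric illustration of the equivalence lemma mentioned above (our own minimal sketch, assuming a full-column-rank design matrix and identity weight/dispersion matrices; it is not taken from the book), the algebraic LESS and the stochastic BLUUE of the same Gauss–Markov model coincide:

```python
# LESS (algebraic view): minimize ||y - A xi||^2 over xi.
# BLUUE (stochastic view): xi_hat = (A^T A)^{-1} A^T y for the model
# E{y} = A xi, D{y} = sigma^2 I.  Both roads lead to the same estimate.
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((10, 3))                 # full-column-rank design
y = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(10)

xi_less, *_ = np.linalg.lstsq(A, y, rcond=None)  # algebraic solution
xi_bluue = np.linalg.solve(A.T @ A, A.T @ y)     # stochastic estimator

assert np.allclose(xi_less, xi_bluue)            # same estimate
print(xi_less)
```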
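
For the integer least squares chapter, the following toy sketch (a hypothetical two-dimensional lattice of our own choosing, not the book's positioning model) shows why the simple rounding solution of Section 16-41 can fail, and why the problem is instead solved as a closest vector problem, here by brute-force search:

```python
# Integer least squares on a nearly collinear 2-D lattice basis, where
# rounding the real-valued solution is unreliable.
import itertools
import numpy as np

A = np.array([[1.0, 0.9],
              [0.0, 0.1]])
y = np.array([1.66, 0.14])

z_float = np.linalg.solve(A, y)           # real-valued solution (0.4, 1.4)
z_round = np.round(z_float).astype(int)   # simple rounding -> (0, 1)

def cost(z):
    r = y - A @ np.asarray(z, dtype=float)
    return float(r @ r)

# Brute-force integer least squares = closest vector problem (CVP),
# searched over a small box around the origin.
z_ils = min(itertools.product(range(-5, 6), repeat=2), key=cost)

print("rounded:", tuple(z_round), "cost:", cost(z_round))   # suboptimal
print("ILS:    ", z_ils, "cost:", cost(z_ils))              # (0, 2) wins
```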
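
For the Bayes chapter, a minimal conjugate-prior sketch (an assumed setup with known noise variance, not the book's notation): under a normal prior, the posterior mean of a linear model is exactly a Tykhonov-type regularized estimator, which is the connection drawn between Sections 17-34 and 17-35:

```python
# Conjugate normal-normal update for y = A xi + e, e ~ N(0, sigma^2 I),
# with prior xi ~ N(0, tau^2 I).  The posterior mean equals a ridge
# (Tykhonov) estimator with lambda = sigma^2 / tau^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
y = A @ np.array([1.0, 0.0, -1.0]) + 0.2 * rng.standard_normal(20)

sigma2, tau2 = 0.04, 1.0                 # noise and prior variances
lam = sigma2 / tau2                      # regularization weight

post_cov = np.linalg.inv(A.T @ A / sigma2 + np.eye(3) / tau2)
post_mean = post_cov @ (A.T @ y) / sigma2

ridge = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ y)
assert np.allclose(post_mean, ridge)     # posterior mean = ridge estimate
print(post_mean)
```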
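
For the errors-in-variables chapter, a minimal sketch of the classical SVD construction of the total least squares estimator (a standard textbook recipe applied to simulated data of our own choosing), contrasted with naive least squares, which is biased when the design matrix itself carries errors:

```python
# TLS via the smallest right singular vector of the augmented matrix
# [A | y]; ordinary least squares ignores the errors in A.
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 2
A_true = rng.standard_normal((n, m))
xi_true = np.array([2.0, -1.0])
A = A_true + 0.05 * rng.standard_normal((n, m))   # errors in the variables
y = A_true @ xi_true + 0.05 * rng.standard_normal(n)

xi_ls, *_ = np.linalg.lstsq(A, y, rcond=None)     # naive least squares

_, _, Vt = np.linalg.svd(np.column_stack([A, y]))
v = Vt[-1]                        # right singular vector, smallest sigma
xi_tls = -v[:m] / v[m]            # normalize so the last component is -1

print("LS: ", xi_ls)
print("TLS:", xi_tls)
```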

The chapter on algebraic solutions of nonlinear systems of equations has also been updated in line with the newly emerging field of hybrid numeric-symbolic solutions to systems of nonlinear equations. Our last chapter is devoted to probabilistic regression, the special Gauss–Markov model with random effects leading to estimators of type BLIP and VIP, including Bayesian estimation.

A great part of the work is presented in four appendices. Appendix A is a treatment of tensor algebra, namely linear algebra, matrix algebra, and multilinear algebra. Appendix B is devoted to sampling distributions and their use in terms of confidence intervals and confidence regions. Appendix C reviews the elementary notions of statistics, namely random events and stochastic processes. Appendix D introduces the basics of Groebner basis algebra: its careful definition, the Buchberger algorithm, and especially the C. F. Gauss combinatorial algorithm (a tiny computational sketch follows).
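
As a taste of Appendix D, the following sketch (our own toy example, using SymPy's groebner routine, which implements a Buchberger-type algorithm; the circle-and-line system is purely illustrative) reduces a small nonlinear system to triangular form via a lexicographic Groebner basis:

```python
# A minimal Groebner-basis sketch: intersect the unit circle with the
# line x = y.  A lexicographic basis is triangular, so the last
# polynomial depends on y alone and the system is solved back-to-front.
from sympy import groebner, symbols

x, y = symbols('x y')
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
print(G)  # GroebnerBasis([x - y, 2*y**2 - 1], ...)
```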

Author(s): Erik W. Grafarend, Silvelyn Zwanzig, Joseph L. Awange
Series: Springer Geophysics
Edition: 2
Publisher: Springer
Year: 2022

Language: English
Pages: 1126
City: Cham

Foreword
Contents
Preface to the First Edition
Preface to the Second Edition
Chapter 1 The First Problem of Algebraic Regression
1-1 Introduction
1-11 The Front Page Example
1-12 The Front Page Example: Matrix Algebra
1-13 The Front Page Example: MINOS, Horizontal Rank Partitioning
1-14 The Range R(f) and the Kernel N(f)
1-15 The Interpretation of MINOS
1-2 Minimum Norm Solution (MINOS)
1-21 A Discussion of the Metric of the Parameter Space X
1-22 An Alternative Choice of the Metric of the Parameter Space X
1-23 Gx-MINOS and Its Generalized Inverse
1-24 Eigenvalue Decomposition of Gx-MINOS: Canonical MINOS
1-3 Case Study
1-31 Fourier Series
1-32 Fourier–Legendre Series
1-33 Nyquist Frequency for Spherical Data
1-4 Special Nonlinear Models
1-41 Taylor Polynomials, Generalized Newton Iteration
1-42 Linearized Models with Datum Defect
1-5 Notes
Chapter 2 The First Problem of Probabilistic Regression: The Bias Problem
2-1 Linear Uniformly Minimum Bias Estimator (LUMBE)
2-2 The Equivalence Theorem of Gx-MINOS and S-LUMBE
2-3 Example
Chapter 3 The Second Problem of Algebraic Regression
3-1 Introduction
3-11 The Front Page Example
3-12 The Front Page Example in Matrix Algebra
3-13 Least Squares Solution of the Front Page Example by Means of Vertical Rank Partitioning
3-14 The Range R(f) and the Kernel N(f), Interpretation of “LESS” by Three Partitionings
3-2 The Least Squares Solution: “LESS”
3-21 A Discussion of the Metric of the Parameter Space X
3-22 Alternative Choices of the Metric of the Observation Space Y
3-23 Gy-LESS and Its Generalized Inverse
3-24 Eigenvalue Decomposition of Gy-LESS: Canonical LESS
3-3 Case Study
3-31 Canonical Analysis of the Hat Matrix, Partial Redundancies, High Leverage Points
3-32 Multilinear Algebra, “Join” and “Meet”, the Hodge Star Operator
3-33 From A to B: Latent Restrictions, Grassmann Coordinates, Plücker Coordinates
3-34 From B to A: Latent Parametric Equations, Dual Grassmann Coordinates, Dual Plücker Coordinates
3-35 Break Points
3-4 Special Linear and Nonlinear Models: A Family of Means for Direct Observations
3-5 A Historical Note on C.F. Gauss and A.M. Legendre
Chapter 4 The Second Problem of Probabilistic Regression
4-1 Introduction
4-11 The Front Page Example
4-12 Estimators of Type BLUUE and BIQUUE of the Front Page Example
4-13 BLUUE and BIQUUE of the Front Page Example, Sample Median, Median Absolute Deviation
4-14 Alternative Estimation Maximum Likelihood (MALE)
4-2 Setup of the Best Linear Uniformly Unbiased Estimator
4-21 The Best Linear Uniformly Unbiased Estimation ξ̂ of ξ: Σy-BLUUE
4-22 The Equivalence Theorem of Gy-LESS and Σy-BLUUE
4-3 Setup of the Best Invariant Quadratic Uniformly Unbiased Estimator
4-31 Block Partitioning of the Dispersion Matrix and Linear Space Generated by Variance-Covariance Components
4-32 Invariant Quadratic Estimation of Variance-Covariance Components of Type IQE
4-33 Invariant Quadratic Uniformly Unbiased Estimations of Variance-Covariance Components of Type IQUUE
4-34 Invariant Quadratic Uniformly Unbiased Estimations of One Variance Component (IQUUE) from Σy-BLUUE: HIQUUE
4-35 Invariant Quadratic Uniformly Unbiased Estimators of Variance Covariance Components of Helmert Type: HIQUUE Versus HIQE
4-36 Best Quadratic Uniformly Unbiased Estimations of One Variance Component: BIQUUE
4-37 Simultaneous Determination of the First Moment and the Second Central Moment, Inhomogeneous Multilinear Estimation, the E–D Correspondence, Bayes Design with Moment Estimations
Chapter 5 The Third Problem of Algebraic Regression
5-1 Introduction
5-11 The Front Page Example
5-12 The Front Page Example in Matrix Algebra
5-13 Minimum Norm: Least Squares Solution of the Front Page Example by Means of Additive Rank Partitioning
5-14 Minimum Norm: Least Squares Solution of the Front Page Example by Means of Multiplicative Rank Partitioning
5-15 The Range R(f) and the Kernel N(f), Interpretation of “MINOLESS” by Three Partitionings
5-2 MINOLESS and Related Solutions Like Weighted Minimum Norm-Weighted Least Squares Solutions
5-21 The Minimum Norm-Least Squares Solution: “MINOLESS”
5-22 (Gx, Gy)-MINOS and Its Generalized Inverse
5-23 Eigenvalue Decomposition of (Gx, Gy)-MINOLESS
5-24 Notes
5-3 The Hybrid Approximation Solution: α-HAPS and Tykhonov–Phillips Regularization
Chapter 6 The Third Problem of Probabilistic Regression
6-1 Setup of the Best Linear Minimum Bias Estimator of Type BLUMBE
6-11 Definitions, Lemmas and Theorems
6-12 The First Example: BLUMBE Versus BLE, BIQUUE Versus BIQE, Triangular Leveling Network
6-2 Setup of the Best Linear Estimators of Type hom BLE, hom S-BLE and hom α-BLE for Fixed Effects
6-3 Continuous Networks
6-31 Continuous Networks of Second Derivatives Type
Chapter 7 Overdetermined System of Nonlinear Equations on Curved Manifolds
7-1 Introduction
7-2 Minimal Geodesic Distance: MINGEODISC
7-3 Special Models: From the Circular Normal Distribution to the Oblique Normal Distribution
7-31 A Historical Note on the von Mises Distribution
7-32 Oblique Map Projection
7-33 A Note on the Angular Metric
7-4 Case Study
References
Chapter 8 The Fourth Problem of Probabilistic Regression
8-1 The Random Effect Model
8-2 Examples
Chapter 9 The Fifth Problem of Algebraic Regression: The System of Conditional Equations: Homogeneous and Inhomogeneous Equations: {By = Bi versus –c + By = Bi}
9-1 Gy-LESS of a System of Inconsistent Homogeneous Conditional Equations
9-2 Solving a System of Inconsistent Inhomogeneous Conditional Equations
9-3 Examples
Chapter 10 The Fifth Problem of Probabilistic Regression
10-1 Inhomogeneous General Linear Gauss–Markov Model (Fixed Effects and Random Effects)
10-2 Explicit Representations of Errors in the General Gauss–Markov Model with Mixed Effects
10-3 An Example for Collocation
10-4 Comments
Chapter 11 The Sixth Problem of Probabilistic Regression
11-1 Introduction
11-2 The Errors-in-Variables Model and its Symmetry
11-3 Least Squares in Linear Errors-in-Variables Models
11-31 Naive Least Squares
11-32 Total Least Squares (TLS)
11-4 SIMEX and SYMEX
11-41 SIMEX
11-42 SYMEX
11-5 Datum Transformation
11-6 Nonlinear Errors-in-Variables Models
Chapter 12 The Nonlinear Problem of the 3d Datum Transformation and the Procrustes Algorithm
12-1 The 3d Datum Transformation and the Procrustes Algorithm
12-2 The Variance-Covariance Matrix of the Error Matrix E
12-3 References
Chapter 13 The Sixth Problem of Generalized Algebraic Regression
13-1 Variance-Covariance-Component Estimation in the Linear Model Ax + ε = y, y ∉ R(A)
13-2 Variance-Covariance-Component Estimation in the Linear Model Bε = By – c, By ∉ R(A) + c
13-3 Variance-Covariance-Component Estimation in the Linear Model Ax + ε + Bε = By – c, By ∉ R(A) + c
13-4 The Block Structure of the Dispersion Matrix D{y}
Chapter 14 Special Problems of Algebraic Regression and Stochastic Estimation
14-1 The Multivariate Gauss–Markov Model: A Special Problem of Probabilistic Regression
14-2 n-Way Classification Models
14-21 A First Example: 1-Way Classification
14-22 A Second Example: 2-Way Classification Without Interaction
14-23 A Third Example: 2-Way Classification with Interaction
14-24 Higher Classifications with Interaction
14-3 Dynamical Systems
Chapter 15 Systems of Equations: Hybrid Algebraic-Numeric Solutions
15-1 Algebraic, numeric, and hybrid algebraic-numeric
15-2 Algebraic solutions: Background
15-3 Nonlinear systems of equations: Algebraic methods
15-31 Nonlinear Gauss-Markov model: Algebraic solution
15-32 Adjustment of the combinatorial subsets
15-4 Examples
15-5 Hybrid algebraic-numeric methods
15-6 Notes
Chapter 16 Integer Least Squares
16-1 Introductory remarks
16-2 Model for Positioning
16-3 Mixed Integer Linear Model
16-4 Integer Least Squares
16-41 Simple Rounding Solution
16-42 Main Steps
16-43 The Closest Vector Problem (CVP)
16-44 Reduction
16-45 Gram–Schmidt Method
16-46 The LLL Algorithm
16-47 Babai’s Rounding Technique
Chapter 17 Bayesian Inference
17-1 Introduction
17-2 Principle of Bayesian Analysis
17-21 Sequential Analysis
17-22 Hierarchical Bayes Models
17-23 Choice of Prior
17-24 Bayesian Inference
17-3 Univariate Linear Model
17-31 Model Assumptions
17-32 Normal-inverse-gamma Distribution
17-33 Noninformative Prior
17-34 Conjugate Prior
17-35 Regularized Estimators
17-4 Mixed Model
17-41 Prior Distribution
17-42 Posterior Distribution
17-5 Multivariate Linear Model
17-51 Normal-inverse-Wishart Distribution
17-52 Noninformative Prior
17-53 Informative Prior
17-6 Computer Intensive Methods
17-61 Independent Monte Carlo (MC)
17-62 Importance Sampling
17-63 Markov Chain Monte Carlo
17-64 Gibbs Sampling
17-65 Rejection Algorithm
17-66 Approximate Bayesian Computation (ABC)
Appendix A Tensor Algebra, Linear Algebra, Matrix Algebra, Multilinear Algebra
A-1 Multilinear Functions and the Tensor Space Tpq
A-2 Decomposition of Multilinear Functions into Symmetric Multilinear Functions, Antisymmetric Multilinear Functions and Residual Multilinear Functions: Tpq = Spq ⊕ Apq ⊕ Rpq
A-3 Matrix Algebra, Array Algebra, Matrix Norm and Inner Product
A-4 The Hodge Star Operator, Self Duality
A-5 Linear Algebra
A-51 Definition of a Linear Algebra
A-52 The Diagrams “Ass”, “Uni” and “Comm”
A-53 Ringed Spaces: The Subalgebra “Ring with Identity”
A-54 Definition of a Division Algebra and Non-Associative Algebra
A-55 Lie Algebra, Witt Algebra
A-56 Definition of a Composition Algebra
A-6 Matrix Algebra Revisited, Generalized Inverses
A-61 Special Matrices: Helmert, Hankel, and Vandermonde
A-62 Scalar Measures of Matrices
A-63 Three Basic Types of Generalized Inverses
A-7 Complex Algebra, Quaternion Algebra, Octonion Algebra, Clifford Algebra, Hurwitz Theorem
A-71 Complex Algebra as a Division Algebra as well as a Composition Algebra, Clifford Algebra Cl(0, 1)
A-72 Quaternion Algebra as a Division Algebra as well as a Composition Algebra, Clifford Algebra Cl(0, 2)
A-73 Octonion Algebra as a Non-Associative Algebra as well as a Composition Algebra, Clifford Algebra with Respect to H × H
A-74 Clifford Algebra
Appendix B Sampling Distributions and Their Use: Confidence Intervals and Confidence Regions
B-1 A First Vehicle: Transformation of Random Variables
B-2 A Second Vehicle: Transformation of Random Variables
B-3 A First Confidence Interval of Gauss–Laplace Normally Distributed Observations µ, σ2 Known, the Three Sigma Rule
B-31 The Forward Computation of a First Confidence Interval of Gauss–Laplace Normally Distributed Observations: µ, σ2 Known
B-32 The Backward Computation of a First Confidence Interval of Gauss–Laplace Normally Distributed Observations: µ, σ2 Known
B-4 Sampling from the Gauss–Laplace Normal Distribution: A Second Confidence Interval for the Mean, Variance Known
B-41 Sampling Distributions of the Sample Mean µ, σ2 Known, and of the Sample Variance σ2
B-42 The Confidence Interval for the Sample Mean, Variance Known
B-5 Sampling from the Gauss–Laplace Normal Distribution: A Third Confidence Interval for the Mean, Variance Unknown
B-51 Student’s Sampling Distribution of the Random Variable (µ̂ – µ)/σ̂
B-52 The Confidence Interval for the Mean, Variance Unknown
B-53 The Uncertainty Principle
B-6 Sampling from the Gauss–Laplace Normal Distribution: A Fourth Confidence Interval for the Variance
B-61 The Confidence Interval for the Variance
B-62 The Uncertainty Principle
B-7 Sampling from the Multidimensional Gauss–Laplace Normal Distribution: The Confidence Region for the Fixed Parameters in the Linear Gauss–Markov Model
B-8 Multidimensional Variance Analysis, Sampling from the Multivariate Gauss–Laplace Normal Distribution
B-81 Distribution of Sample Mean and Variance-Covariance
B-82 Distribution Related to Correlation Coefficients
Appendix C Statistical Notions, Random Events and Stochastic Processes
C-1 Moments of a Probability Distribution, the Gauss–Laplace Normal Distribution and the Quasi-Normal Distribution
C-2 Error Propagation
C-3 Useful Identities
C-4 Scalar-Valued Stochastic Processes of One Parameter
C-5 Characteristics of One Parameter Stochastic Processes
C-6 Simple Examples of One Parameter Stochastic Processes
C-7 Wiener Processes
C-71 Definition of the Wiener Processes
C-72 Special Wiener Processes: Ornstein–Uhlenbeck, Wiener Processes with Drift, Integral Wiener Processes
C-8 Spectral Analysis of One Parameter Stationary Stochastic Processes
C-81 Foundations: Ergodic and Stationary Processes
C-82 Processes with Discrete Spectrum
C-83 Processes with Continuous Spectrum
C-84 Spectral Decomposition of the Mean and Variance-Covariance Function
C-9 Scalar-, Vector-, and Tensor-Valued Stochastic Processes of Multi-Parameter Systems
C-91 Characteristic Functional
C-92 The Moment Representation of Stochastic Processes for Scalar Valued and Vector Valued Quantities
C-93 Tensor-Valued Statistically Homogeneous and Isotropic Fields of Multi-Point Systems
Appendix D Basics of Groebner Basis Algebra
D-1 Definitions
D-2 Buchberger Algorithm
D-21 Mathematica Computation of Groebner Basis
D-22 Maple Computation of Groebner Basis
D-3 Gauss Combinatorial Formulation
References
Index