Matrix and Tensor Decompositions in Signal Processing

This second volume presents the main matrix and tensor decompositions, their uniqueness properties, and tensor networks useful for the analysis of massive data. Parametric estimation algorithms are presented for identifying the main tensor decompositions. After a brief historical review of compressed sampling methods, the book surveys the main methods for recovering matrices and tensors with missing data under the low-rank hypothesis, with illustrative examples throughout.
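As a minimal sketch of the low-rank hypothesis mentioned above (an illustration, not an algorithm taken from the book), the following NumPy example applies the classical Eckart-Young result covered in section 1.5.7 of the contents below: truncating the SVD of a matrix to its r dominant singular triplets yields its best rank-r approximation in the Frobenius norm. The matrix sizes, the rank r = 5 and the noise level are arbitrary choices made for this example.

import numpy as np

# Illustrative only: build a 50 x 40 matrix of exact rank 5,
# then perturb it with small additive noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
A_noisy = A + 0.01 * rng.standard_normal(A.shape)

# Truncated SVD: keep the r dominant singular triplets
# (Eckart-Young: best rank-r approximation in Frobenius norm).
r = 5
U, s, Vt = np.linalg.svd(A_noisy, full_matrices=False)
A_hat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# The rank-r reconstruction recovers A up to the noise level.
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))

This kind of truncation is the basic building block behind many of the low-rank completion methods of the type surveyed in the volume.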

Author(s): Gérard Favier
Series: Digital Signal and Image Processing; Matrices and Tensors with Signal Processing Set, Volume 2
Publisher: Wiley-ISTE
Year: 2021

Language: English
Pages: 378
City: London

Cover
Half-Title Page
Title Page
Copyright Page
Contents
Introduction
I.1. What are the advantages of tensor approaches?
I.2. For what uses?
I.3. In what fields of application?
I.4. With what tensor decompositions?
I.5. With what cost functions and optimization algorithms?
I.6. Brief description of content
1. Matrix Decompositions
1.1. Introduction
1.2. Overview of the most common matrix decompositions
1.3. Eigenvalue decomposition
1.3.1. Reminders about the eigenvalues of a matrix
1.3.2. Eigendecomposition and properties
1.3.3. Special case of symmetric/Hermitian matrices
1.3.4. Application to compute the powers of a matrix and a matrix polynomial
1.3.5. Application to compute a state transition matrix
1.3.6. Application to compute the transfer function and the output of a linear system
1.4. URVH decomposition
1.5. Singular value decomposition
1.5.1. Definition and properties
1.5.2. Reduced SVD and dyadic decomposition
1.5.3. SVD and fundamental subspaces associated with a matrix
1.5.4. SVD and the Moore–Penrose pseudo-inverse
1.5.5. SVD computation
1.5.6. SVD and matrix norms
1.5.7. SVD and low-rank matrix approximation
1.5.8. SVD and orthogonal projectors
1.5.9. SVD and LS estimator
1.5.10. SVD and polar decomposition
1.5.11. SVD and PCA
1.5.12. SVD and blind source separation
1.6. CUR decomposition
2. Hadamard, Kronecker and Khatri–Rao Products
2.1. Introduction
2.2. Notation
2.3. Hadamard product
2.3.1. Definition and identities
2.3.2. Fundamental properties
2.3.3. Basic relations
2.3.4. Relations between the diag operator and Hadamard product
2.4. Kronecker product
2.4.1. Kronecker product of vectors
2.4.2. Kronecker product of matrices
2.4.3. Rank, trace, determinant and spectrum of a Kronecker product
2.4.4. Structural properties of a Kronecker product
2.4.5. Inverse and Moore–Penrose pseudo-inverse of a Kronecker product
2.4.6. Decompositions of a Kronecker product
2.5. Kronecker sum
2.5.1. Definition
2.5.2. Properties
2.6. Index convention
2.6.1. Writing vectors and matrices with the index convention
2.6.2. Basic rules and identities with the index convention
2.6.3. Matrix products and index convention
2.6.4. Kronecker products and index convention
2.6.5. Vectorization and index convention
2.6.6. Vectorization formulae
2.6.7. Vectorization of partitioned matrices
2.6.8. Traces of matrix products and index convention
2.7. Commutation matrices
2.7.1. Definition
2.7.2. Properties
2.7.3. Kronecker product and permutation of factors
2.7.4. Multiple Kronecker product and commutation matrices
2.7.5. Block Kronecker product
2.7.6. Strong Kronecker product
2.8. Relations between the diag operator and the Kronecker product
2.9. Khatri–Rao product
2.9.1. Definition
2.9.2. Khatri–Rao product and index convention
2.9.3. Multiple Khatri–Rao product
2.9.4. Properties
2.9.5. Identities
2.9.6. Khatri–Rao product and permutation of factors
2.9.7. Trace of a product of matrices and Khatri–Rao product
2.10. Relations between vectorization and Kronecker and Khatri–Rao products
2.11. Relations between the Kronecker, Khatri–Rao and Hadamard products
2.12. Applications
2.12.1. Partial derivatives and index convention
2.12.2. Solving matrix equations
3. Tensor Operations
3.1. Introduction
3.2. Notation and particular sets of tensors
3.3. Notion of slice
3.3.1. Fibers
3.3.2. Matrix and tensor slices
3.4. Mode combination
3.5. Partitioned tensors or block tensors
3.6. Diagonal tensors
3.6.1. Case of a tensor X ∈ K^[N;I]
3.6.2. Case of a square tensor
3.6.3. Case of a rectangular tensor
3.7. Matricization
3.7.1. Matricization of a third-order tensor
3.7.2. Matrix unfoldings and index convention
3.7.3. Matricization of a tensor of order N
3.7.4. Tensor matricization by index blocks
3.8. Subspaces associated with a tensor and multilinear rank
3.9. Vectorization
3.9.1. Vectorization of a tensor of order N
3.9.2. Vectorization of a third-order tensor
3.10. Transposition
3.10.1. Definition of a transpose tensor
3.10.2. Properties of transpose tensors
3.10.3. Transposition and tensor contraction
3.11. Symmetric/partially symmetric tensors
3.11.1. Symmetric tensors
3.11.2. Partially symmetric/Hermitian tensors
3.11.3. Multilinear forms with Hermitian symmetry and Hermitian tensors
3.11.4. Symmetrization of a tensor
3.12. Triangular tensors
3.13. Multiplication operations
3.13.1. Outer product of tensors
3.13.2. Tensor-matrix multiplication
3.13.3. Tensor–vector multiplication
3.13.4. Mode-(p, n) product
3.13.5. Einstein product
3.14. Inverse and pseudo-inverse tensors
3.15. Tensor decompositions in the form of factorizations
3.15.1. Eigendecomposition of a symmetric square tensor
3.15.2. SVD decomposition of a rectangular tensor
3.15.3. Connection between SVD and HOSVD
3.15.4. Full-rank decomposition
3.16. Inner product, Frobenius norm and trace of a tensor
3.16.1. Inner product of two tensors
3.16.2. Frobenius norm of a tensor
3.16.3. Trace of a tensor
3.17. Tensor systems and homogeneous polynomials
3.17.1. Multilinear systems based on the mode-n product
3.17.2. Tensor systems based on the Einstein product
3.17.3. Solving tensor systems using LS
3.18. Hadamard and Kronecker products of tensors
3.19. Tensor extension
3.20. Tensorization
3.21. Hankelization
4. Eigenvalues and Singular Values of a Tensor
4.1. Introduction
4.2. Eigenvalues of a tensor of order greater than two
4.2.1. Different definitions of the eigenvalues of a tensor
4.2.2. Positive/negative (semi-)definite tensors
4.2.3. Orthogonally/unitarily similar tensors
4.3. Best rank-one approximation
4.4. Orthogonal decompositions
4.5. Singular values of a tensor
5. Tensor Decompositions
5.1. Introduction
5.2. Tensor models
5.2.1. Tucker model
5.2.2. Tucker-(N1,N) model
5.2.3. Tucker model of a transpose tensor
5.2.4. Tucker decomposition and multidimensional Fourier transform
5.2.5. PARAFAC model
5.2.6. Block tensor models
5.2.7. Constrained tensor models
5.3. Examples of tensor models
5.3.1. Model of multidimensional harmonics
5.3.2. Source separation
5.3.3. Model of a FIR system using fourth-order output cumulants
Appendix. Random Variables and Stochastic Processes
A1.1. Introduction
A1.2. Random variables
A1.2.1. Real scalar random variables
A1.2.2. Real multidimensional random variables
A1.2.3. Gaussian distribution
A1.3. Discrete-time random signals
A1.3.1. Second-order statistics
A1.3.2. Stationary and ergodic random signals
A1.3.3. Higher order statistics of random signals
A1.4. Application to system identification
A1.4.1. Case of linear systems
A1.4.2. Case of homogeneous quadratic systems
References
Index
Other titles from ISTE in Digital Signal and Image Processing