This major revision of the author's popular book still focuses on foundations and proofs, but now exhibits a shift away from topology towards probability and information theory (with Shannon's source and channel coding theorems), which are used throughout. Three areas vital to the digital revolution are tackled (compression, restoration and recognition), establishing not only what is true, but why, to facilitate education and research. It remains a valuable book for computer scientists, engineers and applied mathematicians.
Author(s): S. G. Hoggar
Edition: 1
Year: 2006
Language: English
Pages: 854
Tags: Computer science and engineering; Media data processing; Image processing;
Cover......Page 1
Half-title......Page 3
Title......Page 5
Copyright......Page 6
Dedication......Page 7
Contents......Page 9
Preface......Page 13
Beauty is in the eye of the beholder?…......Page 15
Which chapters depend on which......Page 26
Wiener filter......Page 27
Tomography......Page 28
A word on notation......Page 29
Symbols......Page 31
Part I The plane......Page 35
1.1 Introduction......Page 37
1.2.1 The plane and vectors......Page 40
1.2.2 Isometries......Page 44
1.2.3 The sense of an isometry......Page 47
1.3.1 Composing isometries......Page 50
1.3.2 The classification of isometries......Page 52
Exercises 1......Page 55
2 How isometries combine......Page 57
2.1 Reflections are the key......Page 58
2.2 Some useful compositions......Page 59
2.3 The image of a line of symmetry......Page 65
2.4 The dihedral group......Page 70
2.5 Appendix on groups......Page 74
Exercises 2......Page 75
3 The seven braid patterns......Page 77
Constructing braid patterns......Page 79
Exercises 3......Page 80
4.1 Translations and nets......Page 82
4.2 Cells......Page 84
4.3.1 Nets allowing a reflection......Page 90
4.3.3 The five net types......Page 92
Exercises 4......Page 97
5.1 Preliminaries......Page 98
5.2 The general parallelogram net......Page 100
5.3 The rectangular net......Page 101
5.4 The centred rectangular net......Page 102
Explanation for symmetries......Page 103
5.6 The hexagonal net......Page 105
5.7 Examples of the 17 plane pattern types......Page 107
5.8 Scheme for identifying pattern types......Page 109
Exercises 5......Page 111
6.1 Equivalent symmetry groups......Page 113
6.2 Plane patterns classified......Page 116
6.2.1 Patterns with different signatures – the point group......Page 117
6.2.2 The classification is complete......Page 119
6.3 Tilings and Coxeter graphs......Page 125
6.3.1 Archimedean tilings......Page 127
6.3.2 Coxeter graphs and Wythoff’s construction......Page 129
One node ringed......Page 130
Cases with two rings......Page 131
Cases with three rings......Page 132
6.4.1 Using nets and cells......Page 133
6.4.3 Using the fundamental region......Page 134
6.4.4 Algorithm for the 17 plane patterns......Page 139
Exercises 6......Page 143
Part II Matrix structures......Page 147
7.1.1 Recapitulation – vectors......Page 149
7.1.2 Recapitulation – coordinate axes......Page 150
7.1.3 Right handed versus left handed triples......Page 151
7.1.4 The scalar product......Page 152
7.1.5 A trip through n-space......Page 154
7.2.1 Matrices......Page 160
7.2.2 Determinants......Page 164
First results......Page 165
Three special cases......Page 166
7.2.3 The inverse of a matrix......Page 168
7.2.4 Orthogonal matrices......Page 170
7.2.5 Block matrices......Page 172
7.3.1 The standard vector product......Page 174
7.3.2 Triple products of vectors......Page 175
7.3.3 Vectors and coordinate geometry......Page 177
7.4.1 Rotations and reflections......Page 179
7.4.2 Linearity......Page 182
7.4.3 Projection mappings......Page 184
7.4.4 Changing bases......Page 185
Isometries and orthogonal matrices......Page 187
7.5 Permutations and the proof of determinant rules......Page 189
Exercises 7......Page 193
8.1.1 Complex numbers......Page 196
8.1.2 Eigenvalues and eigenvectors......Page 198
8.1.3 Linear isometries in 3-space......Page 202
Calculating axes and normals in practice......Page 204
8.2.1 Rank and row operations......Page 206
8.2.2 Rank, nullity and linear equations......Page 211
8.2.3 Inverting a matrix by row operations......Page 213
8.3.1 Diagonalisation......Page 214
8.3.2 Quadratic forms......Page 217
8.3.3 Positive definite forms......Page 221
8.4 The Singular Value Decomposition (SVD)......Page 226
8.4.1 Matrix norms......Page 227
8.4.2 The SVD – existence and basics......Page 230
First properties of the SVD......Page 231
8.4.3 Optimisation properties of the SVD......Page 233
Exercises 8......Page 237
Part III Here’s to probability......Page 241
9.1.1 Sample spaces and events......Page 243
9.1.2 First rules of probability......Page 245
9.1.3 Finite sample spaces......Page 247
9.1.4 Finite equiprobable spaces......Page 248
9.2.1 Conditional probability......Page 251
9.2.2 Independence......Page 254
9.2.3 Bayes’ Theorem......Page 258
9.3.1 Discrete random variables......Page 261
9.3.2 Continuous random variables......Page 262
9.3.3 The cdf of a random variable......Page 264
9.3.4 Functions of a random variable......Page 266
9.3.5 Expected value......Page 268
9.3.6 Variance......Page 271
9.4.1 The binomial distribution......Page 273
9.4.2 The Poisson distribution......Page 274
The standard normal distribution......Page 277
The normal distribution in general......Page 279
9.4.4 Other distributions......Page 281
The continuous case......Page 282
9.5.1 Chebychev’s Inequality......Page 285
9.5.2 Jensen’s Inequality......Page 286
Exercises 9......Page 290
10.1.1 Discrete versus continuous......Page 292
10.1.2 Independence......Page 295
10.1.3 Conditional probability......Page 296
10.2.1 Product and quotient......Page 299
10.2.2 Convolution: from X + Y to sum of squares......Page 303
More on convolution......Page 305
10.2.3 Expectation and variance......Page 306
10.2.4 The Law of Large Numbers......Page 309
10.2.5 Independence and n-vectors......Page 310
10.3.1 Moments......Page 311
10.3.2 Transforms and uniqueness......Page 313
10.3.3 The Central Limit Theorem......Page 315
10.4.1 Correlation......Page 319
10.4.2 The covariance matrix......Page 321
10.4.3 The d-dimensional normal/Gaussian distribution......Page 323
10.4.4 Principal Component Analysis......Page 327
Exercises 10......Page 336
11 Sampling and inference......Page 337
11.1.1 Sampling......Page 338
Distribution of the sample mean......Page 339
Distribution of the sample variance S²......Page 340
11.1.2 Estimating parameters......Page 343
11.1.3 Maximum likelihood estimates......Page 345
11.1.4 Regression and least squares estimates......Page 349
11.1.5 Hypothesis testing......Page 352
11.1.6 Hypothesis testing – distributions......Page 355
11.2 The Bayesian approach......Page 358
11.2.1 From prior to posterior......Page 359
11.2.2 Bayes pattern classifiers......Page 364
11.3.2 Basis: the uniform distribution......Page 369
11.3.3 The inverse transform method (IT)......Page 371
11.3.4 The accept–reject method (AR)......Page 372
11.3.5 Generating for the normal/Gaussian......Page 375
11.3.6 Generating for discrete distributions......Page 377
11.4 Markov Chain Monte Carlo......Page 378
11.4.1 Markov chains......Page 379
The long term......Page 382
11.4.2 Monte Carlo methods......Page 385
11.4.3 Markov Chain Monte Carlo......Page 389
11.4.4 Bayesian networks......Page 393
11.4.5 Markov Random Fields and the Gibbs sampler......Page 398
11.4.6 Bayesian image restoration......Page 406
11.4.7 The binary case – MAP by network flows......Page 410
From energy to network......Page 415
From MAP candidate to cut......Page 416
The cut-flow algorithm......Page 417
Results and conjectures......Page 421
Exercises 11......Page 423
Part IV Information, error and belief......Page 427
12.1 The idea of entropy......Page 429
First consequences for H......Page 432
12.2 Codes and binary trees......Page 436
12.3 Huffman text compression......Page 440
Canonical Huffman codes......Page 443
12.4 The redundancy of Huffman codes......Page 446
12.5 Arithmetic codes......Page 451
12.6 Prediction by Partial Matching......Page 458
12.7 LZW compression......Page 459
12.8 Entropy and Minimum Description Length (MDL)......Page 463
12.8.1 Kolmogorov complexity......Page 464
12.8.2 Complexity and entropy......Page 466
12.8.3 The MDL Principle......Page 471
Exercises 12......Page 476
13.1 Channel capacity......Page 478
13.1.1 The Discrete Memoryless Channel......Page 479
13.1.2 Entropies of a pair......Page 480
13.1.3 Three properties of mutual entropy......Page 484
13.1.4 Random vectors......Page 487
13.1.5 Channel capacity......Page 492
13.1.6 The Channel Coding Theorem......Page 497
13.2 Error-correcting codes......Page 503
13.2.1 Nearest neighbour decoding......Page 505
13.2.2 Hamming single-error-correcting codes......Page 508
13.2.3 Finite fields and minimal polynomials......Page 511
Minimal polynomials......Page 516
13.2.4 BCH and Reed–Solomon codes......Page 518
BCH codes......Page 520
Reed–Solomon codes......Page 523
13.2.5 Aiming for the Shannon prediction......Page 525
13.3.1 Belief propagation in Bayesian networks......Page 527
Markov chains and message passing......Page 530
Subnetworks......Page 533
Computing Belief(x)......Page 535
The levels method......Page 536
The factor graph......Page 537
13.3.2 Convolutional codes......Page 540
The state diagram as encoder......Page 541
Trellis diagrams for decoding......Page 542
13.3.3 Convolutional decoding by belief propagation......Page 545
13.3.4 Turbocodes......Page 547
13.4 Postscript: Bayesian nets in computer vision......Page 550
Further issues in channel coding......Page 551
Exercises 13......Page 552
Part V Transforming the image......Page 555
14.1.1 Basic definitions and properties......Page 557
14.1.2 Inversion......Page 562
14.1.3 Convolution......Page 565
14.1.4 The Fast Fourier Transform (FFT)......Page 569
14.1.5 Choices in defining the DFT......Page 573
14.2.1 The 1-dimensional case......Page 574
14.2.2 The Dirac delta function δ(x)......Page 578
14.3.1 The background – Fourier series......Page 580
Real Fourier series......Page 581
Complex Fourier series......Page 583
14.3.2 The DFT as approximation to the Fourier Transform......Page 585
14.3.3 The DFT, Fourier series and sampling rate......Page 587
Exercises 14......Page 592
15 Transforming images......Page 594
15.1.1 Separable transforms in general......Page 595
15.1.2 Constructing the 2D Fourier Transform......Page 597
Exhibiting the discrete transform......Page 599
15.1.3 Rotation and projection theorems......Page 602
15.1.4 The discrete 2D Convolution Theorem......Page 605
15.1.5 Fourier Transforms and the statistics of an image......Page 610
15.2 Filters......Page 612
15.2.1 Greyscale transforms......Page 613
15.2.2 Highpass and lowpass filters......Page 616
15.2.3 Small convolution approximations to large filters......Page 620
15.2.4 Edge-detection filters......Page 626
15.3.1 Motion blur......Page 629
15.3.2 Gaussian optics – the thin lens......Page 634
Lens blur......Page 637
Atmospheric blur......Page 640
A matrix approach......Page 644
The Wiener filter......Page 646
15.4.1 Pyramids, sub-band coding and entropy......Page 651
15.4.2 The Discrete Cosine Transform......Page 656
15.4.3 JPEG and beyond......Page 660
15.5.1 The DCT and the K–L transform......Page 662
15.5.2 The Fourier Transform in n dimensions......Page 665
Main results for the n-dimensional Fourier Transform; proofs of main results......Page 667
Exercises 15......Page 669
16.1.1 The fractal nature of Nature......Page 671
16.1.2 Fractal dimension......Page 674
16.1.3 Iterated Function Systems......Page 679
16.1.4 Fractal Compression by PIFS......Page 682
Sources of more accurate domains......Page 688
Domain pools for speed......Page 689
Subdividing the ranges......Page 690
Further developments......Page 691
16.2.1 The Haar approach......Page 692
Deriving basis functions......Page 693
16.2.2 The box basis and multiresolution......Page 694
The next level......Page 695
16.2.3 The 2D Haar Transform......Page 697
16.3.1 Filter banks for multiresolution......Page 699
16.3.2 Orthogonal wavelets......Page 704
16.3.3 Daubechies wavelets......Page 706
16.3.4 Fingerprints......Page 709
16.4.1 Discrete versus continuous wavelets......Page 711
16.4.3 Canny edge-detection......Page 713
16.4.4 Wavelets in edge-detection......Page 714
Some application areas of wavelets......Page 716
Exercises 16......Page 717
Part VI See, edit, reconstruct......Page 719
17.1 Splines from boxes......Page 721
17.1.2 Convexity properties......Page 722
17.1.3 B-splines by convolution......Page 726
17.1.4 The Fourier Transform......Page 729
17.1.5 Basis functions: Cox–de Boor relations......Page 731
17.1.6 Cubic B-splines......Page 733
Using collinear and double control points......Page 735
17.2.1 A fundamental theorem......Page 737
17.2.2 B-splines by subdivision......Page 741
17.2.3 Subdivision and basis functions......Page 744
17.2.4 Nested spaces......Page 747
17.2.5 An end-correction for tangency......Page 749
17.3.1 Semi-orthogonal wavelets......Page 753
17.3.2 Finding Q with symmetry......Page 756
Solving for Q......Page 758
17.3.3 The filter bank and curve editing......Page 760
Curve editing at different scales......Page 762
Conclusions......Page 765
17.4 Appendix: band matrices for finding Q, A and B......Page 766
17.4.1 Band matrices and polynomials......Page 767
Why band matrices?......Page 768
Products of truncations......Page 769
17.4.2 Solving the matrix equation FX = 0......Page 770
Application 1: uniform splines......Page 772
Application 2: end-corrected cubic B-splines......Page 774
Application 3: inverting [P|Q] to obtain A, B......Page 776
17.5.1 Loop subdivision......Page 777
17.5.2 Parameters and basis functions......Page 779
17.5.3 The inner product......Page 782
17.5.4 Building orthogonality......Page 783
Further work......Page 787
Exercises 17......Page 788
18.1 Neural networks......Page 791
18.1.1 In the beginning – McCulloch–Pitts......Page 792
18.1.2 Rosenblatt’s Perceptron......Page 793
18.1.3 The Perceptron Learning Algorithm......Page 796
The single-layer perceptron net......Page 798
Limitations of the perceptron......Page 800
18.1.4 Multilayer nets and backpropagation......Page 802
Discovering the backpropagation method......Page 803
18.1.5 Classification: can outputs be probabilities?......Page 809
18.1.6 Getting the right minimum (ensembles)......Page 812
18.2.1 Hebbian reinforcement learning and PCA......Page 817
18.2.2 Competitive learning......Page 819
18.2.3 Kohonen nets and LVQ......Page 822
18.3.1 Differential entropy......Page 826
18.3.2 Mutual information and neural nets......Page 831
18.3.3 A general learning rule......Page 835
18.3.4 Shannon’s Rate Distortion Theory......Page 839
18.3.5 Source coding and the Gaussian source......Page 843
The Source Coding Theorem......Page 845
Gaussians again......Page 846
18.3.6 The LBG quantiser......Page 847
Examples and developments......Page 851
18.4 Tomography......Page 852
18.4.1 The Hough Transform......Page 853
18.4.2 Tomography and the Radon Transform......Page 856
18.4.3 The Fourier method of reconstruction......Page 857
18.4.4 Further developments......Page 863
Exercises 18......Page 864
References......Page 866
Index......Page 879