Smoothing, Filtering and Prediction: Estimating the Past, Present and Future


Publisher: InTech, 2012, 286 pp.
Scientists, engineers and the like are a strange lot. Unperturbed by societal norms, they direct their energies to finding better alternatives to existing theories and concocting solutions to unsolved problems. Driven by an insatiable curiosity, they record their observations and crunch the numbers. This tome is about the science of crunching. It’s about digging out something of value from the detritus that others tend to leave behind. The described approaches involve constructing models to process the available data. Smoothing entails revisiting historical records in an endeavour to understand something of the past. Filtering refers to estimating what is happening currently, whereas prediction is concerned with hazarding a guess about what might happen next.
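To put the three tasks in symbols (using a common textbook convention, not necessarily this book's notation): given measurements y_1, ..., y_k, each task forms a conditional-mean estimate of the state x_j, and they differ only in where the estimation instant j sits relative to the latest measurement k:

    \hat{x}_{j|k} = E[\, x_j \mid y_1, \ldots, y_k \,], \qquad
    \begin{cases} j < k & \text{smoothing,} \\ j = k & \text{filtering,} \\ j > k & \text{prediction.} \end{cases}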
The basics of smoothing, filtering and prediction were worked out by Norbert Wiener, Rudolf E. Kalman, Richard S. Bucy and others over half a century ago. This book describes the classical techniques together with some more recently developed embellishments for improving performance within applications. Its aims are threefold. First, to present the subject in an accessible way, so that it can serve as a practical guide for undergraduates and newcomers to the field. Second, to differentiate between techniques that satisfy performance criteria and those relying on heuristics. Third, to draw attention to Wiener’s approach for optimal non-causal filtering (or smoothing).
Optimal estimation is routinely taught at a post-graduate level without necessarily assuming familiarity with prerequisite material or a background in an engineering discipline. That is, the basics of estimation theory can be taught as a standalone subject. In the same way that a vehicle driver does not need to understand the workings of an internal combustion engine, or a computer user does not need to be acquainted with its inner workings, implementing an optimal filter is hardly rocket science. Indeed, since the filter recursions are all known, operating one is no different from pushing the buttons on a calculator. The key to obtaining good estimator performance is developing intimacy with the application at hand, namely, exploiting any available insight, expertise and a priori knowledge to model the problem. If the measurement noise is negligible, any number of solutions may suffice. Conversely, if the observations are dominated by measurement noise, the problem may be too hard. Experienced practitioners are able to recognise those intermediate sweet spots where cost-benefits can be realised.
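To make the push-button point concrete, here is a minimal sketch of one predict-correct cycle of the standard discrete-time Kalman filter, in textbook form; the function name and the symbols A, C, Q, R are generic illustrative choices rather than this book's notation:

    import numpy as np

    def kalman_step(x, P, y, A, C, Q, R):
        """One predict-correct cycle of the textbook discrete-time Kalman filter.

        x, P : prior state estimate and its error covariance
        y    : new measurement vector
        A, C : state transition and output matrices
        Q, R : process and measurement noise covariances
        """
        # Predict: propagate the estimate and covariance through the model.
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Correct: weight the new measurement by the Kalman gain.
        S = C @ P_pred @ C.T + R               # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ (y - C @ x_pred)
        P_new = (np.eye(len(x)) - K @ C) @ P_pred
        return x_new, P_new

Running this in a loop over incoming measurements is the entire filter; the intellectual work lies in choosing A, C, Q and R to reflect the application.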
Systems employing optimal techniques pervade our lives. They are embedded within medical diagnosis equipment, communication networks, aircraft avionics, robotics and market forecasting, to name a few. When tasked with new problems, in which information is to be extracted from noisy measurements, one can be faced with a plethora of algorithms and techniques. Understanding the performance of candidate approaches may seem unwieldy and daunting to novices. Therefore, the philosophy here is to present the linear-quadratic-Gaussian results for smoothing, filtering and prediction with accompanying proofs that the stated performance is attained, wherever this is appropriate. Unfortunately, this does require some maths, which trades off against accessibility. The treatment is a little repetitive and may seem trite, but hopefully it contributes to an understanding of the conditions under which solutions can add value.
Science is an evolving process in which what we think we know is continuously updated with refashioned ideas. Although evidence suggests that Babylonian astronomers were able to predict planetary motion, a bewildering variety of Earth and universe models followed. According to lore, ancient Greek philosophers such as Aristotle assumed a geocentric model of the universe, and about a century later Aristarchus developed a heliocentric version. It is reported that Eratosthenes arrived at a good estimate of the Earth’s circumference, yet there was a revival of flat-earth beliefs during the middle ages. Not all ideas are welcomed: Galileo was famously incarcerated for knowing too much. Similarly, newly-appearing signal processing techniques compete with old favourites. An aspiration here is to publicise the oft-forgotten approach of Wiener which, in concert with Kalman’s, leads to optimal smoothers. The ensuing results contrast with traditional solutions and may not sit well with more orthodox practitioners.
Kalman’s optimal filter results were published in the early 1960s, and various techniques for smoothing in a state-space framework were developed shortly thereafter. Wiener’s optimal smoother solution is less well known, perhaps because it was framed in the frequency domain and described in the archaic language of the day. His work of the 1940s was born of an analog world where filters were made exclusively of lumped circuit components. At that time, computers referred to people labouring with an abacus or an adding machine; Alan Turing’s and John von Neumann’s ideas had yet to be realised. In his book, Extrapolation, Interpolation and Smoothing of Stationary Time Series, Wiener wrote with little fanfare and dubbed the smoother unrealisable. The use of the Wiener-Hopf factor allows this smoother to be expressed in a time-domain state-space setting and included alongside other techniques within the designer’s toolbox.
A model-based approach is employed throughout, where estimation problems are defined in terms of state-space parameters. I recall attending Michael Green’s robust control course, in which he referred to a distillation column control problem competition where a student’s robust low-order solution outperformed a senior specialist’s optimal high-order solution. It is hoped that this text will equip readers to do similarly, namely: make some simplifying assumptions, apply the standard solutions and back off from optimality if uncertainties degrade performance.
Both continuous-time and discrete-time techniques are presented. Sometimes the state dynamics and observations may be modelled exactly in continuous-time. In the majority of applications, some discrete-time approximations and processing of sampled data will be required. The material is organised as a ten-lecture course.
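For concreteness, the state-space models in question take the generic linear form below; the symbols follow a common convention and are not necessarily the book's own:

    \dot{x}(t) = A x(t) + B w(t), \qquad y(t) = C x(t) + v(t) \quad \text{(continuous time)},
    x_{k+1} = A x_k + B w_k, \qquad y_k = C x_k + v_k \quad \text{(discrete time)},

where w and v denote the process and measurement noises, respectively.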
Chapter 1 introduces some standard continuous-time fare such as the Laplace Transform, stability, adjoints and causality. A completing-the-square approach is then used to obtain the minimum-mean-square error (or Wiener) filtering solutions.
Chapter 2 deals with discrete-time minimum-mean-square error filtering. The treatment is somewhat brief since the developments follow analogously from the continuous-time case.
Chapter 3 describes continuous-time minimum-variance (or Kalman-Bucy) filtering. The filter is found using the conditional mean or least-mean-square error formula. It is shown for time-invariant problems that the Wiener and Kalman solutions are the same.
Chapter 4 addresses discrete-time minimum-variance (or Kalman) prediction and filtering. Once again, the optimum conditional mean estimate may be found via the least-mean-square-error approach. Generalisations for missing data, deterministic inputs, correlated noises, direct feedthrough terms, output estimation and equalisation are described.
Chapter 5 simplifies the discrete-time minimum-variance filtering results for steady-state problems. Discrete-time observability, Riccati equation solution convergence, asymptotic stability and Wiener filter equivalence are discussed.
Chapter 6 covers the subject of continuous-time smoothing. The main fixed-lag, fixed-point and fixed-interval smoother results are derived. It is shown that the minimum-variance fixed-interval smoother attains the best performance.
Chapter 7 is about discrete-time smoothing. It is observed that the fixed-point, fixed-lag and fixed-interval smoothers outperform the Kalman filter. Once again, the minimum-variance smoother attains the best-possible performance, provided that the underlying assumptions are correct.
Chapter 8 attends to parameter estimation. As the above-mentioned approaches all rely on knowledge of the underlying model parameters, maximum-likelihood techniques within expectation-maximisation algorithms for joint state and parameter estimation are described.
Chapter 9 is concerned with robust techniques that accommodate uncertainties within problem specifications. An extra term within the design Riccati equations enables designers to trade-off average error and peak error performance.
Chapter 10 rounds off the course by applying the aforementioned linear techniques to nonlinear estimation problems. It is demonstrated that step-wise linearisations can be used within predictors, filters and smoothers, albeit by forsaking optimal performance guarantees.
The foundations are laid in Chapters 1 and 2, which explain minimum-mean-square error solution construction and asymptotic behaviour. In single-input-single-output cases, finding Wiener filter transfer functions may have appeal. In general, designing Kalman filters is more tractable because solving a Riccati equation is easier than pole-zero cancellation. Kalman filters are needed if the signal models are time-varying. The filtered states can be updated via a one-line recursion, but the gain may need to be re-evaluated at each time step. Extended Kalman filters are contenders if nonlinearities are present. Smoothers are advocated when better performance is desired and some calculation delays can be tolerated.
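As a rough illustration of the Riccati route to a filter gain, the steady-state (time-invariant) Kalman gain can be obtained from the discrete-time algebraic Riccati equation; the sketch below assumes SciPy's solver and the same generic A, C, Q, R symbols as above:

    import numpy as np
    from scipy.linalg import solve_discrete_are

    def steady_state_gain(A, C, Q, R):
        """Steady-state Kalman gain for a time-invariant model.

        Passing A.T and C.T poses the filtering (rather than control)
        form of the discrete algebraic Riccati equation.
        """
        P = solve_discrete_are(A.T, C.T, Q, R)        # steady-state error covariance
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # time-invariant gain
        return K

With a fixed gain K, the state update really is a one-line recursion; for time-varying models the gain must instead be recomputed at each step, as noted above.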
This book elaborates on ten articles published in IEEE journals, and I am grateful to the anonymous reviewers who have improved my efforts over the years. The great people at the CSIRO, such as David Hainsworth and George Poropat, generously make themselves available to anglicise my engineering jargon. Sometimes posing good questions is helpful; for example, Paul Malcolm once asked "is it stable?", which led down fruitful paths. During a seminar at HSU, Udo Zoelzer provided the impulse for me to undertake this project. My sources of inspiration include interactions at the CDC meetings; thanks particularly to Dennis Bernstein, whose passion for writing has motivated me along the way.
Continuous-Time Minimum-Mean-Square-Error Filtering.
Discrete-Time Minimum-Mean-Square-Error Filtering.
Continuous-Time Minimum-Variance Filtering.
Discrete-Time Minimum-Variance Prediction and Filtering.
Discrete-Time Steady-State Minimum-Variance Prediction and Filtering.
Continuous-Time Smoothing.
Discrete-Time Smoothing.
Parameter Estimation.
Robust Prediction, Filtering and Smoothing.
Nonlinear Prediction, Filtering and Smoothing.

Author(s): Einicke G.A. (Ed.)

Language: English
Tags: Instrumentation; Signal processing