:x'(n) = sum_i=1^p a_i x(n-i)
:e(n) = x(n) - x'(n)
These equations are valid for all types of (one-dimensional) linear prediction; the differences lie in how the parameters a_i are chosen. The most common criterion for choosing the parameters a_i is minimisation of the mean squared error, which in this context is also called the autocorrelation criterion. In this method we minimise the expected value of the squared error E{e^2(n)}, which yields the equations
:sum_i=1^p a_i R(i-j) = R(j), for 1 <= j <= p,
where R is the autocorrelation of the signal x(n), defined as R(i) = E{x(n)x(n-i)}. These equations are called the normal equations or Yule-Walker equations. In matrix form they can equivalently be written as R*a = r, where the autocorrelation matrix R is a symmetric Toeplitz matrix with elements r_i,j = R(i-j), r is the autocorrelation vector with elements r_j = R(j), and a is the vector of the parameters a_i.
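As a rough illustration of the autocorrelation method, the following sketch estimates the parameters a_i from the normal equations and then forms the prediction error. It assumes NumPy; the biased autocorrelation estimate, the general-purpose matrix solver, and the names lpc_autocorrelation and prediction_error are illustrative choices, not from any particular library.

 import numpy as np

 def lpc_autocorrelation(x, p):
     """Estimate a_1..a_p by minimising E{e^2(n)}: form autocorrelation
     estimates R(0)..R(p) and solve the normal equations R*a = r."""
     x = np.asarray(x, dtype=float)
     N = len(x)
     # Biased autocorrelation estimates R(i) ~ (1/N) * sum_n x(n) x(n-i)
     R = np.array([np.dot(x[i:], x[:N - i]) / N for i in range(p + 1)])
     # Symmetric Toeplitz matrix with elements R(i-j), right-hand side r_j = R(j)
     R_mat = np.array([[R[abs(i - j)] for j in range(p)] for i in range(p)])
     r_vec = R[1:]
     return np.linalg.solve(R_mat, r_vec)

 def prediction_error(x, a):
     """Return the residual e(n) = x(n) - sum_i a_i x(n-i) for n >= p."""
     x = np.asarray(x, dtype=float)
     p = len(a)
     pred = np.array([np.dot(a, x[n - p:n][::-1]) for n in range(p, len(x))])
     return x[p:] - pred

For example, a = lpc_autocorrelation(x, 10) followed by e = prediction_error(x, a) gives a tenth-order predictor and its residual for a signal x.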
Optimisation of the parameters is a wide topic, and a large number of other approaches have been proposed. Still, the autocorrelation method is the most common, and it is used, for example, for speech coding in the GSM standard.
Solving the matrix equation R*a = r is a computationally relatively expensive process. The Gauss algorithm for matrix inversion is probably the oldest solution, but this approach does not exploit the symmetric Toeplitz structure of R. A faster algorithm is the Levinson recursion, proposed by N. Levinson in 1947, which recursively calculates the solution. Later, Delsarte et al. proposed an improvement to this algorithm called the split Levinson recursion, which requires about half the number of multiplications and divisions; it exploits a special symmetry of the parameter vectors on subsequent recursion levels.
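Below is a minimal sketch of the classic Levinson recursion, in the Levinson-Durbin form commonly used for these normal equations; it solves the Toeplitz system in O(p^2) operations instead of the O(p^3) of a general matrix solve. It is not the split Levinson variant; NumPy is assumed and the function name levinson_durbin is illustrative.

 import numpy as np

 def levinson_durbin(R, p):
     """Solve sum_i a_i R(i-j) = R(j) for j = 1..p, where R[0..p] are
     autocorrelation values. Returns the coefficients a_1..a_p and the
     final prediction error power."""
     R = np.asarray(R, dtype=float)
     a = np.zeros(p)
     err = R[0]
     for m in range(1, p + 1):
         # Reflection coefficient for order m
         k = (R[m] - np.dot(a[:m - 1], R[m - 1:0:-1])) / err
         # Order update: a_i <- a_i - k * a_{m-i} for i < m, and a_m = k
         prev = a[:m - 1].copy()
         a[:m - 1] = prev - k * prev[::-1]
         a[m - 1] = k
         err *= 1.0 - k * k
     return a, err

Given the same autocorrelation values as in the earlier sketch, levinson_durbin returns the same coefficients as the direct matrix solve, but its cost grows quadratically rather than cubically with the prediction order p.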