Adaptive filters are optimum filters whose transfer functions adapt to changing filtering requirements. Such adaptation is usually needed because of fluctuating signal and/or noise conditions. Adaptive filters have gain functions or pole/zero patterns that are periodically recomputed according to some algorithm and then readjusted to optimally process incoming signals while discriminating against noise. Unfortunately, both classical and optimum filters have fixed pole/zero positions and are nonadaptive. Their inability to adapt to changing conditions makes such nonadaptive filters unsuitable for many applications.


Reconsider Fig. 4.4.1 where the input r(t) and output c(t) equal

r(t) = s(t)+n(t) or R(jw) = S(jw)+N(jw)
c(t) = h(t)*r(t) or C(jw) = H(jw)R(jw)
Let d(t) equal the desired optimal output. Then define the output error e(t) to equal
e(t) = d(t)-c(t) or E(jw) = D(jw)-C(jw)

A wide variety of optimization criteria can be used to minimize output error as discussed in Chap. 4.3. One of the most universally accepted is the integral-squared error (ISE) criterion where (see Eq. 4.3.1b)

ISE = int |e(t)|^2 dt = int |d(t) - c(t)|^2 dt
The ideal filter is the one whose transfer function H(jw) minimizes the error index ISE. Using Parseval's theorem given by Eq. 1.6.15, Eq. 4.4.11 can be rewritten in the frequency domain as
ISE = (1/2 pi) int |E(jw)|^2 dw = (1/2 pi) int |D(jw) - C(jw)|^2 dw
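The time-domain and frequency-domain forms of the ISE are equal by Parseval's theorem, which can be checked numerically. The sketch below, using NumPy with illustrative signals not taken from the text, forms an error e(t) = d(t) - c(t) and compares the summed squared error in time against its discrete-Parseval counterpart (1/N) sum |E(k)|^2:

```python
import numpy as np

# Illustrative desired output d(t) and actual output c(t) (sampled, N points).
N = 1024
t = np.arange(N)
d = np.sin(2 * np.pi * 5 * t / N)                    # desired output d(t)
c = 0.9 * d + 0.05 * np.random.randn(N)              # actual filter output c(t)
e = d - c                                            # output error e(t)

# Time-domain ISE: sum of |e(t)|^2.
ise_time = np.sum(np.abs(e) ** 2)

# Frequency-domain ISE via the discrete Parseval relation:
# sum |e[n]|^2 = (1/N) sum |E[k]|^2 for the unnormalized FFT.
E = np.fft.fft(e)
ise_freq = np.sum(np.abs(E) ** 2) / N

print(np.isclose(ise_time, ise_freq))                # the two forms agree
```

The 1/N factor in the discrete sum plays the role of the 1/(2 pi) factor in the continuous-frequency integral.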
Since C(jw) = H(jw)R(jw) is a functional of H(jw), Eq. 4.4.12 is minimized by setting the derivative of the integral with respect to H(jw) equal to zero. Taking the expected value of the result and solving for H(jw) gives
H(jw) = E{D(jw)R*(jw)}/E{R(jw)R*(jw)}
= Desired input-output cross-correlation spectrum / Input autocorrelation spectrum
Eq. 4.4.15 is the transfer function for the general estimation filter. This ideal filter gives the best estimate (in the minimum ISE sense) of a desired signal d(t) from an input signal r(t) as the ratio of two power spectra. When r(t) is composed of a signal s(t) contaminated with additive noise n(t), Eq. 4.4.15 reduces to
H(jw) = E{S(jw)[S*(jw)+N*(jw)]}/E{|S(jw)+N(jw)|^2}
for d(t)=s(t). When the signal and noise are orthogonal, or equivalently, are uncorrelated and the noise has zero mean, then E{S(jw)N*(jw)} = 0 (see discussion for Eq. 4.4.26) and Eq. 4.4.24 reduces to
H(jw) = E{|S(jw)|^2}/E{|S(jw)|^2 + |N(jw)|^2}
This is the uncorrelated estimation or uncorrelated Wiener filter.
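The uncorrelated Wiener filter can be sketched numerically. In the expression H(jw) = E{|S|^2}/(E{|S|^2} + E{|N|^2}) the spectra are ensemble expectations; in the illustrative NumPy sketch below (all signals and names are assumptions, not from the text), single-realization periodograms stand in for those expectations, and the filter is applied in the frequency domain to r(t) = s(t) + n(t):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
t = np.arange(N)

# Illustrative two-tone signal s(t) and zero-mean white noise n(t).
s = np.sin(2 * np.pi * 20 * t / N) + 0.5 * np.sin(2 * np.pi * 45 * t / N)
n = rng.normal(0.0, 1.0, N)
r = s + n                                            # noisy input r(t)

# Periodograms stand in for the expected power spectra E{|S|^2}, E{|N|^2}.
S2 = np.abs(np.fft.fft(s)) ** 2
N2 = np.abs(np.fft.fft(n)) ** 2

# Uncorrelated Wiener filter: H = |S|^2 / (|S|^2 + |N|^2), real and 0 <= H <= 1.
H = S2 / (S2 + N2)

# Filter the input in the frequency domain: C = H * R.
c = np.real(np.fft.ifft(H * np.fft.fft(r)))

err_before = np.sum((s - r) ** 2)                    # ISE of the raw input
err_after = np.sum((s - c) ** 2)                     # ISE of the Wiener estimate
print(err_after < err_before)                        # the estimate is closer to s(t)
```

Note that H is near 1 at frequencies where the signal power dominates and near 0 where the noise power dominates, which is exactly the discriminating behavior the estimation filter is designed to provide.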

© C.S. Lindquist, Adaptive and Digital Signal Processing with Digital Filtering Applications, vol. 2, pp. 285-286, 288, Steward & Sons, 1989.