2.2 Optimum First Order Solution 

2.2.1 Basic Assumptions 

As noted earlier, equation (2.11) is indeterminate when the input variable is specified only by a sequence of discrete values, unless some assumption is made as to the behavior of the input variable between the discrete values. When only two discrete values (the current and one past value) are available at any one time, as is the case in virtually all first-order algorithms, the best possible assumption is that x varies linearly with time between the two discrete values. If x were significantly nonlinear over the interval, then clearly two values would be insufficient to specify the function. So, we can represent the input function over an interval Δt by 

    x(t) = x_p + (x_c - x_p)\,\frac{t}{\Delta t}, \qquad 0 \le t \le \Delta t

where the subscripts p and c denote “past” and “current” discrete values, corresponding to the initial and final values for that interval. The derivative of this x(t) during the interval is then just 
    \frac{dx}{dt} = \frac{x_c - x_p}{\Delta t}

so equation (2.11) can be rewritten 

    \tau_D\,\frac{dy}{dt} + y = \tau_N\,\frac{x_c - x_p}{\Delta t} + x_p + (x_c - x_p)\,\frac{t}{\Delta t}    (2.2.11)

Now, if τ_D and τ_N are known constants, and we are given the discrete values of x_p, y_p, and x_c for any given interval Δt, then equation (2.2.11) can be solved exactly for the current output value y_c. This is the basis for the optimum solution for constant τ presented in Section 2.2.3. 

If the coefficients τ_D and τ_N are variable, the situation is somewhat more complicated. We will consider only the case where these coefficients may be treated as functions of time, so that (2.11) remains linear. In this case the coefficients are specified for a given interval by their initial and final values (i.e., past and current). As with the input function x(t), this in itself is insufficient specification unless the coefficients can be assumed to vary linearly with time between the discrete instants. Therefore, we make the assumption that 

    \tau_N(t) = \tau_{NP} + (\tau_{NC} - \tau_{NP})\,\frac{t}{\Delta t}, \qquad \tau_D(t) = \tau_{DP} + (\tau_{DC} - \tau_{DP})\,\frac{t}{\Delta t}

On this basis, equation (2.2.11) can be written 

    \left[\tau_{DP} + \Delta\tau_D\,\frac{t}{\Delta t}\right]\frac{dy}{dt} + y = \left[\tau_{NP} + \Delta\tau_N\,\frac{t}{\Delta t}\right]\frac{x_c - x_p}{\Delta t} + x_p + (x_c - x_p)\,\frac{t}{\Delta t}    (2.2.12)

where 

    \Delta\tau_N = \tau_{NC} - \tau_{NP}, \qquad \Delta\tau_D = \tau_{DC} - \tau_{DP}

Knowing the initial and final values of x, τ_N, and τ_D, and the initial value of y for a given Δt, equation (2.2.12) can be solved exactly for the final (current) value of y. Therefore, this equation is the basis for the optimum solution to be presented in Section 2.2.4 for variable τ. 


2.2.2 General First Order Recurrence Formulas 

Before deriving the actual solutions of equations (2.2.11) and (2.2.12) we will consider the form these solutions must take. We require a formula that can be applied recursively to compute the current value of y based on the current value of x and the past values of x and y. We will find that, in general, the solution can be expressed as a linear combination of the three given values, i.e., 

    y_c = f_1\,y_p + f_2\,x_p + f_3\,x_c    (2.2.21)

where f_1, f_2, and f_3 are all functions of τ_D, τ_N, and Δt. However, these three functions are obviously not independent, because if y_p = x_p = x_c = m then clearly y_c = m, so we have 

    m = m\,f_1 + m\,f_2 + m\,f_3

which implies f_1 + f_2 + f_3 = 1. Taking advantage of this fact, we can eliminate f_1 from equation (2.2.21) to give 

    y_c = (1 - f_2 - f_3)\,y_p + f_2\,x_p + f_3\,x_c

which can be written 

    y_c = y_p + A\,(x_p - y_p) + B\,(x_c - x_p)    (2.2.22)

where the coefficients A and B are defined as 

    A = f_2 + f_3, \qquad B = f_3    (2.2.23)

Any valid lead/lag algorithm that computes y_c as a linear function of y_p, x_p, and x_c must be expressible in the form of equation (2.2.22), regardless of the assumptions made concerning the interpolation of the input variable. Therefore, we define equation (2.2.22) as the standard recurrence formula for digital lead/lag simulations. In subsequent sections we will frequently identify algorithms simply by specifying their standard recurrence coefficients A and B. 
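As a small illustration (the function name is ours, not from the source), the standard recurrence formula translates directly into code, and the constraint f_1 + f_2 + f_3 = 1 appears as the fact that a constant signal passes through unchanged for any choice of A and B:

```python
def lead_lag_step(y_p, x_p, x_c, A, B):
    """One update of the standard recurrence formula (2.2.22):
    y_c = y_p + A*(x_p - y_p) + B*(x_c - x_p)."""
    return y_p + A * (x_p - y_p) + B * (x_c - x_p)

# A constant signal (y_p = x_p = x_c = m) is reproduced exactly for ANY
# coefficients A and B, which is the f1 + f2 + f3 = 1 property in disguise.
print(lead_lag_step(5.0, 5.0, 5.0, A=0.3, B=0.7))  # -> 5.0
```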


2.2.3 Optimum Recurrence Coefficients for Constant τ 

When τ_D and τ_N are constant, the optimum recurrence coefficients can be determined by solving equation (2.2.11), which can be rewritten as follows 

    \frac{dy}{dt} + \frac{y}{\tau_D} = \frac{1}{\tau_D}\left[x_p + \frac{\tau_N}{\Delta t}(x_c - x_p) + \frac{t}{\Delta t}(x_c - x_p)\right]

Recall that the solution of any equation of the form 

    \frac{dy}{dt} + F(t)\,y = G(t)    (2.2.31)

where F(t) and G(t) are functions of t, is given by 

    y = e^{-\int F\,dt}\left[\int G\,e^{\int F\,dt}\,dt + K\right]    (2.2.32)

where K is a constant of integration. Making the substitutions 

    F(t) = \frac{1}{\tau_D}, \qquad G(t) = \frac{1}{\tau_D}\left[x_p + \frac{\tau_N}{\Delta t}(x_c - x_p) + \frac{t}{\Delta t}(x_c - x_p)\right]

we have 
    y\,e^{t/\tau_D} = \int \frac{1}{\tau_D}\left[x_p + \frac{\tau_N}{\Delta t}(x_c - x_p) + \frac{t}{\Delta t}(x_c - x_p)\right] e^{t/\tau_D}\,dt + K

Performing the integration and dividing through by e^{t/τ_D} gives 

    y = x_p + \frac{\tau_N - \tau_D}{\Delta t}(x_c - x_p) + \frac{t}{\Delta t}(x_c - x_p) + K\,e^{-t/\tau_D}

To determine K we apply the initial condition y = y_{p} at t = 0, which gives 

    K = y_p - x_p - \frac{\tau_N - \tau_D}{\Delta t}(x_c - x_p)

Inserting this back into the preceding equation, recalling the general form of equation (2.2.21), and noting that y = y_c at t = Δt, we arrive (after some algebraic manipulation) at the result 

    y_c = f_1\,y_p + f_2\,x_p + f_3\,x_c    (2.2.33)

where 
    f_1 = e^{-\Delta t/\tau_D}

    f_2 = -e^{-\Delta t/\tau_D} - \frac{\tau_N - \tau_D}{\Delta t}\left(1 - e^{-\Delta t/\tau_D}\right)

    f_3 = 1 + \frac{\tau_N - \tau_D}{\Delta t}\left(1 - e^{-\Delta t/\tau_D}\right)

Note that, as required, the sum of f_1, f_2, and f_3 is identically 1. Substituting the expressions for f_2 and f_3 into equations (2.2.23) gives the optimum recurrence coefficients for constant τ: 

    A = 1 - e^{-\Delta t/\tau_D}, \qquad B = 1 + \frac{\tau_N - \tau_D}{\Delta t}\,A    (2.2.34)
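As a numerical sketch (names are illustrative), the constant-τ coefficients can be checked on a case with a known closed-form answer: for a pure lag (τ_N = 0) with the input held constant at 1 and y starting at 0, the exact response at t = Δt is 1 − e^(−Δt/τ_D), and the recurrence reproduces it in a single step:

```python
import math

def optimum_coeffs(dt, tau_N, tau_D):
    """Optimum constant-tau recurrence coefficients, eq. (2.2.34)."""
    A = -math.expm1(-dt / tau_D)        # 1 - exp(-dt/tau_D), computed stably
    B = 1.0 + (tau_N - tau_D) * A / dt
    return A, B

dt, tau_D = 0.5, 2.0
A, B = optimum_coeffs(dt, 0.0, tau_D)            # pure lag: tau_N = 0
y_c = 0.0 + A * (1.0 - 0.0) + B * (1.0 - 1.0)    # standard recurrence (2.2.22)
exact = 1.0 - math.exp(-dt / tau_D)              # exact lag response at t = dt
```

Because the input really is linear (constant, in fact) over the interval, the recurrence is exact here for any Δt, not just small ones.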


2.2.4 Optimum Recurrence Coefficients for Variable τ 

When τ_N and τ_D are variable, we base the optimum recurrence coefficients on equation (2.2.12). For convenience we define the following parameters 

    a = \frac{\Delta\tau_D}{\Delta t}, \qquad b = \tau_{DP}, \qquad c = \frac{(x_c - x_p)(\Delta t + \Delta\tau_N)}{\Delta t^2}, \qquad d = x_p + \frac{\tau_{NP}}{\Delta t}(x_c - x_p)

In these terms equation (2.2.12) can be rewritten 

    (a t + b)\,\frac{dy}{dt} + y = c\,t + d    (2.2.41)

Dividing through by (at + b) puts this in the form of equation (2.2.31), so the general solution is given by equation (2.2.32) where 

    F(t) = \frac{1}{a t + b}, \qquad G(t) = \frac{c\,t + d}{a t + b}, \qquad e^{\int F\,dt} = (a t + b)^{1/a}

Therefore we have 

    y\,(a t + b)^{1/a} = \int (c\,t + d)\,(a t + b)^{1/a - 1}\,dt + K

Performing the integration and dividing through by (at + b)^{1/a} gives 

    y = \frac{c\,(a t + b)}{a\,(1 + a)} + d - \frac{c\,b}{a} + K\,(a t + b)^{-1/a}    (2.2.42)

The constant of integration K is determined by the initial condition y = y_{p} at t = 0, which gives 

    K = \left[y_p - d + \frac{c\,b}{1 + a}\right] b^{1/a}

We can now compute y_c by evaluating equation (2.2.42) at t = Δt. If we then replace a, b, c, and d with their respective definitions, we arrive at the result 

    y_c = f_1\,y_p + f_2\,x_p + f_3\,x_c    (2.2.43)

where 
    f_1 = \left(\frac{\tau_{DP}}{\tau_{DC}}\right)^{\Delta t/\Delta\tau_D}

    f_2 = (1 - f_1)\left(1 - \frac{\tau_{NP}}{\Delta t}\right) - \frac{(\Delta t + \Delta\tau_N)\left[\Delta t - (1 - f_1)\,\tau_{DP}\right]}{\Delta t\,(\Delta t + \Delta\tau_D)}

    f_3 = (1 - f_1)\,\frac{\tau_{NP}}{\Delta t} + \frac{(\Delta t + \Delta\tau_N)\left[\Delta t - (1 - f_1)\,\tau_{DP}\right]}{\Delta t\,(\Delta t + \Delta\tau_D)}

Note that, as required, the sum of f_1, f_2, and f_3 is identically 1. Substituting the expressions for f_2 and f_3 into equations (2.2.23) gives the optimum recurrence coefficients for variable τ: 

    A = 1 - \left(\frac{\tau_{DP}}{\tau_{DC}}\right)^{\Delta t/\Delta\tau_D}    (2.2.44)

    B = \frac{\tau_{NP}}{\Delta t}\,A + \frac{(\Delta t + \Delta\tau_N)\left(\Delta t - \tau_{DP}\,A\right)}{\Delta t\,(\Delta t + \Delta\tau_D)}    (2.2.45)
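The variable-τ coefficients can be spot-checked against a brute-force integration of (2.2.12). The sketch below (function names and test values are ours) advances the same interval two ways, once with the one-step recurrence and once with a fine-grained Euler integration, and the two must agree to within the integration error:

```python
def var_tau_coeffs(dt, tNp, tNc, tDp, tDc):
    """Coefficients per (2.2.44)-(2.2.45); assumes tDc != tDp and that
    tau_D keeps one sign over the interval."""
    dN, dD = tNc - tNp, tDc - tDp
    A = 1.0 - (tDp / tDc) ** (dt / dD)
    B = tNp * A / dt + (dt + dN) * (dt - tDp * A) / (dt * (dt + dD))
    return A, B

def euler_reference(dt, tNp, tNc, tDp, tDc, xp, xc, yp, n=200_000):
    """Explicit-Euler integration of (2.2.12) with linearly interpolated
    x, tau_N, and tau_D, as assumed in Section 2.2.1."""
    h, xdot, y = dt / n, (xc - xp) / dt, yp
    for i in range(n):
        t = i * h
        tauN = tNp + (tNc - tNp) * t / dt
        tauD = tDp + (tDc - tDp) * t / dt
        x = xp + xdot * t
        y += h * (tauN * xdot + x - y) / tauD
    return y

dt, xp, xc, yp = 0.1, 0.0, 1.0, 0.0
A, B = var_tau_coeffs(dt, 0.5, 0.6, 1.0, 1.2)
y_rec = yp + A * (xp - yp) + B * (xc - xp)   # one-step recurrence (2.2.22)
y_ref = euler_reference(dt, 0.5, 0.6, 1.0, 1.2, xp, xc, yp)
```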


2.2.5 Discussion 

When referring to the recurrence formulas (2.2.33) and (2.2.43) the term “optimum” is justified in the following sense: The total solution y(t) equals the solution to the homogeneous equation plus a particular solution for the given forcing function. The only ambiguity is in the particular solution, which depends on how we choose to interpolate the input function x(t). Note that the forcing function has no effect on the homogeneous solution, and that the particular solution is independent of the actual y(t) values. It follows that the coefficients of the “y terms” in the recurrence relation must be exactly as given in (2.2.33) and (2.2.43), regardless of how the x(t) function is interpolated. The only ambiguity in the recurrence formula is in the “x-term” coefficients (f_2 and f_3), and even those have a fully determined sum. In view of this, the term “optimum” is used throughout this document to denote a recurrence formula that has the exact “y-term” coefficient(s), and for which the “x-term” coefficients sum to the correct value. 

With regard to the variable-τ solution, we would expect to find that it has as a special case the constant-τ solution, and in fact if Δτ_N and Δτ_D are zero then clearly equation (2.2.45) is equivalent to the constant-τ expression for B given by equation (2.2.34). However, it may not be self-evident that equation (2.2.44) reduces to the A of equation (2.2.34) for constant τ. To show that it does, we can rewrite equation (2.2.44) as follows 

    A = 1 - \exp\left[\frac{\Delta t}{\Delta\tau_D}\,\ln\left(\frac{\tau_{DP}}{\tau_{DC}}\right)\right]    (2.2.51)

If we now recall the series expansion of the natural log 

    \ln z = (z - 1) - \frac{(z - 1)^2}{2} + \frac{(z - 1)^3}{3} - \cdots

we see that if the ratio τ_DP/τ_DC is close to 1 we can approximate the natural log in equation (2.2.51) very accurately by just the first term of the expansion, i.e., 

    \ln\left(\frac{\tau_{DP}}{\tau_{DC}}\right) \approx \frac{\tau_{DP}}{\tau_{DC}} - 1 = -\frac{\Delta\tau_D}{\tau_{DC}}

in which case equation (2.2.51) becomes 

    A \approx 1 - e^{-\Delta t/\tau_{DC}}

This makes it clear that as Δτ_D goes to zero, and τ_DC approaches τ_DP, the variable-τ expression for A does in fact reduce to the constant-τ case given by equation (2.2.34). 
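This limit is easy to observe numerically; in the sketch below (names are ours), τ_DC is marched toward τ_DP and the variable-τ A of (2.2.44) converges to the constant-τ A of (2.2.34):

```python
import math

def A_variable(dt, tDp, tDc):
    """Eq. (2.2.44); assumes tDc != tDp."""
    return 1.0 - (tDp / tDc) ** (dt / (tDc - tDp))

def A_constant(dt, tD):
    """Eq. (2.2.34), computed stably via expm1."""
    return -math.expm1(-dt / tD)

dt, tDp = 0.5, 2.0
gaps = [abs(A_variable(dt, tDp, tDp + eps) - A_constant(dt, tDp))
        for eps in (1e-1, 1e-2, 1e-3)]
# The gap shrinks steadily as tau_DC -> tau_DP.
```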

We now demonstrate that the constant-τ response approaches the variable-τ response if a sufficiently small time interval Δt is used. First, notice that as Δt goes to zero the ratio of τ_DP to τ_DC can be made arbitrarily close to 1 for any finite rate of change of τ_D. Also, as we have already seen, equation (2.2.44) is equivalent to A of equation (2.2.34) in the limit as τ_DP/τ_DC goes to 1. Therefore, as Δt goes to zero the A coefficient for both constant τ and variable τ is given by equation (2.2.34). Furthermore, if we recall the power series expansion 

    e^{-z} = 1 - z + \frac{z^2}{2!} - \frac{z^3}{3!} + \cdots

we see that as Δt goes to zero, A of equation (2.2.34) becomes simply A = Δt/τ_D. Substituting this into the expressions for B in equations (2.2.34) and (2.2.45), along with the stipulation that τ_NC ≈ τ_NP for a sufficiently small Δt, we see that both solutions give B = τ_N/τ_D. Thus, for sufficiently small Δt, the constant-τ and variable-τ solutions both reduce to the form 

    y_c = y_p + \frac{\Delta t}{\tau_D}(x_p - y_p) + \frac{\tau_N}{\tau_D}(x_c - x_p)

Notice that if Δt is actually zero, but x_c − x_p does not vanish, then this can be written as 

    y_c = y_p + \frac{\tau_N}{\tau_D}(x_c - x_p)

which is the expected response to a step input, viz., an instantaneous change in x yields an instantaneous change in y with a magnitude amplified by the factor τ_N/τ_D. 
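A quick numerical confirmation (values are ours): with the constant-τ coefficients of (2.2.34) and a very small Δt, B collapses to τ_N/τ_D, so a unit jump in x produces a jump of τ_N/τ_D in y, as the step-response argument above predicts:

```python
import math

tau_N, tau_D, dt = 2.0, 4.0, 1e-8
A = -math.expm1(-dt / tau_D)                 # eq. (2.2.34); A -> dt/tau_D
B = 1.0 + (tau_N - tau_D) * A / dt           # B -> tau_N/tau_D = 0.5
y_c = 0.0 + A * (0.0 - 0.0) + B * (1.0 - 0.0)  # unit step in x via (2.2.22)
```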

Examination of equation (2.2.44) also shows that the variable-τ solution requires that τ_DP and τ_DC have the same sign, since if they didn’t, the ratio τ_DP/τ_DC would be negative and the result of the exponentiation would, in general, be complex. Another way of stating this restriction is that τ_D can never pass through zero, which effectively prohibits a change of sign, since a continuous function cannot change sign without passing through zero. In one sense, the “reason” for this restriction is that we divided by τ_D when we wrote equation (2.2.41). More fundamentally, equation (2.11) shows that when τ_D is zero the differential term in y vanishes and the equation is singular. 

It may appear that equation (2.2.45) also exhibits a singularity, specifically when Δτ_D/Δt equals −1. However, as long as the absolute value of Δτ_D/τ_DP is less than 1 (which corresponds to the requirement that τ_D never pass through zero during the interval) it can be shown that B remains analytic at Δτ_D/Δt = −1, and is given there by 

    B = \frac{\tau_{NP}}{\tau_{DP}} + \left(1 + \frac{\Delta\tau_N}{\Delta t}\right)\left[1 + \frac{\tau_{DC}}{\Delta t}\,\ln\left(\frac{\tau_{DC}}{\tau_{DP}}\right)\right]
