3.3 Modified Methods for Second-Order Interpolation 

3.3.1 Cascaded First-Order Simulation 

In Section 3.1 we briefly considered the use of two first-order simulations in series as a means of representing second-order response. This section presents a more detailed analysis of such a configuration, as illustrated in the figure below. 


Figure 10 

By equation (2.2.21) we have 

_{} 

where 
_{} 

The functions f_{1}, f_{2}, f_{3} can be any set of first-order coefficient functions as defined in Section 2.2 (e.g., equation 2.2.33). Our objective is to derive a single recurrence formula in terms of y_{n} and x_{n} only, so we must eliminate the intermediate variable z_{n}. Substituting the expression for z_{i} from equation (3.3.11) into equation (3.3.12), we have 

_{} 

The formula still has a term involving z_{i−1}, since it’s impossible to eliminate both z_{i} and z_{i−1} from equations (3.3.11) and (3.3.12) simultaneously. Therefore, we consider the corresponding relations from the previous iteration 

_{} 

Eliminating z_{i−2} from these equations and solving for z_{i−1} gives 

_{} 

Substituting this expression for z_{i−1} into equation (3.3.13) gives the desired recurrence formula for two first-order lead/lag functions in series: 

_{} 

where 
_{} 

Recalling that the sum of the F functions is 1, as is the sum of the G functions, it is easily verified that the sum of c_{1} through c_{5} from the preceding equations is also 1, as required. 
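As a concrete check on this derivation, the sketch below simulates two generic first-order recurrences in series and compares the result against a single combined second-order recurrence obtained by the same elimination. The specific recurrence forms and coefficient values are illustrative assumptions (each first-order triple sums to 1, as required); the combined coefficients then also sum to 1.

```python
# Sketch (illustrative notation, not the document's exact equations): cascade
#   z_i = F1*z_{i-1} + F2*x_i + F3*x_{i-1}      (first stage)
#   y_i = G1*y_{i-1} + G2*z_i + G3*z_{i-1}      (second stage)
# Eliminating z gives the combined second-order recurrence
#   y_i = (F1+G1)*y_{i-1} - F1*G1*y_{i-2}
#         + F2*G2*x_i + (F2*G3 + F3*G2)*x_{i-1} + F3*G3*x_{i-2}
# The coefficient values below are arbitrary, chosen only so that each
# set of first-order coefficients sums to 1.
F1, F2, F3 = 0.6, 0.3, 0.1
G1, G2, G3 = 0.5, 0.4, 0.1

c = [F1 + G1, -F1*G1, F2*G2, F2*G3 + F3*G2, F3*G3]
assert abs(sum(c) - 1.0) < 1e-12       # combined coefficients also sum to 1

x = [0.0, 1.0, 2.0, 3.5, 2.0, 1.0, 0.5, 0.25]

z, y_cascade = [x[0]], [x[0]]          # arbitrary matching initial values
for i in range(1, len(x)):
    z.append(F1*z[i-1] + F2*x[i] + F3*x[i-1])
    y_cascade.append(G1*y_cascade[i-1] + G2*z[i] + G3*z[i-1])

y_combined = y_cascade[:2]             # seed the two-step recurrence
for i in range(2, len(x)):
    y_combined.append(c[0]*y_combined[i-1] + c[1]*y_combined[i-2]
                      + c[2]*x[i] + c[3]*x[i-1] + c[4]*x[i-2])

# the cascaded pair and the combined recurrence agree to rounding error
assert all(abs(a - b) < 1e-12 for a, b in zip(y_cascade, y_combined))
```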

Assuming constant τ, the optimum equations for the F and G functions are given beneath equation (2.2.33), and on this basis (comparing the expressions for τ_{1} and τ_{4} developed in Section 3.1 with the definitions of r_{1} and r_{2} given beneath (3.2.34), and noting that τ_{2}r_{2} = τ_{4}r_{1} = 1) we can write 

_{} 

Therefore, the first two coefficients of equation (3.3.14) are given by 

_{} 

These are just –g and d respectively, so if we convert equation (3.3.14) to the form of equation (3.2.23) we have 

_{} 

Also, since c_{3} + c_{4} + c_{5} = 1 – c_{1} – c_{2}, we have 

_{} 

To determine L_{4}, recall that L_{4} = c_{4} + 2c_{5}. Substituting the expressions from equations (3.3.15) we have 

_{} 

Substituting the appropriate expressions for the F and G functions from the definitions beneath equation (2.2.33) gives 

_{} 

Since r_{1} = 1/τ_{4} and r_{2} = 1/τ_{2}, we can simplify this equation to arrive at 

_{} 

Furthermore, since τ_{1} + τ_{3} = b_{2} and τ_{2} + τ_{4} = b_{1} (as shown in Section 3.1), this can be written as 

_{} 

which is identical to equation (3.2.38). Thus we’ve shown that two first-order simulations in series are functionally equivalent to the optimum second-order recurrence formula when Δ²x_{i} is zero (i.e., when the x_{n} samples are linear with time), assuming that the optimum recurrence coefficients are used in the first-order algorithms. It follows that the recurrence formula for two optimum first-order algorithms in series can be written in the “standard form” of equation (3.2.52), where the coefficients A, B, and C are just as given by equations (3.2.53). However, the D coefficient in this case is given by F_{3}G_{3}. Substituting the optimum first-order coefficient formulas from beneath equation (2.2.33) for F_{3} and G_{3}, we have 

_{} 

which can be written 

_{} 
(3.3.16) 

Since this is not equivalent to either (3.2.54) or (3.2.55), it’s clear that two optimum first-order functions in series do not reproduce the exact second-order response corresponding to either point-to-point or second-order interpolation of the input x(t). In fact, the D coefficient given by (3.3.16) can be complex, which is never the case with either point-to-point or second-order interpolation. 

The conditions under which D is non-real are clarified by rewriting equation (3.3.16) in the form 

_{} 

where d_{1} = b_{1}^{2} – 4a_{1} and d_{2} = b_{2}^{2} – 4a_{2}. This shows that D is real if and only if the product of d_{1} and d_{2} is a positive real number, i.e., if d_{1} and d_{2} are both real and of the same sign, or are complex conjugates (whose product is necessarily positive). It is interesting that if both d_{1} and d_{2} are real and negative, meaning that the combined function is underdamped in both directions, then the discrete-time response of two conjugate first-order functions in series can still be reproduced using the second-order recurrence formula (3.2.52) with purely real coefficients, even though all the τ in the first-order functions are complex. 
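This can be illustrated numerically. The sketch below assumes the homogeneous recurrence coefficients have the exponential forms implied by the optimum solution (written here as gamma and delta); for an underdamped conjugate pair of roots, both come out purely real even though each first-order factor is complex:

```python
import cmath

# Sketch: for an underdamped pair of conjugate roots r1,2 = -sigma ± i*omega,
# the homogeneous recurrence coefficients implied by the optimum solution,
#   gamma = e^{(r1+r2)*dt}   and   delta = e^{r1*dt} + e^{r2*dt},
# are purely real even though each first-order factor is complex.
# (sigma, omega, and dt are arbitrary illustrative values.)
sigma, omega, dt = 0.5, 3.0, 0.1
r1 = complex(-sigma,  omega)
r2 = complex(-sigma, -omega)

gamma = cmath.exp((r1 + r2)*dt)              # = e^{-2*sigma*dt}, real
delta = cmath.exp(r1*dt) + cmath.exp(r2*dt)  # = 2*e^{-sigma*dt}*cos(omega*dt)

assert abs(gamma.imag) < 1e-12 and abs(delta.imag) < 1e-12
```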

Also, since equation (3.3.16) is not invariant under an exchange of τ_{1} and τ_{4}, or an exchange of τ_{1} and τ_{3}, it follows that the two sets of cascaded functions shown in Figure 11 are not functionally equivalent when implemented in discrete time, even though they are equivalent in the continuous domain. 



In the preceding discussion it was assumed that the individual first-order algorithms used the optimum recurrence coefficients (for constant τ), but equation (3.3.14) is applicable to the cascading of any first-order recurrence algorithms that can be expressed in the form of equation (2.2.21). 


3.3.2 Finite Difference Method 

The most direct approach to simulating a differential equation such as 

_{} 

is by substitution of simple finite difference ratios for the differentials. The derivative of y(t) at t_{i−1} is approximately equal to the average of the “slopes” on the two neighboring intervals 
_{} 

and we can approximate the second derivative of y(t) at t_{i−1} by the change in “slope” per interval 

_{} 

Similarly for the function x(t) we have the approximations 

_{} 

_{} 

Substituting from equations (3.3.22) through (3.3.25) into equation (3.3.21) for the time t_{i−1}, and solving for y_{i}, gives the recurrence formula 

_{} 

where 
_{} 

The relation between this approximate solution and the optimum solution derived in Section 3.2 is clarified by comparing equations (3.3.24) and (3.3.25) with the coefficients of equation (3.2.31). This comparison shows that equations (3.3.24) and (3.3.25) are optimum if (but not only if) we assume the function x(t) is the second-degree polynomial through x_{i−2}, x_{i−1}, and x_{i}, just as was assumed in the derivation of the optimum solution. Similarly, equations (3.3.22) and (3.3.23) are optimum if we assume the function y(t) is the second-degree polynomial through y_{i−2}, y_{i−1}, and y_{i}. However, the simultaneous imposition of both these assumptions is inconsistent with equation (2.2.34), since it was demonstrated in Section 3.2 that if x(t) is a second-degree polynomial then y(t) is not. Hence, the finite difference method is intrinsically inexact, except in the limit as Δt goes to zero. 
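The exactness of the difference ratios for second-degree polynomials can be checked directly. A minimal sketch, using an arbitrary quadratic and grid spacing:

```python
# Sketch: the difference ratios described above,
#   y'(t_{i-1})  ≈ (y_i - y_{i-2}) / (2*dt)             (average of slopes)
#   y''(t_{i-1}) ≈ (y_i - 2*y_{i-1} + y_{i-2}) / dt**2  (change in slope)
# are exact for any second-degree polynomial.
# Arbitrary example: y(t) = 3t^2 + 2t + 1, so y'(t) = 6t + 2 and y''(t) = 6.
dt = 0.5
y = lambda t: 3*t**2 + 2*t + 1
t_prev2, t_prev1, t_now = 0.0, dt, 2*dt        # t_{i-2}, t_{i-1}, t_i

slope = (y(t_now) - y(t_prev2)) / (2*dt)
curv  = (y(t_now) - 2*y(t_prev1) + y(t_prev2)) / dt**2

assert abs(slope - (6*t_prev1 + 2)) < 1e-12    # matches y'(t_{i-1}) exactly
assert abs(curv - 6.0) < 1e-12                 # matches y''(t_{i-1}) exactly
```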


3.3.3 Bilinear Substitution Algorithm 

Another method for deriving a recurrence formula to simulate a given continuous response is the bilinear substitution algorithm (also known as Tustin’s algorithm). This method was mentioned briefly in Section 2.3.2 as it applies to first-order transfer functions, where it was shown to be equivalent to the substitution of simple finite difference ratios. When applied to other transfer functions, however, the bilinear substitution algorithm is not generally equivalent to the method of finite differences. In this section we present a more detailed examination of the bilinear substitution method and its underlying assumptions. 

To apply the bilinear substitution algorithm, the basic governing equation is written using the differential operator D. For the general second-order lead/lag equation we have 

_{} 

Then according to the algorithm we make the substitution 

_{} (3.3.32) 

where the symbol z^{−1} represents the “past value” operator (also known as the “time delay” operator). In general, z^{−m} x_{n} = x_{n−m}. Substituting equation (3.3.32) into equation (3.3.31) gives 

_{} 

Multiplying through by Δt^{2} (1 + z^{−1})^{2} we have 

_{} 

Combining terms by powers of z gives 

_{} 

Carrying out the past value operations and solving for y_{i} gives the general second-order recurrence formula based on bilinear substitution: 

_{} 

where 
_{} 

Comparing these coefficients with those of equation (3.3.26) shows that the methods of finite differences and bilinear substitution are identical to first order in Δt, but differ in terms of order Δt^{2}. (Of course, both methods converge on the exact solution in the limit as Δt goes to zero.) 

The use of a “recipe” such as bilinear substitution to derive recurrence formulas is convenient, but it tends to obscure the basic assumptions implicit in the method. To illuminate these assumptions, we now give another, more explicit, derivation of equation (3.3.34). To derive a recurrence formula for simulating equation (3.3.21), write the equation in specific form for three consecutive instants. (In general, to derive a recurrence for a differential equation of order N, write the specific equation for N+1 consecutive instants.) Using dot notation for time derivatives we have 

_{} 

We then sum each pair of consecutive equations to give 

_{} 

_{} 

These two equations are exact, but we now introduce an approximation by assuming that the change in any variable during any interval Δt is given by trapezoidal integration of its derivative during that interval. In the present example we will need the following “integrations”: 

_{} 

for all n. Rearranging, we have 

_{} 

Substituting these expressions into equations (3.3.35) and (3.3.36) gives 

_{} 

and 
_{} 

Adding these two equations gives the single equation 

_{} 

Substituting from equations (3.3.38) (taking the differences of the first-derivative relations at successive indices to give expressions for the differences in derivatives at the times t_{i} and t_{i−2}), we can eliminate the “dotted” terms. Multiplying through by Δt^{2}, the result is identical to equation (3.3.33), which leads to the recurrence formula (3.3.34) derived above using the bilinear substitution recipe. 

The integrations expressed by equations (3.3.37) treat each of the functions _{}, _{}, _{}, and _{} as linear during each interval Δt. The assumption of linearity for _{} is sufficient to fully specify the response, so clearly the complete set of assumptions constitutes an overspecification. Also, since two consecutive derivatives of a given function cannot actually both be linear functions of time (any more than an object can have constant velocity and constant acceleration simultaneously), it's clear that bilinear substitution gives (as does the method of finite differences) an intrinsically inexact recurrence formula. 

Perhaps for historical reasons, bilinear substitution is in accord with the techniques of analog simulation, where differential equations are commonly treated in integral form. For example, if we integrate equation (3.3.21) twice with respect to time, we have 

_{} 

Naturally, direct application of trapezoidal integration to this equation leads to exactly the same recurrence formula as is given by bilinear substitution. 
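For the first-order case this equivalence is easy to verify numerically. The sketch below steps the lag τẏ + y = x both by the bilinear recurrence (which works out to y_i = ((2τ − Δt)y_{i−1} + Δt(x_i + x_{i−1}))/(2τ + Δt)) and by trapezoidal integration of ẏ = (x − y)/τ; the parameter and input values are arbitrary illustrations.

```python
# Sketch: for the first-order lag tau*y' + y = x, bilinear substitution
# D -> (2/dt)*(1 - z^-1)/(1 + z^-1) gives the recurrence
#   y_i = ((2*tau - dt)*y_{i-1} + dt*(x_i + x_{i-1})) / (2*tau + dt),
# and trapezoidal integration of y' = (x - y)/tau reproduces it exactly.
# (tau, dt, and the input samples are arbitrary illustrative values.)
tau, dt = 2.0, 0.1
x = [0.0, 1.0, 1.0, 1.0, 1.0, 1.0]

y_bil = [0.0]
for i in range(1, len(x)):
    y_bil.append(((2*tau - dt)*y_bil[-1] + dt*(x[i] + x[i-1])) / (2*tau + dt))

y_trap = [0.0]
for i in range(1, len(x)):
    ydot_prev = (x[i-1] - y_trap[-1]) / tau
    # implicit trapezoidal step: y_i = y_{i-1} + (dt/2)*(ydot_i + ydot_prev),
    # with ydot_i = (x_i - y_i)/tau, solved for y_i
    y_i = (y_trap[-1] + (dt/2)*ydot_prev + (dt/(2*tau))*x[i]) / (1 + dt/(2*tau))
    y_trap.append(y_i)

# the two stepping schemes agree to rounding error
assert all(abs(a - b) < 1e-12 for a, b in zip(y_bil, y_trap))
```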


3.3.4 Simplifications Based on Approximations of e^{z} 

In this section we return to the optimum recurrence formula presented in Section 3.2.3 to show how it may be simplified by substituting approximations for e^{z} into the coefficient expressions. First, substituting the lowest-order Taylor series approximation of e^{z} 

_{} 

into the equation for g in Table 2 gives 

_{} 

Using the subscript “a” to denote approximate values, this equation can be written 

_{} 

By the equations for r_{1} and r_{2} given beneath (3.2.34) we have 

_{} 

so equation (3.3.42) becomes 

_{} 

Similarly, d_{a} can be determined by using equation (3.3.41) to approximate the exponentials in the equation for d in Table 2, which gives 

_{} 

Inserting these approximate expressions for g and d along with the exact expressions for a and b into the coefficients of equation (3.2.37), we arrive at the general recurrence formula 

_{} 

where 
_{} 

Since this recurrence formula is based on the exponential approximation given by equation (3.3.41), it will give acceptable results only if the exponents r_{1}Δt and r_{2}Δt are very much smaller than 1, which means that the execution frequency must be very high. 

Rather than proceeding to higher-order Taylor series approximations, we will turn to a more efficient approximation of the exponential 

_{} 

Substituting this into equations for g and d in Table 2 gives the approximate parameters 

_{} 

These are exactly the same approximations as were derived using bilinear substitution in Section 3.3.3. The homogeneous (i.e., unforced) response is fully determined by these two coefficients. In general, the substitution of (3.3.43) into the analytical expression for the exact Nth-order homogeneous response leads to the same recurrence coefficients (for the output variable terms) as does bilinear substitution. (Recall from Section 2.3.2 that the total solutions match in the case of first-order response.) This clarifies the correspondence between bilinear substitution and the optimum solution, and provides an alternative derivation without resort to trapezoidal integration. (The coefficients for the input variable terms can be derived similarly by substituting (3.3.43) into the “homogeneous form” of x(t), as if it were the output of the transfer function, and then applying the appropriate scale factor.) 
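The claim about the output-side coefficients can be checked numerically. The sketch below assumes the lead/lag denominator has the form a₁D² + b₁D + 1 (so the roots satisfy a₁r² + b₁r + 1 = 0), that the exact homogeneous coefficients are γ = e^{(r₁+r₂)Δt} and δ = e^{r₁Δt} + e^{r₂Δt}, and that (3.3.43) is the bilinear form e^{x} ≈ (2 + x)/(2 − x); the parameter values are arbitrary illustrations.

```python
import math

# Sketch: substituting e^x ≈ (2 + x)/(2 - x) (the form taken here for
# (3.3.43)) into the exact homogeneous coefficients
#   gamma = e^{(r1+r2)*dt}  (applied to each exponential factor separately)
#   delta = e^{r1*dt} + e^{r2*dt}
# reproduces the output-side coefficients of bilinear substitution applied
# to a1*y'' + b1*y' + y = 0 (assumed denominator form; values illustrative).
a1, b1, dt = 1.0, 3.0, 0.2
disc = math.sqrt(b1**2 - 4*a1)
r1, r2 = (-b1 + disc)/(2*a1), (-b1 - disc)/(2*a1)  # roots of a1*r^2+b1*r+1

pade = lambda u: (2 + u)/(2 - u)
gamma_a = pade(r1*dt) * pade(r2*dt)
delta_a = pade(r1*dt) + pade(r2*dt)

# bilinear recurrence: y_i = delta*y_{i-1} - gamma*y_{i-2} + (input terms)
den = 4*a1 + 2*b1*dt + dt**2
delta_bil = (8*a1 - 2*dt**2)/den
gamma_bil = (4*a1 - 2*b1*dt + dt**2)/den

assert abs(gamma_a - gamma_bil) < 1e-12
assert abs(delta_a - delta_bil) < 1e-12
```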


3.3.5 Discussion 

The derivations in the preceding sections have assumed that the coefficients of the second-order differential equation to be simulated were constant. In the case of constant coefficients there is little reason to use an approximate method to determine the corresponding recurrence coefficients, since they can be precomputed (offline) using the optimum solution. However, in practice the coefficients are often not constant. Even assuming the “variable τ” response can be approached using the “constant τ” solution with a sufficiently small Δt, variable coefficients make it necessary for the recurrence coefficients to be periodically recomputed. Approximate methods such as those presented above are used to minimize the computational burden. 

We saw in Section 3.3.1 that cascaded first-order algorithms with real coefficients cannot be used to simulate all possible second-order responses. Therefore, in this discussion we will focus on the three general approximate methods described above: (1) finite differences, (2) Taylor series substitution, and (3) bilinear substitution. Our assessment of these three methods will be based on the fact that each of the corresponding recurrence formulas is given by (3.2.37) with an appropriate choice of approximations for the parameters g and d. This was shown in Section 3.3.4 for the methods of Taylor series and bilinear substitution. Furthermore, it can easily be verified that if g and d are approximated by 

_{} 

then equations for the coefficients of (3.2.37) give the recurrence coefficients for the method of finite differences. This value of g_{a} follows from the exponential approximation 

_{} 

which is the same approximation as equation (3.3.43). However, in this case it is applied to the combined exponential (r_{1} + r_{2})Δt instead of the individual exponentials r_{1}Δt and r_{2}Δt, as was done to yield the bilinear substitution recurrence. The approximation for d in the case of finite differences does not correspond exactly to any well-defined exponential approximation. It is consistent with the assumption 

_{} 

If r_{1} = r_{2} = r, this is equivalent to the approximation e^{x} ≈ (2 − x^{2})/(2 − 2x), which for small values of x is better than the first-order Taylor series approximation, but not as good as either the second-order Taylor series or the (3.3.43) approximation. 
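These accuracy rankings are easy to confirm numerically. A minimal sketch, taking the (3.3.43) approximation to be the bilinear form e^{x} ≈ (2 + x)/(2 − x):

```python
import math

# Sketch: small-x accuracy of the exponential approximations discussed above.
# (3.3.43) is taken here to be the bilinear form e^x ≈ (2 + x)/(2 - x).
def approx_errors(x):
    exact = math.exp(x)
    return (abs((1 + x) - exact),              # first-order Taylor
            abs((1 + x + x*x/2) - exact),      # second-order Taylor
            abs((2 + x)/(2 - x) - exact),      # bilinear form of (3.3.43)
            abs((2 - x*x)/(2 - 2*x) - exact))  # finite-difference form

t1, t2, bil, fd = approx_errors(0.1)
# ranking claimed above: the finite-difference form beats first-order
# Taylor but loses to both second-order Taylor and the bilinear form
assert bil < t2 < fd < t1
```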

We will assess the three approximate methods by comparing their respective expressions for g_{a} and d_{a} with the optimum expressions for g and d. For this purpose we define the dimensionless parameters 

_{} 

In these terms the values of g and g_{a} for the various methods can be summarized as follows: 

Optimum: _{} 

First-Order Taylor Series: _{} 

Bilinear Substitution: _{} (3.3.51) 

Finite Differences: _{} (3.3.52) 

These expressions show that in order to give reasonably accurate values of g, the Taylor series and bilinear substitution methods must satisfy two requirements: P must be small and A_{1} must be large. If, for example, we require P < 1 and A_{1} > 1, then we have the following limitations on Δt 

_{} 

In contrast, the method of finite differences needs to satisfy only the first of these requirements. Another way to derive these two limits is to recognize that the exponential approximations are valid only for exponents that are small relative to 1. Therefore, we require r_{1,2}Δt < 1. This implies that 

_{} 

If b_{1} is much larger than a_{1}, this reduces to Δt < a_{1}/b_{1}, whereas if a_{1} is much larger than b_{1} it reduces to Δt < √a_{1}. 

Equations (3.3.51) and (3.3.52) also show that bilinear substitution will give a slightly more accurate g_{a} than finite differencing when a_{1} is positive, but slightly less accurate when a_{1} is negative. This is because for a wide range of positive A_{1} values the inverse A_{1} terms in (3.3.51) actually improve the exponential approximation, but if A_{1} is negative these terms make the approximation worse. 

The expressions for d and d_{a} for the various methods are as follows: 

Optimum: _{} 

Taylor Series: _{} 

Bilinear Substitution: _{} 

Finite Differences: _{} 

By plotting d and d_{a} versus P for various values of A_{1}, it can be shown that the relative accuracies and limitations on Δt are essentially the same as for the parameter g. The major difference is that the limitation Δt < √a_{1} applies to the finite difference method to ensure the accuracy of d_{a}, even though it is not needed to ensure the accuracy of g_{a}. 

The limitations on Dt imposed by the use of approximate recurrence coefficients are in addition to, and independent of, the requirements imposed by the intrinsic information rates of the input and output signals. To illustrate this fact, consider the dynamic function represented by the equation 

_{} 

with the initial conditions y(0)=100 and y(1) = 50. Since the input signal is always zero, the recurrence formula given by equation (3.2.21) is just 

y_{i} = (c_{1}) y_{i−2} + (c_{2}) y_{i−1} 

Also, since x(t) is constant, and the natural frequency of the response is only 0.16 cycles per second, it is clear that the intrinsic information flow rate of this system can be adequately represented with a sampling frequency of 1 sec^{−1}, which corresponds to a discrete time interval of Δt = 1 second. On this basis, the equations for c_{1} and c_{2} beneath (3.2.37) give the optimum recurrence formula. 

y_{i} = (−1.125352E−7) y_{i−2} + (0.939182) y_{i−1} 

In contrast, the method of bilinear substitution gives the formula 

y_{i} = (0.729730) y_{i−2} + (0.162162) y_{i−1} 

The computed response given by these two formulas is plotted in Figure 12, superimposed on the exact analytical solution. 

Figure 12 

Clearly the formula based on bilinear substitution does not adequately represent the system under these conditions. This is because when using bilinear substitution (or any of the other approximate methods discussed in this document) the execution period must not only be small enough to resolve the input and output signals, it must also satisfy the requirements Δt < a_{1}/b_{1} and Δt < √a_{1}. 

In the present example we have a_{1}=1 and b_{1}=16, so the execution interval must be less than 0.0625 sec when using an approximate method. Thus the execution frequency in this example must be 16 times greater when using the approximate recurrence formula than when using the optimum recurrence formula. 
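The coefficient values quoted in this example can be reproduced as follows, assuming the system has the denominator form a₁D² + b₁D + 1 (so the continuous roots satisfy a₁r² + b₁r + 1 = 0) and that the optimum coefficients are −e^{(r₁+r₂)Δt} and e^{r₁Δt} + e^{r₂Δt}:

```python
import math

# Sketch: reproducing the numerical coefficients of the worked example,
# assuming the system is (a1*D^2 + b1*D + 1) y = x with x = 0,
# a1 = 1, b1 = 16, and dt = 1 second.
a1, b1, dt = 1.0, 16.0, 1.0

disc = math.sqrt(b1**2 - 4*a1)
r1, r2 = (-b1 + disc)/(2*a1), (-b1 - disc)/(2*a1)  # continuous-time roots

# optimum recurrence  y_i = c1*y_{i-2} + c2*y_{i-1}
c1_opt = -math.exp((r1 + r2)*dt)                   # ≈ -1.125352e-7
c2_opt = math.exp(r1*dt) + math.exp(r2*dt)         # ≈  0.939182

# bilinear substitution D -> (2/dt)*(1 - z^-1)/(1 + z^-1)
den = 4*a1 + 2*b1*dt + dt**2
c1_bil = -(4*a1 - 2*b1*dt + dt**2)/den             # = 27/37 ≈ 0.729730
c2_bil = (8*a1 - 2*dt**2)/den                      # =  6/37 ≈ 0.162162

assert abs(c1_opt + 1.125352e-7) < 1e-12
assert abs(c2_opt - 0.939182) < 1e-6
assert abs(c1_bil - 27/37) < 1e-12 and abs(c2_bil - 6/37) < 1e-12
```

Note that the characteristic equation of the bilinear recurrence here has a negative root near −0.78, so its computed response contains a decaying alternating component, which is consistent with the inadequate behavior shown in Figure 12.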
