Iterative Solutions of Homogeneous Linear Systems 

Suppose we are given a homogeneous linear system of equations such as 

_{} 

with the auxiliary condition P_{0} + P_{1} + P_{2} = 1. (This system appears in the note on Redundant Systems with a Common Threat.) The steady-state solution satisfies the algebraic equations 

_{} 

The coefficient matrix is singular (as can be seen from the fact that each column sums to zero), so there exists a solution other than the trivial solution P_{0} = P_{1} = P_{2} = 0 (which does not satisfy the auxiliary condition). A straightforward approach to determining the nontrivial solution is to replace the first equation (say) with the auxiliary condition, giving the system 

_{} 

Multiplying through by the inverse of the coefficient matrix gives the result 

_{} 

For small systems this method works well, but for large systems the general matrix inversion becomes quite laborious, and for sparse systems (i.e., with many zeros in the coefficient matrix) it is inefficient. An alternative approach is suggested by rewriting equation (1) in the form of a recursion, moving the diagonal terms over to the right-hand side and dividing through by the corresponding coefficients (assuming all the diagonal coefficients are nonzero, which is assured in many applications, such as Markov models). This gives 

_{} 

where the subscripts signify the state vectors on successive iterations. (Of course, at steady-state, multiplication by this coefficient matrix would leave the system state unchanged.) Unfortunately, the iteration in (2) does not converge, so this is not a viable solution method. 

To understand why (2) doesn’t converge, a quick review of the most commonly used iterative methods for general nonhomogeneous linear systems will be helpful. Given a system of the form Mx = b, where M is a square matrix and x and b are column vectors, suppose we divide each row equation by the corresponding diagonal coefficient of M (assuming all those diagonal elements are nonzero). Keeping the diagonal terms on the left side, and moving the off-diagonal terms over to the right side, we get an equivalent system of equations that can be written in the form x = Ax + u for some square matrix A (with all diagonal terms equal to zero) and column vector u. Now suppose we employ this as a recurrence relation 

x_{n+1} = Ax_{n} + u                                        (3)

Beginning with an arbitrary initial state vector x_{0}, successive iterations yield 

x_{1} = Ax_{0} + u          x_{2} = Ax_{1} + u = A^{2}x_{0} + (A + I)u

and so on. Thus the nth iterate can be written as 

x_{n} = A^{n}x_{0} + (A^{n-1} + A^{n-2} + ... + A + I)u

Clearly if A^{n} goes to zero as n increases, then the first term vanishes and the geometric series in the second term converges, so we have 

x = (I + A + A^{2} + ...)u = (I - A)^{-1}u

which is obviously the solution of the original scaled system (I - A)x = u. This iteration scheme is known as the Jacobi method, which we see is essentially just a disguised way of inverting the coefficient matrix. It converges only if the powers of the depleted coefficient matrix A approach zero. This will be the case if the original coefficient matrix M is “strictly diagonally dominant”, meaning the magnitude of the diagonal term in each row exceeds the sum of the magnitudes of the off-diagonal terms in that row. To prove this, note that the sum of the magnitudes of the elements in each row of A is strictly less than 1, because each row consists of the off-diagonal terms of the corresponding row of M divided by the diagonal term of that row, whose magnitude (by assumption) exceeds the sum of the magnitudes of those off-diagonal terms. Hence if we let c denote the column vector consisting of all 1s, it follows that Ac is a column vector consisting of terms all strictly smaller than 1 in magnitude, and so A(Ac) consists of even smaller terms, and so on. Thus A^{n}c approaches zero, so A^{n} approaches zero. 
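To make the scheme concrete, here is a minimal Python sketch of the Jacobi iteration; the matrix M and vector b are made-up illustrative values (M is strictly diagonally dominant by rows), not taken from the system above.

```python
# Jacobi iteration for M x = b: each sweep solves row i for x_i using
# only the values of x from the previous sweep.  M and b below are a
# made-up strictly diagonally dominant example.

def jacobi(M, b, iterations=100):
    n = len(M)
    x = [0.0] * n
    for _ in range(iterations):
        # build the new vector entirely from the previous one
        x = [(b[i] - sum(M[i][j] * x[j] for j in range(n) if j != i)) / M[i][i]
             for i in range(n)]
    return x

M = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
x = jacobi(M, b)
# the residual M x - b is now negligibly small
```

Note that the list comprehension builds the whole new vector from the old one before any component is overwritten, which is precisely what distinguishes Jacobi from the row-by-row variant discussed below.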

In some applications we encounter systems of equations for which the coefficient matrices are not diagonally dominant by rows, but they are diagonally dominant by columns. In other words, the sum of the magnitudes of the off-diagonal elements of each column is less than the magnitude of the diagonal term of that column. Such systems are still convergent under the Jacobi method. To see why, note that instead of dividing through each row by the diagonal coefficient of that row, we can rescale each component of x so that the diagonal terms of the modified coefficient matrix become 1s. To accomplish this, we define a new variable column vector y by the relations y_{i} = M_{ii}x_{i}. Making these substitutions into the original system equations, we essentially just divide each element of a given column in the coefficient matrix by the diagonal element of that column. In this way we arrive at a relation of the form y = Ay + b where A is a square matrix with all zero diagonal elements, and such that the sum of the elements of each column is strictly less than 1. Just as in the previous case, iterations of this relation converge provided the powers of A approach zero. Letting r denote the row vector consisting of all 1s, the row vector rA consists of the sums of the columns of A, so the terms are all strictly smaller than 1, and hence the row vector (rA)A consists of even smaller terms, and so on. Thus rA^{n} approaches zero, so A^{n} approaches zero as n increases. 

Given that the original coefficient matrix is strictly diagonally dominant either by rows or by columns, we can simply proceed as we did originally, solving each row for the diagonal term and setting up the recurrence. It isn’t necessary to normalize by columns, because the Jacobi method will converge regardless of whether we normalize by rows or by columns (assuming diagonal dominance in either of those two directions). This is because the two methods described above – one normalizing by rows and the other by columns – lead to normalized matrices with the same convergence properties. To prove this, let D be the diagonal matrix whose nonzero elements are the reciprocals of the diagonal elements of the matrix M. Now letting N denote the matrix M with the diagonal terms removed, we see that the matrix products DN and ND represent the coefficient matrices normalized by rows and by columns respectively. The powers of either of these products consist of alternating factors of D and N; indeed ND = D^{-1}(DN)D, so the two normalized matrices are similar, and their powers exhibit the same convergence. 
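This equivalence is easy to check numerically. The following Python sketch (using the same made-up diagonally dominant matrix idea, not a matrix from the text) forms the row-normalized product DN and the column-normalized product ND and confirms that high powers of both decay:

```python
# D is the diagonal matrix of reciprocals of the diagonal of M, and N
# is M with its diagonal removed, so DN is M normalized by rows and
# ND is M normalized by columns.  M is a made-up example that is
# strictly diagonally dominant by rows.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 1.0, 3.0]]
n = len(M)
N = [[M[i][j] if i != j else 0.0 for j in range(n)] for i in range(n)]
DN = [[N[i][j] / M[i][i] for j in range(n)] for i in range(n)]  # row-normalized
ND = [[N[i][j] / M[j][j] for j in range(n)] for i in range(n)]  # column-normalized

rowpow, colpow = DN, ND
for _ in range(39):                      # form the 40th power of each product
    rowpow = matmul(rowpow, DN)
    colpow = matmul(colpow, ND)
# the entries of both 40th powers are now negligibly small
```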

Here we should also mention that, although the Jacobi method consists of evaluating successive iterations of the entire state vector based on the previous state vector, we could also evaluate each row of the system using the updated component values as soon as they are calculated. This is known as the Gauss-Seidel method, and it typically converges about twice as fast as the Jacobi method. The condition for convergence is essentially the same, i.e., the coefficient matrix must be diagonally dominant. 
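In Python the Gauss-Seidel variant differs from the Jacobi sketch only in that each component is overwritten as soon as it is computed, so later rows in the same sweep use the updated values (M and b below are again made-up illustrative values):

```python
# Gauss-Seidel iteration for M x = b: components are updated in place,
# so each row uses the freshest available values.  M and b are a
# made-up strictly diagonally dominant example.

def gauss_seidel(M, b, iterations=50):
    n = len(M)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            # later rows in this same sweep already see the new x[j]
            s = sum(M[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / M[i][i]
    return x

M = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
x = gauss_seidel(M, b)
```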

It should also be noted that strict diagonal dominance is a sufficient condition for convergence but not a necessary condition. Assuming a fully-connected system of equations, a weaker but still sufficient condition is that the magnitude of the diagonal term of each row (or each column) is at least equal to the sum of the magnitudes of the other elements of that row (or column), and for at least one row (or column) the magnitude of the diagonal element strictly exceeds the sum of the magnitudes of the other elements. This can be proven as with the previous propositions, but accounting for the fact that a reduction in the size of a term of the c or r vector affects every column having a contribution from that row, and then every row having a contribution from any of those columns is affected, and so on. Assuming the system is fully-connected, it follows that all the elements of A^{n} go to zero as n increases. 

Returning now to our original example, the homogeneous equation (1), we can see why the Jacobi and Gauss-Seidel methods do not converge. The system of equations is diagonally dominant by columns, but not strictly diagonally dominant in any column. This is true in general for the steady-state equations of Markov models in reliability calculations. Successive powers of the coefficient matrix in (2) remain bounded, but they oscillate endlessly, as we would expect for a matrix with spectral radius of 1. As a result, the Jacobi or Gauss-Seidel methods would not be very suitable for solving this system, even if it had a nonzero “forcing” term (i.e., even if it had a nonzero u vector in equation (3)). Similar comments apply to more complicated systems, such as the one described at the end of the note on Undetected Failures of Threat Protection. 

However, it is possible to solve such systems very efficiently by an iterative method, taking advantage of the homogeneity of the equations. We might impose the auxiliary condition to replace one of the rows, but it is preferable to leave that condition aside for the moment, and notice that the absolute values of the P_{j} variables are irrelevant, since only their ratios have significance (with respect to this system of equations). We can exploit this fact by dividing through all the equations by P_{0}, resulting in a new set of equations in the ratios c_{j} = P_{j}/P_{0}. Now, a consequence of this normalization is that c_{0} is identically equal to 1, so we can rewrite equation (2) in terms of the c_{j} variables and replace the first row with the identity, giving the equations 

_{} 

The auxiliary condition implies c_{0} + c_{1} + c_{2} = 1/P_{0}, so we have P_{j} = c_{j}/(c_{0} + c_{1} + c_{2}). For convenience we can stipulate that the initial c vector is [1 0 0]^{T}, allowing us to write 

_{} 

This amounts to using each of the original equations as a recurrence relation to compute successive c vectors, analogous to the Jacobi method. (Of course, we could also proceed row-by-row, making use of each updated component as soon as it is available, analogous to the Gauss-Seidel method; the results would be similar, but with faster convergence.) Since the system has a unique nontrivial steady-state solution, we expect the powers of the coefficient matrix to converge on a fixed matrix. Indeed, for this simple system, we have 

_{} 

where 

_{} 

The odd powers behave similarly. Notice that the product BC involves the numerators and denominators from the 2nd and 3rd rows/columns, illustrating why it doesn’t matter whether we normalized by rows or by columns. If the original matrix is strictly diagonally dominant either by rows or by columns, the product BC is less than 1, and therefore the iteration converges to 

_{} 

It is easily verified that this result is equivalent to the explicit solution given previously. 
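The whole procedure can be sketched in Python for a hypothetical three-state Markov model. The transition rates lam1, lam2, mu1, mu2 below are invented for illustration; they are not the coefficients of the system discussed above, but they give a coefficient matrix of the same general character (diagonally dominant by columns, with the first balance equation replaced by the identity c_{0} = 1).

```python
# Steady-state solution of a homogeneous Markov system by the method
# described in the text: hold c_0 = 1, use each remaining balance
# equation (solved for its diagonal term) as a recurrence for the
# ratios c_j = P_j / P_0, then normalize.  All rates are hypothetical.

lam1, lam2, mu1, mu2 = 2.0e-4, 1.0e-4, 1.0e-2, 5.0e-3

# Hypothetical balance equations (each column of rates sums to zero):
#   state 0:  lam1*P0 = mu1*P1 + mu2*P2    <- replaced by c0 = 1
#   state 1:  (lam2 + mu1)*P1 = lam1*P0
#   state 2:  mu2*P2 = lam2*P1

c = [1.0, 0.0, 0.0]
for _ in range(50):
    c = [1.0,
         lam1 * c[0] / (lam2 + mu1),
         lam2 * c[1] / mu2]

total = sum(c)                  # equals 1/P0 once the iteration settles
P = [cj / total for cj in c]
```

Because this particular hypothetical model has no feedback among the higher states, the iteration settles after just a few sweeps; in general one would iterate until the c vector stops changing.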

In general, for homogeneous linear systems representing steady-state Markov models, the original coefficient matrix is diagonally dominant by columns, although not strictly in any column. However, there are invariably transitions back to the initial state for any closed-loop model, and these are represented by positive components in the first row of the matrix. Once this row is set to zero (except for the leading entry of 1), the remaining columns are then strictly diagonally dominant. After solving for the diagonal terms of the original homogeneous system, and dividing through by the first variable (P_{0}), we get a system of the form 

_{} 

Let C denote a square matrix with 1 in the upper left corner and whose only other nonzero elements are in the first column, let H denote a square matrix whose first row, first column, and diagonal are all zero, and let Q denote a square matrix with 0 in the upper left corner and whose only other nonzero elements are in the first column. Then we have the identities 

C^{2} = C          CH = 0          HC = Q

and of course the sum of a C matrix and a Q matrix is a C matrix. In terms of these types of matrices, the preceding equation can be written as 

c_{n+1} = (C + H)c_{n}

Of the 2^{n} terms given by raising C + H to the nth power, the only terms not containing CH (and hence the only nonzero terms) are 

H^{n},   H^{n-1}C,   H^{n-2}C^{2},   ...,   HC^{n-1},   C^{n}

Since C^{k} = C for every k greater than zero, the sum of these terms is

(C + H)^{n} = H^{n} + (I + H + H^{2} + ... + H^{n-1})C

Since the submatrix of H is strictly diagonally dominant (by columns), we know that H^{n} goes to zero as n increases, and the geometric series converges, so we have 

(C + H)^{n} → (I + H + H^{2} + ...)C = (I - H)^{-1}C

Typically the first column converges very rapidly – within just five or ten iterations – even before the powers of H have become negligible. For some applications of this method, see the note on Complete Markov Models and Reliability. 
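As a numerical sanity check of this limit, the following Python sketch builds small C and H matrices of the required forms (the entries are arbitrary illustrative values, not taken from any particular model), iterates c_{n+1} = (C + H)c_{n}, and confirms that the iterates settle on the fixed point c = (C + H)c with the leading component held at 1:

```python
# C has a 1 in the upper-left corner and its only other nonzero
# entries in the first column; H has zero first row, first column,
# and diagonal, with column sums below 1.  Entries are arbitrary
# illustrative values.

C = [[1.0, 0.0, 0.0],
     [0.4, 0.0, 0.0],
     [0.1, 0.0, 0.0]]
H = [[0.0, 0.0, 0.0],
     [0.0, 0.0, 0.3],
     [0.0, 0.2, 0.0]]
T = [[C[i][j] + H[i][j] for j in range(3)] for i in range(3)]

c = [1.0, 0.0, 0.0]
for _ in range(200):
    c = [sum(T[i][j] * c[j] for j in range(3)) for i in range(3)]
# c now satisfies c = (C + H)c, i.e. c = (I - H)^{-1} C c_0
```

Solving the two lower fixed-point equations by hand gives c_{1} = 0.43/0.94 and c_{2} = 0.1 + 0.2c_{1}, which the iteration reproduces; the identity CH = 0 can also be verified directly for these matrices.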

Needless to say, this method has much in common with numerical techniques for solving potential flow fields in computational fluid dynamics. We are given a grid with specified values of the flow function at the boundaries, and then the value of each “node” is computed as a function of the values of the neighboring nodes such that the Laplace equation (in discrete form) is locally satisfied. We step through the grid, imposing the governing equation at each point, and then iterate. Eventually the entire flow field is determined so that the homogeneous Laplace equation is simultaneously satisfied at each point. Likewise in our iterative approach to solving homogeneous linear systems we have a set of “nodes” with governing equations giving the value of each node in terms of the values of the “neighboring” nodes. However, in this case there is no notion of “locality”, because any node can be a function of any other node. In sparse models the number of dependent nodes is quite limited, but they need not be in close “proximity” in terms of any low-dimensional space. 
