Never Say Never 

And my poor fool is hang'd! No, no, no life! 

In another note we discussed the so-called “gambler’s ruin” problem, wherein a gambler, beginning with h dollars, plays a series of games, in each of which he either wins or loses 1 dollar, with probability a and 1 – a respectively. Assuming the gambler decides in advance to play until either reaching N dollars or going broke, we saw that in the case of a “fair” game, a = 1/2, the prior probability of reaching N is simply h/N. This approaches zero as N increases without bound, so we say (somewhat loosely) that the probability of ruin approaches 1 in the limit as N “goes to infinity”. More generally, for values of a not equal to 1/2, we saw that the probability of reaching N from initial holdings h is (1 – r^{h})/(1 – r^{N}), where r = (1 – a)/a. Thus, for r < 1, the probability of reaching N before going broke goes to 1 – r^{h} in the limit as N “goes to infinity”. 
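A quick numerical check of these formulas shows both limits at work: h/N shrinking toward zero in the fair game, and (1 – r^{h})/(1 – r^{N}) approaching 1 – r^{h} when r < 1. The function name and the particular values of h, N, and a below are illustrative choices, not from the text:

```python
# Sketch of the quoted gambler's-ruin formulas; the name p_win and the
# particular values of h, N, and a are illustrative choices.

def p_win(h, N, a):
    """Probability of reaching N dollars before ruin, starting from h."""
    if a == 0.5:                     # fair game: the formula reduces to h/N
        return h / N
    r = (1 - a) / a
    return (1 - r**h) / (1 - r**N)

# Fair game: probability of reaching N is h/N, approaching 0 as N grows.
print(p_win(10, 100, 0.5))           # 0.1
print(p_win(10, 10**6, 0.5))         # 1e-05

# Favorable game, a = 2/3, so r = 1/2: as N grows, the probability of
# reaching N approaches 1 - r**h.
h, a = 3, 2/3
r = (1 - a) / a
print(p_win(h, 10**6, a), 1 - r**h)  # both approximately 0.875
```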

It’s worth noting that, in each of the cases we’ve considered, there is some finite value of N. We evaluated the limit of the probabilities for larger and larger finite values of N, but we didn’t actually evaluate the probabilities for infinite N. It’s interesting to consider whether it is possible to directly evaluate the probabilities if there is no finite N. First we must ask whether it even makes sense to consider a hypothetical gambler with no limit (and assuming the house has unlimited resources), who simply intends to play forever or until he goes broke – whichever comes first. The phrase “whichever comes first” in this context has a certain comical quality, because it’s unclear how “forever” can ever occur. Conversely, we may ask for the probability of never going broke in an infinite game. Is this even a well-defined proposition? (Similar questions underlie the St. Petersburg paradox, which also posits a gambler with infinite resources.) 

The sequence of games has no memory, so for any current holding h there is (we assume) a unique probability of ever going broke, and the complement of this is the probability of never going broke. It isn’t surprising that if a is greater than 1/2, meaning that we have a better than even chance of winning on each round, our expected winnings will increase without limit. Also, we shouldn’t necessarily be distracted by the idea that we could never complete infinitely many rounds in a finite time, because it might be possible to conceive of the rounds occurring at exponentially shorter and shorter intervals of time – allowing an infinite sequence to be completed in finite time. This admittedly leads to the problematic notion of “supertasks” that is sometimes discussed in relation to generalizations of Zeno’s paradoxes, but at least it reminds us that temporal duration need not be a governing consideration. Perhaps more relevant is the notion of non-terminating patterns. For example, if a = 1, the gambler simply collects one dollar on each round, and we have no difficulty accepting that he will never go broke playing this game. Likewise we can imagine other patterns of wins and losses, such as WWLWWLWWL…, which will obviously lead to endless increase in the gambler’s holdings, so he will never go broke. Thus it’s easy to imagine sequences with that property, and it’s natural to expect that a probabilistic game with W twice as likely as L on each round could likewise lead to sequences that never drop to zero. 
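The claim about fixed patterns is easy to verify mechanically. The following sketch (the starting holdings of 1 dollar and the 100 repetitions are arbitrary choices) confirms that under the repeating pattern WWL the holdings never fall below their starting level, and in particular never reach zero:

```python
# Holdings under the fixed repeating pattern WWL, starting (arbitrarily)
# from 1 dollar.  Each W adds a dollar, each L removes one.
h = 1
holdings = [h]
for outcome in "WWL" * 100:          # 300 rounds of the fixed pattern
    h += 1 if outcome == "W" else -1
    holdings.append(h)

# The minimum never drops below the starting level, and each WWL cycle
# yields a net gain of one dollar.
print(min(holdings), holdings[-1])   # 1 101
```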

On the other hand, there is undeniably a distinction between a fixed repetitive pattern and a probabilistic sequence, even if they have the same expectations for the ratios of Ws to Ls. The repetitive sequence consisting of repetitions of WWL can never fall more than one dollar below its current level, whereas the probabilistic sequence will contain arbitrarily large downward subsequences. Intuitively we expect the gambler’s holdings to almost certainly increase fast enough to absorb the downward runs, but to quantify the probability that such a probabilistic sequence will never drop to zero holdings, we would like an evaluation that is not based on the behavior of games with a finite N. 
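We can at least probe this intuition by simulation, though any simulation necessarily reintroduces a finite cutoff. In the sketch below, the win probability a = 2/3, the cap, and the trial count are all arbitrary choices; a trial stops once the holdings reach the cap, which serves as a proxy for never going broke, and the estimated ruin probability comes out close to r^{h}:

```python
import random

# Monte Carlo estimate of the probability of ruin for a gambler starting
# with h dollars and winning each round with probability a = 2/3.  We cannot
# simulate an endless game, so a trial stops (counted as "escaped") once the
# holdings reach a large cap; the cap and trial count are arbitrary choices.
random.seed(1)

def ruined(h, a, cap=200):
    while 0 < h < cap:
        h += 1 if random.random() < a else -1
    return h == 0

h, a = 3, 2/3
trials = 5000
est = sum(ruined(h, a) for _ in range(trials)) / trials
r = (1 - a) / a
print(est, r**h)    # the estimate should be close to r**h = 0.125
```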

To motivate this, it may be worth noting some important examples showing that the behavior of truly infinite patterns need not be the same as the limit of finite patterns as the finite size is allowed to increase indefinitely. Naturally we recall the well-known “limit paradox”, but an even more significant example can be found in the mathematics of quantum mechanics. In Heisenberg’s original formulation, the coordinate q and the corresponding momentum p are represented by matrices that satisfy the fundamental commutation relation 

      qp – pq  =  iħI 

where I is the identity matrix and ħ is a positive constant. At first glance this relation might seem impossible, because even though matrices need not commute, and hence qp need not be identical to pq, it is nevertheless true that the traces (i.e., the sums of the diagonal elements) of those two products are identical. It follows that the trace of the difference between those two products must be zero, and yet the trace of the right hand side is obviously not zero. What has gone wrong? The answer is that the commutation relation is indeed impossible for matrices of any finite size, no matter how large, but it is possible for infinite matrices. To illustrate with a simple example, consider the two matrices 

          [ 0   a_{1}   0     0     0  ]            [  0     0     0     0    0 ]
          [ 0    0    a_{2}   0     0  ]            [ b_{1}  0     0     0    0 ]
    a  =  [ 0    0     0    a_{3}   0  ]      b  =  [  0    b_{2}  0     0    0 ]
          [ 0    0     0     0    a_{4}]            [  0     0    b_{3}  0    0 ]
          [ 0    0     0     0     0  ]             [  0     0     0   b_{4}  0 ]

We have 

              [ a_{1}b_{1}         0                    0                    0                  0        ]
              [    0     a_{2}b_{2} – a_{1}b_{1}        0                    0                  0        ]
    ab – ba = [    0               0         a_{3}b_{3} – a_{2}b_{2}         0                  0        ]
              [    0               0                    0         a_{4}b_{4} – a_{3}b_{3}       0        ]
              [    0               0                    0                    0             –a_{4}b_{4}   ]

As expected, the sum of the diagonal elements of this expression is zero, so it couldn’t possibly equal the (scaled) identity matrix. To solve for the values of a_{j} and b_{j} that would cause this to be the identity matrix, we would have to impose the conditions 

    a_{1}b_{1} = 1      a_{2}b_{2} – a_{1}b_{1} = 1      a_{3}b_{3} – a_{2}b_{2} = 1      a_{4}b_{4} – a_{3}b_{3} = 1      –a_{4}b_{4} = 1 

The first four equations imply a_{1}b_{1} = 1, a_{2}b_{2} = 2, a_{3}b_{3} = 3, and a_{4}b_{4} = 4, but this contradicts the fifth equation, which requires a_{4}b_{4} = –1. Hence the fundamental commutation relation of quantum mechanics cannot be satisfied for matrices of this form, and this can be shown for all matrices of finite size, no matter how large. However, if we consider actually infinite matrices, then it is possible to satisfy the commutation relation. Extending the pattern of our a and b matrices to infinity, we get the infinite sequence of conditions 

    a_{1}b_{1} = 1      a_{2}b_{2} – a_{1}b_{1} = 1      a_{3}b_{3} – a_{2}b_{2} = 1      a_{4}b_{4} – a_{3}b_{3} = 1      … 

and hence we equate the commutator of these two matrices to the infinite identity matrix by setting a_{j}b_{j} = j for all j from 1 to infinity. We eliminate the inconsistency at the “end” of the matrix by simply denying that the matrix has an end, and we allow the elements of the matrices to increase without limit. This shows that the attributes of infinite matrices can be fundamentally different from those of finite matrices, and we can’t accurately represent the behavior of infinite matrices simply as the limit of finite matrices as the size increases. 
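This can be made concrete with finite truncations. Taking a_{j} = b_{j} = √j — one convenient choice satisfying a_{j}b_{j} = j; these are, up to constants, the familiar raising and lowering matrices of the quantum harmonic oscillator — the commutator of the n×n truncations equals the identity except for the last diagonal entry, which is forced to –(n–1) so that the trace remains zero. Only by letting the matrix have no end does the defect disappear:

```python
import numpy as np

# n-by-n truncation of the infinite matrices, with a_j = b_j = sqrt(j) so
# that a_j * b_j = j.  The size n = 6 is an arbitrary illustrative choice.
n = 6
j = np.arange(1, n)                  # 1, 2, ..., n-1
a = np.diag(np.sqrt(j), k=1)         # a_j on the superdiagonal
b = np.diag(np.sqrt(j), k=-1)        # b_j on the subdiagonal

c = a @ b - b @ a                    # the commutator ab - ba (diagonal)
print(np.diag(c).round(6))           # [ 1.  1.  1.  1.  1. -5.]
print(round(np.trace(c), 6))         # 0.0 -- the defect cancels the ones
```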

With this in mind, we return to the gambler’s ruin problem, and seek to determine the probability of ruin for a truly infinite game, without assuming that it is represented by the limit of the probability for games with finite N. To do this we return to the derivation of the formulas noted above. Letting p_{j} denote the probability of ruin given current holdings of j dollars, we have the relation p_{j} = ap_{j+1} + (1 – a)p_{j–1}. From this it follows that p_{j} = A + Br^{j}, where A and B are constants and r = (1 – a)/a. Now, we know that p_{0} = 1, so we have the condition 1 = A + B. If there is some finite amount of winnings, denoted by N, at which the gambler will stop and declare victory, then we also have p_{N} = 0, which implies 0 = A + Br^{N}. Combining these conditions we get A = –r^{N}/(1 – r^{N}) and B = 1/(1 – r^{N}), so the probability of ruin is p_{j} = (r^{j} – r^{N})/(1 – r^{N}). For r < 1, this goes to r^{j} in the limit as N increases to infinity. But this solution rests on the validity of the boundary condition p_{N} = 0 in the limit as N goes to infinity. For any finite value of N this condition is true by definition, but if we say there is no finite N, and the gambler is just going to continue playing forever, the validity of this boundary condition is less clear. 
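These formulas are easy to confirm numerically. The sketch below (with a = 2/3 and N = 20 as arbitrary illustrative choices) checks that p_{j} = (r^{j} – r^{N})/(1 – r^{N}) satisfies the recurrence and both boundary conditions, and that it is already close to r^{j} for moderate N:

```python
# Verify the finite-N solution of the ruin recurrence
# p_j = a*p_{j+1} + (1 - a)*p_{j-1}, with p_0 = 1 and p_N = 0.
# The values a = 2/3 and N = 20 are arbitrary illustrative choices.
a, N = 2/3, 20
r = (1 - a) / a
p = [(r**j - r**N) / (1 - r**N) for j in range(N + 1)]

assert abs(p[0] - 1) < 1e-12 and abs(p[N]) < 1e-12          # boundaries
assert all(abs(p[j] - (a*p[j+1] + (1 - a)*p[j-1])) < 1e-12  # recurrence
           for j in range(1, N))

print(p[3], r**3)   # already close to the limiting value r**j
```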

It’s tempting to think that the truth of this boundary condition at infinity may be implicit in the other conditions of the problem, but that’s not the case, because we can choose any probability we like for p_{∞}, leading to the result A = p_{∞} and B = 1 – p_{∞} (since r < 1, the term Br^{j} vanishes as j increases, so A is the limiting probability p_{∞}, and the condition p_{0} = 1 then gives B). From this we know the solution for the infinite gambler’s game is of the form p_{j} = p_{∞} + (1 – p_{∞})r^{j}. It certainly seems most natural to set p_{∞} = 0, consistent with the limiting value of the finite gambler’s game as N increases towards infinity, but strictly speaking this is a free choice. We could just as well choose p_{∞} = 1/2, for example, in which case the solution for the infinite gambler would be p_{j} = (1 + r^{j})/2. This obviously satisfies the boundary condition at 0, and is also consistent with the recurrence relation at each step of the game, so we can’t exclude this solution – it is consistent with all the stated conditions of the infinite gambler’s problem. In fact, we could even select p_{∞} = 1, in which case p_{j} = 1 for all j. So we conclude that the infinite gambler’s game is not well defined, and has no unique solution. Some independent assumption, such as specifying the value of p_{∞}, is required in order to determine a solution. 
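The indeterminacy can be exhibited directly: every member of the family p_{j} = p_{∞} + (1 – p_{∞})r^{j} passes every stated test of the infinite game. The sketch below (with a = 2/3 as an arbitrary choice) checks the boundary condition at zero holdings and the recurrence for several choices of p_{∞}:

```python
# Every member of the family p_j = p_inf + (1 - p_inf)*r**j satisfies
# p_0 = 1 and the recurrence p_j = a*p_{j+1} + (1 - a)*p_{j-1},
# whatever value is chosen for p_inf.  Here a = 2/3 (arbitrary choice).
a = 2/3
r = (1 - a) / a

for p_inf in (0.0, 0.5, 1.0):
    p = lambda j, q=p_inf: q + (1 - q) * r**j
    assert abs(p(0) - 1) < 1e-12                  # boundary at zero holdings
    assert all(abs(p(j) - (a*p(j + 1) + (1 - a)*p(j - 1))) < 1e-12
               for j in range(1, 50))

print("p_inf = 0, 1/2, and 1 all satisfy every stated condition")
```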

This situation is similar to the well-known problem asking for the resistance between two specified nodes of an infinite grid of resistors. The usual “answer” is based on computing the result for a finite grid, and then evaluating this in the limit as the size of the grid is increased “to infinity”. But just as in the gambler’s problem (and the matrices in quantum mechanics), the result is different if we consider an actually infinite grid, rather than the limit of a large but finite grid. The answer for an infinite grid is strictly indeterminate unless some additional conditions (such as the resistances “at infinity”) are specified. 
