Never Say Never

 

And my poor fool is hang'd! No, no, no life!
Why should a dog, a horse, a rat, have life,
And thou no breath at all? Thou'lt come no more,
Never, never, never, never, never!
Pray you, undo this button: thank you, sir.
Do you see this? Look on her, look, her lips,
Look there, look there!
                              Shakespeare, King Lear

 

In another note we discussed the so-called “gambler’s ruin” problem, wherein a gambler, beginning with h dollars, plays a series of games, in each of which he either wins or loses 1 dollar, with probability α and 1−α respectively. Assuming the gambler decides in advance to play until either reaching N dollars or going broke, we saw that in the case of a “fair” game, α = 1/2, the prior probability of reaching N is simply h/N. This approaches zero as N increases without bound, so we say (somewhat loosely) that the probability of ruin approaches 1 in the limit as N “goes to infinity”. More generally, for values of α not equal to 1/2, we saw the probability of reaching N from initial holdings h is (1 − r^h)/(1 − r^N) where r = (1 − α)/α. Thus, for r < 1, in the limit as N “goes to infinity” the probability of reaching N before going broke goes to 1 − r^h.
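To make these finite-N formulas concrete, here is a minimal numerical sketch (the particular values of α, h, and N are arbitrary choices for illustration, and the function names are our own); it simply compares the closed-form expressions above against a direct simulation of the finite game.

```python
import random

def reach_probability(alpha, h, N):
    """Closed-form probability of reaching N before ruin, starting from h dollars."""
    if alpha == 0.5:
        return h / N                      # fair game: h/N
    r = (1 - alpha) / alpha
    return (1 - r**h) / (1 - r**N)        # general case: (1 - r^h)/(1 - r^N)

def simulate_reach(alpha, h, N, trials=100_000):
    """Monte Carlo estimate of the same probability from repeated finite games."""
    successes = 0
    for _ in range(trials):
        x = h
        while 0 < x < N:
            x += 1 if random.random() < alpha else -1
        successes += (x == N)
    return successes / trials

if __name__ == "__main__":
    for alpha, h, N in [(0.5, 3, 10), (0.6, 3, 10)]:
        print(alpha, h, N, reach_probability(alpha, h, N), simulate_reach(alpha, h, N))
```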

 

It’s worth noting that, in each of the cases we’ve considered, there is some finite value of N. We evaluated the limit of the probabilities for larger and larger finite values of N, but we didn’t actually evaluate the probabilities for infinite N. It’s interesting to consider whether it is possible to directly evaluate the probabilities if there is no finite N. First we must ask whether it even makes sense to consider a hypothetical gambler with no limit (and assuming the house has unlimited resources), who simply intends to play forever or until he goes broke – whichever comes first. The phrase “whichever comes first” in this context has a certain comical quality, because it’s unclear how “forever” can ever occur. Conversely, we may ask for the probability of never going broke in an infinite game. Is this even a well-defined proposition? (Similar questions underlie the Petersburg Paradox, which likewise posits a game with unlimited resources.)

 

The sequence of games has no memory, so for any current holding h there is a (we assume) unique probability of ever going broke, and the complement of this is the probability of never going broke. It isn’t surprising that if α is greater than 1/2, meaning that we have a better than even chance of winning on each round, our expected winnings will increase without limit. Also, we shouldn’t necessarily be distracted by the idea that we could never complete infinitely many rounds in a finite time, because it might be possible to conceive of the rounds occurring at exponentially shorter and shorter intervals of time – allowing an infinite sequence to be completed in finite time. This admittedly leads to the problematic notion of “supertasks” that is sometimes discussed in relation to generalizations of Zeno’s paradoxes, but at least it reminds us that temporal duration need not be a governing consideration. Perhaps more relevant is the notion of non-terminating patterns. For example, if α = 1, the gambler simply collects one dollar on each round, and we have no difficulty accepting that he will never go broke playing this game. Likewise we can imagine other patterns of wins and losses, such as WWLWWLWWL… which will obviously lead to endless increase in the gambler’s holdings, and he will never go broke. So it’s easy to imagine sequences with that property, and it’s natural to expect that a probabilistic game with W being twice as likely as L on each round could likewise lead to sequences that never drop to zero.

 

On the other hand, there is undeniably a distinction between the fixed repetitive pattern and a probabilistic sequence, even if they have the same expectations for the ratios of Ws to Ls. The repetitive sequence consisting of repetitions of WWL can never fall more than one dollar below its current level, whereas the probabilistic sequence will contain arbitrarily large downward sub-sequences. Intuitively we expect the gambler’s holdings to almost certainly increase fast enough to absorb the downward sequences, but to quantify the probability that such a probabilistic sequence will never drop to zero holdings, we would like an evaluation that does not rest on the behavior of games with a finite N.
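As a rough empirical check of this intuition, the following sketch estimates how often a gambler with α = 2/3 (so r = 1/2) survives a large but finite number of rounds. The cutoff is, of course, only a finite surrogate for “never” – exactly the kind of finite stand-in questioned below – and the parameter values are our own choices; with those caveats, the observed survival frequency comes out close to 1 − r^h.

```python
import random

def survival_frequency(alpha, h, max_rounds=1_000, trials=5_000):
    """Fraction of games that never hit zero holdings within max_rounds steps.

    NOTE: truncating at max_rounds is only a finite surrogate for "never
    going broke" -- precisely the kind of finite stand-in discussed in the text.
    """
    survived = 0
    for _ in range(trials):
        x = h
        for _ in range(max_rounds):
            x += 1 if random.random() < alpha else -1
            if x == 0:
                break
        else:                       # loop completed without hitting zero
            survived += 1
    return survived / trials

alpha, h = 2/3, 5
r = (1 - alpha) / alpha
print(survival_frequency(alpha, h), 1 - r**h)   # empirical frequency vs. 1 - r^h = 0.96875
```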

 

To motivate this, it may be worth noting some important examples showing that the behavior of truly infinite patterns need not be the same as the limit of finite patterns as the finite size is allowed to increase indefinitely. Naturally we recall the well-known “limit paradox”, but an even more significant example can be found in the mathematics of quantum mechanics. In Heisenberg’s original formulation, the coordinate q and the corresponding momentum p are represented by matrices that satisfy the fundamental commutation relation

$$qp - pq = i\hbar I$$

where I is the identity matrix and ħ is a positive constant. At first glance this relation might seem impossible, because even though matrices need not commute, and hence qp need not be identical to pq, it is nevertheless true that the traces (i.e., the sums of the diagonal elements) of those two products are identical. It follows that the trace of the difference between those two products must be zero, and yet the trace of the right-hand side is obviously not zero. What has gone wrong? The answer is that the commutation relation is indeed impossible for matrices of any finite size, no matter how large, but it is possible for infinite matrices. To illustrate with a simple example, consider the two matrices

$$a = \begin{bmatrix} 0 & a_1 & 0 & 0 & 0 \\ 0 & 0 & a_2 & 0 & 0 \\ 0 & 0 & 0 & a_3 & 0 \\ 0 & 0 & 0 & 0 & a_4 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad b = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ b_1 & 0 & 0 & 0 & 0 \\ 0 & b_2 & 0 & 0 & 0 \\ 0 & 0 & b_3 & 0 & 0 \\ 0 & 0 & 0 & b_4 & 0 \end{bmatrix}$$

We have

$$ab - ba = \begin{bmatrix} a_1 b_1 & 0 & 0 & 0 & 0 \\ 0 & a_2 b_2 - a_1 b_1 & 0 & 0 & 0 \\ 0 & 0 & a_3 b_3 - a_2 b_2 & 0 & 0 \\ 0 & 0 & 0 & a_4 b_4 - a_3 b_3 & 0 \\ 0 & 0 & 0 & 0 & -a_4 b_4 \end{bmatrix}$$

As expected, the sum of the diagonal elements of this expression is zero, so it couldn’t possibly equal the (scaled) identity matrix. To solve for the values of a_j and b_j that would cause this to be the identity matrix, we would have to impose the conditions

$$a_1 b_1 = 1, \quad a_2 b_2 - a_1 b_1 = 1, \quad a_3 b_3 - a_2 b_2 = 1, \quad a_4 b_4 - a_3 b_3 = 1, \quad -a_4 b_4 = 1$$

The first four equations imply a_1 b_1 = 1, a_2 b_2 = 2, a_3 b_3 = 3, and a_4 b_4 = 4, but this contradicts the fifth equation, which requires a_4 b_4 = −1. Hence the fundamental commutation relation of quantum mechanics cannot be satisfied for matrices of this form, and this can be shown for all matrices of finite size, no matter how large. However, if we consider actually infinite matrices, then it is possible to satisfy the commutation relation. Extending the pattern of our a and b matrices to infinity, we get the infinite sequence of conditions

$$a_1 b_1 = 1, \quad a_2 b_2 - a_1 b_1 = 1, \quad a_3 b_3 - a_2 b_2 = 1, \quad a_4 b_4 - a_3 b_3 = 1, \quad \ldots$$

and hence we equate the commutator of these two matrices to the infinite identity matrix by setting a_j b_j = j for all j from 1 to infinity. We eliminate the inconsistency at the “end” of the matrix by simply denying that the matrix has an end, and we allow the elements of the matrices to increase without limit. This shows that the attributes of infinite matrices can be fundamentally different from those of finite matrices, and we can’t accurately represent the behavior of infinite matrices simply as the limit of finite matrices as the size increases.
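As a quick numerical illustration of this “end effect” (our own sketch, using the particular factorization a_j = b_j = √j, which is just one way of satisfying a_j b_j = j), the commutator of the n×n truncations equals the identity everywhere except in the last diagonal entry, which is forced to −(n−1) so that the trace remains zero:

```python
import numpy as np

def truncated_pair(n):
    """n-by-n truncations of the a and b matrices, with a_j = b_j = sqrt(j).

    (This factorization of a_j b_j = j is an arbitrary choice for illustration.)
    """
    vals = np.sqrt(np.arange(1, n))
    a = np.diag(vals, k=1)    # a_j along the superdiagonal
    b = np.diag(vals, k=-1)   # b_j along the subdiagonal
    return a, b

for n in (3, 5, 10):
    a, b = truncated_pair(n)
    comm = a @ b - b @ a
    # Identity except for the bottom-right entry, which is -(n-1),
    # so the trace is zero -- the inconsistency lives at the "end" of the matrix.
    print(n, np.allclose(comm[:n-1, :n-1], np.eye(n - 1)), comm[-1, -1], np.trace(comm))
```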

 

With this in mind, we return to the gambler’s ruin problem, and seek to determine the probability of ruin for a truly infinite game, without assuming that it is represented by the limit of the probability for games with finite N. To do this we return to the derivation of the formulas noted above. Letting p_j denote the probability of ruin given current holdings of j dollars, we have the relation p_j = αp_{j+1} + (1−α)p_{j−1}. From this it follows that p_j = A + Br^j, where A and B are constants and r = (1−α)/α. Now, we know that p_0 = 1, so we have the condition 1 = A + B. If there is some finite amount of winnings, denoted by N, at which the gambler will stop and declare victory, then we also have p_N = 0, which implies 0 = A + Br^N. Combining these conditions we get A = −r^N/(1−r^N) and B = 1/(1−r^N), so the probability of ruin is p_j = (r^j − r^N)/(1 − r^N). For r < 1, this goes to r^j in the limit as N increases to infinity. But this solution rests on the validity of the boundary condition p_N = 0 in the limit as N goes to infinity. For any finite value of N this condition is true by definition, but if we say there is no finite N, and the gambler is just going to continue playing forever, the validity of this boundary condition is less clear.
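As a sanity check on this derivation (our own numerical sketch, with arbitrary illustrative values of α and j), one can solve the finite-N boundary-value problem directly as a linear system and compare the result with the closed form (r^j − r^N)/(1 − r^N) and with its limit r^j:

```python
import numpy as np

def ruin_probabilities(alpha, N):
    """Solve p_j = alpha*p_{j+1} + (1-alpha)*p_{j-1} with p_0 = 1 and p_N = 0."""
    M = np.zeros((N + 1, N + 1))
    rhs = np.zeros(N + 1)
    M[0, 0], rhs[0] = 1.0, 1.0          # boundary condition p_0 = 1
    M[N, N], rhs[N] = 1.0, 0.0          # boundary condition p_N = 0
    for j in range(1, N):
        M[j, j - 1] = -(1 - alpha)      # recurrence: p_j - alpha*p_{j+1} - (1-alpha)*p_{j-1} = 0
        M[j, j] = 1.0
        M[j, j + 1] = -alpha
    return np.linalg.solve(M, rhs)

alpha, j = 2/3, 5
r = (1 - alpha) / alpha
for N in (10, 20, 50):
    p = ruin_probabilities(alpha, N)
    closed_form = (r**j - r**N) / (1 - r**N)
    print(N, p[j], closed_form, r**j)   # numeric solution, closed form, N->infinity limit
```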

 

It’s tempting to think that the truth of this boundary condition at infinity may be implicit in the other conditions of the problem, but that’s not the case, because we can choose any probability we like for the value p “at infinity” (i.e., the limiting value of p_j as j increases without bound), leading to the result A = p and B = 1 − p. From this we know the solution for the infinite gambler’s game is of the form p_j = p + (1 − p)r^j. It certainly seems most natural to set p = 0, consistent with the limiting value of the finite gambler’s game as N increases towards infinity, but strictly speaking this is a free choice. We could just as well choose p = 1/2, for example, in which case the solution for the infinite gambler would be p_j = (1 + r^j)/2. This obviously satisfies the boundary condition at 0, and is also consistent with the recurrence relation at each step of the game, so we can’t exclude this solution – it is consistent with all the stated conditions of the infinite gambler’s problem. In fact, we could even select p = 1, in which case p_j = 1 for all j. So we conclude that the infinite gambler’s game is not well defined, and has no unique solution. Some independent assumption, such as specifying the value of p, is required in order to determine a solution.
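For completeness, here is the one-line verification (our own, using only the definition r = (1−α)/α, which gives αr² + (1−α) = r) that every member of this family satisfies the recurrence:

$$\alpha\left(p + (1-p)r^{j+1}\right) + (1-\alpha)\left(p + (1-p)r^{j-1}\right) = p + (1-p)r^{j-1}\left(\alpha r^2 + 1 - \alpha\right) = p + (1-p)r^j$$

This holds for every value of p, confirming that the recurrence and the condition p_0 = 1 alone do not single out a unique solution.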

 

This situation is similar to the well-known problem asking for the resistance between two specified nodes of an infinite grid of resistors. The usual “answer” is based on computing the result for a finite grid, and then evaluating this in the limit as the size of the grid is increased “to infinity”. But just as in the gambler’s problem (and the matrices in quantum mechanics), the result is different if we consider an actually infinite grid, rather than the limit of a large but finite grid. The answer for an infinite grid is strictly indeterminate unless some additional conditions (such as the behavior of the currents and potentials “at infinity”) are specified.

 
