Is Arithmetic Consistent? 

Do I contradict myself? Very well then, I contradict myself. I am large, I contain multitudes. 
Walt Whitman 

The most widely accepted formal basis for arithmetic is the system known as the Peano axioms (PA). Giuseppe Peano based his system on a specific natural number, 1, and a successor function such that to each natural number x there corresponds a successor x' (also denoted as x+1). He then formulated the properties of the set of natural numbers in five axioms: 

(1) 1 is a natural number. 
(2) If x is a natural number then x' is a natural number. 
(3) If x is a natural number then x' is not 1. 
(4) If x' = y' then x = y. 
(5) If S is a set of natural numbers including 1, and if for every x in S the successor x' is also in S, then every natural number is in S. 
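As a toy illustration (the encoding and names here are mine, not part of any standard treatment), the five axioms can be mimicked in a few lines of Python, with ONE as the base object, succ building successors, and addition defined by the usual recursion x + 1 = x', x + y' = (x + y)':

```python
# Toy model of Peano's axioms, with base element 1 as in the axioms above.
# Names (ONE, succ, add, to_int) are illustrative, not standard.

ONE = ("1",)                      # axiom (1): 1 is a natural number

def succ(x):
    return ("S", x)               # axiom (2): every x has a successor x'

def add(x, y):
    # Addition by recursion on the second argument:
    #   x + 1 = x',   x + y' = (x + y)'
    if y == ONE:
        return succ(x)
    return succ(add(x, y[1]))

def to_int(x):
    # Interpret a term back into ordinary integers, for inspection only.
    return 1 if x == ONE else 1 + to_int(x[1])

two = succ(ONE)
three = succ(two)
print(to_int(add(two, three)))    # prints 5
```

Axioms (3) and (4) hold automatically in this encoding: no successor term equals ONE, and equal successor terms have equal arguments. Axiom (5), induction, corresponds to the fact that the recursion in to_int and add reaches every term built from ONE and succ.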

These axioms (together with numerous tacit rules of reasoning and implication, etc) constitute a formal basis for the subject of arithmetic, and all formal “proofs” ultimately are derived from them. The first four, at least, appear to be “clear and distinct” notions, and even the fifth would be regarded by most people as fairly unobjectionable. Nevertheless, the question sometimes arises (especially in relation to very complicated and lengthy proofs) whether theorems based on these axioms (and tacit rules of implication) are perfectly indubitable. According to Goedel’s theorem, it is impossible to formally prove the consistency of arithmetic, which is to say, we have no rigorous proof that the basic axioms of arithmetic do not lead to a contradiction at some point. For example, if we assume some proposition (perhaps the negation of a conjecture we wish to prove), and then, via some long and complicated chain of reasoning, we arrive at a contradiction, how do we know that this contradiction is essentially a consequence of the assumed proposition? Could it not be that we have just exposed a contradiction inherent in our formal system of arithmetic itself? In other words, if arithmetic itself is inconsistent, then proof by contradiction loses its persuasiveness. 

On one level, this kind of objection can easily be vitiated by simply prefacing every theorem with the words "If our formalization of arithmetic is consistent, then...". Indeed, for short simple proofs by contradiction we can strengthen the theorem by reducing this antecedent condition to something like "If arithmetic is consistent over this small set of operations, then...". We can be confident that the contradiction really is directly related to our special assumption, because it's highly implausible that our formalization of arithmetic could exhibit a contradiction over a very short chain of implication. However, with long proofs of great subtlety, extending over multiple papers by multiple authors, and involving the interaction of many different branches and facets of mathematics, how would we really distinguish between a subtle contradiction resulting from one specific false assumption vs. a subtle contradiction inherent in the fabric of arithmetic itself? 

Despite Gödel's theorem, the statement that we cannot absolutely prove the axioms of arithmetic is sometimes challenged on the grounds that we can prove the consistency of PA, provided we are willing to accept the consistency of some more encompassing formal system such as the Zermelo-Fraenkel (ZF) axioms, perhaps augmented with the axiom of choice (ZFC). But this is a questionable position. Let's say a proof of the consistency of system X is "incomplete" if it's carried out within a system Y whose consistency has not been completely proven. Theorem: Every proof of the consistency of arithmetic is incomplete. In view of this, it isn't clear how "working in ZFC" resolves the issue. There is no complete and absolute proof of the consistency of arithmetic, so every arithmetical proof is subject to doubt. (By focusing on arithmetic I don't mean to imply that other branches of mathematics are exempt from doubt. André Weil, commenting on Gödel's work, said that "God exists because mathematics is undoubtedly consistent, and the devil exists because we cannot prove the consistency".) 

As Morris Kline said in his book Mathematics: The Loss of Certainty, "Gödel's result on consistency says that we cannot prove consistency in any approach to mathematics by safe logical principles", meaning first-order logic and finitary proof theory, which had been shown in Whitehead and Russell's "Principia Mathematica" to be sufficient as the basis for much of mathematics. Similarly, in John Stillwell's book Mathematics and its History we find "If S is any system that includes PA, then Con(S) [the consistency of S] cannot be proved in S, if S is consistent." On the other hand, some would suggest that the contemplation of inconsistency in our formal arithmetic is tantamount to a renunciation of reason itself, i.e., if our concept of natural numbers is inconsistent then we must be incapable of rational thought, and any further considerations are pointless. This attitude is reminiscent of the apprehensions mathematicians once felt regarding "completed infinities". "We recoil in horror", as Hermite said, believing that the introduction of actual infinities could lead only to nonsense and sophistry. Of course, it turned out that we are quite capable of reasoning in the presence of infinities. Similarly, I believe reason can survive even the presence of contradiction in our formal systems. 

Admittedly this belief is based on a somewhat unorthodox view of formal systems, according to which such systems should be seen not as unordered ("random access") sets of syllogisms, but as structured spaces, with each layer of implicated objects representing a region, and the implications representing connections between different regions. The space may even possess a kind of metric, although "distances" are not necessarily symmetric. For example, the implicative distance from an integer to its prime factors is greater than the implicative distance from those primes to their product. According to this view a formal system does not degenerate into complete nonsense simply because at some point it contains a contradiction. A system may be "locally" consistent even if it is not globally consistent. To give a crude example, suppose we augment our normal axioms and definitions of arithmetic with the statement that a positive integer n is prime if and only if 2^n - 2 is divisible by n. This axiom conflicts with our existing definition of a prime, but the first occurrence of a conflict is at n = 341. Thus, over a limited range of natural numbers the axiom system possesses "local consistency". 
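The conflict at 341 is easy to exhibit by machine; here is a small sketch (the helper names are mine) comparing the proposed axiom against the ordinary trial-division definition of primality. 341 = 11 x 31 is the smallest base-2 Fermat pseudoprime:

```python
def is_prime(n):
    # the usual definition, checked by trial division
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def fermat_prime(n):
    # the proposed "axiom": n is prime iff n divides 2^n - 2
    return pow(2, n, n) == 2 % n

# first n where the new axiom conflicts with the usual definition
first_conflict = next(n for n in range(2, 1000)
                      if is_prime(n) != fermat_prime(n))
print(first_conflict)   # prints 341
```

Below 341 the two definitions agree on every integer, which is exactly the "local consistency" described above.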

Suppose we then substitute a stronger axiom by saying n is a prime iff f(r^n) = 0 (mod n), where r is any root of f(x) = x^5 - x^3 - 2x^2 + 1. With this system we might go quite some time without encountering a contradiction. When we finally do bump into a contradiction (e.g., at 2258745004684033) we could simply substitute an even stronger axiom. In fact, we can easily specify an axiom of this kind for which the smallest actual exception is far beyond anyone's (present) ability to find, and for which we have no theoretical proof that any exception even exists. Thus, there is no direct proof of inconsistency. We might then, with enough imagination, develop a plausible (e.g., as plausible as Banach-Tarski) non-finitistic system within which we can actually prove that our arithmetic is consistent. In fact, it might actually be consistent... but we would have no more justification to claim absolute certainty than with our present arithmetic. 
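One computational reading of this test (my own, for illustration) checks whether f(x^n) reduces to zero modulo both n and f(x). For prime n this always holds, since f has integer coefficients and so f(x^n) = f(x)^n (mod n) by the Frobenius endomorphism; composites can fail, and the sketch below verifies a few cases:

```python
# Checks whether f(x^n) = 0 mod (n, f(x)), where f(x) = x^5 - x^3 - 2x^2 + 1.
# All polynomials are length-5 coefficient lists, lowest degree first.

F = [1, 0, -2, -1, 0, 1]   # coefficients of f, lowest degree first

def polymulmod(a, b, n):
    # multiply two length-5 polynomials mod n, then reduce mod f
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % n
    # reduce using x^5 = x^3 + 2x^2 - 1 (mod f)
    for d in range(len(prod) - 1, 4, -1):
        c = prod[d]
        prod[d] = 0
        prod[d - 2] = (prod[d - 2] + c) % n
        prod[d - 3] = (prod[d - 3] + 2 * c) % n
        prod[d - 5] = (prod[d - 5] - c) % n
    return [c % n for c in prod[:5]]

def x_pow(e, n):
    # compute x^e mod (f, n) by square-and-multiply
    result, base = [1, 0, 0, 0, 0], [0, 1, 0, 0, 0]
    while e:
        if e & 1:
            result = polymulmod(result, base, n)
        base = polymulmod(base, base, n)
        e >>= 1
    return result

def quintic_test(n):
    g = x_pow(n, n)             # g = x^n mod (f, n)
    acc = [F[5] % n, 0, 0, 0, 0]
    for k in range(4, -1, -1):  # Horner's rule: evaluate f at g
        acc = polymulmod(acc, g, n)
        acc[0] = (acc[0] + F[k]) % n
    return all(c == 0 for c in acc)

print(quintic_test(13), quintic_test(4))   # prints True False
```

Every prime passes by the Frobenius argument, while the smallest composite that passes (under the intended reading of the test) is far out of reach of a casual search.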

As to the basic premise that we have no absolute proof of the consistency of arithmetic, here are a few other people's thoughts on the subject: 

A metamathematical proof of the consistency of arithmetic is not excluded by... Gödel's analysis. In point of fact, metamathematical proofs of the consistency of arithmetic have been constructed, notably by Gerhard Gentzen, a member of the Hilbert school, in 1936. But such proofs are in a sense pointless if, as can be demonstrated, they employ rules of inference whose own consistency is as much open to doubt as is the formal consistency of arithmetic itself. Thus, Gentzen used the so-called "principle of transfinite mathematical induction" in his proof. But the principle in effect stipulates that a formula is derivable from an infinite class of premises. Its use therefore requires the employment of non-finitistic metamathematical notions, and so raises once more the question which Hilbert's original program was intended to resolve. 
Ernest Nagel and James Newman 

Gödel showed that...if anyone finds a proof that arithmetic is consistent, then it isn't! 
Ian Stewart 

...Hence one cannot, using the usual methods, be certain that the axioms of arithmetic will not lead to contradictions. 
Carl Boyer 

An absolute consistency proof is one that does not assume the consistency of some other system... what Gödel did was show that there must be "undecidable" statements within any [formal system]... and that consistency is one of those undecidable propositions. In other words, the consistency of an all-embracing formal system can neither be proved nor disproved within the formal system. 
Edna Kramer 

Gentzen's discovery is that the Gödel obstacle to proving the consistency of number theory can be overcome by using transfinite induction up to a sufficiently great ordinal... The original proposals of the formalists to make classical mathematics secure by a consistency proof did not contemplate that such a method as transfinite induction up to ε_0 would have to be used. To what extent the Gentzen proof can be accepted as securing classical number theory in the sense of that problem formulation is in the present state of affairs a matter for individual judgment... 
Kleene, "Introduction to Metamathematics" 

Some mathematicians assert that there is a consistency proof of PA, and it is quite elementary, using standard mathematical techniques (i.e., ZF). It consists of exhibiting a model. However, when we speak of "exhibiting a model" we are referring to a relative consistency proof, not an absolute consistency proof. Examples of relative consistency theorems are 

If Euclidean geometry is consistent then non-Euclidean geometry is consistent. 
If ZF is consistent then ZFC is consistent. 

Relative consistency proofs assert nothing about the absolute consistency of any system, they merely relate the consistency of one system to that of another. Here's what the Encyclopedic Dictionary of Mathematics (2nd Ed) says on the subject: 

Hilbert proved the consistency of Euclidean geometry by assuming the consistency of the theory of real numbers. This is an example of a relative consistency proof, which reduces the consistency proof of one system to that of another. Such a proof can be meaningful only when the latter system can somehow be regarded as based on sounder ground than the former. To carry out the consistency proof of logic proper and set theory, one must reduce it to that of another system with sounder ground. For this purpose, Hilbert initiated metamathematics and the finitary standpoint... Let S be any consistent formal system containing the theory of natural numbers. Then it is impossible to prove the consistency of S by utilizing only arguments that can be formalized in S.... In [these] consistency proofs of pure number theory..., transfinite induction up to the first ε-number ε_0 is used, but all the other reasoning used in these proofs can be presented in pure number theory. This shows that the legitimacy of transfinite induction up to ε_0 cannot be proved in this latter theory. 

All known consistency proofs of arithmetic rely on something like transfinite induction (or possibly primitive recursive functionals of finite type), the consistency of which is no more self-evident than that of arithmetic itself. 

Oddly enough, some people (even some mathematicians) are under the impression that Gödel's results apply only to very limited formal systems. One mathematician wrote to me that "there is no proof in first-order logic that arithmetic is consistent, but that has more to do with the limitations of first-order logic than anything else, and there are other more general types of logic in which proofs of the consistency of arithmetic are available." Of course, contrary to this individual's claim, Gödel's results actually apply to any formal system that is sufficiently complex to encompass and be modeled by arithmetic. Granted, if we postulate a system that cannot be modeled (encoded) by arithmetic then other things are possible, but the consistency of such a system would be at least as doubtful as the consistency of the system we were trying to prove. For example, Gentzen's proof of the consistency of PA uses transfinite induction, but surely it is pointless to try to resolve doubts about arithmetic by working with transfinite induction, since the latter is even more dubious. 

The inability of even many mathematicians to absorb the actual content and significance of Gödel's theorems is interesting in itself, as are the various misconstruals of those theorems, which tend to reflect what the person thinks must be true. For example, we can see how congenial is the idea that Gödel's results apply only to a limited class of formal systems. The fact that mathematicians at universities actually believe this is rather remarkable. It seems to be a case of sophomoric backlash, in reaction to what can often seem like sensationalistic popular accounts of Gödel's theorems. Apparently it becomes a point of pride among math graduate students to "see through the hype", and condescendingly advise the less well-educated as to the vacuity of Gödel's theorems. (This would be fine if Gödel's theorems actually were vacuous, but since they aren't, it isn't.) Another apparent source of misunderstanding is the sheer inability to believe that any rational person could doubt the consistency of, say, arithmetic. 

Imagine the reaction of the typical mathematician to the even more radical suggestion that every (sufficiently complex) formal system contains a contradiction at some point. When I mentioned this to an email correspondent, he expressed utter incredulity, saying "You can't possibly believe that simple arithmetic could contain an inconsistency! How would you balance your check book?" This is an interesting question. I actually balance my checkbook using a formal system called Quicken. Do I have a formal proof of the absolute consistency and correctness of Quicken? No. Is it conceivable that Quicken might contain an imperfection that could lead, in some circumstances, to an inconsistency? Certainly. But for many mathematicians this situation must be a real paradox, so it’s worth examining in some detail. 

Suppose I balance my checkbook with a program called Kash (so as not to sully the good name of Quicken), and suppose this program implements arithmetic perfectly, with one exception: the result of subtracting 5.555555 from 7.777777 is 2.222223. Now if B is my true balance then I should have the formal theorems 

B = B + q(7.777777 - 5.555555 - 2.222222) 

for every value of q. Thus, in the formal system of Kash I can prove that 1 = 2 = 3 = 4.23 = 89.23 = anything. Clearly the Kash system has a consistency problem, because I can compute my balance to be anything I want just by manipulating the error produced by that one particular operation. ("So oft it chances in particular men...") But here is the fact that must seem paradoxical to many mathematicians: thousands of people have used Kash for years, and not a single error has appeared in the results. How can this be, given that Kash is formally inconsistent? 

The answer is that although Kash is globally inconsistent, it possesses a high degree of local consistency. Traveling from any given premise (such as 5+7) directly to the evaluation (e.g., 12), we are very unlikely to encounter the inconsistency. Of course, in a perfectly consistent system we could take any of the infinitely many paths from 5+7 to the evaluation and we would always arrive at the same result, which is clearly not true within Kash (in which we could, by roundabout formal manipulations, evaluate 5+7 to be 3511.1093, or any other value we wanted). Nevertheless, for almost all paths leading from a given premise to its evaluation, the result is the same. 
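The Kash scenario can be sketched concretely (the program and its single flaw are, of course, hypothetical):

```python
def kash_sub(a, b):
    # hypothetical checkbook arithmetic: correct everywhere except
    # for one particular pair of operands
    if (a, b) == (7.777777, 5.555555):
        return 2.222223          # the single flawed result
    return round(a - b, 6)

# "local consistency": direct evaluations give the usual answers
assert kash_sub(10.0, 3.5) == 6.5

# but a roundabout path through the flawed region exposes a contradiction:
# in true arithmetic (7.777777 - 5.555555) - 2.222222 = 0, yet in Kash
err = kash_sub(kash_sub(7.777777, 5.555555), 2.222222)
print(err)   # prints 1e-06, so B = B + 0.000001*q is "provable" for any q
```

The direct path from a premise to its evaluation almost never routes through the flawed operation, which is why years of use can fail to expose the inconsistency.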

Now consider our formal system of arithmetic. Many people seem agog at the suggestion that our formalization of arithmetic might possibly be inconsistent at some point. Clearly our arithmetic must possess a very high degree of local consistency, because otherwise we would have observed anomalies long before now. However, are we really justified in asserting that every one of the infinitely many paths from every premise to its evaluation gives the same result? As with the system Kash, this question can't be answered simply by observing that our checkbooks usually seem to balance. 

Moreover, the question cannot even be answered within any formal system that can be modeled by the natural numbers. It is evidently necessary to assume the validity of something like transfinite induction to prove the consistency of arithmetic. But how sure are we that a formal system that includes transfinite induction is totally consistent? (If, under the assumption of transfinite induction, we had found that arithmetic was not consistent, would we have abandoned arithmetic... or transfinite induction?) The only way we know how to prove this is by assuming still less selfevidently consistent procedures, and so on. 

The points I'm trying to make are:

(1) We have no meaningful proof of the consistency of arithmetic. 
(2) If arithmetic is inconsistent, it does not follow that our checkbooks must all be out of balance. It is entirely possible that we could adjust our formalization of arithmetic to patch up the inconsistency, and almost all elementary results would remain unchanged. 
(3) However, highly elaborate and lengthy chains of deduction in the far reaches of advanced number theory might need to be reevaluated in the light of our patched-up formalization. 

Of course, the consistency or inconsistency of arithmetic can only be appraised in the context of a completely formalized system, but the very act of formalizing is problematic, because it invariably presupposes prior knowledge on the part of the reader. Thus it can never be completely clear that our formalization corresponds perfectly with what we call "arithmetic". Our efforts to extend our formalizations past any undefined prior knowledge tend to lead to the possibility of contradictions and inconsistencies. But when such an inconsistency comes to light, we don't say the ideal of "arithmetic" is faulty. Rather we say "Oops, our formalization didn't quite correspond to true ideal arithmetic. Now, here's the final ultimate and absolutely true formalization... (I know we said this last time, but this time our formalization really is perfect.)" As a matter of fact, this very thing has occurred historically. In this sense we are tacitly positing the existence of a Platonic ideal "ARITHMETIC" that is eternal, perfect, and true, while acknowledging that any given formalization of this Platonic ideal may be flawed. The problem is that our formal proofs are based on a specific formalization of arithmetic, not on the ideal Platonic ARITHMETIC, so we are not justified in transferring our sublime confidence in the Platonic ideal onto our formal proofs. 

Claims that arithmetic is indubitable, while acknowledging that our formalization may not be perfect, are essentially equivalent to saying that we are always right, because even if we are found to have said something wrong, that's not what we meant. Any given theorem can be regarded as a theorem about the ideal of ARITHMETIC, prior to any particular formalization, but then the first step in attempting to prove it is to select a formal system within which to work. Of course, it's trivial to devise a formal system labeled "arithmetic" and then prove X within that system. For example, take PA+X. But the question is whether that system really represents ARITHMETIC, one requirement of which is consistency. 

We don't know what, if any, parts of our present mathematics would be rendered uninteresting by the discovery of an inconsistency in our present formalization of arithmetic, because it would depend on the nature of the inconsistency and the steps taken to resolve it. Once the patched-up formalization was in place, we would reevaluate all of our mathematics to see which, if any, proofs no longer work in the new improved "arithmetic". One would expect that almost all present theorems would survive. The theorems most likely to be in jeopardy would be the most elaborate, far-reaching, and "deep" results, because their proofs tax the resources of our present system the most. 

Some mathematicians respond to the assertion that we have no meaningful proof of the consistency of arithmetic by claiming that "the usual ZFC proof is quite meaningful." But this seems to hinge on different understandings of the word "meaningful". Consider the two well-known theorems 

(1) con(ZF) implies con(ZFC) 
(2) con(ZF) implies con(PA) 

From a foundational standpoint, these two theorems act in opposite directions. In case (1), if the result had been {con(ZF) implies NOT con(ZFC)} then it would have undermined our confidence in the "C" part of ZFC. However, if the result of case (2) had been {con(ZF) implies NOT con(PA)}, it would presumably have undermined our confidence in ZF, not in PA (because the principles of PA are considered to be more self-evidently consistent than those of ZF). The only kind of proof that would enhance our confidence in PA would be of the form 

con(X) implies con(PA) 

where X is a system whose consistency is MORE self-evident than that of PA. (This is the key point.) For example, Hilbert hoped that with X = first-order logic it would be possible to prove this theorem, thereby enhancing our confidence in the consistency of PA. That would have been a meaningful proof of the consistency of PA. However, it's now known that such a proof is impossible (unless you believe in the existence of a formal system that is more self-evidently consistent than PA but that cannot be modeled within the system of natural numbers). 

Others argue that it may be possible to regard the theory of primitive recursive functionals as more evidently consistent than PA. It's well known that both transfinite induction and the theory of primitive recursive functionals cannot be modeled within the system of natural numbers, but we do not need to claim that it would be impossible to regard such principles as more evidently consistent than PA. We simply observe that no one does – and for good reason. Each represents a non-finitistic extension of formal principles, which is precisely the source of uncertainty in the consistency of PA. Again, there is a little thought experiment that sometimes helps people sort out their own hierarchy of faith: if, assuming the consistency of the theory of primitive recursive functionals, one could prove that PA is NOT consistent, would we be more inclined to abandon PA or the theory of primitive recursive functionals? 

Some mathematicians assert that doubts about whether PA is consistent, and whether it can be proven to be consistent, are trivial and pointless, partly because this places all of mathematics in doubt. However, as to the triviality, much of the most interesting and profound mathematics of this century has been concerned with just such doubts. As to the number of proofs that are cast into doubt by the possibility of inconsistency in PA, the "perfect consistency or total gibberish" approach to formal systems evidently favored by many mathematicians is not really justified. 

It just so happened that Russell and Whitehead's FOL was a convenient finitistic vehicle to use as an example, although subsequent developments showed that this maps to computability, from which the idea of a universal Turing machine yields a large segment (if not all) of what can be called cognition. Of course, people sometimes raise the possibility of a finitistic system that cannot be modeled within the theory of natural numbers but, as Ernest Nagel remarked, "no one today appears to have a clear idea of what a finitistic proof would be like that is NOT capable of formulation within arithmetic". 

PA can be modeled within ZF, and it follows that con(ZF) implies con(PA). This was presented simply as an illustration of how the formal meaning of a theorem of this form depends on our a priori perceptions of the relative soundness of the two systems. 

Some mathematicians have alluded to the "usual" proof of con(PA) but have not specified the formal system within which this "usual" proof resides. Since there are infinitely many possibilities, it's not possible to guess which specific one they have in mind. In general terms, if there exists a proof of con(PA) within a formal system X, then we have the metatheorem { con(X) implies con(PA) }, so we can replace X with whatever formal system we consider to be the "usual" one for proofs of PA. The meaningfulness of such a theorem depends on our perception of the relative soundness of the systems X and PA. I assert that no system X within which con(PA) can be proved is more evidently consistent than PA itself. 

Here's a related quote from "Mathematical Logic", 2nd Ed, by H. D. Ebbinghaus, J. Flum, and W. Thomas, Springer-Verlag, 1994: 

The above argument [Gödel's second theorem] can be transferred to other systems where there is a substitute for the natural numbers and where R-decidable relations and R-computable functions are representable. In particular, it applies to systems of axioms for set theory such as ZFC... Since contemporary mathematics can be based on the ZFC axioms, and since... the consistency of ZFC cannot be proved using only means available within ZFC, we can formulate [this theorem] as follows: If mathematics is consistent, we cannot prove its consistency by mathematical means. 

I wanted to include this quote because every time I say there is no meaningful formal proof of the consistency of arithmetic (PA), someone always says "Oh yes there is, just work in ZFC, or ZF, or PA + transfinite induction, or PA + primitive recursive functionals, or PA + con(PA), or just use the "usual" proof, or (as one mathematician advised me) just stop and think about it!" But none of these proposed "proofs" really adds anything to the indubitability of con(PA). Of course, it's perfectly acceptable to say "I'm simply not interested in the foundational issues of mathematics or in the meaning of consistency for formal systems". However, disinterest should not be presented as a substitute for proof. 

In response to the rhetorical question “Are we really justified in asserting that every one of the infinitely many paths from every premise to its evaluation gives the same result?”, some mathematicians will say that if we allow associativity of modus ponens, then there is essentially only one proof of any classical formula. However, this misses a crucial point. If we define the "essential path" between any two points in space as a function only of the beginning and ending points, then there is essentially only one path from New York to Los Angeles. This definition of a "path", like the corresponding definition of a "proof" is highly reductionist. We are certainly free to adopt that definition if we wish, but in so doing we ignore whole dimensions of nontrivial structure, as well as making it impossible to reason meaningfully about "imperfectly consistent" systems which, for all we know, may include every formal system that exists. The unmeasured application of the modus ponens (rule of detachment) is precisely what misleads people into thinking that a formal system must be either absolutely consistent or total gibberish. Then, when they consider systems such as naive set theory, PA, or ZFC, which are clearly not total gibberish, they conclude that they must be absolutely consistent. 

Some mathematicians claim that PA is a collection of simple statements, all of which are manifestly true about the positive integers. However, although the explicit axioms look simple, they are meaningful only in the context of a vast mathematical apparatus that includes conventions regarding names of variables, rules about substitutions, and more generally rules of inference and implication. For example, if we propose a sequence of variable names x, x', x'' and so on, we need to know that all these names are different. Of course, most of us would accept this as intuitively obvious, but if the formal system is to be absolutely watertight, we must prove that they are different. Thus we end up needing to apply the Peano axioms to the language used to talk about them. 

As Jan Nienhuys observed, one of the basic subtleties related to Peano's axioms is that they implicitly assert that any non-empty set of positive integers has a smallest element. In view of Ramsey numbers, it's clear that even in simple cases this is more a metaphysical assertion than a matter of being "manifestly true". How can we have confidence that any non-empty set of positive integers, no matter how unwieldy its definition, has a smallest element? (The unwieldiness of definitions is made possible by having an unlimited supply of positive integers and variable names at our disposal.) The Peano axioms plus the formal system in which they are embedded must be free of contradictions, i.e., for no statement X are both X and not-X provable. (Also, note that we allow variables such as X to denote statements, so the formal system should have watertight descriptions - no hand waving or illustration by example allowed - of how to go about making variables represent statements.) Only such a system as a whole can conceivably contain a contradiction. Without the formal framework a contradiction doesn't make much sense. 

Here’s a sampling of comments on this topic received from various mathematicians: 

Personally I think that mathematicians are sometimes too hung up on the details of formalizations and lose track of the actual math they are talking about. As the old saying goes, mathematicians are Platonists on weekdays and formalists on Sunday. I personally think that this tendency is a good and healthy one! 

But Goedel's Theorem can be derived even with a substantially weakened Axiom of Induction, and I believe that such a weakened axiom would then also lead to a contradiction, so it seems that we would have to throw away all induction. 

If the consistency of FOL is somehow a hypothetical matter, why should I assume that you or I make any sense whatever in our babbling? 

If arithmetic was inconsistent, wouldn't all the bridges fall down? 

If so, it's an awfully trivial point, and hardly worth making. She might as well say "Everything we think might have an error in it because some demon somewhere is messing with our brains." Quite true. So what? 

A proof from ZF brings with it the supreme confidence that a century of working with ZF and beyond has given us. 

One mathematician argued (apparently with a straight face) that "The consistency of ZFC is provable in ZFC+Con(ZFC), the consistency of ZFC+Con(ZFC) is provable in ZFC+Con(ZFC)+Con(ZFC+Con(ZFC)), etc., so the infinite hierarchy of such systems provides a complete proof of the consistency of ZFC. Or, just to press the point, there is actually a system which proves the consistency of any system... This system even proves its own consistency." This proposed hierarchy of systems possesses an interesting structure. For example, each system has a successor which is assumed to be a system, and distinct from any of the previous systems (i.e., no loops). And of course it's necessary to justify speaking of the completed infinite hierarchy, implying induction. The whole concept might almost be formalized as a set of axioms along the lines of: 

(i) ZFC is a System. 
(ii) For any System X, the successor X+Con(X) is also a System. 
(iii) ZFC is not the successor of a System. 
(iv) If the successors of Systems X and Y are the same, then X and Y are the same. 
(v) If ZFC is in a set M, and if for every System X in M the successor X+Con(X) is in M, then M contains every System. 
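The successor structure of this mock hierarchy can be made literal in a few lines of Python. This is purely an illustration: each "System" is represented as a string label, and the names ZFC and Con stand in for the actual formal theories.

```python
# A minimal sketch: model each "System" as a string, with the
# successor operation X -> X+Con(X). "ZFC" and "Con" are just labels here,
# not actual formal theories.

def successor(system: str) -> str:
    """Return the strengthened system X+Con(X)."""
    return f"{system}+Con({system})"

# Generate the first few levels of the hierarchy, starting from ZFC.
hierarchy = ["ZFC"]
for _ in range(3):
    hierarchy.append(successor(hierarchy[-1]))

for s in hierarchy:
    print(s)

# Axiom (iii) in miniature: ZFC is never the successor of any System,
# and (iv): the systems generated so far are pairwise distinct (no loops).
assert all(successor(s) != "ZFC" for s in hierarchy)
assert len(set(hierarchy)) == len(hierarchy)
```

The labels grow without bound, which is exactly the point: speaking of the completed hierarchy, rather than any finite prefix of it, is where the appeal to induction sneaks back in.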

Now we're getting somewhere! If we can just prove this is consistent... By the way, here's an interesting quote from Ian Stewart's book "The Problems of Mathematics": 

Mathematical logicians study the relative consistency of different axiomatic theories in terms of consistency strength. One theory has greater consistency strength than another if its consistency implies the consistency of the other (and, in particular, if it can model the other). The central problem in mathematical logic is to determine the consistency strength in this ordering of any given piece of mathematics. One of the weakest systems is ordinary arithmetic, as formalized axiomatically by Giuseppe Peano... Analysis finds its place in a stronger theory, called second-order arithmetic. Still stronger theories arise when we axiomatize set theory itself. The standard version, Zermelo-Fraenkel Set Theory, is still quite weak, although the gap between it and analysis is large, in the sense that many mathematical results require MORE than ordinary analysis but LESS than the whole of ZF for their proofs. 

It should be noted that Peano's Postulates (P1-P4 in Andrews, "An Introduction to Mathematical Logic and Type Theory") assert the existence of an infinitely large set. Since it is not possible to simply ‘exhibit’ an infinite model, any such assertion must simply assume that it is possible to speak about such things without the possibility of contradictions. And indeed the Axiom of Infinity is one of the axioms of ZF. Hence the assertion that PA can be “proved” within ZF can be taken as nothing more than a joke. 

Much of this depends on the status of induction. Normally we are careful to distinguish between common induction and mathematical induction. The former consists of drawing general conclusions empirically from a finite set of specific examples. The latter is understood to be an exact mathematical technique that, combined with the other axioms of arithmetic, can be used to rigorously prove things about infinite sets of integers. For example, by examining the square number 25 we might observe that it equals the sum of the first five odd integers, i.e., 5^{2} = 1+3+5+7+9. We might then check a few more squares and by common induction draw the general conclusion that the square of N is always equal to the sum of the first N odd numbers. In contrast, mathematical induction would proceed by first noting that the proposition is trivially true for the case N=1. Moreover, if it's true for any given integer n it is also true for n+1 because (n+1)^{2} - n^{2} equals 2n+1, which is the (n+1)th odd number. Thus, by mathematical induction it follows that the proposition is true for all N. 
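The two kinds of induction applied to this example can be sketched in a few lines of Python (the function name here is ours, purely illustrative):

```python
# A minimal sketch of the claim N^2 = 1 + 3 + ... + (2N - 1).

def sum_of_first_odds(n: int) -> int:
    """Sum of the first n odd numbers: 1 + 3 + ... + (2n - 1)."""
    return sum(2 * k - 1 for k in range(1, n + 1))

# "Common induction": check a finite batch of specific instances and
# leap to the general conclusion.
for n in range(1, 101):
    assert sum_of_first_odds(n) == n ** 2

# Mathematical induction rests instead on the implicative step: passing
# from n to n+1 adds exactly (n+1)^2 - n^2 = 2n + 1, the (n+1)th odd
# number. A computer can only spot-check that algebraic identity, which
# is again common induction; the general identity needs the algebra.
for n in range(1, 101):
    assert (n + 1) ** 2 - n ** 2 == 2 * n + 1

print(sum_of_first_odds(5))  # the example from the text: 1+3+5+7+9 = 25
```

Note that both loops above check only finitely many cases, so the program itself never gets beyond common induction; the implicative step is supplied by the algebra, not the machine.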

Understandably, many mathematicians take it as an insult to have mathematical induction confused with common induction. The crucial difference is that MI requires a formal implicative relation connecting all possible instances of the proposition, whereas CI leaps to a general conclusion simply from the fact that the proposition is true for a finite number of specific instances. Of course, it's easy to construct examples where CI leads to a wrong conclusion but, significantly, CI often leads to correct conclusions. We could devote an entire discussion to "the unreasonable effectiveness of common thought processes", but suffice it to say that for a system of limited complexity the possibilities can often be "spanned" by a finite number of instances. 

In any case, questions about the consistency of arithmetic may cause us to view the distinction between MI and CI in a different light. How do we know that (n+1)^{2} - n^{2} always equals 2n+1? Of course this is a trivial example; in advanced proofs the formal implicative connection can be much less self-evident. Note that when challenged as to the absolute consistency of formal arithmetic, one response was to speak of "the supreme confidence that a century of working with ZF has given us". This, of course, is nothing but common induction. So too are claims that arithmetic must be absolutely consistent because otherwise bridges couldn't stand up and checkbooks wouldn't balance. (These last two are not only common induction, they are bad common induction.) 

Based on these reactions, we may wonder whether, ultimately, the two kinds of induction really are as distinct as is generally supposed. It would seem more accurate to say that mathematical induction reduces a problem to a piece of common induction in which we have the very highest confidence, because it represents the pure abstracted essence of predictability, order, and reason that we've been able to infer from our existential experience. Nevertheless, this inference is ultimately nothing more (or less) than common induction. 

It's clear that many people are highly disdainful of attempts to examine the fundamental basis of knowledge. In particular, some mathematicians evidently take it as an affront to the dignity and value of their profession (not to mention their lives) to have such questions raised. (One professional mathematician objected to my quoting from Morris Kline's "Mathematics and the Loss of Certainty", advising me that it is “a very very very very very very pathetic and ignorant book”.) In general, I think people have varying thresholds of tolerance for self-doubt. For many people the exploration of philosophical questions reaches its zenith at the point of adolescent sophistry, as in "did you ever think that maybe none of this is real, and some demon is just messing with our minds?" Never progressing further, for the rest of their lives whenever they encounter an issue of fundamental doubt they project their own adolescent interpretation onto the question and dismiss it accordingly. 

In any case, this discussion has provided some nice examples of reactions to such questions, including outrage, condescension, bafflement, fascination, and complete disinterest. The most controversial point seems to have been my contention that every formal system is inconsistent. I was therefore interested to read in Harry Kessler’s “Diaries of a Cosmopolitan” about a discussion that Kessler had had at a dinner party in Berlin in 1924. 

I talked for quite awhile to Albert Einstein at a banker's jubilee banquet where we both felt rather out of place. In reply to my question what problems he was working on now, he said that he was engaged in thinking. Giving thought to almost any scientific proposition almost invariably brings progress with it. For without exception, every scientific proposition was wrong. That was due to human inadequacy of thought and inability to comprehend nature, so that every abstract formulation about it was always inconsistent somewhere. Therefore, every time he checked a scientific proposition, his previous acceptance of it broke down and led to a new, more precise formulation. This was again inconsistent in some respects and consequently resulted in fresh formulations, and so on indefinitely. 
