
A Review of What Gödel’s Incompleteness Result Does and Does Not Show

June 1st, 2006

Gödel’s incompleteness results have been used by some as an argument for a “no-computer” thesis, i.e. that “no computer program that produces, or searches for, proofs can achieve what is achievable by mathematicians who engage in proving theorems by mathematical reasoning”. Haim Gaifman, in What Gödel’s Incompleteness Result Does and Does Not Show, undercuts such arguments and plants in their stead a different conclusion from the incompleteness results: if a computer could replicate all of human mathematical reasoning, we would be unable to form a justified belief about whether that computer is consistent (i.e. whether it proves no falsehoods). This conclusion challenges not the possibility of strong A.I., but rather our ability to understand or verify its existence or nature.

Gaifman first disassembles a common argument from Gödel’s incompleteness to the no-computer thesis. Computers work only within a deductive system, and a computer can be said to “know” only those sentences that it can prove. By Gödel’s second incompleteness theorem, however, no consistent theory T extending basic arithmetic can prove its own consistency, and so, the argument goes, T’s consistency must lie outside the realm of what a computer can know. We, on the other hand, can know that T is consistent. We achieve this knowledge by bridging, via a proof of T’s soundness, the syntactic world of provability to the semantic world of truth: if all axioms of T are true and all inference rules preserve truth, then “whatever we prove from the axioms by applying inference rules must be true, a fortiori, non-contradictory”. The linchpin of the argument for the no-computer thesis is the perceived inaccessibility of the semantic notion of truth to a computer, a point that Gaifman quickly refutes. Semantic truth, he observes, is accessible to a computer through the expansion of T to a richer system that includes a truth predicate for the language of T, within which one can prove that every theorem of T is true. A computer can use this richer system to know that T is consistent in the same way that we do, namely via a proof of T’s soundness. As a result, Gaifman finds it conceivable that a computer could mimic human mathematical reasoning, and finds no obstacle to this in Gödel’s incompleteness.
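To make the bridge from soundness to consistency explicit, here is a minimal sketch of the reflection argument in LaTeX. The names T′ (the richer system), Tr (the truth predicate), Prov_T, and Rule_T are my own notation for illustration, not Gaifman’s; the sketch assumes Tr comes with the usual compositional axioms and that induction is extended to formulas containing Tr.

```latex
% T' = T extended with a truth predicate Tr for the sentences of T
\begin{align*}
  &T' \vdash \mathrm{Tr}(\ulcorner A \urcorner)
    && \text{for each axiom } A \text{ of } T \text{ (the axioms are true)} \\
  &T' \vdash \forall x \, \forall y \, \bigl( \mathrm{Tr}(x) \wedge \mathrm{Rule}_T(x,y) \rightarrow \mathrm{Tr}(y) \bigr)
    && \text{(the inference rules preserve truth)} \\
  &T' \vdash \forall x \, \bigl( \mathrm{Prov}_T(x) \rightarrow \mathrm{Tr}(x) \bigr)
    && \text{(by induction on the length of proofs)} \\
  &T' \vdash \neg \, \mathrm{Tr}(\ulcorner 0 = 1 \urcorner)
    && \text{(a falsehood is not true)} \\
  &T' \vdash \neg \, \mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner),
    \text{ i.e. } T' \vdash \mathrm{Con}(T)
    && \text{(soundness yields consistency)}
\end{align*}
```

Since T′ is itself a formal system, a computer working within T′ can carry out exactly this derivation, which is Gaifman’s point: nothing in the soundness argument is off-limits to a machine.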

Even independently of soundness, we can know that our mathematical reasoning, call it again T, is consistent via simple self-reflection. As Gaifman puts it:

Observe, moreover, that the mathematician’s ability derives from a certain capacity for self-reflection. Reflecting on one’s mathematical reasoning and realizing that it can be formalized as T, one takes Con(T) as something that can be legitimately used in mathematical reasoning.

Gaifman goes on to show that no matter what is in T, there must be something outside of T, say f, that allows us to reason that T is consistent. If we knew what f was and could reason that f combined with T was consistent as well, then there must be some further tool, f′, that allows us to reason in this way, since no consistent system of this kind can prove its own consistency, again by the second incompleteness theorem. T, therefore, can be taken to be the largest formal system that is “recognizably valid”, and for any such system our knowledge of its consistency must come from some stronger system, call it T*, that lies beyond T’s reach.
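One standard way to picture this regress, consistent with what Gaifman describes, is as a chain of ever-stronger theories; the indexed names below are my own shorthand.

```latex
% Each stage adds the consistency statement the previous stage cannot prove
\begin{align*}
  T_0     &= T \\
  T_{n+1} &= T_n + \mathrm{Con}(T_n)
    && \text{strictly stronger than } T_n \text{, by the second theorem} \\
  T^{*}   &= \bigcup_{n \geq 0} T_n
    && \text{the limit of the recognizably valid stages}
\end{align*}
```

The regress does not terminate: T* is still a formal system, so by the same theorem Con(T*) lies beyond T*, which is why, on this picture, T* can never be recognized as valid from within.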

Using this line of argument, Gaifman shows that if a computer could somehow generate all the theorems of T* (i.e. the sum of our mathematical reasoning), we would be unable to believe in the consistency of this computer on mathematical grounds. “When it comes to T*, we can say that the theorems of T* coincide with what is humanly provable in principle...but...we could not, as a matter of principle, recognize T* as valid”. In other words, strong A.I. is possible, but not verifiable by us.

The conclusion that we can self-reflect on only part of our “rational apparatus” seems, to me, counterintuitive (but of course unassailable given Gödel’s result). We therefore seem forced either (1) to give up the claim that we can reflect on the whole of our mathematical reasoning, or (2) to give up the knowledge that one part of it, T, is itself consistent. I find the latter option intriguing, and it is absent from Gaifman’s paper. Perhaps T is all that our mathematical reasoning covers, and the knowledge that T is consistent is achieved only by some extra-mathematical reasoning (insert here your preferred epistemological justification). In other words, it is not mathematical reasoning, a T*, that allows us to know that T is consistent, but rather some T + X. Even in this case, however, Gaifman’s conclusion would not change: given any computer that could mimic all of human mathematical reasoning, we would still be unable to show its consistency.
