I think you might like to read a great recent paper by Scott Aaronson called "Why Philosophers Should Care About Computational Complexity". It covers a wide range of topics in philosophy that have been dramatically changed not just by computability but also by complexity theory.
It discusses a few points about Godel. In particular, it mentions a remarkable (but not widely known) letter from Godel to von Neumann in which Godel essentially anticipates the whole P vs. NP question and spells out what its ramifications for human mathematics would be if P actually turned out to equal NP.
Another recent paper that uses Godel's theorems in a very technical way to address a philosophical problem is "The Surprise Examination Paradox and the Second Incompleteness Theorem" by Kritchman and Raz.
In it, they take the classic example of an exam that will be given some day next week, with the condition that you won't be able to know the day of the exam ahead of time (the puzzle is often re-phrased in terms of an execution scheduled for next week whose day you won't know in advance; this is how it is described at Wikipedia).
There is a very naive "resolution" of this paradox by backward induction: the exam can't be on the last day, since by the evening before you'd know it was coming; but then it can't be on the day before that either, and so on until every day is ruled out. Kritchman and Raz give a neat argument showing that everything hinges on what you mean by "know the day of the exam ahead of time." If you take it to mean "be able to prove, the evening before, that the exam will be tomorrow," then Godel's second incompleteness theorem actually lets you escape the backward induction, and the seemingly paradoxical set-up doesn't have to be paradoxical at all.
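Very roughly, and in my own words rather than the paper's precise formulation: read the surprise condition as "on the eve of the exam, the student's proof system $S$ cannot prove that the exam is tomorrow." The first step of the backward induction then has the shape

$$\text{exam not held by the eve of day } N \;\Longrightarrow\; S \vdash \text{"exam is on day } N\text{"} \;\Longrightarrow\; \text{announcement violated} \;\Longrightarrow\; \text{day } N \text{ ruled out.}$$

The middle implication is where the trouble is: to pass from "$S$ proves the exam is on day $N$" to "so it won't be a surprise," the student has to treat his own provability as knowledge, and licensing that inside $S$ amounts to proving $\mathrm{Con}(S)$, which the second incompleteness theorem forbids. So the last day is never actually ruled out and the induction collapses at its first step.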
Also, a very important place where Godel's theorem was invoked is Roger Penrose's book "The Emperor's New Mind." Penrose's main argument is that brains cannot be given a fully reductionist explanation in terms of currently understood physics, because a human mathematician can somehow "see" the consistency of whatever formal system is proposed as a description of the mathematician's own reasoning, and Godel's theorem ought to make that impossible if our brains were just formal systems in the sense of Turing machines and the Church-Turing thesis. Hence Penrose rejects the plausibility of Strong A.I., pending the discovery of something like quantum-gravitational effects in the brain (which, he asserts, we wouldn't be able to engineer or harness for the A.I. part).
I believe Robin Hanson wrote up an excellent rebuttal to Penrose's highly speculative use of Godel's theorem (link). Here's just a brief quote from that rebuttal:
"Penrose gives many reasons why he is uncomfortable with computer-based AI. He is concerned about "the 'paradox' of teleportation" whereby copies could be made of people, and thinks "that Searle's [Chinese-Room] argument has considerable force to it, even if it is not altogether conclusive." He also finds it "very difficult to believe ... some kind of natural selection process being effective for producing [even] approximately valid algorithms" since "the slightest 'mutation' of an algorithm ... would tend to render it totally useless."
These are familiar objections that have been answered quite adequately, in my opinion. But the anti-AI argument that stands out to Penrose as "as blatant a reductio ad absurdum as we can hope to achieve, short of an actual mathematical proof!" turns out to be a variation on John Lucas's much-criticized "Godel" argument, offered in 1961.
A mathematician often makes judgments about what mathematical statements are true. If he or she is not more powerful than a computer, then in principle one could write a (very complex) computer program that exactly duplicated his or her behavior. But any program that infers mathematical statements can infer no more than can be proved within an equivalent formal system of mathematical axioms and rules of inference, and by a famous result of Godel, there is at least one true statement that such an axiom system cannot prove to be true. "Nevertheless we can (in principle) see that P_k(k) is actually true! This would seem to provide him with a contradiction, since he ought to be able to see that also."
This argument won't fly if the set of axioms to which the human mathematician is formally equivalent is too complex for the human to understand. So Penrose claims that can't be because "this flies in the face of what mathematics is all about! ... each step [in a math proof] can be reduced to something simple and obvious ... when we comprehend them [proofs], their truth is clear and agreed by all."
And to reviewers' criticisms that mathematicians are better described as approximate and heuristic algorithms, Penrose responds (in BBS) that this won't explain the fact that "the mathematical community as a whole makes extraordinarily few" mistakes.
These are amazing claims, which Penrose hardly bothers to defend. Reviewers knowledgeable about Godel's work, however, have simply pointed out that an axiom system can infer that if its axioms are self-consistent, then its Godel sentence is true. An axiom system just can't determine its own self-consistency. But then neither can human mathematicians know whether the axioms they explicitly favor (much less the axioms they are formally equivalent to) are self-consistent. Cantor and Frege's proposed axioms of set theory turned out to be inconsistent, and this sort of thing will undoubtedly happen again."
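In symbols, the standard fact being appealed to there is this: for a consistent, sufficiently strong, recursively axiomatized theory $F$ with Godel sentence $G_F$,

$$F \vdash \mathrm{Con}(F) \rightarrow G_F, \qquad \text{yet} \qquad F \nvdash \mathrm{Con}(F).$$

So "seeing" that $G_F$ is true is no harder than "seeing" that $F$ is consistent, and the latter is exactly what neither the formal system itself nor, as far as anyone can tell, the human mathematician can certify.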
As a final aside, I think the Aaronson paper mentioned above does a superb job of synthesizing the complexity-theoretic reasons why the Chinese Room argument totally fails. It's a bit of a nerd interest, but perhaps something others here will appreciate.
You could write an essay about how people erroneously draw philosophical consequences from the theorem. The irony would be sweet and there would be a sense of originality in that you'd be relaying your own attempts and the attempts of others.