I'm defining 'Hard AI' as a machine with human-equivalent intelligence, or beyond. Contrast this with 'Soft AI', the kind of software that runs your email filter, for example.
I've been chewing on this problem for a little while now and would like some feedback on my logic. This is the only place I could think to put it, so please feel free to move or remove it if that's not appropriate.
My argument hinges on defining 'human-equivalent intelligence' as 'capable of solving any given instance of the Halting Problem'. This is something no machine can ever do: Turing proved that no algorithm can accept an arbitrary instance (i.e., a specific input I for a specific program P) and correctly answer whether P halts on I. Machines may decide particular subsets of the problem, but never the general case. The classic proof is sketched below.
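To make that concrete, here is the standard diagonalization argument written out as a Python sketch. The `halts` function is purely hypothetical; it's assumed to exist only so that `paradox` can contradict it:

```python
# Suppose, for contradiction, that a perfect decider existed.
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical: True iff the program halts on the given input.
    Turing showed no correct implementation of this can exist."""
    raise NotImplementedError("no such decider can exist")

def paradox(program_source: str) -> None:
    """Do the opposite of whatever halts() predicts about us."""
    if halts(program_source, program_source):
        while True:   # predicted to halt? then loop forever
            pass
    # predicted to loop forever? then halt immediately

# Feed paradox() its own source code: whichever answer halts()
# gives about that run is wrong, so halts() cannot exist.
```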
Humans, however, do this all the time! Much of the programming profession revolves around fixing instances of the Halting Problem that pop up in one's own codebase. Bug trackers are full of real-world examples: "Subsystem Z stops responding after receiving A with B options", and so on. A human fixes the code, and the problem goes away. Sometimes a regression test is then written so the machine can verify the problem hasn't returned; this works because halting is recognizable, in the sense that a run which does terminate can be confirmed simply by watching it terminate. A sketch of such a test follows this paragraph.
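To illustrate, here is roughly what such a regression check might look like as a Python sketch. The `./subsystem_z` command, its flags, and the input are placeholders invented for this example:

```python
import subprocess

def check_no_hang(timeout_seconds: float = 30.0) -> bool:
    """Return True if the (hypothetical) fixed subsystem terminates.

    Termination can be verified by observation, but the timeout is
    only a practical cutoff: no finite wait can prove the program
    would never have halted.
    """
    try:
        subprocess.run(
            ["./subsystem_z", "--option", "B"],  # placeholder command
            input=b"A",                           # placeholder input
            timeout=timeout_seconds,
            check=False,
        )
        return True   # it halted: the bug appears to still be fixed
    except subprocess.TimeoutExpired:
        return False  # still running at the cutoff: flag a regression

if __name__ == "__main__":
    print("no hang detected" if check_no_hang() else "possible regression")
```

Note the asymmetry, which is exactly the recognizability I mean: the test can positively confirm a halt by observing it, but no finite timeout can prove the program would have run forever.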
Since any 'Hard AI' would need to be at least as capable as a human in every intellectual pursuit, and we KNOW that a machine which solves the Halting Problem cannot exist, we must conclude that there can never be human-equivalent intelligent machines.