3

In computer science it is often assumed that a human mind can be reduced to a Turing machine. This is the assumption that underlies the field of artificial intelligence.

However, it is an assumption, one that has neither been proven nor disproven.

Is there any kind of test within our current capabilities where we can prove/disprove this assumption?

If not, is there any evidence that would suggest one way or another?

Here is a similar question I asked a while back on theoretical computer science:

https://cstheory.stackexchange.com/questions/3170/human-intelligence-and-algorithms

yters
  • 4
    That we can't come up with a more powerful model of computation suggests that there's not a more powerful model, but it's not proof. Of course, it's all a little ill-posed, since finite brain matter means finite state. There are regular languages for which the human mind can't decide membership, even if we had enough time to try. In that sense, we can say with 100% confidence that the human mind isn't Turing equivalent. Not even close. At least, real human minds aren't. To talk about hypothetical minds, we need a scalable model for the mind, which we are still looking for. – Patrick87 May 02 '14 at 00:22
  • 1
    That assumes the mind is computational, which is equivalent to saying it is Turing reducible. So, your answer still begs the question. – yters May 02 '14 at 00:27
  • Good point :) Will have to think about whether there's any value in my comment given this. Still... my gut says that finite memory/state could be an issue, but if we can choose any model... – Patrick87 May 02 '14 at 02:41
  • If we're being really pedantic, the fact that there's a finite amount of atoms in the human mind means it's not Turing-complete. I would argue that a human mind with a pen and unlimited paper is Turing complete, since clearly a human is able to simulate a TM by hand. – Joey Eremondi May 02 '14 at 05:29
  • 1
    Valid point, but that's why I said Turing reducible, not Turing complete. A finite Turing machine can be represented by an infinite Turing machine. The question is whether the finite human mind is reducible to a finite Turing machine. – yters May 02 '14 at 05:33
  • 3
    I dispute your premise that the human mind being reducible to a Turing machine is the underlying assumption of AI. It's perfectly possible to ask questions along the lines of "How much intelligent behaviour can I simulate and approximate with a computer?" without assuming anything about Turing reducibility. Are there even any AI researchers whose goal is to simulate the whole human mind? There might have been in the early days but it was quickly realised that this is far too difficult to do, so people adopted more realistic goals. – David Richerby May 02 '14 at 09:29
  • 2
    @DavidRicherby: The money has moved from AI to neuroscience, but still based on the materialist axiom/faith that human consciousness can be reduced to physics. See https://www.humanbrainproject.eu/ and http://en.wikipedia.org/wiki/Henry_Markram – Wandering Logic May 02 '14 at 11:56
  • 2
    @yters: you should look at the Church-Turing thesis. There's a reason that it's (at best) a thesis or conjecture and not a theorem. No one even knows how to state it axiomatically. – Wandering Logic May 02 '14 at 12:00
  • @WanderingLogic What about the work of Dershowitz and Gurevich on proving the Church-Turing thesis from more elementary axioms? – babou May 10 '15 at 20:25
  • John Searle's Chinese room argument tries to disprove that assumption. Indeed it isn't a proof but is good food for thought. – manlio May 18 '16 at 13:39

5 Answers

1

To answer your question:

Is there any kind of test within our current capabilities where we can prove/disprove this assumption?

The Turing test was conceived as a way to test a particular special case of this assumption: it gives us a way to test whether a particular AI system is successful at behaving indistinguishably from a human. Thus, this would give a plausible way to prove the assumption -- if we could come up with an AI system that is good enough to pass the Turing test. Unfortunately, that's something we haven't been able to do, yet.

D.W.
  • The Turing test would not prove the mind is Turing reducible. It would only show that certain human interactions are Turing reducible. – yters May 02 '14 at 00:50
  • 4
    @yters I'm afraid this is a question you'll never answer with a mathematical proof; it's about something that really exists in the physical world, and we'll therefore never really know enough about it to be sure. That's the nature of empiricism. Of course, once we have a sufficiently useful model of the human mind, we can prove something about the model... But never really about the gray matter itself. – Patrick87 May 02 '14 at 02:45
  • We can know certain things about how algorithms will perform, for example with the No Free Lunch Theorem. If we observe a human performing outside those bounds we'd have good evidence the person is not reducible to a Turing machine. Or, if a human can accurately say a program halts more often than expected if the mind were a Turing machine. Etc. It's definitely not an area investigated often by computer science, but it doesn't seem to be a completely intractable question, and any sort of definitive answer would be incredibly valuable. Which is why I asked. – yters May 02 '14 at 02:53
  • 2
    I do agree with yters that a Turing test would only prove that certain tasks are Turing reducible. Thus, it is only an assumption that applies to operational artificial intelligence. For people working in strong artificial intelligence, a Turing test is not enough (how to prove whether a computer really loves you, or that it can truly create/understand art, or the fear of death?) – Carlos Linares López May 02 '14 at 07:43
1

In computer science it is often assumed that a human mind can be reduced to a Turing machine.

Since when? I've read a lot of computer science papers and never once encountered this assumption.

This is the assumption that underlies the field of artificial intelligence.

Not really. I think artificial intelligence can exist independently of the ability to emulate human intelligence. Deep Blue beat Kasparov, and we're pretty sure it didn't do it by emulating human thought processes.
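
For illustration, the brute-force style of search Deep Blue relied on can be sketched in a few lines. This is a toy minimax over a hand-built game tree; the tree and its payoffs are invented for the example and have nothing to do with Deep Blue's actual evaluation function:

```python
# Toy minimax: leaves are integer payoffs for the maximizing player,
# internal nodes are lists of child subtrees. The tree below is made up.

def minimax(node, maximizing):
    if isinstance(node, int):          # leaf: return its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Maximizer moves first; the opponent then minimizes within each branch.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # → 3 (min of each branch is 3, 2, 0; max is 3)
```

Nothing here emulates human thought; it is exhaustive enumeration, which is exactly the point of the Deep Blue example.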

However, it is an assumption, one that has neither been proven nor disproven. Is there any kind of test within our current capabilities where we can prove/disprove this assumption?

I personally suspect the assumption is true. I think it could only be proven by constructing a computer simulation of a particular human's brain and asking a series of questions, both of the human and of the simulated version, and seeing if the answers are indicative of a similar level of skill and knowledge. I would not expect the answers to be identical, even if the simulation is highly accurate. Constructing a computer simulation of a human brain is not remotely feasible at present.

If not, is there any evidence that would suggest one way or another?

Given accurate physical equations, we expect a TM to be capable, in principle, of simulating any physical system, including a human brain. The hard parts are (1) having the correct quantum-mechanical equations and (2) acquiring the initial state of a particular brain. Neither is feasible today, but there is no known reason they are impossible in principle. Note that human thought would still count as "reducible" to a TM even if the simulation ran far slower than a real brain.
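
In that spirit, any fixed rule table can be executed mechanically, step by step. A minimal Turing-machine simulator makes the point; the example machine below (a binary incrementer) is made up for illustration:

```python
# Minimal Turing-machine simulator. rules maps (state, symbol) to
# (next_state, symbol_to_write, head_move). The example machine increments
# a binary number written most-significant-bit first; it is illustrative only.
from collections import defaultdict

def run_tm(rules, tape, state, blank="_", max_steps=10_000):
    tape = defaultdict(lambda: blank, enumerate(tape))  # sparse infinite tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Binary increment: walk right to the end of the input, then carry leftward.
rules = {
    ("right", "0"): ("right", "0", "R"),
    ("right", "1"): ("right", "1", "R"),
    ("right", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "R"),
    ("carry", "_"): ("halt",  "1", "R"),
}
print(run_tm(rules, "1011", "right"))  # → 1100, i.e. 11 + 1 = 12
```

A person with pen and paper can execute exactly this rule table by hand, which is the sense in which "simulating a TM" requires no understanding of what is being computed.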

Atsby
  • If human intelligence is not reducible, then artificial intelligence is not intelligence. The very name of the field highlights the assumption. – yters May 18 '16 at 18:05
  • 1
    @yters: You make the unwarranted assumption that AI researchers think in terms of Turing machines. They don't. In fact, most AI uses models which are not as powerful as Turing machines. – Andrej Bauer Jun 11 '17 at 09:51
  • @AndrejBauer, if human intelligence can do something that is not Turing reducible, then a procedure that is less powerful than a Turing machine certainly cannot replicate human intelligence. – yters Jun 11 '17 at 21:24
0

The trivial answer is "no" because a Turing machine has infinite memory and no human does.

Since a computer can be made entirely of NAND gates, and human neurology can implement NAND gates, it is theoretically possible that a Turing machine with limited memory could be built from neurons.
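
The functional-completeness claim behind this is easy to check: every Boolean function can be built from NAND alone. A small sketch (assuming, as the answer does, that some neural assembly can play the role of `nand`):

```python
# Functional completeness of NAND: NOT, AND, OR, XOR built from NAND alone.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Print the full truth tables to check the constructions.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_(a, b), or_(a, b), xor(a, b))
```

Since any combinational circuit decomposes into these gates, the only gap between this and a full (finite-memory) computer is wiring enough of them together with state.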

It may be that being conscious is what it is like to be the on-board computer implemented by neurology.

-2

If we identify a task that is non-computable but that the human mind can perform, then this proves the human mind is not a Turing machine.

As an example, Turing machines cannot make the distinction between proof and truth. Yet, we humans can, as with the statement "this statement is unprovable," which is true but unprovable.

yters
  • How would you prove that a human mind can solve a non-computable problem? You can't. Note that "solving a problem" requires the human mind to be able to solve all instances of the problem, and there are infinitely many instances -- and I can't imagine any way to prove that the human mind can do that. Just because the human mind can solve one instance of the problem, or a few instances, or many instances, proves nothing -- it certainly doesn't prove the human mind is not a Turing machine. – D.W. Jun 11 '17 at 04:46
  • Your second paragraph is not an example of the situation mentioned in the first paragraph. Distinguishing between proof and truth is not the same thing as an uncomputable task. Moreover, what makes you think that Turing machines can't make such a distinction just as well as the human mind can? I don't see any proof or evidence of that. – D.W. Jun 11 '17 at 04:47
  • @D.W. Turing machines cannot identify truth. All they can do is identify that one statement syntactically follows from another. But, that is not truth. For example, the premises may be false, in which case all conclusions are false. Additionally, even if the premises are true, only a small subset of true statements can be proven from any set of axioms, and there are an infinite number of true but unprovable statements, showing that truth and proof are not co-extensive. Yet, we humans know that proof and truth are different things, highlighting the distinction between minds and machines. – yters Jun 11 '17 at 21:28
  • Also, per the second incompleteness theorem, no truthful Turing machine can prove its axioms are consistent, and thus it cannot know its conclusions are true. – yters Jun 11 '17 at 21:41