8

When I first heard of these results I was fascinated, because they seemed to set a real limit on mathematics and science in general. But how practically relevant are they?

For the Halting Problem: Are there more than a few artificially constructed cases where one cannot decide whether an algorithm will terminate?

For the Incompleteness Theorem: Are there more than a few artificially constructed cases where one can neither prove nor disprove a statement?

I'm asking because, in most areas of science, these fundamental limitations don't seem to matter. Do they show up at all? I'd like to know where they really set a limit and where they are actually relevant.

Raphael
Nocta
  • 4
    The halting problem result does not state that you cannot decide whether a given TM does not halt. It states that there is no general algorithm that can decide that for all TM. – babou May 28 '14 at 13:37
  • 2
    Well yes but what are the practical implications of this? Is it really relevant or does it only matter in artificially constructed cases? – Nocta May 28 '14 at 13:52
  • Maybe something to look into is total functional programming, which allows only terminating programs. Then you can see how often practitioners run into having to fall back on weak functional programming (Turing complete) in order to solve problems. As far as general purpose programming languages are concerned, I have only heard of Idris - the rest seem to be theorem provers. – Guildenstern May 28 '14 at 14:36
  • So you're asking if these things are really actually important or even useful (as opposed to just "interesting")? Would you include quantum uncertainty principle? – d'alar'cop May 28 '14 at 14:38
  • 1
    Many interesting and hard mathematical problems could be solved easily if the halting problem could be solved. For example, Fermat's Last Theorem. Takes me 5 minutes to write a program that will halt if and only if there is a solution to a^n + b^n = c^n with a, b, c > 0 and n >= 3. If you prove or disprove that it halts, that's FLT proven. "There are infinitely many twin primes" is only slightly more difficult to prove if the halting problem is solved. – gnasher729 May 28 '14 at 17:56
  • 2
    Please restrict yourself to one question per post; the two theorems you reference have little to do with each other. Your phrasing also suggests that you did not really understand what they say (see babou's comment); you have to do some more reading! These are deeply mathematical-formal statements that can not be properly grasped in a pop-science fashion. – Raphael May 28 '14 at 18:07
  • @gnasher729 "solving the halting problem" doesn't mean there is an efficient algorithm to do it. – Paŭlo Ebermann May 28 '14 at 23:15
  • @MilesRout: You'll have to give me more in order to convince me or enable me to reason back. – Raphael Jun 03 '14 at 06:19
  • @MilesRout More blanket statements won't help you make your point. I think I have more than just a rudimentary knowledge and I don't think they are that closely related (for one thing, they are statements about completely different objects). Not knowing anything about your competence, I don't see why I should just take your words for granted. (For the record, I think I may be feeding a troll here, but well.) – Raphael Jun 03 '14 at 08:52
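
To make gnasher729's comment above concrete, here is a minimal sketch (my own illustration, not gnasher729's code) of a program that halts if and only if Fermat's Last Theorem has a counterexample; an oracle for the halting problem applied to it would settle the theorem without Wiles's proof.

```python
# Enumerate every tuple (a, b, c, n) with a, b, c > 0 and n >= 3, grouped by
# the sum s = a + b + c + n so that each tuple is reached after finitely many
# steps. The function returns (halts) exactly when a counterexample to
# Fermat's Last Theorem is found; since the theorem is true, it runs forever.
from itertools import count

def fermat_search():
    for s in count(6):                       # smallest possible sum is 1 + 1 + 1 + 3
        for n in range(3, s - 2):            # leave room for a, b, c >= 1
            for a in range(1, s - n - 1):
                for b in range(1, s - n - a):
                    c = s - n - a - b        # c >= 1 by the loop bounds
                    if a**n + b**n == c**n:
                        return (a, b, c, n)  # halts only on a counterexample

# fermat_search()  # uncomment to run forever
```

The same pattern works for many open problems (twin primes, Goldbach, ...): write a search that halts on a counterexample or a witness, and the halting question becomes the mathematical question.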

3 Answers

9

The undecidability of the halting problem has lots of practical relevance; here is a quick example:

Writing anti-virus software is hard: we can't decide whether a given piece of code is malicious, because if we could, we could decide the halting problem.

To see this, take any Turing machine $M$ and input word $w$ and construct a piece of code that simulates $M$ on $w$ and then does something malicious; this code is malicious if and only if $M$ halts on input $w$. If we could decide whether a given piece of code is malicious, we could decide whether each such constructed piece of code is malicious, and hence decide the halting problem, which we know we cannot do.

What this is saying is that there is no perfect anti-virus software; it can't be done. That doesn't mean we shouldn't try to write anti-virus software, just that we will never be able to write a perfect one. In fact, deciding any non-trivial property of what programs do is undecidable (see Rice's theorem).
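
A hedged sketch of that reduction (the names and the toy simulator are my own; the "payload" is a harmless stand-in): if a perfect detector `is_malicious` existed, the function `decide_halting` below would decide the halting problem.

```python
# Toy illustration of the reduction: build, from a machine description and an
# input word, a program that misbehaves exactly when the machine halts on that
# word. A perfect malice detector applied to such programs would decide halting.

def run_tm(table, word):
    """Minimal TM simulator. `table` maps (state, symbol) -> (write, move, next),
    with next == 'HALT' to stop; the table is assumed to be total and '_' is the
    blank symbol. Runs forever if the machine never halts."""
    tape = dict(enumerate(word))
    pos, state = 0, 'start'
    while state != 'HALT':
        write, move, state = table[(state, tape.get(pos, '_'))]
        tape[pos] = write
        pos += move

def payload():
    # stand-in for "something malicious"; we obviously don't implement one
    raise RuntimeError("malicious payload would run here")

def build_suspect(table, word):
    """Return a program that is malicious iff the machine halts on `word`."""
    def suspect():
        run_tm(table, word)   # never returns if the machine runs forever
        payload()             # reached exactly when the machine halts
    return suspect

def decide_halting(table, word, is_malicious):
    """If `is_malicious` were a perfect detector, this would decide halting."""
    return is_malicious(build_suspect(table, word))
```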

With respect to Gödel's theorem, Goodstein's theorem is an example of a true statement about the natural numbers that cannot be proved from the Peano axioms.
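
For concreteness, here is a short sketch (my code, not part of the original answer) that computes Goodstein sequences; Goodstein's theorem says every such sequence eventually reaches 0, and that statement, while true, cannot be proved from the Peano axioms alone.

```python
def bump(n, base):
    """Write n in hereditary base-`base` notation, replace every occurrence of
    `base` by `base + 1`, and evaluate the result."""
    result, exponent = 0, 0
    while n > 0:
        digit = n % base
        if digit:
            result += digit * (base + 1) ** bump(exponent, base)
        n //= base
        exponent += 1
    return result

def goodstein(m, terms):
    """Return up to `terms` values of the Goodstein sequence starting at m."""
    seq, base = [], 2
    for _ in range(terms):
        seq.append(m)
        if m == 0:
            break
        m = bump(m, base) - 1
        base += 1
    return seq

print(goodstein(3, 10))   # [3, 3, 3, 2, 1, 0]
print(goodstein(4, 6))    # [4, 26, 41, 60, 83, 109] -- grows enormously before reaching 0
```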

David Richerby
Sam Jones
  • 5
    With respect, I think this answer is a perfect example of failing to make the distinction that the OP is asking about: you're using the halting problem in a way which has no practical relevance. In practice, there is no reason for a benign piece of code to contain instructions that perform a malicious action, so it doesn't matter that we can't work out whether those instructions ever get executed or not; if they're there, the code can be considered to be malicious. – Harry Johnston May 29 '14 at 01:21
  • (I suspect you can more properly draw the same conclusion by considering instead sets of instructions whose combined effect may be either benign or malicious, depending on what combination of instructions get executed, and then arguing that under the right circumstances you can't figure out which combinations are possible. But I'm not sure quite how to put it all together rigorously.) – Harry Johnston May 29 '14 at 01:27
  • @HarryJohnston A possibility is disguising malicious instructions in data. For some programs we may not be able to decide whether this data is ever executed as code, or is simply a benign string that happens to look like malicious instructions. Of course in mainstream operating systems (and even processors) there is an enforced separation between read-only executable memory and writable data memory, but there are scenarios where this is not the case. – WaelJ May 29 '14 at 11:19
  • @HarryJohnston: That would be true if we could query the instruction's evil bit, but we can't. My arbitrary piece of software reads a config file from ~/Documents, and supports deleting its own configuration. There is an execution path where the "file to delete" is set to ~/Documents before "delete" is called, which is extremely malicious, but there's no way of saying "hey, this application has these two things, it must be malicious!" without looking what it's doing with them. Which you can't do in the general case, because halting problem. – Phoshi May 29 '14 at 14:31
  • @Phoshi: yes, that's an example of the argument I describe in my second comment. It is my guess that it is possible to make this rigorous given certain assumptions, though it isn't obvious to me off the top of my head how best to approach it. OTOH, it may be that to make it rigorous you'd need to allow the application to analyze itself (i.e., inspect its own code) and it should be safe in practice to disallow this. – Harry Johnston May 29 '14 at 20:53
  • @HarryJohnston: I think you could get pretty close with set pathToDelete to ~/Documents; if (function f halts) then append /config to pathToDelete; delete pathToDelete. In order to figure out what gets deleted you must solve the halting problem. Not fully rigorous, but convincing to me, and I don't see any unfounded logical leaps. – Phoshi May 30 '14 at 08:13
  • @HarryJohnston, I'm not completely sure I understand your argument. My sketch proof above shows that the practical problem of deciding whether an arbitrary piece of code is malicious is undecidable, it doesn't require that the piece of code you are analysing contain instructions that are obviously malicious. – Sam Jones May 30 '14 at 14:12
  • @SamJones: in your sketch proof, you don't know whether certain instructions are executed because you can't solve the halting problem. That implies that you can tell that those instructions are malicious. If you can't tell that those instructions are malicious, then the halting problem is irrelevant to your argument, because even if you could solve it you wouldn't be able to conclude that the code was malicious. – Harry Johnston May 31 '14 at 04:29
  • 1
    @Phoshi: it seems to me that given such a straightforward construction you could legitimately conclude that the code must be malicious because there is no reason for the malicious potential code path to exist, and also because there is no reason for the turing machine to be present at all. (It is obvious that detecting this scenario wouldn't be practical in general, but then the conclusion we're aiming at was obvious from the get-go, so IMO introducing the halting problem hasn't really helped us much.) – Harry Johnston May 31 '14 at 04:43
  • @Phoshi: I think there may also be an issue of rigorousness because the proof that there's no general solution to the halting problem (one that works for all turing machines and all inputs) doesn't necessarily demonstrate that there's no solution that is good enough in this particular scenario (one that works for all credible turing machines and inputs). In other words we haven't proved that it's impossible to build a halting problem solver that succeeds for all non-malicious turing machines. Again, it's obvious, but we haven't actually proved it. :-) – Harry Johnston May 31 '14 at 04:52
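
To make Phoshi's pseudocode above concrete, here is a slightly adjusted, purely hypothetical rendering (the names are mine, and `f` stands for an arbitrary function shipped with the application): whether the dangerous execution, the one that deletes all of ~/Documents, can ever occur depends on whether `f` returns and on what it returns.

```python
import os
import shutil

def cleanup(f):
    """Hypothetical 'settings reset' routine; f is arbitrary application code."""
    target = os.path.expanduser("~/Documents")
    if f() == "reset-config-only":               # f may never return, or return anything
        target = os.path.join(target, "config")  # benign case: wipe only the config dir
    shutil.rmtree(target)                        # dangerous exactly when the branch above
                                                 # was skipped -- is that path reachable?
```

A scanner that wants to flag this must reason about `f`'s behaviour; for an `f` that simulates a Turing machine on some input and returns something other than "reset-config-only" when it halts, that is the halting problem again.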
6

For the Halting Problem: Are there more than a few artificially constructed cases where one cannot decide whether an algorithm will terminate?

there are quite a few "roughly practical/applied" contexts with active research where the halting problem plays a role:

  • automated theorem proving. proving theorems by computer runs into the same undecidability limits as the halting problem.

  • proving program termination for real programs is an area of research and shows up in eg compiler logic and program analysis.

  • Kolmogorov complexity attempts to study the theoretical limits of data compression algorithms. finding an optimal compression (in a certain sense, ie finding the smallest TM compressing a string) is undecidable.

  • undecidability shows up in some physical problems, e.g. dynamical systems.

  • a basic problem studied in this area is the "busy beaver" problem. it is still theoretical but less abstract than the halting problem, and is studied in particular for its connection to halting. researchers have attempted to resolve it for decades for "small" TMs with few states/symbols.
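
a small sketch to make the last point concrete (my own code, not from the cited work): enumerate every 2-state, 2-symbol TM and run each on a blank tape for a bounded number of steps. machines that halt within the bound give a lower bound on the busy beaver step count; the rest stay unresolved, which is exactly where the undecidability bites.

```python
# enumerate all 2-state, 2-symbol Turing machines and simulate each for at
# most STEP_BOUND steps on a blank tape. only lower bounds on the busy beaver
# value can be obtained this way; machines exceeding the bound remain unknown.
from itertools import product

HALT = 2                                                 # states 0, 1, plus a halt state
ACTIONS = list(product((0, 1), (-1, 1), (0, 1, HALT)))   # (write, move, next state)
CELLS = list(product((0, 1), (0, 1)))                    # (state, read symbol)
STEP_BOUND = 50                                          # the true 2-state maximum is 6 steps

def run(table, bound):
    """number of steps until the machine halts, or None if it exceeds the bound."""
    tape, pos, state = {}, 0, 0
    for step in range(1, bound + 1):
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == HALT:
            return step
        state = nxt
    return None

best, unresolved = 0, 0
for choice in product(ACTIONS, repeat=len(CELLS)):
    steps = run(dict(zip(CELLS, choice)), STEP_BOUND)
    if steps is None:
        unresolved += 1
    else:
        best = max(best, steps)

print("longest halting run found:", best)                # a lower bound on the step count
print("machines not settled within the bound:", unresolved)
```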

here is a related/interesting quote from a recent paper studying the busy beaver problem, "problems in number theory from busy beaver competition" by Michel (p.3):

Actually, the halting problem for Turing machines launched on a blank tape is m-complete, and this implies that this problem is as hard as the problem of the provability of a mathematical statement in a logical theory such as ZFC (Zermelo Fraenkel set theory with axiom of choice). So, when Turing machines with more and more states and symbols are studied, potentially all theorems of mathematics will be met. When more and more non-halting Turing machines are studied to be proved non-halting, one has to expect to face hard open problems in mathematics, that is problems that current mathematical knowledge can’t settle.

in other words, the halting problem encapsulates the challenge of proving new mathematical theorems in math/CS, and in that sense can be regarded as extremely deep/practical/applied. (some consider this observation obvious or trivial, but it is also not a commonly held/voiced opinion.)

vzn
5

I am answering one of your two questions, regarding the halting problem.

First, the undecidability of the halting problem does not state that you cannot decide whether a given TM halts. It states that there is no general algorithm that can decide that for all TMs.

This is a statement about our models of what constitutes computation. But, according to the Church-Turing thesis, those models are all we have to express computation.

Regarding relevance: the proof is based on artificially constructed Turing machines. But then, all TMs are pretty artificial, constructed only to assert some facts about computation. Whether some TMs are more relevant than others in practice is pretty much as important a question as the sex of angels, or the number of them that can stand on the head of a needle.

The undecidability of the halting problem tells us that there are general questions that cannot be solved by a general technique applicable to all cases. By a general question I mean a question depending on some parameters, to be answered for any given values of those parameters.

Recall that the purpose of much of our mathematics is to find general techniques to solve a family of problems. A typical example is the resolution of equations. The undecidability of the halting problem tells us that this is not always possible.

For example, it can be used to show that there is no general technique to decide whether a context-free grammar is ambiguous.

However, your question is a valid one. It may be that a problem is undecidable only because you made it a bit too general. Possibly, by restricting it a bit, you can make it decidable for a useful and still large enough subfamily.

I do not have a spectacular example in mind, but I am sure there must be some.

I recall one real case of a program analysis problem that was proved NP-complete (or perhaps undecidable, I do not remember exactly). Against all advice, a PhD student decided to tackle it anyway. He was able to show that some restrictions on the problem, which did not matter much in practice, turned it into a very tractable one, thus enabling the use of various program analysis and optimization tools.

babou
  • 4
    An example that I think would fit well into your answer is optimizing compilers. The undecidability of the halting problem means that, for example, there's no algorithm that can do perfect dead-code removal or produce the fastest possible executable for a particular source file. But this doesn't stop compilers doing a very good job of these things, in most cases, in practice. – David Richerby May 28 '14 at 14:58
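
To make that comment concrete, here is a hypothetical fragment (my construction; the Collatz iteration stands in for an arbitrary computation): a compiler that always removed dead code perfectly would have to decide whether `mystery` ever returns, i.e. solve instances of the halting problem, so real optimizers settle for safe approximations.

```python
def mystery(n):
    # Collatz iteration: whether this terminates for every positive n is an open problem
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return n

def program(n):
    mystery(n)
    expensive_cleanup()   # unreachable on inputs for which mystery(n) never returns;
                          # removing it "perfectly" requires reasoning about termination

def expensive_cleanup():
    print("cleanup ran")
```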