
In 1948, von Neumann suggested that some processes in nature might be irreducibly complex: “It is not at all certain that in this domain a real object might not constitute the simplest description of itself, that is, any attempt to describe it by the usual literary or formal-logic method may lead to something less manageable and more involved.”

A similar idea was invoked by David Marr in a 1977 paper, “Artificial Intelligence—A Personal View.” Marr distinguishes between two types of information-processing problems. For a Type 1 problem, one can formulate the computational theory behind it and then devise an algorithm to implement the computation (37). For a Type 2 problem, by contrast, any abstraction would be unreliable, because the process being described evolves as the result of “the simultaneous action of a considerable number of processes, whose interaction is its own simplest description” (38).
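
To make the Type 1 side concrete, a standard illustration (my example, not Marr's) is sorting: the computational theory, namely producing the items in nondecreasing order, can be stated independently of any algorithm, and any correct implementation then realizes it.

```python
# A Type 1 problem in Marr's sense (my example, not his): the theory of
# sorting is stated independently of the algorithm; insertion sort is just
# one of many implementations that realize the same computational theory.
def insertion_sort(items):
    out = []
    for x in items:
        # Insert x at the first position where it is not larger,
        # keeping `out` sorted at every step.
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# The implementation agrees with the specification on a sample input.
assert insertion_sort([3, 1, 2]) == sorted([3, 1, 2])
```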

Marr offers protein folding as an example: “A large number of influences act on a large polypeptide chain as it flaps and flails in a medium. At each moment only a few of the possible interactions will be important, but the importance of those few is decisive. Attempts to construct a simplified theory must ignore some interactions; but if most interactions are crucial at some stage during the folding, a simplified theory will prove inadequate” (39).

A Type 2 problem involves a physical process that starts and ends in an interpretable state, but whose trajectory is unpredictable: the factors mediating the passage from start state to end state are large in number, and on any individual run they are impossible to predict. For a brain to solve this sort of problem, it would have to encode the initial state of the Type 2 system and then simulate the physical interaction of its parts. If that interaction relies on physical equilibria, there may be no reliable way of running such a simulation internally; one would instead need to rely on physical interaction itself, letting the intrinsic dynamics of an analog system do the work that internal simulation cannot. For example, to compute the behavior of an n-body system, such as our solar system, our best hope is to construct a small analog version of that system (an orrery), run the model, and read off the result. Through this analog process, one can compute a function one has no other reliable way of computing.
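
To see why digital simulation is fragile here, consider a minimal sketch (my own illustration; NumPy, the leapfrog scheme, and the initial conditions are all assumptions of the example, not anything from the sources above): a crude three-body integrator in which two runs differing by one part in a billion in a single coordinate drift measurably apart.

```python
# A minimal sketch of a three-body simulation, used only to illustrate
# sensitivity to initial conditions; units and parameters are arbitrary.
import numpy as np

G = 1.0  # gravitational constant in arbitrary units

def accelerations(pos, masses):
    """Pairwise Newtonian accelerations; pos has shape (n, 2)."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def simulate(pos, vel, masses, dt=1e-3, steps=20000):
    """Leapfrog integration; returns final positions, inputs unmutated."""
    pos, vel = pos.copy(), vel.copy()
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc
    return pos

masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vel = np.array([[0.0, 0.1], [0.0, -0.1], [0.1, 0.0]])

nudged = pos.copy()
nudged[0, 0] += 1e-9  # one part in a billion, in one coordinate

print("separation of outcomes:",
      np.linalg.norm(simulate(pos, vel, masses) - simulate(nudged, vel, masses)))
```

The divergence is the point: any error in encoding the initial state, and any rounding along the way, is eventually amplified, which is what makes the physical fidelity of an orrery attractive.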

Have there been mathematical arguments that follow a trajectory similar to that of a Type 2 problem? Arguments that depend on external states because the proof (informal or formal) is, perhaps, neither fully understood nor verifiable? By ‘understood,’ I mean instances where, much as with Type 2 problems, our simplest description of the problem is the problem itself.

At first glance, I wondered whether proofs that proceed by diagonalization (Gödel’s first incompleteness theorem, Cantor’s theorem for all cardinal numbers, Turing’s proof that the halting problem is undecidable) could be taken as examples. I haven’t completely abandoned this line of thinking, but I don’t have much to substantiate it either.
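
For what it’s worth, those three proofs share a single schema, whose finite core can be run in a few lines. The sketch below is my own illustration (the names `diagonal_escape` and `toy_enumeration` are invented for it): flipping the diagonal of any enumeration of binary sequences yields a sequence that provably differs from every row.

```python
# The finite core of diagonalization (my illustration): given any
# enumeration of binary sequences, flip the k-th bit of the k-th sequence;
# the result differs from every enumerated sequence, so no enumeration
# can be complete.
def diagonal_escape(enumeration, prefix_len):
    """Bit k of the result disagrees with sequence k at position k."""
    return [1 - enumeration(k)(k) for k in range(prefix_len)]

# A toy enumeration for demonstration: sequence k has bit n = (n * k) % 2.
def toy_enumeration(k):
    return lambda n: (n * k) % 2

escaped = diagonal_escape(toy_enumeration, 8)
for k in range(8):
    assert escaped[k] != toy_enumeration(k)(k)  # differs from row k at column k
print(escaped)  # [1, 0, 1, 0, 1, 0, 1, 0]
```

Of course, the force of Gödel’s and Turing’s results lies in applying this schema to self-referential objects (proofs, programs) rather than bit strings, which is where any analogy to Type 2 opacity would have to be made precise.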

– Chimera

  • I’m reminded of a Gödelian statement of number theory that can be interpreted as asserting “this statement cannot be proved in fewer than $10^{100}$ steps.” The statement is true, of course, but by virtue of its self-reference, it takes monstrously long to prove it within the system. – Franklin Pezzuti Dyer Jun 25 '20 at 21:00
  • I don't see anything special about diagonalization etc. In mathematics, if we have a proof, we have a proof. There is no non-determinism involved. – Rob Arthan Jun 25 '20 at 21:06
  • There's considerable ambiguity here. What if part of a proof is fully understood and the rest of the proof is verifiable by computer (like the original proof of the four-color theorem)? What if every part of the proof is fully understood (by some human beings) but no single human being understands the whole proof (which I think is the case for the classification of finite simple groups)? – Andreas Blass Jun 26 '20 at 00:17
  • Re: your last paragraph, those proofs are all quite well-understood by humans and are formalizable (see e.g. here). So I'm not sure what the point there is. – Noah Schweber Jul 08 '20 at 17:40
  • A proof that is "neither fully understood nor verifiable" is not a proof. – Mauro ALLEGRANZA Jul 09 '20 at 11:43

0 Answers