In 1948, von Neumann suggested that some processes in nature might be irreducibly complex: “It is not at all certain that in this domain a real object might not constitute the simplest description of itself, that is, any attempt to describe it by the usual literary or formal-logic method may lead to something less manageable and more involved.”
A similar idea was invoked by David Marr in a 1977 paper, “Artificial Intelligence—A Personal View.” Marr distinguishes two types of information-processing problems. For a Type 1 problem, one can formulate the computational theory behind it and then devise an algorithm to implement the computation (37). For a Type 2 problem, by contrast, any abstraction would be unreliable, because the process being described evolves as the result of “the simultaneous action of a considerable number of processes, whose interaction is its own simplest description” (38).
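To make the Type 1 side of the distinction concrete, here is a minimal sketch (my illustration, not an example from Marr's paper): sorting has a computational theory, a specification of what counts as a solution, that can be stated entirely apart from any algorithm implementing it.

```python
from collections import Counter

# A Type 1 problem, in Marr's sense (illustrative, not from his paper):
# sorting. The computational theory says what a solution is; the
# algorithm is a separate, interchangeable piece of machinery.

def solves_sorting(xs, ys):
    """Computational theory: ys is a nondecreasing permutation of xs."""
    return Counter(xs) == Counter(ys) and all(
        ys[i] <= ys[i + 1] for i in range(len(ys) - 1))

def merge_sort(xs):
    """One of many algorithms that realize the theory."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [5, 2, 9, 1, 2]
assert solves_sorting(data, merge_sort(data))
```

On my reading, Marr's claim about Type 2 problems is that no analogous split exists: there is no compact specification separable from the unfolding of the process itself.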
Marr gives the prediction of protein folding as an example: “A large number of influences act on a large polypeptide chain as it flaps and flails in a medium. At each moment only a few of the possible interactions will be important, but the importance of those few is decisive. Attempts to construct a simplified theory must ignore some interactions; but if most interactions are crucial at some stage during the folding, a simplified theory will prove inadequate” (39).
A Type 2 problem involves a physical process that starts and ends in an interpretable state but whose trajectory is unpredictable: the factors mediating the start and end states are large in number and impossible to track on any individual run. For brains to solve these sorts of problems, they would have to encode the initial state of the Type 2 system and then simulate the physical interaction of its parts. If that interaction relies on physical equilibria, there may be no reliable way to run an internal simulation; one would instead have to rely on physical interaction itself, with all the intrinsic unpredictability of analog systems. For example, to compute the behavior of an n-body system, such as our solar system, our best hope is to construct a small analog version of that system—an orrery—then run the model and read off the result. Through this analog process, one can compute a function one has no other reliable way of computing.
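A minimal numerical sketch of the fragility involved, assuming a toy three-body system with arbitrary units, invented initial conditions, and a crude semi-implicit Euler integrator (none of which come from the original text): two runs whose starting positions differ by one part in a billion will typically drift apart, so an internal simulation that encodes the initial state even slightly imperfectly compounds its error as it steps forward.

```python
import math

# Toy three-body simulation (illustrative sketch: arbitrary units,
# invented initial conditions). Two runs differing by 1e-9 in a single
# coordinate typically diverge, showing why step-by-step internal
# simulation of such a system is unreliable.

def step(bodies, dt=0.01, G=1.0, soft=0.01):
    """Advance one step; each body is a mutable [x, y, vx, vy, mass]."""
    forces = []
    for i, (xi, yi, _, _, mi) in enumerate(bodies):
        fx = fy = 0.0
        for j, (xj, yj, _, _, mj) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r2 = dx * dx + dy * dy + soft   # softened to avoid blow-up
            r = math.sqrt(r2)
            f = G * mi * mj / r2
            fx += f * dx / r
            fy += f * dy / r
        forces.append((fx, fy))
    for b, (fx, fy) in zip(bodies, forces):
        b[2] += fx / b[4] * dt   # velocity update
        b[3] += fy / b[4] * dt
        b[0] += b[2] * dt        # position update (semi-implicit Euler)
        b[1] += b[3] * dt

def run(perturb=0.0, steps=5000):
    bodies = [[0.0, 0.0, 0.0, -0.5, 3.0],
              [2.0, 0.0, 0.0, 1.0, 1.0],
              [-1.0 + perturb, 1.0, 0.5, 0.0, 1.0]]
    for _ in range(steps):
        step(bodies)
    return bodies

a, b = run(), run(perturb=1e-9)
gap = math.hypot(a[2][0] - b[2][0], a[2][1] - b[2][1])
print(f"third-body separation between the two runs: {gap:.6f}")
```

The orrery plays the role of `run` here, except that the physics does the integration for free, without the accumulating truncation error of the digital steps.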
Have there been mathematical arguments that follow a trajectory similar to that of a Type 2 problem? Arguments that depend on external states because the proof (informal or formal) is, perhaps, neither fully understood nor verifiable? By ‘understood,’ I mean instances where, much like Type 2 problems, our simplest description of the problem is the problem itself.
At first, I wondered whether proofs that proceed by diagonalization (Gödel’s first incompleteness theorem, Cantor’s theorem for all cardinal numbers, Turing’s proof that the halting problem is undecidable) could be taken as examples. I haven’t completely abandoned this line of thinking, but I don’t have much to substantiate it either.
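For concreteness, the diagonal move shared by all three proofs can be rendered finitely (a standard illustration, not a formalization of any of the theorems): given any list of 0/1 sequences, flipping the k-th bit of the k-th sequence produces a sequence that differs from every entry, and so cannot appear in the list.

```python
# Cantor's diagonal construction, finitely truncated: build a sequence
# that differs from the k-th enumerated sequence at position k, so it
# appears nowhere in the enumeration.

def diagonal_escape(sequences):
    """sequences[k] is the k-th enumerated 0/1 sequence."""
    return [1 - sequences[k][k] for k in range(len(sequences))]

enumeration = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
escapee = diagonal_escape(enumeration)
print(escapee)   # [1, 0, 1, 0]
assert all(escapee[k] != enumeration[k][k] for k in range(len(enumeration)))
```

The same flip, applied to the outputs of a purported halting decider or to a formal system’s proofs, is what drives Turing’s and Gödel’s arguments.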