4

Is there a standard way to model the complexity of a mathematical problem? If so, what are the best online resources to learn more about it?

Honestly, I don't even have a basic way to model the complexity of a problem other than the "depth" of concepts required to solve it and the smallest number of steps required to reach an answer. For example, I would venture to guess that "1+1=2" is less complex than "1x2=2".

asked by blunders · edited by sxd

2 Answers

4

Computational Complexity?

It is indeed faster for a computer to perform addition than multiplication.
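As a minimal illustration of that point (a sketch, not from the original answer): in Python the gap shows up clearly for big integers, where addition is linear in the number of digits while multiplication is asymptotically slower.

```python
import timeit

# Two integers with roughly 50,000 decimal digits each.
a = 10 ** 50000 + 12345
b = 10 ** 50000 + 67890

# Time repeated additions vs. multiplications of the same operands.
t_add = timeit.timeit(lambda: a + b, number=100)
t_mul = timeit.timeit(lambda: a * b, number=100)

print(f"addition:       {t_add:.6f} s")
print(f"multiplication: {t_mul:.6f} s")
```

Big-integer addition is O(n) in the digit count, while CPython's multiplication uses Karatsuba's algorithm (roughly O(n^1.585)), so the multiplication timing comes out substantially larger.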

  • +1 @user474632: Thanks, your answer led me to find this Computational complexity of mathematical operations - that said, I'm looking for a more complete way to take a set of problems and rank their complexity. Again, thanks, and if yours is still the best answer I'll award you the extra 50 points and select you as the answer. Cheers! – blunders Dec 01 '11 at 21:47
  • 1
    Depends on which problems and what type of complexity. The method of storing previous solutions in a table, for example, can greatly reduce the time to solve a problem, but the space complexity for a large table is higher. – user474632 Dec 01 '11 at 22:00
  • +1 @user474632: Yes, all true. The goal is not to create optimized execution plans, but to take a problem as is and assign it a complexity rating. I took a look at why Fürer's algorithm for multiplication beats versions based on the Chinese Remainder Theorem, and it was way over my head. I guess the best answer for me would be open-source software able to read problems formatted in a standard markup, for example MathML, and assign them a score; I also guess that's either flawed, or does not exist. – blunders Dec 01 '11 at 23:01
  • 1
    It is based on the Chinese Remainder Theorem. It also clearly shows that the 'complexity' of refining even the simplest mathematical operations is unknown. Basically, we have to choose a measuring stick. It's fairly practical to time how long it takes a computer to evaluate a given problem, for example. Or stick to algorithmic time complexity; it's up to you. – user474632 Dec 02 '11 at 06:36
  • +1 user474632: I'd thought about using runtime performance as a measure, but it seems so brute force, and I wanted to make sure I understood, at least at a high level, that the approach would make sense. The findings you explained make perfect sense, thanks for adding that! – blunders Dec 02 '11 at 11:58
  • +1 @user474632: I went forward with modeling complexity based on runtime, and based on my research it appears to be a dead end; the reason being that runtime is affected by more than just the code being run, since the environment also impacts the code's runtime. Here's the code I used, which when run twice returns different runtimes for the same problems, "1+1" vs. "1x2". – blunders Dec 02 '11 at 17:14
  • Clearly, creating code is beyond the scope of the question, but the concept of runtime seems unstable as a measure of complexity. Any thoughts? Again, thanks! – blunders Dec 02 '11 at 17:16
  • Here's a better proof of the runtime stability issue: benchmark irregularities. – blunders Dec 02 '11 at 17:59
  • 1
    Computational complexity accounts for the time-space tradeoff, but it specifically measures algorithms. 'Problems' are not inherently algorithms, so how they are reduced to algorithms affects their complexity. In other words, if the reduction provided by your computer was unsatisfactory, another standard must be introduced: choosing the fastest known algorithm representing the problem, taking an average over several appropriate algorithms, etc. – user474632 Dec 03 '11 at 02:39
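The runtime-instability point raised in the comments above can be reproduced with a short sketch (a hypothetical reconstruction, not the code blunders actually used): timing the same expressions twice rarely yields identical numbers, because the OS scheduler, caches, and interpreter state all perturb the measurement.

```python
import timeit

# Time the same two "problems" twice each; the absolute numbers
# vary from run to run, which is why raw runtime is a noisy
# measure of a problem's intrinsic complexity.
for trial in (1, 2):
    t_add = timeit.timeit("1 + 1", number=1_000_000)
    t_mul = timeit.timeit("1 * 2", number=1_000_000)
    print(f"trial {trial}: 1+1 -> {t_add:.6f} s, 1x2 -> {t_mul:.6f} s")
```

Note that CPython constant-folds both expressions at compile time, so the two timings measure essentially the same work; the point here is only that even identical work produces different numbers across trials.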
2

The complexity of the flow of information is one measure for the complexity of a mathematical problem. I will try to explain by example what I mean by this. Let's assume that we want to solve $F(x_1,\ldots,x_n)=0$ for $x_1,\ldots,x_n$.

  1. In the simplest case, each $x_i$ can be computed completely independently of the other $x_j$.

  2. More interesting is the case where $x_i$ can be computed based only on $x_1,\ldots,x_{i-1}$, completely independently of $x_{i+1},\ldots,x_n$.

  3. The LU decomposition of a matrix (=Gaussian elimination) reduces $F(x_1,\ldots,x_n)=0$ to the form $L(y_1,\ldots,y_n)=0$ and $U(x_1,\ldots,x_n)=R(y_1,\ldots,y_n)$. Here, $L$ is such that $y_i$ can be computed based only on $y_1,\ldots,y_{i-1}$ and $U$ is such that $x_i$ can be computed based only on $x_{i+1},\ldots,x_n$.

  4. I should now try to describe the underlying information flow for a problem that can be solved by divide and conquer, but let's skip that step.
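Case 2 above can be made concrete with forward substitution on a lower-triangular system: the information flows strictly in one direction, with each unknown depending only on earlier ones. A minimal sketch (illustrative, not part of the original answer):

```python
def forward_substitute(L, b):
    """Solve L y = b for a lower-triangular matrix L.

    Each y[i] depends only on y[0], ..., y[i-1]: the information
    flow is strictly one-directional, as in case 2 above.
    """
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        partial = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - partial) / L[i][i]
    return y

# 2x2 example: 2*y0 = 4, then 1*y0 + 1*y1 = 5.
print(forward_substitute([[2.0, 0.0], [1.0, 1.0]], [4.0, 5.0]))  # → [2.0, 3.0]
```

Back substitution with an upper-triangular $U$ is the mirror image, with the flow running from $x_n$ down to $x_1$.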

Let's look at differential equations for another example.

  1. For an initial value problem of an ordinary differential equation, the flow of information goes from the past into the future, so it is relatively simple.

  2. For a boundary value problem of an ordinary differential equation, the direction of the flow is no longer clear, but it is clear that the flow can be blocked/controlled by "some" values at the boundary of any interval. That's a bit similar to the situation where divide and conquer is applicable (which I omitted above).
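The first case can be sketched with the explicit Euler method, where each step uses only already-computed (past) values (a minimal illustration, not from the original answer):

```python
import math

def euler(f, y0, t0, t1, steps):
    """Explicit Euler for y' = f(t, y), y(t0) = y0.

    Each step uses only past values, mirroring the forward-in-time
    information flow of an initial value problem.
    """
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# y' = y with y(0) = 1 has the exact solution y(1) = e.
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100_000)
print(approx, "vs", math.e)
```

A boundary value problem offers no such single marching direction, which is exactly why its information flow is harder to characterize.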

Things get more challenging when the flow of information depends on the solution itself. This situation arises for some types of partial differential equations. The eikonal equation is one example of this, and it is relatively easy to understand. A worse example is the set of equations for fluid flows, where the main flow of information happens along the direction of the flow, but the flow itself is also a result of that information...

These examples may be a bit sketchy, and some important notions like well/ill-posed problems and inverse problems were completely omitted. But I hope that it has become clear what I mean by flow of information, and how it helps to judge the complexity of certain problems.

  • +1 @Thomas Klimpel: Yes, I'd wondered about the flow, thanks. Does this method seem related to Cyclomatic Complexity, or not? – blunders Dec 02 '11 at 02:43
  • 1
    @blunders It's probably related, but not in the way you hope. The problem of testing an existing program also has a certain associated flow of information, and Cyclomatic Complexity might be one measure for the complexity of that flow. So you don't measure the complexity of the problem that is solved by the program, but the complexity of the problem to test that program. – Thomas Klimpel Dec 02 '11 at 22:28
  • Given your background in software and mathematics, do you have any additional thoughts on how one might create a stable measure of a mathematical problem's complexity? User474632's suggestion to use runtime appears to be unstable; for a proof of this see source-of-ruby-benchmark-irregularites. Again, thanks for the info! – blunders Dec 02 '11 at 23:25
  • 1
    @blunders You should probably work with a set of measures instead of a single measure. Things like "optimal memory consumption", "optimal sequential runtime", "optimal parallel runtime" (where optimal more or less means minimal here) come to mind. It probably also makes sense to specify the algorithm used for solving the problem at least a bit. So for a sorting problem (as an example), you should probably specify beforehand whether the algorithm may only do comparisons, or whether things like hashes (=working with the values themselves) are also allowed. – Thomas Klimpel Dec 03 '11 at 13:41
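The set-of-measures idea from the comment above could be sketched as follows (a hypothetical helper using Python's `time` and `tracemalloc` modules; the function name and output keys are made up for illustration):

```python
import time
import tracemalloc

def measure(fn, *args):
    """Collect a small set of measures for one problem instance:
    wall-clock runtime and peak memory allocated while solving it.
    A single number hides the time-space tradeoff; a profile does not.
    """
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args)
    runtime = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"result": result, "runtime_s": runtime, "peak_bytes": peak}

# Example: a comparison sort as the "problem", as in the comment above.
profile = measure(sorted, list(range(1000, 0, -1)))
print(profile["runtime_s"], profile["peak_bytes"])
```

Ranking problems would then mean comparing such profiles rather than a single scalar, which sidesteps the instability of any one runtime number.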