
Linear programming is the very common problem of computing $$\min_{Ax\leq b}c^\top x,$$ where $A\in\mathbb{R}^{n\times m}$, $b\in\mathbb{R}^n$, and $c\in\mathbb{R}^m$. This is an optimization problem, but it can be turned into a decision problem with an additional parameter $s$ by asking: does a vector $x$ exist such that $Ax\leq b$ and $c^\top x \leq s$?

In various places, it is stated that linear programming can be "solved in polynomial time"; what this actually means is that it admits an FPTAS, i.e., the optimum value can be determined up to arbitrary accuracy in polynomial time w.r.t. $n$, $m$, and the accuracy.

Does this mean that it is unknown whether the decision formulation of linear programming is in $\mathsf{P}$? (I know that the exact solution can be computed using the simplex algorithm, but since it has exponential runtime in the worst case, it cannot be used to prove that linear programming is in $\mathsf{P}$.)

Linear programming is pretty much the easiest kind of (continuous) optimization problem that I can think of, which is why I am curious about the possibility of its decision formulation not being in $\mathsf{P}$...
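For concreteness, here is a minimal sketch (my own illustration, not from the thread) of the decision version for the two-variable case, using Python's `fractions.Fraction` for exact rational arithmetic: it appends $c^\top x \leq s$ as an extra constraint row and brute-forces candidate vertices. In general dimension this vertex enumeration is exponential; the sketch hard-codes $m=2$ and assumes the augmented feasible region, if nonempty, is a bounded polygon (so it has a vertex).

```python
from fractions import Fraction
from itertools import combinations

def lp_decision(A, b, c, s):
    """Decide: does some x satisfy A x <= b and c^T x <= s?

    Exact illustration for the 2-variable case only.  Assumes the
    augmented feasible region, if nonempty, is a bounded polygon.
    A sketch of the brute-force vertex enumeration mentioned in the
    question, not a polynomial-time algorithm in general dimension.
    """
    # Treat c^T x <= s as one extra constraint row.
    rows = [([Fraction(a0), Fraction(a1)], Fraction(bi))
            for (a0, a1), bi in zip(A, b)]
    rows.append(([Fraction(c[0]), Fraction(c[1])], Fraction(s)))

    # Candidate vertices: intersections of every pair of constraint lines.
    for (u, p), (v, q) in combinations(rows, 2):
        det = u[0] * v[1] - u[1] * v[0]
        if det == 0:
            continue  # parallel lines: no unique intersection point
        x = (p * v[1] - u[1] * q) / det  # Cramer's rule, exact rationals
        y = (u[0] * q - p * v[0]) / det
        if all(w[0] * x + w[1] * y <= r for w, r in rows):
            return True  # feasible point with c^T x <= s found
    return False
```

For example, on the unit square $0\leq x,y\leq 1$ with $c=(1,1)$ the optimum is $0$, so the decision answer flips between $s=0$ (yes) and $s=-1$ (no); note that all intermediate quantities stay exact rationals, which is the point of the exercise.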

Firavox
  • How do you specify an instance of your problem as a bitstring? – Yuval Filmus Mar 29 '24 at 10:42
  • There are polynomial algorithms for linear programming: the ellipsoid algorithm and interior point algorithms. But these algorithms are not strongly polynomial. It is not known whether there is a strongly polynomial algorithm for linear programming. This is explained (to some extent) in the Wikipedia page on linear programming. – Yuval Filmus Mar 29 '24 at 10:45
  • For the encoding as a bitstring: I guess you can assume w.l.o.g. that $A$, $b$, and $c$ have integer entries that can each be represented by an $N$-bit integer. Packing all of those into a big vector allows representing the problem in $nmN+nN+mN$ bits. You could also consider runtime with respect to that quantity, but at the end of the day this is similar to the runtime w.r.t. $n$ and $m$.

    As for the polynomial algorithms for linear programming: Correct, so I guess that means it is unknown whether (the decision-formulation of) linear programming is in P?

    – Firavox Mar 29 '24 at 11:43
  • As Yuval said, the problem (including the decision problem) is known to be in P. That is, it can be solved in time polynomial in the size of the input, where the measure of size includes the bits needed to represent the numbers in the input to full precision (as you describe in your comment). What is not known is whether there is a strongly polynomial-time algorithm, which means (in your notation) one whose running time is polynomial in just $n$ and $m$. (Related: https://cs.stackexchange.com/questions/124996/complexity-of-linear-programming?rq=1 .) – Neal Young Mar 29 '24 at 21:27

1 Answer


From Wikipedia:

The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984 when Narendra Karmarkar introduced a new interior-point method for solving linear-programming problems.

The complexity class P contains the decision problems for which we have at least one deterministic polynomial-time algorithm. If the decision version of a problem is in P, we also get a deterministic polynomial-time algorithm for the (original) optimization problem, e.g., via binary search on the objective value, at some extra cost over a single decision query.

FPTAS and other related terms are usually associated with NP-hard problems, where we do not expect deterministic polynomial-time algorithms and thus compromise on solution quality to gain an advantage in time complexity.

Also see this discussion on LP vs ILP.

codeR
  • Correct me if I'm wrong, but the statement "The linear programming problem [is] solvable in polynomial time" is inaccurate (the same holds for the other link). This is because interior-point methods can only guarantee that the result is a $(1-\varepsilon)$-approximation of the exact solution, where $\varepsilon > 0$ can be as small as desired; in other words, linear programming has an FPTAS, but cannot necessarily be solved exactly (in polynomial time). This is also discussed here: https://or.stackexchange.com/a/8237 – Firavox Mar 29 '24 at 12:47
  • I think you are mixing up linear programs with general convex programs. Please refer to some published research or good books. Karmarkar's algorithm, as well as its improvements, gives an exact optimal solution in polynomial time assuming a general RAM model of computation. – codeR Mar 29 '24 at 13:19
  • The Wikipedia article you link to describes an algorithm that explicitly depends on a stopping criterion $\gamma$, which sets the precision of the algorithm. Even in the original paper by Karmarkar (cited in the Wikipedia article, which can be found here: https://web.archive.org/web/20131228145520/http://retis.sssup.it/~bini/teaching/optim2010/karmarkar.pdf ), it is stated that the algorithm is not exact: Theorem 1 on p. 379 mentions that the precision is $2^{-q}$, where $q$ is a suitable accuracy parameter. Thus, the algorithm is not exact. – Firavox Mar 29 '24 at 13:29
  • The said error margin is system-dependent and not problem-specific. This is where the factor $L$ bits comes into the picture. – codeR Mar 29 '24 at 14:10
  • If that were true, $\gamma$ (or $q$) would have fixed values, not something that the user can choose. I think you might be confused by the fact that linear programming can be solved up to arbitrary precision, so that one can compute any desired number of digits of the exact solution. But that is not the question. The question is whether the EXACT solution can be computed. For instance, one could enumerate all the vertices of the polytope defined by $Ax\leq b$. Assuming w.l.o.g. that the entries of $A$ and $b$ are integral, those are vectors of rational numbers that can be represented exactly. – Firavox Mar 29 '24 at 15:25
  • But this has exponential runtime; a smarter way to do this results in the simplex algorithm, but this one also has worst case exponential runtime. – Firavox Mar 29 '24 at 15:26
  • Yes, the exact solution can be computed in polynomial time, using the ellipsoid method (and also interior-point methods). Here is a classic text presenting the theoretical foundations of this result. Roughly, for explicitly given linear programs, you can get very close to an optimal basic feasible solution (BFS) in polynomial time, close enough to then find the solution. – Neal Young Mar 29 '24 at 21:31
  • Thanks! I couldn't find what you referred to in the text you linked to, but this idea of BFS led me to scrutinize the ellipsoid method in more detail. I then found lecture notes by Ben-Tal & Nemirovski (https://www2.isye.gatech.edu/~nemirovs/OPTIIILN2023Spring.pdf) that address the issue of setting the accuracy small enough so that one can guarantee the (approximate) solution is actually a BFS (it is in Section 8.4.2, specifically in the discussion right after Eq. (8.4.6)). – Firavox Mar 30 '24 at 04:14
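To flesh out the rounding step discussed in the last two comments, here is a back-of-the-envelope version of the standard separation-bound argument (constants loose, for illustration only). Assume w.l.o.g. that $A$, $b$, $c$ have integer entries of at most $N$ bits each. Every vertex of $\{x : Ax\leq b\}$ solves a square subsystem $A_B x = b_B$, so by Cramer's rule its coordinates are ratios of integer subdeterminants, and Hadamard's inequality bounds the common denominator: $$|\det A_B| \leq \left(\sqrt{m}\,2^N\right)^m =: \Delta.$$ The objective value $c^\top x^*$ at any vertex is therefore a rational with denominator at most $\Delta$, so two vertices with distinct objective values differ by at least $1/\Delta^2$. Hence running an interior-point or ellipsoid method to additive accuracy $$\varepsilon < \frac{1}{2\Delta^2}, \qquad \log\frac{1}{\varepsilon} = O\!\left(m(N+\log m)\right),$$ which costs only polynomially many bits of precision, identifies the optimal vertex, after which the exact rational solution is recovered by solving the corresponding subsystem $A_B x = b_B$ exactly. This is why "arbitrary accuracy in polynomial time" upgrades to an exact polynomial-time algorithm for LP, resolving the question.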