
I have a doubt about an application of the Complementary Slackness theorem. I want to show that $\mathbf{x}^{\ast} = (7.25, 0 , 3.25 , 0.75)$ is an optimal solution of the linear program below (it is indeed optimal, as I checked with software):

\begin{align*} \max \quad x_1 + x_2 - 2x_3 + 2x_4 \\ x_1 - x_2 - x_3 - 2x_4 = 2.5 \\ x_1 + x_2 + x_4 = 8 \\ x_1 + 2x_2 - x_3 = 4 \\ x_1, x_2, x_3, x_4 \geq 0 \ . \end{align*} What I did was write the dual problem and test the feasibility of $\mathbf{x}^{\ast}$; it happens that $\mathbf{x}^{\ast}$ satisfies all constraints with equality, so, by Complementary Slackness, there should be a $\mathbf{y}^{\ast} = (y_1^{\ast}, y_2^{\ast}, y_3^{\ast})$ such that equality holds in the dual problem constraints, that is

\begin{align*} y_1^{\ast} + y_2^{\ast} + y_3^{\ast} = 1 \\ -y_1^{\ast} + y_2^{\ast} + 2y_3^{\ast} = 1 \\ y_1^{\ast} + y_2^{\ast} = 2 \\ -2y_1^{\ast} + y_2^{\ast} = 2 \ , \end{align*}

and... the system has no solution.
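For reference, here is how the software check can be done, together with a confirmation that the system above really has no solution. This is only a sketch, assuming NumPy and SciPy are available (the array names are my own):

```python
import numpy as np
from scipy.optimize import linprog

# The primal problem: maximize c'x subject to A_eq x = b_eq, x >= 0.
c = np.array([1, 1, -2, 2], dtype=float)
A_eq = np.array([[1, -1, -1, -2],
                 [1,  1,  0,  1],
                 [1,  2, -1,  0]], dtype=float)
b_eq = np.array([2.5, 8, 4])

# linprog minimizes, so negate c; x >= 0 is the default bound.
res = linprog(-c, A_eq=A_eq, b_eq=b_eq)
print(res.x, -res.fun)    # approx. [7.25, 0, 3.25, 0.75] with value 2.25

# The 4x3 system of dual equalities is inconsistent:
# the augmented matrix has larger rank than the coefficient matrix.
M = np.array([[ 1, 1, 1],
              [-1, 1, 2],
              [ 1, 1, 0],
              [-2, 1, 0]], dtype=float)
d = np.array([1, 1, 2, 2], dtype=float)
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(np.column_stack([M, d])))  # 3 4
```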

EDIT: One of the problems I was having was finding the proper dual problem, which is

\begin{align*} \min \quad 2.5y_1 + 8y_2 + 4y_3 \\ y_1 + y_2 + y_3 \geq 1 \\ -y_1 + y_2 + 2y_3 \geq 1 \\ -y_1 - y_3 \geq -2 \\ -2y_1 + y_2 \geq 2\ , \end{align*}

so, by the version of the Complementary Slackness Theorem I am familiar with, a solution of the primal is optimal if and only if the dual problem constraints are satisfied with equality, for some $y^{\ast} = (y_1^{\ast}, y_2^{\ast}, y_3^{\ast})$, every time a constraint is satisfied with equality (and in this case, ALL constraints are satisfied with equality). So, either this version of the theorem is false, or it doesn't hold when all constraints are satisfied with equality in the primal problem, and that is my doubt.

1 Answer


General strategy

To apply duality, use the SOB (Sensible/Odd/Bizarre) table.

In this case, we have

  • primal equality constraints become unrestricted (free) dual variables, and
  • nonnegative primal variables become $\ge$ constraints in the dual.

Your mistake was using the wrong type of dual constraints: equality constraints instead of $\ge$ constraints.

You may want to see section 4.3 of MIT's notes for the derivation of this transformation.
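Concretely, for a problem of the form "maximize $c^\top x$ subject to $Ax = b$, $x \ge 0$", the rule above says the dual is "minimize $b^\top y$ subject to $A^\top y \ge c$, $y$ unrestricted". Here is a tiny sketch of that bookkeeping applied to the problem in the question (the helper name and layout are my own, not from the notes):

```python
import numpy as np

def dual_of_equality_form(c, A, b):
    """For max c'x s.t. A x = b, x >= 0, return the data of the dual:
    minimize b'y subject to A.T @ y >= c, with y unrestricted."""
    A = np.asarray(A, dtype=float)
    return np.asarray(b, dtype=float), A.T, np.asarray(c, dtype=float)

# The LP from the question:
c = [1, 1, -2, 2]
A = [[1, -1, -1, -2],
     [1,  1,  0,  1],
     [1,  2, -1,  0]]
b = [2.5, 8, 4]

obj, G, h = dual_of_equality_form(c, A, b)
print(obj)  # [2.5 8.  4. ]      -> min 2.5 y1 + 8 y2 + 4 y3
print(G)    # rows of A^T: left-hand sides of the four dual constraints
print(h)    # [ 1.  1. -2.  2.]  -> right-hand sides, A^T y >= c
```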

Theoretical explanation for the mistake

Your attempt: Transform $$\max z=\sum_{j} c_jx_j \\ \text{s.t. } \sum_{j} a_{ij}x_j = b_i \quad (i=1,\dots,m) \\ x_j \ge 0 \quad (j=1,\dots,n)$$ into $$\min v=\sum_{i} b_iy_i \\ \text{s.t. } \sum_{i} a_{ij}y_i = c_j \quad (j=1,\dots,n) \\ y_i \ge 0 \quad (i=1,\dots,m)$$

In fact, if you transform it into the standard form $$\max z=\sum_{j} c_jx_j \\ \text{s.t. } \sum_{j} a_{ij}x_j \le b_i \quad (i=1,\dots,m)\\ \sum_{j} -a_{ij}x_j \le -b_i \quad (i=1,\dots,m)\\ x_j \ge 0 \quad (j=1,\dots,n)$$ you'll see that the dual is $$\min v=\sum_{i} b_iy_i^+ +\sum_{i} -b_iy_i^- \\ \text{s.t. } \sum_{i} a_{ij}y_i^+ +\sum_{i} -a_{ij}y_i^- \ge c_j \quad (j=1,\dots,n) \\ y_i^+,y_i^- \ge 0 \quad (i=1,\dots,m)$$ Collecting $y_i = y_i^+-y_i^-$, we get the expected form: $$\min v=\sum_{i} b_iy_i \\ \text{s.t. } \sum_{i} a_{ij}y_i \ge c_j \quad (j=1,\dots,n) \\ y_i \text{ unrestricted} \quad (i=1,\dots,m)$$
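To see the splitting $y_i = y_i^+ - y_i^-$ at work on the problem in the question, here is a small numerical sketch (my own illustration, assuming SciPy's linprog); it recovers the unrestricted dual solution from the two nonnegative blocks:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, -1, -1, -2],
              [1,  1,  0,  1],
              [1,  2, -1,  0]], dtype=float)   # primal constraint matrix
b = np.array([2.5, 8, 4])
c = np.array([1, 1, -2, 2], dtype=float)

# Split-variable dual: minimize b'y+ - b'y-  s.t.  A^T (y+ - y-) >= c,  y+, y- >= 0.
obj  = np.concatenate([b, -b])
A_ub = -np.hstack([A.T, -A.T])                 # flip A^T(y+ - y-) >= c into <= form
b_ub = -c

res = linprog(obj, A_ub=A_ub, b_ub=b_ub)       # default bounds keep y+, y- >= 0
y = res.x[:3] - res.x[3:]                      # collect y = y+ - y-
print(y, res.fun)                              # approx. [-1.5, -1, 3.5] with value 2.25
```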


Additional response to updated question

So, either this version of the theorem is false, or it doesn't hold when all constraints are satisfied with equality in the primal problem, and that is my doubt.

My correction

  • This version of the theorem is true.
  • It does hold when all constraints are satisfied with equality in the primal problem, and
  • that is not a doubt after all, since this is essentially the Strong Duality Theorem.

Remarks:

  1. After answering this question, I'll change my old complementary slackness habits and solve problems faster using the SOB table.

  2. I found this part of the question body difficult to understand:

    by the version of the Complementary Slackness Theorem I am familiar with ... and in this case, ALL constraints are satisfied with equality.

    It takes time to guess where the constraints are (in the primal or the dual?). To be concise and precise, I'll state complementary slackness in words (which also facilitates oral communication when there's no writing tool).

    In the optimal feasible solution,

    • A nonzero primal decision variable makes its corresponding dual slack/surplus variable zero.
    • A nonzero primal slack/surplus variable makes its corresponding dual decision variable zero.

    N.B.: The words "primal" and "dual" can be interchanged in the above sentences.

    The advantage of speaking of "primal/dual slack/surplus variables" rather than "the inequality constraint is satisfied with equality" is conciseness and clarity: you won't confuse the given equality constraints with the ones found by complementary slackness.

Verification using your optimal solution

I suggest you try it yourself first, then check the solution.

Given the candidate solution $\mathbf{x}^{\ast} = (7.25, 0 , 3.25 , 0.75)$, apply complementary slackness. $$\begin{cases}x_1^*:& y_1+y_2+y_3 &= 1\\x_3^*:& -y_1\phantom{+y_2}-y_3 &= -2\\x_4^*:& -2y_1+y_2\phantom{+y_3} &= 2\end{cases}$$ Explanation: nonzero 1st, 3rd & 4th primal decision variables $\implies$ zero 1st, 3rd & 4th dual surplus variables. Solving this linear system gives $\mathbf{y}^*=(-1.5,-1,3.5)$. Check that the 2nd dual constraint is satisfied: $$-y_1^{\ast} + y_2^{\ast} + 2y_3^{\ast}=1.5 -1 + 2(3.5)=7.5 \ge 1,$$ then check optimality: $$2.5y_1^*+8y_2^*+4y_3^*=2.5(-1.5)+8(-1)+4(3.5)=-3.75-8+14=2.25 \\ x_1^* + x_2^* - 2x_3^* + 2x_4^*=7.25+0-2(3.25)+2(0.75)=2.25.$$ So we can conclude optimality from the weak duality theorem.
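The same computation can be reproduced in a few lines (a sketch with NumPy, mirroring the steps above):

```python
import numpy as np

# Dual constraints 1, 3 and 4 hold with equality because x1*, x3*, x4* > 0.
M = np.array([[ 1, 1,  1],
              [-1, 0, -1],
              [-2, 1,  0]], dtype=float)
rhs = np.array([1, -2, 2], dtype=float)
y = np.linalg.solve(M, rhs)
print(y)                              # [-1.5 -1.   3.5]

# Remaining dual constraint, dual objective, and primal objective.
print(-y[0] + y[1] + 2*y[2])          # 7.5 >= 1, so y* is dual feasible
print(2.5*y[0] + 8*y[1] + 4*y[2])     # 2.25
x = np.array([7.25, 0, 3.25, 0.75])
print(x @ np.array([1, 1, -2, 2]))    # 2.25, equal objective values
```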

  • Thank you, that was very clear. The source material I am following (AND my lectures) didn't even mention any method of constructing the dual. That was what I missed. – 園田海未 Dec 14 '17 at 15:57
  • Ok. The dual was wrong, but that doesn't help me get out of the situation where the system has no solution. If the dual variables are unrestricted (which they are in this case), does that mean I can't apply the Complementary Slackness Theorem to check optimality? – 園田海未 Dec 14 '17 at 17:53
  • @B.Chinaski My habit with complementary slackness is to always start from inequality constraints instead of "=" constraints, and to use nonnegative variables instead of unrestricted variables. Otherwise, I believe you're going to leave out variables. Since I'm not doing any ([tag:linear-programming]) for my studies now (I'm studying ([tag:probability-theory]).), I'm not going to justify this belief, but that's the usual practice to find solutions in the primal and dual problems. – GNUSupporter 8964民主女神 地下教會 Dec 14 '17 at 19:22
  • @B.Chinaski Even though your shifting question makes my answer incomplete, I'll still strive to get you out of the problem. To sum up, to address your new doubt, I suggest you straighten things out by writing the problem in standard form, find the optimal solution by the (dual) simplex algorithm, then use the usual version (p. 14 out of 23) of complementary slackness to find the dual solution, and finally substitute things back into the LP problem in the question. – GNUSupporter 8964民主女神 地下教會 Dec 14 '17 at 20:00
  • Finding out the explanation for the doubt in the question is meaningful to you, but I regret not sharing this with you, as (1) I'm not currently working on this branch of maths, (2) I am satisfied with my habits and versions of complementary slackness, and I don't have any problems with them; I have used them to solve others' problems, and (3) I'm tired of writing detailed answers to ([linear-programming]) questions after receiving few responses. – GNUSupporter 8964民主女神 地下教會 Dec 14 '17 at 20:22
  • You've been very helpful, @GNUSupporter. I have to try your new suggestion in detail later; at a brief look it seems that it would work. Also, I don't think I've changed the question, it just wasn't very clear where the problem was (my fault, as you pointed out: I was not writing the dual correctly), and then your explanation made it clear to me where the problem is. – 園田海未 Dec 14 '17 at 20:27
  • Sorry for splitting my response into multiple comments due to the word limit. Since maths is not a spectator sport, in the long run you'll remember the solution to your doubt by verifying the variants of a fancy result from the basic version(s) that you know. That's another reason I invite you to do the verification yourself and show us where you're stuck. (As my user name suggests, I welcome the use of free software to save time.) See, for example, this application of Farkas' Lemma to show that exactly one of the two systems is feasible. – GNUSupporter 8964民主女神 地下教會 Dec 14 '17 at 20:29
  • As you said, I am also very disappointed, not only with the linear-programming tag, but also with the area in general. It seems all that people do is throw out 2 or 3 theorems, never really work any problem by hand (because "it's trivial"), and then solve it with software. I have never studied anything so cryptic and messy as linear programming, and I'm even starting to think that the "rigorous" theory behind it adds little to nothing in practical terms. – 園田海未 Dec 14 '17 at 20:31
  • @B.Chinaski You're right to make the question clearer. If you had been taught by my first LP lecturer, you'd change your mind. Since her notes are copyrighted, I can't share them with you. (In fact, I don't have access to the soft copy now.) It depends on the perspective from which you're considering it. Practically, one is more interested in finding the solutions than in exploring their theoretical properties. Since my goal is to learn modern probability theory (rather theoretical and abstract, with Kolmogorov's axioms as a starting point), I enjoyed thinking from the theoretical side. – GNUSupporter 8964民主女神 地下教會 Dec 14 '17 at 20:41
  • In principle, the "rigorous" theory is the cornerstone of the whole building, as the name "Fundamental Theorem of LP" suggests. This result reassures us of the validity of the simplex algorithm by proving the equivalence between the existence of an FS (feasible solution) and that of a BFS (basic feasible solution). There are infinitely many of the former and only finitely many of the latter; computers can't work with the former, but they can with the latter. IMHO, that really boosted my confidence in LP, so I was motivated to post so many answers under this tag. – GNUSupporter 8964民主女神 地下教會 Dec 14 '17 at 20:58
  • Last but not least, without actual calculations, it's possible that the problem is infeasible. You may want to use an online version of Octave to verify the related code for this answer and save time. – GNUSupporter 8964民主女神 地下教會 Dec 14 '17 at 21:01
  • The solution is optimal, I checked that right at the start. And I am aware of the importance of a theoretical formulation of this theory, but maybe I thought there would be more practical results (this being an "applied math" area). Maybe that was a misconception of mine. It seems that LP is the "PDEs" of this area, where you generally know how things should work but can get lost in some very "small" problems. Anyway, I'm kind of burned out on it today, so I will get back to it tomorrow and hope to solve this problem for good. Thanks again, @GNUSupporter :) – 園田海未 Dec 14 '17 at 22:41
  • @B.Chinaski After over 10 hours of inactivity on this question, I changed my mind and updated my solution, since I find my old habit unnecessary for solving this problem. In that respect, my very first comment is quite misleading. I hope this update clears up your doubts. – GNUSupporter 8964民主女神 地下教會 Dec 15 '17 at 19:13
  • That's great, @GNUSupporter. The whole confusion seemed to come from a misreading (or a miswriting) of the theorem. Yesterday a colleague clarified for me how one of the equations gets suppressed whenever the corresponding primal variable is zero, and that solved the problem. I didn't have time to post here yesterday, but you nailed it, and I am very thankful for it (and for all your help in between). – 園田海未 Dec 16 '17 at 13:21