13

Does coNP-completeness imply NP-hardness? In particular, I have a problem that I have shown to be coNP-complete. Can I claim that it is NP-hard? I realize that I can claim coNP-hardness, but I am not sure if that terminology is standard.

I am comfortable with the claim that if an NP-complete problem belonged to coNP, then NP=coNP. However, these lecture notes state that if an NP-hard problem belongs to coNP, then NP=coNP. This would then suggest that I cannot claim that my problem is NP-hard (or that I have proven coNP=NP, which I highly doubt).

Perhaps, there is something wrong with my thinking. My thought is that a coNP-complete problem is NP-hard because:

  1. every problem in NP can be reduced to its complement, which will belong to coNP.
  2. the complement problem in coNP reduces to my coNP-complete problem.
  3. thus we have a reduction from every problem in NP to my coNP-complete problem, so my problem is NP-hard.
Austin Buchanan
  • in a word, no! at least based on current knowledge. the question is closely connected to P=?NP (or more strictly coNP=?NP which is also open). note that if coNP≠NP is proven then P≠NP is also proven because P is closed under complement. – vzn Oct 23 '13 at 20:50

2 Answers

10

You claim that every problem in NP can be reduced to its complement, and this is true for Turing reductions, but (probably) not for many-one reductions. A many-one reduction from $L_1$ to $L_2$ is a polytime function $f$ such that for all $x$, $x \in L_1$ iff $f(x) \in L_2$.
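The distinction can be made concrete with a toy pair of languages (hypothetical, chosen only for illustration — both are in P, so unlike the NP/coNP situation a many-one reduction to the complement actually exists here):

```python
# Toy illustration. L1 = binary strings of even parity; L2 = its complement
# (odd parity). A Turing reduction may query an "oracle" for L2 and
# post-process the answer; a many-one reduction must map an instance of L1
# to a single instance of L2 so that membership is preserved.

def in_L2(x: str) -> bool:
    # "Oracle" for L2: strings with an odd number of 1s.
    return x.count("1") % 2 == 1

def turing_decides_L1(x: str) -> bool:
    # Turing reduction: one oracle query, then flip the answer.
    return not in_L2(x)

def many_one_reduce(x: str) -> str:
    # Many-one reduction f from L1 to L2: append a '1', flipping the parity,
    # so x is in L1 iff f(x) is in L2. Such an f happens to exist for this
    # toy pair; for an NP language and its coNP complement, no polytime
    # many-one reduction is known.
    return x + "1"

assert turing_decides_L1("1010")            # even parity, so in L1
assert in_L2(many_one_reduce("1010"))       # f preserves membership
assert not in_L2(many_one_reduce("1"))      # and non-membership
```

The point is that "flip the answer" is exactly the step a many-one reduction is not allowed to take: it must hand over a single transformed instance and accept the target language's verdict as-is.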

If some problem $L$ in coNP were NP-hard, then for any language $M \in NP$ there would be a polytime function $f$ such that for all $x$, $x \in M$ iff $f(x) \in L$. Since $L$ is in coNP, this gives a coNP algorithm for $M$, showing that NP$\subseteq$coNP, and so NP$=$coNP. Most researchers don't expect this to be the case, and so problems in coNP are probably not NP-hard.

The reason we use Karp reductions rather than Turing reductions is so that we can distinguish between NP-hard and coNP-hard problems. See this answer for more details (Turing reductions are called Cook reductions in that answer).

Finally, coNP-hard and coNP-complete are both standard terminology, and you are free to use them.

Yuval Filmus
  • "but not for many-one reductions" - isn't the problem of deciding $\text{NP} \overset{?}{=} \text{coNP}$ exactly that we don't know whether there are Karp-reductions from a ($\text{co}$)$\text{NP}$-language to its complement? – G. Bach Oct 23 '13 at 21:53
  • That's correct, and that's also what I show in the answer. When I stated that it's not true for many-one reductions, I didn't mean it in the strictly logical sense, but rather in the sense that "the reduction you are thinking of is a Turing reduction but not a many-one reduction". – Yuval Filmus Oct 23 '13 at 21:58
  • Oh alright, yes that's probably the problem. – G. Bach Oct 23 '13 at 22:10
  • Thanks. What's a good reference for this? In particular for "NP=coNP under Cook reductions, but it is thought that they are different w.r.t. Karp reductions"? – Austin Buchanan Dec 20 '13 at 04:35
  • The belief that NP is different from coNP is rather widespread. Sometimes it is attributed to Stephen Cook. That NP-hardness is the same as coNP-hardness under Cook reductions follows immediately from the definition. – Yuval Filmus Dec 20 '13 at 07:57
6

The problem with that line of reasoning is the first step. In the deterministic case, a TM $\text{M}$ that decides $x \in L$ immediately decides $x \notin \overline{L}$ as well: just flip the output bit of $\text{M}$, which works because the output of $\text{M}$ depends only on $x$ (contrast this with the verifier definition of $\text{NP}$ below).
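That bit-flip argument can be sketched in a few lines (the decider below is a hypothetical stand-in for an arbitrary deterministic decision procedure):

```python
# Sketch: for a deterministic decider, complementation is trivial --
# run the same procedure on the same input and negate the answer.

def decides_L(x: str) -> bool:
    # Hypothetical deterministic decider for some language L.
    return len(x) % 2 == 0

def decides_complement(x: str) -> bool:
    # Decider for the complement of L: flip the output bit.
    return not decides_L(x)

assert decides_L("ab") and not decides_complement("ab")
assert not decides_L("abc") and decides_complement("abc")
```

This is precisely what fails for verifiers: there the answer depends on a certificate as well as on $x$, so negating the output does not complement the language.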

In the nondeterministic case, using the verifier definition, it is not known whether you can build an $\text{NP}$-verifier from a $\text{coNP}$-verifier or vice versa; the obstacle is that the two definitions quantify over certificates differently. Let $L \in \text{coNP}$; then we have a verifier DTM $\text{M}$ and a polynomial $p$ such that:

$$x \in L \iff \forall z \in \{0,1\}^{p(|x|)}:\text{M}(x,z) = 1$$

For $\overline{L}$, the verifier $\text{M'}$ must fulfill (for some polynomial $q$)

$$x \in \overline{L} \iff \exists z \in \{0,1\}^{q(|x|)}:\text{M'}(x,z) = 1$$

Why can't we then just use the $\text{NP}$-verifier $\text{M'}$ of the language $\overline{L}$ to build a $\text{coNP}$-verifier for $\overline{L}$? The problem is the $\forall$-quantifier required of a $\text{coNP}$-verifier: the $\text{NP}$-verifier $\text{M'}$ may output $0$ on some (wrong) certificate even when $x \in \overline{L}$, so you can't simply turn the $\exists$ into a $\forall$.

Maybe more abstractly: it's not clear how to build (in polynomial time) a machine that accepts exactly the elements of a language no matter which certificate accompanies them, from a machine that accepts exactly the elements that have *some* valid certificate, but which may also reject many invalid certificates for those very same elements.
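A brute-force sketch makes the quantifier gap visible (the verifier here is a toy, hypothetical one; real certificates have polynomial length, so this exhaustive loop is only an illustration, not an algorithm):

```python
from itertools import product

def np_accepts(verifier, x, cert_len):
    # NP-style acceptance: THERE EXISTS a certificate the verifier accepts.
    return any(verifier(x, "".join(z)) for z in product("01", repeat=cert_len))

def conp_accepts(verifier, x, cert_len):
    # coNP-style acceptance: the verifier accepts for ALL certificates.
    return all(verifier(x, "".join(z)) for z in product("01", repeat=cert_len))

def verifier(x, z):
    # Toy verifier: accepts iff the certificate z equals x itself,
    # i.e. every string has exactly one valid certificate.
    return x == z

def flipped(x, z):
    # Naively "complementing" the verifier by flipping its output bit.
    return not verifier(x, z)

x = "10"
assert np_accepts(verifier, x, 2)        # some certificate works for x
assert np_accepts(flipped, x, 2)         # ...but some certificate also fails,
                                         # so the flipped verifier accepts x too
assert not conp_accepts(verifier, x, 2)  # and x fails the forall-condition
```

Flipping the verifier's output turns "some certificate works" into "some certificate fails", not into "no certificate works" — which is exactly why the $\exists$/$\forall$ asymmetry blocks step 1 of the question's argument.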

G. Bach
    Surprisingly, however, it is known that NL=coNL, NPSPACE=coNPSPACE, and in general non-deterministic classes defined by space constraints are closed under complementation. This is the Immerman-Szelepcsényi theorem. – Yuval Filmus Oct 23 '13 at 22:20
  • Interesting, I didn't know that - but the intuition behind it probably is the way it always is with space classes: we can just reuse the space. – G. Bach Oct 23 '13 at 22:23
  • @G.Bach Not really, no. NL=co-NL is established by showing that $s$-$t$-non-connectivity is in NL. For larger space classes (the theorem only applies for space at least $\log n$), you use $s$-$t$-(non)-connectivity on the configuration graph of the relevant Turing machine. – David Richerby Oct 23 '13 at 22:55