I think most of this confusion is coming from a deeper confusion about how programs-as-proofs works, so I'll discuss that. At the end, it will hopefully be clear why we get intuitionistic logics in this way, unless we're actively trying to get something classical.
The idea is that our types should correspond to "propositions" of interest, and then programs inhabiting those types are "proofs" that that proposition is true. Let's start small and work our way up. As a simple example, let's see a proof that $A \to A$.
We literally want to write a function which takes in a proof $a$ of $A$ (which we write as $a : A$) and then outputs a proof of $A$. If we use syntax from lambda calculus, we have
$$ \lambda (a : A) . a \ \ : A \to A$$
this is the function which takes in $a$ as input, and just... outputs $a$. So we've successfully turned a proof of $A$ into a proof of $A$ (which wasn't very hard). But notice this program itself has a type, namely $A \to A$. So this program is a proof that $A \to A$, and we know that $A \to A$ is a true proposition.
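To make this concrete, here is the same proof written as a runnable program, a minimal sketch in Haskell (the name `proveId` is my own, not standard):

```haskell
-- A proof of A -> A: the polymorphic identity function.
-- The type variable `a` stands in for the proposition A.
proveId :: a -> a
proveId x = x
```

The fact that `proveId` type-checks at `a -> a` for every `a` is exactly the statement that $A \to A$ holds for every proposition $A$.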
Let's get a little bit fancier. Can we show $A \land B \to A$? In programming languages we typically write $\times$ instead of $\land$ for types, but they're the same thing.
We take in a pair of proofs $(a,b) : A \times B$. We want to output a proof of $A$. I think it should be clear how to do this:
$$ \lambda \left ( (a,b) : A \times B \right ) . a \ \ : A \times B \to A.$$
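In Haskell, where pairs play the role of $\times$, this proof is just the first projection (again a sketch; the name is mine):

```haskell
-- A proof of (A ∧ B) -> A: project out the first component of the pair.
proveFst :: (a, b) -> a
proveFst (x, _) = x
```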
Let's try De Morgan's laws. If $\mathbb{0}$ is the empty type, we associate it with "false", because there's nothing inside it, so it is unprovable. Then we think of $\lnot A$ as an abbreviation for $A \to \mathbb{0}$: if $A$ implies falsehood, then $A$ had better not be provable. Notice that defining $\lnot A$ in terms of $A$ and false is already an intuitionistic thing to do.
But now, let's prove $\lnot (A \lor B) \to \lnot A \land \lnot B$. Again, we typically write $+$ instead of $\lor$ (it's a sum type) and $\times$ instead of $\land$.
We take in a proof (or a program) $f : (A + B) \to \mathbb{0}$. We want to spit out a pair of proofs $g_A : A \to \mathbb{0}$ and $g_B : B \to \mathbb{0}$. But that's easy to do! Since our input $f$ can take either an $A$ or a $B$ as input, we're golden!
$$
\lambda \left ( f : (A + B) \to \mathbb{0} \right ) .
\left ( \lambda (a : A) . f (\mathtt{inl}\ a), \ \lambda (b : B) . f ( \mathtt{inr}\ b) \right )
\ \ : ((A + B) \to \mathbb{0}) \to (A \to \mathbb{0}) \times (B \to \mathbb{0})
$$
With our abbreviation, this program has type $\lnot(A + B) \to \lnot A \times \lnot B$.
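As a sanity check, the same proof can be transcribed into Haskell, using `Void` for $\mathbb{0}$ and `Either` for $+$, with `Left` and `Right` playing the roles of $\mathtt{inl}$ and $\mathtt{inr}$ (the `Not` synonym is my own abbreviation):

```haskell
import Data.Void (Void)

-- Encode ¬A as A -> Void, mirroring ¬A := A -> 𝟘.
type Not a = a -> Void

-- De Morgan: ¬(A ∨ B) -> (¬A ∧ ¬B).
deMorgan :: Not (Either a b) -> (Not a, Not b)
deMorgan f = (\a -> f (Left a), \b -> f (Right b))
```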
I'm mainly talking about how programs can be interpreted as proofs, since you mentioned you're more a mathematician than a programmer. But the correspondence goes both ways. Say you want to prove that your programming language has nice properties. It's helpful to mathematically formalize how your programming language behaves, and we do this with the language of (intuitionistic) logic, with operational semantics.
Now! Notice the running theme in each of these cases. How are we able to prove a proposition $A$ is true? We have to build a program which inhabits its type. In fact, this is the only way to prove a proposition. But there's a reason to care about this! Now all of our proofs have "computational content". You can imagine having a proof of some complicated implication $A \to B$. If we do this constructively (that is, intuitionistically), then if you give me an $a : A$, I know how to actually convert it into a $b : B$. That's really cool!
If you think about Brouwer's fixed point theorem, it tells us we have an implication
$$
\{ f : D^2 \to D^2 \}
\to
\{ x : D^2 \mid f(x) = x \}
$$
but, rather aggravatingly, the classical proof of this fact doesn't tell you how to find the $x$. It tells you that something exists, but gives you no idea how to get your hands on it. Intuitionistically, this is not possible. Every proof has to witness the object it claims exists by actually building it. This is great, because an intuitionistic proof of Brouwer's fixed point theorem is basically an algorithm taking in a continuous function $f : D^2 \to D^2$ and spitting out a point $x$ fixed by $f$, and if we ever have a real-life function $f$ we're interested in, we can evaluate the proof with that input and it will actually hand us a fixed point in return.
Now. Why is this all intuitionistic? What does this have to do with $\mathsf{LEM}$ or $\mathsf{DNE}$?
I think it's slightly clearer with $\mathsf{DNE}$, so let's use that. $\mathsf{DNE}$ tells us that, for every $A$, there's something inhabiting $\lnot \lnot A \to A$.
But what does that mean? It means there's a function
$$\mathtt{dne}_A : ((A \to \mathbb{0}) \to \mathbb{0}) \to A$$
and I encourage you to try to write one. The problem is that knowing $\lnot A$ is false doesn't furnish us with a proof of $A$! There's no way to build a term $a : A$ from our input $f : (A \to \mathbb{0}) \to \mathbb{0}$, and if you use Heyting algebras you can prove that no such term can possibly exist.
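By contrast, the other direction, double negation *introduction* $A \to \lnot \lnot A$, is easy to write, which makes the asymmetry vivid. A sketch in Haskell, with `Not a` abbreviating `a -> Void` as before:

```haskell
import Data.Void (Void)

type Not a = a -> Void

-- A -> ¬¬A: a proof of A refutes any refutation of A.
dni :: a -> Not (Not a)
dni a = \f -> f a

-- The reverse, dne :: Not (Not a) -> a, has no implementation:
-- there is no way to conjure an `a` out of a function into Void.
```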
Now, there's nothing stopping you from saying "oh, I actually want to put a term $\mathtt{dne}_A : \lnot \lnot A \to A$ into my programming language as a primitive!" You can totally do that, and plenty of people have. The difficulty is that it doesn't compute anything. How do you interpret a term of that type if you're a programmer? What does it do?
In my mind, every answer to this question is a bit contrived. The best answer we have is continuations, which are worth reading about if you're curious.
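For a taste of what continuations buy you: Haskell's continuation monad provides `callCC`, whose type is a `Cont`-wrapped version of Peirce's law $((A \to B) \to A) \to A$, another classical principle that is not provable intuitionistically. A small sketch, assuming the `transformers` package (which ships with GHC):

```haskell
import Control.Monad.Trans.Cont (evalCont, callCC)

-- callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
-- is Peirce's law, up to the Cont wrapper. Invoking the captured
-- continuation `k` aborts the rest of the computation.
escape :: Int
escape = evalCont $ callCC $ \k -> do
  _ <- k 42      -- jump out immediately with 42
  return 0       -- never reached
```

Here `escape` evaluates to `42`: the "classical" move is the nonlocal jump, which is exactly the operational behavior that a pure intuitionistic term can't express.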
If you want to read more about this stuff as a mathematician, you should try the lecture notes from CMU's class on constructive logic (15-317). That's where I learned a lot of this, and the notes are available online.
If you want to read more about this stuff as a programmer, or rather someone who builds programming languages, you should read Harper's Practical Foundations for Programming Languages. Since I'm linking old classes, I learned a lot of this stuff in 15-312 (which is taught by Harper), and its lecture notes are also available online.
I hope this helps ^_^