I have a question about attacks on implementations of cryptographic code that are enabled by compilation and compiler optimisations. I am aware of this. Would anyone be able to point me to other examples? In particular, are there examples of timing attacks that are enabled by compilation (in particular by compiler optimisations) and that are not possible w.r.t. the semantics of the source language?
$\mathbf{Edit:}$ Let me illustrate what I mean by this in two ways.
First, sometimes the semantics of a programming language is given using a one-step reduction relation. In particular, the one-step reductions for equality might be specified like this:
- For all values $x$: $x = x \rightarrow \mathtt{true}$.
- For all values $x, y$ such that $x$ is not equal to $y$: $x = y \rightarrow \mathtt{false}$.
You can see each one-step reduction as taking one unit of time, so in this model all computations of equality take the same amount of time. But for complex data structures such as strings, lists or arrays, compilation typically produces code whose run-time is data-dependent (or whose power consumption or EM radiation is, or whatever your favourite side-channel happens to be).
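To make this concrete, here is a hypothetical C sketch of my own (not from any particular compiler's output): the first function is what compiled string equality typically looks like, with an early exit whose timing leaks the position of the first mismatch, even though the source-level rule $x = y \rightarrow \mathtt{false}$ is a single reduction step. The second is the usual constant-time countermeasure that accumulates all differences so every byte is touched on every call.

```c
#include <stddef.h>

/* Typical compiled equality: exits at the first mismatching byte,
 * so running time depends on the length of the common prefix. */
int leaky_equal(const unsigned char *a, const unsigned char *b, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (a[i] != b[i])
            return 0;   /* early exit: timing reveals mismatch position */
    }
    return 1;
}

/* Common countermeasure: OR all byte differences into an accumulator,
 * so every call inspects all n bytes regardless of the inputs. */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char acc = 0;
    for (size_t i = 0; i < n; i++)
        acc |= a[i] ^ b[i];
    return acc == 0;
}
```

Both functions compute the same boolean result; they differ only in the side channel, which is exactly the distinction the one-step semantics cannot express.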
Second, I know that not all programming languages have nice, simple operational semantics like that. So another way of looking at the same problem is: if I compile a program twice, once with little or no optimisation and once with a high degree of optimisation, would the latter leave my executable vulnerable to attacks in ways that the former does not? I'm thinking in particular of the optimisations that (tracing) JIT compilers perform.
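One concrete way this can happen, sketched below as a hypothetical illustration: C's "as-if" rule only requires the compiler to preserve observable behaviour (return values, I/O), not timing. The two functions below are equivalent under the C abstract machine, so an optimiser is in principle free to rewrite the intended constant-time version into the early-exit version, since once the accumulator is nonzero the remaining iterations cannot change the result.

```c
#include <stddef.h>

/* What the programmer wrote, intending every byte to be touched. */
int ct_compare(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char acc = 0;
    for (size_t i = 0; i < n; i++)
        acc |= a[i] ^ b[i];
    return acc == 0;
}

/* What an optimiser may legitimately produce: once acc != 0 the result
 * is fixed, so the remaining loop iterations are dead and can be
 * replaced by an early exit, reintroducing the data-dependent timing. */
int optimised_compare(const unsigned char *a, const unsigned char *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}
```

At -O0 the first function plausibly compiles as written; at higher optimisation levels (or under a JIT that specialises hot loops on observed data) no language-level rule forbids the second form, which is exactly the "enabled by optimisation" attack surface I am asking about.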
PS I don't have enough reputation to give more/better tags to my question. Please feel free to do so.