There's been a lot of hype about JIT compilers for languages like Java, Ruby, and Python. How are JIT compilers different from C/C++ compilers, and why are the compilers written for Java, Ruby or Python called JIT compilers, while C/C++ compilers are just called compilers?
2 Answers
JIT compilers compile code on the fly, right before it executes or even while it is already executing. This way, the VM in which the code runs can watch for patterns in the code's execution and apply optimizations that are only possible with run-time information. Furthermore, if the VM decides that a compiled version is not good enough for whatever reason (e.g., too many cache misses, or code frequently throwing a particular exception), it may recompile it in a different way, leading to much smarter compilation.
C and C++ compilers, on the other hand, are traditionally not JIT compilers. They compile the code once, ahead of time, on the developer's machine, and produce an executable.
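To make the difference visible in practice, here is a minimal, unscientific sketch (the class name, workload, and iteration counts are made up): run the same method repeatedly on a JVM and time each round. On a typical HotSpot JVM the later rounds are usually faster because the hot method has been JIT-compiled in the meantime, whereas the machine code of a C/C++ binary would not change between rounds.

public class JitWarmup {
    // Arbitrary numeric work that the JIT can optimise once the method is hot.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (i % 7 == 0) ? 3L * i : i;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 10; round++) {
            long start = System.nanoTime();
            long result = work(5_000_000);
            long elapsed = System.nanoTime() - start;
            // Early rounds typically run interpreted; once HotSpot decides
            // work() is hot, later rounds usually get noticeably faster.
            System.out.printf("round %d: %,d ns (result %d)%n", round, elapsed, result);
        }
    }
}

(Proper benchmarking needs a harness such as JMH; this is only meant to make the "compiled while running" idea observable.)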

- Are there JIT compilers that keep track of cache misses and adapt their compilation strategy accordingly? @Victor – Martin Berger Mar 22 '13 at 15:40
- @MartinBerger I don't know whether any available JIT compilers do that, but why not? As computers grow more powerful and computer science develops, people can apply more optimizations. For example, when Java was born it was very slow, maybe 20 times slower than compiled languages, but now its performance is not lagging so far behind and may be comparable to some compiled code. – phuclv Mar 11 '14 at 07:02
- I heard this in some blog post a long time ago: the compiler may compile differently depending on the current CPU. For example, it may vectorize some code automatically if it detects that the CPU supports SSE/AVX; when newer SIMD extensions become available it may target those, so the programmer doesn't need to change anything. Or it can arrange the operations/data to take advantage of a larger cache or to reduce cache misses. – phuclv Mar 11 '14 at 07:05
- @LưuVĩnhPhúc Hi! I'm happy to believe that, but the difference is that things like CPU type or cache size are static and don't change throughout the computation, so a static compiler could easily use them too. Cache misses, on the other hand, are very dynamic. – Martin Berger Mar 11 '14 at 12:16
JIT is short for just-in-time compilation, and the name says what it does: during runtime, it determines worthwhile code optimisations and applies them. It does not replace the usual compiler but is part of the interpreter (or virtual machine). Note that languages like Java that use intermediate code have both: a normal compiler that translates source to intermediate code, and a JIT compiler included in the runtime for performance boosts.
Code optimisations can certainly be performed by "classical" compilers, but note the main difference: JIT compilers have access to data at runtime. This is a huge advantage; exploiting it properly may be hard, obviously.
Consider, for example, code like this:
m(a : String, b : String, k : Int) {
    val c : Int;
    switch (k) {
        case 0  : { c = 7; break; }
        ...
        case 17 : { c = complicatedMethod(k, a+b); break; }
    }
    return a.length + b.length - c + 2*k;
}
A normal compiler cannot do much about this. A JIT compiler, however, may detect that m is only ever called with k==0 for some reason (stuff like that can happen as code changes over time); it can then create a smaller version of the code (and compile it to native code, although I consider this a minor point, conceptually):
m(a : String, b : String) {
    return a.length + b.length - 7;
}
At this point, it will probably even inline the method call as it is trivial now.
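For readers who want to experiment, here is a hypothetical Java rendering of the example above (complicatedMethod and the call counts are placeholders, not part of the original); the point is simply that m is only ever called with k == 0:

public class Specialise {
    static int complicatedMethod(int k, String s) {
        return s.length() % (k + 1);   // stand-in for something expensive
    }

    static int m(String a, String b, int k) {
        int c;
        switch (k) {
            case 0:  c = 7; break;
            case 17: c = complicatedMethod(k, a + b); break;
            default: c = 0; break;
        }
        return a.length() + b.length() - c + 2 * k;
    }

    public static void main(String[] args) {
        int total = 0;
        for (int i = 0; i < 1_000_000; i++) {
            total += m("foo", "barbaz", 0);   // k is always 0 here
        }
        System.out.println(total);
    }
}

Running this with java -XX:+PrintCompilation Specialise shows when HotSpot compiles m; whether the switch is actually specialised away depends on the particular JVM and version.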
Apparently, Sun dropped most of the optimisations javac used to perform in Java 6; I have been told that those optimisations made it hard for the JIT to do much, and naively compiled code ran faster in the end. Go figure.

- By the way, in the presence of type erasure (e.g. generics in Java) the JIT is actually at a disadvantage w.r.t. types. – Raphael Mar 26 '12 at 17:21
- So the runtime is more complicated than that of a compiled language, because the runtime environment must collect data in order to optimize. – mrk Mar 21 '13 at 23:44
- In many cases, a JIT wouldn't be able to replace m with a version that didn't check k, since it would be unable to prove that m would never be called with a non-zero k. But even without being able to prove that, it could replace it with: static miss_count; if (k==0) return a.length+b.length-7; else if (miss_count++ < 16) { ... unoptimized code for m ... } else { ... consider other optimizations ... }. – supercat Oct 13 '17 at 00:03
- I agree with @supercat and think your example is actually pretty misleading. Only if k == 0 always, meaning it's not a function of the input or environment, is it safe to drop the test -- but determining this requires static analysis, which is very costly and thus only affordable at compile time. A JIT can win when one path through a block of code is taken much more often than others, and a version of the code specialised for this path would be much faster. But the JIT will still emit a test to check whether the fast path applies, and take the "slow path" if not. – j_random_hacker Apr 04 '18 at 08:12
- An example: a function takes a Base* p parameter and calls virtual functions through it; runtime analysis shows that the actual object pointed to always (or nearly always) seems to be of Derived1 type. The JIT could produce a new version of the function with statically resolved (or even inlined) calls to Derived1 methods. This code would be preceded by a conditional that checks whether p's vtable pointer points to the expected Derived1 table; if not, it instead jumps to the original version of the function with its slower dynamically resolved method calls. – j_random_hacker Apr 04 '18 at 08:18
- @j_random_hacker Right. Maybe I should have used weaker language, but I think my point stands. A JIT can vastly speed up such code, dynamic check or no, while a static compiler has no chance to even begin (which value would it specialize for?). – Raphael Apr 09 '18 at 11:15
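To make the pattern described in the comments above concrete, here is a rough hand-written sketch (not real JIT output; all names are invented for illustration) of guarded specialisation: the check on k stays, the common case becomes cheap, and a miss counter hints when the specialised version should be abandoned.

class GuardedSpecialisation {
    static int missCount = 0;   // how often the guard failed

    // What a JIT-specialised entry point for m might look like if written by hand.
    static int m(String a, String b, int k) {
        if (k == 0) {
            // Fast path: specialised for the common case k == 0.
            return a.length() + b.length() - 7;
        }
        // Slow path: fall back to the general code. A real JIT that sees this
        // happen too often would deoptimise and choose a different strategy.
        missCount++;
        return mGeneral(a, b, k);
    }

    static int mGeneral(String a, String b, int k) {
        int c;
        switch (k) {
            case 0:  c = 7; break;
            // ... other cases as in the original method ...
            default: c = 0; break;
        }
        return a.length() + b.length() - c + 2 * k;
    }
}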