The problem is more complex than simply swapping EGA out for VGA.
In fact, the VGA interface grew out of the EGA interface, which grew out of the CGA interface, which grew out of the MDA interface, which is why the interrupt call schemes look so similar.
Basically the idea was "here's interrupt 0x10... and a few simple functions for setting the graphics mode and plotting pixels... go!"
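To give a sense of those "few simple functions", here is a minimal sketch of calling them from C. It assumes a 16-bit DOS compiler in the Turbo/Borland C family, which provides int86() and union REGS in dos.h; function 00h sets the video mode and function 0Ch plots a pixel.

    #include <dos.h>

    /* Set a BIOS video mode via INT 10h, function 00h. */
    void bios_set_mode(unsigned char mode)
    {
        union REGS r;
        r.h.ah = 0x00;          /* function 00h: set video mode        */
        r.h.al = mode;          /* e.g. 0x13 for 320x200 in 256 colors */
        int86(0x10, &r, &r);
    }

    /* Plot one pixel via INT 10h, function 0Ch (the slow way). */
    void bios_put_pixel(int x, int y, unsigned char color)
    {
        union REGS r;
        r.h.ah = 0x0C;          /* function 0Ch: write graphics pixel */
        r.h.al = color;
        r.h.bh = 0;             /* display page */
        r.x.cx = x;             /* column       */
        r.x.dx = y;             /* row          */
        int86(0x10, &r, &r);
    }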
However, that isn't all there is to the story.
For example, very few applications actually used the pixel plotting functionality of 0x10.
Why not?
It was slow.
What did they use instead?
Direct memory manipulation.
Generally, video memory for the various display modes started at one of three segments: B000:0000 (MDA monochrome text), B800:0000 (CGA text and graphics), or A000:0000 (EGA/VGA graphics).
And the code for talking to a 4-color CGA (2-bit) display versus a 16-color EGA (4-bit) display versus a 256-color VGA (8-bit) display would be so different that it wouldn't be adaptable to other pixel depths; you'd basically have to rewrite the entire rendering engine from scratch.
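For instance, here's a hedged sketch of a raw pixel write in CGA's 4-color 320x200 mode, again assuming a Turbo/Borland-style 16-bit DOS compiler with far pointers and MK_FP() from dos.h. Note the interleaved scan lines and the packed 2-bit pixels, neither of which carries over to EGA's planar layout or VGA's byte-per-pixel layout.

    #include <dos.h>

    /* Write one pixel in CGA 320x200 4-color mode (BIOS mode 04h)
       by poking video memory at B800:0000 directly. */
    void cga_put_pixel(int x, int y, unsigned char color)
    {
        unsigned char far *cga = (unsigned char far *)MK_FP(0xB800, 0);
        /* CGA interleaves scan lines: even rows start at offset 0,
           odd rows at offset 0x2000; each row is 80 bytes wide.    */
        unsigned int offset = (y >> 1) * 80 + (x >> 2)
                            + ((y & 1) ? 0x2000 : 0);
        int shift = (3 - (x & 3)) * 2;   /* 4 pixels per byte, 2 bits each */
        cga[offset] = (cga[offset] & ~(0x03 << shift))
                    | ((color & 0x03) << shift);
    }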
To further complicate things, the popular video mode was 13h, which was 320x200x256.
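By contrast, a direct pixel write in mode 13h is essentially a single byte store into the linear framebuffer at A000:0000 (same compiler assumptions as the sketch above):

    #include <dos.h>

    /* Write one pixel in mode 13h: one byte per pixel,
       320 bytes per scan line, laid out linearly at A000:0000. */
    void vga_put_pixel(int x, int y, unsigned char color)
    {
        unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
        vga[(unsigned int)y * 320 + x] = color;
    }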
It also just so happened that the VGA could be tweaked into a 320x240x256 "Mode X" (giving square pixels on a 4:3 display), in which video memory was split into four planes that had to be written separately to assemble the image.
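And a Mode X pixel write is different again. This is a sketch under the assumption that the VGA has already been reprogrammed into the unchained 320x240 layout: each of the four planes holds every fourth pixel, so you first select the pixel's plane through the sequencer's Map Mask register (index 02h at ports 3C4h/3C5h, via Borland-style outportb()) and then write the byte.

    #include <dos.h>

    /* Write one pixel in unchained "Mode X" (320x240, 256 colors):
       select the plane for this x, then store the color byte. */
    void modex_put_pixel(int x, int y, unsigned char color)
    {
        unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
        outportb(0x3C4, 0x02);            /* sequencer index: Map Mask      */
        outportb(0x3C5, 1 << (x & 3));    /* enable only this pixel's plane */
        vga[(unsigned int)y * 80 + (x >> 2)] = color;  /* 80 bytes/row/plane */
    }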
In other words, the games you mention only used the VGA API for one thing: to set the video mode. After that, it was all display-resolution-specific code. They didn't use the set-pixel call of int 0x10.
So, the answer is: the int 0x10 "API" (if you want to call it that) was not that terribly different, but it also wasn't widely used by high-performance applications (games).
As far as feeling old, I hear you. It's crazy to think these games came out 15-20 years ago.
– Wisteso Jul 26 '12 at 18:38