In a typical software development situation, tests are used at two points: during development, and before moving the product along the development chain.
The first situation, running tests during development, serves short-term goals: defining tasks (as in TDD: write a failing test, then make it pass), preventing regressions, and making sure your changes don't break anything else. Such tests need to be extremely fast: ideally, the whole suite runs in less than 5 seconds, so you can run it in a loop next to your IDE or text editor while you code, and any regression you introduce pops up within seconds. Speedy test runs matter more in this phase than catching 100% of regressions and bugs, and since it is impractical (or outright impossible) to develop on exact copies of the production systems, the effort required to achieve perfect testing here isn't worth it. Using in-memory databases is a trade-off: they are not exact copies of the production system, but they do help keep test runs below the 5-second limit. If the choice is between a slightly different database setup for my database-related tests and no tests at all, I know which one I pick.
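A minimal sketch of what the fast-loop variant can look like, using Python's standard-library sqlite3 as the in-memory stand-in; the add_user/count_users functions are hypothetical placeholders for whatever database-facing code you are actually testing:

```python
import sqlite3
import unittest


# Hypothetical code under test; in a real project these would live in the
# application and accept any DB-API-style connection.
def add_user(conn, name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))


def count_users(conn):
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]


class UserTests(unittest.TestCase):
    def setUp(self):
        # An in-memory SQLite database is created and torn down in
        # milliseconds, which keeps the whole suite under the 5-second budget.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def tearDown(self):
        self.conn.close()

    def test_add_user_increases_count(self):
        add_user(self.conn, "alice")
        self.assertEqual(count_users(self.conn), 1)


if __name__ == "__main__":
    unittest.main()
```

SQLite's SQL dialect won't match your production DBMS exactly, which is precisely the trade-off described above: a slightly different database, but tests that run instantly.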
The second situation, moving the code along the development chain, however, does require extensive testing. Since we can (and should) automate this part of the development process, we can afford much slower tests: even if a full run takes hours, scheduling a nightly build still means we always have an accurate picture of yesterday's codebase. Simulating the production environment as accurately as possible matters here, and now we can afford it. So we don't make the in-memory-database trade-off: we install the exact same version of the exact same DBMS as the production systems, and if possible, we fill it with actual production data before the testing begins.
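One way to serve both stages with a single test suite is to let an environment variable decide which database the tests talk to. This is only a sketch under stated assumptions: the variable name TEST_DATABASE_URL is made up for illustration, and the nightly job is assumed to point it at a PostgreSQL server pinned to the exact production version, while developer machines leave it unset and fall back to the in-memory SQLite shown earlier:

```python
import os
import sqlite3


def open_test_connection():
    """Return a database connection for the current test run.

    Assumption: the nightly/CI job exports TEST_DATABASE_URL (a name chosen
    for this sketch) pointing at a PostgreSQL instance running the same
    version as production; on developer machines it is unset, so the fast
    in-memory SQLite stand-in is used instead.
    """
    dsn = os.environ.get("TEST_DATABASE_URL")
    if dsn:
        import psycopg2  # third-party driver, only needed on the CI runner
        return psycopg2.connect(dsn)
    return sqlite3.connect(":memory:")
```

The server side of this, e.g. a CI service definition that pins the DBMS image to the production version and loads production data, lives in the build configuration; the point is that the test code itself stays identical in both phases.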