A habit I've observed among programmers, and one I sometimes exhibit subconsciously myself, is picking powers of two (or powers of two minus one) when sizing things in a database schema, a data buffer, and so on.
Have you made the same observation? Assuming this isn't just my own bias, the follow-up questions are:
Are there still valid reasons to use powers of two [minus one] in modern technologies?
Assuming these habits are mostly vestiges of old technological limitations, what different flavors of limitations were there in the first place?
Some potential reasons I can think of are data-structure optimizations and addressing bits; I'm wondering what else was, or still is, out there...
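To make the data-structure case concrete, here's a rough sketch of the kind of thing I mean (the buffer size and names are just placeholders I picked for illustration): with a power-of-two capacity, a ring buffer can wrap its index with a cheap bitwise AND instead of a modulo/division.

```c
#include <stdint.h>
#include <stdio.h>

#define BUF_SIZE 1024               /* power of two */
#define BUF_MASK (BUF_SIZE - 1)     /* 0x3FF: all low bits set */

static uint8_t buf[BUF_SIZE];
static size_t head = 0;

static void put(uint8_t byte) {
    buf[head & BUF_MASK] = byte;    /* wraps around without a division */
    head++;
}

int main(void) {
    for (int i = 0; i < 3000; i++)
        put((uint8_t)i);
    printf("head wrapped to index %zu\n", head & BUF_MASK);
    return 0;
}
```

The minus-one variant (255, 1023, ...) shows up here as the mask itself, and more generally as the largest value that fits in a fixed number of bits.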