I have recently ported a stemmer from Java to Python for a highly inflectional language.
The stemmer learns suffix-rewrite rules from a dictionary of words and their inflected forms; in essence, it builds a stemming table of learned rules. While porting the algorithm I decided to train it on a larger dictionary. As a result, the learned stemming table got bigger, and stemming accuracy improved as well.
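To make the setup concrete, here is a minimal Python sketch of the idea as I understand it. The pair format, the function names, and the longest-suffix-match strategy are simplifications for illustration, not the exact algorithm I ported:

```python
def learn_rules(pairs):
    """Learn suffix-rewrite rules from (inflected, stem) training pairs."""
    rules = {}
    for inflected, stem_form in pairs:
        # Find the length of the longest common prefix of the two forms.
        i = 0
        while i < min(len(inflected), len(stem_form)) and inflected[i] == stem_form[i]:
            i += 1
        # Rule: replace the remaining inflected suffix with the stem suffix.
        rules[inflected[i:]] = stem_form[i:]
    return rules


def apply_rules(word, rules):
    """Apply the longest matching suffix rule; otherwise leave the word unchanged."""
    for i in range(len(word)):          # i = 0 tries the longest suffix first
        suffix = word[i:]
        if suffix in rules:
            return word[:i] + rules[suffix]
    return word


pairs = [("houses", "house"), ("boxes", "box"), ("running", "run")]
rules = learn_rules(pairs)              # {'s': '', 'es': '', 'ning': ''}
print(apply_rules("foxes", rules))     # -> 'fox' (an unseen word; the rule generalizes)
```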
Then I thought this actually makes no sense, as the stemming table size gets closer and closer to the size of the dictionary itself.
Why build or train stemming algorithms at all if you can simply look words up in a dictionary?
I can understand that storing large files used to be a problem, but nowadays? And for some languages there may be no proper dictionary resources. But is there any other reason?