You’ve asked two questions:
1) Do you make decisions about model superiority based on training or testing performance?
2) Which model should you prefer?
I’ll answer both.
1) First, come over to Cross Validated (the Stack Exchange site for statistics and similar topics, with some overlap with this site) and check out what Frank Harrell has to say about accuracy (or even AUC) as a measure of performance (for example, his arguments are summed up in the accepted answer to this question). I think he takes it slightly too far, but his arguments against those metrics are compelling. Let's say, however, that accuracy really is right for you. Then, as the other answers note, you should judge model superiority by out-of-sample (test) performance, not training performance.
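To make the out-of-sample idea concrete, here is a minimal sketch assuming scikit-learn-style estimators; the synthetic dataset and logistic regression model are placeholders, not the models from your question:

```python
# Sketch: evaluate on held-out data, not on the training data.
# Dataset and model are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Training accuracy is optimistic; compare models on the test accuracy.
train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"train: {train_acc:.3f}, test: {test_acc:.3f}")
```

The test set plays the role of "data the model has never seen", which is what deployment performance will look like.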
2) The difference in accuracy is so slight that I do not think you can call it either way. Does model 2 consistently perform better on other training sets? Unless you can show that, I see no compelling evidence to prefer either model. In fact, I would be inclined to go with the first model, as it seems to be the simpler one.
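One hedged way to check "consistently better on other training sets" is repeated cross-validation, which refits each model on many different train/test partitions. Again assuming scikit-learn; the two estimators below merely stand in for your model 1 and model 2:

```python
# Sketch: compare two candidate models across many resampled training sets.
# The two estimators are placeholders for "model 1" and "model 2".
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)

scores_1 = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
scores_2 = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)

# Look at the mean AND the spread: a tiny difference in means that is
# dwarfed by the fold-to-fold variation is not compelling evidence.
print(f"model 1: {scores_1.mean():.3f} +/- {scores_1.std():.3f}")
print(f"model 2: {scores_2.mean():.3f} +/- {scores_2.std():.3f}")
```

If the score distributions overlap heavily, that supports preferring the simpler model.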
Final point: it can help to look at error rate instead of accuracy. If you have 98% accuracy vs 99% accuracy, that might not look like much of an improvement. However, those correspond to 2% misclassification and 1% misclassification, meaning the model with 99% accuracy could be argued to be twice as good: it gets it wrong half as often.
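The same point as plain arithmetic, using the numbers from the paragraph above:

```python
# Reframing accuracy as error rate: 98% vs 99% accurate
# is 2% vs 1% misclassified, a 2x difference in errors.
acc_a, acc_b = 0.98, 0.99
err_a, err_b = 1 - acc_a, 1 - acc_b
ratio = err_a / err_b  # ~2: the second model errs about half as often
print(ratio)
```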