After you have finished with the model-building process (during which, it is assumed, you used your test set once and only once, to assess the performance of your final model on unseen data), and before deploying your model, both common sense and standard practice say that you should re-train it on all the available data, including the portion that until then had been held out as the test set. Leaving available data unused is a luxury we normally cannot afford; and provided there are no issues with your model-building process, and your test set is qualitatively similar to your training set (an assumption that is always implicitly present), there is nothing to worry about.
Qualitatively, this is similar to what we do with cross-validation, where afterwards we routinely re-train the model on all the available data.
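To make the workflow concrete, here is a minimal sketch using scikit-learn (the library, the `RandomForestClassifier` model and the synthetic `X`, `y` data are all illustrative assumptions, not part of the original answer): assess once on the held-out test set, then refit the same model specification on the full data before deployment.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical data; in practice X and y come from your own pipeline.
rng = np.random.RandomState(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Model-building phase: fit on the training split, then use the test split
# once and only once for the final performance estimate.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("Final test-set estimate:", accuracy_score(y_test, model.predict(X_test)))

# Before deployment: re-train the same model specification on ALL available
# data, including what was previously held out as the test set.
final_model = RandomForestClassifier(n_estimators=200, random_state=0)
final_model.fit(X, y)  # this is the model you ship
```

Note that the test-set figure printed above remains your performance estimate; the refitted `final_model` is not re-evaluated, since no unseen data is left.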
The following Cross Validated threads may be useful; although they address cross-validation, the rationale is the same (at the end, use all the data):