Before the formalization of limits in terms of $\epsilon $ and $\delta $, the arguments given in analysis were heuristic, simply because at the time no model of the reals containing infinitesimals was known. People used infinitesimals intuitively, even though (as far as they knew, at the time) no infinitesimals existed. The fact that the (in whatever sense, correct) use of infinitesimals did not lead to blunders was then something of a strange phenomenon. Once Cauchy formalized limits using $\epsilon $ and $\delta $, it became possible to eliminate infinitesimals from formal proofs. One could still think infinitesimally, or not, but one could finally give rigorous proofs.
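To recall the standard formulation, $\lim_{x\to a} f(x) = L$ means
$$\forall \epsilon > 0 \;\exists \delta > 0 \;\forall x \;\bigl(0 < |x - a| < \delta \implies |f(x) - L| < \epsilon\bigr),$$
a statement quantifying only over ordinary real numbers, so no infinitesimals appear anywhere in the proof.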
Things changed when Robinson discovered a construction, using tools from logic that were new at the time, by which one can enlarge the reals to include actual infinitesimals. In retrospect, this discovery explained why infinitesimals did not lead to blunders: simply because they do exist!
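Roughly speaking, in Robinson's framework the same limit statement becomes: for every hyperreal $x$ with $x \approx a$ and $x \neq a$, one has ${}^*\!f(x) \approx L$, where $\approx$ means "differs by an infinitesimal" and ${}^*\!f$ is the natural extension of $f$ to the hyperreals. The intuitive infinitesimal argument thus becomes a literal statement about actually existing objects.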
Today inertia dictates one's first encounter with analysis, and so non-standard analysis is usually not met until one stumbles upon it, or until an advanced course, usually in logic rather than analysis. Having said that, there are textbooks aimed at a beginner's course in calculus using non-standard analysis. There are probably two reasons why that approach is unlikely to gain momentum. First is the name; nobody really wants to do things non-standardly. Secondly, and more importantly, the prerequisites for Cauchy's $\epsilon $-$\delta $ formalism are very modest, whereas even the simplest models of non-standard analysis require a significant dose of logic, one that will take at least a week or two of a beginner's course. And since non-standard analysis is just as powerful as ordinary analysis, it is difficult to justify the logical effort for what many may consider to be only a cosmetic gain. Some disagree, though, and claim that non-standard analysis is superior.