I have seen two competing definitions of the error function. When I was an undergrad, Spiegel's Mathematical Handbook of Formulas and Tables (mine is the 1968 edition) was the definitive authority, and it defines $$ \mathrm{erf}(x)=\frac1{\sqrt{2\pi}}\int_{-\infty}^xe^{-t^2/2}\,dt. $$ Very fitting, as a table of values of that function has the well-known applications in probability theory.
More recently, I assigned the students of my freshman calculus course the task of estimating the integral $$\mathrm{erf}(1)-\mathrm{erf}(0)=\frac{1}{\sqrt{2\pi}}\int_0^1e^{-x^2/2}\,dx$$ by integrating the Taylor series termwise and using the standard technique for estimating the truncation error. When checking my own result with Mathematica, I was surprised to find that Wolfram uses a different definition of the error function. Wikipedia seems to agree with Wolfram, as both define $$ \mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^xe^{-t^2}\,dt. $$ Furthermore, what I thought was the error function is denoted by $\Phi(x)$ there.
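To make the assigned task concrete, the termwise integration amounts to the following (a sketch; since the series alternates, the cut-off error is bounded by the first omitted term):
$$
\frac{1}{\sqrt{2\pi}}\int_0^1 e^{-x^2/2}\,dx
=\frac{1}{\sqrt{2\pi}}\int_0^1\sum_{n=0}^{\infty}\frac{(-1)^n x^{2n}}{2^n\,n!}\,dx
=\frac{1}{\sqrt{2\pi}}\sum_{n=0}^{\infty}\frac{(-1)^n}{2^n\,n!\,(2n+1)}.
$$
In the Wolfram/Wikipedia convention this same quantity is $\tfrac12\,\mathrm{erf}(1/\sqrt{2})$, since $\Phi(x)=\tfrac12\bigl(1+\mathrm{erf}(x/\sqrt{2})\bigr)$, which is how I reconciled the two answers.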
I'm sure there are good reasons for preferring either convention. I face the task of explaining the differing practices to my students, but that's my job. Still, can somebody shed more light on this difference? When did the change happen? It would be very surprising if either Spiegel or Wolfram went against accepted mainstream notation.