Regarding this apparent paradox, it is often said that $x + x + ... + x = x^{2}$ is only meaningful when $x$ is a positive integer, and that differentiation rules for $x^2$ assume $x$ is a continuous variable. However, I don't think this gets to the heart of the matter. Consider $f(x) = x^2$ with domain equal to the set of positive integers. Then we can't take the derivative in the usual way, but we can still consider $\Delta = 1$ difference quotients:
$$ \frac{f(x+1) - f(x)}{1} \;\;= \;\;(x+1)^{2} - x^{2} \;\;=\;\; 2x + 1$$
Note that for large values of $x$, the difference quotient $2x + 1$ is asymptotic to $2x$, the derivative of $x^2$; it is not asymptotic to $x$, which is what we get by adding $1$ to itself $x$ times: $1 + 1 + ... + 1 = x.$
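As a quick numerical sanity check (a minimal sketch; the function names here are my own, not standard), we can compare the $\Delta = 1$ difference quotient of $x^2$ against the value the naive term-by-term "differentiation" of the sum predicts:

```python
# Check that the forward difference of f(x) = x^2 on the positive
# integers is 2x + 1, not the x predicted by naively differentiating
# x + x + ... + x (x times) term by term.

def forward_difference(f, x):
    """Delta = 1 difference quotient: f(x+1) - f(x)."""
    return f(x + 1) - f(x)

def square(x):
    return x * x

def naive_termwise(x):
    # "Differentiate" each of the x summands of x + x + ... + x,
    # getting 1 from each, and ignore that the number of summands
    # also changes with x.  This yields 1 + 1 + ... + 1 = x.
    return sum(1 for _ in range(x))

for x in (1, 5, 100):
    assert forward_difference(square, x) == 2 * x + 1
    assert naive_termwise(x) == x  # off from 2x + 1 by roughly a factor of 2
```

The gap between the two values is exactly the "factor of $2$" discrepancy discussed below: the naive count misses the contribution from the changing number of summands.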
The factor of $2$ error arises not because we're looking at functions defined only on the integers, but because we're incorrectly applying the sum rule to $x + x + ... + x$ ($x$ times). Specifically, this isn't a fixed-length sum: to increment $x$ in the expression $x + x + ... + x$ ($x$ times), we have to increment each of the $x$ summands AND increment the number of summands. To look at it another way, in getting $1 + 1 + ... + 1 = x,$ we're incorrectly applying the product rule to $x^{2} = x \cdot x$ by differentiating only one of the two factors, which gives $1 \cdot x = x.$
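One way to make this precise is the difference-operator analogue of the product rule (a standard identity; I'm writing $\Delta f(x) = f(x+1) - f(x)$):

$$\Delta(fg)(x) \;=\; f(x+1)\,g(x+1) - f(x)\,g(x) \;=\; f(x+1)\,\Delta g(x) + g(x)\,\Delta f(x)$$

With $f(x) = g(x) = x$, both terms survive: $\Delta(x \cdot x) = (x+1)\cdot 1 + x \cdot 1 = 2x + 1$, matching the difference quotient above. Differentiating only one factor drops one of these two terms and leaves $x$, which is exactly the mistake in the paradox.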
Back in 2003 I posted an ASCII diagram of what's going on to the ap-calculus listserv, but I haven't been able to get a version to display correctly here. However, that diagram (along with some more explanation) can be found at the following URL. (The correct display of my original post at Math Forum has not survived their last two or three site modifications.)
https://web.archive.org/web/20040831204853/http://www.ncaapmt.org/calculus/newsletters/Winter2004/Vol12issue1.asp
Incidentally, there is a Letter to the Editor by David W. Erbach in The (Two-Year) College Mathematics Journal [Volume 6, Number 4, December 1975, pp. 2-3] that uses the greatest integer function in some way to get things to work out.