I've been working on a theory, though my math is weak. Suppose I've determined that I can arrive at an answer A by always using the formula BCD / D. Of course this evaluates to BC after canceling out D. However, sometimes D can be zero, which results in an undefined answer. My question is theoretical in nature: are there any mathematical theories that permit the D's to cancel out even when D is zero?

- What do you mean by "theory"? It sounds like perhaps you mean "theorem"? – amWhy Feb 24 '14 at 20:58
2 Answers
Yes, if we know that the answer has polynomial form then we can perform such cancellations. As a simple example, if we wish to solve $\, x f(x) = x^2\,$ and we know the solution $\,f\,$ is a polynomial in $\,x\,$ then the solution is $\,f(x) = x\,$ (the nonzero polynomial $x$ is cancellable, because a polynomial ring $D[x]$ over a domain $D$ remains a domain; or, more directly, compare coefficients).
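As a concrete illustration, here is a quick sketch with SymPy (assumed available): `cancel` carries out the division in the polynomial ring, so the factor $x$ cancels even though it takes the value $0$ at $x = 0$.

```python
# Sketch using SymPy: solving x*f(x) = x**2 by cancelling x in the
# polynomial ring.  The polynomial x is not the zero polynomial, so
# it is cancellable -- even though it vanishes at x = 0.
from sympy import symbols, cancel

x = symbols('x')
f = cancel(x**2 / x)   # division carried out as polynomials
print(f)               # x
```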
This can lead to very efficient solutions in less trivial contexts. For example, see this slick proof of Sylvester's determinant identity $\rm\, det (I+AB)=det(I+BA)\, $ that proceeds by universally cancelling $\rm\ det\, A\ $ from the $\rm\, det\, $ of $\rm \ (1+A\ B)\, A\, =\, A\, (1+B\ A),\,$ thus trivially eliminating the "apparent singularity" at $\rm\ det\, A\, =\, 0.\,$ Further discussion is here.
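A numerical spot-check (not a proof, and NumPy is assumed available) makes the point: the identity holds even for a deliberately singular $A$, i.e. exactly the case $\det A = 0$ that the universal cancellation argument covers.

```python
# Spot-check of Sylvester's identity det(I + A B) = det(I + B A)
# with a singular A, i.e. the "apparent singularity" det A = 0.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A[2] = A[0] + A[1]                 # make rows dependent: det A = 0
B = rng.standard_normal((3, 3))
I = np.eye(3)

lhs = np.linalg.det(I + A @ B)
rhs = np.linalg.det(I + B @ A)
print(abs(lhs - rhs) < 1e-9)
```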
As another example, we can algebraically define derivatives of polynomials by a formula involving universal cancellation. By the Factor Theorem we know that $\,x-y\mid f(x)-f(y)\,$ in $\,R[x,y]\,$ for any ring $\,R.\,$ Let the quotient be the polynomial $\,g(x,y)\in R[x,y].\,$ Then we easily show using linearity that the derivative of $\,f(x)\,$ w.r.t. $\,x\,$ is $\,f'(x) = g(x,x),\,$ i.e.
$$\begin{eqnarray}{}& g(x,y)\ &=&\ \frac{f(x)-f(y)}{x-y}\ \in\ R[x,y]\\ \Rightarrow\ & g(x,x)\ &=&\ f'(x)\ \in\ R[x] \end{eqnarray}$$
For example, $\,f(x) = x^n$ $\,\Rightarrow$ $\,g(x,y) = \dfrac{(x^n\!-y^n)}{(x\!-\!y)} = x^{n-1}\! + x^{n-2}y+\cdots+xy^{n-2}\!+y^{n-1}$
therefore $\,\ g(x,x) = x^{n-1} + \cdots + x^{n-1} = n x^{n-1} = f'(x).$
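The same limit-free derivative can be sketched in SymPy (assumed available): the Factor Theorem guarantees that $x-y$ divides $f(x)-f(y)$ exactly, so `cancel` yields a genuine polynomial $g(x,y)$, and substituting $y = x$ gives $f'(x)$ with no limits involved.

```python
# Algebraic (limit-free) derivative via universal cancellation.
from sympy import symbols, cancel, diff, expand

x, y = symbols('x y')
f = lambda t: t**4 + 3*t**2 + 5       # an arbitrary example polynomial

g = cancel((f(x) - f(y)) / (x - y))   # exact polynomial quotient
derivative = expand(g.subs(y, x))     # set y = x to obtain f'(x)
print(derivative)                     # 4*x**3 + 6*x
```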
Another common example arises in the Heaviside cover-up method of partial fraction expansion.
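For instance, for the (hypothetical) example $\frac{3x+5}{(x-1)(x+2)}$, the cover-up method finds each coefficient by covering up the factor $(x-a)$ and evaluating the rest at $x = a$: the apparently-zero factor is cancelled before substituting. A SymPy sketch (library assumed available):

```python
# Cover-up sketch: coefficients of (3x + 5)/((x - 1)(x + 2)).
from sympy import symbols, apart, simplify

x = symbols('x')
expr = (3*x + 5) / ((x - 1) * (x + 2))

A = ((3*x + 5) / (x + 2)).subs(x, 1)    # cover up (x - 1): A = 8/3
B = ((3*x + 5) / (x - 1)).subs(x, -2)   # cover up (x + 2): B = 1/3
print(A, B)                             # 8/3 1/3
```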

- Yes, my answer always has a polynomial form ... prior to being evaluated to a single value. – annoying_squid Feb 24 '14 at 22:06
- By Taylor expansion it seems to work always, see for example here: http://mathforum.org/kb/thread.jspa?forumID=13&threadID=2808473&messageID=10007836 , but then when I try f(x)=e^x I cannot do it in practice. – Oct 29 '16 at 22:12
The simple answer is that $\frac{ab}{b} = a$ whenever $b \not=0$. If $b=0$, the expression simply has no defined value.
A longer answer is that $\frac{f(x)}{g(x)}$ may still have a well-defined limit at $x=a$ even when $f(x) \to 0$ and $g(x) \to 0$ as $x \to a$. The classic example is
$$\lim_{x \to 0} \frac{\sin{x}}{x} = 1$$
even though both numerator and denominator tend to $0$. Of course, simply inserting $x=0$ to get $\frac{\sin{0}}{0}$ is nonsense.
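A quick symbolic check with SymPy (assumed available) confirms the limit exists even though $\sin(0)/0$ itself is undefined:

```python
# The limit of sin(x)/x as x -> 0, computed symbolically.
from sympy import symbols, sin, limit

x = symbols('x')
print(limit(sin(x)/x, x, 0))   # 1
```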

- Is there a way to determine that the property of a problem is best described using limits as opposed to absolute values? – annoying_squid Feb 24 '14 at 22:03
- I'm not sure I can give a general description. If you post more details on what you are trying to calculate, I can try to help. – naslundx Feb 24 '14 at 22:03