It depends on what you mean by a mistake. If you define a mistake as anything that will lose you marks on a math test, and you already know how to show your work but sometimes slip up in ways you can later see are obviously wrong, you could ask a friend to watch you like a hawk while you do your homework and point out each careless mistake immediately. Then take it seriously and put in the effort to make those careless mistakes less often. According to https://www.youtube.com/watch?v=_St8gcqPB7E, one error almost everybody makes when adding mentally is thinking that 1000 + 20 + 30 + 1000 + 1030 + 1000 + 20 = 5000 when it really equals 4100.
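You can check the running total mechanically; here is a quick sketch (the numbers are the ones quoted above):

```python
# Running totals for the mental-addition trap: many people blurt
# out 5000, but the true total is 4100.
numbers = [1000, 20, 30, 1000, 1030, 1000, 20]

total = 0
for n in numbers:
    total += n
    print(f"{total - n} + {n} = {total}")

print("final total:", total)  # 4100, not 5000
```

Printing each intermediate sum is exactly the "watch yourself like a hawk" idea: the slip becomes visible at the step where your mental total diverges from the printed one.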
That's not the only reason for making mistakes in math. According to https://www.youtube.com/watch?v=1irvvZzbJkU, a lot of people can't find the mistake in the following "proof" that 2 = 0: $2 = 1 + 1 = 1 + \sqrt1 = 1 + \sqrt{(-1)^2} = 1 + \sqrt{-1}\sqrt{-1} = 1 + i^2 = 1 + (-1) = 0$. (The flawed step is $\sqrt{(-1)^2} = \sqrt{-1}\sqrt{-1}$: the identity $\sqrt{ab} = \sqrt{a}\sqrt{b}$ fails when both factors are negative.) This kind of mistake can be avoided by going back to the basics. For example, you could decide how you want to formalize a given statement in Zermelo-Fraenkel set theory. According to this answer, instead of just declaring that certain properties of the real numbers are true and proving theorems from them, you can construct a set with operations in Zermelo-Fraenkel set theory, define it to be the set of all real numbers, and prove that that set with those operations is a complete ordered field. The standard formalization of the square root of a complex number in $\mathbb C$ is probably to pick the complex number whose real part is positive and whose square is the given number, or, if the given number is zero or a negative real number, to pick the number whose square is that number and whose imaginary part is nonnegative. I believe that complex exponentiation can be defined as follows: $\forall x \in \mathbb{C}\ \forall y \in \mathbb{C}$
- $0^x = 0 \text{ if } \operatorname{re}(x) > 0$
- $0^x = 1 \text{ if } x = 0$
- $0^x \text{ is undefined otherwise}$
- $\text{if } x \in \mathbb{R},\ y \in \mathbb{Q},\ x < 0, \text{ and } y \text{ has an odd denominator in lowest terms, then } x^y = \exp(\ln(-x) \times y) \text{ if the numerator is even and } x^y = -\exp(\ln(-x) \times y) \text{ if the numerator is odd}$
- $\text{otherwise } x^y = \exp(\ln(x) \times y)$
where $\forall x \in \mathbb{C}$, $\ln(x)$ is defined to be the complex number $y$ such that $\exp(y) = x$ and $-\pi < \operatorname{im}(y) \leq \pi$
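Python's `cmath` module implements this kind of principal-branch convention (its `log` keeps the imaginary part in $(-\pi, \pi]$), and it makes the flaw in the $2 = 0$ proof visible: $\sqrt{(-1)^2}$ and $\sqrt{-1}\sqrt{-1}$ are not the same number.

```python
import cmath

# Principal square root, as implemented in cmath:
print(cmath.sqrt((-1) ** 2))            # sqrt(1) = (1+0j)
print(cmath.sqrt(-1) * cmath.sqrt(-1))  # 1j * 1j = (-1+0j)

# The identity sqrt(a*b) = sqrt(a)*sqrt(b) fails on the principal
# branch when both arguments are negative; that is the hidden step
# in the fake proof that 2 = 0.

# cmath.log keeps the imaginary part in (-pi, pi]:
print(cmath.log(-1))  # imaginary part is +pi, not -pi
```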
Chances are that in a calculus course you will be expected to work only with real numbers. If a calculus question asks you to find the set of all possible values of $x$ satisfying an equation in one variable $x$, you need a method that always works. If your teacher taught the class their own such method and you use a different method and get the right answer, you might still not get full marks, because you didn't show your work using their method. You should also understand what the inverses of various functions mean. The function $\exp$ is injective, that is, it sends distinct inputs to distinct outputs, so whenever a number is in the range of $\exp$, its inverse $\ln$ applied to that number is the one number that $\exp$ sends to it: if $\ln x = y$ then $\exp y = x$. Other functions, like the squaring function, $\sin$, and $\cos$, are not injective, so for each number in the range of such a function, the inverse applied to that number is just one of the inputs that the function sends to it. When a number is in the range of the squaring function, its square root is the nonnegative number whose square is that number, so if $y = \sqrt x$ then $x = y^2$. For any number between $-1$ and $1$, $\sin^{-1}$ of that number is the number between $-\pi/2$ and $\pi/2$ whose $\sin$ is that number, so if $y = \sin^{-1}(x)$ then $x = \sin(y)$. For any number between $-1$ and $1$, $\cos^{-1}$ of that number is the number between $0$ and $\pi$ whose $\cos$ is that number, so if $y = \cos^{-1}(x)$ then $x = \cos(y)$.
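A quick numeric check of the difference, using nothing beyond the standard `math` module: `log` undoes `exp` exactly, but `asin` after `sin` only returns the original input when it already lies in $[-\pi/2, \pi/2]$.

```python
import math

# exp is injective, so log(exp(y)) recovers y (up to rounding):
y = 2.5
print(math.log(math.exp(y)))   # 2.5

# sin is not injective: asin picks the representative in [-pi/2, pi/2].
x = 2.5                        # outside [-pi/2, pi/2]
print(math.asin(math.sin(x)))  # pi - 2.5 (about 0.6416), not 2.5

# sqrt picks the nonnegative square root:
print(math.sqrt((-3.0) ** 2))  # 3.0, not -3.0
```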
One possible method a teacher could use is a fixed set of allowable operations on a one-variable equation, some of which never gain or lose solutions and some of which sometimes gain solutions but never lose them. Each line must be the result of applying one of those operations to the equation on the previous line, and unless every operation used is one that never gains or loses solutions, you have to check each solution of the final equation against the original equation in order to get full marks on that question. Suppose f, g, and h are expressions involving only the variable x. Here are some operations such that neither they nor their inverses ever gain or lose solutions over the real numbers. For any x, the expression f ≠ g is true if and only if x is a number for which f and g are both defined and unequal. Adding the negation symbol ¬ to the beginning of an expression produces an expression meaning that the original expression is not true.
- f = g to g = f
- f^3 = g^3 to f = g
- f = g to e^f = e^g
- f = g to ln e^f = g
- f = e^g to g = ln f
- f^2 = g^2 to (f = g or f = -g)
- f ÷ g = h to (h × g = f & g ≠ 0)
- f + g = h to e^f × e^g = e^h
- $f = \sin(g) \text{ and } \frac{-\pi}{2} < g < \frac{\pi}{2}$ to $g = \sin^{-1}(f)$
- $f = \cos(g) \text{ and } 0 < g < \pi$ to $g = \cos^{-1}(f)$
any conversion of a term of one of the following forms or its inverse
- a + b to b + a
- (a + b) + c to a + (b + c)
- a × b to b × a
- (a × b) × c to a × (b × c)
- a × (b + c) to (a × b) + (a × c)
- a^b × a^c to a^(b + c) for any positive a
- (a^b)^c to a^(b × c) for any positive a
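Such equivalences can be spot-checked by brute force on a grid of sample points. A minimal sketch for the rule "f^2 = g^2 to (f = g or f = -g)", using the equation $x^2 = (x-2)^2$ and an integer grid of my own choosing:

```python
# Check that f^2 = g^2 has the same solutions as (f = g or f = -g),
# with f(x) = x and g(x) = x - 2, on an integer grid.
f = lambda x: x
g = lambda x: x - 2

grid = range(-10, 11)
squared  = {x for x in grid if f(x) ** 2 == g(x) ** 2}
disjunct = {x for x in grid if f(x) == g(x) or f(x) == -g(x)}

print(squared)   # {1}: x^2 = (x - 2)^2 only at x = 1
print(disjunct)  # {1}: same solution set, as the rule promises
```

A grid check like this can't prove an equivalence, but it catches a wrongly remembered rule quickly.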
Here are some operations that sometimes gain solutions but never lose solutions.
- f = g to f^2 = g^2
- f ÷ g = h to h × g = f
- f × g = h to ¬(ln f + ln g ≠ ln h)
- ln f = ln g to f = g
- f = g to ¬(f ≠ g)
- ¬(f ≠ g) to ¬(ln f ≠ ln g)
- $f = \sin^{-1}(g)$ to $g = \sin(f)$
- $f = \cos^{-1}(g)$ to $g = \cos(f)$
any conversion of a term of one of the following forms
- ln a + ln b to ln (a × b)
- ln (a × b) to ln a + ln b for positive a and b
- (ln a) × b to ln (a^b)
- ln (a^b) to (ln a) × b for positive a
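Because these operations can gain solutions, every candidate must be checked against the original equation. A minimal sketch with $\sqrt{x} = x - 2$ (my own example): squaring gives $x = (x-2)^2$, i.e. $x^2 - 5x + 4 = 0$, with candidates $x = 1$ and $x = 4$, only one of which survives the check.

```python
import math

# Candidates from the squared equation x = (x - 2)^2, i.e. x^2 - 5x + 4 = 0.
candidates = [1.0, 4.0]

# Check each candidate against the ORIGINAL equation sqrt(x) = x - 2.
survivors = [x for x in candidates if math.isclose(math.sqrt(x), x - 2)]
print(survivors)  # [4.0]: x = 1 is extraneous (sqrt(1) = 1 but 1 - 2 = -1)
```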
Maybe in a stronger subtheory of ZF than the one that lets you prove everything you're supposed to be able to prove in calculus for full marks, you can do more with a system of equations. For example, from the statements $\forall x \in \mathbb{R}, \text{ if } f(x) \neq 0 \text{ then } g(x) \neq 0$ and $\forall x \in \mathbb{R}, \text{ if } f(x) = 0 \text{ then } g(x) \neq 0$ and $\forall x \in \mathbb{R},\ f(x) \times g(x) = h(x)$, you can deduce $\forall x \in \mathbb{R},\ g(x) \neq 0$, and from that $\forall x \in \mathbb{R},\ h(x) ÷ g(x) = f(x)$.
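That deduction can be sanity-checked numerically. Picking (my own choice) $f(x) = x$, $g(x) = x^2 + 1$, and $h(x) = f(x)\,g(x)$: here $g$ is nonzero both where $f(x) \neq 0$ and where $f(x) = 0$, so dividing $h$ by $g$ recovers $f$ everywhere.

```python
# g(x) = x^2 + 1 is nonzero for every real x (whether or not f(x) = 0),
# so dividing h = f * g by g recovers f at every sample point.
f = lambda x: x
g = lambda x: x ** 2 + 1
h = lambda x: f(x) * g(x)

for x in [-2.0, 0.0, 0.5, 3.0]:
    assert h(x) / g(x) == f(x)
print("h / g == f on all samples")
```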
Here are some other mistakes people could make:
- Deducing, from the statement that a natural number as a real number is not a finite von Neumann ordinal and the statement that the cardinality of a finite set is a natural number, the statement that the cardinality of a finite set is not a finite von Neumann ordinal.
- Assuming that the cardinality of any set is its initial ordinal in ZF, when cardinality is only defined that way in ZFC, as in this question.
- Deducing, from the statement that the cardinality of any well-orderable set is its initial ordinal and the statement that cardinality is given by Scott's definition, that the cardinality of the empty set is both of the von Neumann ordinals 0 and 1, as described in this answer.
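The clash in the last bullet can be made concrete by modelling hereditarily finite sets as `frozenset`s (this encoding is my own illustration, not a claim about any particular formalization): the von Neumann ordinal $0$ is $\emptyset$ and $1$ is $\{\emptyset\}$; the initial ordinal of $\emptyset$ is $0$, while Scott's definition gives $\{\emptyset\} = 1$, so the two definitions disagree on the empty set.

```python
# Von Neumann ordinals 0 and 1 as hereditarily finite sets.
zero = frozenset()         # 0 = {} (the empty set)
one = frozenset({zero})    # 1 = {0} = { {} }

# The initial ordinal of the empty set is 0 (the least ordinal
# in bijection with it).
initial_ordinal_of_empty = zero

# Scott's definition: the set of all sets of minimal rank that are
# in bijection with the empty set.  Only the empty set itself
# qualifies, so the Scott cardinal of {} is { {} } = 1.
scott_cardinal_of_empty = frozenset({frozenset()})

print(scott_cardinal_of_empty == one)                        # True
print(initial_ordinal_of_empty == zero)                      # True
print(initial_ordinal_of_empty == scott_cardinal_of_empty)   # False: 0 != 1
```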