1) You want to prove that every eigenvalue of $A^2$ is real and negative. Since every eigenvalue of a real symmetric linear operator is real, proving that $A^2$ is symmetric would get you half of what you need.
2) I find the stated argument unclear. It's much cleaner to start from $A^2(v) \cdot v$, since that quantity directly measures whether $A^2$ is positive- or negative-definite. In fact, it's easier and faster to prove outright that $A^2$ is negative definite:
$$A^2(b) \cdot b = A(b) \cdot A^T(b) = -A(b) \cdot A(b) < 0 \quad \text{(for } A(b) \neq 0\text{)}$$
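If you want a quick numerical sanity check (just an illustration, not a proof; the NumPy setup below is my own), generate a random antisymmetric matrix and inspect $A^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random real antisymmetric matrix: A^T = -A.
M = rng.standard_normal((5, 5))
A = M - M.T

A2 = A @ A

# A^2 is symmetric: (A^2)^T = A^T A^T = (-A)(-A) = A^2.
assert np.allclose(A2, A2.T)

# Its eigenvalues are real and non-positive
# (zero only when A kills the corresponding eigenvector).
print(np.linalg.eigvalsh(A2))

# The key identity: A^2(b) . b = -A(b) . A(b) for any b.
b = rng.standard_normal(5)
assert np.isclose(b @ A2 @ b, -np.dot(A @ b, A @ b))
```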
But maybe you aren't allowed to invoke the properties of a negative-definite operator. You can still use the above argument to show directly that any eigenvalue $k$ of $A^2$ must be negative.
As for whether the proof is "convenient", or how one would think to attack the problem this way, I can't comment much. Here, we find the sign of $k$ by showing that $k|v|^2 \leq 0$ while knowing that $|v|^2 > 0$ (an eigenvector is nonzero by definition). Using the known signs of individual quantities, together with the sign of the whole, to pin down the sign of an unknown? I'm sure you did that in basic algebra too.
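Spelled out: if $v$ is an eigenvector of $A^2$ with eigenvalue $k$, then

$$k|v|^2 = (kv) \cdot v = A^2(v) \cdot v = -A(v) \cdot A(v) = -|A(v)|^2 \leq 0$$

and since $|v|^2 > 0$, we get $k \leq 0$, with $k < 0$ whenever $A(v) \neq 0$.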
Comment responses:
The definition of the transpose is the map $A^T$ such that
$$A(b) \cdot c = A^T(c) \cdot b$$
This is a general, and very useful, definition. It works even for complex vector spaces (where this kind of map is called the adjoint rather than the transpose).
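To see it concretely (again just a numerical check; NumPy code of my own): for a real matrix $A$ and random vectors $b$, $c$, both sides of the definition agree, and in the complex case the conjugate transpose plays the same role:

```python
import numpy as np

rng = np.random.default_rng(1)

# Real case: A(b) . c = A^T(c) . b
A = rng.standard_normal((4, 4))
b, c = rng.standard_normal(4), rng.standard_normal(4)
assert np.isclose(np.dot(A @ b, c), np.dot(A.T @ c, b))

# Complex case: the adjoint (conjugate transpose) plays the role of the
# transpose, with respect to <x, y> = sum conj(x_i) y_i (np.vdot).
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.isclose(np.vdot(Z @ x, y), np.vdot(x, Z.conj().T @ y))
```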
In this problem, we just apply that definition with $A(b)$ in place of $b$ and $b$ in place of $c$:
$$A^2(b) \cdot b = A[A(b)] \cdot b = A^T(b) \cdot A(b)$$
Finally, remember that the inner product is positive definite: the dot product of a vector with itself is always greater than zero (unless it's the zero vector). Here, we're considering the dot product of $A(b)$ with itself, so that's positive whenever $A(b) \neq 0$, and the minus sign incurred by swapping $A^T$ for $-A$ (antisymmetry) makes the overall quantity negative.
Remember that real inner products are commutative: $x \cdot y = y \cdot x$. In particular, $A(b) \cdot c = c \cdot A(b)$. Converting to matrix notation:
$$A(b) \cdot c = (Ab)^T c = b^T A^T c = b^T (A^T c) = b \cdot A^T(c) = A^T(c) \cdot b$$
Thus, there is no contradiction.
My notation purposefully avoids any mention that $A$ can be represented by some matrix, or that inner products can be written in terms of matrix multiplication of row and column vectors. None of the important results of linear algebra rely upon representing vectors and linear maps with row/column vectors and matrices. The latter are simply a means of performing computations.
Moreover, when working in non-Euclidean spaces, some modifications are needed. First, the transpose loses its special status (this is obvious in the context of complex spaces, but it's true even in something as simple as Minkowski spacetime). Second, the inner product requires some non-identity matrix to sit between the row and column vectors. This makes the Euclidean inner product appear to have a privileged relationship to the identity matrix, but that is merely an artifact of how we define matrix multiplication.
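To make that last point concrete (a small sketch of my own, not part of the original problem): in Minkowski spacetime, the inner product is computed with a non-identity metric matrix $\eta$ sitting between the row and column vectors, and positive-definiteness fails:

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def minkowski(x, y):
    # <x, y> = x^T eta y: a non-identity matrix sits between
    # the row vector and the column vector.
    return x @ eta @ y

timelike = np.array([1.0, 0.0, 0.0, 0.0])
lightlike = np.array([1.0, 1.0, 0.0, 0.0])

print(minkowski(timelike, timelike))    # 1.0: positive
print(minkowski(lightlike, lightlike))  # 0.0: a nonzero vector with zero "norm"

# The Euclidean dot product is just the special case eta = identity.
assert np.isclose(timelike @ np.eye(4) @ timelike, np.dot(timelike, timelike))
```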