In the context of a convex optimization problem I came across the following function:
$$f_1(\textbf{x})=(\textbf{x}-\textbf{x}_k)^T\textbf{A}(\textbf{x}-\textbf{x}_k) - t^2$$
EDIT
$f_1$ is a real-valued function and $t$ is a positive real number. Also, $\textbf{A}$ is positive definite (and I take it to be symmetric, as is usual for positive definite matrices).
and I am trying to take its gradient, following the rules I found here. Expanding the quadratic (and using that $\textbf{x}^T\textbf{A}\textbf{x}_k$ is a scalar, so $\textbf{x}^T\textbf{A}\textbf{x}_k=\textbf{x}_k^T\textbf{A}^T\textbf{x}=\textbf{x}_k^T\textbf{A}\textbf{x}$ by the symmetry of $\textbf{A}$), I get: \begin{eqnarray*} f_1(\textbf{x})&=&\textbf{x}^T\textbf{A}\textbf{x}-\textbf{x}^T\textbf{A}\textbf{x}_k-\textbf{x}_k^T\textbf{A}\textbf{x}+\textbf{x}_k^T\textbf{A}\textbf{x}_k-t^2\\ &=&\textbf{x}^T\textbf{A}\textbf{x}-2\textbf{x}_k^T\textbf{A}\textbf{x}+\textbf{x}_k^T\textbf{A}\textbf{x}_k-t^2\\ &=&\textbf{x}^T\textbf{A}\textbf{x}-2\textbf{q}^T\textbf{x}+\textbf{x}_k^T\textbf{A}\textbf{x}_k-t^2\\ \end{eqnarray*}
where in the last line I have introduced the row vector $\textbf{q}^T=\textbf{x}_k^T\textbf{A}$, i.e. $\textbf{q}=\textbf{A}\textbf{x}_k$ by symmetry.
So what I get is: $$\nabla f_1(\textbf{x})=\textbf{A}\textbf{x}-2\textbf{q}=\textbf{A}\textbf{x}-2\textbf{A}\textbf{x}_k=\textbf{A}(\textbf{x}-2\textbf{x}_k)$$
But looking at my notes I see that the result is: $$\nabla f_1(\textbf{x})=2\textbf{A}(\textbf{x}-\textbf{x}_k)$$
Am I doing something wrong? Also, does it matter whether $\textbf{A}$ is positive definite or not?
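For what it's worth, the two candidate gradients can be compared against a central finite-difference approximation of $\nabla f_1$. Below is a small NumPy sketch; the particular $\textbf{A}$, $\textbf{x}_k$, $t$, and test point $\textbf{x}$ are just made-up example data, not from the original problem:

```python
import numpy as np

# Hypothetical example data: a small symmetric positive definite A,
# an arbitrary expansion point x_k, and a positive t.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)        # symmetric positive definite
x_k = rng.standard_normal(3)
t = 1.5

def f1(x):
    """f1(x) = (x - x_k)^T A (x - x_k) - t^2."""
    d = x - x_k
    return d @ A @ d - t**2

def num_grad(f, x, h=1e-6):
    """Central finite-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = rng.standard_normal(3)
g_num = num_grad(f1, x)

# Maximum absolute deviation of each candidate formula from the numerical gradient.
print("notes' formula  2A(x - x_k):  ", np.max(np.abs(g_num - 2 * A @ (x - x_k))))
print("my formula      A(x - 2x_k):  ", np.max(np.abs(g_num - A @ (x - 2 * x_k))))
```

Whichever formula is correct should deviate from the numerical gradient only at the level of finite-difference rounding error.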