We have
$$\mathrm{prox}_{f}(x) = \operatorname{argmin}_{u\in \mathbb{R}^n} \left\{\|Au\|_2
+ \tfrac{1}{2}\|u - x\|_2^2\right\}. \tag{1}$$
Let $u^\ast$ be the solution of the optimization problem in (1).
Note that $\|Au\|_2$ is differentiable whenever $Au \ne 0$, with $\nabla \|Au\|_2 = \frac{A^TAu }{\|Au\|_2}$; for invertible $A$ (as assumed below), $Au \ne 0$ if and only if $u \ne 0$.
Thus, if $u^\ast \ne 0$, then the gradient of the objective function at $u^\ast$ vanishes, i.e.
$$\frac{A^TAu^\ast }{\|Au^\ast \|_2} + u^\ast - x = 0$$
which results in
$$u^\ast = \Big(\frac{A^TA }{\|Au^\ast \|_2} + I_n\Big)^{-1} x, \tag{2}$$
where the inverse exists because $\frac{A^TA}{\|Au^\ast \|_2} + I_n$ is positive definite.
Also, if there does not exist $u^\ast \ne 0$ satisfying (2), then $u^\ast = 0$.
Let us solve (2). Let $A = \mathrm{diag}(a_1, a_2, \dots, a_n)$ with $a_i \ne 0$ for all $i$, so that $A$ is invertible. Let $\lambda = \|Au^\ast \|_2 > 0$. From (2), we have
$$u^\ast_i = \frac{\lambda}{a_i^2 + \lambda} x_i, \ \forall i. \tag{3}$$
Substituting (3) into $\lambda^2 = \|Au^\ast \|_2^2 = \sum_{i=1}^n a_i^2 (u^\ast_i)^2$ and dividing both sides by $\lambda^2 > 0$, we have
$$\sum_{i=1}^n \frac{a_i^2x_i^2}{(a_i^2 + \lambda)^2} = 1. \tag{4}$$
Let
$$F(\lambda) = \sum_{i=1}^n \frac{a_i^2x_i^2}{(a_i^2 + \lambda)^2}.$$
Note that $\lim_{\lambda \to \infty} F(\lambda) = 0$ and $F(0) = \sum_{i=1}^n \frac{x_i^2}{a_i^2} = \|A^{-1}x\|_2^2$.
Also, for $x \ne 0$, $F(\lambda)$ is strictly decreasing on $[0, \infty)$.
Thus, $F(\lambda) = 1$ has a unique positive solution if and only if $F(0) > 1$, i.e. $\|A^{-1}x\|_2 > 1$.
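Since $F$ is strictly decreasing from $F(0) > 1$ to $0$, the root of (4) can be located by plain bisection. Below is a minimal sketch in Python with NumPy (my choice of language and names, not part of the original, which uses Maple and CVX+Matlab):
```python
import numpy as np

def root_of_F(a, x, tol=1e-12):
    """Unique positive solution of F(lambda) = 1 in (4); assumes ||A^{-1}x||_2 > 1."""
    F = lambda lam: np.sum(a**2 * x**2 / (a**2 + lam)**2)
    lo, hi = 0.0, 1.0
    while F(hi) > 1.0:        # F is decreasing, so doubling eventually gives F(hi) < 1
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:      # bisection on the bracket [lo, hi]
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)
```
Newton's or Brent's method would converge faster; bisection is used here only because it mirrors the monotonicity argument directly.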
As a result, we have
$$\mathrm{prox}_{f}(x) = \left\{\begin{array}{cc}
0 & \|A^{-1}x\|_2 \le 1 \\[6pt]
(\frac{1}{\lambda}A^TA + I_n)^{-1} x & \|A^{-1}x\|_2 > 1
\end{array}
\right. \tag{5}$$
where $\lambda$ is the unique positive solution of (4).
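The pieces of (5) assemble into a short numerical routine. Here is a minimal sketch in Python with NumPy/SciPy (the function name `prox_norm_Au` and the use of Brent's method are my choices, not from the original):
```python
import numpy as np
from scipy.optimize import brentq

def prox_norm_Au(a, x):
    """Sketch of (5): prox of f(u) = ||Au||_2 at x, for A = diag(a) with a_i != 0."""
    a = np.asarray(a, dtype=float)
    x = np.asarray(x, dtype=float)
    if np.sum((x / a)**2) <= 1.0:           # ||A^{-1}x||_2 <= 1: prox is 0
        return np.zeros_like(x)
    F = lambda lam: np.sum(a**2 * x**2 / (a**2 + lam)**2)
    hi = 1.0
    while F(hi) > 1.0:                      # bracket the root of (4)
        hi *= 2.0
    lam = brentq(lambda lam: F(lam) - 1.0, 0.0, hi)
    return lam * x / (a**2 + lam)           # apply (3) coordinatewise
```
Note that the inverse in (5) is never formed explicitly: for diagonal $A$, (3) gives each coordinate directly.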
Remark 1: In general, $\lambda$ cannot be expressed in closed form (see Remark 2).
If $A = I_n$ (or, more generally, $A = \alpha I_n$ with $\alpha \ne 0$), $\lambda$ can be expressed in closed form.
For example, if $A = I_n$, then (4) becomes $\frac{\|x\|_2^2}{(1+\lambda)^2} = 1$ and hence $\lambda = \|x\|_2 - 1$, which is positive exactly when $\|x\|_2 > 1$. Thus, from (5), we have
\begin{align}
\mathrm{prox}_{\|\cdot\|_2}(x) &= \left\{\begin{array}{cc}
0 & \|x\|_2 \le 1 \\[6pt]
(1 - \frac{1}{\|x\|_2})x & \|x\|_2 > 1
\end{array}
\right.\\
&= \Big(1 - \frac{1}{\max(\|x\|_2, 1)}\Big)x.
\end{align}
This is the well-known formula for the proximal operator of the $\ell_2$ norm (block soft thresholding).
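As a quick sanity check (a sketch reusing the `prox_norm_Au` helper from the sketch after (5)), the general solver reproduces this closed form when $A = I_n$:
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5) * 2.0                # a generic test point
a = np.ones(5)                              # A = I_n
closed_form = (1.0 - 1.0 / max(np.linalg.norm(x), 1.0)) * x
assert np.allclose(prox_norm_Au(a, x), closed_form, atol=1e-8)
```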
Remark 2: Numerical results verify (5); we solve the optimization problem in (1) with CVX+Matlab and compare against (5).
As an example, let $x = [-3, 2, -1, 1]^T$ and $A = \mathrm{diag}(1, 2, 3, 4)$. Equation (4) becomes
$$\frac{9}{(\lambda+1)^2} + \frac{16}{(\lambda+4)^2} + \frac{9}{(\lambda+9)^2} + \frac{16}{(\lambda+16)^2} = 1,$$
which, after clearing denominators, results in
\begin{align}
&\lambda^8+60\lambda^7+1396\lambda^6+15840\lambda^5+86511\lambda^4\\
& +143320\lambda^3-568624\lambda^2-2517120\lambda-3043584 = 0.
\end{align}
In general, there is no closed-form solution in radicals to a polynomial equation of degree eight (by the Abel–Ruffini theorem, none exists in general for degree five or higher).
Using Maple, we obtain $\lambda = 2.989390606...$.
Solving the optimization problem in (1) with CVX+Matlab agrees with (5): both give
$$\mathrm{prox}_{f}([-3, 2, -1, 1]^T) \approx [-2.2480, 0.8554, -0.2493, 0.1574]^T.$$
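For completeness, here is a sketch reproducing this example in Python; SciPy's general-purpose minimizer stands in for CVX+Matlab (a substitution of mine, not what was used above):
```python
import numpy as np
from scipy.optimize import brentq, minimize

a = np.array([1.0, 2.0, 3.0, 4.0])
x = np.array([-3.0, 2.0, -1.0, 1.0])

# lambda from (4), then u* from (3)
F = lambda lam: np.sum(a**2 * x**2 / (a**2 + lam)**2)
lam = brentq(lambda t: F(t) - 1.0, 0.0, 100.0)   # ~ 2.989390606
u_closed = lam * x / (a**2 + lam)                # ~ [-2.2480, 0.8554, -0.2493, 0.1574]

# direct minimization of the objective in (1), started away from the kink at u = 0
obj = lambda u: np.linalg.norm(a * u) + 0.5 * np.sum((u - x)**2)
u_direct = minimize(obj, x0=x).x

print(lam, u_closed, np.max(np.abs(u_closed - u_direct)))  # discrepancy should be tiny
```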