Unbiased estimators of $|E(Z)|$ need not exist in general. In particular, here is a scenario in which no unbiased estimator exists: let $Z \sim N(\mu, \sigma^2)$ and let $Z_1, \dots, Z_n$ be a random sample from this distribution. Our parameter of interest is $\theta = |\mu|$.
The following is perhaps a much stronger result than we need to show that no unbiased estimator of $\theta$ exists, but there is a paper by Hirano and Porter (see here) that deals with exactly this situation. The main result is as follows: let $Z \sim N(h, \sigma^2)$ with $h \in \mathbb{R}$ and $\sigma^2 > 0$, let $\kappa(h)$ be a function of $h$, and let $T(Z)$ be an estimator of $\kappa(h)$. If there exists a point $h_0$ in the parameter space at which $\kappa$ is not continuously differentiable, then $T$ cannot be unbiased. (Their result is actually more general and covers multivariate normal distributions, but I've specialized it to the univariate case.)
The general proof is a bit involved, but the idea is straightforward. Suppose $E_h(T(Z)) = \kappa(h)$ for all $h$. A standard bound on the exponential function justifies differentiating under the integral sign, so the derivative of $E_h(T(Z))$ with respect to $h$ exists and is continuous everywhere. But then $\kappa(h) = E_h(T(Z))$ would be continuously differentiable everywhere, contradicting the failure of continuous differentiability at $h = h_0$.
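To see why the expectation must be smooth in $h$, write out the integral and complete the square in the exponent (a sketch, assuming $T$ is integrable against the normal density for every $h$):

$$E_h(T(Z)) = \int T(z)\,\frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(z-h)^2}{2\sigma^2}\right)dz = \frac{e^{-h^2/(2\sigma^2)}}{\sqrt{2\pi\sigma^2}}\int T(z)\,e^{-z^2/(2\sigma^2)}\,e^{zh/\sigma^2}\,dz.$$

The dependence on $h$ enters only through the smooth factor $e^{-h^2/(2\sigma^2)}$ and the integral on the right, which is a two-sided Laplace transform in $h$ and hence smooth wherever it is finite. So $E_h(T(Z))$ is smooth in $h$ and cannot equal a function with a kink.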
In our case, $\kappa(h) = |h|$, which is continuous but not continuously differentiable at $h = 0$. As such, there cannot be an unbiased estimator $T$ of $\kappa(h)$.
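Although no unbiased estimator exists, the natural plug-in estimator $|\bar{Z}|$ is still consistent, and its bias is easy to see at $\mu = 0$: there $\bar{Z} \sim N(0, \sigma^2/n)$, so $|\bar{Z}|$ is half-normal with $E|\bar{Z}| = \sigma\sqrt{2/(\pi n)} > 0 = |\mu|$. A quick simulation (a sketch, with the arbitrary choices $n = 25$, $\sigma = 1$, and 200,000 replications) makes this visible:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 0.0, 1.0, 25
reps = 200_000

# Draw `reps` samples of size n and compute |sample mean| for each.
samples = rng.normal(mu, sigma, size=(reps, n))
est = np.abs(samples.mean(axis=1))

print(est.mean())                        # Monte Carlo estimate of E|Z-bar|
print(sigma * np.sqrt(2 / (np.pi * n)))  # theoretical E|Z-bar| at mu = 0
```

Both printed values are close to $\sqrt{2/(25\pi)} \approx 0.16$, so the bias at $\mu = 0$ is substantial even though $|\mu| = 0$, exactly as the theorem predicts.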