One thing to watch for, since you didn't clearly indicate it: the MLE is the maximum of the absolute values $|X_k|$, not simply the maximum of the $X_k$.
Taking the derivative of the log of this likelihood function and setting it equal to zero isn't fruitful here: it leads to the equation $-\frac{n}{\theta} = 0$, which has no solution and so can't locate the MLE $\hat{\theta}$ of $\theta$.
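To spell that computation out: with the formula $f(x;\theta)=\frac1{2\theta}$ taken at face value, the likelihood and log-likelihood are
$$L(\theta)=\prod_{k=1}^n \frac1{2\theta}=(2\theta)^{-n},\qquad \ell(\theta)=\log L(\theta)=-n\log(2\theta),$$
and $\ell'(\theta)=-\frac{n}{\theta}$ is strictly negative on $(0,\infty)$, so it never vanishes.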
Let's step back a bit. Instead of blindly saying "take the derivative, set it equal to zero", let's look at the problem we're actually trying to solve. We're trying to find a maximum, and as you may recall from calculus, the standard way to find one is to look for critical points. The zeros of the derivative are critical points, but they're not the only ones: we also have to consider any points at which the derivative fails to exist, and the boundary of the domain.
What does that give us here? As you already found, the derivative has no zeros. The domain for $\theta$ is $(0,\infty)$; as $\theta\to\infty$, the likelihood $(2\theta)^{-n}$ goes to zero, so that's not the maximum. As $\theta\to 0$, the formula $(2\theta)^{-n}$ goes to $\infty$. That can't be right; there must be a problem with the formula that keeps it from applying everywhere. Before we go any further, we need to resolve this issue.
Resolving this issue is simple in principle; the formula $f(x;\theta)=\frac1{2\theta}$ is only valid for $|x|\le \theta$, and the probability density is zero if $|x|>\theta$. There's more than one way to write this down - both the piecewise definition $f(x;\theta) = \begin{cases}\frac1{2\theta}& -\theta\le x\le\theta\\ 0&\text{otherwise}\end{cases}$ and the indicator function form $f(x;\theta)=\frac1{2\theta}\mathbf{1}_{[-\theta,\theta]}(x)$ are valid.
So the problem with the likelihood near zero resolves itself: once $\theta$ is smaller than some of the observed $|x|$-values, some of the factors $f(X_k;\theta)$ in the product are zero, and thus the whole product is zero.
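Written out in full, the likelihood is
$$L(\theta)=\prod_{k=1}^n f(X_k;\theta)=\begin{cases}(2\theta)^{-n} & \theta\ge \max_k |X_k|\\ 0 & \theta<\max_k |X_k|,\end{cases}$$
since the product picks up a zero factor exactly when $\theta$ falls below the largest $|X_k|$.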
Well, that's the boundary dealt with, and we still haven't found anywhere that could be a maximum. There's only one kind of critical point left to try: places where the derivative fails to exist. The pieces that go into our likelihood are smooth, so the only way the derivative can fail to exist is if we switch formulas. That happens when $\theta=|X_k|$ for some $k$. Looking closer, if $\theta=|X_k|$ and $|X_k|<|X_j|$ for some other $j$, the factor $f(X_j;\theta)$ will be zero in an interval around $\theta$; that rules out $|X_k|$ as a potential maximum.
The only option left, then, is $\hat{\theta}=\max_k |X_k|$, the largest of them. The likelihood jumps from zero to $(2\hat{\theta})^{-n}$ there, and then decays as $\theta$ continues to increase.
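If you want to see that shape, here's a minimal numerical sketch (assuming NumPy is available; the sample size and true $\theta$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, n = 2.0, 10
x = rng.uniform(-theta_true, theta_true, size=n)  # one sample from Uniform(-theta, theta)

# L(theta) = (2*theta)^(-n) for theta >= max|x_k|, and 0 otherwise
thetas = np.linspace(0.01, 4.0, 2000)
L = np.where(thetas >= np.abs(x).max(), (2.0 * thetas) ** (-n), 0.0)

print("max_k |x_k| :", np.abs(x).max())
print("argmax of L :", thetas[np.argmax(L)])  # lands right at the jump
```

The likelihood is identically zero until $\theta$ reaches $\max_k|x_k|$, spikes there, and decays like $\theta^{-n}$ afterward.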
> ... the question I have is about how to use this information for further steps of determining things like variance and bias of $\hat{\theta}$.
Now that we have our estimator, we shift perspectives. This is no longer a parameter-estimation problem; it's a problem of working out the properties of a known probability distribution. Given fixed $\theta$ and $n$, what does the distribution of $\hat{\theta}$ look like?
This distribution is well known: $\hat{\theta}$ is the largest order statistic of the $|X_k|$, which are i.i.d. uniform on $[0,\theta]$. The easiest handle we can get on it is the CDF: $\hat{\theta} \le x$ if and only if $|X_k|\le x$ for every $k$. Those events are independent and each has probability $\frac{x}{\theta}$ (for $0\le x\le\theta$), so the cumulative distribution function is
$$G(x) =P(\hat{\theta}\le x) = \begin{cases} 0 & x\le 0\\ \frac{x^n}{\theta^n}& 0\le x\le \theta\\ 1& x\ge\theta\end{cases}$$
Now that we have a cumulative distribution function $G$, we can differentiate it to find the density $g(x)$. Then things like the mean $E(\hat{\theta})=\int_{\mathbb{R}}x\,g(x)\,dx$, the variance $E(\hat{\theta}^2)-(E(\hat{\theta}))^2$, and other such measures are routine calculations.
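Carrying those routine calculations out: differentiating $G$ gives, for $0\le x\le\theta$,
$$g(x)=G'(x)=\frac{n\,x^{n-1}}{\theta^n},$$
so
$$E(\hat{\theta})=\int_0^\theta x\cdot\frac{n\,x^{n-1}}{\theta^n}\,dx=\frac{n}{n+1}\,\theta,\qquad E(\hat{\theta}^2)=\int_0^\theta x^2\cdot\frac{n\,x^{n-1}}{\theta^n}\,dx=\frac{n}{n+2}\,\theta^2,$$
and therefore
$$\operatorname{Bias}(\hat{\theta})=E(\hat{\theta})-\theta=-\frac{\theta}{n+1},\qquad \operatorname{Var}(\hat{\theta})=\frac{n}{n+2}\,\theta^2-\left(\frac{n}{n+1}\right)^2\theta^2=\frac{n\,\theta^2}{(n+1)^2(n+2)}.$$
In particular, $\hat{\theta}$ is biased low, and both the bias and the variance vanish as $n\to\infty$.

If you'd like a sanity check on those formulas, here's a quick Monte Carlo sketch (again assuming NumPy; the values of $\theta$, $n$, and the replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 10, 200_000

# Draw many samples of size n; theta-hat is max|X_k| within each sample
x = rng.uniform(-theta, theta, size=(reps, n))
theta_hat = np.abs(x).max(axis=1)

print("empirical bias:", theta_hat.mean() - theta)  # about -theta/(n+1) = -0.1818...
print("empirical var :", theta_hat.var())           # about n*theta^2/((n+1)^2*(n+2)) = 0.02755...
```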