EDIT: Here are two methods.
Your function is always convex, proper, and continuous (hence lower semicontinuous). Under mild conditions -- e.g., when $a\neq\mathbf{0}$ -- it is also coercive, which guarantees the existence of a minimizer (if $a=\mathbf{0}$, the problem is trivial and every real number is a minimizer). Since a minimizer is guaranteed to exist, let us proceed to find one.
Method 1: The function $g\colon x\mapsto \|ax-b\|_1$ can be viewed as a linear operator applied to $x$ followed by application of a translated $1$-norm, i.e., $g= f\circ L$, where $L\colon\mathbb{R}\to\mathbb{R}^N\colon x\mapsto ax$ and $f=\|\cdot-b\|_1 $.
The goal here is to use a product-space reformulation (e.g., as described in Section 3.1 here) together with a splitting algorithm (e.g., Douglas-Rachford / ADMM). The reformulation replaces minimizing $f\circ L$ over $\mathbb{R}$ with minimizing $f(y)+\iota_G(x,y)$ over $(x,y)\in\mathbb{R}\times\mathbb{R}^N$, where $\iota_C$ denotes the $0$-$\infty$ indicator function of a set $C$ and $G=\operatorname{gra}L=\{(x,ax):x\in\mathbb{R}\}$ is the graph of $L$ (in the several-function version of the reformulation, the diagonal subspace $D$ from the product-space literature also appears). Now, to use these algorithms, you need $\textrm{prox}_f$ and the projection onto $G$.
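For this specific $L\colon x\mapsto ax$, the projection onto $G$ can also be written down directly (a standard least-squares computation, included here for convenience rather than taken from the reference): for $(x,y)\in\mathbb{R}\times\mathbb{R}^N$,
$$P_G(x,y)=\bigl(t^{*},\,t^{*}a\bigr),\qquad t^{*}=\frac{x+\langle a,y\rangle}{1+\|a\|^{2}},$$
since $t^{*}$ is the minimizer of $t\mapsto(t-x)^{2}+\|ta-y\|^{2}$ over $\mathbb{R}$.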
Regarding the prox of $f$: for any $\lambda\in\left]0,+\infty\right[$, the proximity operator of $\lambda \|\cdot\|_1$ is the component-wise soft thresholder with parameter $\lambda$, which I'll call $\textrm{soft}_\lambda$. The following propositions can be found in Bauschke & Combettes' book (2nd edition). It follows from Proposition 24.8(ii) that $\textrm{prox}_{\lambda f}(x)=b+\textrm{soft}_{\lambda}(x-b)$. Several formulae for the projection onto $G$ are available in the same book, page 540.
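To make Method 1 concrete, here is a minimal sketch in Python/NumPy of the Douglas-Rachford iteration applied to $\iota_G+f$ on $\mathbb{R}\times\mathbb{R}^N$. The function names (`soft`, `prox_f`, `proj_graph`, `dr_min_abs_residual`), the choice $\gamma=1$, and the fixed iteration count are my own illustrative choices rather than anything from the reference; it assumes `a` and `b` are 1-D NumPy arrays.

```python
import numpy as np

def soft(u, gamma):
    """Component-wise soft thresholding with parameter gamma."""
    return np.sign(u) * np.maximum(np.abs(u) - gamma, 0.0)

def prox_f(y, b, gamma):
    """prox of gamma * ||. - b||_1, i.e. b + soft_gamma(y - b) (Prop. 24.8(ii))."""
    return b + soft(y - b, gamma)

def proj_graph(x, y, a):
    """Projection of (x, y) onto G = {(t, t*a) : t real} via the least-squares formula."""
    t = (x + a @ y) / (1.0 + a @ a)
    return t, t * a

def dr_min_abs_residual(a, b, gamma=1.0, n_iter=500):
    """Douglas-Rachford on iota_G + f; returns an approximate minimizer of ||a*x - b||_1."""
    zx, zy = 0.0, np.zeros_like(b)      # governing DR sequence z = (zx, zy)
    for _ in range(n_iter):
        wx, wy = proj_graph(zx, zy, a)  # prox of iota_G = projection onto the graph
        rx, ry = 2 * wx - zx, 2 * wy - zy
        vx, vy = rx, prox_f(ry, b, gamma)  # prox of f acts only on the y-block
        zx, zy = zx + (vx - wx), zy + (vy - wy)
    return proj_graph(zx, zy, a)[0]     # x-component of the shadow point
```

For example, `dr_min_abs_residual(np.array([1.0, 2.0, -1.0]), np.array([3.0, 1.0, 2.0]))` returns an approximate minimizer of $\|ax-b\|_1$.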
Method 2: The subgradient descent algorithm (using subgradients of the objective function) is also guaranteed to work if the step sizes are chosen properly. The relevant vocabulary in the literature here is "subgradient projectors", and a good reference is Section 29.6 of Bauschke & Combettes' book (2nd edition).
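A minimal sketch of Method 2, using a valid subgradient of $x\mapsto\|ax-b\|_1$ and diminishing, non-summable step sizes $c/\sqrt{k+1}$, and returning the best iterate found; the constant `c`, the starting point, and the iteration count are arbitrary illustrative choices, not prescribed by the reference.

```python
import numpy as np

def subgradient_method(a, b, x0=0.0, c=1.0, n_iter=5000):
    """Subgradient method for g(x) = ||a*x - b||_1; returns the best iterate seen."""
    x = x0
    best_x, best_val = x0, np.sum(np.abs(a * x0 - b))
    for k in range(n_iter):
        r = a * x - b
        s = np.sum(a * np.sign(r))        # a subgradient of g at x (sign(0) = 0 is valid)
        x = x - (c / np.sqrt(k + 1)) * s  # diminishing, non-summable step sizes
        val = np.sum(np.abs(a * x - b))
        if val < best_val:                # track the best objective value so far
            best_x, best_val = x, val
    return best_x
```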