The effort required, and the best method, to solve the Discrete Logarithm Problem $g^x \equiv b\pmod p$ for $x$, with $p$ prime, depends on characteristics of $p$ beyond its being a prime of a given size, and on what we know or can guess about $x$, possibly from tests on $b$.
In the first part of this answer we consider only primes $p$ of 256-bit order of magnitude, as in revision 4 of the question, and assume $x$, $g$ and $b$ are unremarkable.
In particular, the size of the largest prime factor $q$ of $p-1$ is critical to the applicability of the Pohlig–Hellman algorithm, whose cost is generally dominated by roughly $2\sqrt q$ modular multiplications (with Baby-Step/Giant-Step to find $x\bmod q$; a little more with Pollard's rho). The probability that the largest prime factor of a random integer near $p$ is less than $q$ is roughly $\rho(\ln(p-1)/\ln(q))$, where $\rho$ is the Dickman function. E.g. $\rho(4)=0.0049\ldots$ tells us there is roughly one chance in 200 that, for a random 256-bit prime $p$, all prime factors of $p-1$ are less than 64 bits, which would make Pohlig–Hellman worth considering. And because it's relatively easy to screen $p$ for that condition, if we wanted to break one of thousands of DLPs $g^x\equiv b\pmod p$ with random $p$, we could pick the one on which to concentrate our efforts.
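As a sanity check on that figure, $\rho$ can be tabulated numerically from its delay differential equation $u\,\rho'(u)=-\rho(u-1)$ with $\rho=1$ on $[0,1]$; the sketch below (the function name and step size are my choices, not from any library) integrates it with the trapezoid rule:

```python
def dickman_rho(u_max, h=1e-3):
    # Tabulate Dickman's rho on a uniform grid of step h (h must divide 1
    # exactly), using rho(u) = rho(u-h) - integral_{u-h}^{u} rho(t-1)/t dt,
    # which is equivalent to u*rho'(u) = -rho(u-1); trapezoid rule per step.
    n1 = round(1 / h)                  # number of grid steps spanning length 1
    n = round(u_max / h)
    rho = [1.0] * (n1 + 1)             # rho(u) = 1 for 0 <= u <= 1
    for i in range(n1 + 1, n + 1):
        f_prev = rho[i - 1 - n1] / ((i - 1) * h)  # rho(t-1)/t at left endpoint
        f_curr = rho[i - n1] / (i * h)            # at right endpoint (index i-n1 <= i-1, already known)
        rho.append(rho[i - 1] - h * (f_prev + f_curr) / 2)
    return rho[n]

print(dickman_rho(2.0))   # about 0.30685, i.e. 1 - ln 2
print(dickman_rho(4.0))   # about 0.0049, i.e. roughly 1 chance in 200
```

The value at $u=4$ matches the $\rho(4)=0.0049\ldots$ quoted above.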
If $p$ is a safe prime, or more generally if $p-1$ does not have a particularly small largest prime factor, and the $x$ to be found has no special characteristic (e.g. being small), Pohlig–Hellman is of little help, and above some size (most probably for $p$ larger than 128-bit), the algorithms of choice become Index Calculus, then the DLP variant of the Number Field Sieve.
For general $p$, the cost of GNFS (the algorithm used in the 795-bit record) is (see L-notation)
$$\exp\Biggl(\left(\sqrt[3]{\frac{64}9}+o(1)\right)(\ln p)^{\frac13}(\ln\ln p)^{\frac23}\Biggr)=L_p\left[\frac13,\sqrt[3]{\frac{64}9}\,\right]$$
For some rare $p$ (including, I think, the 78-digit safe primes $2^{256}-36113$ and $2^{256}+230191$), SNFS is applicable, with cost $L_p\left[\frac13,\sqrt[3]{\frac{32}9}\,\right]$. Index Calculus has cost $L_p\left[\frac12,\sqrt2\,\right]$, and is much easier to code.
Ignoring the $o(1)$ because we lack data about it, here is a plot of the base-2 logarithm of these quantities as a function of the bit size of $p$. Be aware that, at the very least, the curves are offset vertically by a considerable amount, and the data on the left is especially unreliable, with Index Calculus in a better position than depicted.

For GNFS, we get that 256-bit $p$ is approximately $2^{31}$ (about 2 billion) times easier than the record 795-bit. This is to be taken with a ton of salt: it could as well be 100 million, or 10 million. Still, MUCH easier. So instead of 3000 core⋅years, we are talking core⋅minutes, perhaps a few core⋅hours. The effort will be dominated by getting the code running. And it's possible that Index Calculus is better from this standpoint.
The question is for $p$ the largest prime less than $2^{256}-2^{32}$, used as the field order in secp256k1 for reasons discussed here. $p-1$ has a large prime factor, thus Pohlig–Hellman would not, by itself, allow finding a random $x$. However, our $x$ is enormously smaller than would be expected for a random $x\in[1,p)$: we have a 123-bit $x$, which has probability $<2^{-133}$ of occurring by chance, thus this can't be accidental.
So we tackle the problem: knowing the 256-bit $p$, the rather typical factorization of $p-1$, $g=5$, $b=g^x\bmod p$, and that $0<x<2^{123}$ (or some similar upper bound that can't occur by chance), how do we find $x$?
Here, the problem is such that $g$ is a generator; that is, for each prime $r_i$ dividing $p-1$, it holds that $g^{(p-1)/r_i}\bmod p\ne1$. This means we can find $x$ modulo each $r_i$.
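That condition is cheap to check given the factorization of $p-1$; a minimal sketch with a toy prime (the function name is mine; the real 256-bit $p$ works the same way once the full factorization is known):

```python
def is_generator(g, p, prime_factors):
    # g generates all of Z_p^* iff g^((p-1)/r) != 1 (mod p)
    # for every prime r dividing p-1
    return all(pow(g, (p - 1) // r, p) != 1 for r in prime_factors)

# toy example: p = 13, p - 1 = 2^2 * 3, so the primes to test are 2 and 3
print(is_generator(2, 13, [2, 3]))   # True: 2 is a primitive root mod 13
print(is_generator(3, 13, [2, 3]))   # False: 3 has order 3, since 3^3 = 27 = 1 (mod 13)
```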
For each $i$, we can find $x_i=x\bmod r_i$ by computing $g_i=g^{(p-1)/r_i}\bmod p$ and $b_i=b^{(p-1)/r_i}\bmod p$, which are such that ${g_i}^x\bmod p=b_i$. Since by Fermat's Little Theorem $g^{p-1}\bmod p=1$, the element $g_i$ has order $r_i$, and thus ${g_i}^{x_i}\bmod p=b_i$.
For small $r_i$ (here $r_0=2$, $r_1=3$, $r_2=7$), we can find $x_i$ by trying at most $r_i$ values, at one modular multiplication each. For medium $r_i$ (here $r_3=13441$), we have a choice between enumeration, which is workable, and Baby-Step/Giant-Step, which reduces the core of the search to about $2\lceil\sqrt{r_i}\,\rceil=232$ modular multiplications.
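A Baby-Step/Giant-Step sketch, for $g$ of known order $r$ modulo prime $p$ (the function name and toy parameters are mine; in our problem it would be applied to $g_3$, $b_3$ with $r=13441$):

```python
from math import isqrt

def bsgs(g, b, p, r):
    # Solve g^x = b (mod p) for x in [0, r), where g has order r modulo the
    # prime p. Baby steps: table of g^j for j < m; giant steps: b * (g^-m)^i.
    m = isqrt(r - 1) + 1                    # ceil(sqrt(r))
    table = {}
    gj = 1
    for j in range(m):
        table.setdefault(gj, j)
        gj = gj * g % p
    gm_inv = pow(g, p - 1 - m, p)           # g^-m mod p, by Fermat's Little Theorem
    y = b % p
    for i in range(m):
        if y in table:
            return i * m + table[y]         # x = i*m + j
        y = y * gm_inv % p
    return None                             # b is not in the subgroup <g>

# toy example: 2 is a primitive root mod 101 (order r = 100), and 2^73 = 48 (mod 101)
print(bsgs(2, 48, 101, 100))   # 73
```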
So we find $x_i=x\bmod r_i$ (here $x_0=0$, $x_1=1$, $x_2=3$, $x_3=2861$) for the small $r_i$ dividing $p-1$ (here $r_0=2$, $r_1=3$, $r_2=7$, $r_3=13441$). By the Chinese Remainder Theorem, that gives us the value of $x\bmod(r_0\,r_1\,r_2\,r_3)$, that is $x\bmod564522=70066$. We can now define the (unknown) $x'$ such that $x=564522\,x'+70066$, compute $g'=g^{564522}\bmod p$ and $b'=g^{-70066}\,b\bmod p$, and we have reduced our problem to finding $x'$ with ${g'}^{x'}\bmod p=b'$, where $x'$ is much smaller (reduced from 123 to 104 bits). See this for the extension to $p-1$ having small primes with multiplicity in its factorization.
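The recombination and reduction steps can be sketched as follows (the helper name is mine; the residues and moduli are the ones above, so the output can be checked against $x\bmod564522=70066$):

```python
from math import prod

def crt(residues, moduli):
    # Chinese Remainder Theorem for pairwise coprime moduli
    # (pow(Mi, -1, m) needs Python 3.8+)
    M = prod(moduli)
    x = sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli))
    return x % M, M

x_small, M = crt([0, 1, 3, 2861], [2, 3, 7, 13441])
print(x_small, M)   # 70066 564522

# The reduction itself, given the actual p, g, b, would then be:
#   g_prime = pow(g, M, p)
#   b_prime = pow(g, -x_small, p) * b % p
# leaving the smaller problem g_prime^x' = b_prime (mod p), with x = M*x' + x_small.
```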
Now, if we had enough memory and computing power, we could use Baby-Step/Giant-Step again, which would use on the order of $2^{104/2+1}$ modular multiplications modulo $p$. With 256-bit arguments broken into $k=4$ (64-bit) computer words, and for small $k$, a multiplication costs $k^2=16$ word multiplications and additions with carry, and a modular reduction a little more (here $p$ has a special form that helps; I'll ignore that), so I guesstimate $2^{10\pm4}$ clock cycles per modular multiplication modulo $p$. So we are talking $2^{63\pm4}$ clock cycles, ignoring memory issues, on a CPU with a 64×64→128-bit multiplier. Pollard's rho solves the memory issues (at the cost of some more work) and eases parallelization. But still, $2^{63\pm4}$ cycles is decades.
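For completeness, the rho method itself is short; a toy sketch in a prime-order subgroup (the function name, the mod-3 walk partition, and the toy parameters $p=107$, $q=53$ are my choices), using Floyd cycle-finding and random restarts when a collision is degenerate:

```python
import random

def pollard_rho_dlog(g, h, p, q, max_tries=50):
    # Solve g^x = h (mod p), where g has prime order q modulo the prime p.
    # Walk over elements y = g^a * h^b, branching on y mod 3; a collision
    # y1 = y2 gives a1 + x*b1 = a2 + x*b2 (mod q), solvable when b1 != b2.
    def step(y, a, b):
        s = y % 3
        if s == 0:
            return y * y % p, 2 * a % q, 2 * b % q
        if s == 1:
            return y * g % p, (a + 1) % q, b
        return y * h % p, a, (b + 1) % q

    for _ in range(max_tries):
        a = random.randrange(q)
        b = random.randrange(q)
        y = pow(g, a, p) * pow(h, b, p) % p
        y2, a2, b2 = y, a, b
        while True:
            y, a, b = step(y, a, b)                 # tortoise: one step
            y2, a2, b2 = step(*step(y2, a2, b2))    # hare: two steps
            if y == y2:
                break
        if (b - b2) % q != 0:                       # non-degenerate collision
            return (a2 - a) * pow(b - b2, q - 2, q) % q
    return None

# toy example: 4 has order 53 mod 107, and 4^29 = 75 (mod 107)
print(pollard_rho_dlog(4, 75, 107, 53))   # 29
```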
So we'll have to use SNFS (rather than GNFS; I think it applies, because $p$ is close to a power of a small prime), or perhaps Index Calculus. I think that SNFS won't be able to take advantage of our smaller $x'$. If Index Calculus can, which I do not rule out, it could be the algorithm of choice. I do not know, and asked.