What does it mean for $m$ iterated square roots of some number $x$ to have a floor of $n$?
$n = \left\lfloor\overbrace{\sqrt{\ \ldots \sqrt{x}}}^{m\text{ square roots}}\right\rfloor\tag*{}$
Well, each square root is raising to the $0.5$ power, so doing this $m$ times gives us $x^{2^{-m}}$. And the floor being $n$ means this value is at least $n$ but less than $n+1$.
So we have
$$n\le x^{2^{-m}} < n+1$$
Raising each part to the $2^m$ power (everything here is positive, so the inequalities are preserved), we have
$$n^{2^m} \le x < (n+1)^{2^m}$$
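As a quick numerical sanity check of this equivalence, here's a small Python sketch. It uses the standard identity that nesting integer square roots computes $\lfloor x^{2^{-m}}\rfloor$ exactly, so `math.isqrt` avoids any floating-point issues (the function name and the specific test values are just illustrative choices):

```python
import math

def floor_iterated_sqrt(x: int, m: int) -> int:
    """Floor of m iterated square roots of x.

    For integers, floor(sqrt(floor(sqrt(x)))) = floor(x^(1/4)), etc.,
    so nesting exact integer square roots gives floor(x^(2^-m)).
    """
    for _ in range(m):
        x = math.isqrt(x)
    return x

# Check: the floor of the m-fold square root of x is n
# exactly when n^(2^m) <= x < (n+1)^(2^m).
m, n = 3, 5
lo, hi = n ** (2 ** m), (n + 1) ** (2 ** m)  # 5^8 = 390625, 6^8 = 1679616
assert floor_iterated_sqrt(lo, m) == n       # left endpoint included
assert floor_iterated_sqrt(hi - 1, m) == n   # last value below right endpoint
assert floor_iterated_sqrt(lo - 1, m) == n - 1
assert floor_iterated_sqrt(hi, m) == n + 1   # right endpoint excluded
```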
So, for a given $n$, the question is whether any of the intervals $[n, n+1), [n^2, (n+1)^2), [n^4, (n+1)^4), \ldots$ contains $x$.
Taking logarithms, this asks whether $\log(x)$ lies in any of the intervals $[\log(n), \log(n+1)), [2\log(n), 2\log(n+1)), \ldots$.
Equivalently, if we start with $\log(x)$ and keep dividing by $2$, the question is whether we ever pass through the interval $[\log(n),\log(n+1))$.
Taking logs again, this asks whether, if we start with $\log_2(\log_2(x))$ and repeatedly subtract $1$, we'll ever land in the interval $[\log_2(\log_2(n)), \log_2(\log_2(n+1)))$.
So we're effectively asking about the fractional part of $\log_2(\log_2(x))$ - the remainder after we take out the integer part - and whether it happens to lie in the very tight interval between the fractional parts of these two adjacent double-logs. E.g., if $n=100$, we're asking whether $\log_2(\log_2(x)) \bmod 1$ is in the range $[0.73202, 0.73513)$.
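Those endpoint values for $n = 100$ are easy to reproduce numerically; here's a minimal Python check (the helper name is just an illustrative choice):

```python
import math

def frac_log2_log2(x: float) -> float:
    """Fractional part of log2(log2(x))."""
    return math.log2(math.log2(x)) % 1.0

# Endpoints of the target interval for n = 100:
lo = frac_log2_log2(100)  # ~0.73202
hi = frac_log2_log2(101)  # ~0.73513
print(f"target interval: [{lo:.5f}, {hi:.5f})")
```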
Now, the fractional part of the binary logarithm of the binary logarithm of an iterated factorial is... kind of just some random number? Iterated factorials grow super quickly - much faster than double exponentials - so even on a log-log scale, they're just jumping all over the map, so there's no particular reason to expect a pattern to their remainders. (It's not like iterated factorials have some well-known deep connection to numbers of the form $2^{2^n}$.)
So, heuristically, we should basically expect each new iterated factorial's double logarithm's fractional part to be a pseudorandom draw from the interval $[0,1]$, without much additional structure. And if we make those draws enough times, eventually we'll get one in our target interval "by chance", since there was a positive "probability" of it happening each time.
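We can actually peek at the first few of these "draws". The excerpt doesn't pin down where the factorial tower starts, so starting at $3$ is an illustrative choice; working with $\ln(x)$ rather than $x$ itself (via the standard-library `math.lgamma`, since $\ln(k!) = \operatorname{lgamma}(k+1)$) lets us handle $720!$ without ever building the huge number. Going deeper than three levels would require arbitrary-precision arithmetic:

```python
import math

LN2 = math.log(2.0)

def frac_ll_from_ln(ln_x: float) -> float:
    """Fractional part of log2(log2(x)), computed from ln(x) so we
    never have to hold the astronomically large number x itself."""
    return math.log2(ln_x / LN2) % 1.0

# The first few iterated factorials starting from 3:
# 3! = 6, (3!)! = 720, ((3!)!)! = 720!.
for label, ln_x in [("3!", math.log(6)),
                    ("(3!)!", math.log(720)),
                    ("((3!)!)!", math.lgamma(721))]:
    print(f"{label:>9}: frac(log2(log2(x))) = {frac_ll_from_ln(ln_x):.5f}")
```

The three fractional parts land in quite different places in $[0,1)$, consistent with the "pseudorandom draw" picture (though of course three data points prove nothing).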
Of course, the above is not a proof at all! It's completely non-rigorous! The iterated factorials are completely deterministic in nature, and maybe they have some crazy relationship to double binary logarithms that no one's seen yet. I certainly can't prove otherwise!
But absent some reason to think that there's a weird structure like this, you should have a strong prior that things probably just behave kind of randomly, and the conjecture will be true "by chance" - not for a deep reason, but just because it would be extremely weird for it not to be. Like, why would these pseudorandom numbers from $[0,1]$ keep avoiding the interval $[0.732, 0.735]$ for all time? That would be so bizarre.
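To put a rough number on "true by chance": under the (heuristic, entirely unproven) model that each draw is uniform and independent, the probability that $k$ draws all miss an interval of width $p \approx 0.00311$ is $(1-p)^k$, which decays to zero:

```python
# Width of the n = 100 target interval, and the heuristic probability
# that k independent uniform draws from [0, 1) all miss it.
p = 0.73513 - 0.73202  # ~0.00311
for k in (100, 1_000, 10_000):
    print(f"k = {k:>6}: miss probability ~ {(1 - p) ** k:.2e}")
```

By $k = 10{,}000$ draws the "survival" probability is already below $10^{-13}$ - which is why, heuristically, we expect a hit eventually.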
This sort of heuristic argument is common in mathematics, and is often much, much easier to give than a formal proof; Terence Tao talks about this sort of thing in an excellent blog post. We have strong heuristic reasons to think that, e.g., $\pi$ is normal, or that the twin prime conjecture is true, even though these statements may be incredibly difficult (or impossible) to prove.
In summary: problems like these can be exceptionally difficult to prove, because they require ruling out some regular structure that we had no particular reason to expect in the first place. We could replace iterated factorials with iterated factorials plus $1$, or the Busy Beaver numbers, or the $TREE(n)$ function, or any sufficiently fast-growing function we like, and we'd have a similarly thorny-to-prove but simple-to-convince-yourself-of conjecture. Situations like this are very common in mathematics.