Let $g(x)$ be the number of digits in the base 10 representation of $x$.
I define $f(x)$, the slow-growing function in question, by the following process:
Take any starting number $x$ and compute $g(x)$. Then plug the result back into $g$, and repeat until the output is a single digit. If this takes $n$ applications of $g$ in total, then $f(x) = n$ (that is what the function returns).
Example: $f(10) = 1$ because $g(10) = 2$, which is a one-digit number; it took only one application of $g$ to produce a single digit. If $g(x) > 9$, then another application is required, so $f(x) > 1$.
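Here's a minimal Python sketch of this process (the names `g` and `f` just mirror the definitions above; note that Python 3.11+ caps `str()` conversion of large integers, so I raise the limit up front):

```python
import sys

# Python 3.11+ limits int-to-str conversion to 4300 digits by default;
# raise it so len(str(x)) works on very large inputs.
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(10_000_000)

def g(x):
    """Number of digits in the base-10 representation of x."""
    return len(str(x))

def f(x):
    """How many applications of g it takes to reach a single digit."""
    n = 0
    while True:
        x = g(x)
        n += 1
        if x <= 9:   # g produced a one-digit number: stop
            return n

print(f(10))  # 1, matching the example above
```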
When I wrote some Python code to mess around with this function, I was surprised at just how slowly it grows. For instance, $f(1,000,000^{1,000,000}) = 2$: since $1,000,000^{1,000,000} = 10^{6,000,000}$ has $6,000,001$ digits, $g(1,000,000^{1,000,000}) = 6,000,001$, and then $g(6,000,001) = 7$, a one-digit number, so there are $2$ applications of $g$ and the result is $2$. Although I'm too lazy to figure out the smallest number $n$ for which $f(n) = 3$, it is too large to brute force on my puny MacBook Air.
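For what it's worth, this particular value can be checked without ever materializing the $6,000,001$-digit integer, since $g(10^k) = k + 1$; a sketch along these lines starts the iteration from the digit count instead of the number:

```python
# 1,000,000**1,000,000 == 10**6_000_000, and g(10**k) == k + 1,
# so begin with the result of the first application of g.
digits = 6_000_000 + 1
n = 1                     # one application of g already counted
while digits > 9:         # keep applying g until one digit remains
    digits = len(str(digits))
    n += 1
print(n)                  # 2
```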
I know that things such as Graham's number are many, many, many orders of magnitude larger than $1,000,000^{1,000,000}$, but I wonder whether $f(G)$, where $G$ is Graham's number, is a comprehensible number. Does anyone have a way to estimate the approximate size of $f(G)$? My guess is that it is significantly less than the number of atoms in the observable universe, but I might just be naive.
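In case it helps frame the question: counting digits is essentially taking a logarithm, since

$$g(x) = \lfloor \log_{10} x \rfloor + 1,$$

so each application of $g$ roughly replaces $x$ by $\log_{10} x$, and $f$ grows like the iterated logarithm $\log_{10}^{*} x$.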