18

I've always wondered, after learning addition, multiplication, and power facts (and their inverse operations), what the next higher level of facts I would need to memorize would be. However, instead of introducing the next higher operator, math took an entirely different turn and went into all sorts of other things that never seemed to need any operator beyond those three and their inverses. From my perspective, operators went from addition to multiplication (repeated addition) to exponentiation (repeated multiplication), but I've never heard of any higher-order operator than exponentiation. So recently I tried to learn about this operator myself. I started like this: repeated addition = multiplication, repeated multiplication = exponentiation, repeated exponentiation = ?

However, I see that because exponentiation is non-associative,
$$k^{\left( k^{\left( k^{\left( \cdots \right)} \right)} \right)} \neq \left( \left( \left( k^k \right)^k \right)^k \right)^{\cdots}$$ there are two ways to repeat exponentiation, leading to two new operators. One of them is tetration, which uses right-associative exponentiation. If I represent the other, left-associative operator as $?$, then:

$$k+k+k+k = k * 4\text{,}$$

$$k * k * k * k = k ^ 4\text{, and}$$

$$\left(\left(\left(k\right)^k\right)^k\right)^k = k ? 4\text{.}$$

It seems that something close to what I want is $\left(\left(k^k\right)^k\right)^k = k^{(k^3)}$, and while this could serve as a shorthand, it doesn't follow the convention of $k$ (operator) $4$. I think left-associative exponentiation is still the way that stays consistent with repeated addition and repeated multiplication; it's what I usually think of when I think of repeated exponentiation.
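To make the distinction concrete, here is a small Python sketch (the function names are mine) comparing the two groupings; note how the left-associative tower collapses to $k^{(k^{n-1})}$:

```python
def right_tower(k, n):
    """k^(k^(...^k)) with n copies of k, evaluated top-down (tetration-style)."""
    result = k
    for _ in range(n - 1):
        result = k ** result
    return result

def left_tower(k, n):
    """(((k)^k)^k)^...^k with n copies of k, evaluated bottom-up.
    Collapsing the exponents shows this equals k^(k^(n-1))."""
    result = k
    for _ in range(n - 1):
        result = result ** k
    return result

print(right_tower(2, 3), left_tower(2, 3))  # 16 16 (they happen to agree here)
print(right_tower(2, 4))                    # 2^(2^(2^2)) = 65536
print(left_tower(2, 4))                     # ((2^2)^2)^2 = 256 = 2^(2^3)
```

The two notions first diverge at a tower of height four for base $2$, and the gap widens explosively from there.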

Then, according to the interwebs, it seems that tetration is usually considered the "next" operation after exponentiation. However, it doesn't seem to be very common or useful; not much can actually be represented with tetration. Tetration basically just gives you big numbers. For example, $2^{2^{2^2}} = 2^{16} = 65536$, and one more level gives $2^{65536}$, a number with nearly $20{,}000$ digits, and that's with really small inputs.
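A quick Python sketch makes the growth rate concrete (the function name is mine; Python's arbitrary-precision integers handle $2^{65536}$ without trouble):

```python
def tetrate(k, n):
    """Right-associative tetration k^^n: k^(k^(...^k)) with n copies of k."""
    result = 1
    for _ in range(n):
        result = k ** result
    return result

# Digit counts explode almost immediately.
for n in range(1, 6):
    digits = len(str(tetrate(2, n)))
    print(f"2^^{n} has {digits} digit(s)")
# 2^^4 = 65536 has 5 digits, while 2^^5 = 2^65536 already has 19729
# digits; 2^^6 has far more digits than there are atoms in the
# observable universe.
```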

My question then is: Why are addition, multiplication, and exponentiation (plus their three inverses) incredibly useful, whereas tetration is the sudden cutoff for usefulness?

I mean, surely the reason for repeated operators is not only to represent bigger numbers? There is a myriad of useful things for addition, multiplication, and exponentiation and their inverses, but suddenly nothing useful from tetration? That doesn't make sense. Multiplication gives you more advanced math than just addition, and exponentiation gives more advanced math than just multiplication, so why doesn't tetration give you even more advanced math? Or are there actually advanced uses for tetration, at least as diverse and meaningful as the previous three, that I don't know about?

EDIT: So far, the general consensus seems to be that tetration is in fact not useful; most of the answers/comments reiterate this. But my central confusion is WHY this is. Multiplication seems to model more advanced real-world applications than addition: for example, multiplication finds area or volume, versus addition for counting apples or sheep. Going by this pattern, exponentials seem to model even more advanced real-world situations than multiplication, like expressing gravitational acceleration as $g = 9.8\ \mathrm{m\,s^{-2}}$, or expressing polynomials, which have a myriad of advanced applications. So does tetration have even more real-world applications than exponentiation? It appears not, thus far. In fact, tetration seems to have even fewer applications than plain old addition. This is mind-boggling to me. (Does this mean that humanity simply hasn't reached that level of math yet?)

  • Simple, incomplete answer: Because tetration quickly leads to numbers so much larger than anything anyone will ever measure. Example: $\mbox{}^2 4 > 10^{75}$ times the number of atoms in the universe. – David G. Stork Nov 06 '21 at 21:17
  • https://www.johndcook.com/blog/2018/04/10/up-arrow-and-down-arrow-notation/ – Elchanan Solomon Nov 06 '21 at 21:17
  • Tetration requires the base and the exponent to be the "same kind" of thing. And that's just rare in general. Maybe that's why? – Arthur Nov 06 '21 at 21:30
  • Part of the answer is that tetration seems to have no useful combinatorial meaning like the three previous operations do. More specifically, we can define a natural version of addition, multiplication and exponentiation between any two sets of things, but tetration is only naturally defined between a set and an ordinal (a very special kind of set whose elements are ordered). See e.g. here and here. – pregunton Nov 06 '21 at 21:36
  • @DaveL.Renfro Please convert your comments into an answer & delete them. – J.G. Nov 07 '21 at 07:07
  • Does this mean that humanity simply hasn't reached that level of math yet? --- Relevant thoughts I've made about our limitations in comments: my comments to this question and my comments here about quantifier alterations. – Dave L. Renfro Nov 08 '21 at 11:22
  • @J.G. OK, I've converted the comments to an answer and (in a few seconds) will delete my earlier comments. – Dave L. Renfro Nov 08 '21 at 11:24
  • What would the inverse of tetration look like? Just as logs are the inverse of exponents. – richard1941 Nov 11 '21 at 20:51
  • The main purpose of tetration is to produce really large numbers, although $a\uparrow \uparrow b$ makes sense also if $a$ is a positive real number. If "useful" means that it has applications in physics, statistics or economy, the answer is almost surely negative. – Peter Feb 01 '22 at 13:31
  • @DavidG.Stork We need $3$ fours in the power tower; $2$ fours produce just $256$. $3$ fours beat already $googol=10^{100}$ and $4$ fours beat $googolplex=10^{10^{100}}$, but the comment is good since it shows that usually tetration numbers are beyond "astronomical". – Peter Feb 01 '22 at 14:50

6 Answers

9

Multiplication isn't useful because it can be defined as repeated addition; it is useful because it can be defined in various different but equivalent ways, and therefore is useful in many situations where those different definitions might arise naturally. Likewise, to a lesser extent, exponentiation.

When something has multiple equivalent definitions, then not only is it more likely to appear in a problem context, but it's more useful when it does appear, because it means you can think about the problem in different ways, some of which might be more conducive to a solution.

In contrast, tetration can only really be defined as repeated exponentiation, so it only arises naturally in situations where you want to do repeated exponentiation for some reason. The only examples which come to mind are some combinatorial proofs by induction where the inductive step itself is proved by induction.

kaya3
  • 1,311
8

As with many mathematical things: until an idea demonstrably interacts with many other things, it simply doesn't matter much, even if it has its own charm. Many other ideas do matter more, in a relative sense, so they get more attention.

No, it's not possible to know in advance which ideas will have wider interactions than others. So from an "abstract" viewpoint there's no obvious reason why "tetration" shouldn't be as important as exponentiation or multiplication. But, "as it happens", at this moment it simply doesn't appear to be, in practical/action-oriented terms.

paul garrett
  • 52,465
7

What follows is a greatly expanded version (essentially an excerpt from my personal notes on ordinal arithmetic with operations beyond exponentiation) of my earlier comments about one of the things you bring up in your question. I originally used comments because what I said didn’t really address your question. However, someone has requested that I convert those comments into an answer.

When defining tetration and higher-order operations on transfinite ordinal numbers, what you call the right-associative version doesn’t work very well, and the left-associative version is used. To illustrate this problem with transfinite ordinals, I’ll give a summary of some introductory aspects of tetration for transfinite ordinal numbers. For basic definitions and results about addition, multiplication, and exponentiation of ordinal numbers, see this 22 September 2006 sci.math post (25 September 2006 revised version). Incidentally, I planned to continue those posts (see here, for example), but I wound up getting very busy at work (my day job, which had nothing to do with these pursuits), plus I later felt that trying to write all this stuff in ASCII format for posting was too much of a time-sink. At some point I plan to provide a survey of “higher order operations for ordinal numbers” as an answer to some Stack Exchange question (see my comments to this question), but I don’t know when I’ll get around to doing it (could be several years from now).

Ordinal Tetration. Fix an ordinal $\alpha$. We define $\, \sideset{_{}^\beta}{}\alpha \,$ by transfinite induction on $\beta$ as follows.

(base case) $\;\; \sideset{_{}^0}{}\alpha = 1\;$ and $\; \sideset{_{}^1}{}\alpha = \alpha$

(successor case) $\;\; \sideset{_{}^{\beta + 1}}{}\alpha \; = \; \left(\sideset{_{}^{\beta}}{}\alpha \right)^{\alpha} \;$ for each $\; \beta \geq 1$

(limit case) $\;\; \sideset{_{}^{\lambda}}{}\alpha \; = \; \sup \left\{\sideset{_{}^{\beta}}{}{\alpha}: \; \beta < \lambda \right\}\;$ if $\; \lambda \;$ is a nonzero limit ordinal

Ordinal Tetration vs Usual Tetration. In the case of finite ordinals (i.e. non-negative integers), this is NOT the same as the usual tetration operation. For instance, $$\sideset{_{}^4}{}\alpha \; = \; \left( \sideset{_{}^3}{}{\alpha} \right)^{\alpha} \; = \; \left( \left( \sideset{_{}^2}{}{\alpha} \right)^{\alpha} \right)^{\alpha} \; = \; \left( \left( {\alpha}^{\alpha} \right)^{\alpha} \right)^{\alpha} \; = \; {\alpha}^{{\alpha}^{3}} $$ In fact, it follows from the result proved further below that $$\sideset{_{}^{\epsilon_0}}{}{\epsilon_0} \; = \; {\epsilon_0}^{{\epsilon_0}^{\epsilon_0}} $$ Moreover, we also have $\; \epsilon_0 \; = \; \sup\left\{\omega, \; \sideset{_{}^{\omega}}{}{\omega}, \; \sideset{_{}^{\sideset{_{}^{\omega}}{}{\omega}}}{}{\omega}, \; \ldots \right\}$, and hence it follows that $$ \epsilon_0 \; = \; {\omega}^{{\omega}^{{\omega}^{{\cdot}^{{\cdot}^{\cdot}}}}} \; = \; \sideset{_{}^{\sideset{_{}^{\sideset{_{}^{\sideset{_{}^{\sideset{_{}^{\cdot}}{}{\cdot}}}{}{\cdot}}}{}{\omega}}}{}{\omega}}}{}{\omega} $$
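For finite bases and heights, this bottom-up definition can be checked numerically; the sketch below uses ordinary Python integers in place of finite ordinals (the function name is mine):

```python
def weak_tetrate(alpha, beta):
    """Bottom-up (left-associative) tetration on non-negative integers:
    ^0 a = 1, ^1 a = a, and ^(b+1) a = (^b a)^a for b >= 1.
    The ^1 case is explicit because (^0 a)^a = 1^a = 1, not a."""
    if beta == 0:
        return 1
    result = alpha
    for _ in range(beta - 1):
        result = result ** alpha
    return result

# The collapsed form from the text: ^(n+1) a = a^(a^n).
for a in (2, 3):
    for n in range(4):
        assert weak_tetrate(a, n + 1) == a ** (a ** n)

print(weak_tetrate(2, 4))  # ((2^2)^2)^2 = 256 = 2^(2^3)
```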

Strong Tetration and Weak Tetration. There does not seem to be a standard term for this distinction in the literature, probably because this distinction does not arise very often. Mark Neyrinck’s May 1995 undergraduate thesis An Investigation of Arithmetic Operations uses the terms top-down and bottom-up. I propose, when these two types of tetration are being discussed together, to use the term strong tetration for the ordinary notion of tetration and the term weak tetration for the ordinal notion of tetration as defined above.

Top to Bottom Convention for Finite-Length and $\omega$-Length Towers of Exponentiation and Tetration. When writing finite-length towers of exponentiation or tetration, I will assume the evaluation is from top to bottom, as is standard convention. Also, the presence of a single ending ellipsis in such a tower represents the supremum of the corresponding finite-length towers that context suggests. Note, however, that when a finite-length TETRATED TOWER of ordinals appears, where we are assuming the evaluation is from top to bottom, the individual tetration operations that are to be performed are those of weak tetration.

Non-Associativity of Exponentiation. The presence of different notions of tetration is due to the non-associativity of exponentiation. For a fixed sequence of FINITE ordinals (i.e. natural numbers), ordinary tetration (evaluate repeated exponentiation from top to bottom) generally results in the greatest value, and the notion we're using (evaluate repeated exponentiation from bottom to top) generally results in the least value, from among all the possible methods of evaluating repeated exponentiation. [For more on this issue, see Göbel/Nederpelt, The number of numerical outcomes of iterated powers, American Mathematical Monthly 78 #10 (December 1971), 1097-1103.] Doner/Tarski’s 1969 paper An extended arithmetic of ordinal numbers explains on p. 113 that there seem to be insurmountable problems in formulating a useful notion of tetration for ordinals in which repeated exponentiation is evaluated from top to bottom.

Why Strong Tetration is Not Useful for Transfinite Ordinals. The ordinal $\epsilon_0$ is useful in seeing what goes wrong. Suppose we define tetration in the usual way (i.e. strong tetration), so that the successor case is defined as $\; \sideset{_{}^{\beta + 1}}{}\alpha \; = \; {\alpha}^{\left(\sideset{_{}^{\beta}}{}\alpha \right)} \;$ for $\; \beta \geq 1.$ Then under this definition of tetration we would have $\;\sideset{_{}^{2}}{}\omega = {\omega}^{\omega},$ $\; \sideset{_{}^{3}}{}\omega = {\omega}^{{\omega}^{\omega}},$ $\;\sideset{_{}^{4}}{}\omega = {\omega}^{{\omega}^{{\omega}^{{\omega}}}}, \; \dots,$ $\;\sideset{_{}^{\omega}}{}\omega = \epsilon_0.$ [We would keep the limit case of the definition the same, of course.] But now watch what would happen after this: $\; \sideset{_{}^{\omega + 1}}{}\omega = {\omega}^{\left(\sideset{_{}^{\omega}}{}\omega \right)} = {\omega}^{\epsilon_0} = \epsilon_0,\;$ and then $\; \sideset{_{}^{\omega + 2}}{}\omega = {\omega}^{\left(\sideset{_{}^{\omega + 1}}{}\omega \right)} = {\omega}^{\epsilon_0} = \epsilon_0,\;$ and then $\; \sideset{_{}^{\omega + 3}}{}\omega = {\omega}^{\left(\sideset{_{}^{\omega + 2}}{}\omega \right)} = {\omega}^{\epsilon_0} = \epsilon_0,\;$ and so on. Thus, we would have $\; \sideset{_{}^{\omega + n}}{}\omega = \epsilon_0\;$ for each $n < \omega.$ Therefore, since $\; \sideset{_{}^{\omega \cdot 2}}{}\omega = \sup\left\{\sideset{_{}^{\omega + n}}{}\omega: \; n < \omega \right\},\;$ we would then get $\; \sideset{_{}^{\omega \cdot 2}}{}\omega = \epsilon_0.$ In fact, one can show by straightforward transfinite induction that $\; \sideset{_{}^{\beta}}{}\omega = \epsilon_0 \;$ for each $\; \beta \geq \omega$ if strong tetration is used.

Two Formulas for $\;\sideset{_{}^{\beta}}{}\alpha \;$ in Terms of Exponentiation:

$\;$ Let $\alpha \neq 1$ be an ordinal.

  1. $\;\; n < \omega \;$ implies $\;\; \sideset{_{}^{n+1}}{}\alpha \; = \; {\alpha}^{{\alpha}^{n}} $
  2. $\;\; \beta \geq \omega \;$ implies $\;\; \sideset{_{}^{\beta}}{}\alpha \; = \; {\alpha}^{{\alpha}^{\beta}} $

One Formula for $\;\sideset{_{}^{\beta}}{}\alpha \;$ is Possible. These two formulas can be replaced with the single formula $\; \sideset{_{}^{1 + \beta}}{}\alpha \; = \; {\alpha}^{{\alpha}^{\beta}},$ but I prefer putting the result into two separate formulas that are more directly comprehended. The two separate formulas are more directly comprehended because the reader does not have to mentally supply the result "$1 + \beta = \beta$ for $\beta \geq \omega$" to get the simpler version when $\beta$ is infinite. Also, I think there is potential for error when using $\; \sideset{_{}^{1 + \beta}}{}\alpha \; = \; {\alpha}^{{\alpha}^{\beta}},$ because the reader might mistakenly think $1 + \beta$ is a typo and that the writer had intended $\beta + 1$ instead.

Proof of First Formula. We use mathematical induction on $n < \omega$. (base case) The result for $n = 0$ follows from $\sideset{_{}^1}{}\alpha = \alpha$ and ${\alpha}^{{\alpha}^0} ={\alpha}^1 = \alpha$. (successor case) Assume the result holds for $n = k.$ Then we have $\sideset{_{}^{k+1}}{}\alpha = \left(\sideset{_{}^{k}}{}\alpha \right)^{\alpha} = \left( {\alpha}^{{\alpha}^k} \right)^{\alpha} = {\alpha}^{{\alpha}^{k} \cdot \alpha} = {\alpha}^{{\alpha}^{k+1}}$ (the induction hypothesis is used in the 2nd equality), which shows that the result holds for $n = k+1.$

Proof of Second Formula. We use transfinite induction for ordinals $\beta \geq \omega$. (base case) We show the result is true for $\beta = \omega.$ By definition, we have $\sideset{_{}^{\omega}}{}{\alpha} = \sup\left\{\sideset{_{}^{n}}{}{\alpha}: \; n < \omega \right\}$ which, by making use of monotonicity in the tetrated exponent, equals $\sup\left\{\sideset{_{}^{n+1}}{}{\alpha}: \; n < \omega \right\}.$ Using what we just proved in #1, this is equal to $\sup \left\{{\alpha}^{{\alpha}^{n}}: \; n < \omega\right\} = {\alpha}^{{\alpha}^{\omega}}.$ The last equality follows by an application of continuity in the exponent. (successor case) Assume the result holds for $\beta = \eta,$ where $\eta \geq \omega$ is fixed. Then we have $\sideset{_{}^{\eta + 1}}{}\alpha = \left(\sideset{_{}^{\eta}}{}\alpha \right)^{\alpha} = \left( {\alpha}^{{\alpha}^{\eta}} \right)^{\alpha} = {\alpha}^{{\alpha}^{\eta} \cdot \alpha} = {\alpha}^{{\alpha}^{\eta + 1}}$ (the induction hypothesis is used in the 2nd equality), which shows that the result holds for $\beta = \eta + 1.$ (limit case) Let $\lambda > \omega$ be a limit ordinal and assume the result holds for all $\beta$ such that $\omega \leq \beta < \lambda.$ Then we have $\sideset{_{}^{\lambda}}{}\alpha \; = \; \sup \left\{\sideset{_{}^{\beta}}{}{\alpha}: \; \omega \leq \beta < \lambda \right\} \; = \; \sup \left\{{\alpha}^{{\alpha}^{\beta}}: \; \omega \leq \beta < \lambda \right\} \; = \; {\alpha}^{{\alpha}^{\lambda}}$ (the induction hypothesis is used in the 2nd equality), which shows that the result holds for $\beta = \lambda.$

0

I once thought of a variant of interest calculation. On Wikipedia we have something like $$ P' = P\left( 1+\frac rn\right)^{nt} $$ I imagined that a gambler has a certain sum to invest in a bank loan, and the rest of his money allows him to survive $n$ time periods. After that time he has some more money to reinvest, but with a longer time period to survive. So the effect on $P'$ returns in the exponent of the formula, and we can rewrite it as the iterable expression $$ P^{\circ (k+1)} = P\left( 1+\frac rn\right)^{n f\left(P^{\circ k}\right)} $$ with some function $f()$ applied to the $P$ in the iterated formula (say, in the gambler idea, possibly a certain percentage of $P$).

Well, this is just a bit quick-and-dirty, but surely someone can make a more accurate ansatz from this.
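A rough numerical sketch of this iteration (every constant, and the choice $f(P) = 0.01P$, are illustrative assumptions of mine, not part of the original idea):

```python
def iterate_payoff(P0, r=0.05, n=12, f=lambda p: 0.01 * p, rounds=4):
    """Iterate P_(k+1) = P0 * (1 + r/n)^(n * f(P_k)): each round, the
    payoff is compounded over a horizon that depends on the previous
    payoff, so the result feeds back into the exponent."""
    P = P0
    history = [P]
    for _ in range(rounds):
        P = P0 * (1 + r / n) ** (n * f(P))
        history.append(P)
    return history

print(iterate_payoff(100.0))
```

With these tame parameters the sequence grows only slowly, but the feedback into the exponent is exactly the structural ingredient that, pushed harder, produces tower-of-exponents growth.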

Another time I read about the physical/cosmological speculation that time might be a property of some corpuscle, and that the assumed extreme inflation of the universe after the Big Bang might be modelled by such a recoupling of the hypothetical time property (or parameter).
Well, I'm not trained in such questions, but at least this idea lingers in my mind. I would not exclude that iterated exponentiation has a meaning/occurrence in the physical world...

0

My personal take on this is that exponentiation is the critical threshold whereby you can already do everything you want to do, for the most part. A closer look at that reveals why tetration may not have much to offer.

Exponentiation symbolizes an ordered pair, as opposed to the scalar values you have with addition and multiplication. Exponentiation is similar to functions in general in that they can both be represented as ordered pairs. In lambda calculus, the standard algorithmic definition of exponentiation is $\lambda be.eb$, which is literally just defining it as a function and is by far the simplest mathematical operation in Church encoding.
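This Church-encoded definition can be run directly; the following Python sketch (helper names are mine) shows that applying the exponent numeral to the base numeral really does compute powers:

```python
# Church numerals in Python: a numeral n is a function that takes f
# and returns f composed with itself n times.
zero  = lambda f: lambda x: x
succ  = lambda n: lambda f: lambda x: f(n(f)(x))
two   = succ(succ(zero))
three = succ(two)

# The lambda-calculus term from the text, λb e. e b: exponentiation is
# literally "apply the exponent to the base".
exp = lambda b, e: e(b)

def to_int(n):
    """Decode a Church numeral by counting applications of +1."""
    return n(lambda k: k + 1)(0)

print(to_int(exp(two, three)))  # 2^3 = 8
print(to_int(exp(three, two)))  # 3^2 = 9
```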

So $a^{b^c}$ could be thought of as some nested functions $a(b(c))$; you would never look at that and wonder why it isn't equivalent to $(a(b))(c)$, as functions are very clearly non-commutative in general.

Also note that while you can't effectively carry out multiplication by adding (I mean, not really), or exponentiation by multiplying, the reverse is not true; exponentiation subsumes the lower operations. You can encode arithmetical expressions through judicious use of order of operations:

$$\large \log \log \left[\left(e^{\left(e^a\right)^{b}}\right)^{e^c}\right]=ab+c.$$

In fact, consider a regular pushdown stack with the following three operations:

  1. Replace the top value $a$ on the stack with $\log_2 a$.
  2. Replace the top two values $a,b$ on the stack with $a^b$.
  3. Push $2$ onto the stack.

I believe this allows any standard arithmetical operation you want, and therefore it's likely to support general computation as well.
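Here is a sketch of that stack machine in Python (class and method names are mine; feeding input values onto the stack is an extra assumption, since the three operations alone can only push $2$):

```python
import math

class Stack:
    """A pushdown stack limited to the three operations from the text."""
    def __init__(self, preload=None):
        self.s = list(preload or [])
    def push2(self):                 # op 3: push 2
        self.s.append(2.0)
    def pow(self):                   # op 2: replace top two a, b with a^b
        b = self.s.pop()
        a = self.s.pop()
        self.s.append(a ** b)
    def log(self):                   # op 1: replace top a with log2 a
        self.s.append(math.log2(self.s.pop()))
    def top(self):
        return self.s[-1]

# From the empty stack the ops can even reach 3:
# 4 = 2^2, 256 = 4^4, log2 256 = 8, log2 8 = 3.
m = Stack()
m.push2(); m.push2(); m.pow()    # stack: [4]
m.push2(); m.push2(); m.pow()    # stack: [4, 4]
m.pow()                          # stack: [256]
m.log(); m.log()                 # stack: [8], then [3]
print(m.top())                   # 3.0

# Multiplication via ab = log2((2^a)^b), with inputs a = 5, b = 7 fed
# onto the stack at the right moments (the feeding step is outside the
# three ops, hence an assumption of this sketch):
m2 = Stack(preload=[2.0, 5.0])   # 2, then a = 5
m2.pow()                         # stack: [2^5]
m2.s.append(7.0)                 # feed b = 7
m2.pow()                         # stack: [2^35]
m2.log()                         # stack: [35] = [5 * 7]
print(m2.top())                  # 35.0
```

All intermediate values here are exact powers of two in floating point, so the logs come out exactly; a more careful treatment would need arbitrary-precision arithmetic.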

In considering the basic operations, I think addition can be understood as combining two quantities of objects where every object is indistinguishable in every way; all you can tell is that each unit is a unit. Three of whatever plus seven of whatever else and you have ten whatevers. I figure this can be represented as a $0$-tuple, $()$, an object with no information.

Moving up to multiplication, we introduce the concept of distinctive properties, namely the prime factors involved. So multiplying two numbers can be viewed as looking through all the factors in both operands, and then lumping together (unit-style) all the factors that signify the same prime. That is why $$(2^3\cdot 5 \cdot 7^2)(5^3\cdot 7^4) = 2^{3+0} \cdot 5^{1+3} \cdot 7^{2+4}=2^3 \cdot 5^4 \cdot 7^6.$$
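This view can be checked mechanically; the sketch below (a simple trial-division factorization, names are mine) confirms that multiplying numbers adds their prime exponents:

```python
from collections import Counter

def factor(n):
    """Prime factorization of n as a Counter {prime: exponent},
    by trial division."""
    f = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

x = 2**3 * 5 * 7**2
y = 5**3 * 7**4
# Multiplying numbers = adding exponents prime-by-prime
# (Counter addition adds counts key-by-key).
assert factor(x * y) == factor(x) + factor(y)
print(dict(factor(x * y)))  # {2: 3, 5: 4, 7: 6}
```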

I figured in this way, numbers being multiplied could be treated as $1$-tuples once they're broken down to their prime atomic elements: $(i)$, where $i$ is the index of their particular prime.

And then you get to exponentiation, and I'll skip to the punchline. The new thing this one shows is order, and can be represented by a $2$-tuple, $(x,a)$. I'm not positive the $k$-tuple outlook is sound, but if it is, it's interesting that the ordering seems to just emerge as you gradually add arguments to the tuples.

And finally, this suggests why further hyperoperations like tetration and beyond don't seem to offer much practical use; any more-involved structure, be it a $3$-tuple or anything else, can be broken down and represented adequately by chains of ordered pairs, since handled appropriately, that's all you need for universal computation.

Finally, it may be tempting to say "but why can't $(x,a)$ work out to be equivalent to $(a,x)$", but that's the entire point: $(x,a)$ isn't a collection of two objects, like $2\times 3$ or $5+7$; it's a single object itself, as is $(a,x)$, and the concept of the ordered pair represents an absolutely vital step in complexity. (It occurs to me that this may be why complex math seems to be so rich.)

Trevor
  • 6,022
  • 15
  • 35
-2

https://en.wikipedia.org/wiki/Goodstein%27s_theorem

This result, Goodstein's theorem, is possibly a reason why tetration may (or may not; I'm not sure) be of some interest in at least one context.

  • This theorem has nothing to do with tetration. It is about an extremely fast-growing function, but tetration appears nowhere in it. – Peter Feb 11 '22 at 11:21