
This is a question I have always wondered about:

  • Classical theorems in calculus (e.g. the Extreme Value Theorem) tell us that, for a continuous function on a closed interval, there must exist inputs at which the function attains a maximum and a minimum. The aim of the game is then to determine whether these inputs can be found analytically or numerically.
  • Now, consider a system of maximum likelihood equations.
  • In some cases, a given system of maximum likelihood equations has an "analytical" solution: we can find a general relationship expressing the parameters of the probability distribution in terms of the observed data. This is very convenient: if it can be done, then no matter what dataset we encounter in the future, we can very quickly calculate the parameters, because we have found a closed-form solution.
  • However, many times this is not possible, and we are required to solve the equations numerically (the sketch after this list contrasts the two situations).
  • Thus, it makes me wonder whether, perhaps in the future, a "cool mathematical trick" will be discovered that allows this same problem to be solved analytically.
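To make the contrast concrete, here is a minimal sketch in Python (NumPy/SciPy and the particular distributions are my own choice of illustration, not part of the question): the rate of an exponential distribution has the closed-form MLE $\hat\lambda = 1/\bar{x}$, whereas the shape parameter of a gamma distribution has no closed-form MLE and is usually found numerically.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.5, scale=1.0, size=1000)  # toy dataset

# Closed-form MLE: for an Exponential(rate) model, setting the score to
# zero gives rate_hat = 1 / mean(x), valid for any dataset.
rate_hat = 1.0 / x.mean()

# No closed form: for a Gamma(shape, scale) model the score equation for
# the shape parameter involves the digamma function and cannot be solved
# symbolically, so we maximize the profile log-likelihood numerically.
def neg_log_lik(shape):
    scale = x.mean() / shape  # MLE of the scale for a fixed shape
    return -np.sum((shape - 1) * np.log(x) - x / scale
                   - gammaln(shape) - shape * np.log(scale))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")
print("Exponential rate MLE (closed form):", rate_hat)
print("Gamma shape MLE (numerical):", res.x)
```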

Yet in most references I have read (e.g. textbooks in probability/statistics), I have never encountered a mathematical proof showing that certain systems of maximum likelihood equations fundamentally do not have closed-form solutions. There is always an implied tone that perhaps a closed-form solution exists, perhaps it doesn't, but for now we solve numerically. I have always been curious to know whether we can conclusively prove that a closed-form solution is guaranteed not to exist.

I have asked similar questions in the past (see references below) and have never been able to find an exact answer on this topic. I am now trying to reformulate my question in a more concise way:

  • For a given system of maximum likelihood equations, is it possible to mathematically prove that an "elementary solution" will never exist, no matter how much the field of mathematics ever progresses?

Thanks!

References

stats_noob
  • In general you prove that an equation has no closed-form solutions using Galois theory. This is the basis of the Abel-Ruffini theorem, which states that some fifth-degree polynomial equations have no solution in terms of radicals. A generalization is differential Galois theory, which is how you prove that some integrals have no elementary solution. If you could find an MLE equation whose solution was one of these integrals, it would not have an elementary solution. It also depends on what is allowed in an analytic solution. Sometimes you may allow integrals or infinite series. – Jack Nov 25 '23 at 02:08
  • In my previous comment I made one mistake. The likelihood equations are algebraic equations and not differential equations. Therefore if one could be made into one of the non-elementary algebraic equations such as $x^5-x-1=0$ then it would be an example without a closed form solution. – Jack Nov 25 '23 at 03:07
  • There are differences between analytic solutions (i.e. exact solutions), closed forms (which can include special functions), and elementary solutions. – Тyma Gaidash Nov 25 '23 at 13:35
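As a small illustration of the comments above (the SymPy sketch is my addition, not something from the commenters): asking a computer algebra system for the roots of $x^5 - x - 1 = 0$ yields only implicit root objects rather than radical expressions, consistent with the Abel-Ruffini theorem.

```python
import sympy as sp

x = sp.symbols('x')
quintic = x**5 - x - 1

# SymPy cannot express these roots in radicals; it returns CRootOf objects
# ("the k-th root of this polynomial"), which can only be evaluated numerically.
roots = sp.solve(quintic, x)
print(roots)                       # [CRootOf(x**5 - x - 1, 0), ...]
print([r.evalf() for r in roots])  # numerical approximations of the five roots
```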

2 Answers


In order to work on the problem, we must first clarify the terms.

We can interpret your "elementary solution" either as a closed-form solution or as a solution in the elementary functions.

Note that series solutions are not numerical either.

The elementary functions, in the sense of Liouville and Ritt, are those functions of a complex variable, defined on an open domain, that are generated from the argument by applying finitely many exponentials, logarithms and/or $\mathbb{C}$-algebraic functions.

If the given function is already in closed form (i.e. its function term is given symbolically), we can start. If not, we first have to determine a closed form for it, e.g. by interpolation (for instance with the TableCurve software).

For the algebraic functions/numbers, we have Galois theory.
For integration in the elementary functions, we have the Risch algorithm (see the sketch below).
For the elementary functions, there are a few approaches for deciding whether a given problem has solutions in the elementary functions or in the elementary numbers: [Liouville 1837, 1838], [Ritt 1925], [Ritt 1948], [Rosenlicht 1969], [Risch 1979], [Lin 1983], [Chow 1999], [Khovanskii 2014].
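For instance, here is a minimal sketch using SymPy's partial implementation of the Risch algorithm (the example $\int e^{x^2}\,dx$ is my own choice, not part of the references above): when the procedure succeeds, it either returns an elementary antiderivative or certifies that none exists.

```python
import sympy as sp
from sympy.integrals.risch import risch_integrate, NonElementaryIntegral

x = sp.symbols('x')

# For exp(x**2) the Risch procedure returns a NonElementaryIntegral,
# i.e. a certificate (within the implemented cases) that no elementary
# antiderivative exists.
result = risch_integrate(sp.exp(x**2), x)
print(result)                                     # unevaluated Integral(exp(x**2), x)
print(isinstance(result, NonElementaryIntegral))  # True
```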

Similar approaches exist for certain special functions, e.g. the Liouvillian functions.

But decision procedures can only answer whether solutions exist in a given class of functions/numbers.

You can declare an arbitrary set of functions/numbers to be "closed form", and you can then decide whether your problem has solutions in that set.

Therefore it is not possible to prove that a mathematical problem is solvable only numerically, no matter how much the field of mathematics ever progresses.

IV_

No, it will never be possible to prove that.

You can always invent new branches of mathematics to express the problem in.

Sometimes an alternative representation is even just another name for a solution.

What is the solution to $x^2+1 = 0$?

In school, when we learn about complex numbers, we may be encouraged to answer $x = \pm \sqrt{-1} = \pm i$.

But the solution can also be represented, for example, as a rotation by 90 degrees in 2D or 3D.
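For example (a worked illustration I am adding here): the matrix of a 90-degree rotation of the plane satisfies the same equation,

$$J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad J^2 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = -I,$$

so $J$ "solves" $x^2 + 1 = 0$ in the ring of $2\times 2$ real matrices, without any reference to $\sqrt{-1}$.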

It is also often useful to be able to express solutions themselves as equations.

For example: the solutions are given by all functions $f$ which solve $f''(x) + f(x) = 0$.

Perhaps we can solve this particular equation in "elementary" functions, but you can easily find or write down harder differential equations which have no solutions in elementary functions (see the sketch below). Sometimes the more powerful way to express a function is the more desirable one, and not the other way around!
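As a sketch of that contrast (the SymPy example is my addition to this answer): $f'' + f = 0$ has elementary solutions, while the Airy equation $f'' - x f = 0$ is solved only in terms of special functions.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# Elementary solutions: sines and cosines.
print(sp.dsolve(f(x).diff(x, 2) + f(x), f(x)))
# Eq(f(x), C1*sin(x) + C2*cos(x))

# No elementary solutions: the Airy equation is solved in terms of the
# special functions Ai and Bi.
print(sp.dsolve(f(x).diff(x, 2) - x*f(x), f(x)))
# Eq(f(x), C1*airyai(x) + C2*airybi(x))
```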

So the question rather sheds light on a much broader question:

What does it mean to solve a problem in mathematics?

Usually it means finding some equivalent expression that makes the solution set easier to handle or manipulate with respect to some application or deeper goal.

mathreadler