If I understand it correctly, an algorithm that computes the value of a real function $f$ has computational complexity $O(g(n))$ if the following holds: computing $f$ to precision $\delta = 2^{-n}$ requires on the order of $g(n)$ steps.
However, what if we have an algorithm that first "finds a more efficient algorithm to compute $f$", and then computes $f$?
In other words, what if we have an algorithm $A$ that does the following:
1. Find an efficient algorithm $B$ for computing $f$.
2. Use $B$ to compute $f$.
In that case, we can no longer speak of the computation time it would take to compute, say, $f(5)$, because it depends entirely on whether algorithm $A$ has already found algorithm $B$. In other words, the time required to compute $f(5)$ when $5$ is the first input is far greater than the time required to compute $f(5)$ after $f(3)$ has already been computed (and $B$ has therefore already been found).
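For concreteness, here is a small Python sketch of the situation I have in mind. It is only my own illustration: the class name, the `_search_for_better_algorithm` step, and the use of `math.sin` as a stand-in for both $f$ and the discovered algorithm $B$ are all made up for this example.

```python
import math
import time

class SelfOptimizingEvaluator:
    """Toy model of algorithm A: search for a faster algorithm B once,
    cache it, and use it for every subsequent evaluation of f."""

    def __init__(self):
        self.fast_f = None  # will hold the discovered algorithm B

    def _search_for_better_algorithm(self):
        # Hypothetical stand-in for the expensive "find B" phase.
        time.sleep(1.0)     # simulate a large one-time search cost
        return math.sin     # pretend this is the efficient algorithm B for f

    def evaluate(self, x):
        if self.fast_f is None:
            # First call: pay the full cost of finding B.
            self.fast_f = self._search_for_better_algorithm()
        # Later calls: only the cost of running B.
        return self.fast_f(x)

evaluator = SelfOptimizingEvaluator()
print(evaluator.evaluate(5.0))  # slow: includes the search for B
print(evaluator.evaluate(3.0))  # fast: B is already cached
```

The point of the sketch is that the cost of `evaluate(5.0)` depends on whether an earlier call has already paid for the search, which is exactly the ambiguity described above.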
My question is: is there a concept or theory about this kind of algorithm that first finds another algorithm before computing a function? Specifically, I am wondering about the analysis of the computational complexity of such algorithms.