This is actually a rather interesting question whose answer is obvious to all cryptographers, yet I guess nobody has bothered to write it down yet.
After all, the computers that generate secret keys (not just for GPG / RSA) are deterministic machines. These deterministic machines implement well-defined routines to generate keys of a well-defined format, which are later published and serve as an "image" of the key-generating function. Note that this question extends to all ciphers, including good old symmetric encryption and message authentication codes, because there you can verify a guess for the starting value of the procedure just as easily.
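To make that last point concrete, here is a minimal Python sketch. The "cipher" is a toy keystream construction of my own invention (not any real cipher, and the 16-bit seed space is deliberately tiny): an attacker holding a known plaintext/ciphertext pair can confirm a guessed seed simply by re-running the deterministic procedure and comparing outputs.

```python
import hashlib

def toy_encrypt(seed: int, plaintext: bytes) -> bytes:
    """Derive a keystream deterministically from the seed and XOR it in."""
    key = hashlib.sha256(seed.to_bytes(16, "big")).digest()
    keystream = hashlib.sha256(key).digest()  # toy: one 32-byte block only
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

secret_seed = 4242                 # pretend this came from a badly seeded RNG
pt = b"attack at dawn"
ct = toy_encrypt(secret_seed, pt)

# With a known plaintext/ciphertext pair, every seed guess is checkable:
for guess in range(2**16):         # feasible only because the seed is tiny
    if toy_encrypt(guess, pt) == ct:
        print("recovered seed:", guess)
        break
```

The attack above only works because the seed space is small enough to enumerate; that is exactly the loophole real systems close, as discussed below.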
Now for the two parts of your question as I see it:
- The algorithm is deterministic and thus it should be possible to reproduce it
- The algorithm is well-defined and thus it should be possible to work it backwards ("backtrack it")
To understand the specific case of GPG, we need a brief overview of how the algorithm works (a Python sketch follows the list):
- Take a large random value and convert it to an integer
- Test if the integer is prime (with high probability)
- Repeat the first two steps until two primes have been found
- Multiply the primes together; the product (along with another constant) is your public key output
- The primes now form the secret key (along with a "counterpart" to the above "constant")
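Here is roughly what that looks like in code. This is a textbook-RSA sketch (standard library only, Python 3.8+ for the modular inverse via `pow`), not GPG's actual implementation; among other things it ignores the rare case where the public exponent shares a factor with $(p-1)(q-1)$.

```python
import secrets

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin test: "prime with high probability"."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2      # random witness in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                      # composite for sure
    return True

def random_prime(bits: int) -> int:
    """Take large random values until one tests prime (steps 1-3)."""
    while True:
        candidate = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

def generate_keypair(bits: int = 2048, e: int = 65537):
    p = random_prime(bits // 2)
    q = random_prime(bits // 2)
    n = p * q                                 # step 4: the public modulus
    d = pow(e, -1, (p - 1) * (q - 1))         # step 5: the secret "counterpart" to e
    return (n, e), (p, q, d)
```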
As you can see, inverting the algorithm would amount to factoring the public modulus. However, this is computationally infeasible on classical computers for sufficiently large primes ($>2^{1000}$), so you can't backtrack the algorithm.
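To see why "inverting" means "factoring", consider a deliberately tiny example. With a toy modulus, trial division recovers the primes, and from those the private exponent drops out immediately; the very same loop is hopeless for a 2048-bit modulus.

```python
def break_toy_rsa(n: int, e: int) -> int:
    """Recover the private exponent of a toy RSA key by brute-force factoring."""
    p = next(f for f in range(3, n, 2) if n % f == 0)  # hopeless for real sizes
    q = n // p
    return pow(e, -1, (p - 1) * (q - 1))               # the secret "counterpart"

# 2537 = 43 * 59 is trivially factored, so the secret key falls out at once:
print(break_toy_rsa(2537, 13))   # prints 937
```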
But what about guessing a starting value for the randomness and then trying all the options? Well, it turns out this doesn't work either: while you could try to guess the random values, each has $2^{128}$ or more possible values, so you can't enumerate them all on a classical computer within a human lifespan.
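A quick back-of-envelope calculation shows just how far out of reach that is (the guessing rate is my own assumption, and a very generous one):

```python
# Even at a generous 10^12 seed guesses per second,
# exhausting a 128-bit seed space takes ~10^19 years.
guesses_per_second = 10**12
seconds = 2**128 / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")   # ~1.1e19 years; the universe is ~1.4e10 years old
```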
Wait, didn't we establish that computers are deterministic, so why is there more than one possible value!?
Computers are indeed deterministic, but the things they use and the things that use them aren't. Computers take that external randomness and "compress" it into short byte strings, which are then used to derive the "large random values" I mentioned above. So where does this randomness come from? It can be random keystrokes by the user, network packet timing, CPU scheduling jitter, HDD seek delays (which are influenced by the air inside the drive enclosure), mouse movements by the user, and dedicated hardware random number generators fed by physically random processes (quantum events, in the spirit of the double-slit experiment).
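Conceptually, that "compression" step looks something like the sketch below. This is a drastically simplified toy, not what any real kernel does (operating systems use much more careful constructions, e.g. modern Linux extracts from its pool via a ChaCha20-based generator); it only illustrates the idea of hashing many noisy observations down into a short seed.

```python
import hashlib
import time

class ToyEntropyPool:
    """Drastically simplified entropy pool: mix noisy events, hash to a seed."""

    def __init__(self):
        self._hasher = hashlib.sha256()

    def mix(self, event: bytes) -> None:
        # Feed in unpredictable observations: keystroke timings, packet
        # arrival jitter, interrupt timestamps, mouse coordinates, ...
        self._hasher.update(event)

    def draw_seed(self) -> bytes:
        # "Compress" everything gathered so far into a short byte string.
        return self._hasher.digest()

pool = ToyEntropyPool()
for _ in range(1000):
    # Timing jitter as a stand-in entropy source (far too weak on its own!)
    pool.mix(time.perf_counter_ns().to_bytes(8, "big"))
seed = pool.draw_seed()   # 32 bytes to seed the CSPRNG that derives the
                          # "large random values" from the key-generation steps
```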