The main misconception here is about which part of the RSA problem is actually hard to compute.
Your reasoning goes like this:
- We have $e$ and $n$.
- We know $ed \equiv 1 \pmod{\phi(n)}$.
- So we should be able to calculate $d$.
Your reasoning is exactly what happens in the key generation algorithm. Division in modular arithmetic behaves just like division of rationals, except that the elements are not fractions but integers with the same property (an inverse element), provided the inverse exists (that is why we require $\gcd(e,\phi(n))=1$).
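To make this concrete, here is a minimal Python sketch of the key-generation step with deliberately tiny, insecure textbook primes; the concrete values of $p$, $q$ and $e$ are just illustrative.

```python
# Minimal, insecure toy example of RSA key generation (Python 3.8+).
p, q = 61, 53                  # toy primes -- real RSA uses primes of ~1024 bits or more
n = p * q                      # public modulus, n = 3233
phi = (p - 1) * (q - 1)        # phi(n) is easy to compute *because* we know p and q

e = 17                         # public exponent, requires gcd(e, phi) == 1
d = pow(e, -1, phi)            # modular inverse, i.e. e*d ≡ 1 (mod phi(n)); here d = 2753

assert (e * d) % phi == 1

# Sanity check: encrypting and then decrypting a message gives it back.
m = 42
assert pow(pow(m, e, n), d, n) == m
```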
So where is the computational problem hidden, and what error was made?
The problem is that computing $\phi(n)$ is easy if and only if the prime factors of $n$ are known. In fact, from $n$ and $\phi(n)$ you can compute the factorization of $n$ directly (see the sketch at the end of this answer). You asked in the comments:
> You can solve for phi(n) by doing a quick computation in Wolfram Alpha and it tells you the totient or phi of n. How is that hard?
There is your problem. This "quick computation" scales super-polynomially in the bit length of $n$, or even exponentially if no sub-exponential factoring algorithm is used. It might be "easy" for small integers, but factoring numbers between 10 and 100 is also easy and can even be done without a computer.
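To see the scaling concretely, here is a naive trial-division factorizer. Real factoring algorithms (e.g. the general number field sieve) are far better than this sketch, but they are still super-polynomial in the bit length of $n$.

```python
from math import isqrt

def trial_division(n):
    """Naive factoring: about sqrt(n) steps, i.e. exponential in the bit length of n."""
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    return None  # n is prime

print(trial_division(3233))    # (53, 61) -- instant for a 12-bit modulus
# For a 2048-bit RSA modulus this loop would need on the order of 2^1024 steps.
```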
By the way, if you know $e$ and $d$ instead of $\phi(n)$, you can also compute the prime factors of $n$ in polynomial time. This is shown in Alexander May's paper "Computing the RSA Secret Key is Deterministic Polynomial Time Equivalent to Factoring" (2004). To see what this result means: if you know $e$ and $d$, then you can also compute $e\cdot d$ over $\mathbb{Z}$. We don't know $\phi(n)$, but we know that $ed = 1 + k\cdot\phi(n)$ and that $ed < n^2$. If $k$ is small, recovering $\phi(n)$ is really easy; if $k$ is close to $n$, it is harder.
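As a sketch of the easy case (this is the elementary observation, not May's deterministic algorithm): since $d < \phi(n)$, the multiplier $k$ in $ed = 1 + k\phi(n)$ satisfies $k < e$, so for a small public exponent you can simply try every candidate $k$. The helper name and the toy key (reused from the sketch above) are only illustrative, and the plausibility check assumes balanced primes.

```python
from math import isqrt

def phi_from_e_d(n, e, d):
    """Recover phi(n) from (n, e, d) by brute-forcing k in e*d = 1 + k*phi(n)."""
    ed1 = e * d - 1
    for k in range(1, e):               # k < e because d < phi(n)
        if ed1 % k == 0:
            phi = ed1 // k
            # plausibility check: n - phi(n) = p + q - 1 is about 2*sqrt(n)
            # when p and q have similar size
            if 0 < n - phi < 3 * isqrt(n):
                return phi
    return None

n, e, d = 3233, 17, 2753                # toy key from above
print(phi_from_e_d(n, e, d))            # 3120 = (61 - 1) * (53 - 1)
```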
Anyway, the hardness of the RSA problem is not based on calculating $d$ from $e$ and $\phi(n)$. It is hard because $n$ is hard to factor, and (for an RSA modulus) computing the factorization from $\phi(n)$ and $n$ takes only a handful of arithmetic operations, as the sketch below shows.
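Here is that last step as a short sketch (same toy numbers as above): from $n$ and $\phi(n)$ the factors are the roots of a simple quadratic, because $p + q = n - \phi(n) + 1$ and $pq = n$.

```python
from math import isqrt

def factor_from_phi(n, phi):
    """Recover p and q from n = p*q and phi = (p-1)*(q-1)."""
    s = n - phi + 1                     # p + q
    disc = s * s - 4 * n                # (p - q)^2
    root = isqrt(disc)
    assert root * root == disc          # must be a perfect square for consistent inputs
    return (s + root) // 2, (s - root) // 2

print(factor_from_phi(3233, 3120))      # (61, 53)
```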