Checking for smoothness can be computationally expensive, depending on the size of the "small" primes (there is no "natural" definition of "small", one has to define an arbitrary limit). Also, it is not really useful.
The need for non-smooth integers comes from the $p-1$ factorization method. Let $n = pq$ be an RSA modulus that we wish to factor. Now suppose that $x$ is an integer which is a multiple of $p-1$, but not of $q-1$. For an integer $v$ relatively prime to $n$ (e.g. $v = 2$), this means that $v^x = 1 \mod p$ but (in all likelihood) $v^x \neq 1 \mod q$. By the Chinese Remainder Theorem, this implies that $w = v^x - 1 \mod n$ is a multiple of $p$ but not of $q$. A simple GCD computation between $w$ and $n$ then reveals $p$, a prime factor of $n$.
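To see the principle in action, here is a tiny Python sketch; the primes, the base $v$ and the exponent $x$ are made-up toy values, nowhere near RSA sizes:

```python
from math import gcd

# Toy parameters, far too small for real RSA; chosen so the arithmetic is visible.
p, q = 101, 103             # p - 1 = 100 = 2^2 * 5^2,  q - 1 = 102 = 2 * 3 * 17
n = p * q                   # 10403

x = 300                     # a multiple of p - 1, but not of q - 1
v = 2
w = (pow(v, x, n) - 1) % n  # w = v^x - 1 mod n: a multiple of p, but not of q

print(gcd(w, n))            # prints 101, i.e. the secret factor p
```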
So the $p-1$ factorization method is about assuming that $p$ (or $q$) is such that $p-1$ is $B$-smooth for some limit $B$. Then things go thus (a code sketch follows the list):
- Set $w \leftarrow 2$.
- For all primes $r$ lower than $B$, compute $w \leftarrow w^{r^{z_r}} \mod n$, where $z_r = \lfloor \frac{\log B}{\log r} \rfloor$.
- Compute the GCD of $w - 1$ and $n$.
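Here is a minimal Python sketch of these three steps; the target modulus and the bound $B$ are toy values chosen for illustration, and the `primes_up_to` helper is just a small sieve written for the occasion:

```python
from math import gcd, log, floor

def primes_up_to(B):
    """Sieve of Eratosthenes; fine for the small bounds used here."""
    sieve = bytearray([1]) * (B + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(B ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, B + 1, i)))
    return [i for i in range(2, B + 1) if sieve[i]]

def pollard_p_minus_1(n, B):
    """Stage 1 of the p-1 method: returns a non-trivial factor of n, or None
    if no prime factor p with a B-smooth p-1 was caught."""
    w = 2
    for r in primes_up_to(B):
        z_r = floor(log(B) / log(r))          # largest z with r^z <= B
        w = pow(w, r ** z_r, n)
    d = gcd(w - 1, n)
    return d if 1 < d < n else None

# Toy example: 1009 - 1 = 2^4 * 3^2 * 7 is 100-smooth, so B = 100 suffices.
print(pollard_p_minus_1(1009 * 3643, 100))    # expected to print 1009
```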
One way to see this method is the following: when we compute modulo $n$, by the CRT, we are actually computing modulo $p$ and modulo $q$ simultaneously. We do multiplications, so we work with two multiplicative groups, the non-zero integers modulo $p$ and the non-zero integers modulo $q$. The goal of the attacker is to find a multiple of the order of one of these two groups. The two groups have orders $p-1$ and $q-1$, respectively. If one of these two orders is $B$-smooth, the algorithm above will work with high probability (there are a few pathological cases where this does not work, due to the condition on $z_r$, but success probability is close to $1-(1/B)$).
This explains the prohibition against smooth $p-1$ and $q-1$. Note, though, that this depends on $B$: $B$ represents the effort that the attacker is ready to invest in the computation. A larger $B$ implies a more expensive attack, but also a more expensive test of non-smoothness, because the best known method for testing $B$-smoothness is trying to divide by all primes lower than $B$: this is somewhat faster than running the $p-1$ factorization method itself, but not by much. So to defeat an attacker ready to invest one month's worth of computation with a dozen PCs, we would have to run, ourselves, a check which takes several hours on one PC.
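For concreteness, here is what such a non-smoothness check boils down to; the candidate prime and the bound are arbitrary illustrative choices:

```python
def is_B_smooth(m, B):
    """Trial division up to B: strip every factor not exceeding B and
    see whether anything is left over."""
    for d in range(2, B + 1):
        while m % d == 0:
            m //= d
        if m == 1:
            return True
    return False

# Illustrative use during key generation: reject a candidate p when p - 1 is
# B-smooth for the attacker budget B one has decided to worry about.
p = 2 ** 31 - 1                     # toy candidate (a known prime), not RSA-sized
print(is_B_smooth(p - 1, 1 << 20))  # True: this candidate would be rejected
```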
Checking for smoothness is not useful because of the Elliptic Curve Method. ECM is an extension of the $p-1$ method. We again compute things modulo $n$, so that we are working, through the CRT, with two groups, one using integers modulo $p$, the other modulo $q$; and we still hope to hit an $x$ which is a multiple of the order of one of the groups.
The groups, in ECM, are randomly chosen elliptic curves. The group law (usually denoted with an addition sign, but that's purely conventional) is weird but can be computed reasonably efficiently through a handful of operations modulo $n$. The interesting point is that when working with integers modulo $p$ (a prime), the order of an elliptic curve modulo $p$ is an integer $f = p+1-t$ where $t$ is called the "trace" and Hasse's theorem states that $|t| \leq 2\sqrt{p}$. So the order of a curve modulo $p$ is "close" to $p$, but still lies in a relatively large range surrounding $p$, and it so happens that a randomly chosen elliptic curve will have an order distributed fairly uniformly in that range.
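To make the order statement concrete, here is a naive point count on a small, made-up curve, checking that the trace satisfies Hasse's bound; real implementations never count points this way, this is purely illustrative:

```python
def curve_order(a, b, p):
    """Count points on y^2 = x^3 + a*x + b over GF(p) by brute force
    (including the point at infinity). Only viable for tiny p."""
    count = 1                                 # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        if rhs == 0:
            count += 1                        # single point with y = 0
        elif pow(rhs, (p - 1) // 2, p) == 1:  # Euler's criterion: rhs is a square
            count += 2                        # two points, (x, y) and (x, -y)
    return count

p, a, b = 1013, 5, 7                          # made-up small parameters
f = curve_order(a, b, p)
t = p + 1 - f                                 # the trace
print(f, t, t * t <= 4 * p)                   # Hasse: |t| <= 2*sqrt(p), i.e. t^2 <= 4p
```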
So the ECM method works by selecting a random curve modulo $n$ (which implicitly selects two random sub-curves, one modulo $p$ and one modulo $q$), assuming that the order of one of these curves is $B$-smooth, and running the same algorithm as for the $p-1$ method, except that we use curve point additions instead of multiplications. If the algorithm fails (the final GCD yields 1), then we can start again with a new random curve.
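Here is a bare-bones sketch of that idea, assuming short Weierstrass curves and "stage 1" only (real ECM implementations use better curve models and a stage 2; the toy modulus, the bound and the number of curves are illustrative choices). All arithmetic is done modulo $n$ as if $n$ were prime, and the moment a modular inversion fails, the offending GCD is a factor of $n$:

```python
from math import gcd, log, floor
from random import randrange

class FactorFound(Exception):
    """Raised when a modular inverse fails, exposing a factor of n."""
    def __init__(self, d):
        self.d = d

def inv_mod(x, n):
    # If gcd(x, n) > 1, the inverse does not exist -- and that gcd is a factor.
    d = gcd(x % n, n)
    if d != 1:
        raise FactorFound(d)
    return pow(x, -1, n)

def ec_add(P, Q, a, n):
    """Add points of y^2 = x^3 + a*x + b, computing modulo n; None is infinity."""
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None                               # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, n) % n
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

def ec_mul(k, P, a, n):
    """Double-and-add scalar multiplication, modulo n."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def ecm_stage1(n, B, curves=20):
    primes = [m for m in range(2, B + 1)
              if all(m % d for d in range(2, int(m ** 0.5) + 1))]
    for _ in range(curves):
        # Random curve through a random point: choosing x, y, a fixes b implicitly.
        x, y, a = randrange(n), randrange(n), randrange(n)
        P = (x, y)
        try:
            for r in primes:
                P = ec_mul(r ** floor(log(B) / log(r)), P, a, n)
        except FactorFound as e:
            if e.d != n:
                return e.d                        # non-trivial factor of n
    return None

# Toy target; expected to print 1009 or 3643 (or None if all curves are unlucky).
print(ecm_stage1(1009 * 3643, 30))
```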
The ECM method works as long as a randomly selected integer in the range around $p$ (of width $4\sqrt{p}$) has a non-negligible probability of being $B$-smooth for some value $B$ such that the "for all primes lower than $B$" step is computationally feasible. In other words, the RSA modulus will be secure only if the probability that a random integer of the size of $p$ (half the target size of the modulus) is $B$-smooth is low enough.
Thus, due to ECM, we already rely on the non-smoothness of random integers. It therefore makes little sense to spend CPU cycles on checking $p-1$ specifically: even if we carefully avoid $p$ such that $p-1$ is $B$-smooth, this changes nothing to the probability that a random integer of the size of $p$ is $B$-smooth. We still need that probability to be negligible, which is achieved by using large enough primes.
Probability of smoothness can be estimated with the Dickman-de Bruijn function. But, given the discussion above, it is more practical to look at factorization records with ECM. Efficiency of ECM largely depends on the size of the smallest prime factor of the attacked integer -- that's the factor which ECM finds. The largest factor ever found through ECM is close to $1.8\cdot10^{73}$, i.e. a 244-bit integer. We can thus estimate that 244-bit integers have a probability of smoothness which is high enough for ECM to work, but that larger integers do not. For a 1024-bit RSA key, we use 512-bit prime factors, more than twice that record size. So we can say with some confidence that a random 512-bit integer is sufficiently non-smooth with overwhelming probability, thus "secure enough" for RSA. No need to test.
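As a rough numerical sanity check, the first-order approximation $\rho(u) \approx u^{-u}$ (with $u = \ln x / \ln B$) of the Dickman-de Bruijn function already shows how quickly the smoothness probability collapses; the bounds below are illustrative, not recommendations:

```python
from math import log

def rough_smooth_prob(bits, B):
    """Crude estimate of the probability that a random 'bits'-bit integer is
    B-smooth, using the first-order approximation rho(u) ~ u^-u with
    u = ln(x) / ln(B)."""
    u = bits * log(2) / log(B)
    return u ** -u

# Illustrative smoothness bounds B, for 512-bit prime candidates:
for k in (30, 40, 50):
    print(f"B = 2^{k}: ~{rough_smooth_prob(512, 2 ** k):.1e}")
```

Even with such crude estimates and generous bounds, a single random 512-bit integer is essentially never $B$-smooth, which is exactly the property the security of the modulus already relies on.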