I'll start with an example that is easy to run in parallel: the elliptic curve method (ECM). In this method, we choose a random elliptic curve $E$ and run the analogue of Pollard's $p-1$ method in the group of points of $E$, up to a certain bound. The idea is that with some probability $\alpha$, the attempt finds a factor within $T$ iterations. We choose the parameters to minimize the expected cost $T/\alpha$; the end result is that $\alpha$ is quite small.
Here is how you would run ECM in parallel. Each processor (or core) runs the attempt just described on its own random curve, until one of them succeeds in factoring the integer. Having $N \ll 1/\alpha$ processors (or cores) cuts the expected running time by a factor of roughly $N$. An algorithm that admits such an approach is called embarrassingly parallel.
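As a minimal sketch of this "first success wins" pattern, here is a toy randomized trial (random-gcd attempts) standing in for a real ECM attempt; the names `trial` and `parallel_factor` are illustrative, not standard. Threads suffice for a sketch, though a real run would use separate processes or machines, since CPython threads share one interpreter lock.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def trial(n, seed, attempts=50):
    """One independent attempt with its own random seed.

    A stand-in for one ECM attempt: it succeeds with some small
    probability alpha, here by hitting a random a sharing a factor with n.
    """
    rng = random.Random(seed)
    for _ in range(attempts):
        a = rng.randrange(2, n)
        g = math.gcd(a, n)
        if 1 < g < n:            # found a nontrivial factor of n
            return g
    return None                  # this attempt failed

def parallel_factor(n, workers=4):
    """Run independent attempts in parallel; any success factors n."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        for g in ex.map(lambda s: trial(n, s), range(workers)):
            if g is not None:
                return g
    return None
```

The workers never communicate; each just retries independently until one wins, which is exactly what makes the scheme embarrassingly parallel.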
A slightly different example is the quadratic sieve. In this algorithm, we need to find a certain number of smooth integers and factor them. If we aim to find $n$ such integers and there are $N$ processors available, we can assign each of them to find $n/N$ integers. A more complicated approach would be to have one central processor assign "jobs" (integers) to processors: each time a processor finishes factoring a smooth integer, it receives a new one. (In practice, since the algorithm sieves for smooth integers, each processor gets a range of candidates to sieve.) This scheme is also embarrassingly parallel.
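The static splitting can be sketched as follows; `split_ranges`, `is_smooth`, and `worker_job` are illustrative helpers, not part of any real quadratic-sieve code (which would sieve over the range rather than trial-divide each candidate).

```python
def split_ranges(lo, hi, workers):
    """Partition [lo, hi) into near-equal contiguous ranges, one per worker."""
    step, rem = divmod(hi - lo, workers)
    ranges, start = [], lo
    for i in range(workers):
        end = start + step + (1 if i < rem else 0)
        ranges.append((start, end))
        start = end
    return ranges

def is_smooth(m, B):
    """True if every prime factor of m is at most B (naive trial division)."""
    for p in range(2, B + 1):
        while m % p == 0:
            m //= p
    return m == 1

def worker_job(candidate_range, B):
    """What one processor does with its assigned range of candidates (>= 2)."""
    lo, hi = candidate_range
    return [m for m in range(lo, hi) if is_smooth(m, B)]
```

Each worker's output list can be collected by the central processor, which stops handing out ranges once $n$ smooth integers have been found in total.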
In contrast, consider plain Pollard's $p-1$ method. The algorithm is iterative in nature, and there doesn't seem to be a simple way to run it in parallel; we could perhaps parallelize the FFT-based multiplications inside the modular exponentiations using a sophisticated parallel scheme, but this is no longer embarrassingly parallel.
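For concreteness, here is a standard stage-1 sketch of Pollard's $p-1$ method; note that each new value of $a$ is computed from the previous one, which is the sequential dependency that resists naive parallelization.

```python
import math

def pollard_p_minus_1(n, B):
    """Stage 1 of Pollard's p-1: compute a = 2^(B!) mod n incrementally.

    Succeeds when some prime p | n has p - 1 composed only of small
    primes (p - 1 divides the accumulated exponent 2 * 3 * ... * j).
    """
    a = 2
    for j in range(2, B + 1):
        a = pow(a, j, n)         # a <- a^j mod n: depends on previous a
        g = math.gcd(a - 1, n)
        if 1 < g < n:            # a = 1 mod p for some prime p | n
            return g
    return None
```

For example, $299 = 13 \cdot 23$ with $13 - 1 = 12 = 2^2 \cdot 3$, so a small bound already finds the factor 13.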
At this point you should be able to answer the question on your own: determine which of the algorithms listed are embarrassingly parallel.