The fact that a qubit has an infinite number of allowed states can make it seem as though we could fit more than a bit inside it. However, no matter how clever our proposed encoding, Holevo's bound shows that we can never get more than one bit out. This is one of the effects that limits how densely we can pack information into a quantum processor.
Given this infinite state capability, wouldn't we then be in the position that essentially every quantum processor could be expressed as a single qubit, with every qubit in effect acting as its own quantum processor?
Suppose $B$ is some bit string of arbitrary length. We want to use this as input for some program $P$ to get the corresponding output bit string $P(B)$. It is true that we could associate $B$ with some state within the Bloch sphere, and then associate $P(B)$ with some other state. Then we could construct a unitary $U_P$ which rotates between them to reproduce the effect of the program.
As a simple example, we could use $| B \rangle = | 0 \rangle$, $| P(B) \rangle = | 1 \rangle$, and $U_P = Y$.
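To make this concrete, here is a minimal NumPy sketch (our illustration, not part of the original argument) checking that the choice $U_P = Y$ really does send $| B \rangle = | 0 \rangle$ to $| P(B) \rangle = | 1 \rangle$, up to an irrelevant global phase.

```python
import numpy as np

# The states and the choice U_P = Y come from the example above.
ket0 = np.array([1, 0], dtype=complex)   # |B>    = |0>
ket1 = np.array([0, 1], dtype=complex)   # |P(B)> = |1>
Y = np.array([[0, -1j], [1j, 0]])        # U_P = Pauli Y

out = Y @ ket0                           # Y|0> = i|1>
# Overlap with the intended output; the global phase drops out in the modulus.
print(np.abs(np.vdot(ket1, out))**2)     # 1.0
```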
If we ignore the Holevo bound for a moment, it might seem as if we've reproduced an arbitrarily large computation within a single qubit. But remember that programs aren't designed for only a single input, but for many possible inputs. And they must be able to guide each input to its corresponding output.
So let's take another possible input $B'$ and its corresponding output $P(B')$, and find some states within the Bloch sphere to encode them. Now our $U_P$ must simultaneously have the effect
$$U_P | B \rangle = | P(B) \rangle$$
$$U_P | B' \rangle = | P(B') \rangle$$
We could do this perhaps with $| B' \rangle = | + \rangle$, $| P(B') \rangle = | - \rangle$, and still use $U_P = Y$ as before.
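As a quick sanity check (again just an illustrative NumPy sketch), the same $Y$ does satisfy both constraints at once, each up to a global phase:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus  = (ket0 + ket1) / np.sqrt(2)           # |B'>    = |+>
minus = (ket0 - ket1) / np.sqrt(2)           # |P(B')> = |->
Y = np.array([[0, -1j], [1j, 0]])            # still U_P = Y

print(np.abs(np.vdot(ket1,  Y @ ket0))**2)   # 1.0 : U_P|B>  gives |P(B)>
print(np.abs(np.vdot(minus, Y @ plus))**2)   # 1.0 : U_P|B'> gives |P(B')>
```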
Now what if we add another input and its output? What states will we use? Will $Y$ still be a good choice to implement the program? Should we change the states we've chosen already to fit the new inputs and outputs in?
The more possible inputs we add, the more of these constraints we get. Hopefully it is clear that it would become increasingly difficult to make them all work: a single-qubit unitary has only a few free parameters, and it must preserve the inner products between any states it acts on. For many programs, it would be impossible.
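To make the inner product point concrete, here is an illustrative sketch (the particular input/output pair is our own choice, not from the example above). Any unitary must keep $\langle B | B' \rangle = \langle P(B) | P(B') \rangle$, and the demand below violates that, so no $U_P$ can exist for it.

```python
import numpy as np

# Suppose we demanded |0> -> |0> and |+> -> |1>. A unitary would have to keep
# the inner product <0|+> equal to <0|1>, but these differ, so no U_P exists.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

lhs = np.vdot(ket0, plus)    # <B|B'>       = 1/sqrt(2)
rhs = np.vdot(ket0, ket1)    # <P(B)|P(B')> = 0
print(np.isclose(lhs, rhs))  # False: these constraints are incompatible
```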
Let's also ignore this for a second, and suppose that a large problem that we want to solve could be squeezed into a single qubit. We are very good at simulating single qubits with standard computers. Even if numerical accuracy starts to cause problems, we can just add some more bits to make everything right again. So if we can do everything efficiently with a single qubit, we could also do it efficiently with a classical computer. Either that, or the 'compilation' process is so hard that all our computational complexity is moved to that task instead. In either case, there is no heavy lifting that we would actually need the qubit for.
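For a sense of how cheap that simulation is, here's a minimal sketch (the gate sequence is an arbitrary example of our own): a single-qubit state is just two complex numbers, and each gate is a $2 \times 2$ matrix-vector multiplication.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
Y = np.array([[0, -1j], [1j, 0]])              # Pauli Y

state = np.array([1, 0], dtype=complex)        # start in |0>
for gate in [H, Y, H, Y]:                      # any sequence of 1-qubit gates
    state = gate @ state                       # one 2x2 matrix-vector product each

print(np.abs(state)**2)                        # measurement probabilities
```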
So although a single qubit contains a lot of possible states, it is nevertheless too simple to outperform a classical computer. We can't get enough information out of it, its state space is not big enough to guide large computations from input to output, and it can be simulated classically. Many qubits are needed to overcome all of these issues.