Think of it this way: those column matrices you're talking about are really just coordinates of some vector in a vector space. They're not necessarily the vector you are discussing, they're just a representation of that vector in an easier vector space.
So what are coordinates? There is a theorem in linear algebra that says that every vector space has a (Hamel) basis. In fact, most of the vector spaces you'll deal with have infinitely many possible bases. Once you've chosen one, the coordinates of a vector in that space are just the coefficients of the expansion of that vector in the basis vectors.
Let's look at an example. Consider the degree $2$ polynomial space, denoted $P_2(\Bbb R)$. This is the set of all polynomials of degree at most $2$ with real coefficients along with these definitions for addition and scalar multiplication:
Let $p_1 = a_2x^2 + a_1x+a_0$ and $p_2 = b_2x^2 + b_1x + b_0$ be two arbitrary elements of $P_2(\Bbb R)$ and let $k \in \Bbb R$. Then
$$p_1 + p_2 = (a_2 + b_2)x^2 + (a_1+b_1)x + (a_0 + b_0) \\ kp_1 = (ka_2)x^2 + (ka_1)x + (ka_0)$$
It can be proven that this is in fact a vector space over $\Bbb R$.
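If it helps to see these operations concretely, here's a small Python sketch using numpy. The coefficient-array representation is just an illustration of my own, not part of the definition:

```python
import numpy as np

# Represent an element a_2*x^2 + a_1*x + a_0 of P_2(R) as the
# coefficient array [a_0, a_1, a_2] (constant term first).

def add(p, q):
    """Vector addition in P_2(R): add coefficients termwise."""
    return p + q

def scale(k, p):
    """Scalar multiplication: multiply every coefficient by k."""
    return k * p

# Example: (2x^2 + x + 1) + (x^2 - 3) = 3x^2 + x - 2
print(add(np.array([1, 1, 2]), np.array([-3, 0, 1])))  # [-2  1  3]
```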
So first, let's choose a basis for this space. In this case there are infinitely many to choose from, but let's just pick the easiest one: $\epsilon = \{1, x, x^2\}$.
Now we'll consider some specific vectors in this space. Let $p_1 = 3x^2 -2$ and $p_2 = 3x$. The coordinates of each of these two vectors are then elements of the vector space $\Bbb R^3$ and are usually represented as column vectors. Remember, though, that coordinates are always given with respect to some set of basis vectors. If we chose a different basis, the coordinates of a given vector would generally change.
In this case
$[p_1]_\epsilon = \begin{bmatrix} -2 \\ 0 \\ 3\end{bmatrix}$ and $[p_2]_\epsilon = \begin{bmatrix} 0 \\ 3 \\ 0\end{bmatrix}$. This is because the first coordinate corresponds to the coefficient on $1$, the second to the coefficient on $x$, and the third to the coefficient on $x^2$. So because $p_1 = (-2)1 + (0)x + (3)x^2$, we get the above coordinate vector.
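To see that coordinates really do depend on the basis, here's a small numpy sketch. The second basis $\beta = \{1,\, 1+x,\, 1+x+x^2\}$ is just a hypothetical one I picked for illustration; finding the new coordinates amounts to solving a linear system:

```python
import numpy as np

# Coordinates of p1 = 3x^2 - 2 in the basis eps = {1, x, x^2}
# (constant term first):
p1_eps = np.array([-2.0, 0.0, 3.0])

# A different (hypothetical) basis beta = {1, 1+x, 1+x+x^2}.
# Its vectors, written in eps-coordinates, form the columns of B:
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Solving B @ c = p1_eps gives p1's coordinates in beta:
p1_beta = np.linalg.solve(B, p1_eps)
print(p1_beta)  # [-2. -3.  3.]  -- same vector, different coordinates
```

You can check by hand: $(-2)\cdot 1 + (-3)(1+x) + 3(1+x+x^2) = 3x^2 - 2$.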
The unfortunate thing is that first courses in linear algebra often stick almost exclusively to discussing $\Bbb R^n$, which is not a very good vector space for understanding things like coordinates or for motivating things like having different bases for your vector space. The reason is that it has too many nice properties. For instance, there is an obvious coordinate vector associated with every single element of $\Bbb R^n$ -- itself.
As for dimension: just remember that dimension is a property of a vector space, not of a vector *or even* of some arbitrary set of vectors. It has (almost) nothing to do with the number of entries in the coordinates of a vector.
The number of entries in a coordinate vector just tells you the dimension of the space you've embedded your vector in. But sometimes that's not what you care about. Sometimes you care about which subspace your vector lies in, and counting coordinates won't tell you that.
For instance, if I asked you for the dimension of $\operatorname{span}(p_1, p_2)$ (with $p_1, p_2$ as defined above), you wouldn't get the right answer by saying that each of their coordinate vectors has $3$ entries and thus the dimension of this subspace is $3$. That is wrong. The answer is actually $2$. All that the number of entries in the coordinate vectors of $p_1$ and $p_2$ tells you is that the dimension of $P_2(\Bbb R)$ is $3$ -- but we already knew that because we found a basis for it earlier.
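One way to actually compute the dimension of that span, sketched in numpy: put the coordinate vectors into a matrix and take its rank. The rank, not the number of rows, gives the dimension of the span:

```python
import numpy as np

# eps-coordinates of p1 = 3x^2 - 2 and p2 = 3x as columns of a matrix:
M = np.array([[-2.0, 0.0],
              [ 0.0, 3.0],
              [ 3.0, 0.0]])

# dim span(p1, p2) is the rank of this matrix, not the number of
# rows (which is dim P_2(R) = 3):
print(np.linalg.matrix_rank(M))  # 2
```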
Does that answer some of your questions?