This is highly nontrivial - it's the Completeness Theorem CT ("completeness" in its name refers to the completeness of the proof system, not the completeness of any particular first-order theory). I've given a quick summary of its proof in the first part of this answer of mine.
Specifically, CT says that every consistent theory has a model. A theory $T$ is complete iff $T$ is consistent and for each $\varphi$ exactly one of $\varphi$ and $\neg\varphi$ is in $T$. Since complete theories are consistent, CT says that complete theories in particular have models.
(OK, that may not be the definition of "complete theory" you're using; the other common one is that $T$ is complete iff $T$ is consistent and for each $\varphi$ at least one of $\varphi$ and $\neg\varphi$ is *provable from* $T$, rather than literally an element of $T$. But that doesn't change the argument above.)
In case you're already familiar with the completeness theorem for propositional logic, it's worth noting that despite their similarities they are really fundamentally different - the propositional version is comparatively trivial. Similarly, the compactness theorem for propositional logic has a quick topological proof, whose idea breaks down for the first-order case.
Basically, the difference is that in propositional logic the semantics is already very close to the syntax: a "model" in the sense of propositional logic is simply an assignment of truth values to the sentences which satisfies some basic rules (e.g. it makes $A\wedge B$ true iff it makes $A$ true and it makes $B$ true). By contrast, the semantics for first-order logic is extremely complicated: a structure is much more than just its theory (whereas in the propositional case, a structure literally is its theory).
One technical way to make this precise is to observe that we can whip up a single consistent first-order sentence with no computable model (e.g. take the conjunction of a finite axiomatization of $I\Sigma_1$ with $\neg\mathrm{Con}(I\Sigma_1)$ and apply Tennenbaum's theorem, which applies to models of $I\Sigma_1$).