Below I'm going to try to explain what the various ideas are. I'm not outlining how to prove any claims relating them, since I think at your stage it's more important to first get a clear sense of what exactly each thing you're trying to prove actually is.
"Syntax" and "semantics" are generally used to divide (most) concepts in basic logic into two parts - roughly, syntactic concepts are those which are about logic as strings-of-symbols (so "$\vdash$" is syntactic, as is the notion of wff), while semantic concepts are those which are about logic as describing properties of (classes of) structures (so "$\models$" is semantic, as is the notion of structure). There are of course results and concepts which straddle the two; most obviously, in a given logical system (like propositional logic or first-order logic) we will often have a notion of $\vdash$, a notion of $\models$, and completeness/soundness theorems showing that they are in fact equivalent. Although the syntax/semantics divide isn't total, it is a useful organizing idea in logic.
Now with this distinction in mind, let's look at an "idealized story" of how a logic is built:
First, we define a notion of well-formed formula (wff). A wff is simply a string of symbols with no inherent meaning; we've simply declared some strings to be "well-formed" and others to not be well-formed. At this stage, there is no notion of proof, or of satisfaction, or anything else really. We also single out at this point special wffs, called sentences, but again at this stage that's a purely formal distinction. This is on the syntactic side.
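The purely formal character of wff-hood can be made concrete in code. Here is a minimal sketch of my own (not part of any standard presentation) for a tiny propositional language with atoms $p,q,r$, negation written `~`, and conjunction written `(X&Y)`; note that the checker manipulates only strings, with no mention of truth:

```python
# A wff-checker for a toy propositional language.  "Being a wff" is a
# purely syntactic property: we only inspect the string's shape.

ATOMS = {"p", "q", "r"}

def is_wff(s: str) -> bool:
    if s in ATOMS:
        return True
    if s.startswith("~"):                    # negation: ~X is a wff iff X is
        return is_wff(s[1:])
    if s.startswith("(") and s.endswith(")"):
        # find the main connective: the "&" at parenthesis depth 1
        depth = 0
        for i, c in enumerate(s):
            if c == "(":
                depth += 1
            elif c == ")":
                depth -= 1
            elif c == "&" and depth == 1:
                return is_wff(s[1:i]) and is_wff(s[i + 1:-1])
    return False

print(is_wff("(p&~q)"))  # True
print(is_wff("p&&q"))    # False
```

Nothing here assigns meaning to `~` or `&`; the grammar alone decides which strings are in.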
Next, we define a notion of proof. That is, we define a relation "$\vdash$" between sets of wffs and individual wffs (often we restrict attention to sentences). Thinking very abstractly, all we know about $\vdash$ is that it is a subset of $\mathcal{P}(wff)\times wff$, but generally it arises as the closure of a certain set of basic relations (e.g. the sequent rules). This is also on the syntactic side.
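The "closure of basic relations" idea can be sketched directly. Below is a deliberately minimal illustration of my own (not a full proof system): formulas are strings or nested tuples, the only rule is modus ponens, and `derivable(gamma)` computes the least set containing a finite $\Gamma$ and closed under that rule. Again, truth is never mentioned:

```python
# "|-" as the closure of a basic rule.  The only rule here is modus
# ponens: from A and ('->', A, B), infer B.  For a finite gamma we can
# compute the closure by iterating until nothing new is added.

def derivable(gamma):
    derived = set(gamma)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            # f is an implication whose antecedent we already have
            if isinstance(f, tuple) and f[0] == "->" and f[1] in derived:
                if f[2] not in derived:
                    derived.add(f[2])
                    changed = True
    return derived

gamma = {"p", ("->", "p", "q"), ("->", "q", "r")}
print("r" in derivable(gamma))  # True: p and p->q give q; q and q->r give r
```

A real proof system has more rules (and axioms), but the pattern is the same: $\Gamma\vdash\varphi$ iff $\varphi$ lands in the closure.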
Having constructed the basic syntactic side of our logic, we now turn to the semantic apparatus. We define a notion of structure and a notion of satisfaction (this is "$\models$") between structures and sentences (or between structures + variable assignments and wffs). This is on the semantic side.
- At this point we also introduce a couple of useful abbreviations. Expressions of the form "$\mathcal{M}\models\Gamma$" where $\Gamma$ is a set of sentences instead of a single sentence are understood as "For all $\varphi\in \Gamma$, $\mathcal{M}\models\varphi$;" similarly, expressions of the form "$\Gamma\models\varphi$" where $\Gamma$ is a set of sentences instead of a structure are understood as abbreviations for "For all $\mathcal{M}$, if $\mathcal{M}\models\Gamma$ then $\mathcal{M}\models\varphi$." But these are merely abbreviations, not new concepts.
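In the propositional case, where a "structure" is just an assignment of truth values to the atoms, both $\models$ and the abbreviation $\Gamma\models\varphi$ can be checked by brute force. The sketch below is my own illustration (formulas as atoms or tuples, as in the earlier sketches), assuming a finite list of atoms:

```python
# Satisfaction |= in the propositional case: a "structure" v is a dict
# assigning True/False to each atom.
from itertools import product

def holds(v, phi):
    """v |= phi, with formulas as atoms (str) or tuples ('~', A), ('&', A, B)."""
    if isinstance(phi, str):
        return v[phi]
    if phi[0] == "~":
        return not holds(v, phi[1])
    if phi[0] == "&":
        return holds(v, phi[1]) and holds(v, phi[2])

def entails(gamma, phi, atoms):
    """Gamma |= phi: every assignment satisfying all of Gamma satisfies phi."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(holds(v, g) for g in gamma) and not holds(v, phi):
            return False
    return True

print(entails([("&", "p", "q")], "p", ["p", "q"]))  # True
print(entails(["p"], "q", ["p", "q"]))              # False
```

For first-order logic no such brute-force check is possible (there are infinitely many structures), which is one reason the completeness theorem is valuable.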
We can now state and (hopefully!) prove the soundness and completeness theorems, which are the left-to-right and right-to-left directions respectively of the equivalence $$\Gamma\vdash\varphi\quad\iff\quad\Gamma\models\varphi.$$ This straddles both sides of the syntax/semantics divide - that's the whole point.
- In fact, in my opinion one of the main purposes of logic is to study the ways that the line between syntax and semantics gets blurred - maybe even the purpose, if you construe it broadly enough!
This is of course not the only way these concepts can be presented, and often "$\models$" is presented before "$\vdash$" (and I strongly prefer that order); however, I think that what I've written above has the advantage of clearly putting all the syntax first and then moving on to semantics, as opposed to going syntax-semantics-syntax.
That covers everything in your question ... except for the concept of functional completeness. It's important at this point to stress that the word "completeness" here has no connection whatsoever with the word "completeness" in the context of the completeness theorem; to avoid confusion on this point, I'm going to refer to it here as "functional sufficiency" instead.
Understood very abstractly, a truth functional is anything that combines sentences (or wffs) and produces a new sentence (or wff) whose truth value in a given structure (or structure + variable assignment) depends only on the truth values of the inputs in that structure (or structure + variable assignment). Basically, a truth functional is a truth table.
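The slogan "a truth functional is a truth table" can be made literal: an $n$-ary truth functional is fully specified by its value on each of the $2^n$ possible input rows. A minimal sketch of my own:

```python
# An n-ary truth functional, tabulated: a dict from each row of inputs
# to the output value.  Two connectives with the same table are, from
# the truth-functional point of view, the same connective.
from itertools import product

def truth_table(f, n):
    return {row: f(*row) for row in product([False, True], repeat=n)}

AND = lambda p, q: p and q
print(truth_table(AND, 2))
# {(False, False): False, (False, True): False, (True, False): False, (True, True): True}
```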
- And here we hit a crucial point: this notion of truth functionality only makes sense in light of $\models$, since it uses the notion of "truth-in-a-structure." Really, we should say that a set of truth functionals is/isn't functionally sufficient with respect to a given notion of $\models$; I think I was very unclear on this point in my previous responses to your question, so I want to highlight it here.
For example, "AND" ("$\wedge$") is a truth functional: $p\wedge q$ is true in a given structure + variable assignment iff both $p$ and $q$ are true in that structure + variable assignment. By contrast, the operator "SPLURG" which takes a wff $p$ and outputs the wff $p$ if $p$ has length $\le 6$ and $\neg p$ if $p$ has length $>6$ is not a truth functional: if $a$ is an atomic proposition, we have $SPLURG(a)=a$ but $SPLURG(\neg\neg\neg\neg\neg\neg a)=\neg\neg\neg\neg\neg\neg\neg a$, which is equivalent to $\neg a$ - even though $a$ and $\neg\neg\neg\neg\neg\neg a$ are equivalent!
In the context of a specific logical system - and in fact all we need is the notion of wff and the notion of $\models$ (so "$\vdash$" isn't a priori relevant here) - a set $A$ of truth functionals is functionally sufficient if every truth functional can be written as a composition of elements of $A$.
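For the binary fragment, functional sufficiency can even be verified mechanically. The sketch below (my own illustration, restricted to binary truth functionals) represents each truth functional by its table on the four input rows, starts from the two projections $p$ and $q$, and closes under pointwise NAND; since all $16$ binary tables are reached, $\{NAND\}$ is functionally sufficient for this fragment:

```python
# Functional sufficiency of {NAND}, binary case, by brute-force closure.
# A binary truth functional is a 4-tuple: its outputs on the rows
# (F,F), (F,T), (T,F), (T,T).
from itertools import product

ROWS = list(product([False, True], repeat=2))

# start from the projections: the truth functionals "p" and "q"
tables = {tuple(p for p, q in ROWS), tuple(q for p, q in ROWS)}

changed = True
while changed:
    changed = False
    for a in list(tables):
        for b in list(tables):
            # compose: apply NAND pointwise to the tables a and b
            c = tuple(not (x and y) for x, y in zip(a, b))
            if c not in tables:
                tables.add(c)
                changed = True

print(len(tables))  # 16: every binary truth functional is reachable from NAND
```

Swapping NAND for, say, AND alone and rerunning the closure yields fewer than $16$ tables, which is exactly what "not functionally sufficient" means in this setting.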
It should be clear now why functional sufficiency and completeness/soundness are unrelated: the notion "$\vdash$" doesn't even appear in my explanation of functional sufficiency above!