For question (1):
A subalgebra of a (semisimple) Lie algebra is called toral if it is abelian and all its elements are $ad$-semisimple.
Lemma 1: If $\mathfrak g$ is a semisimple Lie algebra over a field of characteristic $0$, then a subalgebra $\mathfrak h$ of $\mathfrak g$ is a Cartan subalgebra if and only if it is maximal toral (i.e. it is toral, and no subalgebra which properly contains $\mathfrak h$ is toral).
Proof: See "Equivalence of Two Cartan Subalgebra Definitions in Semi-Simple Lie Algebra" and "If X commutes with all elements of the Cartan subalgebra, then X is in the Cartan Subalgebra?". Confer also "Are there common inequivalent definitions of Cartan subalgebra of a real Lie algebra?".
Fact 2 (alluded to in https://math.stackexchange.com/a/3820346/96384): For $n \ge 2$, every element of the Lie algebra $\mathfrak{su}(n)$ is $ad$-semisimple. Equivalently, the only nilpotent element of $\mathfrak{su}(n)$ is zero. Equivalently, the only element of $\mathfrak{su}(n)$ whose adjoint action is diagonalisable over the ground field $\mathbb R$ is zero.
(Try to show this fact and/or the equivalences by hand. Such Lie algebras are called "anisotropic". For the ground field $\mathbb R$, this is the same as what is commonly called "compact". An analogous statement on the Lie group level is in "Compact semisimple Lie groups contain no nontrivial unipotent elements?".)
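If one wants to see Fact 2 numerically rather than prove it, here is a quick sanity check (a sketch in Python/NumPy, not part of the argument): a traceless skew-Hermitian matrix is normal, hence unitarily diagonalisable with purely imaginary eigenvalues; in particular a nilpotent element of $\mathfrak{su}(n)$ would be a normal nilpotent matrix, which is zero, and a diagonalisable $x$ has diagonalisable $\operatorname{ad}(x)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random element of su(n): skew-Hermitian and traceless.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = (A - A.conj().T) / 2
x -= (np.trace(x) / n) * np.eye(n)

# x is normal (x x* = x* x), hence unitarily diagonalisable ...
print(np.allclose(x @ x.conj().T, x.conj().T @ x))        # True

# ... with purely imaginary eigenvalues:
print(np.allclose(np.linalg.eigvals(x).real, 0))          # True

# Consequence: a nilpotent element of su(n) is a normal nilpotent matrix,
# i.e. diagonalisable with all eigenvalues 0, hence the zero matrix.
```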
Conclusion: For every element $0\neq x \in \mathfrak{su}(4)$ there exist elements $x', x''$ such that $x, x', x''$ span a Cartan subalgebra of $\mathfrak{su}(4)$.
Proof: The one-dimensional subalgebra spanned by $x$ is toral because of Fact 2. Every toral subalgebra is contained in a maximal toral subalgebra (just enlarge it as long as you can; dimensions are bounded), i.e. (by Lemma 1) in a Cartan subalgebra. Finally, Cartan subalgebras of $\mathfrak{su}(4)$ are three-dimensional: they are all conjugate to each other, and the standard diagonal one written out below has dimension $3$.
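For instance, the diagonal matrices in $\mathfrak{su}(4)$,
$$\mathfrak h = \{ \operatorname{diag}(ia_1, ia_2, ia_3, ia_4) : a_j \in \mathbb R,\ a_1+a_2+a_3+a_4 = 0 \},$$
form an abelian subalgebra consisting of semisimple elements, and nothing strictly bigger is abelian (anything commuting with all of $\mathfrak h$ is itself diagonal); so $\mathfrak h$ is a Cartan subalgebra, visibly of dimension $3$.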
This translates directly into your statement for any concrete representation by matrices, whatever form those matrices take.
I am not entirely sure what you mean in question (2), but as said in a comment, it is known that every simple Lie algebra can be generated by two elements. In particular, for each simple Lie algebra contained in $\mathfrak{su}(4)$ (of which there are plenty, of various dimensions, including of course all of $\mathfrak{su}(4)$ itself), one can find two elements $x, x' \in \mathfrak{su}(4)$ which generate that subalgebra (and one could then just choose any third element $x''$ contained in it). If one is allowed to pick a third element as well, there are obviously many more subalgebras one can generate. I find it hard to imagine a subalgebra of $\mathfrak{su}(4)$ which cannot be generated by three elements.
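To see the two-generator statement in action, here is a small numerical sketch (Python/NumPy; the helper `lie_closure` and the particular generators $H$ and $E$ are my own ad hoc choices for illustration, nothing canonical): it computes the smallest subspace containing the two given matrices that is closed under the commutator, and prints its dimension, which for this pair comes out as $15 = \dim \mathfrak{su}(4)$.

```python
import numpy as np

def bracket(a, b):
    return a @ b - b @ a

def flat(m):
    # real coordinate vector of a complex matrix, for rank computations
    return np.concatenate([m.real.ravel(), m.imag.ravel()])

def lie_closure(gens):
    """A basis (list of matrices) of the Lie algebra generated by `gens`."""
    basis = []
    def try_add(m):
        rows = [flat(b) for b in basis] + [flat(m)]
        if np.linalg.matrix_rank(np.array(rows)) == len(rows):
            basis.append(m)
            return True
        return False
    queue = list(gens)
    while queue:
        m = queue.pop()
        if try_add(m):
            # bracket the new basis element against all previous ones
            queue.extend(bracket(m, b) for b in basis[:-1])
    return basis

# Two ad hoc elements of su(4):
H = 1j * np.diag([1.0, 2.0, 4.0, -7.0])      # a regular diagonal element
E = np.zeros((4, 4), dtype=complex)
for j in range(3):                            # sum of three "rotation" blocks
    E[j, j + 1], E[j + 1, j] = 1, -1

print(len(lie_closure([H, E])))               # 15 = dim su(4): H and E generate all of it
```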
But then again, as said, I am not sure whether that is what you are asking. In particular, I do not understand what you mean by finding "the minimal subalgebra" and "the explicit form" of the third element.
Added in response to comment: In principle it is straightforward to find elements $x', x''$ for a given $x$ as above: because, as said, every non-zero element of our Lie algebra is semisimple, just compute the centralizer of $x$ and pick an element $x'$ of it which is not a scalar multiple of $x$. Next, compute the centralizer of $x'$ and intersect it with the centralizer of $x$ computed before; by general theory (the commuting semisimple elements $x$ and $x'$ lie in a common Cartan subalgebra, which is contained in both centralizers), this intersection still contains elements not lying in the span of $x$ and $x'$; take one of them and call it $x''$. (I am not saying the computations of centralizers are easy, but they are doable.)
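For what it is worth, here is a sketch of how one could carry out this recipe numerically (Python/NumPy; the helpers `su_basis`, `centralizer`, `pick_independent` are my own names, and the starting element is, for concreteness, the $B=\operatorname{diag}(i,-i,0,0)$ that appears again below). The centralizer of $x$ is computed as the kernel of the linear map $y \mapsto [x,y]$ on $\mathfrak{su}(4)$.

```python
import numpy as np

def su_basis(n):
    """A real basis of su(n): traceless skew-Hermitian n-by-n matrices."""
    basis = []
    for j in range(n):
        for k in range(j + 1, n):
            A = np.zeros((n, n), dtype=complex)
            A[j, k], A[k, j] = 1, -1                  # E_jk - E_kj
            S = np.zeros((n, n), dtype=complex)
            S[j, k] = S[k, j] = 1j                    # i (E_jk + E_kj)
            basis += [A, S]
    for j in range(n - 1):                            # traceless diagonal part
        D = np.zeros((n, n), dtype=complex)
        D[j, j], D[j + 1, j + 1] = 1j, -1j
        basis.append(D)
    return basis                                      # n^2 - 1 matrices

def commutator_map(x, basis):
    """Real matrix of y -> [x, y], domain coordinates taken w.r.t. `basis`."""
    cols = []
    for b in basis:
        c = x @ b - b @ x
        cols.append(np.concatenate([c.real.ravel(), c.imag.ravel()]))
    return np.array(cols).T

def kernel(M, tol=1e-10):
    """Orthonormal basis (as columns) of the null space of M, via SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vt[rank:].T

def centralizer(xs, basis):
    """Matrices spanning the joint centralizer of the matrices in `xs`."""
    K = kernel(np.vstack([commutator_map(x, basis) for x in xs]))
    return [sum(c * b for c, b in zip(col, basis)) for col in K.T]

def pick_independent(candidates, given):
    """Return the first candidate not lying in the real span of `given`."""
    flat = lambda m: np.concatenate([m.real.ravel(), m.imag.ravel()])
    for c in candidates:
        rows = [flat(g) for g in given] + [flat(c)]
        if np.linalg.matrix_rank(np.array(rows)) == len(rows):
            return c
    raise ValueError("no independent candidate found")

basis = su_basis(4)
x = np.diag([1j, -1j, 0, 0])                                 # the given element
x1 = pick_independent(centralizer([x], basis), [x])          # an x' commuting with x
x2 = pick_independent(centralizer([x, x1], basis), [x, x1])  # an x'' commuting with x and x'

# x, x1, x2 are linearly independent, pairwise commuting and (by Fact 2)
# semisimple, hence they span a Cartan subalgebra of su(4):
print(np.allclose(x @ x1 - x1 @ x, 0),
      np.allclose(x @ x2 - x2 @ x, 0),
      np.allclose(x1 @ x2 - x2 @ x1, 0))
```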
Maybe you should clarify for yourself here that, starting with one element $x$, you can certainly find elements $x', x''$ such that $x, x', x''$ form a basis of a CSA, but in general this choice is far from unique. Even in the "most unique" case of $x$ being a regular element, i.e. one that is contained in a unique CSA, once you have found such $x', x''$ you can obviously replace them by linear combinations of $x, x', x''$, as long as the resulting space is still three-dimensional.
This is e.g. the case for $B = \operatorname{diag}(2i,i,-i,-2i)$: its four eigenvalues are pairwise distinct, which is exactly what regularity means here. (Also note that I use standard math notation; for me, $\mathfrak{su}(4)$ consists of traceless skew-Hermitian matrices. To get physics notation, divide everything by $i$.) The unique CSA containing this $B$ is the diagonal one, so $B'$ and $B''$ have to be diagonal as well; e.g. two linearly independent elements of the space $\{ \operatorname{diag}(ia, -ia,ib,-ib) : a,b \in \mathbb R \}$ will do.
But if your first element is e.g. $B=\operatorname{diag}(i,-i,0,0)$, which is not regular (the eigenvalue $0$ is repeated), you can also choose $B' = \pmatrix{0&0&0&0\\0&0&0&0\\0&0&0&1\\0&0&-1&0}$, a non-diagonal matrix, and then e.g. $B'' = \operatorname{diag}(0,2i,-i,-i)$. (Some centralizers are easily computed.)
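If one wants a machine double-check of this particular triple, here is a quick verification (again Python/NumPy, purely illustrative) that $B, B', B''$ are traceless, skew-Hermitian and pairwise commuting; since they are obviously linearly independent, by Fact 2 and the rank count above this is all one needs to conclude that they span a CSA.

```python
import numpy as np

B = np.diag([1j, -1j, 0, 0])
Bp = np.zeros((4, 4))
Bp[2, 3], Bp[3, 2] = 1, -1                     # the rotation block in the last two rows/columns
Bpp = np.diag([0, 2j, -1j, -1j])

for M in (B, Bp, Bpp):
    assert np.isclose(np.trace(M), 0)          # traceless
    assert np.allclose(M, -M.conj().T)         # skew-Hermitian
for X, Y in [(B, Bp), (B, Bpp), (Bp, Bpp)]:
    assert np.allclose(X @ Y - Y @ X, 0)       # pairwise commuting
print("all checks passed")
```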