In the comments to Rob Arthan's answer, the OP elaborated on their question:
Is there a similar Forcing type technique (that mostly keeps the interpretation of symbols same) that generates models of formal systems such that a particular statement is true in one model and the negation true in another model while maintaining internal consistency of both models?
It's this clarification which I want to address.
It's a little unclear - in particular, the phrase "that mostly keeps the interpretation of symbols same" doesn't really make sense to me - but I interpret the whole comment as asking:
Suppose $T$ is a theory and $\varphi$ is a sentence independent of $T$. Is there a method for building models of $T\cup\{\varphi\}$ and $T\cup\{\neg\varphi\}$? In particular, is there one which is similar to forcing, in that you start with a model of $T$ and "extend" it in certain ways? And finally, are there further similarities between that method and forcing?
Note that since this is a question about building models as opposed to the consistency of theories, there's a bit of extra subtlety here. (And that aspect is completely missing from the title and the body of the question, so if this is what you're asking you should definitely edit them, and if it's not what you're asking then you need to clarify your question further.)
The answer to your question, interpreted in this way, is yes. Before I leap into the details, let me give a tl;dr: you should look into Goedel's completeness theorem (that's not a typo!) and its proof via Henkinization.
OK, now (some) details:
First of all, let's forget the "base structure" and just look at the process of building a model of a consistent theory. It turns out that this can always be done, and in fact there is a specific method for doing it, called "Henkinization"! While not equivalent to forcing, it (like more advanced ideas along the same lines) does share a family resemblance to forcing: roughly speaking, these constructions build the desired object in "stages," meeting various "requirements" along the way and relying on sufficiently "generic" behavior to ensure that they work as desired. (Pinning down what all this means takes a lot of work, and forcing in particular is really, really hard; but you might be interested in these takes on forcing from an intuitive perspective.)
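To make the "stages and requirements" picture concrete, here is a minimal sketch of the Lindenbaum-completion half of the construction, done for *propositional* logic so that the consistency checks are finite; the genuinely first-order Henkin construction additionally adds a witnessing constant $c$ with $\varphi(c)$ whenever it puts in a sentence $\exists x\varphi(x)$, but the stage-by-stage shape is the same. All of the encoding and function names below are mine, purely for illustration:

```python
from itertools import product

# Formulas are nested tuples: ('atom', 'p'), ('not', f), ('and', f, g),
# ('or', f, g), ('imp', f, g).  This encoding is purely illustrative.

def atoms(f):
    """Collect the propositional atoms occurring in a formula."""
    if f[0] == 'atom':
        return {f[1]}
    return set().union(*[atoms(sub) for sub in f[1:]])

def holds(f, v):
    """Evaluate f under the truth assignment v (a dict atom -> bool)."""
    tag = f[0]
    if tag == 'atom':
        return v[f[1]]
    if tag == 'not':
        return not holds(f[1], v)
    if tag == 'and':
        return holds(f[1], v) and holds(f[2], v)
    if tag == 'or':
        return holds(f[1], v) or holds(f[2], v)
    if tag == 'imp':
        return (not holds(f[1], v)) or holds(f[2], v)
    raise ValueError(f"unknown connective {tag}")

def satisfiable(fmlas):
    """Brute-force consistency check: search all truth assignments.
    Returns a witnessing assignment (i.e. a model), or None."""
    syms = sorted(set().union(*[atoms(f) for f in fmlas]))
    for bits in product([False, True], repeat=len(syms)):
        v = dict(zip(syms, bits))
        if all(holds(f, v) for f in fmlas):
            return v
    return None

def complete(theory, sentences):
    """Build a complete consistent extension in stages: stage i meets
    the "requirement" of deciding sentences[i], never breaking
    consistency along the way."""
    cur = list(theory)
    for s in sentences:
        cur.append(s if satisfiable(cur + [s]) else ('not', s))
    return cur
```

Feeding in an independent sentence with either polarity already exhibits the two-model phenomenon discussed next:

```python
T   = [('imp', ('atom', 'p'), ('atom', 'q'))]    # toy theory: { p -> q }
phi = ('atom', 'q')                              # q is independent of T

print(satisfiable(complete(T, [phi])))           # {'p': False, 'q': True}
print(satisfiable(complete(T, [('not', phi)])))  # {'p': False, 'q': False}
```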
So Henkinization lets us explicitly build a model $M$ of any consistent theory $T$. In particular, if $\varphi$ is independent of $T$ then both $T\cup\{\varphi\}$ and $T\cup\{\neg\varphi\}$ are consistent - if, say, $T\cup\{\varphi\}$ were inconsistent, then $T$ would prove $\neg\varphi$, contradicting independence - so we have a method (Henkinization) which can build models of each. Now, what if we try to extend a given structure, rather than build one ex nihilo? That is, suppose $T$ is some theory, $\varphi$ is independent of $T$, and $M$ is a model of $T$. Can I produce models $M_0$ and $M_1$ which are "bigger" than $M$, in which $\varphi$ is true and false respectively? (In particular, this process should be guaranteed to work for all $M$ - it shouldn't rely on $M$ having some specific form.)
The answer depends on exactly what you mean by "bigger" (there are lots of ways to compare mathematical structures), but under one interpretation the answer is yes for a wide class of theories. Namely, suppose $T$ satisfies the pretty reasonable hypothesis of being complete for quantifier-free sentences, which amounts to $T$ deciding exactly what kinds of "local" behavior occur in its models. (For example, the theory of groups isn't complete for quantifier-free sentences: some groups are non-abelian - they have the configuration "$a*b\not=b*a$" - while others are abelian and don't have that configuration.) Then each of the theories $$T_0=T\cup AtDiag(M)\cup\{\varphi\}$$ and $$T_1=T\cup AtDiag(M)\cup\{\neg\varphi\}$$ is consistent; here "$AtDiag(M)$" is the atomic diagram of $M$, which describes at a very basic level how the elements of $M$ relate to each other. Applying Henkinization yields models $M_0$ and $M_1$ of $T_0$ and $T_1$, respectively; and since each of $T_0$ and $T_1$ contains the atomic diagram of $M$, $M$ "embeds" in a precise sense into both $M_0$ and $M_1$.
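To see what $AtDiag(M)$ looks like in a tiny case, here's a sketch (the function name and the string encoding of the facts are mine, purely for illustration) that lists the atomic facts of a finite structure with one binary operation, naming each element with a fresh constant:

```python
def atomic_diagram(elements, op):
    """Atomic facts true in (elements, op), with a fresh constant c_a
    for each element a: the full operation table plus all
    inequalities between distinct elements."""
    facts = [f"c{a} * c{b} = c{op(a, b)}"
             for a in elements for b in elements]
    facts += [f"c{a} != c{b}"
              for a in elements for b in elements if a != b]
    return facts

# Z/2Z = ({0, 1}, + mod 2): its atomic diagram pins down the whole table.
print(atomic_diagram([0, 1], lambda a, b: (a + b) % 2))
# ['c0 * c0 = c0', 'c0 * c1 = c1', 'c1 * c0 = c1', 'c1 * c1 = c0',
#  'c0 != c1', 'c1 != c0']
```

Any structure satisfying all of these facts must contain distinct interpretations of $c_0$ and $c_1$ obeying this same operation table - that is, an embedded copy of $\mathbb{Z}/2\mathbb{Z}$ - which is exactly the sense in which $M$ embeds into $M_0$ and $M_1$.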
EDIT: One can reasonably ask:
How complicated is Henkinization?
This is a bit of a vague question, but one way to make it precise is via computability theory: we can ask, given a computable theory $T$, how easy is it to compute (= build completely explicitly) a model of $T$ by Henkinization? And how easy is it to compute a model of $T$ via any method (that is, is Henkinization optimal)?
It turns out that Henkinization is optimal, and produces models in an "almost computable" way. Specifically, we can always find a model of $T$ which is low - that is, whose halting problem is no more complicated than the classical halting problem. More precisely, any PA degree computes a model of $T$ (and there are low PA degrees). Such a model can be built via Henkinization, and this result is optimal - there are computable theories, every model of which computes a PA degree.
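For reference, here are the two notions just used, stated precisely (my phrasing, but both are standard): $$X\text{ is low}\iff X'\le_T\emptyset',\qquad \mathbf{d}\text{ is a PA degree}\iff \mathbf{d}\text{ computes a complete consistent extension of }PA,$$ where $X'$ is the halting problem relativized to $X$. The existence of low PA degrees is the Jockusch-Soare low basis theorem.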
And in fact, if $T$ is decidable, then the Henkinization process can be made computable! So what complexity there is comes from the technical step of completing the theory, not from the model-building part of Henkinization itself. The takeaway, to my mind, is that Goedel's completeness theorem is very, very close to being constructively true.
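To gesture at why decidability makes the completion step computable, here's a hedged sketch in the same toy style as above; `decides` is a hypothetical oracle standing in for $T$'s decision procedure, not a real library call. (It suffices to decide provability from $T$ plus finitely many extra sentences, since $T\cup\{\sigma_1,\dots,\sigma_n\}\vdash\tau$ iff $T\vdash(\sigma_1\wedge\dots\wedge\sigma_n)\to\tau$.)

```python
def computable_complete(decides, sentences):
    """Lindenbaum completion driven by a decision procedure rather
    than a search for proofs.  decides(extra, s) must return True
    exactly when T plus the finite list `extra` proves s - a
    hypothetical oracle, available whenever T is decidable."""
    extra = []
    for s in sentences:  # an enumeration of all sentences, in stages
        # Invariant: T + extra is consistent.  If T + extra refutes s,
        # add ('not', s); otherwise T + extra + {s} is consistent and
        # we add s.  Either way the invariant is preserved.
        extra.append(('not', s) if decides(extra, ('not', s)) else s)
    return extra
```

Compare with `complete` above: the only change is that each stage's consistency question is answered by `decides` instead of by brute-force search, which is exactly where the decidability of $T$ enters.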