Quick and dirty hack
Here is a simple, easy-to-implement approach that meets your requirements and might be good enough when $X \le 100$. It does not guarantee a uniform distribution.
We will randomly and independently select $X$ numbers, from some distribution. Then, we'll test whether that list of $X$ numbers meets all the requirements (sums to the target $T$, has at least 2 negative numbers and at most $X/2$ negative numbers). If it does, we'll use that list of numbers. If it doesn't, we'll go back to the beginning and try again with new random choices. This is known as rejection sampling.
To make this more efficient, I propose a particular distribution for each number. Let $T$ be the target sum, and define $t=T/X$. Then there are two cases:
If $t \le 1.8$, define $p=(t+3.5)/5.5$, and with probability $p$, choose a random number from 0..4; otherwise choose a random number from -6..-1.
If $t > 1.8$, define $p = \min(1-2/X,(t - 1.5 + 10/X)/2.5)$, and with probability $2/X$, choose a random number from -6..-1, with probability $p$, choose 4, and with probability $1-p-2/X$, choose a random number from 0..3.
Use this distribution for each number in the list. These distributions are chosen to have expected value equal to $t$ (so that rejection sampling will work well, and the number of iterations needed will be manageable), and so that it's likely we'll have an acceptable number of negative numbers in the list.
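As a sketch in Python (the function names `sample_quick` and `draw_one` are mine, not standard), the whole rejection-sampling loop might look like this:

```python
import random

def draw_one(t, X):
    """Draw one number from the proposal distribution described above."""
    if t <= 1.8:
        p = (t + 3.5) / 5.5
        if random.random() < p:
            return random.randint(0, 4)      # non-negative part
        return random.randint(-6, -1)        # negative part
    else:
        p = min(1 - 2 / X, (t - 1.5 + 10 / X) / 2.5)
        u = random.random()
        if u < 2 / X:
            return random.randint(-6, -1)    # with probability 2/X: negative
        elif u < 2 / X + p:
            return 4                         # with probability p: the value 4
        return random.randint(0, 3)          # remaining probability: 0..3

def sample_quick(X, T, max_tries=100_000):
    """Rejection sampling: draw X i.i.d. numbers, retry until the list
    sums to T and has between 2 and X/2 negative entries."""
    t = T / X
    for _ in range(max_tries):
        nums = [draw_one(t, X) for _ in range(X)]
        neg = sum(1 for v in nums if v < 0)
        if sum(nums) == T and 2 <= neg <= X // 2:
            return nums
    return None  # give up after too many rejections
```

Because each proposal has expected value $t$, the sum concentrates around $T$ and the retry loop terminates quickly in practice for moderate $X$.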
Principled solution
Alternatively, here is a solution that guarantees a perfectly uniform distribution, but unfortunately will be more complicated to implement. Let's say that a list of numbers is a solution if it satisfies all of your conditions. Then, it is possible to sample uniformly at random from the set of solutions.
The core technique is that we'll construct an algorithm to count the number of solutions. In fact, we'll define an ordering on all possible solutions, call it $\prec$, so that $x \prec y$ means that the solution $x$ comes before $y$ in the ordering. Then, we'll design an algorithm that, given an index $r$, finds the $r$th solution in this ordering. With these building blocks, it becomes simple to generate a random solution: we count the number $n$ of solutions, we randomly pick a number $r$ in the range $0,\dots,n-1$, and then we find the $r$th solution and output it. By construction, this samples uniformly at random from the set of all solutions.
How do we count the number of solutions? Using dynamic programming. Let $A[i,s,k]$ count the number of ways that the first $i$ numbers can sum to $s$ with exactly $k$ negative numbers among them. We can compute $A[i,s,k]$ for every value of $i,s,k$ using the base case $A[0,0,0]=1$ (and $A[0,s,k]=0$ otherwise) together with the recurrence:
$$A[i,s,k] = \sum_{x=-6}^{-1} A[i-1,s-x,k-1] + \sum_{x=0}^4 A[i-1,s-x,k].$$
Finally, if $t$ is the target value, then $\sum_{k=2}^{X/2} A[X,t,k]$ counts the number of solutions.
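Here is a sketch of that table in Python (the helper names are mine). The table is built forward rather than via the backward recurrence, which is equivalent, and each layer is a dictionary keyed on $(s,k)$ since partial sums can be negative:

```python
def build_table(X):
    """A[i][(s, k)] = number of ways the first i numbers (each in -6..4)
    can sum to s with exactly k of them negative."""
    A = [dict() for _ in range(X + 1)]
    A[0][(0, 0)] = 1  # base case: the empty prefix
    for i in range(1, X + 1):
        for (s, k), c in A[i - 1].items():
            for x in range(-6, 5):
                key = (s + x, k + (x < 0))
                A[i][key] = A[i].get(key, 0) + c
    return A

def count_solutions(A, X, T):
    # Sum over the allowed numbers of negative entries: 2 .. X/2.
    return sum(A[X].get((T, k), 0) for k in range(2, X // 2 + 1))
```

The table has $O(X \cdot X \cdot \max|s|)$ entries, so this is cheap for the sizes in question.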
Now, given an index $r$, how do we find the $r$th solution in an appropriate ordering? This is called "unranking". The procedure itself gets messy, but conceptually it is simple: we traverse the recurrence backwards, using $A$ to figure out how to fill in the last element so that exactly $r$ solutions precede the one we'll end up picking.
Basically, we can put solutions to your problem into one-to-one correspondence (a bijection) with paths through a DAG. Each vertex in the graph corresponds to a tuple $(i,s,k)$. The DAG has a single starting node, with an edge from the start node to $(X,t,k)$ for each $k=2,3,\dots,X/2$. The DAG also has a single ending node, $(0,0,0)$. Using the recurrence above, we can see that each path through the graph from the start node to the end node can be mapped to a solution to your problem. The dynamic programming above is basically counting the number of such paths in this graph.
Now our unranking algorithm can be viewed as finding the $r$th path in this graph, and mapping it to a solution to your problem. For how to do that, see Efficiently sampling shortest $s$-$t$ paths uniformly and independently at random and Algorithm that finds the number of simple paths from $s$ to $t$ in $G$ to understand the core concepts here.
If we expand this out in the specific case of your particular problem, I think it looks something like the following (warning: it gets incredibly messy). First, we find $k^*$ such that
$$\sum_{k=2}^{k^*-1} A[X,t,k] \le r < \sum_{k=2}^{k^*} A[X,t,k].$$
Next, we let $r' = r - \sum_{k=2}^{k^*-1} A[X,t,k]$. Next, defining
$$f(x^*) = \sum_{x=-6}^{\min(-1,x^*)} A[X-1,t-x,k^*-1] + \sum_{x=0}^{x^*} A[X-1,t-x,k^*],$$
we find $x^*$ such that
$$f(x^*-1) \le r' < f(x^*),$$
we define $r'' = r' - f(x^*-1)$, and $k'=k^*-1$ if $x^*<0$ or $k'=k^*$ if $x^*\ge 0$. Then we decide that the last element in the list of numbers will be $x^*$, and the list will contain $k^*$ negative numbers. We can recurse on the state $(X-1, t-x^*, k')$ with rank $r''$ to fill in the entire list, working from back to front.
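Putting the pieces together, a sketch of the whole count-then-unrank sampler in Python might look like the following (all names are mine; `build_table` repeats the dynamic program from the counting step, and elements are filled in back to front exactly as described above):

```python
import random

def build_table(X):
    """A[i][(s, k)] = number of length-i prefixes (entries in -6..4)
    summing to s with exactly k negative entries."""
    A = [dict() for _ in range(X + 1)]
    A[0][(0, 0)] = 1
    for i in range(1, X + 1):
        for (s, k), c in A[i - 1].items():
            for x in range(-6, 5):
                key = (s + x, k + (x < 0))
                A[i][key] = A[i].get(key, 0) + c
    return A

def unrank(A, X, T, r):
    """Return the r-th (0-based) solution, ordered first by the number of
    negatives k = 2..X/2, then element by element with x = -6..4."""
    # First locate the negative-count bucket k*.
    for k in range(2, X // 2 + 1):
        c = A[X].get((T, k), 0)
        if r < c:
            break
        r -= c
    else:
        raise IndexError("rank out of range")
    out, s = [], T
    for i in range(X, 0, -1):
        # Choose element i so that the counts bracket the remaining rank.
        for x in range(-6, 5):
            c = A[i - 1].get((s - x, k - (x < 0)), 0)
            if r < c:
                out.append(x)
                s -= x
                k -= (x < 0)
                break
            r -= c
    return out[::-1]  # elements were filled back to front

def sample_uniform(X, T):
    """Count the solutions, pick a uniform rank, and unrank it."""
    A = build_table(X)
    n = sum(A[X].get((T, k), 0) for k in range(2, X // 2 + 1))
    return unrank(A, X, T, random.randrange(n))
```

For small instances you can verify against brute-force enumeration that every rank yields a distinct valid solution, which is what makes the sampling exactly uniform.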
I haven't checked the math above carefully, so please try to understand the concept, and then work out the solution for yourself, to make sure I haven't made mistakes along the way.