This is not a full answer since I talk only about the case $\Omega = [0,1]$, but this question is linked to generalizations of the mean value theorem to vector-valued functions.
Observe that the hypothesis: $f \in V,\ \sup_{\Omega} f \leq 0 \Rightarrow \int_0^1 f \leq 0$ is equivalent to $g \in V \Rightarrow \int_0^1 g \leq \sup_{\Omega} g$.
The $\Leftarrow$ direction is straightforward. For the $\Rightarrow$ direction, apply the hypothesis to $f(x) = g(x) - \sup_{\Omega} g$.
Taking $-g$ instead of $g$, we also find that $g \in V \Rightarrow \int_0^1 g \geq \inf_{\Omega} g$, i.e.
$$g \in V \Rightarrow \inf_{\Omega} g \leq \int_0^1 g \leq \sup_{\Omega} g$$
In the case $\Omega = [0,1]$ (which is closed), these are just the standard lower and upper bounds for the Riemann integral of a continuous function, so this inequality alone yields nothing substantial.
Assume now that $\Omega = [0,1]$ and that $V$ is an $N$-dimensional subspace of $C[0,1]$ with basis $\{u_1,\dots,u_N\}$.
Define the vector-valued function $F : [0,1] \rightarrow \mathbb R^N$ by
$$F(x) = \begin{pmatrix}u_1(x) \\ \vdots \\ u_N(x) \end{pmatrix}$$
A generalization of the mean value theorem given in "Mean value theorems for vector valued functions" by Robert M. McLeod implies that there exist $x_1,\dots,x_N \in (0,1)$ and $\lambda_1,\dots,\lambda_N \geq 0$ with $\sum_{k=1}^N \lambda_k = 1$ such that
$$\int_0^1 F = \sum_{k=1}^N \lambda_k F(x_k)$$
where the integral $\int_0^1 F$ is performed component-wise.
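As a quick sanity check of this component-wise integration (with a hypothetical monomial basis $u_j(x) = x^{j-1}$, chosen here purely for illustration), one can approximate $\int_0^1 F$ numerically:

```python
import numpy as np

# Hypothetical basis for illustration: u_j(x) = x**(j-1), j = 1..N,
# so F(x) = (1, x, ..., x**(N-1)).  The exact component-wise integrals
# over [0, 1] are 1, 1/2, ..., 1/N.
N = 3

def F(x):
    # Vector-valued F evaluated at an array of points x; shape (N, len(x)).
    return np.array([x**j for j in range(N)])

# Midpoint rule with M subintervals, applied to each component of F at once.
M = 100_000
mid = (np.arange(M) + 0.5) / M
integral_F = F(mid).mean(axis=1)

print(integral_F)  # close to [1, 1/2, 1/3]
```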
Since $\{u_1,\dots,u_N\}$ is a basis of $V$, an arbitrary $g \in V$ can be written as
$$g(x) = \alpha \cdot F(x)$$
for some $\alpha \in \mathbb R^N$, where $\cdot$ denotes the usual dot product.
Hence,
$$\int_0^1 g = \int_0^1 \alpha \cdot F = \alpha \cdot \int_0^1 F = \alpha \cdot \sum_{k=1}^N \lambda_k F(x_k) = \sum_{k=1}^N \lambda_k\, \alpha \cdot F(x_k) = \sum_{k=1}^N \lambda_k g(x_k)$$
Note that the points $x_1,\dots,x_N$ and weights $\lambda_1,\dots,\lambda_N$ do not depend on $g$: the same convex combination represents the integral of every $g \in V$. In particular, if $\sup_{[0,1]} g \leq 0$, then each $g(x_k) \leq 0$ and hence $\int_0^1 g \leq 0$.
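For a concrete instance of such a representation, take $V$ to be the polynomials of degree at most $N-1$ (an assumption made here just for illustration): the $N$-point Gauss–Legendre rule on $[0,1]$ has nodes in $(0,1)$ and positive weights summing to $1$, and it reproduces $\int_0^1 g$ exactly for every $g \in V$. A minimal sketch:

```python
import numpy as np

# For V = polynomials of degree <= N-1 on [0, 1], the N-point Gauss-Legendre
# rule supplies nodes x_k in (0, 1) and weights lam_k > 0 with sum(lam_k) = 1
# such that int_0^1 g = sum_k lam_k * g(x_k) holds exactly -- a concrete
# instance of the convex-combination representation above.
N = 4
nodes, weights = np.polynomial.legendre.leggauss(N)  # rule on [-1, 1]
x = (nodes + 1) / 2   # map nodes into (0, 1)
lam = weights / 2     # rescale weights; they now sum to 1

assert np.isclose(lam.sum(), 1.0) and np.all((x > 0) & (x < 1))

# Check the identity on the monomial basis g(t) = t**j, whose exact
# integral over [0, 1] is 1/(j+1).
for j in range(N):
    assert np.isclose(np.sum(lam * x**j), 1 / (j + 1))
```

This does not recover McLeod's points for a general $V$, but it shows the representation is not vacuous: for this particular subspace the $x_k$ and $\lambda_k$ can be written down explicitly.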