Big-$O$ and $\Omega$ are actually defined entirely independently of complexity theory; they are simply properties of functions.
$O$ is used for asymptotic upper bounds of a function, and $\Omega$ is used for asymptotic lower bounds. If for some functions $f,g$ we have both $f \in O(g)$ and $f \in \Omega(g)$, we can use big-theta notation and write $f \in \Theta(g)$.
So, we can say something like $n^2 \in O(n^3)$, or $n^3 \in \Omega(n^2)$. That statement is entirely separate from whatever meaning we attach to $n^2$ or $n^3$.
For the formal definition of what we mean by "lower bound" or "upper bound", see here.
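For reference (these are the standard textbook definitions, nothing specific to this answer), they look roughly like this:

$$f \in O(g) \iff \exists\, c > 0,\ n_0 \text{ such that } f(n) \le c \cdot g(n) \text{ for all } n \ge n_0$$
$$f \in \Omega(g) \iff \exists\, c > 0,\ n_0 \text{ such that } f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0$$
$$f \in \Theta(g) \iff f \in O(g) \text{ and } f \in \Omega(g)$$

For example, $n^2 \in O(n^3)$ holds with $c = 1$ and $n_0 = 1$, since $n^2 \le n^3$ for all $n \ge 1$; the same pair of constants witnesses $n^3 \in \Omega(n^2)$.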
When we're analyzing an algorithm, there are a few cost functions we like to look at. One is the greatest time required for any input of size $n$ (the worst case); another is the average number of steps over inputs of size $n$ (the average case). There are numerous other things you can look at, such as the number of bits of memory used, the maximum number of cache misses, the number of queries to a database, etc.
The important thing is, all of these are just functions, and we can describe any of them with $O$, $\Omega$, or $\Theta$. Big-$O$ and friends describe a function's growth, but carry no meaning if we don't specify which function's growth we are describing.
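As a minimal sketch of that idea (my own illustration, assuming linear search over a list and a uniform choice among "hit at each position" plus one unsuccessful search), the worst-case and average-case comparison counts are literally just functions of $n$:

```python
# Counting comparisons made by linear search, to show that "worst case" and
# "average case" are ordinary functions of n, which we can then bound with O/Omega/Theta.

def comparisons(haystack, needle):
    """Number of equality comparisons linear search performs."""
    count = 0
    for item in haystack:
        count += 1
        if item == needle:
            break
    return count

def cost_profile(n):
    """Comparison counts for searching 0..n-1, for each possible target
    (every successful position, plus one unsuccessful search)."""
    haystack = list(range(n))
    targets = list(range(n)) + [n]   # the value n is never in the haystack
    return [comparisons(haystack, t) for t in targets]

def worst_case(n):
    return max(cost_profile(n))      # greatest cost over these inputs

def average_case(n):
    costs = cost_profile(n)          # assumes each target is equally likely
    return sum(costs) / len(costs)

for n in [1, 2, 4, 8, 16]:
    print(n, worst_case(n), average_case(n))
```

Both `worst_case` and `average_case` grow linearly here, so both are in $O(n)$; the notation applies equally well to either cost function once you've said which one you mean.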
Note that worst-case and upper bounds are different things. The "worst-case complexity" of an algorithm is itself a function: for each input size $n$, it gives the greatest time the algorithm takes over all inputs of that size. What's important here is that we take the worst over all inputs of size $n$, and the function gives the exact worst-case time, not a bound on it.
On the other hand, saying that a function is an upper bound just means we know that function grows asymptotically no faster than the bound. So you can still have an upper bound on the average-case complexity (in fact, you usually will).
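To make that concrete with the linear-search example again (the exact average depends on the input distribution you assume; here, the target is equally likely to be at each position):

$$T_{\text{worst}}(n) = n \in \Theta(n), \qquad T_{\text{avg}}(n) = \frac{n+1}{2} \in O(n)$$

$O(n)$ and $O(n^2)$ are both valid upper bounds on either of these functions; $\Theta(n)$ is the tight statement. Worst-case vs. average-case is a choice of which cost function to study, while $O$, $\Omega$, and $\Theta$ are statements about how that chosen function grows.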