As stated by Dan Boneh:
> Different communities define these (negligible and non-negligible) differently.
For practitioners, $\varepsilon$ is basically a concrete scalar; for example:
- $1/2^{30}$ is considered non-negligible, because an event with this probability is likely to occur after about $2^{30}$ bytes (1 gigabyte) of data have been processed
- $1/2^{80}$ is considered negligible, because an event with this probability is unlikely to occur over the lifetime of the key (a quick numerical sanity check follows this list).
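To see where these practitioner numbers come from, here is a minimal back-of-the-envelope sketch of my own (not from Boneh's course; the function name is mine). It only uses the fact that an event of probability $\varepsilon$ takes $1/\varepsilon$ independent trials to occur in expectation (geometric distribution):

```python
# Expected number of Bernoulli(eps) trials until the first success
# is 1/eps (geometric distribution).
def expected_trials(eps: float) -> float:
    return 1.0 / eps

# Non-negligible: 1/2^30. At one trial per byte processed, the event
# is expected after ~2^30 bytes, i.e. about 1 gigabyte of data.
print(expected_trials(2**-30))        # 1073741824.0, i.e. 2^30

# Negligible: 1/2^80. Even at 10^9 trials per second, the expected
# waiting time is about 38 million years.
seconds = expected_trials(2**-80) / 1e9
print(seconds / (3600 * 24 * 365))    # ~3.8e7 years
```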
On the other hand, in theory, $\varepsilon$ is considered a function $\mathbb{N} \to \mathbb{R}$.
Saying that a function is non-negligible means that the function is larger than the inverse of some polynomial infinitely often:
$\exists d: \varepsilon(\lambda) \geqslant 1/\lambda^d$ for infinitely many $\lambda$
Saying that a function is negligible means that the function is eventually smaller than the inverse of every polynomial:
$\forall d \; \exists \lambda_d \; \forall \lambda \geqslant \lambda_d: \varepsilon(\lambda) \leqslant 1/\lambda^d$
where $\lambda_d$ is some integer threshold depending on $d$ and, in both cases, $\lambda$ is an integer.
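As a consistency check (this unpacking is my own addition, not from the course), the non-negligible condition is exactly the negation of the negligible one, up to the inessential difference between strict and non-strict inequality:

$$\neg\bigl(\forall d \;\exists \lambda_d \;\forall \lambda \geqslant \lambda_d:\ \varepsilon(\lambda) \leqslant 1/\lambda^d\bigr) \iff \exists d \;\forall \lambda_0 \;\exists \lambda \geqslant \lambda_0:\ \varepsilon(\lambda) > 1/\lambda^d$$

and "$\exists \lambda \geqslant \lambda_0$ for every $\lambda_0$" is precisely "for infinitely many $\lambda$".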
That said, we can see that $\varepsilon(\lambda) = 2^{-\lambda} = 1/2^\lambda$ is negligible, because for any constant $d$ there is a sufficiently large threshold $\lambda_d$ such that $\varepsilon(\lambda) \leqslant 1/\lambda^d$ for all $\lambda \geqslant \lambda_d$.
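To make the "sufficiently large" step explicit (this derivation is my own addition), taking base-2 logarithms turns the bound into a comparison between linear and logarithmic growth:

$$2^{-\lambda} \leqslant \frac{1}{\lambda^d} \iff \lambda^d \leqslant 2^{\lambda} \iff d \log_2 \lambda \leqslant \lambda,$$

which holds for all sufficiently large $\lambda$, since $\lambda$ outgrows $d \log_2 \lambda$ for any fixed $d$.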
We can also verify the other example from the link e-sushi provided: $\varepsilon(\lambda) = 2^{-100} = 1/2^{100}$ is non-negligible, because if we set $d = 1000$ then $1/\lambda^{1000} \leqslant 1/2^{1000} \leqslant 2^{-100} = \varepsilon(\lambda)$ for every $\lambda \geqslant 2$, so the condition holds for infinitely many $\lambda$.
$\square$
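Both examples can also be cross-checked numerically. Below is a minimal sketch of my own (the function names are mine, not from the course); it rewrites each comparison in base-2 logarithms, matching the derivation above:

```python
from math import log2

# eps(lam) = 2^-lam versus 1/lam^d:
# 2^-lam <= lam^-d  <=>  d * log2(lam) <= lam  (see the derivation above).
def bound_holds_negligible(lam: int, d: int) -> bool:
    return d * log2(lam) <= lam

# For d = 3 the bound fails for small lam but holds from a threshold on,
# which is exactly the "for all lam >= lambda_d" pattern.
print([bound_holds_negligible(lam, d=3) for lam in (2, 8, 16, 32)])
# -> [False, False, True, True]

# eps(lam) = 2^-100 versus 1/lam^1000:
# 2^-100 >= lam^-1000  <=>  1000 * log2(lam) >= 100  <=>  lam >= 2**0.1.
def bound_holds_non_negligible(lam: int, d: int = 1000) -> bool:
    return d * log2(lam) >= 100

# Holds for every integer lam >= 2, so certainly infinitely often.
print(all(bound_holds_non_negligible(lam) for lam in range(2, 1000)))
# -> True
```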
Note: Most of this answer is based on Dan Boneh's explanation of this subject in his Coursera course.
Note 2: This is also a nice answer on the subject.