I think it is simply called the output size, or possibly the security in bits, as the two are directly tied together. That is: we presume that the resistance against pre-image and collision attacks is determined by the output size; if it isn't, then the hash function is not considered cryptographically secure.
The Wikipedia entry states that hash functions should have the following properties:
- pre-image resistance
- 2nd pre-image resistance
- collision resistance
- pseudo-random
What is missing there is that we expect the (2nd) pre-image resistance to be about as strong as the output size of the hash function, and the collision resistance to be about half of the output size, because of the birthday bound. That last figure may not be entirely precise, as a birthday attack is more of a time / memory trade-off, but the idea is still valid. For SHA-256, for instance, this means roughly 256 bits of pre-image resistance and roughly 128 bits of collision resistance.
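To make the birthday bound concrete, here is a minimal Python sketch (my own illustration, not part of any standard): it truncates SHA-256 to 32 bits and counts how many random inputs are needed before two of them collide. We expect roughly $2^{16}$ attempts, far fewer than the $2^{32}$ a brute-force pre-image search would need.

```python
import hashlib
import os

def truncated_hash(data: bytes, bits: int = 32) -> int:
    """Return SHA-256 of `data`, truncated to the first `bits` bits."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def collision_attempts(bits: int = 32) -> int:
    """Draw random inputs until two distinct ones collide; return the count."""
    seen = {}
    attempts = 0
    while True:
        attempts += 1
        message = os.urandom(16)
        h = truncated_hash(message, bits)
        if h in seen and seen[h] != message:
            return attempts
        seen[h] = message

# The birthday bound: for a 32-bit output a collision shows up after roughly
# 2^16 = 65,536 attempts on average, not the ~2^32 a pre-image search needs.
print(collision_attempts(32))
```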
Obviously, if there is a significant skew towards some output values of the hash function, then this security argument won't hold. As such, the pseudo-randomness of the output can be thought of as a property that emerges if the other properties hold; it is certainly possible to find academic material where this property is not mentioned directly.
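As an informal illustration (my own sketch, nothing like a rigorous statistical test), we can bucket SHA-256 digests by their first byte and check that no byte value is strongly over- or under-represented:

```python
import hashlib
from collections import Counter

samples = 100_000
buckets = Counter(
    hashlib.sha256(i.to_bytes(8, "big")).digest()[0]  # first output byte
    for i in range(samples)
)

expected = samples / 256  # ~390.6 hits per byte value if the output is uniform
worst = max(abs(count - expected) for count in buckets.values())
print(f"expected per bucket: {expected:.1f}, worst deviation: {worst:.1f}")
```

Of course, passing such a simple check proves nothing; a significant skew would merely disprove pseudo-randomness.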
If we're getting a bit more academic: in principle we don't have to define the output domain as a set of bits; we can define it to be anything, and as long as the output is well distributed within that domain, the hash function can be secure.
Sometimes we see textbook examples where a secure hash function is extended by a `0` bit and it is asked whether the result is still secure. Although it upholds the original security in bits, it of course fails to meet the expected security of the larger output size, as well as the pseudo-randomness property. Or in other words: it only retains the original security because the output set consists of bit strings that always end with a `0` bit.
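A sketch of that textbook construction (hypothetical, just to make the failure visible):

```python
import hashlib

def extended_hash(message: bytes) -> str:
    """SHA-256 rendered as a bit string, with a single '0' bit appended."""
    bits = format(int.from_bytes(hashlib.sha256(message).digest(), "big"), "0256b")
    return bits + "0"

# Collisions in this 257-bit "hash" are exactly collisions in SHA-256, so it keeps
# SHA-256's ~128-bit collision resistance rather than the ~128.5 bits a 257-bit
# output would suggest. It also trivially fails pseudo-randomness: one query is
# enough to distinguish it, since a truly random 257-bit string ends in '0' only
# half of the time.
h = extended_hash(b"example")
assert len(h) == 257 and h.endswith("0")
```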
So given this background we can finally answer your questions:
> Any hash function has fixed output size but it does not mean that all values of output values are ever used.

Well, no, but in general hash functions are designed not to skew the results, which means that they need to keep as much information as possible (between rounds and blocks, if we dive into common hash function design). So generally we would expect a rather even distribution over all the output bits.
> What's the term for the size of output set?

As indicated, most hash functions simply define an output size. It is expected that a hash function uses all or most output values, assuming that the number of messages is high enough. Note that the number of possible input messages is either very high or even infinite, so if an output value can be reached at all, it is very likely that it will be reached.
> Can it be calculated?

Not really. We can assume pseudo-randomness / a uniform distribution and do some calculations based on that, assuming that the input message domain is not infinite, of course. But to determine whether a specific output value can actually be reached, we would have to perform the reverse calculation, which is exactly what pre-image resistance prevents.
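If we do model the hash as a random function, then hashing $N$ inputs into $M$ possible outputs is a balls-into-bins problem, and the expected fraction of outputs reached is $1 - (1 - 1/M)^N \approx 1 - e^{-N/M}$. Here is a small sketch of mine that checks this model against a truncated hash, where full enumeration is feasible:

```python
import hashlib
import math

bits = 16
M = 2 ** bits  # number of possible outputs of the truncated hash
N = 2 ** 17    # number of inputs: twice as many as outputs

reached = {
    int.from_bytes(hashlib.sha256(i.to_bytes(8, "big")).digest()[:2], "big")
    for i in range(N)
}

print(f"observed coverage:  {len(reached) / M:.4f}")
print(f"predicted coverage: {1 - math.exp(-N / M):.4f}")  # 1 - e^-2 ≈ 0.8647
```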
> How do most widely used hashes compare by that metric?

They'll have comparable results, as they are all expected to have pseudo-random output, as indicated above.
> For example, does SHA256 use 100% of output values?

We don't know. It's likely, but it would be very hard to prove.
Note that SHA-256 does not have infinite input; it "only" allows for messages up to $2^{64} - 1$ bits. That means that it allows for approximately $2^{2^{64}}$ messages. Now that's not exactly infinite, but it is vastly more than the $2^{256}$ possible output values. Assuming a roughly even distribution, most outputs will then have about $2^{2^{64} - 256}$ messages mapping to them, and subtracting $256$ from an exponent as large as $2^{64}$ barely changes such a value.
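As a quick sanity check on that count (my arithmetic, not taken from a reference): counting all bit strings of length $0$ through $2^{64} - 1$ gives

$$\sum_{n=0}^{2^{64}-1} 2^n = 2^{2^{64}} - 1 \approx 2^{2^{64}},$$

so by a simple counting argument the average output value has about $2^{2^{64}} / 2^{256} = 2^{2^{64}-256}$ pre-images.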