After some more research, I believe this is a type of "redundant" feature, in the language of machine learning. A redundant feature is one that can be shown to add no information by looking at the inputs alone. The term is also used for a feature that has a correlation of ±1 with another feature; both perfectly-correlated and zero-variance features are redundant in the sense that you can tell they add no information without ever looking at the targets.
I'd probably call it a "redundant, constant-value feature" to be specific.
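As a minimal sketch of spotting both kinds of redundancy from the inputs alone (the column names and data here are made up for illustration):

```python
import numpy as np
import pandas as pd

X = pd.DataFrame({
    "a": [1.0, 2.0, 3.0, 4.0],
    "b": [2.0, 4.0, 6.0, 8.0],   # perfectly correlated with "a"
    "c": [5.0, 5.0, 5.0, 5.0],   # constant value: zero variance
})

# Zero-variance (constant-value) features: detectable without the targets
constant = [col for col in X.columns if X[col].nunique() == 1]

# Perfectly-correlated features: |Pearson r| == 1 with an earlier column
# (the constant column yields NaN correlations, which are skipped)
corr = X.corr().abs()
redundant = [
    X.columns[j]
    for i in range(len(X.columns))
    for j in range(i + 1, len(X.columns))
    if np.isclose(corr.iloc[i, j], 1.0)
]

print(constant)   # ['c']
print(redundant)  # ['b']
```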
Redundant features are related to irrelevant features, which have no predictive power. Unlike redundancy, though, irrelevance can only be determined by considering the targets, for example by calculating feature importances. A typical irrelevant feature is one that takes a purely random value: it is uncorrelated with the targets, so it can't possibly help predict the target value of a previously unseen example.
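To sketch that idea, assuming scikit-learn is available: append a pure-noise column to a regression dataset and check that a model assigns it near-zero importance. The dataset and choice of random forest here are just for illustration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=500, n_features=4, random_state=0)

# Append a feature that is random noise, irrelevant by construction
X = np.column_stack([X, rng.normal(size=500)])

model = RandomForestRegressor(random_state=0).fit(X, y)
print(model.feature_importances_)  # the last importance should be near zero
```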
The Wikipedia article on feature selection does a pretty good job of explaining these concepts.
I believe "zero variance predictor", as Dan Scally answered, is equally valid, just more common in statistics - so if that's your field, it may be the more appropriate term.
I find "near-zero variance predictor" much more descriptive in the case where the feature's variance is close to, but not exactly, zero. "Zero variance predictor" seems like a convoluted way of saying the feature always takes the same value, so I'd prefer to call it a constant-value feature, but that's just my preference.
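Incidentally, scikit-learn's `VarianceThreshold` handles both cases: with the default threshold of 0 it drops only exactly-constant features, and with a small positive threshold it drops near-zero variance ones too. The 0.01 cutoff and the data below are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.array([
    [1.0, 5.0, 0.30],
    [2.0, 5.0, 0.31],
    [3.0, 5.0, 0.29],
    [4.0, 5.0, 0.30],
])

# Column 1 is constant (zero variance); column 2 is near-zero variance
selector = VarianceThreshold(threshold=0.01)
X_reduced = selector.fit_transform(X)
print(selector.get_support())  # [ True False False]: only column 0 survives
```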