Is the sample mean of $N$ independent, identically distributed random variables an unbiased and consistent estimator? And if we take a single random variable and compute the sample mean from it, is that estimator also unbiased and consistent? I was reading about this but got a bit confused by the mixed information.
1 Answer
Bias has to do with the expectation of the estimator. It doesn't matter what size your sample is. If each random variable $X_i$ has expectation $E[X_i] = \mu$, the expectation of the sample mean for a sample of size $N$ is:
$$ E\left[\frac{1}{N}\sum_{i=1}^N {X_i}\right] = \frac{1}{N}E\left[\sum_{i=1}^N {X_i}\right] = \frac{1}{N}\sum_{i=1}^N {E\left[X_i\right]} = \frac{1}{N}\,N\mu = \mu.$$
$N$ could be $1$, it could be $10$, it could be a million; either way, the sample mean is an unbiased estimator of the population mean.
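A quick Monte Carlo sketch of this point (not from the answer itself; the uniform distribution, the value $\mu = 5$, and the helper `mean_of_sample_means` are illustrative choices):

```python
import random

random.seed(0)

def mean_of_sample_means(n, trials=100_000, mu=5.0):
    """Average the sample mean over many repeated samples of size n.

    Draws come from a uniform distribution on [mu - 1, mu + 1], whose
    population mean is mu. If the sample mean is unbiased, this average
    should land near mu for ANY n.
    """
    total = 0.0
    for _ in range(trials):
        sample = [random.uniform(mu - 1, mu + 1) for _ in range(n)]
        total += sum(sample) / n
    return total / trials

# Both averages land near mu = 5, regardless of sample size.
print(mean_of_sample_means(1))    # close to 5
print(mean_of_sample_means(100))  # close to 5
```

Note what this does and doesn't show: the sample mean for $N = 1$ is far noisier than for $N = 100$, but *on average* both hit $\mu$, which is exactly what unbiasedness means.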
Consistency is a bit different. The idea behind an estimator being consistent is that as $N\to \infty$, the estimator converges in probability to the population parameter you're trying to estimate.
The sample mean is a consistent estimator of the population mean because, as the sample size increases, it converges in probability to the population mean (see this question for more on this). Consistency is a property of the estimator as a formula in the limit $N \to \infty$; it is not something that can be true or false for an individual sample, nor something contingent on any particular finite sample size.
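Consistency can be sketched the same way (again an illustrative setup, with a uniform distribution and $\mu = 5$ chosen arbitrarily): track the running sample mean along one growing sample path and watch the error shrink as $N$ grows.

```python
import random

random.seed(1)

mu = 5.0
# One long sample path of i.i.d. draws with population mean mu.
draws = [random.uniform(mu - 1, mu + 1) for _ in range(100_000)]

# The sample mean of an ever-larger prefix of the path: consistency
# says the distance to mu should shrink toward zero as N grows.
for n in (10, 1_000, 100_000):
    sample_mean = sum(draws[:n]) / n
    print(n, abs(sample_mean - mu))
```

The error at any single $N$ is random (it can occasionally tick up from one $N$ to the next), but the trend toward zero is what convergence in probability guarantees.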