Data gathered from natural phenomena often approximately follows a normal distribution, so I would expect your data to as well. The more data you have, the better your estimates or predictions will be.
Wikipedia says:
> In statistics, normality tests are used to determine if a data set is well-modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed.
There are many methods to perform such a test.
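For instance, SciPy ships implementations of several standard normality tests. Here is a minimal sketch, assuming you have NumPy and SciPy installed and your samples in an array (the variable `data` and the synthetic sample below are just placeholders):

```python
import numpy as np
from scipy import stats

# Placeholder data; replace with your own samples.
rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=500)

# Shapiro-Wilk test: the null hypothesis is that the data come
# from a normal distribution.
stat, p = stats.shapiro(data)
print(f"Shapiro-Wilk:        statistic={stat:.4f}, p-value={p:.4f}")

# D'Agostino-Pearson K^2 test, based on skewness and kurtosis.
stat, p = stats.normaltest(data)
print(f"D'Agostino-Pearson:  statistic={stat:.4f}, p-value={p:.4f}")

# A small p-value (e.g. < 0.05) is evidence against normality.
```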
A simple back-of-the-envelope test takes the sample maximum and minimum and computes their z-score, or more properly t-statistic (the number of sample standard deviations that a sample lies above or below the sample mean), and compares it to the 68–95–99.7 rule: if you have a 3σ event (properly, a 3s event) and substantially fewer than 300 samples, or a 4s event and substantially fewer than 15,000 samples, then a normal distribution will understate the maximum magnitude of deviations in the sample data.
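A rough sketch of that back-of-the-envelope check (my own illustration of the rule above, not an established routine):

```python
import numpy as np

def envelope_check(data):
    """Compare the most extreme sample to the 68-95-99.7 rule."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    mean = data.mean()
    s = data.std(ddof=1)  # sample standard deviation
    # Distance of the most extreme observation, in sample standard deviations.
    extreme = max(abs(data.max() - mean), abs(data.min() - mean)) / s
    # Under normality, roughly 1 in 370 samples exceeds 3s and
    # roughly 1 in 15,800 exceeds 4s.
    if (extreme >= 3 and n < 300) or (extreme >= 4 and n < 15000):
        print(f"{extreme:.1f}s event in only {n} samples: "
              "tails look heavier than a normal distribution would suggest.")
    else:
        print(f"Most extreme observation is {extreme:.1f}s from the mean; "
              "nothing alarming from this crude check.")
```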
If your sample data is fixed (i.e., it does not change over time), you can precompute the necessary statistics once and later compare new values and draw conclusions in $O(1)$ time. In practice I would refresh the sample once a day or once an hour, inspect it visually (a good way to spot outliers), and recompute the statistics on a regular basis.
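A hedged sketch of that idea (the class name `PrecomputedStats` is mine, purely for illustration): compute the mean and standard deviation once from the current sample, then flag each new observation in $O(1)$.

```python
import numpy as np

class PrecomputedStats:
    """Precompute mean/std once; check new observations in O(1)."""

    def __init__(self, data):
        data = np.asarray(data, dtype=float)
        self.mean = data.mean()
        self.std = data.std(ddof=1)  # sample standard deviation

    def is_outlier(self, x, threshold=3.0):
        """Flag values more than `threshold` sample standard
        deviations from the precomputed mean."""
        return abs(x - self.mean) / self.std > threshold

# Recompute, say, once a day or once an hour from the refreshed sample:
# stats = PrecomputedStats(latest_sample)
# stats.is_outlier(new_value)
```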