There is a very nice way to compute the variance over a moving window, as detailed by Knuth and Cook, answered locally here and also on a blog here. The method requires access to the data in the window, in the form of the difference $(x_N - x_0)$ between the incoming and outgoing samples. I can't store the windowed data, but I would still like to approximate the behaviour of a windowed running update. Is there a way to maintain an estimate of the evolving variance using only the previous step's parameters $\mu, \sigma^2, d^2$?
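For reference, my understanding of the windowed $(x_N - x_0)$ update is sketched below (variable names are mine). It needs the outgoing sample $x_0$, which is exactly what I can't store:

```python
from collections import deque


class WindowedVariance:
    """Rolling mean/variance over a fixed-size window using the
    (x_new - x_old) update. Requires storing the whole window,
    which is the limitation in my setting."""

    def __init__(self, size):
        self.size = size
        self.window = deque()
        self.mean = 0.0
        self.s = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        if len(self.window) < self.size:
            # While the window is filling, use a standard streaming update.
            self.window.append(x)
            n = len(self.window)
            delta = x - self.mean
            self.mean += delta / n
            self.s += delta * (x - self.mean)
        else:
            # Full window: the new sample x replaces the oldest sample x_old.
            x_old = self.window.popleft()
            self.window.append(x)
            old_mean = self.mean
            self.mean += (x - x_old) / self.size
            # Exact update of the sum of squared deviations.
            self.s += (x - x_old) * (x - self.mean + x_old - old_mean)

    @property
    def variance(self):
        n = len(self.window)
        return self.s / (n - 1) if n > 1 else 0.0
```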
Essentially, the parameters give me an estimate of the distribution at the previous time step. When I then see a data point, I want the variance to inflate slightly if the point is "unexpected" and to shrink a bit if it is "expected". The mean and variance of my data stream move around but eventually converge, and I'd like the running estimate to capture this behaviour.
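To make the desired behaviour concrete, something like an exponentially-weighted update has the right flavour: it inflates the variance on surprising points and shrinks it on expected ones. The decay `alpha` here is a made-up knob, not something from the references above:

```python
def ew_update(mu, var, x, alpha=0.05):
    """One exponentially-weighted update of (mean, variance).
    alpha is a hypothetical decay parameter; the effective memory
    is roughly 1/alpha samples."""
    delta = x - mu              # "surprise": distance of x from the current mean
    incr = alpha * delta
    mu = mu + incr              # pull the mean toward x
    # Inflates var when delta is large ("unexpected"),
    # shrinks it toward zero when delta is small ("expected").
    var = (1.0 - alpha) * (var + delta * incr)
    return mu, var
```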
Here's a contrived example. $\mu$ starts at 0, increases linearly, and is then flat at 5. $\sigma$ starts at 1, increases linearly, then decreases back to 1. The blue line is computed using a window of 30 points. The green line uses the normal streaming equations but, as expected, computes the parameters for the whole data sample rather than just the window. The orange line simply caps $N = \min(N, 30)$ in the normal streaming setting, but it neither adapts quickly enough nor forgets the old samples. Any help would be much appreciated!
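For completeness, by "normal streaming equations" (the green line) I mean a Welford-style update like the following; the orange line is the same thing with the count clamped, roughly `n = min(n + 1, 30)` in place of `n += 1` (my clamp placement is a guess at reproducing it, not a recommendation):

```python
def welford_update(n, mean, m2, x):
    """One step of Welford's online algorithm: running mean and
    sum of squared deviations (m2) over everything seen so far."""
    n += 1
    delta = x - mean
    mean += delta / n
    m2 += delta * (x - mean)  # uses delta w.r.t. both old and new mean
    return n, mean, m2


def sample_variance(n, m2):
    """Unbiased sample variance from the accumulated state."""
    return m2 / (n - 1) if n > 1 else 0.0
```

Since `m2` still accumulates a contribution from every sample ever seen, nothing is ever discounted, which matches the sluggish behaviour I see in the orange line.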