As the title says, are stationarity and low autocorrelation prerequisites for general / linear regression models? That is, if a time series is non-stationary or has strong autocorrelation, is it easier or harder to predict with regression models such as linear models and deep learning?
2 Answers
For (linear) regression models, the following assumptions have to hold:
- Linear Relationship
- Multivariate normality: the error term follows a normal distribution with zero mean and finite (constant) variance
- No autocorrelation of the error terms
- No (strong) multicollinearity
- Homoscedasticity (constant error variance), i.e. no heteroscedasticity
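As a minimal sketch (my own illustration on synthetic data, assuming statsmodels is available; not part of the original answer), this is how one might check several of these assumptions for an OLS fit:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson, jarque_bera
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))                                   # two explanatory variables
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)  # linear model with normal errors

X_const = sm.add_constant(X)
res = sm.OLS(y, X_const).fit()

# No autocorrelation: Durbin-Watson close to 2 suggests little serial correlation
print("Durbin-Watson:", durbin_watson(res.resid))

# Normally distributed errors: Jarque-Bera p-value > 0.05 -> no evidence against normality
print("Jarque-Bera p-value:", jarque_bera(res.resid)[1])

# Homoscedasticity: Breusch-Pagan p-value > 0.05 -> no evidence of heteroscedasticity
print("Breusch-Pagan p-value:", het_breuschpagan(res.resid, X_const)[1])

# No strong multicollinearity: VIF well below ~10 for each regressor
for i in range(1, X_const.shape[1]):
    print(f"VIF x{i}:", variance_inflation_factor(X_const, i))
```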
Non-stationarity (a mean and/or variance that changes over time) is prevalent in most level data (e.g. stock prices), because such series typically contain a unit root or are only trend-stationary.
In most cases it is sufficient to transform the data to relative changes (percentage changes or log changes) to remove the non-stationarity. Taking logarithmic changes can also mitigate heteroscedasticity somewhat.
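For example, a tiny pandas sketch (illustrative numbers only, not from the original answer) of turning a level series into percentage changes and log changes:

```python
import numpy as np
import pandas as pd

prices = pd.Series([100.0, 102.0, 101.0, 105.0, 110.0], name="price")

pct_change = prices.pct_change()    # relative (percentage) change
log_change = np.log(prices).diff()  # log change (continuously compounded return)

print(pct_change)
print(log_change)
```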
To test for stationarity, you can use the Augmented Dickey-Fuller test, where the null hypothesis is that a unit root is present in your data, i.e. the variable is non-stationary.
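A minimal sketch of running the ADF test with statsmodels (the random-walk series here is synthetic, added for illustration):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(size=500))  # random walk -> unit root, non-stationary

stat, pvalue, *_ = adfuller(level)
print(f"ADF statistic: {stat:.2f}, p-value: {pvalue:.3f}")
# Large p-value: cannot reject the null of a unit root -> treat the series as non-stationary

diff = np.diff(level)                    # first differences should be stationary
stat_d, pvalue_d, *_ = adfuller(diff)
print(f"ADF on differences: statistic {stat_d:.2f}, p-value {pvalue_d:.3f}")
```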
Coming to your second question about time series and deep learning: unfortunately, I haven't found a good paper/article yet that discusses which data properties have to be satisfied to get correct statistical results. See my own post - not answered yet.
However, since deep learning models rely on the same underlying statistical machinery - e.g. probability distributions, or even the same underlying models (see e.g. a linear activation function) - I would guess that in a time-series framework the same assumptions have to hold; otherwise there is a risk of spurious regression.
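To illustrate the spurious-regression risk, here is a small sketch of my own (synthetic data, not from the original answer): regressing two independent random walks on each other typically yields a high R² and a tiny p-value even though there is no real relationship.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
x = np.cumsum(rng.normal(size=n))  # independent random walk
y = np.cumsum(rng.normal(size=n))  # another, unrelated random walk

res = sm.OLS(y, sm.add_constant(x)).fit()
print(f"R^2: {res.rsquared:.2f}, slope p-value: {res.pvalues[1]:.2e}")
# Despite no true relationship, the slope usually looks "significant"
# because both series are non-stationary.
```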

To add to my answer and give another insight: I found this answer about stationarity in an LSTM framework. I hope that helps! – Maeaex1 Dec 17 '19 at 08:55