The system \begin{align} \dot{x}(t) &= f(x(t))\\ x(t_0) &= x_0 \end{align} is solved with an explicit linear multi-step method (assume perfect initialization): \begin{equation} \widetilde{x}_{n+s} = \sum_{j=0}^{s-1}a_j\widetilde{x}_{n+j} + h\sum_{j=0}^{s-1}b_jf(\widetilde{x}_{n+j}) \end{equation} What are sufficient conditions for the existence of constants $C, H > 0$ such that for all $h \in [0, H)$ the numerical solution stays bounded, \begin{equation} \|\widetilde{x}_n\| < C, \end{equation} on the interval $t_0 \leqslant t_0 + nh \leqslant t_{end}$?
Assumption:
The above method is zero-stable if all roots of the polynomial \begin{equation} \psi(u) = u^s -\sum_{j=0}^{s-1}a_ju^j \end{equation} lie in the closed unit disk and the roots on the unit circle are simple. Zero-stability is a sufficient condition for the above method to give a bounded output in the sense stated above.
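The root condition is easy to check numerically. A minimal sketch in Python (the helper `is_zero_stable` and its tolerance handling are my own illustration, not a library routine):

```python
import numpy as np

def is_zero_stable(a, tol=1e-6):
    """Check the root condition for psi(u) = u^s - sum_{j=0}^{s-1} a_j u^j.

    `a` lists the method coefficients [a_0, ..., a_{s-1}].
    Returns True iff every root of psi lies in the closed unit disk
    and each root on the unit circle is simple.
    """
    # np.roots expects descending powers: [1, -a_{s-1}, ..., -a_0]
    coeffs = np.concatenate(([1.0], -np.asarray(a, dtype=float)[::-1]))
    roots = np.roots(coeffs)
    for r in roots:
        if abs(r) > 1 + tol:
            return False  # a root outside the closed unit disk
        if abs(abs(r) - 1) <= tol:
            # a root on the unit circle: reject it if it is repeated
            if sum(1 for q in roots if abs(q - r) <= tol) > 1:
                return False
    return True

# psi(u) = u^2 - u (two-step Adams-Bashforth): roots {0, 1} -> zero-stable
print(is_zero_stable([0.0, 1.0]))   # True
# psi(u) = u^2 - 3u + 2 = (u - 1)(u - 2): root u = 2 -> not zero-stable
print(is_zero_stable([-2.0, 3.0]))  # False
# psi(u) = (u - 1)^2: double root on the unit circle -> not zero-stable
print(is_zero_stable([-1.0, 2.0]))  # False
```

The generous `tol` is deliberate: `np.roots` computes eigenvalues of the companion matrix, so a repeated root on the unit circle comes back as a cluster of nearby roots rather than an exact multiple root.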
My thoughts:
Zero-stability is a necessary condition for the above method to be convergent (see e.g. Stoer, Bulirsch, Introduction to Numerical Analysis (2002)). This can be shown with the example system $\dot{x} = 0$. The same argument shows that zero-stability is also necessary for the existence of the above bound.
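The $\dot{x} = 0$ argument can be made concrete: for $f = 0$ the method reduces to the homogeneous recurrence with characteristic polynomial $\psi$, and any root outside the unit disk amplifies an arbitrarily small perturbation of the starting values without bound. A sketch (the coefficients are my own example, chosen so that $\psi(u) = (u-1)(u-2)$):

```python
# Necessity demo on xdot = 0 (so f = 0): the two-step recurrence collapses to
#   x_{n+2} = a_1 x_{n+1} + a_0 x_n.
# Take psi(u) = u^2 - 3u + 2 = (u - 1)(u - 2), i.e. a_0 = -2, a_1 = 3,
# which violates the root condition (root u = 2 outside the unit disk).
a0, a1 = -2.0, 3.0
x = [1.0, 1.0 + 1e-12]  # perfect start except for a rounding-level perturbation
for _ in range(60):
    x.append(a1 * x[-1] + a0 * x[-2])
# The perturbation is amplified like 2^n, so no h-independent bound C exists:
print(abs(x[-1]))  # on the order of 1e-12 * 2^61 ~ 1e6, instead of staying near 1
```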
Zero-stability and consistency of a method are together sufficient for its convergence. The solution $x(t)$ is differentiable, hence continuous, on the whole interval, and a continuous function is bounded on a closed interval. If the method is convergent, then its error is bounded for all sufficiently small step sizes. This means convergence is sufficient for the output of the method to be bounded.
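One way to make the last step explicit (with $M := \max_{t_0 \leqslant t \leqslant t_{end}} \|x(t)\|$, finite by continuity, and $p \geqslant 1$, $K > 0$ the order and error constant of the convergent method):

\begin{equation}
\|\widetilde{x}_n\| \leqslant \|x(t_0 + nh)\| + \|\widetilde{x}_n - x(t_0 + nh)\| \leqslant M + Kh^p \leqslant M + KH^p =: C
\end{equation}

for all $h \in (0, H)$ with $t_0 + nh \leqslant t_{end}$.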
Can this condition be relaxed? My intuition tells me that zero-stability alone, without consistency, is enough for the method to give a bounded output. Is this intuition correct, and can it be proven?
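One data point for this intuition, as a minimal sketch of my own: the one-step method $\widetilde{x}_{n+1} = \widetilde{x}_n$ (i.e. $a_0 = 1$, $b_0 = 0$) has $\psi(u) = u - 1$ with the single simple root $u = 1$, so it is zero-stable, but it is not consistent. Its output is trivially bounded for every $h$, yet it does not converge:

```python
import math

# Zero-stable but inconsistent one-step method: x_{n+1} = x_n
# (a_0 = 1, b_0 = 0, so psi(u) = u - 1 satisfies the root condition).
# Apply it to xdot = x, x(0) = 1, integrated to t_end = 1 with h = 1/n.
def degenerate_method(n):
    h, x = 1.0 / n, 1.0
    for _ in range(n):
        x = 1.0 * x + h * 0.0  # a_0 * x + h * b_0 * f(x)
    return x

for n in (10, 100, 1000):
    print(degenerate_method(n))  # always exactly 1.0: bounded, independent of h
print(math.exp(1.0))             # but the exact value is x(1) = e ~ 2.718
```

Of course this only shows that zero-stability without consistency does not *force* unboundedness in one example; the question is whether boundedness holds for every zero-stable method.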