Let $(A_j)_{j\in\mathbb N}$ be a sequence of events. Think of them as a temporal sequence of coin flips. We will sequentially check whether each $A_j$ occurs. For each $j$, let $E_j$ denote whichever of $A_j$ or $A_j^c$ actually occurs: if $A_1$ occurs, we set $E_1=A_1$; if $A_2$ doesn't occur, we set $E_2=A_2^c$; and so on. But in general $\mathbb P(A_2\mid A_1)\neq \mathbb P(A_2\mid A_1^c)$, and similarly the probability of $A_n$ can change when conditioned on different histories of steps $1$ through $n-1$.
Suppose we can find an $\epsilon>0$ such that $\mathbb P(A_n\mid E_1\cap E_2\cap\cdots\cap E_{n-1})>\epsilon$ for every $n$ and every possible realization of $E_1,\ldots,E_{n-1}$, i.e. every sequence mixing $A_j$'s and $A_j^c$'s; for example, $\{A_1, A_2^c, A_3^c, A_4,\ldots, A_{n-1}^c\}$ could be one possibility for the first $n-1$ of the $E_j$.
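To state the assumption more compactly (this filtration phrasing is my own reformulation of the condition above, not something asserted so far): let $\mathcal F_{n-1}=\sigma(\mathbf 1_{A_1},\ldots,\mathbf 1_{A_{n-1}})$ be the $\sigma$-algebra generated by the first $n-1$ indicators. Then the condition reads

```latex
\mathbb P\bigl(A_n \mid \mathcal F_{n-1}\bigr) \ge \epsilon
\quad\text{a.s. for every } n,
\qquad\text{and hence}\qquad
\sum_{n=1}^{\infty} \mathbb P\bigl(A_n \mid \mathcal F_{n-1}\bigr) = \infty
\quad\text{a.s.}
```

The almost-sure divergence of that conditional series is, if I understand correctly, exactly the hypothesis of the conditional (Lévy) form of the second Borel–Cantelli lemma.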
Question: Then do we have that $\mathbb P(A_n \text{ i.o.})=1$?
Here is some attempted reasoning. We have $\mathbb P(A_1)>\epsilon$, and, whether or not $A_1$ occurs, $\mathbb P(A_2\mid E_1)>\epsilon$. Iterating, the probability that none of $A_1,\ldots,A_n$ occurs is at most $(1-\epsilon)^n$, so with probability one we eventually reach some $n_1$ such that $A_{n_1}$ does actually occur. Then we start over from $n_1+1$ and, with probability one, find an $n_2>n_1$ such that $A_{n_2}$ occurs, and so on. Thus, with probability one, we can always find another $n_j$ such that $A_{n_j}$ actually occurs. Hence infinitely many of the $A_n$ must occur, with probability one.
More precisely, the event $\{A_n^c \text{ eventually}\}$ that the $A_n$ stop occurring is $\bigcup_{m\ge 1}\bigcap_{n\ge m} A_n^c$, and for each fixed $m$ the chain rule gives $\mathbb P\bigl(\bigcap_{n=m}^{m+k} A_n^c\bigr)\le (1-\epsilon)^{k+1}\to 0$ as $k\to\infty$, so each term of the union has probability zero. This is the rigorous version of the heuristic bound "$(1-\epsilon)^\infty=0$".
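As a numerical sanity check (certainly not a proof), here is a small Python simulation sketch. The particular history-dependent rule `cond_prob` below is made up purely for illustration; the only property that matters is that it never drops below the uniform lower bound `EPS`:

```python
import random

random.seed(0)  # reproducibility

EPS = 0.1      # uniform lower bound on the conditional probabilities
N = 100_000    # number of steps to simulate

def cond_prob(history):
    """A made-up history-dependent P(A_n | E_1, ..., E_{n-1}).

    The exact form is arbitrary: the probability shrinks toward EPS
    after recent successes and rises after a run of failures. The
    only property that matters is cond_prob(...) >= EPS always."""
    recent = sum(history[-5:])          # successes among the last 5 steps
    p = EPS + 0.8 * (0.5 ** recent)
    return min(p, 1.0)

history = []  # 1 if A_n occurred, 0 otherwise
for _ in range(N):
    history.append(1 if random.random() < cond_prob(history) else 0)

occurrences = sum(history)
last = max(i for i, x in enumerate(history) if x == 1)  # last success index
print(occurrences, last)
```

On a typical run the events never stop occurring: a sizeable fraction of the first $N$ steps are successes and the last success sits near the end of the horizon, consistent with (though of course not proving) $\mathbb P(A_n\text{ i.o.})=1$.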
Am I making a logical error here? Surely there must be a better way to formalize this idea, or better notation for it. Is there an established Borel-Cantelli type result that says this? It seems like this version from Durrett's textbook says exactly what I want (there are several questions about this version of Borel-Cantelli here on MathSE).