It seems both examples fit into the following setting. One starts from a (deterministic) dynamic system defined by $a_0\in A$ and $a_{t+1}=u(a_t)$ for every nonnegative integer $t$, for a given function $u:A\to A$, and one considers the $X$-valued process $(x_t)_{t\geqslant0}$ defined by $x_t=\xi(a_t)$ for every nonnegative integer $t$, for a given function $\xi:A\to X$.
For every fixed $a_0$, $(x_t)_{t\geqslant0}$ is deterministic, hence $(x_t)_{t\geqslant0}$ is a (quite degenerate) inhomogeneous Markov chain whose transition at time $t$ is the kernel $Q_t$ defined by $Q_t(x,y)=P(x_{t+1}=y\mid x_t=x)$: this is $\delta_y(\xi(a_{t+1}))$ if $x=\xi(a_t)$ and undefined for every $x\ne\xi(a_t)$.
One way to get a truly random process $(x_t)_{t\geqslant0}$ in this setting is to draw $a_0$ at random. But then there is every reason to expect that $(x_t)_{t\geqslant0}$ will not be a Markov chain; in fact, the construction above is a classical way to encode random processes with a complex dependence structure.
One example which might help to get a feeling of what is happening is the case when $A=\mathbb R$, $u(a)=a+\frac15$ for every $a\in A$, $a_0$ uniformly distributed on $(0,1)$, and $\xi:\mathbb R\to\mathbb N$ the integer part function. Then $x_{t+1}\in\{x_t,x_t+1\}$ with full probability but $(x_t)_{t\geqslant0}$ is not Markov. However, $x_{t+1}$ is a deterministic function of $(x_{t},x_{t-1},x_{t-2},x_{t-3},x_{t-4})$ (indeed $a_{t+1}=a_{t-4}+1$, hence $x_{t+1}=x_{t-4}+1$), so $(x_t)_{t\geqslant0}$ is a (degenerate) fifth-order Markov process.
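One can check the fifth-order relation numerically. Below is a minimal sketch (exact rational arithmetic via `fractions` avoids floating-point drift near integer boundaries; the odd numerator guarantees no $a_t$ ever lands exactly on an integer):

```python
from fractions import Fraction
import random

random.seed(0)
fifth_order_ok = True
for _ in range(20):
    # a0 "uniform" on (0,1): an odd numerator over 10**9 keeps the orbit exact
    a = Fraction(2 * random.randrange(0, 5 * 10**8) + 1, 10**9)
    xs = []
    for t in range(30):
        xs.append(int(a))          # x_t = integer part of a_t (a_t > 0)
        a += Fraction(1, 5)        # a_{t+1} = u(a_t) = a_t + 1/5
    # since a_{t+1} = a_{t-4} + 1, also x_{t+1} = x_{t-4} + 1
    fifth_order_ok &= all(xs[t + 1] == xs[t - 4] + 1 for t in range(4, 29))
```

Every sampled trajectory satisfies the relation, confirming the fifth-order Markov property (and one can likewise tabulate conditional frequencies to see the first-order property fail).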
Edit: A feature preventing $(x_t)_{t\geqslant0}$ from being Markov in the example above is the existence of points $a$ and $a'\ne a$ in $A$ visited by the process $(a_t)_{t\geqslant0}$ such that $\xi(a)=\xi(a')$ but $\xi(u(a))\ne\xi(u(a'))$. This is reminiscent of the condition for a hidden Markov chain to be a Markov chain, which reads as follows.
Assume that $(a_t)_{t\geqslant0}$ is a Markov chain with transition kernel $q$ and let $(x_t)_{t\geqslant0}$ denote the process defined by $x_t=\xi(a_t)$ for every nonnegative $t$. Then $(x_t)_{t\geqslant0}$ is a Markov chain for every starting distribution of $a_0$ if and only if the sum
$$
q_\xi(a,y)=\sum\limits_{b\in A}q(a,b)\cdot[\xi(b)=y]
$$
depends on $a$ only through $x=\xi(a)$, that is, if and only if $q_\xi(a,y)=Q(x,y)$ for a given function $Q$. When this condition, called lumpability, holds, the Markov chain $(a_t)_{t\geqslant0}$ is said to be lumpable (by the function $\xi:A\to X$) and $Q$ is the transition kernel of the Markov chain $(x_t)_{t\geqslant0}$.
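For a finite state space, the lumpability condition can be tested directly. The sketch below (the helper `lump` and its dictionary-of-dictionaries encoding of $q$ and $\xi$ are my own conventions, not standard API) computes $q_\xi(a,y)$ for each $a$ and checks that it depends on $a$ only through $\xi(a)$:

```python
def lump(q, xi):
    """Return (True, Q) if the kernel q is lumpable by xi, else (False, None).

    q maps each state a to a dict of transition probabilities q[a][b];
    xi maps each state a to its image xi[a].
    """
    Q = {}
    for a, row in q.items():
        x = xi[a]
        # q_xi(a, y) = sum of q(a, b) over the states b with xi(b) = y
        qxi = {}
        for b, p in row.items():
            qxi[xi[b]] = qxi.get(xi[b], 0.0) + p
        if x in Q:
            keys = set(Q[x]) | set(qxi)
            if any(abs(Q[x].get(y, 0.0) - qxi.get(y, 0.0)) > 1e-12 for y in keys):
                return False, None
        else:
            Q[x] = qxi
    return True, Q

xi = {0: "A", 1: "B", 2: "B"}
# states 1 and 2 send the same total mass to each lump: lumpable
q_good = {0: {1: 0.5, 2: 0.5}, 1: {0: 0.3, 2: 0.7}, 2: {0: 0.3, 1: 0.7}}
# state 2 sends a different mass to lump "A" than state 1 does: not lumpable
q_bad = {0: {1: 0.5, 2: 0.5}, 1: {0: 0.3, 2: 0.7}, 2: {0: 0.6, 1: 0.4}}
ok_good, Q_good = lump(q_good, xi)
ok_bad, _ = lump(q_bad, xi)
```

In the lumpable case the returned `Q_good` is exactly the transition kernel of the lumped chain $(x_t)_{t\geqslant0}$.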
The question whether $(x_t)_{t\geqslant0}$ is a Markov chain for a given starting distribution of $a_0$ is more involved, but a condition for this to hold is stated here.
Second edit: Here is an example in continuous state space showing that the initial distribution is important.
Let $A=\mathbb R/\mathbb Z$ denote the unit circle, $u:A\to A$ defined by $u(a)=2a$, $X=\{0,1\}$, $\xi:A\to X$ defined by $\xi(a)=[a\in A_1]$ where $A_1=(\mathbb Z+[\frac12,1))/\mathbb Z$, and $a_{t+1}=u(a_t)$ and $x_t=\xi(a_t)$ for every nonnegative $t$. Then, if the distribution of $a_0$ is uniform on $A$, the process $(x_t)_{t\geqslant0}$ is a Markov chain since it is in fact i.i.d. with $x_t$ uniform on $X$ for every $t$.
This is adapted from the example based on the logistic map presented in these notes by Cosma Shalizi, which goes as follows.
Let $B=[0,1]$, $v:B\to B$ defined by $v(b)=4b(1-b)$, $\eta:B\to X$ defined by $\eta(b)=[b\in B_1]$ where $B_1=[\frac12,1]$, and $b_{t+1}=v(b_t)$ and $y_t=\eta(b_t)$ for every nonnegative $t$. Then, if the distribution of $b_0$ is the arcsine distribution, with density $1/(\pi\sqrt{b(1-b)})$ on $B$, the process $(y_t)_{t\geqslant0}$ is a Markov chain since it is in fact i.i.d. with $y_t$ uniform on $X$ for every $t$. Shalizi notes that $(y_t)_{t\geqslant0}$ is a Markov chain with respect to its own filtration, since the distribution of $y_{t+1}$ conditionally on $(y_s)_{s\leqslant t}$ and its distribution conditionally on $y_t$ coincide (both are the uniform distribution on $X$). On the other hand, the distribution of $y_{t+1}$ conditionally on $(b_s)_{s\leqslant t}$ is a Dirac measure, at $1$ or at $0$, since $y_{t+1}$ is a deterministic function of $b_t$. More precisely, this conditional distribution is the Dirac measure at $\eta(v(b_t))$.
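A quick Monte Carlo sanity check of the i.i.d. fair-bit claim for the logistic map (a sketch, not part of the argument; arcsine samples are produced as $\sigma(a_0)=\sin^2(\pi a_0)$ with $a_0$ uniform):

```python
import math
import random

random.seed(2)
N = 200_000
c0 = c1 = c01 = 0
for _ in range(N):
    # arcsine-distributed b0, sampled as sin^2(pi a0) with a0 uniform on (0,1)
    b = math.sin(math.pi * random.random()) ** 2
    y0 = 1 if b >= 0.5 else 0      # eta(b) = [b in [1/2, 1]]
    b = 4 * b * (1 - b)            # v(b) = 4 b (1 - b)
    y1 = 1 if b >= 0.5 else 0
    c0 += y0
    c1 += y1
    c01 += y0 * y1
```

The empirical frequencies of $y_0=1$ and $y_1=1$ are both close to $\frac12$, and the joint frequency of $y_0=y_1=1$ is close to $\frac14$, consistent with independent fair bits.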
Finally, the examples based on $u$ and $v$ are conjugate since $v\circ \sigma=\sigma\circ u$ with $\sigma:A\to B$ defined by $\sigma(a)=\sin^2(\pi a)$. Note that, if $a_0$ is uniform on $A$, then $\sigma(a_0)$ follows the arcsine distribution on $B$.
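The conjugacy identity $v\circ\sigma=\sigma\circ u$ amounts to the double-angle formula $4\sin^2(\pi a)\cos^2(\pi a)=\sin^2(2\pi a)$, which can be confirmed numerically:

```python
import math
import random

random.seed(3)

def sigma(a):          # sigma(a) = sin^2(pi a)
    return math.sin(math.pi * a) ** 2

def u(a):              # doubling map on the circle R/Z
    return (2 * a) % 1.0

def v(b):              # logistic map
    return 4 * b * (1 - b)

# v(sigma(a)) and sigma(u(a)) agree up to floating-point rounding
max_gap = max(abs(v(sigma(a)) - sigma(u(a)))
              for a in (random.random() for _ in range(1000)))
```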