I would like to ask for clarification of the comments on this question:
Proof that every open subset of $\mathbb{R}^n$ is uncountable
As far as I can tell, the comment implies that for $\textbf{x}\in\mathbb{R}^{n}$ the mapping $f(t)=\textbf{x}+t(1,0,0,\dots,0)$, defined for all $|t|<\epsilon$, is a bijection, and hence that any open set $U\subseteq\mathbb{R}^{n}$ satisfies $|U|=\aleph$ (the cardinality of the continuum). However, I could not make this work for me. Instead, letting $\textbf{e}_{1}=(1,0,0,\dots,0),\dots,\textbf{e}_{n}=(0,0,0,\dots,1)$ be the standard basis vectors for $\mathbb{R}^{n}$, I believe the mapping $f(\textbf{t})=\textbf{x}+\sum_{i=1}^{n}t_{i}\textbf{e}_{i}$ is in fact required. Where am I going wrong? Below is my "proof" of my claim.
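To make the comparison explicit, here is my reading of the two candidate maps (the names are mine, just for this question): the comment's map uses a single real parameter, whereas the map I believe is needed uses one parameter per coordinate,
\begin{align*} f_{\text{comment}}(t)&=\textbf{x}+t(1,0,0,\dots,0), & |t|&<\epsilon,\\ f_{\text{mine}}(\textbf{t})&=\textbf{x}+\sum_{i=1}^{n}t_{i}\textbf{e}_{i}, & \textbf{t}&=(t_{1},\dots,t_{n}). \end{align*}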
Finally, can the basis vector in the referenced question, or (if I am somehow correct) the basis vectors in my version, be replaced by analogues in a general metric space to produce a very similar proof - i.e. are all nonempty open sets in a metric space uncountable?
************ Proof (I switch here to $k$ rather than $n$) ************
For any $\textbf{y}\in\mathbb{R}^{k}$ and $\epsilon>0$ let $B_{\epsilon}(\textbf{y})$ be the open ball of radius $\epsilon$ centered at $\textbf{y}$. For $i=1,2,\dots,k$ let $\textbf{e}_{i}\in\mathbb{R}^{k}$ be the vector with a $1$ in the $i^{th}$ component and zeroes elsewhere - i.e. $\textbf{e}_{i}$ is the $i^{th}$ standard basis vector for $\mathbb{R}^{k}$. Now for all $t_{i}$ satisfying $|t_{i}|<k^{-1/2}\epsilon$ we have
\begin{align*} d_{k}\left[\textbf{y},\textbf{y}+\sum_{i=1}^{k}t_{i}\textbf{e}_{i}\right]&= d_{k}\left[\sum_{i=1}^{k}y_{i}\textbf{e}_{i},\sum_{i=1}^{k}\left(y_{i}+t_{i}\right)\textbf{e}_{i}\right]\\ &=\sqrt{\sum_{i=1}^{k}\left(y_{i}-(y_{i}+t_{i})\right)^{2}}\\ &=\sqrt{\sum_{i=1}^{k}\left(-t_{i}\right)^{2}}\\ &=\sqrt{\sum_{i=1}^{k}t_{i}^{2}}\\ &<\sqrt{k\cdot k^{-1}\epsilon^{2}}\\ &=\epsilon. \end{align*}
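(As a quick sanity check of my own on the bound: with $k=2$ and $\epsilon=1$ the condition requires $|t_{1}|,|t_{2}|<2^{-1/2}\approx 0.707$, and taking, say, $t_{1}=t_{2}=0.7$ gives $\sqrt{t_{1}^{2}+t_{2}^{2}}=\sqrt{0.98}\approx 0.99<1$, so $\textbf{y}+0.7\,\textbf{e}_{1}+0.7\,\textbf{e}_{2}$ indeed lies in $B_{1}(\textbf{y})$.)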
Accordingly, for $\textbf{t}:=(t_{1},\dots,t_{k})$ satisfying $|t_{i}|<k^{-1/2}\epsilon$ for all $i=1,\dots,k$ (which implies $||\textbf{t}||_{k}=\sqrt{\sum_{i=1}^{k}t_{i}^{2}}<\sqrt{k\cdot k^{-1}\epsilon^{2}}=\epsilon$), if we define the vector $\textbf{y}_{\textbf{t}}=\textbf{y}+\sum_{j=1}^{k}(\textbf{t})_{j}\textbf{e}_{j}$, then the above equation gives $d_{k}[\textbf{y},\textbf{y}_{\textbf{t}}]<\epsilon$, which implies $\textbf{y}_{\textbf{t}}\in B_{\epsilon}(\textbf{y})$. Thus, defining the mapping $f_{\textbf{y}}:(-\epsilon,\epsilon)\longrightarrow B_{\epsilon}(\textbf{y})$ by $f_{\textbf{y}}(\textbf{t})=\textbf{y}_{\textbf{t}}$, we have $f_{\textbf{y}}((-\epsilon,\epsilon))\subseteq B_{\epsilon}(\textbf{y})$. Conversely, choose an $\textbf{x}\in B_{\epsilon}(\textbf{y})$, so that $d_{k}[\textbf{x},\textbf{y}]=:d<\epsilon$, and write $\textbf{x}$ and $\textbf{y}$ in terms of the basis vectors, $\textbf{x}=\sum_{i=1}^{k}x_{i}\textbf{e}_{i}$ and $\textbf{y}=\sum_{i=1}^{k}y_{i}\textbf{e}_{i}$, so that $d_{k}^{2}[\textbf{x},\textbf{y}]$ can be written as
\begin{align*} d_{k}^{2}[\textbf{x},\textbf{y}]&=\sum_{i=1}^{k}\left(x_{i}-y_{i}\right)^{2}(\textbf{e}_{i})_{i}^{2}\\ &=\sum_{i=1}^{k}\left(x_{i}-y_{i}\right)^{2}\\ &=d^{2}\\ &<\epsilon^{2}. \end{align*}
Defining $t_{i}(\textbf{x},\textbf{y}):=x_{i}-y_{i}$, and in turn $\textbf{t}(\textbf{x},\textbf{y}):=\left(t_{1}(\textbf{x},\textbf{y}),\dots,t_{k}(\textbf{x},\textbf{y})\right)=(x_{1}-y_{1},\dots,x_{k}-y_{k})$, the above equation gives $||\textbf{t}(\textbf{x},\textbf{y})||_{k}=d_{k}[\textbf{x},\textbf{y}]=d<\epsilon$. Thus we have
\begin{align*} \textbf{x}&=\textbf{y}+\sum_{i=1}^{k}\left(x_{i}-y_{i}\right)\textbf{e}_{i}\\ &=\textbf{y}+\sum_{i=1}^{k}t_{i}(\textbf{x},\textbf{y})\textbf{e}_{i}\\ &=\textbf{y}_{\textbf{t}(\textbf{x},\textbf{y})}, \end{align*}
where $||\textbf{t}(\textbf{x},\textbf{y})||_{k}<\epsilon$, which leads to the conclusion $\textbf{x}\in f_{\textbf{y}}((-\epsilon,\epsilon))$. Accordingly $B_{\epsilon}(\textbf{y})\subseteq f_{\textbf{y}}((-\epsilon,\epsilon))$, and so $f_{\textbf{y}}((-\epsilon,\epsilon))=B_{\epsilon}(\textbf{y})$; thus $f_{\textbf{y}}:(-\epsilon,\epsilon)\longrightarrow B_{\epsilon}(\textbf{y})$ is onto. Furthermore, for all $\textbf{s},\textbf{t}\in\mathbb{R}^{k}$ we have $(\textbf{y})_{i}+(\textbf{s})_{i}=(\textbf{y})_{i}+(\textbf{t})_{i}$ if and only if $(\textbf{s})_{i}=(\textbf{t})_{i}$, for each $i=1,\dots,k$; hence $\textbf{y}+\sum_{j=1}^{k}(\textbf{s})_{j}\textbf{e}_{j}=\textbf{y}+\sum_{j=1}^{k}(\textbf{t})_{j}\textbf{e}_{j}$ if and only if $\textbf{s}=\textbf{t}$, so $f_{\textbf{y}}:(-\epsilon,\epsilon)\longrightarrow B_{\epsilon}(\textbf{y})$ is also injective, and hence a bijection. Since $f_{\textbf{y}}$ is a bijection, $(-\epsilon,\epsilon)$ is equipotent to $B_{\epsilon}(\textbf{y})$, i.e. $|(-\epsilon,\epsilon)|=|B_{\epsilon}(\textbf{y})|$. Now $|(-\epsilon,\epsilon)|=\aleph$ is well known [and I do not prove it here], so by definition of equality of set cardinalities we conclude $|B_{\epsilon}(\textbf{y})|=\aleph$.
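(For completeness, the remaining step I have in mind to get from this to the claim about open sets, which I take as standard: every nonempty open $U\subseteq\mathbb{R}^{k}$ contains some ball $B_{\epsilon}(\textbf{y})$, so $\aleph=|B_{\epsilon}(\textbf{y})|\le|U|\le|\mathbb{R}^{k}|=\aleph$, and hence $|U|=\aleph$, i.e. $U$ is uncountable.)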