Let $G$ denote the group of bijective maps $g : \mathbb{Z}\to \mathbb{Z}$ such that $g$ fixes all but finitely many integers. Show that there does not exist a field $F$ and an $n\ge 1$ so that $G$ is isomorphic to a subgroup of (embeds in) $GL_n(F),$ the set of invertible $n\times n$ matrices with entries in $F$.
I know that for any field $F$, every finite group embeds into $GL_n(F)$ for some $n\ge 1.$ I think a proof by contradiction might be useful here. So suppose there exist an $n\ge 1$ and a field $F$ such that $\phi : G\to S$ is an isomorphism, where $S$ is a subgroup of $GL_n(F),$ denoted $S\leq GL_n(F).$ There are various properties of isomorphisms that may be useful; for instance, the order of $g\in G$ equals the order of $\phi(g)$ in $S$, $S$ is abelian if and only if $G$ is abelian, etc. I'm not sure how to use the properties of $G$ and $\phi$ to obtain a contradiction, though.
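(As a sanity check of the fact about finite groups — this is my own example, not part of the problem: by Cayley's theorem a finite group $H$ of order $m$ embeds into $S_m$, and permutation matrices embed $S_m$ into $GL_m(F)$ via
$$S_m\hookrightarrow GL_m(F),\qquad \pi\mapsto P_\pi,\qquad (P_\pi)_{ij}=\begin{cases}1, & i=\pi(j),\\ 0, & \text{otherwise},\end{cases}$$
so $H$ embeds into $GL_m(F)$. Of course this only handles finite groups, whereas the $G$ in the problem is infinite.)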
Edit: I was wondering if @JyrkiLahtonen could elaborate on his answer? I think I mostly understand it, but I don't get a few details. Below is my understanding of his answer.
$G_2$ is abelian because the permutations $\sigma_i$ and $\sigma_j$ commute for any $i$ and $j$, and every element of $G_2$ has the form $\sigma_{i_1}^{a_1}\cdots \sigma_{i_m}^{a_m}$ for some indices $i_1,\dots,i_m\ge 1$ and exponents $a_1,\dots,a_m\in\mathbb{Z}$ (so $\sigma_{p}^{b}\sigma_{q}^{d} = \sigma_{q}^{d}\sigma_{p}^{b}$ for any $b,d \in\mathbb{Z}$ and any $p,q\ge 1$). It is an infinite direct sum of cyclic groups of order two because for any $i$ the cyclic group $\langle \sigma_i \rangle = \{e, \sigma_i\}$ has trivial intersection with the subgroup $\langle \{ \langle \sigma_j\rangle : j\neq i\}\rangle$, every subgroup $\langle \sigma_i\rangle$ is normal in $G_2$ since $G_2$ is abelian, and $G_2 = \langle \{ \langle \sigma_j\rangle : j\ge 1\} \rangle = \langle e,\sigma_1,\sigma_2,\cdots \rangle = \langle \sigma_1,\sigma_2,\cdots \rangle.$
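(My own way of phrasing the direct-sum claim, assuming only the properties of the $\sigma_i$ listed above, namely that they are commuting elements of order $2$ that generate $G_2$ and satisfy the trivial-intersection condition: the map
$$\bigoplus_{i\ge 1}\mathbb{Z}/2\mathbb{Z}\;\longrightarrow\;G_2,\qquad (a_1,a_2,\dots)\;\longmapsto\;\sigma_1^{a_1}\sigma_2^{a_2}\cdots$$
is well defined because only finitely many $a_i$ are nonzero, is a homomorphism because the $\sigma_i$ commute and have order $2$, is surjective by the generation statement, and is injective by the trivial-intersection condition, so $G_2\cong\bigoplus_{i\ge 1}\mathbb{Z}/2\mathbb{Z}$.)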
The vectors $\frac{1}2(x+\phi(\sigma_i)(x))$ and $\frac{1}2(x-\phi(\sigma_i)(x))$ are eigenvectors of $\phi(\sigma_i)$ (when nonzero) with corresponding eigenvalues $+1$ and $-1$, and their sum is $x$. Since $x$ was arbitrary, this shows that every vector of $V$ is a linear combination of eigenvectors of $\phi(\sigma_i),$ so the set of eigenvectors of $\phi(\sigma_i)$ spans $V$. Hence we can extract a basis of $V$ consisting of eigenvectors of $\phi(\sigma_i)$ (e.g. start with an eigenvector $v_1$; if $\mathrm{span}\{v_1\}\neq V$, pick an eigenvector $v_2 \notin \mathrm{span} \{v_1\}$, and continue until we get a basis, which happens after finitely many steps since $\dim V = n$). Since $V$ has an ordered basis consisting of eigenvectors of $\phi(\sigma_i)$, the matrix $\phi(\sigma_i)$ is diagonalizable.
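(Spelling out the eigenvalue check, under what I take to be the standing assumption $\operatorname{char} F\neq 2$, so that dividing by $2$ makes sense: since $\sigma_i^2=e$ we have $\phi(\sigma_i)^2=\mathrm{id}_V$, hence
$$\phi(\sigma_i)\Big(\tfrac12\big(x+\phi(\sigma_i)(x)\big)\Big)=\tfrac12\big(\phi(\sigma_i)(x)+x\big),\qquad \phi(\sigma_i)\Big(\tfrac12\big(x-\phi(\sigma_i)(x)\big)\Big)=-\tfrac12\big(x-\phi(\sigma_i)(x)\big),$$
and
$$x=\tfrac12\big(x+\phi(\sigma_i)(x)\big)+\tfrac12\big(x-\phi(\sigma_i)(x)\big).$$
If one of the two vectors happens to be $0$, it simply doesn't contribute to the sum.)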
$N$ is the number of matrices (so the induction is on $N$). The above shows that the space $V$ has a basis consisting of eigenvectors of $\phi(\sigma_i)$ for each fixed $i$. Every element of $V$ can be written as the sum of an element of $V_+$ and an element of $V_-$, where $V_{\pm}$ denotes the eigenspace of $\phi(\sigma_{k+1})$ for the eigenvalue $\pm 1$. Also, these eigenspaces intersect trivially, because if $0\neq w\in V_+\cap V_-,$ then $w$ is an eigenvector of $\phi(\sigma_{k+1})$ with both eigenvalue $1$ and eigenvalue $-1$, which isn't possible. So $V= V_+\oplus V_-.$
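(Explicitly, this is just the $i=k+1$ case of the decomposition above:
$$x=\underbrace{\tfrac12\big(x+\phi(\sigma_{k+1})(x)\big)}_{\in V_+}+\underbrace{\tfrac12\big(x-\phi(\sigma_{k+1})(x)\big)}_{\in V_-}\quad\text{for all }x\in V,\qquad V_+\cap V_-=\{0\},$$
since $w\in V_+\cap V_-$ forces $w=\phi(\sigma_{k+1})(w)=-w$, i.e. $2w=0$, so $w=0$ when $\operatorname{char}F\neq 2$.)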
The transformations $\phi(\sigma_j)$, $1\leq j\leq k$, commute with $\phi(\sigma_{k+1})$ because $\phi(\sigma_j)\phi(\sigma_{k+1}) = \phi(\sigma_j \sigma_{k+1}) = \phi(\sigma_{k+1}\sigma_j) = \phi(\sigma_{k+1})\phi(\sigma_j).$ Fix $1\leq j\leq k$, and let $v_1$ and $v_2$ be eigenvectors of $\phi(\sigma_{k+1})$ with eigenvalues $1$ and $-1$, respectively. Then $\phi(\sigma_{k+1})(\phi(\sigma_j)(v_1)) = \phi(\sigma_j)(\phi(\sigma_{k+1})(v_1)) = \phi(\sigma_j)(v_1),$ and $\phi(\sigma_j)(v_1)\neq 0$ since $\phi(\sigma_j)$ is invertible, so $\phi(\sigma_j)(v_1)$ is an eigenvector of $\phi(\sigma_{k+1})$ with eigenvalue $1.$ Hence $\phi(\sigma_j)(V_+) \subseteq V_+.$ Similarly $\phi(\sigma_j)^{-1}(V_+)\subseteq V_+$, so $V_+ = \phi(\sigma_j) (V_+).$ Likewise $\phi(\sigma_j)(V_-) = V_-.$
So does the induction hypothesis apply for $N=k$?
An eigenvector $v$ is a shared eigenvector of all $\phi(\sigma_i)$ if it's an eigenvector of every $\phi(\sigma_i),$ right?
What does "they must all respect the decomposition (*)" mean precisely?
Hence if $v$ is a shared eigenvector of all the $\phi(\sigma_j)$'s lying in the basis for $V_-$, with $\phi(\sigma_j)(v) = -v$ for each $1\leq j\leq k$, then $\phi(\sigma_{k+1})(v) = \phi(\sigma_{k+1})(-\phi(\sigma_j)(v)) = -\phi(\sigma_j)(\phi(\sigma_{k+1})(v))$, so $\phi(\sigma_{k+1})(v)$ is an eigenvector of $\phi(\sigma_j)$ with eigenvalue $-1$. But is $v$ itself an eigenvector of $\phi(\sigma_{k+1})$ with eigenvalue $-1$?
How does the claim follow from the fact that $GL_n(F)$ contains only $2^n$ diagonal matrices with entries $\pm 1$?
The statement is clearly true, but each $\phi(\sigma_j)$ is diagonalizable on its own, so for each $j$ there is an invertible matrix $P_j$ (possibly depending on $j$) such that $P_j^{-1} \phi(\sigma_j) P_j$ is diagonal.
Why can we adjoin a primitive third root of unity $\omega$ to the field $F$? Does this just mean we replace $F$ with $F\cup \{\omega\}$?