This is a nice idea, but it would require a lot of work to make it a proper proof. Even in the simplest case $n = 2$ one has to consider a number of special cases and sometimes to simultaneously "move" more than one entry. See Izaak van Dongen's comment: It does not make sense to look for a path from the matrix $\begin{pmatrix}0 & 1 \\ 1 & 1 \end{pmatrix}$ to the matrix $\begin{pmatrix}1 & 1 \\ 1 & 1 \end{pmatrix}$.
Therefore I doubt that your approach, if done properly, will give a transparent proof.
However, your idea works nicely for any upper triangular matrix $X \in GL(n,\mathbb C)$. The determinant of $X$ is the product of its diagonal entries $x_{jj}$, which must therefore all be non-zero. For $j \ne k$ let $\alpha_{jk}$ be the linear path in $\mathbb C$ from $x_{jk}$ to $0$ (i.e. $\alpha_{jk}(t) = (1-t)x_{jk}$). For $j = k$ we can find paths $\alpha_{jj}$ in $\mathbb C \setminus \{0\}$ from $x_{jj}$ to $1$ (if $x_{jj} \notin (-\infty, 0)$, simply take the linear path; otherwise take $\alpha_{jj}(t) = (1-2t)x_{jj} + 2ti$ for $t \le 1/2$ and $\alpha_{jj}(t) = (2-2t)i + 2t - 1$ for $t \ge 1/2$). Then
$$\alpha(t) = (\alpha_{jk}(t))$$
is a path in $GL(n,\mathbb C)$ connecting $X$ and the unit matrix $I$. In fact all $\alpha(t)$ are upper triangular matrices with non-zero diagonal entries, thus $\det \alpha(t) \ne 0$ for all $t$.
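As a small numerical sanity check (not part of the proof), here is a numpy sketch of this path; the function name `alpha`, the sample matrix `X` and the tolerance are arbitrary choices for illustration.

```python
# Numerical sanity check (not part of the proof): build the entrywise path
# alpha(t) from an upper triangular X in GL(n,C) to the identity matrix and
# verify that its determinant never vanishes along the way.
import numpy as np

def alpha(X, t):
    """Evaluate the path at time t in [0, 1] for upper triangular X."""
    n = X.shape[0]
    A = np.zeros((n, n), dtype=complex)
    for j in range(n):
        for k in range(n):
            if j != k:
                A[j, k] = (1 - t) * X[j, k]          # off-diagonal entries move straight to 0
            else:
                x = X[j, j]
                if x.real >= 0 or x.imag != 0:       # not on the negative real axis
                    A[j, j] = (1 - t) * x + t        # straight line to 1
                elif t <= 0.5:                       # negative real entry: detour through i
                    A[j, j] = (1 - 2 * t) * x + 2 * t * 1j
                else:
                    A[j, j] = (2 - 2 * t) * 1j + 2 * t - 1
    return A

X = np.array([[2.0, 3.0, 1.0],
              [0.0, -1.0, 4.0],      # the negative diagonal entry forces the detour
              [0.0, 0.0, 1j]])

for t in np.linspace(0, 1, 101):
    assert abs(np.linalg.det(alpha(X, t))) > 1e-12   # path stays in GL(3,C)
assert np.allclose(alpha(X, 0), X) and np.allclose(alpha(X, 1), np.eye(3))
```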
Thus it remains to show that each $A \in GL(n,\mathbb C)$ admits a path in $GL(n,\mathbb C)$ which starts at $A$ and ends at some upper triangular matrix $X \in GL(n,\mathbb C)$.
To do this, I suggest modifying the approach given in
Path from an invertible matrix $A$ to a diagonal matrix $E=(e_{ij})$ with $e_{11}=\mbox{sgn}(\det (A))$ for $GL(n,\mathbb R)$.
Let $r_1,\dots,r_n$ denote the rows of $A \in GL(n,\mathbb C)$. We introduce the following transformation:
(1) For $i \ne j$ and $m \in\mathbb{C}$ replace $r_i$ by $r_i + m\cdot r_j$, written as $A \mapsto A(i;j,m)$.
Three applications of (1) give us the transformation
(2) For $i \ne j$ simultaneously replace $r_i$ by $r_j$ and $r_j$ by $-r_i$ (which exchanges two rows up to a sign):
$$ A \mapsto A' = A(i;j,1) \mapsto A'' = A'(j;i,-1) \mapsto A''' = A''(i;j,1) .$$
To see what $A'''$ is, note that all three transformations only modify rows $i$ and $j$. We have
$$(r_i, r_j) \mapsto (r_i + r_j, r_j) \mapsto (r_i + r_j, -r_i) \mapsto (r_j, -r_i) .$$
Since the determinant is multilinear and alternating as a function of the $n$ rows of a matrix (adding $m$ times row $j$ to row $i$ contributes $m$ times a determinant with two equal rows, which vanishes), we have
$$\det A(i;j,m) = \det A \text{ for all } m \in \mathbb C .$$
Thus transformation (1) does not change the determinant, and consequently neither does transformation (2).
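Here is a short numerical illustration (again not part of the argument) of both claims: three applications of (1) exchange two rows up to a sign, and the determinant does not change. The helper `row_add` is just an ad-hoc name for transformation (1).

```python
# Quick numerical check: three applications of (1) swap two rows up to a sign
# and leave the determinant unchanged.
import numpy as np

def row_add(A, i, j, m):
    """Transformation (1): return A(i;j,m), i.e. row i replaced by row i + m*row j."""
    B = A.copy()
    B[i] = B[i] + m * B[j]
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
i, j = 1, 3

B = row_add(row_add(row_add(A, i, j, 1), j, i, -1), i, j, 1)   # transformation (2)

expected = A.copy()
expected[i], expected[j] = A[j], -A[i]          # row i <- r_j, row j <- -r_i
assert np.allclose(B, expected)
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
```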
Moreover, transformation (1) can be realized by a path in $GL(n,\mathbb{C})$. Simply take
$$u(t) = (1-t)A + tA(i;j,m) = A(i;j,tm) .$$
Then $u(0) = A$, $u(1) = A(i;j,m)$ and $\det(u(t)) = \det(A) \ne 0$ for all $t$.
Concatenating three such paths shows that the same is true for transformation (2).
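A small sketch of this path, using the identification $u(t) = A(i;j,tm)$ derived above; the particular matrix and the values of $i$, $j$, $m$ are arbitrary choices for illustration.

```python
# Illustration of the path u(t) = (1-t)A + tA(i;j,m) = A(i;j,tm): its
# determinant is constant in t, so the whole path stays inside GL(n,C).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
i, j, m = 0, 2, 2.0 - 1.5j

def u(t):
    B = A.copy()
    B[i] = B[i] + t * m * B[j]          # this is A(i;j,t*m)
    return B

d = np.linalg.det(A)
for t in np.linspace(0, 1, 50):
    assert np.isclose(np.linalg.det(u(t)), d)   # determinant constant along the path
```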
It is well known that Gaussian elimination, using a sequence of transformations (1) and (2), transforms each matrix $A$ into a matrix $E$ in row echelon form. Concatenating the corresponding paths realizes this sequence of transformations by a path in $GL(n,\mathbb{C})$.
Since all square matrices in row echelon form are upper triangular matrices, we are done.
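For completeness, here is one possible (purely illustrative) numpy sketch of such an elimination using only transformations (1) and (2); the pivoting rule and the helper names `row_add`, `signed_swap`, `to_upper_triangular` are my own choices, not part of the argument.

```python
# Illustrative sketch: Gaussian elimination with transformations (1) and (2)
# only, reducing an invertible A to an upper triangular matrix. Every step
# corresponds to a path in GL(n,C), as explained above.
import numpy as np

def row_add(A, i, j, m):              # transformation (1), in place
    A[i] = A[i] + m * A[j]

def signed_swap(A, i, j):             # transformation (2), built from three shears
    row_add(A, i, j, 1)
    row_add(A, j, i, -1)
    row_add(A, i, j, 1)

def to_upper_triangular(A):
    """Return an upper triangular matrix obtained from A using (1) and (2) only."""
    A = A.astype(complex)
    n = A.shape[0]
    for k in range(n):
        if np.isclose(A[k, k], 0):                       # pivot missing:
            p = k + int(np.argmax(np.abs(A[k:, k])))     # find a nonzero entry below
            signed_swap(A, k, p)                         # and move it to the diagonal
        for r in range(k + 1, n):                        # clear the entries below the pivot
            row_add(A, r, k, -A[r, k] / A[k, k])
    return A

A = np.array([[0, 1, 2],
              [1, 1, 1j],
              [2, 0, 1]], dtype=complex)                 # A[0,0] = 0, so a signed swap is needed
T = to_upper_triangular(A)
assert np.allclose(np.tril(T, -1), 0)                    # T is upper triangular
assert np.isclose(np.linalg.det(T), np.linalg.det(A))    # determinant unchanged
```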