Note that if $k = 1$ then your linear functional is $\varphi \colon \mathbb{R}^n \rightarrow \mathbb{R}$ given by $\varphi(x^1, \ldots, x^n) = x^1 + \ldots + x^n$, and it doesn't give the one-dimensional signed area (i.e., the length) of $(x^1, \ldots, x^n)$. In general, you can't expect to describe the "signed length" of a vector $x \in \mathbb{R}^n$ by a linear functional $\varphi \colon \mathbb{R}^n \rightarrow \mathbb{R}$, since any such functional has a kernel of dimension $\geq n - 1$ and so vanishes on many nonzero vectors.
However, there is a construction that generalizes the (absolute value of the) determinant in some sense and results in the non-signed $k$-area of the $k$-dimensional parallelepiped generated by the vectors $v_1, \ldots, v_k$. Let $V$ be a finite dimensional vector space and endow $V$ with an inner product $\left< \cdot, \cdot \right>$ so that you can talk about lengths of vectors in $V$. The inner product $\left< \cdot, \cdot \right>$ extends naturally to an inner product on $\Lambda^k(V)$ defined on simple $k$-vectors by
$$ \left< v_1 \wedge \dots \wedge v_k, w_1 \wedge \dots \wedge w_k \right> := \det \left( \left< v_i, w_j \right> \right)_{i,j=1}^k $$
and extended bilinearly. The matrix $G(v_1, \dots, v_k) = ( \left< v_i, v_j \right>)_{i,j=1}^k$ is called the Gram matrix of $(v_1, \dots, v_k)$, and the norm $||v_1 \wedge \dots \wedge v_k|| = \sqrt{\det G(v_1, \dots, v_k)}$ gives the (unsigned) $k$-area of the $k$-dimensional parallelepiped generated by the vectors $v_1, \dots, v_k$. It is zero if and only if the vectors $v_1, \dots, v_k$ are linearly dependent (in which case $v_1 \wedge \dots \wedge v_k = 0$).
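As a quick numerical sanity check of the Gram-determinant formula (with two arbitrarily chosen vectors, and using that in $\mathbb{R}^3$ the area of the parallelogram spanned by $v_1, v_2$ also equals $||v_1 \times v_2||$):

```python
import numpy as np

# Area of the parallelogram spanned by two (arbitrarily chosen)
# vectors in R^3, via the Gram determinant: ||v1 ∧ v2|| = sqrt(det G).
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 3.0])

G = np.array([[v1 @ v1, v1 @ v2],
              [v2 @ v1, v2 @ v2]])   # Gram matrix of (v1, v2)
area = np.sqrt(np.linalg.det(G))

# In R^3 the same area is the norm of the cross product.
assert np.isclose(area, np.linalg.norm(np.cross(v1, v2)))
```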
If $V = \mathbb{R}^n$ with the standard inner product and we treat the vectors $v_1, \ldots, v_k$ as the columns of a matrix $A \in M_{n \times k}(\mathbb{R})$, then $G(v_1, \ldots, v_k) = A^T A$ and $||v_1 \wedge \dots \wedge v_k||^2 = \det(A^T A)$. In particular, if $k = n$ then $||v_1 \wedge \dots \wedge v_n||^2 = \det(A^T A) = \det(A)^2$, so $||v_1 \wedge \dots \wedge v_n|| = |\det(A)|$.
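A small sketch verifying both identities with arbitrarily chosen matrices (a $3 \times 2$ one for the rectangular case, a $2 \times 2$ one for $k = n$):

```python
import numpy as np

# Rectangular case: squared 2-area of the parallelogram spanned by
# the columns of A is det(A^T A).
A = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])        # columns v1, v2 in R^3, k = 2
vol2 = np.linalg.det(A.T @ A)     # ||v1 ∧ v2||^2

# Square case k = n: det(A^T A) = det(A)^2, so the n-volume is |det A|.
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.isclose(np.linalg.det(B.T @ B), np.linalg.det(B) ** 2)
```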
A construction of a different nature that generalizes the determinant (including signs) is the $k$-th exterior power of a linear map $T \colon V \rightarrow W$, which yields a linear map $\Lambda^k(T) \colon \Lambda^k(V) \rightarrow \Lambda^k(W)$. If $V = W$ and $k = n = \dim V$, then $\Lambda^n(V)$ is one-dimensional, so $\Lambda^n(T)$ acts on it as multiplication by a single scalar, called the determinant of $T$. If you apply this to a non-square matrix (interpreted as a linear map) $A \in M_{l \times n}(\mathbb{R})$, then the components of $\Lambda^k(A)$ with respect to the bases induced on $\Lambda^k(\mathbb{R}^n)$ and $\Lambda^k(\mathbb{R}^l)$ by the standard bases are the $k \times k$ minors of $A$. You can learn more about this from the lecture notes of Paul Garrett here.
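A minimal sketch of this last point (the helper `minors` is an illustrative name, not a standard API): the entries of $\Lambda^k(A)$ in the induced bases are the $k \times k$ minors of $A$, and the Cauchy–Binet formula ties the two constructions together, since $\det(A^T A)$ equals the sum of the squared $k \times k$ minors.

```python
import numpy as np
from itertools import combinations

def minors(A, k):
    """All k x k minors of A, indexed by (row subset, column subset)."""
    l, n = A.shape
    return {(rows, cols): np.linalg.det(A[np.ix_(rows, cols)])
            for rows in combinations(range(l), k)
            for cols in combinations(range(n), k)}

# Arbitrarily chosen 3 x 2 matrix; its Λ^2 has the 2 x 2 minors
# as components.
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])
k = A.shape[1]
m = minors(A, k)

# Cauchy–Binet: det(A^T A) = sum of squared k x k minors, matching
# the Gram-determinant formula for the squared k-area.
assert np.isclose(sum(v ** 2 for v in m.values()),
                  np.linalg.det(A.T @ A))
```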