@AlexRavsky argues in their answer that the determinant of the unique linear operator on the zero vector space is undefined, because the zero vector space has no basis.
I consider the convention that an empty sum equals $0$ (and that an empty product equals $1$) to be very natural. So, for me, it is a consequence of the definition of a basis that the zero vector space over any field has a (unique) basis, namely the empty set.
So, for me, it makes sense to talk about the determinant of the unique linear operator on the zero vector space, and
I argue below that its determinant exists and equals $1$, regardless of which definition of determinant we choose.
Definition 1. The determinant of $T \colon V \to V$ is the determinant of the $n \times n$ matrix of $T$ with respect to any ordered basis of $V$, where $n = \dim V$.
Let $\mathbb{F}$ be a field and $V$ be the zero vector space over $\mathbb{F}$, so that $V$ has dimension $0$ over $\mathbb{F}$ and the empty set is the unique basis of $V$ over $\mathbb{F}$.
Question: Is the empty set an ordered basis for $V$?
Answer: Yes. The empty relation is the unique relation on $\emptyset$, and it is vacuously a total ordering.
Question: What is the matrix of $T\colon V \to V$ with respect to the ordered basis $\emptyset$?
Answer: It is the empty matrix, the unique element of the space $\mathbb{F}^{0 \times 0}$.
Explanation: Note that $\mathbb{F}^{m \times n}$ denotes the set of all $m \times n$ matrices, and is defined as the set of all functions $\{ (i,j) : 1 \leq i \leq m, 1 \leq j \leq n\} \to \mathbb{F}$. When $m = 0 = n$, the domain is $\emptyset$, and there is a unique function $\emptyset \to \mathbb{F}$, namely the empty function. This is what we call the empty matrix.
So, the empty matrix is the unique candidate for being the matrix of $T$ with respect to the ordered basis $\emptyset$, but is it actually the matrix of $T$?
For the matrix $A \in \mathbb{F}^{n \times n}$ to be the matrix of the linear transformation $T \colon V \to V$ with respect to the ordered basis $\mathcal{B} = (v_1,\dotsc,v_n)$, we should have that, for every $1 \leq i \leq n$, the $i$th column of $A$ is the coordinate vector of $T(v_i)$ with respect to $\mathcal{B}$. When $n = 0$, this condition is vacuously true, so the empty matrix is the matrix of $T$ with respect to the ordered basis $\emptyset$.
Question: Does $\mathbb{F}^{0 \times 0}$ contain the identity matrix?
Answer: Yes, the empty matrix is the identity matrix in $\mathbb{F}^{0 \times 0}$.
Explanation: The identity matrix $I_n \in \mathbb{F}^{n \times n}$ is defined by: for every $1 \leq i, j \leq n$, $(I_n)_{i,j} = 1$ if $i = j$ and $0$ if $i \neq j$. These conditions are vacuously satisfied when $n = 0$. So, every element of $\mathbb{F}^{0 \times 0}$ is the identity matrix, or said less dramatically, the unique element of $\mathbb{F}^{0 \times 0}$ is the identity matrix.
Note that the unique element of $\mathbb{F}^{0 \times 0}$ is also the zero matrix for the same reason. In fact, it is a scalar matrix for any choice of scalar from $\mathbb{F}$, for the same reason.
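As a sanity check, the description of $\mathbb{F}^{m \times n}$ as a set of functions can be modeled literally in code, with a matrix represented as a dict from index pairs to field elements. This is only an illustrative sketch (the names `identity` and `zero` are mine, not from any library), but it makes the $n = 0$ collapse visible:

```python
def identity(n):
    """The identity matrix I_n, as a function {(i,j) : 1 <= i,j <= n} -> F."""
    return {(i, j): 1 if i == j else 0
            for i in range(1, n + 1) for j in range(1, n + 1)}

def zero(n):
    """The zero matrix in F^{n x n}, in the same representation."""
    return {(i, j): 0 for i in range(1, n + 1) for j in range(1, n + 1)}

# When n = 0 the index set is empty, so both comprehensions produce the
# empty function {} -- the unique element of F^{0 x 0}.
assert identity(0) == {} == zero(0)
assert identity(2) == {(1, 1): 1, (1, 2): 0, (2, 1): 0, (2, 2): 1}
```

The point of the two assertions is exactly the remark above: for $n = 0$ the defining conditions impose no constraints, so the identity matrix and the zero matrix are literally the same object.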
Question: Is there a determinant function on $\mathbb{F}^{0 \times 0}$?
Answer: Yes, the function $\mathbb{F}^{0 \times 0} \to \mathbb{F}$ that maps the empty matrix to $1$ is a determinant function.
Explanation: For a function $f \colon \mathbb{F}^{n \times n} \to \mathbb{F}$ to be a determinant function, it should be a multilinear, alternating function of the rows (or columns, it makes no difference), such that the identity matrix $I_n$ is mapped to $1$.
We have shown that the empty matrix is also the identity matrix. There is a unique function $\mathbb{F}^{0 \times 0} \to \mathbb{F}$ that maps the empty matrix to $1$, since $\mathbb{F}^{0 \times 0}$ is a singleton set.
We will show that this function is also a multilinear and alternating function of the rows, so it is the unique determinant function on $\mathbb{F}^{0 \times 0}$.
A function $f \colon \mathbb{F}^{n \times n} \to \mathbb{F}$ is a multilinear function of the rows if, for every $1 \leq i \leq n$ and every choice of vectors $v_1,\dotsc,v_{i-1},v_{i+1},\dotsc,v_n \in \mathbb{F}^{n}$, the function $g \colon \mathbb{F}^n \to \mathbb{F}$ defined by $g(v) = f(v_1,\dotsc,v_{i-1},v,v_{i+1},\dotsc,v_n)$ for all $v \in \mathbb{F}^n$ is linear. When $n = 0$, this condition is vacuously true. So, every function $\mathbb{F}^{0 \times 0} \to \mathbb{F}$ is a multilinear function of the rows.
A function $f \colon \mathbb{F}^{n \times n} \to \mathbb{F}$ is an alternating function of the rows if, for every $1 \leq i < j \leq n$, whenever $v_1,\dotsc,v_n$ are vectors in $\mathbb{F}^n$ with $v_i = v_j$, we have $f(v_1,\dotsc,v_n) = 0$. Again, when $n = 0$, this condition is vacuously true. So, every function $\mathbb{F}^{0 \times 0} \to \mathbb{F}$ is an alternating function of the rows.
Hence, the function that maps the empty matrix to $1$ is a determinant function.
Hence, $\det(T)$ exists and equals $1$.
Next, let us see what happens with the basis-free definition of the determinant of a linear operator.
Definition 2. Let $n = \dim V$. The determinant of $T\colon V \to V$ is the unique scalar $c \in \mathbb{F}$ such that the induced linear operator $\Lambda^n(T) \colon \Lambda^n(V) \to \Lambda^n(V)$ satisfies $\Lambda^n(T)(v_1 \wedge \dotsb \wedge v_n) = c \cdot (v_1 \wedge \dotsb \wedge v_n)$ for all $v_1, \dotsc, v_n \in V$.
Again, let $\mathbb{F}$ be a field and $V$ be the zero vector space over $\mathbb{F}$, so that $V$ has dimension $0$ over $\mathbb{F}$ and the empty set is the unique basis of $V$ over $\mathbb{F}$.
Question: What is the $0$th exterior power of $V$, $\Lambda^0(V)$?
Answer: $\Lambda^0(V) = \mathbb{F}$.
Explanation: The $k$th exterior power of $V$, $\Lambda^k(V)$, is the vector subspace of the exterior algebra $\Lambda(V)$ generated by the set of all vectors in $\Lambda(V)$ of the form $v_1 \wedge \dotsb \wedge v_k$, where $v_1,\dotsc,v_k \in V$.
Hence, $\Lambda^0(V)$ is the vector subspace spanned by the empty wedge product. By convention, the empty wedge product is equal to $1$ in $\Lambda(V)$, which is just the scalar $1 \in \mathbb{F}$ by the canonical embedding of $\mathbb{F}$ into $\Lambda(V)$. The vector subspace of $\Lambda(V)$ generated by $1$ is just $\mathbb{F}$. Hence, $\Lambda^0(V) = \mathbb{F}$. Note that this is actually true regardless of whether $V$ is the zero vector space or not.
Question: What linear operator does $T$ induce on $\Lambda^0(V)$?
Answer: If $f \colon V \to V$ is a linear operator on the $n$-dimensional vector space $V$ over $\mathbb{F}$, then $f$ induces the operator $\Lambda^n(f) \colon \Lambda^n(V) \to \Lambda^n(V)$ defined by $\Lambda^n(f)(v_1 \wedge \dotsb \wedge v_n) = f(v_1) \wedge \dotsb \wedge f(v_n)$, for all $v_1,\dotsc,v_n \in V$.
When $n = 0$, the induced operator $\Lambda^0(T)$ is determined by the single relation $\Lambda^0(T)(1) = 1$, since every $n$-fold wedge product appearing in the definition of $\Lambda^n(T)$ becomes the empty wedge product, which equals $1$. By linearity, $\Lambda^0(T)$ is the identity map on $\Lambda^0(V) = \mathbb{F}$.
Question: Does the determinant of $T \colon V \to V$ exist?
Answer: Yes!
Explanation: When $n = 0$, we have $\Lambda^0(T)(1) = 1 = 1 \cdot 1$. In case that's completely opaque, try the following:
$\Lambda^0(T)(\wedge_{i=1}^0 v_i) = 1 \cdot \wedge_{i=1}^0 v_i$ for all $v_i \in V$ with $1 \leq i \leq 0$.
Hence, $\det(T)$ exists and equals $1$.
One can also see what happens when trying to extend the formula for the determinant of an $n \times n$ matrix over $\mathbb{F}$ to the case $n = 0$.
Definition 3. The determinant of the $n \times n$ matrix $A = (a_{ij})$ over $\mathbb{F}$ is given by $$\det(A) = \sum_{\sigma \in \mathfrak{S}_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i\sigma(i)}.$$
Question: What is $\mathfrak{S}_0$?
Answer: $\mathfrak{S}_0$ is the trivial group.
Explanation: When $n = 0$, $\mathfrak{S}_0$ is the set of bijections $\emptyset \to \emptyset$. Since there is only one such map, namely the empty map, there is only one permutation in $\mathfrak{S}_0$.
Question: What is the sign of the empty permutation?
Answer: $1$.
Explanation: There are several ways to define the sign of a permutation, but in any definition it must be a group homomorphism from $\mathfrak{S}_n$ to $\{ \pm 1 \} = \mathbb{Z}^\times$. Since the empty permutation is the identity element of $\mathfrak{S}_0$, and every group homomorphism maps the identity to the identity, the sign of the empty permutation must be $1$.
Question: What is the determinant of the empty matrix?
Answer: $1$.
Explanation: Let $n = 0$ and $A$ be the empty matrix. Then, in the definition of $\det(A)$ above, the $n$-fold product inside the summation is the empty product, which equals $1$ by convention. So, we have $\det(A) = \sum_{\sigma \in \mathfrak{S}_0} \operatorname{sgn}(\sigma) \cdot 1 = 1 \cdot 1 = 1$.
Hence, the determinant of the unique $0 \times 0$ matrix exists and equals $1$.
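The Leibniz formula of Definition 3 translates directly into code, and notably the $n = 0$ case needs no special handling: the sum over $\mathfrak{S}_0$ has exactly one term, whose product part is empty. The function names below (`sign`, `leibniz_det`) are mine; this is a sketch, not a library implementation:

```python
from itertools import permutations

def sign(sigma):
    """Sign of a permutation given as a tuple (sigma(0), ..., sigma(n-1)),
    computed by counting inversions. The empty tuple has 0 inversions, so
    the empty permutation gets sign +1."""
    n = len(sigma)
    inversions = sum(1 for i in range(n)
                       for j in range(i + 1, n) if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    """Leibniz determinant of a square matrix given as a list of rows.
    For the 0 x 0 matrix A = [], the sum has one term: sgn of the empty
    permutation times the empty product, i.e. 1 * 1 = 1."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

assert list(permutations(range(0))) == [()]  # S_0 = {empty permutation}
assert leibniz_det([]) == 1
assert leibniz_det([[1, 2], [3, 4]]) == -2   # ad - bc
```

Note that `leibniz_det([])` returns $1$ not by a hard-coded base case but because the general formula, evaluated literally, produces it.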
This part was also pointed out in the comments by @Crostul.
There is yet another formula that defines the determinant of an $n \times n$ matrix over $\mathbb{F}$ recursively. We can try to extend this to $n = 0$, too.
Definition 4. Define $\det((a)) = a$ for every matrix $(a) \in \mathbb{F}^{1 \times 1}$. Let $n > 1$, let $A \in \mathbb{F}^{n \times n}$, and fix a column index $1 \leq j \leq n$. Denote by $A[i|j]$ the $(n-1) \times (n-1)$ matrix obtained by deleting the $i$th row and $j$th column of $A$. Then, $$\det(A) = \sum_{i=1}^n (-1)^{i+j} A_{ij} \det(A[i|j]).$$
How would we extend the inductive step to $n \geq 1$? We start by noting that if $n = 1$, and $A \in \mathbb{F}^{1 \times 1}$, then there is only one possible value of $i$ and $j$, namely $1$. Moreover, $A[1|1]$ is the empty matrix for every $A \in \mathbb{F}^{1 \times 1}$.
Let $A = (a) \in \mathbb{F}^{1 \times 1}$ and let $E \in \mathbb{F}^{0 \times 0}$ be the empty matrix. If the formula were to be true when $n = 1$ as well, then it would read $$\det(A) = \sum_{i=1}^1 (-1)^{1+1} A_{11} \det(A[1|1]), \quad \text{i.e.} \quad a = a \det(E).$$ Since we want this to be true for all $a \in \mathbb{F}$, and $\mathbb{F}$ has at least one nonzero element, namely $1$, we see that we are forced to define $\det(E) = 1$.
Thus, the only extension of Definition 4 that works for all $n \geq 1$ is with the initialization $\det(E) = 1$, where $E$ is the empty matrix.
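One way to see the forced initialization concretely is a recursive implementation of cofactor expansion along the first column, with $\det(E) = 1$ as the base case. The helper names (`minor`, `laplace_det`) and the 0-indexing are my own conventions for this sketch:

```python
def minor(A, i, j):
    """A[i|j]: delete row i and column j (0-indexed) from the list-of-rows A."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def laplace_det(A):
    """Cofactor expansion along the first column. The base case
    det(empty matrix) = 1 is exactly the initialization forced by
    requiring the n = 1 step det((a)) = a * det(E) to give a."""
    n = len(A)
    if n == 0:
        return 1
    return sum((-1) ** i * A[i][0] * laplace_det(minor(A, i, 0))
               for i in range(n))

assert laplace_det([[5]]) == 5             # recovers det((a)) = a
assert laplace_det([[1, 2], [3, 4]]) == -2
```

With any other value for the base case, the $1 \times 1$ assertion would fail, which is the point of the argument above.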
To the best of my knowledge, standard linear algebra textbooks do not concern themselves with the case $n = 0$ when discussing determinants.
I checked Linear Algebra by Hoffman and Kunze, and they start by defining determinants of $n \times n$ matrices where $n$ is consistently assumed to be a positive integer (i.e. $> 0$). So, they don't deal with determinants of $0 \times 0$ matrices.
They do follow it up with a more general section describing determinants of linear operators on free modules over commutative rings with identity, of rank $n$. There they prove a result (on page 172) equivalent to our Definition 2. But, they implicitly only consider the case $n > 0$ again, since they make no remarks at all about the case $n = 0$.
(It might be worth noting that Hoffman and Kunze do state that the zero vector space has dimension $0$ and has the empty set as a basis (on page 45), so they do not entirely avoid discussing the $n = 0$ case everywhere.)
I also checked Algebra by Serge Lang; he follows a similar line of exposition and likewise makes no remark about the case $n = 0$. But, throughout the book, Lang is not concerned with such matters, so this is not particularly surprising. The same is true of Basic Algebra, Volume I, by Nathan Jacobson.