Determinants are of great theoretical significance in mathematics, since in general "the determinant of something $= 0$" means something very special is going on, which may be either good news or bad news depending on the situation.
On the other hand, determinants have very little practical use in numerical calculations, since evaluating a determinant of order $n$ "from first principles" (by cofactor expansion) involves on the order of $n!$ operations, which is prohibitively expensive unless $n$ is very small. Even Cramer's rule, which is often taught in an introductory course on determinants and matrices, is not the cheapest way to solve $n$ linear equations in $n$ variables numerically if $n>2$, which is a pretty serious limitation!
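To see how quickly the factorial cost gets out of hand, here is a sketch in Python: a naive cofactor-expansion determinant (the "from first principles" method), followed by a comparison of $n!$ against the roughly $n^3/3$ operations that Gaussian elimination needs. The function name `det_cofactor` is just for illustration.

```python
import math

def det_cofactor(m):
    """Determinant by cofactor expansion along the first row -- O(n!) work."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j, then recurse.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[2.0, 1.0], [1.0, 3.0]]))  # 5.0 = 2*3 - 1*1

# Number of terms in the cofactor expansion (n!) versus the rough
# operation count of Gaussian elimination (n^3/3):
for n in (5, 10, 20):
    print(n, math.factorial(n), n**3 // 3)
```

Already at $n = 20$ the expansion has about $2.4 \times 10^{18}$ terms, while elimination needs only a few thousand operations.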
Also, if the typical magnitude of each term in a matrix of order $n$ is $a$, the determinant is likely to be of magnitude $a^n$, and for large $n$ (say $n > 1000$) that number will usually be too large or too small to represent in standard floating-point arithmetic, unless $|a|$ is very close to $1$.
On the other hand, almost every type of numerical calculation involves the same techniques that are used to solve equations, so the practical applications of matrices are more or less "the whole of applied mathematics, science, and engineering". Most applications involve systems of equations that are much too big to create and solve by hand, so it is hard to give realistic simple examples. In real-world numerical applications, a set of $n$ linear equations in $n$ variables would still be "small" from a practical point of view if $n = 100,000,$ and even $n = 1,000,000$ is not usually big enough to cause any real problems - the solution would only take a few seconds on a typical personal computer.
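The equation-solving technique behind those fast solutions is Gaussian elimination; here is a small pure-Python sketch with partial pivoting, illustrating the $\sim n^3/3$ method that optimized library routines (for example, LAPACK's `dgesv`) implement at scale. The function name `solve` and the tiny $2 \times 2$ example are for illustration only; real applications call a tuned library rather than hand-written loops.

```python
def solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    a = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix [a | b]
    for k in range(n):
        # Pivot: swap in the row with the largest entry in column k,
        # which keeps the arithmetic numerically stable.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        # Eliminate column k from all rows below the pivot row.
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n + 1):
                a[i][j] -= f * a[k][j]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

# 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3:
print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```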