Languages are a good way of discussing yes-or-no questions about inputs of finite bit-length. There are plenty of alternatives to languages in complexity theory! They often have some 'moral equivalence' to languages.
Two important cases are function problems and approximation problems.
Function problems give you some bitstring $s$ and ask you to compute some output $f(s)$. Now instead of the set $\mathsf{P}$ of polynomial-time languages, you talk about the set $\mathsf{FP}$ of functions that a Turing machine can compute in polynomial time. There is also $\mathsf{FNP}$, problems where you can compute the answer if you manage to guess the right 'proof'; $\mathsf{FBPP}$, where you can compute the right answer with probability at least 2/3; and so on.
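To make the decision/function distinction concrete, here is a toy sketch (my own illustrative example, not from any standard library) using SUBSET-SUM: the language version asks *whether* some subset hits a target, while the function version asks you to *output* such a subset.

```python
from itertools import combinations

def subset_sum_decision(nums, target):
    """Language (decision) version: just answer yes/no.
    Brute force, exponential time -- only for illustration."""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

def subset_sum_function(nums, target):
    """Function version: output a witness subset, or None if none exists."""
    for r in range(len(nums) + 1):
        for c in combinations(nums, r):
            if sum(c) == target:
                return list(c)
    return None

print(subset_sum_decision([3, 34, 4, 12, 5, 2], 9))  # True
print(subset_sum_function([3, 34, 4, 12, 5, 2], 9))  # a subset summing to 9
```

Note that the function version is at least as hard as the decision version: given the witness, checking it is easy, which is exactly the flavor of $\mathsf{FNP}$.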
Approximation problems are sort of a refinement of function problems, where now the output $f(s)$ is interpreted as a real number (in some standard encoding) and you're asked to give not necessarily the exact number, but a number somewhere close to $f(s)$. The input consists of the problem string $s$ together with a precision parameter $\epsilon$. Typically you talk about multiplicative approximation, where you need to answer within a factor of $1\pm \epsilon$, or additive approximation, where you need to answer within an additive error of $\pm \epsilon$. Some important classes here are $\mathsf{FPTAS}$, see here, which includes problems that you can solve in time $O(n^3 / \epsilon^3)$, for instance; $\mathsf{APX}$, see here, where you can solve in polynomial time as long as $\epsilon > c$ for some constant $c$; and others.
Alright, with that out of the way!
"Solving the Navier-Stokes equations", as a problem, I would generally write down as an approximation problem as follows. The input string gives some finite-size description of the initial state. This would probably be a grid size $n$, where I specify initial pressure and velocity values at each point on the grid. Then we have some agreed-upon encoding to turn this into a continuous function: for instance, third-order polynomial interpolation. This defines an initial state. There is similar data for boundary conditions. Then you give me a time $t$, a point $(x,y,z)$, a query variable that is one of the thre symbols $\{VX,VY,VZ\}$, and a precision $\epsilon$. The problem is then to output the relevant variable (the velocity $VX$, or $VY$, or $VZ$) after evolving the initial conditions for $t$ time, and to output with an additive error $\epsilon$.
The difficulty of the problem would be quantified in terms of different quantities like $n$ (the complexity of the boundary conditions), the precision $\epsilon$, and the time $t$. Difficulty could also be in terms of the Lipschitz constant $K$ of the boundary conditions, or the magnitude $M$ of the initial conditions (if I make all the initial velocities extremely fast, I could make things happen very quickly and the problem might become much harder!), or the energy $E$ of the system, or the worst condition number $\kappa$ reached during solving.
I think people would consider the Navier-Stokes equations to be "polynomial time" if there were an algorithm to solve this with a runtime like $O(n^{c_1} t^{c_2} \epsilon^{-c_3} K^{c_4} M^{c_5})$, where the $c_i$ are all constants. Equivalently, if it ran in $\mathrm{poly}(ntKM/\epsilon)$ time.
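As a hedged illustration of what an algorithm with this kind of runtime bound looks like, here is an explicit finite-difference solver for a much simpler "well-behaved" PDE, the 1D heat equation $u_t = u_{xx}$. Navier-Stokes is far harder; this only shows the *shape* of a $\mathrm{poly}(n, t, 1/\epsilon)$ runtime, where refining the grid and time step buys accuracy at polynomial cost.

```python
def heat_equation_1d(u0, t, dx):
    """Evolve initial values u0 on a periodic 1D grid to time t.

    Explicit Euler with dt at the stability limit (dt = dx^2 / 2), so the
    number of steps is O(t / dx^2), and the total work is O(n * t / dx^2):
    polynomial in the grid size n, the time t, and the resolution 1/dx.
    """
    n = len(u0)
    dt = dx * dx / 2.0                # stability limit for explicit Euler
    steps = max(1, int(t / dt))
    u = list(u0)
    for _ in range(steps):
        # Discrete Laplacian update; with dt = dx^2/2 this averages neighbors.
        u = [u[i] + dt / (dx * dx) *
             (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
             for i in range(n)]
    return u

# Diffuse a unit spike; total "mass" is conserved by the periodic scheme.
u = heat_equation_1d([1.0 if i == 8 else 0.0 for i in range(16)], 0.1, 1.0 / 16)
print(f"total mass: {sum(u):.6f}")  # stays at 1.0 (up to rounding)
```

Getting a rigorous additive-$\epsilon$ guarantee would also need a discretization-error bound relating $dx$ to $\epsilon$, which for well-behaved equations is again polynomial.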
The differential equations that people generally regard as "well-behaved" all have algorithms of this form. There are also some differential equations that compute (say) the answer to an $\mathsf{NP}$-hard problem in a small amount of time -- in the sense that, if you had a polynomial-time algorithm to solve them (in the sense I just gave), it would imply $\mathsf{P}=\mathsf{NP}$.