113

This is a sort of soft-question to which I can't find any satisfactory answer. At heart, I feel I have some need for a robust and well-motivated formalism in mathematics, and my work in geometry requires me to learn some analysis, and so I am confronted with the task of understanding weak solutions to PDEs. I have no problems understanding the formal definitions, and I don't need any clarification as to how they work or why they produce generalized solutions. What I don't understand is why I should "believe" in these guys, other than that they are a convenience.

Another way of trying to attack the issue, I feel, is that I don't see any reason to invent weak solutions, other than a sort of (and I'm dreadfully sorry if this is offensive to any analysts) mathematical laziness. So what if classical solutions don't exist? My tongue-in-cheek instinct is just to say that that is the price one has to pay for working with bad objects! In other words, I do not find the justification of "well, it makes it possible to find solutions" a very convincing one.

A justification I might accept, is if there was a good mathematical reason for us to a priori expect there to be solutions, and for some reason, they could not be found in classical function spaces like $C^k(\Omega)$, and so we had to look at various enlargements in order to find solutions. If this is the case, what is the heuristic argument that tells me whether or not I should expect a PDE (subject to whatever conditions you want in order to make your argument clear) to have solutions, and what function space(s) are appropriate to look at to actually find these solutions?

Another justification that I would accept is if there was some good analytic reason to discard the classical notion of differentiability altogether. Perhaps the correct thing to do is to just think of weak derivatives as simply the 'correct' notion of differentiability in the first place. My instinct is to say that maybe weak solutions are a sort of 'almost-everywhere' type generalization of differentiability, similar to the Lebesgue integral being a replacement for the Riemann integral, which is more adept at dealing with phenomena only occurring in sets of measure $0$.

Or maybe both of these hunches are just completely wrong. I am basically brand new to these ideas and am wrestling with my skepticism about them. So can somebody make me a believer?

Worth noting is that there is already a question on this site here, but the answer in this link is essentially that there exist a bunch of nice theorems if you do this, or that physically we don't care very much about what happens pointwise, only in terms of integrals over small regions. It should be clear why I don't like the first reason, and the second reason I may accept if it could be turned into something that looks like my proposed justification #2 - if integrals over small regions of derivatives are the 'right' mathematical formalism for PDEs. I just don't understand how to make that leap. In other words, I would like a reason to find weak solutions interesting for their own sake.

A. Thomas Yerger
  • 17,862
  • 1
    I'm not really accustomed to PDE's, but I've read a bit about distribution theory in my nonstandard-analysis book. In it, one first models the linear functional you want using a fitting nonstandard function, and then (if some conditions are met) can find the differentiation of the linear functional simply by using the "normal" nonstandard-differentiation on the representing nonstandard-function – Sudix Aug 06 '19 at 05:19
  • 7
    In general, "it makes it possible to have solutions" is always an insufficient explanation in mathematics, and never what the speaker really means to say. Clearly more needs to be said, if there are no restrictions at all one what a "generalized solution" is, you may as well prove RH by saying "define a 'generalized proof' to mean this drawing of a puppy I did on a napkin...". The problem is that very often the reasons why your "generalized solution" concept is sensible are hard to verbalize and only understood subconciously, through lots of experience working with the concept... – Jack M Aug 06 '19 at 09:37
  • 7
    Is the issue appreciating weak solutions for their own sake? I think a nice comparison to make is the theorem that a real symmetric matrix (or linear operator, if you work on an abstract finite-dimensional real inner product space) has a basis of eigenvectors. The key point is that all eigenvalues are real, and the proof first passes to $\mathbf C$ to get all eigenvalues and then uses symmetry of the matrix to prove $\overline{\lambda}=\lambda$ for every eigenvalue $\lambda$, so all eigenvalues are real. Should someone "not believe" in $\mathbf C$ when it helps in this way? – KCd Aug 06 '19 at 17:51
  • 1
    @KCd, I think in that setting, there are plenty of other reasons to believe in the complex numbers first, and in that sense we are simply utilizing them in this case. – A. Thomas Yerger Aug 06 '19 at 18:01
  • 2
    You didn't answer the question I had asked: does "believe in" mean finding interest in something for its own sake? – KCd Aug 06 '19 at 18:20
  • @KCd, Sure, that seems reasonable. – A. Thomas Yerger Aug 06 '19 at 18:23
  • Maybe adding a comment along those lines would make it clearer to people reading your question what the intent is (and is not). – KCd Aug 06 '19 at 18:27
  • Check out the eikonal equation from wikipedia. It has a simple physical or mathematical interpretation from which it is obvious that the solution is not always smooth. – Elcyc Aug 07 '19 at 02:07
  • Aren't you worried about your immortal soul? How dare you doubt the word of the lord? You must believe. Repent! The end is nigh... :-) – einpoklum Aug 08 '19 at 14:52
  • I keep my copy of the Bible (Evans PDEs) by my night stand and repeat the proof of the Sobolev Embedding Theorem before bed each night, but memorizing the Lord's Prayers doesn't seem to help :P – A. Thomas Yerger Aug 08 '19 at 17:52
  • I've thought a lot about this in the last few days, and I'll accept an answer soon. I really appreciate everyone's contributions but I can only pick one answer. Lastly, I want to leave this link here, as it's contained some useful examples and approximation theorems I didn't know about, as well as some characterizations of Sobolev spaces, and just all kinds of odds and ends I've also really found helpful, so hopefully it'll help out some other people too! https://math.aalto.fi/~jkkinnun/files/sobolev_spaces.pdf – A. Thomas Yerger Aug 10 '19 at 05:08
  • I'm late to this game, but I used to be skeptical as well. The reason which converted me over was this. In physical life, one can never actually measure functions 'pointwise', rather, a probe (such as a thermometer) is really measuring the average values of a small ball around some center. Exact pointwise measurements correspond to multiplying a function by a dirac delta and integrating it. A better model to physical measurements is hence to multiply by an approximation to the identity then integrating it. The closure of all such test functions indeed gives us C^\infty_c. – Jan Lynn Oct 29 '20 at 10:31

9 Answers

79

First, you should not believe in anything in mathematics, in particular weak solutions of PDEs. They are sometimes a useful tool, as others have pointed out, but they are often not unique. For example, one needs an additional entropy condition to obtain uniqueness of weak solutions for scalar conservation laws, like Burgers' equation. Also note that there are compactly supported weak solutions of the Euler equations, which is absurd (a fluid that starts at rest, with no force applied, does something crazy and then comes back to rest). They are a useful tool, connected to physics sometimes, but that is it.
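
To make the non-uniqueness concrete, here is a standard textbook example (my addition, not part of the original answer). For Burgers' equation $u_t + \left(\tfrac12 u^2\right)_x = 0$ with initial data $u_0(x)=0$ for $x<0$ and $u_0(x)=1$ for $x>0$, both the rarefaction wave and an expansion shock are weak solutions: $$ u(x,t) = \begin{cases} 0, & x \le 0,\\ x/t, & 0 < x < t,\\ 1, & x \ge t, \end{cases} \qquad\text{and}\qquad u(x,t) = \begin{cases} 0, & x < t/2,\\ 1, & x > t/2, \end{cases} $$ where the shock speed $1/2$ comes from the Rankine-Hugoniot condition. Both satisfy the integral (weak) form of the equation; only the entropy condition singles out the rarefaction as the meaningful solution.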

In general, it is naive to ignore applications when studying or looking for motivations for theoretical objects in PDEs. Nearly all applications of PDEs are in physical sciences, engineering, materials science, image processing, computer vision, etc. These are the motivations for studying particular types of PDEs, and without these applications, there would be almost zero mathematical interest in many of the PDEs we study. For instance, why do we spend so much time studying parabolic and elliptic equations, instead of focusing effort on bizarre fourth order equations like $u_{xxxx}^\pi = u_y^2e^{u_z}$? (hint: there are physical applications of elliptic and parabolic equations). We study an extremely small sliver of all possible PDEs, and without a mind towards applications, there is no reason to study these PDEs instead of others.

You say you do not know anything about physics; well I would encourage you to learn about some physics and connections to PDEs (e.g., heat equation or wave equation) before learning about theoretical properties of PDEs, like weak solutions.

PDEs are only models of the physical phenomena we care about. For example, consider conserved quantities. If $u(x,t)$ denotes the density (say heat content, or density of traffic along a highway) of some quantity along a line at position $x$ and time $t$, then if the quantity is truly conserved, it satisfies (trivially) a conservation law like $$\frac{d}{dt} \int_a^b u(x,t) \, dx = F(a,t) - F(b,t), \ \ \ \ \ (*)$$ where $F(x,t)$ denotes the flux of the density $u$, that is, the amount of heat/traffic/etc flowing to the right per unit time at position $x$ and time $t$. The equation simply says that the only way the amount of the substance in the interval $[a,b]$ can change is by the substance moving into the interval at $x=a$ or moving out at $x=b$.

The function $u$ need not be differentiable in order to satisfy the equation above. However, it is often more convenient to assume $u$ and $F$ are differentiable, set $b = a+h$ and send $h\to 0$ to obtain (formally) a differential equation $$\frac{\partial u}{\partial t} + \frac{\partial F}{\partial x} = 0. \ \ \ \ \ (+)$$ This is called a conservation law, and we can obtain a closed PDE by making some physical modeling assumption on the flux $F$. For instance, in heat flow, Fourier's law of heat conduction says $F=-k\frac{\partial u}{\partial x}$ (for diffusion, Fick's law is the identical statement). For traffic flow, a common flux is $F(u)=u(1-u)$, which gives a scalar conservation law.
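
To see how (+) and the underlying balance law (*) play together in practice, here is a minimal numerical sketch (my addition; the grid size, time step, and Gaussian initial profile are illustrative choices, not from the answer). It evolves the traffic flux $F(u)=u(1-u)$ with a Lax-Friedrichs finite-volume scheme, which is nothing but a discrete version of (*) applied to each grid cell; starting from perfectly smooth data the profile steepens toward a shock, which is exactly where weak solutions take over from classical ones.

```python
import numpy as np

# Lax-Friedrichs finite-volume scheme for u_t + F(u)_x = 0 with the traffic
# flux F(u) = u(1 - u) from the answer. Each step is a discrete version of the
# integral balance (*): cell averages change only through interface fluxes.
# Grid, time step and initial data are illustrative, not from the answer.

def flux(u):
    return u * (1.0 - u)

nx, T = 400, 0.5
dx = 1.0 / nx
dt = 0.4 * dx                      # CFL-safe: |F'(u)| = |1 - 2u| <= 1 on [0, 1]
x = (np.arange(nx) + 0.5) * dx
u = 0.2 + 0.6 * np.exp(-200.0 * (x - 0.3) ** 2)   # smooth initial density

t = 0.0
while t < T:
    up = np.roll(u, -1)            # right neighbour (periodic boundary)
    # Lax-Friedrichs numerical flux at the interface between cell i and i+1
    f_half = 0.5 * (flux(u) + flux(up)) - 0.5 * (dx / dt) * (up - u)
    u = u - (dt / dx) * (f_half - np.roll(f_half, 1))
    t += dt

# The density stays bounded but its profile steepens into a (numerical) shock:
# a weak solution of (+) where no classical C^1 solution survives.
print(u.min(), u.max())
```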

Whatever physical model you choose, you have to understand that (*) is the real equation you care about, and (+) is just a convenient way to write the equation. It would seem absurd to say that if one cannot find a classical solution of (+), then we should throw up our hands and admit defeat.

Most applications of PDEs, such as optimal control, differential games, fluid flow, etc., have a similar flavor. One writes down a function, like a value function in optimal control, and the function is in general just Lipschitz continuous. Then one wants to explore more properties of this function and finds that it satisfies a PDE (the Hamilton-Jacobi-Bellman equation), but since the function is not differentiable we look for a weak notion of solution (here, the viscosity solution) that makes our Lipschitz function the unique solution of the PDE. The point is that without a mind towards applications, one is shooting in the dark and will not find elegant answers to such questions.
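
A concrete one-dimensional instance of this (a standard example, added here for illustration): on $\Omega=(-1,1)$ the eikonal problem $$ |u'(x)| = 1 \ \text{ in } (-1,1), \qquad u(-1)=u(1)=0 $$ has no $C^1$ solution, since a continuous $u'$ with $|u'|=1$ would have to be constantly $+1$ or $-1$, which is incompatible with both boundary values. Yet the distance function $u(x)=1-|x|$, which is the value function of the corresponding exit-time control problem, is its unique viscosity solution. The Lipschitz function the application hands us really is the solution, once the right weak notion is in place.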

Jeff
  • 5,617
49

Reason 1. Even if you actually care only about smooth solutions, in some cases it is easier to first establish that a weak solution exists and separately show that the structure of the PDE and the geometry of the domain on which you are solving actually force it to be smooth. Existence and regularity are handled separately and using different tools.

Reason 2. There are physical phenomena which are described by discontinuous solutions of PDEs, e.g. hydrodynamical shock waves.

Reason 3. Discontinuous solutions may be used as a convenient approximation for describing macroscopic physics neglecting some details of the microscopic theory. For example, in electrodynamics one derives from the Maxwell equations that the electric field of an electric dipole behaves at large distances in a universal way, depending only on the dipole moment but not on the charge distributions. At distances comparable to the dipole size these microscopic details start to become important. If you don't care about these small distances you may work in the approximation in which the dipole is a point-like object, with charge distribution given by a derivative of the delta distribution. Even though the actual charge distribution is given by a smooth function, it is more convenient to approximate it by a very singular object. One can still make sense of the Maxwell equations, and the results obtained this way turn out to be correct (provided that you understand the limitations of the performed approximations).
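
To spell out that limit (my sketch of the standard computation, not from the answer): place charges $\pm q$ at $x=\pm d/2$ and pair the charge density $\rho_d$ with a test function $\varphi$. Then $$ \int \rho_d(x)\,\varphi(x)\,dx = q\big(\varphi(d/2)-\varphi(-d/2)\big) \;\longrightarrow\; p\,\varphi'(0) = \langle -p\,\delta',\varphi\rangle $$ as $d\to 0$ with the dipole moment $p=qd$ held fixed. So the (in reality smooth) charge distribution converges, in the sense of distributions, to $-p\,\delta'$, which is exactly what the point-dipole approximation uses.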

Reason 4. It is desirable to have "nice" spaces in which you look for solutions. In functional analysis there are many features you might want a topological vector space to have, and among these one of the most important is completeness. Suppose you start with the space of smooth functions on, say, $[0,1]$ and equip it with a certain topology. In this case it is completely natural to pass to the completion. For many choices of the topology you will find that the completed space contains objects which are too singular to be considered as bona fide functions, e.g. measures or distributions. Just to give you an example of this phenomenon: if you are interested in computing integrals of smooth functions, you are eventually going to consider gadgets such as $L^p$ norms on $C^{\infty}[0,1]$. Once you complete, you get the famous $L^p$ spaces, whose elements are merely equivalence classes of functions modulo equality almost everywhere. The space of distributions on $[0,1]$ may be constructed very similarly: instead of $L^p$ norms you consider the seminorms $p_f$ given by $p_f(g)= \left|\int_{0}^1 f(x) g(x)\, dx\right|$ for $f,g \in C^{\infty}[0,1]$. If you can justify to yourself that it is interesting to look at this family of seminorms, then distributions (and also weak solutions of PDEs) become an inevitable consequence.
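
As a concrete instance of this completion picture (my addition): take mollifiers $g_n(x)=n\,\eta\big(n(x-\tfrac12)\big)$ with $\eta\in C_c^\infty(\mathbb{R})$, $\eta\ge 0$, $\int\eta=1$. For every $f\in C^\infty[0,1]$ one has $\int_0^1 f(x)g_n(x)\,dx \to f(\tfrac12)$, so $(g_n)$ is Cauchy with respect to every seminorm $p_f$, but its limit in the completion is the evaluation functional $f\mapsto f(\tfrac12)$, i.e. the Dirac distribution $\delta_{1/2}$, which is not a function at all.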

UserA
  • 1,650
Blazej
  • 3,060
  • 2
    I am not sure that I like this because it sounds still like the "because it makes it possible to prove things" answer, which seems question-begging. The second reason might compel me if I knew things about physics, but I don't :( – A. Thomas Yerger Aug 05 '19 at 20:36
  • 35
    You have important real-life applications and you can prove strong theorems. I don't quite see how you can expect more from a tool in mathematics. – Blazej Aug 05 '19 at 20:46
  • 1
    I would like a motivation which is intrinsic to mathematics, if that is another way of describing my skepticism about this concept. Pretend I'm one of those purists who doesn't believe in applications, or even worse, some kind of solipsist who doesn't even believe in the world. Why would I be motivated to study these objects? – A. Thomas Yerger Aug 05 '19 at 20:47
  • 5
    I don't know about other researchers, but I regard distribution theory as a tool. Tool is good if it does the job it is supposed to do. I don't need a motivation to use a hammer, other than the fact that I occasionally need to drive some nails. – Blazej Aug 05 '19 at 20:50
  • 2
    Well, one way of viewing the complex numbers is to say they are just a tool, an extension of R, and they have all these benefits of nice theorems in algebra and analysis, etc. But I like to think of them geometrically: an extension of R, yes, but also an algebraic object representing rotations and dilations of the plane. With both of these viewpoints together, lots of basic stuff about complex analysis becomes crystal clear to me. I feel I have no shortage of practical reasons. I want an intrinsic reason to have as a guiding motivation. – A. Thomas Yerger Aug 05 '19 at 21:00
  • 5
    @AlfredYerger The "enables you to prove strong theorems" should already be more than enough to satisfy your "intrinsic to mathematics/solipsism" angle. That's the singular goal: proving stuff, strong stuff all the better. Proofs of "A implies B" very rarely go directly from A to B, but instead take a detour to C, D, E, etc., with a different vehicle on each journey. Much as a journey from Los Angeles to New York will pass through many different locations, and may involve cars, planes, trains, buses, etc. So lots of new things get introduced in proofs to complete the journey. – zibadawa timmy Aug 06 '19 at 06:13
  • 8
    "The second reason might compel me if I knew things about physics, but I don't " Since most of the important PDEs come from physics, this is a very compelling argument. If you don't find it compelling, you should learn more about physics rather than reject it. – eyeballfrog Aug 06 '19 at 13:53
  • 1
    Well, I'm not going to drop what I'm doing and learn physics. I said in the OP I'm interested in understanding some phenomena in geometry. I also do not care about physics. And, also as stated, I have no shortage of practical reasons. I want a good philosophical and motivating reason. Repeating these things after I say I already have them is not really adding to the conversation. – A. Thomas Yerger Aug 06 '19 at 15:26
  • 2
    Great answer. "There are physical phenomena which are described by discontinuous solutions of PDEs." In such cases, should the physicists have formulated their models as integral equations rather than as differential equations? It seems that perhaps one might think that the physicists made a modeling error in this case. – littleO Aug 07 '19 at 09:21
  • 3
    That it is hard to justify certain theories to "purists who don't believe in applications" reflects poorly not on the theories, but on the "purists". The applied and the pure have always coexisted and interbred, greatly to their mutual benefit. – Simon Aug 07 '19 at 12:12
  • 2
    I think the notion of approximation is extremely relevant to this question. Analysis is after all in many ways the theory of limits, and many areas of differential equations are only of interest in the first place because they approximate some harder-to-study situation as some parameter goes to zero (or infinity). The dipole is a good example of this, but far from the only one: the situations being approximated can themselves be described by differential equations, so this can be seen as an issue of tractability for theoreticians. user7530's answer also touches on this. – Robin Saunders Aug 08 '19 at 11:12
    Dear littleO, there are systems of equations which are better thought of as integral, rather than differential. An example is given by the inviscid hydrodynamics equations, which can be thought of as describing conservation of energy and momentum. Differential versions refer to derivatives of currents, while integral versions refer only to fluxes through surfaces. The latter make sense for functions which are not differentiable, e.g. for shock waves. In general it depends very much on the type of problem which formulation should be seen as the most fundamental one. – Blazej Aug 08 '19 at 16:58
  • 2
    Please note that I modified the post above to include a fourth reason. I think this one may be somewhat more appealing to a purist mathemathician. – Blazej Aug 10 '19 at 07:47
21

Let's have a look at the Dirichlet problem on some (say smoothly) bounded domain $\Omega$, i.e. $$ -\Delta u=f \text{ in } \Omega\\ u=0~ \text{ on } \partial \Omega $$ for $f \in \text{C}^0(\overline{\Omega})$. Then, Dirichlet's principle states that a classical solution is a minimizer of an energy functional, namely $E(u):=\dfrac{1}{2}\int_\Omega \left|\nabla u\right|^2 \mathrm{d}x-\int_\Omega f u ~\mathrm{d}x$. (Here we need, e.g., $\Omega$ bounded and $\nabla u$ square-integrable for the first integral to be finite.)
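
To connect the functional to the PDE explicitly (a standard computation, filled in here): if $u$ minimizes $E$ and $v\in C_c^\infty(\Omega)$, then $t\mapsto E(u+tv)$ has a minimum at $t=0$, so $$ 0=\left.\frac{\mathrm{d}}{\mathrm{d}t}E(u+tv)\right|_{t=0} = \int_\Omega \nabla u\cdot\nabla v~\mathrm{d}x - \int_\Omega fv~\mathrm{d}x, $$ and if $u$ happens to be $C^2$, integrating by parts gives $\int_\Omega(-\Delta u-f)v~\mathrm{d}x=0$ for all test functions $v$, hence $-\Delta u=f$ in $\Omega$.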

So the question one may ask is: if I have some PDE, why not just take the corresponding energy functional, minimize it in the right function space and obtain a solution of the PDE? So far so good. But the problem that may occur is finding this minimizer. It can be shown that such functionals are bounded from below, so we have some infimum. As also stated in the Wikipedia article, it was just assumed (e.g. by Riemann) that this infimum will always be attained, which, as Weierstrass showed, is unfortunately not always the case (see also this answer on MO).

Hence, we find differentiable functions which are "close" (in some sense) to a "solution" of the PDE, but no actual differentiable solution. I feel that this is quite unsatisfactory.

So how could we save this? We can multiply the PDE (take the Laplace equation for simplicity) by some test function and integrate by parts to obtain $$ \int_\Omega \nabla u \cdot \nabla v~\mathrm{d}x= \int_\Omega fv~\mathrm{d}x $$ for all test functions $v$. But what space should $u$ come from? What do we need to make sense of the integral?

Well, $\nabla u \in \text{L}^2(\Omega)$ would be nice, because then the first integral is well-defined via Cauchy-Schwarz. But as shown by Weierstrass, classical derivatives are not enough, so we need some weaker sense of derivative. And here we arrive at Sobolev spaces, and looking again at the last formula, we see the weak formulation.
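
Here is a sketch (my addition, under standard assumptions and compressing several steps) of why the completed space actually delivers a minimizer. Let $H_0^1(\Omega)$ be the completion of $C_c^\infty(\Omega)$ in the norm $\|v\|^2=\int_\Omega|\nabla v|^2~\mathrm{d}x$ (a genuine norm by the Poincaré inequality). Then $E(u)\ge\tfrac12\|u\|^2-C\|u\|$, so a minimizing sequence is bounded; bounded sets in this Hilbert space are weakly compact, and $E$, being convex and continuous, is weakly lower semicontinuous, so the weak limit of a minimizing subsequence is a minimizer $u\in H_0^1(\Omega)$, which then satisfies the weak formulation above. None of these compactness steps is available if we insist on staying inside $C^2(\overline\Omega)$, which is exactly the gap Weierstrass exposed.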

I am aware that this does not give a full explanation of why one should "believe" in weak solutions, Sobolev spaces and so on. What I stated above is a quick run-through of how the step from classical to weak theory was motivated in my course on PDEs, and at least I was quite happy about it.

Jonas Lenz
  • 1,389
  • 1
    This is already tempting though, I like this answer best so far. There is a proof of the Riemann mapping theorem which feels related. You look at injections from your domain to the disk with the largest norm derivative at the point you want to send to $0$. The idea is that this guy has the 'best chance' of filling out your domain. Then the completeness of the function space allows you show this guy actually exists and you can check it is surjective, blah blah blah. This kinda seems like a watered-down variational idea, and uniformization also has a variational approach. – A. Thomas Yerger Aug 05 '19 at 21:05
  • If it was the case that some very large class of PDEs had variational approaches to them, then maybe this would convince me. The 'missing link' would be this idea of energy minimization. – A. Thomas Yerger Aug 05 '19 at 21:07
    After some thinking, it is clear to me that Sobolev spaces are like a "$L^\infty$" closure of $C^k$. This helps, at least for now. I'll keep pressing on. Usually I'm OK with accepting variations on a construction, such as $L^p$ for non-integer $p$, once I am motivated by the main concept, or seeing how just some of these guys may arise. Like discussing with my analysis friends the significance of $L^{4/3}$ has just dispelled any issues I may have had with those guys. I expect that similarly I'll feel comfortable with the full machinery after just seeing more of them get used. – A. Thomas Yerger Aug 05 '19 at 21:41
  • 13
    I'd like to make two comments to this. (1) There exist problems in which variational formulation is more fundamental than the PDE itself. Indeed, optimization is a very important problem in applied mathematics. (2) Similarly, many differential equations may be transformed to integral equations. It turns out that for some equations these integral formulations have greater scope of applicability than their differential versions. Example of this is provided by systems of conservation laws in hydrodynamics. – Blazej Aug 06 '19 at 07:42
  • 4
    @AlfredYerger In a practical sense the class of PDEs with a variational approach is huge. A lot of physics is ruled by Hamilton's principle, which relates the evolution of a system to the stationary points of an integral. In other words nearly every physically meaningful PDE has an underlying variational formulation. – mlk Aug 06 '19 at 08:22
  • 1
    I accepted this answer as it is the best of those answers that don't just say "learn some physics." You have given a mathematician's take on the question, and I appreciate that. – A. Thomas Yerger Jul 06 '20 at 16:49
  • While Weierstrass shot down the Dirichlet principle, does this apply to your example of the Dirichlet problem? So do we know that the Dirichlet problem (of electrostatics) does not always have regular solutions? – lalala Nov 03 '21 at 08:27
17

People can maybe talk more generally but I have a really simple example (but helpful in my opinion):

Not all waves are differentiable. We want all waves to satisfy the wave equation (in some sense). That sense is weak.
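
A minimal verification of that statement (my sketch, not part of the original answer): take $u(x,t)=f(x-ct)$ with $f$ merely locally integrable, say a sawtooth or square profile. Being a weak solution of $u_{tt}-c^2u_{xx}=0$ means $$ \iint u\,\big(\varphi_{tt}-c^2\varphi_{xx}\big)\,dx\,dt=0 \qquad\text{for all }\varphi\in C_c^\infty(\mathbb{R}^2). $$ In the characteristic variables $\xi=x-ct$, $\eta=x+ct$ the wave operator becomes $-4c^2\,\partial_\xi\partial_\eta$ and the Jacobian is constant, so the integral above is a nonzero constant multiple of $$ \iint f(\xi)\,\partial_\xi\partial_\eta\varphi\;d\xi\,d\eta = \int f(\xi)\Big(\int \partial_\eta(\partial_\xi\varphi)\,d\eta\Big)d\xi = 0, $$ where the inner integral vanishes because $\varphi$ has compact support. So every travelling profile, however rough, is a weak solution of the wave equation.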

  • This feels like it should be a comment, because I don't think I know any waves or descriptions thereof that are not differentiable. I also don't know any reason why I should just privilege the wave equation above all else so that all waves satisfy that equation. Why is this not just a criticism of the wave equation? You see this issue... it feels circular. – A. Thomas Yerger Aug 05 '19 at 20:22
  • @AlfredYerger You can just draw a wave that's not differentiable for your first question. And I would argue that it is a criticism of the wave equation (the way it's normally understood). If waves (in naive sense) are the things that solve the wave equation (in weak sense), that's a pretty good argument for understanding wave equation in weak sense. –  Aug 05 '19 at 20:26
  • 3
    @AlfredYerger https://en.wikipedia.org/wiki/Sawtooth_wave here is a wave that's not even continuous. If this doesn't solve the wave equation, that sounds like the wave equation is pretty flawed. And it is, if we understand it in strong sense. –  Aug 05 '19 at 20:30
  • Ah yeah, I did forget things like square and saw waves exist. But why should I want them to fit into the viewpoint of the wave equation? Also, we'd need a multi-dimensional analogue for PDEs, but that's probably not hard to write down. – A. Thomas Yerger Aug 05 '19 at 20:34
  • 24
    "But why should I want them to fit into the viewpoint of the wave equation?" Because you can! Isn't that just amazing? You have this equation which forces solutions to be twice differentiable, but you can come up with a notion of solution that work for far more general functions. If this isn't mathematically satisfying, I don't know what is. – Dirk Aug 06 '19 at 07:54
  • 1
    @Dirk, I thought about this a bit. It seems to me that this is related to another answer where I thought about completeness. These waves are limits of things that do satisfy the equation, but they themselves are not even differentiable. So perhaps this is really all it boils down to. The natural spaces to look at are classical function spaces, but then there are two problems: (1) there are natural enough candidates for solutions that are not, for regularity reasons, and (2) the lack of certain point-set topological features of the space make solving other problems hard. (1/2) – A. Thomas Yerger Aug 06 '19 at 18:04
  • (2/2) Sobolev spaces take care of both of these issues at once, by suitably closing classical function spaces. Does this seem like a fair assessment of the story? – A. Thomas Yerger Aug 06 '19 at 18:05
  • 3
    Sure. Regularity (or the lack thereof) is a central issue in differential equations. – Dirk Aug 06 '19 at 18:49
  • "But why should I want [things like square and sawtooth waves] to fit into the viewpoint of the wave equation?" Because they're elegant and accurate models for some of the ways things like violin strings and accordion reeds can vibrate. https://plus.maths.org/content/why-violin-so-hard-play The wave equation does a good job of predicting which $C^2$ vibrations these things can sustain. One might hope, for the sake of parsimony, that it could do a good job for other vibrations too—and if one accepts weak solutions, it does! https://youtu.be/6JeyiM0YNo4 (cont.) – Vectornaut Oct 08 '19 at 14:35
  • (cont.) In geometry, according to Wikipedia, similar considerations motivated the definition of a manifold. https://en.wikipedia.org/wiki/Manifold#History The motion of a falling object or a pendulum can be elegantly described by a geometric structure on a plane or a cylinder. One might hope, for the sake of parsimony, that the motion of a flying cannonball, a double pendulum, or the planets and moons in a solar system could be described in the same way. If one accepts higher-dimensional versions of the plane, and then the now-standard definition of a manifold, that hope is fulfilled! – Vectornaut Oct 08 '19 at 14:36
17

Absolutely nothing in physics is completely described by a PDE, if you look at a sufficiently small resolution, because space and time are not continuous. (Since the OP has said in a comment that he doesn't know much physics, google for "Planck length" for more information.)

However almost everything in physics is described at a fundamental level by conservation laws which are most naturally expressed mathematically as integral equations not as differential equations.

Integral equations can be converted to differential equations with some loss of generality - i.e. you exclude solutions of the integral equations which are not sufficiently differentiable. But the solutions you might have excluded are interesting and useful from a physicist's point of view, so excluding them simply "because PDEs are easier to work with than integral equations" is throwing the baby out with the bathwater.

Hence, "weak solutions of PDEs" are a thing worth studying. Of course if you want to convert any interesting theorems about weak solutions back into the language of integral equations, feel free to do that - or even better, figure out a way to unify the two subjects using nonstandard analysis, or something similar! (Nonstandard analysis corresponds very well with physicists' idea of "infinitesimal quantities" which can be treated mathematically as if they are numbers even though they are not!)

alephzero
  • 1,261
  • 4
    Well put. This point is rarely made (that the more genuine descriptions are often integral equations rather than PDEs...) – paul garrett Aug 06 '19 at 18:42
  • 16
    "because space and time are not continuous". You state it as if this were a fact, but it is not. Noone knows what spacetime looks like on the scale of the Planck length. Your suggestion is just one of many possibilities that we cannot distinguish right now. – M. Winter Aug 07 '19 at 11:18
  • 3
    The site is Mathematics and OP stated that’s the realm he is interested in. Why are so many explaining why they are important in Physics? – WGroleau Aug 07 '19 at 14:45
  • @WGroleau because a (the?) major reason for studying PDE's as such is their applicability to equations that correspond to physical reality; and the necessity of getting solutions (even if weak) to specific equations that matter for some practical use case instead of solely looking into elegant solutions to equations that represent nothing of relevance. The latter is also useful, but the former is the major driver of the whole field. – Peteris Aug 08 '19 at 11:22
  • 1
    Then the answer to OP’s question is “if you’re only interested in mathematics, you don’t.” – WGroleau Aug 08 '19 at 13:55
  • Unfortunately, sentences containing the words "nothing" or "everything" are very rarely completely true. – Blazej Aug 10 '19 at 07:49
  • Surely, if you are only interested in mathematics, you can "believe" anything you choose to believe. – Philip Roe May 19 '20 at 13:55
    The first physical situation one is taught to model is dropping a ball from height $h$ under earth's gravitational field. You can use Newton's second law to get a differential equation which can be solved for velocity as a function of time. However, conservation of energy will only tell you the final velocity right before it hits the ground. What if I want to know the velocity at time $t$? I can't get that from the conservation law. Can you expand on how you reason that 'almost everything is described by a conservation law'? Cheers –  Feb 19 '21 at 05:10
9

It is a fact that not all physical problems have smooth solutions. Often this situation arises from a set of conservation laws that are expressed mathematically by applying such laws to a finite control volume to obtain an integral equation. Then we let the size of the control volume go to zero and arrive at some PDEs if the flow is smooth. But then we discover that the PDEs are unable to solve many important problems and have to rethink our strategy.

When this first occurred to me I found it a bit shocking because surely differential calculus was the natural language for describing continua? After a bit I realised that the integral calculus is more fundamental. It can be applied to functions that are more general (Anything can be integrated, but not everything can be differentiated) and it is the form in which much physical knowledge comes to us.

I suspect you felt the same surprise that I did. I thought that I wanted to solve differential equations, so why would I start integrating things? The truth is the reverse. I really want to solve integral equations, and the PDE is a powerful tool, but only if it is valid. That it often is should come as another surprise.

Philip Roe
  • 1,140
  • 1
    I agree with the spirit of this, but don't you think everything can be integrated is a bit too general? – Allawonder Aug 06 '19 at 21:54
  • It's not true literally, but it is in practice. How many people who work with differential equations, even mathematicians, find themselves working with functions which aren't (locally, almost-everywhere) integrable, unless they're specifically looking for counterexamples? Even outside differential equations, such functions are mainly of interest as counterexamples and to set theorists. – Robin Saunders Aug 08 '19 at 11:02
6

The existing answers provide good reasons towards the question in the title, but from the perspective of a geometer I feel the applications in physics aren't quite as convincing. It's true that the singular phenomena that arise in, for example, conservation laws require a suitable notion of a generalised solution, but why is it also useful for geometric problems?

One way I think of weak solutions is that they provide a candidate for a strong solution. Suppose you want to solve a particular PDE problem with suitable data and you can prove the following:

  1. A weak solution exists.
  2. Any classical solution, if it exists, is also a weak solution.
  3. The weak solution is suitably unique.

Then from the above you can infer that if a classical solution exists, it must be the unique weak solution. Hence the problem of existence is effectively reduced to proving the regularity of the weak solution.

Hence in nice cases where existence can be established in general (e.g. linear elliptic problems), weak solutions provide a way of solving PDE problems using the above methodology. This method is effective for the technical reason that it allows us to work in spaces with better compactness properties.
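
To illustrate the three-step methodology in the linear elliptic case (a compressed sketch, my addition): for the Dirichlet problem $-\Delta u = f$ in a bounded smooth domain $\Omega$ with $u=0$ on $\partial\Omega$, a weak solution $u\in H_0^1(\Omega)$ exists by the Riesz representation theorem (or Lax-Milgram) applied to $\int_\Omega\nabla u\cdot\nabla v = \int_\Omega fv$; any classical solution is also a weak solution, by multiplying the equation by a test function and integrating by parts; and the weak solution is unique, since the difference $w$ of two weak solutions satisfies $\int_\Omega|\nabla w|^2=0$. Elliptic regularity then upgrades the weak solution to a classical one when $f$ is smooth enough, so the classical problem is solved precisely by passing through the weak formulation.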

If a solution doesn't always exist, however, things get more interesting. If you can still establish the three points above, the solvability question is reduced to a regularity problem, and we can then look for necessary/sufficient conditions based on this.

Example (Harmonic map flow): If $(M,g)$ and $(N,h)$ are Riemannian manifolds, a classical problem in geometric analysis is whether a non-trivial harmonic map $u : M \rightarrow N$ exists. In the case when $M$ is a closed surface, we have the following sufficient condition for existence due to Eells and Sampson; non-trivial harmonic maps $M \rightarrow N$ exist provided there exists no non-trivial harmonic map $S^2 \rightarrow N.$

This theorem can be proved using the harmonic map flow to "evolve" a given map $u_0$ into a harmonic map $u_*,$ which is the work of Struwe. This method doesn't always work as the flow may develop singularities in general, but the non-existence condition about harmonic spheres provides a sufficient condition to prevent these singularities from forming.

ktoi
  • 7,317
5

To the excellent longer answers above I will add a short one: weak solutions in a conveniently-chosen (and in particular, finite-dimensional) function space can often be explicitly computed, whereas strong solutions often cannot (even if one can prove a solution must theoretically exist). Computability has obvious and immense practical importance.
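
As a minimal sketch of that point (my addition; the mesh size and the choice $f\equiv 1$ are illustrative), a Galerkin method restricts the weak formulation of $-u''=f$ on $(0,1)$ with $u(0)=u(1)=0$ to the finite-dimensional space of piecewise-linear "hat" functions, turning the PDE into a small linear system that a computer can solve directly:

```python
import numpy as np

# Galerkin / finite-element sketch for the weak form of -u'' = f on (0, 1)
# with u(0) = u(1) = 0, using piecewise-linear hat functions.
# Mesh size and the choice f = 1 are illustrative assumptions.

n = 50                      # number of interior nodes
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Stiffness matrix A_ij = integral of phi_i' * phi_j' (tridiagonal for hats)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Load vector b_i = integral of f * phi_i; with f = 1 this is exactly h
b = h * np.ones(n)

u = np.linalg.solve(A, b)   # nodal values of the Galerkin (weak) solution

# Exact solution of -u'' = 1 with zero boundary values: u(x) = x(1 - x)/2
print(np.max(np.abs(u - x * (1.0 - x) / 2.0)))   # tiny discretization error
```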

Of course, one does not simply believe in the weak solutions: one proves existence, approximability, and conservation theorems, etc, for the weak solutions.

user7530
  • 49,280
1

Well, I hope this doesn't come off as snarky, but why should we expect that $$x^2 +1 =0$$ should have solutions? And why should we abandon the meaning of "squaring" that we all first learned for real numbers and adopt $$(a,b)^2 = (a^2-b^2, 2ab)$$

It's not a perfect analogy but I think it's rather similar to your questions about PDE solutions.

JonathanZ
  • 10,615
  • Complex numbers do have a beautiful geometric interpretation though, and if I had a beautiful analytic interpretation for weak solutions, then I would be very pleased. – A. Thomas Yerger Aug 08 '19 at 02:22
    I agree with this perspective. I believe the notion of complex numbers first came from the solvability of $x^2+1=0$, rather than from a geometric problem. Although, it is possible that people found a connection to a geometric interpretation later on. – induction601 Dec 19 '19 at 17:41