
The main objective is to find an upper bound for the maximum rate of change that is as "tight" as possible, hopefully related to characteristics of the function such as its signal energy, its highest Fourier coefficient, or something else easy to obtain from the function itself. Also, to figure out what is needed for a finite-energy time-limited signal (hence with infinite bandwidth) to have a bounded rate of change $\max_t |\frac{df(t)}{dt}|<\infty$.

For some partial answers you can go directly to my 2nd answer here

Following the notation of exercise 4.49 of the book "Signals and Systems, 2nd Edition" (Alan V. Oppenheim, Alan S. Willsky, with S. Hamid) [1], the Fourier Transform is defined as $F(j \omega) = \int_{-\infty}^{\infty} f(t) e^{-j \omega t} dt$, so the function can be described as $f(t) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} F(j \omega) e^{j \omega t} d\omega$, where $j = \sqrt{-1}$.

Let $f(t)$ be a function which fulfills the conditions to have a Fourier Transform $F(j \omega)$. Then, using the decomposition of a complex number into its amplitude and phase, and the triangle inequality, I can establish the following (here $|\cdot |$ is the absolute value): $$ \begin{equation}\begin{split} M = \max_{t}\left\{\left|{\frac{d f(t)}{dt}}\right|\right\} & = \max_{t}\left\{ \left|{\frac{1}{2\pi}} \int_{-\infty}^{\infty}j\omega F(j\omega)e^{j \omega t} d\omega \right|\right\} \texttt{ (Eq. 1)} \\ & =\max_{t}\left\{ \left|{\frac{1}{2\pi}} \int_{-\infty}^{\infty}\left|j\omega F(j\omega)\right| e^{j\sphericalangle\left(j\omega F(j\omega )\right)} e^{j \omega t} d\omega \right| \right\} \\ & \leq \max_{t}\left\{ {\frac{1}{2\pi}} \int_{-\infty}^{\infty}\left| \left|j\omega F(j\omega)\right| e^{j\sphericalangle\left(j\omega F(j\omega )\right)} e^{j \omega t}\right| d\omega \right\} \\ & \leq \max_{t}\left\{ {\frac{1}{2\pi}} \int_{-\infty}^{\infty}\left|j \right|\left|\omega F(j\omega)\right|\left| e^{j\sphericalangle\left(j\omega F(j\omega )\right)}\right| \left| e^{j \omega t}\right| d\omega \right\} \\ & = {\frac{1}{2\pi}} \int_{-\infty}^{\infty}\left|\omega F(j\omega)\right| d\omega \texttt{ (Eq. 2)}\end{split}\end{equation}$$ since $1 = |e^{j\phi}|, \forall \phi \in \mathbb{R}$ (note that $j = e^{j\frac{\pi}{2}}$, $\omega t \in \mathbb{R}$, and the angle $ \sphericalangle\left(j\omega F(j\omega )\right) \in \mathbb{R}$), and the remaining integral is independent of $t$.
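As a quick sanity check of Eq. 2 (a minimal numerical sketch, assuming NumPy/SciPy are available; it uses the standard pair $f(t)=\operatorname{sech}(t) \leftrightarrow F(j\omega)=\pi\operatorname{sech}(\pi\omega/2)$ under this transform convention, which also appears later in Table 1):

```python
import numpy as np
from scipy.integrate import quad

# Known pair under F(jw) = Int f(t) e^{-jwt} dt :  sech(t)  <->  pi * sech(pi*w/2)
f_prime = lambda t: -np.sinh(t) / np.cosh(t)**2          # f'(t) for f(t) = sech(t)
F_mag   = lambda w: np.pi / np.cosh(np.pi * w / 2.0)     # |F(jw)|

# Left-hand side of Eq. 2: max_t |f'(t)| over a dense grid (analytically 1/2)
t = np.linspace(-20, 20, 200_001)
lhs = np.max(np.abs(f_prime(t)))

# Right-hand side of Eq. 2: (1/2pi) Int |w F(jw)| dw, using evenness of the integrand
rhs = quad(lambda w: w * F_mag(w) / np.pi, 0, np.inf)[0]  # analytically 8*Catalan/pi^2 ~ 0.742

print(lhs, rhs, lhs <= rhs)   # ~0.5, ~0.742, True
```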

In the book "Fourier Series: A Modern Introduction - Volume 1 (2nd Edition)" (R. E. Edwards) [2], Chapter 2, point 2.3.6 (point (3) of the "Remarks"), it is proved that if $f(t)$ is of bounded variation and $\hat{f}(n)$ is its $n$th Fourier coefficient (defined differently from the transform used here), then: $$ \left|{n \hat{f}(n)}\right| \leq V(f), \texttt{ (Eq. 5)}$$ with $V(f)$ the total variation of $f(t)$.

Since the total variation of a differentiable function $f(t)$ with $M < \infty$ can be written as: $$ V(f) = \int_{-\infty}^{\infty} \left| \frac{df(t)}{dt} \right| dt $$ I believe that the bound $\max_{t}\left\{\left|{\frac{d f(t)}{dt}}\right|\right\} \leq V(f)$ is going to be "too loose", because it is the same situation as taking a series of positive coefficients $a_n \geq 0$ and comparing $\max_{n}\{a_n\} \leq \sum_{-\infty}^{\infty}a_n$ (!).

(!): Caution is needed, because the sum is under an integral, so the "$dt$" could change the intuition. As an example, consider the ramp function that starts at the origin and ends at the point $(1/2,\pi)$: its max value is $\pi$ but the area under the curve is $\sqrt{\pi/2}/2 < \pi$; conversely, if the edge is at $(2,\pi)$ its area under the curve will be $2 \sqrt{\pi} > \pi$.

So I want to know:

A. Is the bound ${\frac{1}{2\pi}} \int_{-\infty}^{\infty}\left|\omega F(j\omega)\right| d\omega$ tight "enough", or will it be a "loose" one like $V(f)$?

Here I know that: $$ \frac{df(t)}{dt}\Big\vert_{t=0} = {\frac{1}{2\pi}} \int_{-\infty}^{\infty} j\omega F(j\omega) d\omega $$ by setting $t=0$ in the exponent $e^{j \omega t} = e^0 = 1$ of the inverse Fourier transform definition, so I have hope it is not "too loose".

To avoid any kind of "strange behavior", like continuous but nowhere differentiable functions and the full zoo of functions in between, please consider that the function $f(t)$ is as follows:

  1. $$f(t) = x(t) \cdot (\theta(t-t_0) - \theta(t-t_F))$$ is a non-constant one-variable real-valued function defined for every $t \in (-\infty; \infty)$, with $\theta(t)$ the unit step function and $t_0<t_F$, so $f(t)$ has a beginning at $t_0$ and an end at $t_F$, with $f(t) = 0$ if $t\leq t_0$ or $t \geq t_F$, so that the property $\mathbb{F}\left\{\frac{df(t)}{dt}\right\} = j\omega F(j \omega)$ holds.
  2. Let $f(t)$ be a Lebesgue integrable function: $\int_{-\infty}^{\infty}|f(t)|dt < \infty$, and also a finite energy function: $\int_{-\infty}^{\infty}|f(t)|^2dt < \infty$. If needed, also Riemann integrable.
  3. Consider that the function $x(t)$ is continuous and smooth, so all derivatives exist and are bounded (or at least, it is once differentiable), so using $\max$ or $\sup$ or anything else is equivalent (same for $f(t)$ except at the points $f(t_0)$ and $f(t_F)$), and also that $f(t)$ is of bounded variation. In the same mentioned point of [2], in the "Remarks" section, it is said that Wiener proved that a function of bounded variation is continuous if and only if: $$ \lim_{N \to \infty} \frac{1}{N} \sum_{|n| \leq N} \left|n \hat{f}(n) \right| = 0 \texttt{ (Eq. 6)}$$ Also assume that the function $f(t)$ follows the Riemann-Lebesgue Lemma [3], and the conditions needed to have a Fourier transform $F(j\omega)$ described by a function - not by a distribution such as Dirac's delta $\delta(t)$ or others - so that the Paley–Wiener theorem is fulfilled [4].
  4. I would like to represent "naively" physically possible time-limited phenomena with $f(t)$, so in principle I don't want to put restrictions on $f(t_0)$ or $f(t_F)$, but if needed, first start with $f(t_F)=0$, and as a last resort add $f(t_0)=0$, making $f(t)$ compactly supported but not necessarily $f(t) \in C_c^{\infty}$, since to be a bump function it also requires that $\lim_{t\to\partial t} \frac{d^n f(t)}{dt^n} = 0$ so that every derivative is continuous at the boundaries - and only if nothing else is possible, let $f(t)$ be a bump function $f(t) \in C_c^{\infty}$. I don't know if there exists a space of non-analytic $C_c^{\infty}$-like functions that can have $f(t_0)\neq 0$ and/or $f(t_F)\neq 0$; if it exists, please let me know what it is called and any reference to search for them (I left it as a separate question here).
  5. Since $\frac{df(t)}{dt} = \frac{dx(t)}{dt}\cdot (\theta(t-t_0) - \theta(t-t_F)) + x(t)\cdot\delta(t-t_0) - x(t)\cdot\delta(t-t_F)$, when looking for $\max_t |\cdot|$, it will be "infinite" because $\delta(t) = \infty$ at $t=0$. Because of this, I am explicitly avoiding the discontinuity at the edges, so $t_0 < t < t_F$ could let me work with $\frac{df(t)}{dt} = \frac{dx(t)}{dt}\cdot (\theta(t-t_0) - \theta(t-t_F))$, so $\max_{t_0 < t < t_F} |\frac{df(t)}{dt}| = \max_{t_0 < t < t_F} |\frac{dx(t)}{dt}|$.
  6. If the bound is applicable for more general functions, please let me know which constraints you have removed.

B) What other tight bounds for $\max_{t}\left\{\left|{\frac{d f(t)}{dt}}\right|\right\}$ are known?

Using the same argument as in the main equations, I have that $\max_{t}\left\{\left|{f(t)}\right|\right\} \leq {\frac{1}{2\pi}} \int_{-\infty}^{\infty}\left|F(j\omega)\right| d\omega \leq \int_{-\infty}^{\infty}\left|f(t)\right| dt \overset{?}{\leq} \sqrt{\int_{-\infty}^{\infty}\left|f(t)\right|^2 dt} = \sqrt{E_0}$, so I am trying to find something proportional somehow to the energy $E_0 = \int_{-\infty}^{\infty}\left|f(t)\right|^2 dt$ of the function $f(t)$. I even tried multiplying by $\frac{E_0}{E_0}$ to form things of the fashion of $\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{\left|F(j \omega)\right|^2}{E_0} d\omega = 1$ (by Parseval's identity), so that $g(\omega)= \frac{\left|F(j \omega)\right|^2}{2\pi E_0}$ could be thought of as a probability distribution and bounds for the expected values $E_g[\omega]$ and $E_g[\omega^2]$ could be used, with unsuccessful results.

I have found on the internet some bounds such as the Kalman-Rota or the Landau-Kolmogorov-Hadamard inequalities, which state that $||f'||_2 \leq \sqrt{2}\, ||f||_2^{1/2}||f''||_2^{1/2}$ [5], but my intuition says that commonly $\max_{t}|f'| \ll \max_{t}|f''|$. I also found other inequalities like Poincaré's, Sobolev's, Friedrichs's, or Uncertainty-Principle relations, but the inequality goes in the other direction: $\textit{something}(f) \leq \sup |f'|$.

In the comments, some bounds applicable to band-limited functions were mentioned (Bernstein inequality [11], Markov brothers' inequality [12], others [13]; follow the main article [14]), but since here I am asking about time-limited functions, which are going to have unbounded domain in the frequencies [10], I believe they are not applicable.

C) What restrictions does $f(t)$ have to fulfill for it to be true that $\max_{t}\left\{\left|{\frac{d f(t)}{dt}}\right|\right\} \leq \max_{\omega}\left\{\left|\omega \cdot F(j \omega)\right|\right\}$?

I have tried a few functions and it happens to be true, so if you know of any related proof please share a reference. I also found counterexamples among finite-energy time-limited functions, so I want to know which conditions must hold to make it "useful".

This bound "conjecture" come from the following mistake: $$ \begin{equation}\begin{split} M = \max_{t}\left\{\left|{\frac{d f(t)}{dt}}\right|\right\} & = \max_{t}\left\{ \left|{\frac{1}{2\pi}} \int_{-\infty}^{\infty}j\omega F(j\omega)e^{j \omega t} d\omega \right|\right\} \texttt{ (Eq. 1)} \\ & =\max_{t}\left\{ \left|{\frac{1}{2\pi}} \int_{-\infty}^{\infty}\left|j\omega F(j\omega)\right| e^{j\sphericalangle\left(j\omega F(j\omega )\right)} e^{j \omega t} d\omega \right| \right\} \end{split}\end{equation}$$ Where, if I let $M_\omega^* = \max_\omega |j\omega F(j \omega)|$ which happens at $\omega^* = \arg \max |j\omega F(j \omega)|$, then at this I will have that $e^{j\sphericalangle\left(j\omega F(j\omega )\right)} = e^{j \phi^*}$ for some $\phi^* \in \mathbb{R}$, so: $$ \begin{equation}\begin{split} M = \max_{t}\left\{\left|{\frac{d f(t)}{dt}}\right|\right\} & \leq? \max_{t}\left\{ \left|{\frac{M_\omega^* \cdot e^{j \phi^*}}{2\pi}} \int_{-\infty}^{\infty} e^{j \omega t} d\omega \right|\right\} \texttt{ (Eq. 3)} \\ & \leq \max_{t}\left\{ M_\omega^* |e^{j \phi^*}|\left|{\frac{1}{2\pi}} \int_{-\infty}^{\infty}e^{j \omega t} d\omega \right| \right\} \\ & = M_\omega^* \max_{t}\left\{\left|\delta(t) \right| \right\} \\ & = \max_\omega |j\omega F(j \omega)| \max_{t}\left\{\left|\delta(t) \right| \right\} \texttt{ (Eq. 4)}\end{split}\end{equation}$$ In Eq. 4 the result will be $0$ for $t \neq 0$, and "$\infty$" if $t=0$, so certainly is not a rightfully obtained bound, but ignoring somehow the delta function makes me wonder about the value of $$M_\omega^* = \max_\omega |j\omega F(j \omega)| \texttt{ (Eq. 7)}$$.

Added later:


Following a suggestion, I am going to review some examples.

(!!) I solved these examples using the Wolfram Alpha website [7], which by default works with a different definition of the Fourier transform, so be careful about it. I haven't reviewed whether the results are "theoretically" right, and I have already found examples where Wolfram Alpha gives numerically wrong results.

Gaussian function example_______________________

First, the case of the Gaussian function: let $f(t)$ be a less restricted function that is not time-limited but vanishes at infinity, $f(t)=\frac{1}{\sqrt{2\pi}} e^{-\frac{t^2}{2}}$, the standard Gaussian distribution, so $\int_{-\infty}^{\infty} f(t) dt = \int_{-\infty}^{\infty} |f(t)| dt = 1$ and its signal energy is finite: $\int_{-\infty}^{\infty} |f(t)|^2 dt = \frac{1}{2\sqrt{\pi}} \approx 0.282 \ll \infty$. Also, following "our" notation, the non-unitary Fourier Transform in the angular frequency $\omega$ is $\mathbb{F_t}\left\{ e^{-a t^2}\right\}(\omega) = \sqrt{\frac{\pi}{a}} e^{\frac{-\omega^2}{4a}}$, so for this $f(t)$ we have $F(j\omega)=e^{-\frac{\omega^2}{2}}$ (the property that the Fourier Transform of a Gaussian is also a Gaussian - unbounded domain in time and frequency, vanishing at infinity in both). Then, the following is true:

$$ \begin{equation}\begin{split} M_{std.gauss} & = & \max_{t}\left\{\left|{\frac{d f(t)}{dt}}\right|\right\} = \frac{1}{\sqrt{2\pi e}}\text{ on } t^* = \pm 1 & \approx 0.24197 \\ & \leq & {\frac{1}{2\pi}} \int_{-\infty}^{\infty}\left|\omega F(j\omega)\right| d\omega = \frac{1}{\pi} & \approx 0.31831 \\ & \leq & \max_{\omega}\left\{\left|{\omega F(j\omega)}\right|\right\} = \frac{1}{\sqrt{e}} \text{ on } \omega^* = \pm 1 & \approx 0.60653 \\ & \leq & V(f) = \int_{-\infty}^{\infty} \left| \frac{df(t)}{dt} \right| dt = \sqrt{\frac{2}{\pi}} & \approx 0.79788 \end{split}\end{equation}$$
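For reference, these four numbers can be reproduced with a few lines of numerical code (a minimal sketch, assuming NumPy/SciPy are available):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

f  = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)   # standard Gaussian
df = lambda t: -t * f(t)                                 # its derivative
F  = lambda w: np.exp(-w**2 / 2)                         # F(jw) under this convention

# max_t |f'(t)|            -> 1/sqrt(2*pi*e) ~ 0.24197 (at t = +-1)
m1 = -minimize_scalar(lambda t: -np.abs(df(t)), bounds=(0, 5), method='bounded').fun
# (1/2pi) Int |w F(jw)| dw -> 1/pi ~ 0.31831
m2 = 2 * quad(lambda w: w * F(w) / (2 * np.pi), 0, np.inf)[0]
# max_w |w F(jw)|          -> 1/sqrt(e) ~ 0.60653 (at w = +-1)
m3 = -minimize_scalar(lambda w: -np.abs(w * F(w)), bounds=(0, 5), method='bounded').fun
# V(f) = Int |f'(t)| dt    -> sqrt(2/pi) ~ 0.79788
m4 = 2 * quad(lambda t: np.abs(df(t)), 0, np.inf)[0]

print(m1, m2, m3, m4)
```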

It is interesting to note that both bounds, Eq. 2 and Eq. 7, have worked better than $V(f)$. Also note that the bound of Eq. 2 is "much tighter" than the bound of Eq. 7, so what was commented in (!) has happened.

It is interesting to think that the Gaussian is the function that saturates the Uncertainty Principle, so no other one-variable function with the same energy (since it is a one-parameter function) can be more concentrated in both the time and frequency domains at once ([9] and [10]). So my intuition says that time-limited functions (which can be thought of as the convolution in the frequency domain of a standard function with a "sinc function") are going to be even more spread out in frequency, so it is going to be less likely to find a higher peak of $|\omega F(j\omega)|$ than the one achieved by the Gaussian function with the same energy.

But anyway, in this example it can be seen that this "conjecture" is not total nonsense.

Classic functions examples_____________________

Here I review the simplest cases of traditional functions whose Fourier transforms are tabulated in [1] and in Wikipedia [4]. Knowing beforehand that they don't fulfill my requirements, I think it is a logical start, since many people have worked with them before.

The following notation is used from now on:

  • "$E°$" is used for each signal energy (definition on each table),
  • $\Pi(t) = 1, |t|\leq \frac{1}{2}$ is the standard rectangular function (Unitbox(t) in Wolphram-Alpha),
  • $\delta(t)$ is the Dirac's delta distribution (Diracdelta(t) in Wolphram-Alpha),
  • $\theta(t)=1, t \geq 0$ is the standard step function (Unitstep(t) in Wolphram-Alpha),
  • $\Lambda (t)=1-|t|, |t|\leq 1$ is the standard triangular function (Unittriangle(t) in Wolphram-Alpha),
  • $H_1(t)=2\cdot t\,$ is the Hermite polynomial of the first kind, which fulfill the equation $H_n(t) = (-1)^n\,e^{t^2}\frac{d^n}{dt^n}(e^{-t^2})$ (HermiteH(1,t) in Wolphram-Alpha),
  • and $\mathscr{C}=0.91596559\cdots\,$ is the Catalan's constant.

I have marked with (*) the results I believe have questionable accuracy.

$$ \begin{array}{|c:c|c:c|c|c:c:c|c:c:c:c|} \hline f(t) & \text{dom}(f(t)) & F(j\omega)=\mathbb{F}\{f(t)\}(\omega) & \text{dom}(F(j\omega)) & \max_t |f'(t)| & \frac{1}{2 \pi} \int_{-\infty}^{\infty} |j\omega F(j\omega)|d\omega & \max_\omega |j\omega F(j \omega)| & V(f) = \int_{-\infty}^{\infty} |f'(t)|dt & \max_t |f(t)| & ||f||_1 = \int_{-\infty}^{\infty} |f(t)|dt & E° = \int_{-\infty}^{\infty} |f(t)|^2 dt & ||f||_2 = \sqrt{E°} \\ \hline \Pi (t) & [-\frac{1}{2}; \frac{1}{2}] & \text{sinc}(\frac{\omega}{2}) & (-\infty; \infty) & \infty^* & \infty^* & 2 & 0^* & 1 & 1 & 1 & 1 \\ \hdashline \text{sinc}(\frac{t}{2}) & (-\infty; \infty) & 2\pi \cdot \Pi (\omega) & [-\frac{1}{2}; \frac{1}{2}] & 0.218 & \frac{1}{4} = 0.25 & \pi = 3.1416 & \infty & 1 & \infty & 2\pi = 6.2831 & 2.5066 \\ \hdashline \text{sinc}^2(\frac{t}{2}) & (-\infty; \infty) & \Lambda (\omega) & [-1; 1] & 0.27 & \frac{1}{3} = 0.33 & \frac{\pi}{2} = 1.57079 & \infty & 1 & 2\pi = 6.2831 & \frac{4\pi}{3} = 4.18879 & 2.046 \\ \hdashline \Lambda (t) & [-1; 1] & \text{sinc}^2(\frac{\omega}{2}) & (-\infty; \infty) & undefined & \infty & 1.44922 & 2^* & 1 & 1 & \frac{2}{3} = 0.66 & 0.816 \\ \hdashline e^{-t} \cdot \theta (t) & [0; \infty) & \frac{1}{(1+j\omega)} & (-\infty; \infty) & 1^* & \infty & 1^* & 1^* & 1 & 1 & \frac{1}{2} = 0.5 & 0.707 \\ \hdashline t\cdot e^{-t} \cdot \theta (t)& [0; \infty) & \frac{1}{(1+j\omega)^2} & (-\infty; \infty) & 1^* & \infty & \frac{1}{2} = 0.5 & \frac{2}{e}^* = 0.735759^* & \frac{1}{e} = 0.3678 & 1 & \frac{1}{4} = 0.25 & \frac{1}{2} = 0.5 \\ \hdashline \frac{1}{\sqrt{2\pi}}e^{-\frac{t^2}{2}} & (-\infty; \infty) & e^{-\frac{\omega^2}{2}} & (-\infty; \infty) & \frac{1}{\sqrt{2 e \pi}} = 0.24 & \frac{1}{\pi} = 0.31831 & \frac{1}{\sqrt{e}} = 0.606 & \sqrt{\frac{2}{\pi}} = 0.797 & \frac{1}{\sqrt{2\pi}} = 0.39 & 1 & \frac{1}{2\sqrt{\pi}} = 0.282 & 0.531 \\ \hdashline \frac{1}{\sqrt{\pi}}e^{-j\frac{t^2}{2}} & (-\infty; \infty) & (1-j)\cdot e^{j\frac{\omega^2}{2}} & (-\infty; \infty) & \infty & \infty & \infty & \infty & \frac{1}{\sqrt{\pi}} = 0.564 & \infty & \infty & \infty \\ \hdashline e^{-|t|} & (-\infty; \infty) & \frac{2}{(1+\omega^2)} & (-\infty; \infty) & 1^* & \infty & 1 & \infty^* & 1 & 2 & 1 & 1 \\ \hdashline e^{-\frac{t^2}{2}}H_1(t) & (-\infty; \infty) & -j\omega \cdot 2\sqrt{2\pi}\cdot e^{-\frac{\omega^2}{2}} & (-\infty; \infty) & 2 & 2 & \frac{4\sqrt{2\pi}}{e} = 3.68 & \frac{8}{\sqrt{e}} = 4.8522 & \frac{2}{\sqrt{e}} = 1.213 & 4 & 2\sqrt{\pi} = 3.5449 & 1.883 \\ \hdashline \text{sech}(t) & (-\infty; \infty) & \pi \cdot \text{sech}(\frac{\pi \omega}{2}) & (-\infty; \infty) & \frac{1}{2} = 0.5 & \frac{8 \mathscr{C}}{\pi^2} = 0.74245 & 1.32549 & 2 & 1 & \pi = 3.1416 & 2 & 1.414 \\ \hline \end{array} $$
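As a spot check of the table (a minimal sketch, assuming NumPy/SciPy; $\operatorname{sinc}$ is the unnormalized $\sin(x)/x$, as in the row), the band-limited row $f(t)=\operatorname{sinc}(t/2)$ can be reproduced numerically:

```python
import numpy as np
from scipy.integrate import quad

# Row f(t) = sinc(t/2) = sin(t/2)/(t/2),  F(jw) = 2*pi for |w| <= 1/2 and 0 elsewhere
t  = np.linspace(0.001, 60.0, 600_001)                   # t > 0 is enough: |f'| is even
df = (t * np.cos(t / 2) - 2 * np.sin(t / 2)) / t**2      # d/dt [sin(t/2)/(t/2)]
print("max_t |f'(t)| ~", np.max(np.abs(df)))             # ~0.218, as in the table

# Eq. 2: (1/2pi) Int_{-1/2}^{1/2} |w| * 2*pi dw = Int |w| dw = 1/4
print("Eq. 2 bound   ~", quad(lambda w: np.abs(w), -0.5, 0.5)[0])

# Eq. 7: max_w |w * F(jw)| = 2*pi * (1/2) = pi
print("Eq. 7 value   ~", 2 * np.pi * 0.5)
```

Note that for this band-limited row (the regime of [11]-[14]) Eq. 2 (0.25) sits fairly close to the true 0.218, while Eq. 7 ($\pi$) is much looser.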

Unfortunately, the upper bound of Eq. 2 diverges for many examples (I wasn't expecting it to work for these examples anyway), but for the cases where it turns out to be finite, it is much better than the bound given by Eq. 5.

On the other hand, the upper bound of Eq. 7 turns out to be finite for many more examples, and also lower than the bound of Eq. 5, though much closer to it. However, for the example $f(t) = t\,e^{-t}\theta(t)$ it turns out to be lower than the maximum rate of change. This is why I am asking which conditions have to be fulfilled for it to become a valid upper bound (I don't expect this bound to be valid for every possible function, but it could be an alternative when the bound of Eq. 2 doesn't converge).

Also note that for the same case, $f(t) = t\,e^{-t}\,\theta(t)$, the maximum rate of change turns out to be higher than the Total Variation $V(f)$, going against my intuition at least, so maybe the validity of the bound of Eq. 5 is not universal (it could be related to the Fourier transform definition used in [2] being different from the one I am using here, or also a numerical issue).
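The $f(t) = t\,e^{-t}\theta(t)$ case can be double-checked directly from the closed forms (a minimal sketch, assuming NumPy/SciPy, with $F(j\omega)=1/(1+j\omega)^2$ so that $|\omega F(j\omega)| = |\omega|/(1+\omega^2)$):

```python
import numpy as np
from scipy.integrate import quad

df = lambda t: (1 - t) * np.exp(-t)         # f'(t) for f(t) = t*exp(-t), t > 0

t = np.linspace(1e-9, 60, 1_000_001)
max_rate = np.max(np.abs(df(t)))            # -> 1, approached as t -> 0+

w = np.linspace(0, 200, 1_000_001)
eq7 = np.max(w / (1 + w**2))                # max_w |w F(jw)| -> 1/2, at w = 1

V = quad(lambda t: np.abs(df(t)), 0, np.inf)[0]   # total variation -> 2/e ~ 0.7358

print(max_rate, eq7, V)   # ~1.0 > 0.5 and ~1.0 > 0.736: both Eq. 7 and V(f) fail here
```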

Bounded domain functions examples________________

Here I review some simple cases of time limited signals.

I have tried to find simple cases of continuous functions: signals that start from $0$ and rise/decline "slowly" (with $f=f'=0$ at the starting point), others that start "sharply", others that start from a point different from $0$, odd and even signals, signals with a discontinuity, the same signal on different compact domains, positive signals, signals with a flat top, etc. (signals for which I could find a simple Fourier transform to work with).

Unfortunately there is no case of a proper "smooth function", since I couldn't find any simple "bump function" $\in C_c^\infty$ with a simple Fourier transform in "closed form" (I left the question here, and have already tested all the functions from here in Wolfram Alpha with negative results).

I extend the previous table notation with these:

  • $J_1(t)$ is the Bessel function of the first kind of order 1 (BesselJ(1,t) in Wolfram Alpha),
  • $\text{Si}(t)$ is the "Sine integral" function (SinIntegral(t) in Wolfram Alpha).

Also, since for time-limited functions the domain is restricted to $t_0 \leq t \leq t_F$, the definition of the Fourier transform changes to $F(j \omega) = \int_{t_0 }^{t_F} f(t) e^{-j \omega t} dt$, and all the time-domain integrals change their integration limits correspondingly (see the headers of Table 2). Be aware of this, since the Fourier transforms of the functions are different with other integration limits.
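To make the convention concrete, here is a minimal sketch (assuming NumPy/SciPy) that computes the windowed transform $F(j\omega)=\int_{t_0}^{t_F} f(t)e^{-j\omega t}dt$ by direct quadrature and compares it with the closed form listed in Table 2 for $f(t)=\cos^2(\pi t/2)$ on $[-1,1]$:

```python
import numpy as np
from scipy.integrate import quad

t0, tF = -1.0, 1.0
f = lambda t: np.cos(np.pi * t / 2) ** 2

def windowed_ft(g, w, a, b):
    """F(jw) = Int_a^b g(t) e^{-jwt} dt, done as two real quadratures."""
    re = quad(lambda t: g(t) * np.cos(w * t), a, b)[0]
    im = quad(lambda t: -g(t) * np.sin(w * t), a, b)[0]
    return re + 1j * im

# Closed form from Table 2: F(jw) = pi^2 sin(w) / (pi^2 w - w^3)
closed_form = lambda w: np.pi**2 * np.sin(w) / (np.pi**2 * w - w**3)

for w in [0.5, 2.0, 7.3, 20.0]:          # w = 0 and w = pi avoided (removable singularities)
    print(w, windowed_ft(f, w, t0, tF), closed_form(w))   # both columns should agree
```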

Also, as explained in point (5), to avoid the problem at the domain "edges" I use $\max_{t_0 < t < t_F} |f'(t)|$, without including the boundaries.

Again, I have marked with (*) the results I believe have questionable accuracy.

$$ \begin{array}{|c:c|c:c|c|c:c:c|c:c:c:c|} \hline f(t) & \text{dom}(f) = [a\,;\,b] & F(j\omega)=\mathbb{F}_{[a\,;\,b]}\{f(t)\}(\omega) & \text{dom}(F(j\omega)) & \max_{a < t < b} |f'(t)| & \frac{1}{2 \pi} \int_{-\infty}^{\infty} |j\omega F(j\omega)|d\omega & \max_\omega |j\omega F(j \omega)| & V_a^b(f) = \int_{a}^{b} |f'(t)|dt & \max_{a\leq t \leq b} |f(t)| & ||f||_1 = \int_{a}^{b} |f(t)|dt & E° = \int_{a}^{b} |f(t)|^2 dt & ||f||_2 = \sqrt{E°} \\ \hline \sqrt{1-t^2} & [-1; 1] & \pi \cdot \frac{J_1(\omega)}{\omega} & (-\infty; \infty) & \infty & \infty^* & 1.82798 & 2 & 1 & \frac{\pi}{2} = 1.57079 & \frac{4}{3} = 1.33 & 1.1547 \\ \hdashline \sin(\frac{t\pi}{2}) & [-1; 1] & -j\frac{8\,\omega\cos(\omega)}{(\pi^2-4\,\omega^2)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & \infty^* & 2.75144 & 2 & 1 & \frac{4}{\pi} =1.2732 & 1 & 1 \\ \hdashline \sin^2(\frac{t\pi}{2}) & [-1; 1] & \frac{(\pi^2-2\,\omega^2)\sin(\omega)}{(\pi^2\,\omega-\omega^3)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & \infty^* & 2.8929 & 2 & 1 & 1 & \frac{3}{4} = 0.75 & 0.866 \\ \hdashline \cos^2(\frac{t\pi}{2}) & [0; 1] & j\frac{(\pi^2(1-e^{-j\omega})-2\,\omega^2)}{2\,\omega\,(\omega^2-\pi^2)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & \infty^* & 1.4562 & 1 & 1 & \frac{1}{2} = 0.5 & \frac{3}{8} = 0.375 & 0.612 \\ \hdashline \cos^2(\frac{t\pi}{2}) & [-1; 1] & \frac{\pi^2\sin(\omega)}{(\pi^2\omega-\omega^3)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & 2.28547^* & 1.63641 & 2 & 1 & 1 & \frac{3}{4} = 0.75 & 0.866 \\ \hdashline \frac{(1+\cos(t\pi))^2}{4} & [-1; 1] & \frac{3\,\pi^4\sin(\omega)}{(\omega^5-5\pi^2\omega^3+4\pi^4\omega)} & (-\infty; \infty) & \frac{3\sqrt{3}\pi}{8} = 2.0405 & 2.61265^* & 1.58242 & 2 & 1 & \frac{3}{4} = 0.75 & \frac{35}{64} = 0.546875 & 0.7395 \\ \hdashline \sin(\frac{t\pi}{2})\cos^2(\frac{t\pi}{2}) & [-1; 1] & j\frac{16\, \pi^2\, \omega \cos(\omega)}{(16\, \omega^4-40\, \pi^2 \omega^2+9\,\pi^4)} & (-\infty; \infty) & 1.5708 & 1.93647^* & 1.23244 & 1.5396^* & \frac{2}{3\sqrt{3}} = 0.3849 & \frac{4}{3\sqrt{\pi}} = 0.42441 & \frac{1}{8} = 0.125 & 0.354 \\ \hdashline \text{sinc}(t\pi)\cos(\frac{t\pi}{2}) & [-1; 1] & \frac{1}{2\pi}\left(\text{Si}(\frac{\pi}{2}-\omega)+\text{Si}(\frac{3\pi}{2}-\omega)+\text{Si}(\frac{\pi}{2}+\omega)+\text{Si}(\frac{3\pi}{2}+\omega)\right) & (-\infty; \infty) & 1.62897 & \infty^* & 1.61724 & 2 & 1 & \frac{2\,\text{Si}(\pi)}{8} = 1.17898 & \frac{2\,\text{Si}(2\pi)}{\pi} = 0.9028 & 0.9501 \\ \hdashline 1-\sin^4(\frac{t\pi}{2}) & [-1; 1] & \frac{\pi^2(5\pi^2 - 2\,\omega^2) \sin(\omega)}{(\omega^5 - 5\pi^2\omega^3 + 4\pi^4\omega)} & (-\infty; \infty) & \frac{3\sqrt{3}\pi}{8} = 2.0405 & 3.01547^* & 1.8225 & 2 & 1 & \frac{5}{4} = 1.25 & \frac{67}{64} = 1.04688 & 1.023 \\ \hdashline \sin(|t|\pi) & [-1; 1] & \frac{2\pi\,(1+\cos(\omega))}{(\pi^2-\omega^2)} & (-\infty; \infty) & \pi = 3.1416 & \infty^* & 2.90769 & 4^* & 1 & \frac{4}{\pi} =1.2732 & 1 & 1 \\ \hline \end{array} $$

Against my expectation, the time-limited functions behave much worse than I intended, but they do follow my intuition: they decay more slowly than the Gaussian function, but so slowly that the integral bound of Eq. 2 diverges in many cases, and they are also so spread out that the peak given by the bound of Eq. 7 is lower than the maximum rate of change. In many cases the maximum rate of change turns out to be higher than the Total Variation $V_a^b(f)$, going against Eq. 5.

On the positive side, when the integral of Eq. 2 converges, it turns out to be a proper bound on the maximum rate of change, even when it is higher than the Total Variation.

Unfortunately, all the results for the bound of Eq. 2 carry a "*" since their validity is questionable, and their values were obtained by numerical approximation through NIntegrate in Wolfram Alpha, so I will review another bound to "double check". Using Hölder's inequality, it can be stated that for two functions $f(t)$ and $g(t)$ the following is true: $$\int_{-\infty}^\infty |f(t) \cdot g(t)|\,dt \leq \int_{-\infty}^\infty |f(t)|\,dt \cdot \sup_t |g(t)|$$ I am going to apply it to the bound of Eq. 2 to split the product: $$\frac{1}{2 \pi} \int_{-\infty}^\infty \left|\omega \, g(\omega) \cdot \frac{F(j \omega)}{g(\omega)}\right|\,d\omega \leq \frac{1}{2 \pi} \int_{-\infty}^\infty |\omega\, g(\omega)|\,d\omega \cdot \sup_\omega \left|\frac{F(j \omega)}{g(\omega)}\right| \texttt{ (Eq. 8)}$$ So I tested some functions $g(\omega)$ that make the integral on the right side converge, and then checked whether the supremum is finite when tested against the function $f(t) = \cos^2(\frac{t\pi}{2}),\,|t| \leq 1$: $$\begin{array}{|c|c:c|} \hline g(\omega) & \int_{-\infty}^\infty |\omega\, g(\omega)|\,d\omega & \texttt{(Eq. 8)} \\ \hline e^{-\sqrt{|\omega|}} & 24 & 12.1623 \\ \hline \end{array} $$ It is really interesting that having a "clear result" for $g(\omega) = e^{-\sqrt{|\omega|}}$ suggests that the results of Table 2, maybe numerically inaccurate, are still valid (at least for the chosen $f(t)$), so there exist time-limited functions for which the integral of Eq. 2 converges. But not only this, it also proves something maybe obvious:

There exist TIME-LIMITED functions (with unbounded domain in the frequencies) which have a bounded maximum rate of change (for functions with bounded domain in the frequencies, the result is known and presented in [14]),

but better, seeing the same problem from a different, "not so obvious" angle, I believe it could be reinterpreted as:

  • There exist some "$\text{mysterious conditions}\,\mathbb{X}$" that make some time-limited signals with unlimited bandwidth have a bounded maximum rate of change, conditions I am trying to figure out.

Additionally, there is another issue I found reviewing the results of Table 2: for every function I tried with values at the "edges" of its domain different from zero, the integral of Eq. 2 diverges. This is confusing for me, especially for the function $f(t) = \cos^2(\frac{t\pi}{2})$ when changing the domain from $[-1,\,1]$ to $[0,\,1]$: technically there is no difference in the maximum slopes achieved by the curves, yet for the reduced domain the integral of Eq. 2 diverges. I think it could be related to the phenomenon I am avoiding according to point (5), but somehow it manifests in the frequency domain. I tried to solve it by changing the function to $f(t) = \theta(-t)+\cos^2(\frac{t\pi}{2}),\, 0 \leq t \leq 1$, so that no disruptive change happens to the function, but as I was expecting, its Fourier Transform on $[0,\,1]$ is the same as the previous function's (it adds just a zero-measure point, so it doesn't change the integral).

I have tried many other functions and combinations on $[-1,\,1]$ with $f(-1) \neq 0$, and I couldn't find any for which the integral of Eq. 2 converges. It doesn't mean that the maximum rate of change is unbounded (Eq. 2 is an upper bound), but it looks like a new conjecture: for a time-limited function $f(t)$ to have a finite ${\frac{1}{2\pi}} \int_{-\infty}^{\infty}\left|\omega F(j\omega)\right| d\omega < \infty$, it must have value zero at its domain boundaries, $f(t_0) = f(t_F) = 0$ (this does not mean it needs to be a bump function $\in C_c^\infty$, which is much more restrictive). I left this as another question here.

Continues in answers section...

Joako
  • Isn't the imaginary unit represented by $i$? – Rounak Sarkar Oct 07 '21 at 03:30
  • He's an engineer... – Jake Mirra Oct 07 '21 at 04:20
  • I am using $j$ instead of $i$ to represent the imaginary unit because it is the notation used in the mentioned book: it is really common in electrical engineering, so that the "i" of "current intensity" is not mistaken for the "i" of a phasor representation's imaginary part. – Joako Oct 07 '21 at 13:23
  • @Joako: Just a thought on exploring tightness, see what happens when $f$ is a Gaussian. – A rural reader Oct 07 '21 at 15:19
  • @Aruralreader thanks for the idea, I added to the question. – Joako Oct 07 '21 at 19:50
  • The bound is tight. The rate of change of f in time is precisely due to the frequencies that accumulate: higher frequencies = faster rates of change for the same amplitudes. E.g., $f(t) = a \cos(w_1 t) + b \cos(w_2 t)$, $f'(t) = a w_1 \sin(w_1 t) + b w_2 \sin(w_2 t)$, and the max of this occurs for some t, in this case when the sines are one. It won't be tight though, depending on the phase angles (destructive interference), but assuming it is loose we have $max_t f'(t) <= a w_1 + b w_2$. The point here is that the max value depends on the sum of the products of amplitudes and frequencies. – Gupta Oct 10 '21 at 04:15
  • This is what you are expressing in your first deduction, but for general $f(t)$. The problem is that destructive interference can occur and you cannot predict where for arbitrary f. It should be contained in the expression though because you only want the max value and do not care about the location of where it occurs. Since the Fourier transform will contain all the phase information and properly handle the phase relationships it will all sum up correctly. – Gupta Oct 10 '21 at 04:18
  • The problem is when you take the absolute value and then use the triangle inequality you ignore the phase relationships and so the final result is not tight. This is generally the case with the triangle inequality. That phase info you are modding out is significant. $g(t) = a\cos(w_1 t) + b \sin(w_2 t)$ is different than the $f(t)$ I gave earlier. It also depends on the ratios of $w_1$ and $w_2$. – Gupta Oct 10 '21 at 04:25
  • Should be "The bound is not tight". Can't edit it. This should be pretty obvious though. The phase relationships drastically changes the function and hence the derivative. The first line you write in your deduction is valid but after that you ignore phase relationships and that severely reduces the tightness. Essentially you then assume that all the frequencies "align" at some point to produce the maximum possible rate of change and this won't happen in general. – Gupta Oct 10 '21 at 04:33
  • @Gupta thanks for the comments, I have corrected the question accordingly. However, the mentioned bound of Eq. 1 is an equivalence, not a bound; that is why I am trying to find an upper bound that could be related to characteristics of the function in the time domain, or to points in the frequency domain: knowing everything about the function from its definition implies that I already know its max rate of change beforehand, so why would I be looking for upper bounds in the frequency domain? – Joako Oct 13 '21 at 20:51
  • @Gupta My idea is that knowing something with physical meaning, like its energy and that it is time-limited, will tell me that its max rate of change is bounded by some value related to this knowledge. – Joako Oct 13 '21 at 20:52
  • @Joako I do not think there is a local solution. This is a global search problem and unless you know something specifically about $f(t)$ that allows you to constrain it then you won't be able to know it. If, say, you know it is bandlimited then you can get a "tight" as possible upper bound but it won't be "tight" in the sense of optimal. It might be terrible depending on the function. If by tight as possible then I doubt your method is the best as there are probably better approximations that can be made. – Gupta Oct 13 '21 at 21:08
  • There are other transformations that might work better such as wavelets, gabor, windowed, etc and these might let you get much tighter control. The issue here is mainly the triangle inequality you use which significant reduces the accuracy. You could try to partition it and use it on smaller partitions on the integral and since $f$ will be bounded in the real world you might get better approximations. Ultimately, as I said though, it has to do with the phase relationships and so that is where the real loss is. – Gupta Oct 13 '21 at 21:10
  • Maybe helpful: https://dsp.stackexchange.com/questions/51617/bounds-of-the-derivative-of-a-bounded-band-limited-function – S.H.W Oct 13 '21 at 23:19
  • @S.H.W thanks for the link. I reviewed it and the mentioned bounds are applicable only for band-limited functions. Since I am specifically asking about time-limited functions, which are going to have unbounded domain in the frequencies, they are not valid here. I will incorporate them to the question. – Joako Oct 14 '21 at 17:55
  • @Gupta I have found the mistakes I made, and in the Gaussian example the proposed bound actually works really well (at least compared with the existing one, $V(f)$). – Joako Oct 14 '21 at 19:08
  • If you have unbounded frequencies then you will have unbounded derivatives, in which case your bound will be meaningless, and those functions are not realizable in physical space. – Gupta Oct 14 '21 at 19:57
  • @Gupta you are right if I think about "power signals", but if the Fourier spectrum follows the Riemann-Lebesgue Lemma it is not necessarily right; for example, the Gaussian function has unbounded domain in both worlds, and its total variation is finite. Actually, any bounded-energy, time-limited function with bounded variation will have an unbounded Fourier spectrum in the frequency domain, but will have a bounded $\max_t |f'(t)|$ and a bounded $\|\hat{f}(\omega)\|_2$ because of Parseval's identity. I am not trying to find a bound for every possible function, but for finite-energy time-limited ones. – Joako Oct 15 '21 at 02:06
  • @Gupta When doing examples I figured that maybe the bound of the Total Variation $V(f)$ is not "true" (as in my last answer), but since the bound $\int_{-\infty}^\infty |j\omega F(j\omega)| d\omega$ is right even if the signal has unbounded domain in the frequencies, at least every $f(t)$ for which the integral $\int_{-\infty}^\infty |j\omega F(j\omega)| d\omega < \infty$ converges will have a finite $\max_t |f'(t)|$, so there exist time-limited functions with finite max rate of change which decay more slowly than the Gaussian but faster than the inverse of a first-degree polynomial. – Joako Oct 18 '21 at 02:00
  • @Gupta Hope you can read the updated question. I believe I have shown that a function with unbounded domain in the frequencies doesn't necessarily have, under every condition, an unbounded maximum rate of change; instead, there exist some $\text{mysterious conditions}\,\mathbb{X}$ under which unlimited-bandwidth signals see their slew rate limited. – Joako Oct 21 '21 at 18:44
  • Your question is too long, you should try to have more focus. Some remarks: your first inequality follows directly from the fact that $|\int f| ≤ ∫ |f|$ (so you can erase the intermediate steps). There is a parameter in the Fourier transform in WolframAlpha to change your convention of the Fourier transform. You should avoid writing in capital letters! Also every function whose Fourier transform has bounded domain is analytic so your claim that there are time limited functions with bounded rate of change is indeed trivial (you can safely erase it) – LL 3.14 Oct 22 '21 at 18:33
  • @LL3.14 thanks for taking the time to read it. Actually time-limited functions, which I am asking about, have unlimited bandwidth (unbounded domain in frequency), which is exactly the opposite situation you mention. Even more, as an example, compact-supported smooth functions can't even be analytic, as is stated on the wiki page for Analytic functions here (in the counterexamples); that's why I believe this question is interesting. I will modify what you said about capital letters. – Joako Oct 22 '21 at 19:10
  • No, that's exactly why I say it is not very interesting since there are a lot of non analytic functions ... so of course some of them have bounded variation ... – LL 3.14 Oct 22 '21 at 19:12
  • @LL3.14 Oh I get it now... well, I believe exactly the opposite: finding the $\text{mysterious conditions}\,\mathbb{X}$ that make them have a bounded slew rate is actually really important and interesting. Thanks anyway for commenting. – Joako Oct 22 '21 at 19:21
  • @Gupta I have added now a second answer with some results I found interesting, hope you can see them and comment. Thank you very much. – Joako Nov 10 '21 at 10:19

4 Answers


This may not be a proper answer to your questions, but I hope it lends you some intuition that the bound $\sup_t|df/dt| \le \int |\omega\hat f(\omega)|\,d\omega$, while tight in the sense that there is equality for $t = 0$, can be a "loose" bound for many values of $t$, in a quantitative sense. This answer relates to your question when you consider time-limited functions (say to $(-\pi,\pi)$ to be concrete) extended periodically on the whole line.

Consider a much simpler "model". Let $f(t) = \sum_{n=-N}^N a_ne^{int}$, where the $a_n$ are complex numbers, let's say all satisfying $|a_n| = 1$ to start. Then we can ask about how close to being an equality the following inequality is: $$ |\sum_{n=-N}^N a_ne^{int}| \le \sum_{n=-N}^N|a_ne^{int}| = 2N+1.\tag{1} $$ For $t = 0$, we have equality, but there could be a lot of cancellation making the inequality "loose" for other values of $t$. Let's consider some special cases to build intuition.

  1. $a_n = 1$ for every $n$. In this case, we can sum $f(t)$ as a geometric series to see $$f(t) = \frac{\sin((N+1/2)t)}{\sin(t/2)}.$$ The function $f(t)$ is periodic with period $2\pi$. It is a calculation that $$c\log N\le \int_{-\pi}^\pi |f(t)|\,dt\le C\log N,$$ which we can interpret as meaning that for an "average" point $t\in (-\pi,\pi)$, $|f(t)|\approx \log N$, meaning that there is actually lots of cancellation happening, and the inequality $(1)$ is quite loose on average.

  2. Example 1 was a very special case (the function $f(t)$ in that example is known as the $N$th Dirichlet kernel), so let's consider something more general. Suppose that the $a_n$ are independent random signs, meaning $\mathrm{Prob}(a_n = 1) = \mathrm{Prob}(a_n = -1) = 1/2$. Then $f(t)$ is a random trigonometric sum, and is something of a model for a "generic" or "arbitrary" function. By Khintchine's inequality, $$\mathbb E|f(t)| \le C(2N+1)^{1/2},$$ which says that on average, $|f(t)|$ is smaller than the bound $2N+1$ by a square-root, again implying $(1)$ is a very loose bound. Heuristically, $f(t)$ is very similar to a "random walk", and this inequality is an expression of the well-known "root-mean square displacement" of random walk.

In general, quantifying the extent to which an inequality like $(1)$ is tight or not is a difficult question, but for a "random" or "generic" function, we often expect to improve upon $(1)$ (in the sense that we expect we can replace the right-hand side $2N+1$ with something smaller), as Example 2 might suggest.
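A small numerical illustration of these two cases (a minimal sketch, assuming NumPy; the constants are not meant to be sharp):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-np.pi, np.pi, 2001)

for N in [10, 100, 500]:
    n = np.arange(-N, N + 1)
    E = np.exp(1j * np.outer(t, n))                 # e^{int} sampled on the t-grid

    dirichlet = E.sum(axis=1)                       # case 1: a_n = 1 (Dirichlet kernel)
    signs = rng.choice([-1.0, 1.0], size=2 * N + 1)
    random_f = (E * signs).sum(axis=1)              # case 2: independent random signs

    print(N, 2 * N + 1,
          np.mean(np.abs(dirichlet)),               # grows ~ log N, far below 2N+1
          np.mean(np.abs(random_f)))                # grows ~ sqrt(2N+1), also far below 2N+1
```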

As a side-note, I suspect that asking about the tightness of the inequality $$\sup_t|df/dt| \le \int|\omega\hat f(\omega)|\,d\omega \tag{1'}$$ is equivalent to asking about the tightness of the inequality $$\sup_t|g(t)|\le \int|\hat g(\omega)|\,d\omega\tag{2}$$ considering that $df/dt$ and $\omega\hat f(\omega)$ are related (up to unital complex numbers) by the Fourier transform, so $(1')$ is a special case of $(2)$ when we set $g(t) = df/dt$.

Alex Ortiz
  • Thanks for this interesting answer, and sorry about my late response. For your point one, someone proved here that under some conditions the bound of Eq. 2 will always diverge. Right now I am trying a different approach here. It is really interesting to see how much cancellation could be happening; I had no such intuition before. – Joako Oct 31 '21 at 18:42
  • For point two, even though I believe it is totally right and I get what you are trying to explain, I think it is not quite applicable here, since by introducing the random signs you are creating a function that will behave similarly to a Brownian Motion / Wiener Process, so I think it will have an unbounded Total Variation, which is much worse behaved than the function I am asking about (as I said in point (3)). Maybe because of that, there are better estimates for the max slew rate (I hope). – Joako Oct 31 '21 at 18:47
  • I have now added the end of the question as a second answer, where I reviewed the bound for the Fourier Series, which I approximated with a bound a bit higher than $2N+1$, but the same thing mentioned in your point one holds even when no cancellation is happening (since the Dirichlet Kernel takes every harmonic into consideration), and ironically, the functions of Table 1 behave much better than the compact-supported examples when using Eq. 2 - I am still puzzled why that is so. – Joako Oct 31 '21 at 18:53
  • @AlexOrtiz I have now added a second answer with some results I found interesting, hope you can see them and comment. Thank you very much. – Joako Nov 10 '21 at 10:21

Added a few years later: In this answer I found some improvements to traditional bounds (they aren't discoveries though; there are some references, but with less detail), but finally I realized that a time-limited continuous function can have an unbounded rate of change, as this counterexample shows: $$f(t) = \begin{cases} t\ln(|t|)\exp\left(\dfrac{t^2}{t^2-1}\right),\quad |t|<1 \\ 0,\quad \text{otherwise}\end{cases}$$ which rules out the existence of a universal bound for every possible scenario. I keep the previous answer since it got some interesting results, but it is also full of BS (this was my 1st question in MSE), so sorry in advance, and proceed with caution if you want to use it - please verify the formulas shown here yourself beforehand.


Here is the author of the question with part 3 (part 1 in the question, part 2 in the previous answer), finally with some answers.

Trying to find some alternative bounds to Eq. 2, by a "lucky mistake" while working with the Cauchy-Schwarz Inequality, I found some bounds that improve the result, at least for the functions of Table 2 for which Eq. 2 gives finite results... but use them with caution, because I don't really know why they work, since they are obtained with an "illegal" approach and, as I will explain later, maybe they will not work for every kind of function. How I got them is shown in detail in this other question; here I will only list them and show their results: $$\begin{array}{c} \frac{ \sqrt{\pi} }{4\pi} \left| \sqrt{ \int_{-\infty}^\infty \left| |\omega | (1+4\omega^2) F^2(j\omega) \right| d\omega } \right| \,\,\,\texttt{(Eq. 10)} \\ \frac{ \sqrt{\pi} }{4\pi} \left| \sqrt{ \int_{-\infty}^\infty \left| |\omega | (1+j4 \omega^2) F^2(j\omega) \right| d\omega } \right| \,\,\,\texttt{(Eq. 11)} \\ \frac{ \sqrt{2\pi} }{4\pi} \left| \sqrt{ \int_{-\infty}^\infty \left| \omega^2 (1+2\omega) F^2(j\omega) \right| d\omega } \right| \,\,\,\texttt{(Eq. 12)} \end{array}$$ For the test functions of Table 2, the bound of Eq. 12 showed to be the tightest. But some give finite results even when the function has an unbounded slew rate, so as a method, I first check whether the following bound is finite, and only then apply the bounds of Eq. 10 - Eq. 12: $$ \frac{1}{2\pi}\cdot \frac{4}{5}\Gamma\left(\frac{1}{5}\right)\Gamma\left(\frac{4}{5}\right) \cdot \sup\limits_\omega \left|F(j\omega)\,(1+|\omega |^{2.5})\right|\,\,\,\texttt{(Eq. 13)}$$ The bound of Eq. 13 was obtained through Hölder's Inequality as an improved version of the bound of Eq. 8, so it will be higher than Eq. 2. Also note that the exponent was chosen "experimentally" by working with $f(t) = \cos^2(t\pi/2),\,|t|\leq 1$, so, as I explained in the other question, it could easily be improved if you can work with Meijer G-functions. Now the table of results: $$ \begin{array}{|c:c|c:c|c|c:c:c:c:c|} \hline f(t) & \text{dom}(f) = [a\,;\,b] & \mathbb{F}_{[a\,;\,b]}\{f(t)\}(\omega) & \text{dom}(F(j\omega)) & \max_{a < t < b} |f'(t)| & \frac{1}{2 \pi} \int_{-\infty}^{\infty} |j\omega F(j\omega)|d\omega & Eq. 13 & Eq. 10 & Eq. 11 & Eq. 12 \\ \hline \cos^2(\frac{t\pi}{2}) & [-1; 1] & \frac{\pi^2\sin(\omega)}{(\pi^2\omega-\omega^3)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & 2.28547^* & 6.8973 & 1.9069 & 1.8733 & 1.8708 \\ \hdashline \frac{(1+\cos(t\pi))^2}{4} & [-1; 1] & \frac{3\,\pi^4\sin(\omega)}{(\omega^5-5\pi^2\omega^3+4\pi^4\omega)} & (-\infty; \infty) & \frac{3\sqrt{3}\pi}{8} = 2.0405 & 2.61265^* & 9.9209 & 2.4135 & 2.3876 & 2.3864 \\ \hdashline \sin(\frac{t\pi}{2})\cos^2(\frac{t\pi}{2}) & [-1; 1] & j\frac{16\, \pi^2\, \omega \cos(\omega)}{(16\, \omega^4-40\, \pi^2 \omega^2+9\,\pi^4)} & (-\infty; \infty) & 1.5708 & 1.93647^* & 8.6398 & 1.8215 & 1.8077 & 1.8075 \\ \hdashline \text{sinc}(t\pi)\cos(\frac{t\pi}{2}) & [-1; 1] & \frac{1}{2\pi}\left(\text{Si}(\frac{\pi}{2}-\omega)+\text{Si}(\frac{3\pi}{2}-\omega)+\text{Si}(\frac{\pi}{2}+\omega)+\text{Si}(\frac{3\pi}{2}+\omega)\right) & (-\infty; \infty) & 1.62897 & \infty^* & 7.2904 & 1.9552 & 1.9229 & \textbf{1.9276} \\ \hdashline 1-\sin^4(\frac{t\pi}{2}) & [-1; 1] & \frac{\pi^2(5\pi^2 - 2\,\omega^2) \sin(\omega)}{(\omega^5 - 5\pi^2\omega^3 + 4\pi^4\omega)} & (-\infty; \infty) & \frac{3\sqrt{3}\pi}{8} = 2.0405 & 3.01547^* & 10.8628 & 2.2717 & 2.2362 & 2.2332 \\ \hdashline \sin(|t|\pi) & [-1; 1] & \frac{2\pi(\cos(\omega)+1)}{(\pi^2-\omega^2)}& (-\infty; \infty) & \pi^* (``\textit{jump}\,\textit{disc.}") & 426.324^* & \infty & 11.4467^* & 11.4362^* & 10.3742^* \\ \hline \end{array} $$ Now I will review how to overcome the problem with the other functions of Table 2, because I have had another "lucky strike". I can find the maximum rate of change of a time-limited function $f(t) = x(t)\left(\theta(t-t_0)-\theta(t-t_F) \right)$ in the time domain, avoiding the problem at its boundaries $\partial t = \{t_0,\,t_F\}$, by using: $$ \max_t \left| \frac{df(t)}{dt}\right| \approx \max_t \left| \frac{dx(t)}{dt}\cdot\left(\theta(t-t_0)-\theta(t-t_F) \right)\right|$$

I am going to study this expression. From now on, I will use $\Delta \theta \cong \left(\theta(t-t_0)-\theta(t-t_F) \right)$ as shorthand. Since the Dirac delta function can be defined as $\delta(t) = \frac{d\theta(t)}{dt} = \theta'$, and using the sifting property $x(t)\delta(t-a)=x(a)\delta(t-a)$, the following is true: $$\frac{df(t)}{dt} = \frac{dx(t)}{dt}\Delta\theta + x(t)\Delta\theta'= \frac{dx(t)}{dt}\Delta\theta + x(t)\Delta\delta = \frac{dx(t)}{dt}\Delta\theta + x(t_0)\delta(t-t_0)-x(t_F)\delta(t-t_F)$$ So, $$ \max_t \left| \frac{df(t)}{dt}\right| = \max_t \left| \frac{dx(t)}{dt}\Delta\theta + x(t_0)\delta(t-t_0)-x(t_F)\delta(t-t_F)\right|$$ Here two things can be noted: first, the maximum rate of change of a time-limited function which has any value different from zero at its borders will diverge because of it, clearly answering "Yes" to my conjecture at the end of the question (1st part); second, it says which "thing" I need to subtract in order to work with the required term: $$\begin{array}{r c l} \max\limits_t\left| x'\Delta\theta\right| & = & \max\limits_t \left| \frac{df(t)}{dt} + x(t_F)\delta(t-t_F)-x(t_0)\delta(t-t_0)\right| \\ & = & \max\limits_t \left| \frac{1}{2\pi}\int\limits_{-\infty}^\infty \mathbb{F}_{[t_0,\,t_F]}\left\{\frac{df(t)}{dt} + x(t_F)\delta(t-t_F)-x(t_0)\delta(t-t_0)\right\}e^{j\omega t}d\omega \right| \\ & = & \max\limits_t \left| \frac{1}{2\pi}\int\limits_{-\infty}^\infty \left(j\omega F(j\omega) + x(t_F)\mathbb{F}_{[t_0,\,t_F]}\left\{\delta(t-t_F)\right\}-x(t_0)\mathbb{F}_{[t_0,\,t_F]}\left\{\delta(t-t_0)\right\}\right)e^{j\omega t}d\omega \right| \\ \end{array}$$ Unfortunately, the term $\mathbb{F}_{[t_0,\,t_F]}\left\{\delta(t-a)\right\}$ is not an easily defined one, as is shown in the answers to my question here, but from now on I will use the following without a formal proof, though it has shown to work like magic: $$ \int\limits_{t_0}^{t_F}\delta(t-a)e^{-j\omega t}dt = e^{-j\omega a} \int\limits_{t_0}^{t_F} \delta(t)e^{-j\omega t}dt = e^{-j\omega a}$$ Please assume for now that this is right; note, for comparison, that Wolfram Alpha when evaluating it for $a<b$ gives: $$ \int\limits_{a}^{b}\delta(t)e^{-j\omega t}dt = \begin{cases} 1,\,\,\text{if}\,a<0<b \\ 0,\,\, a<b<0\,\vee \,0<a<b \end{cases} $$ With this, I will have that: $$\begin{array}{r c l} \max\limits_t\left| x'\Delta\theta\right| & = & \max\limits_t \left| \frac{1}{2\pi}\int\limits_{-\infty}^\infty \left( j\omega F(j\omega) + x(t_F)e^{-j\omega t_F}-x(t_0)e^{-j\omega t_0} \right)e^{j\omega t}d\omega \right|\,\,\,\texttt{(Eq. 14)}\\ & \leq & \max\limits_t \frac{1}{2\pi}\int\limits_{-\infty}^\infty \left| j\omega F(j\omega) + x(t_F)e^{-j\omega t_F}-x(t_0)e^{-j\omega t_0} \right| d\omega \\ & \overset{\text{indep. of}\,t}{=} & \frac{1}{2\pi}\int\limits_{-\infty}^\infty \left| j\omega F(j\omega) + x(t_F)e^{-j\omega t_F}-x(t_0)e^{-j\omega t_0} \right|d\omega \,\,\,\texttt{(Eq. 15)} \end{array}$$ Note that this new upper bound of Eq. 15 is going to be the same as Eq. 2 when used with the functions for which Eq. 2 has already worked, since they fulfill $x(t_0) = x(t_F) = 0$.
Now, updating Table 2 with this new bound, you will see that it works perfectly, removing from the frequency-domain spectrum the effect of the discontinuity at the edges of the compact support in time (new results in bold, others for comparison): $$ \begin{array}{|c:c|c:c|c|c:c|} \hline f(t) & \text{dom}(f) = [a\,;\,b] & F(j\omega)=\mathbb{F}_{[a\,;\,b]}\{f(t)\}(\omega) & \text{dom}(F(j\omega)) & \max_{a < t < b} |x'\Delta\theta| & Eq. 15 & Eq. 20 \\ \hline \sqrt{1-t^2} & [-1; 1] & \pi \cdot \frac{J_1(\omega)}{\omega} & (-\infty; \infty) & \infty \,\, (|x'(t_0)|= |x'(t_F)|=\infty)& \infty^* & \infty \\ \hdashline \sin(\frac{t\pi}{2}) & [-1; 1] & -j\frac{8\,\omega\cos(\omega)}{(\pi^2-4\,\omega^2)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & \mathbf{1.8522}^* & \mathbf{1.2564}^*\,(< \max) \\ \hdashline \sin^2(\frac{t\pi}{2}) & [-1; 1] & \frac{(\pi^2-2\,\omega^2)\sin(\omega)}{(\pi^2\,\omega-\omega^3)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & \mathbf{2.28547}^* & 1.8702^* \\ \hdashline \cos^2(\frac{t\pi}{2}) & [0; 1] & j\frac{(\pi^2(1-e^{-j\omega})-2\,\omega^2)}{2\,\omega\,(\omega^2-\pi^2)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & \mathbf{1.85684}^* & \mathbf{1.2320}^*\,(< \max) \\ \hdashline \cos^2(\frac{t\pi}{2}) & [-1; 1] & \frac{\pi^2\sin(\omega)}{(\pi^2\omega-\omega^3)} & (-\infty; \infty) & \frac{\pi}{2} = 1.57079 & 2.28547^* & 1.87029^* \\ \hdashline \frac{(1+\cos(t\pi))^2}{4} & [-1; 1] & \frac{3\,\pi^4\sin(\omega)}{(\omega^5-5\pi^2\omega^3+4\pi^4\omega)} & (-\infty; \infty) & \frac{3\sqrt{3}\pi}{8} = 2.0405 & 2.61265^* & 2.3863^* \\ \hdashline \sin(\frac{t\pi}{2})\cos^2(\frac{t\pi}{2}) & [-1; 1] & j\frac{16\, \pi^2\, \omega \cos(\omega)}{(16\, \omega^4-40\, \pi^2 \omega^2+9\,\pi^4)} & (-\infty; \infty) & 1.5708 & 1.93647^* & 1.8080^*\\ \hdashline \text{sinc}(t\pi)\cos(\frac{t\pi}{2}) & [-1; 1] & \frac{1}{2\pi}\left(\text{Si}(\frac{\pi}{2}-\omega)+\text{Si}(\frac{3\pi}{2}-\omega)+\text{Si}(\frac{\pi}{2}+\omega)+\text{Si}(\frac{3\pi}{2}+\omega)\right) & (-\infty; \infty) & 1.62897 & \mathit{14.0197^*} &1.9271^* \\ \hdashline 1-\sin^4(\frac{t\pi}{2}) & [-1; 1] & \frac{\pi^2(5\pi^2 - 2\,\omega^2) \sin(\omega)}{(\omega^5 - 5\pi^2\omega^3 + 4\pi^4\omega)} & (-\infty; \infty) & \frac{3\sqrt{3}\pi}{8} = 2.0405 & 3.01547^* & 2.2302^* \\ \hdashline \sin(|t|\pi) & [-1; 1] & \frac{2\pi\,(1+\cos(\omega))}{(\pi^2-\omega^2)} & (-\infty; \infty) & \pi^* (``\textit{jump}\,\textit{disc.}") & \mathit{426.324^*} & \mathit{44.2918^*} \\ \hline \end{array} $$ Numbers with $(^*)$ were obtained through numerical integration with the NIntegrate function in Wolfram Alpha, and numbers in italics have values different from the previous tables, since I recently recomputed them after changing the PrecisionGoal parameter.
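For what it's worth, the new $1.8522$ entry for $f(t)=\sin(\pi t/2)$ can be spot-checked with a crude truncated discretization of Eq. 15 (a minimal sketch, assuming NumPy; $F(j\omega)$ is the closed form from the table and the boundary values are $x(\pm 1)=\pm 1$):

```python
import numpy as np

t0, tF = -1.0, 1.0
x0, xF = -1.0, 1.0                                   # boundary values of x(t) = sin(pi*t/2)

# Closed form from the table: F(jw) = -j * 8*w*cos(w) / (pi^2 - 4*w^2)
F = lambda w: -1j * 8 * w * np.cos(w) / (np.pi**2 - 4 * w**2)

# Grid offset so the removable singularities at w = +-pi/2 are never hit exactly
dw = 0.01
w = np.arange(-400.0, 400.0, dw) + 0.0031

# Eq. 15 integrand: | jw F(jw) + x(tF) e^{-jw tF} - x(t0) e^{-jw t0} |
integrand = np.abs(1j * w * F(w) + xF * np.exp(-1j * w * tF) - x0 * np.exp(-1j * w * t0))

eq15 = np.sum(integrand) * dw / (2 * np.pi)
print(eq15)   # ~1.85 with this truncation, versus max_t |f'(t)| = pi/2 ~ 1.5708
```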

In the table it can be seen that the new values for the previously divergent bounds correspond with the values of the other listed functions which behave similarly within the compact support, supporting the validity of this method so far.

But conversely, I have already tried to make upper bounds by the same procedure I used to get Eq. 10 - Eq. 12 with this modified spectrum, and unfortunately I found results which are lower than the maximum rate of change; this is why I said that these bounds have to be used with caution, since their validity, or the conditions under which they work correctly, hasn't been proved yet (and I will not do it since I don't have enough knowledge). Also, as an example, they give a finite value for $f(t) = \sin(|t|\pi)$ even though its derivative has a "jump" discontinuity within its compact support. As an example of these upper bounds with the formula of Eq. 15, I also listed one in the last table under Eq. 20: $$ \frac{ \sqrt{2\pi} }{4\pi} \sqrt{ \int_{-\infty}^\infty \left| \sqrt{1+2\omega}\cdot \left( j\omega\, F(j\omega) + x(t_F)e^{-j\omega t_F}-x(t_0)e^{-j\omega t_0} \right)\right|^2 d\omega } \,\,\,\texttt{(Eq. 20)} $$

At this point, with this review, I can already see that at least any time-limited function for which Eq. 15 is finite will have a bounded maximum slew rate within its compact support, which is a step toward the conditions under which unbounded-bandwidth signals have a limitation on the maximum rate of change they can achieve.

Now, the same procedure used to obtain Eq. 14 can be applied to the second derivative: $$ \frac{d^2f(t)}{dt^2} = \frac{d^2x(t)}{dt^2}\Delta\theta + 2\frac{dx(t)}{dt}\Delta\theta' +x(t)\Delta\theta''$$ Here, the only unknown term is $\Delta\theta'' = \Delta\delta'$, which has the same kind of issues as before with the integral of $\delta(t)$... from the comments of the same question, and also through some properties from Wikipedia (here the Spanish version, since these properties aren't shown directly in the English version, maybe because they are not totally right under all possible interpretations of the Dirac delta function): $$\begin{array}{c} h(t)\delta'(t-a) = h(a)\delta'(t-a)-h'(a)\delta(t-a) \\ \left<\nabla \delta_a,\,\varphi \right> = -\nabla\varphi(a) \Rightarrow \left<\nabla \delta_a,\,e^{-j\omega t}\right> = \int\limits_{t_0}^{t_F} \delta'(t-a)\,e^{-j\omega t}dt = -\frac{d}{dt}\left( e^{-j\omega t}\right)\Big|_{t=a} = j\omega\, e^{-j\omega a} \end{array}$$ Taking these two properties as true, I find that: $$ \frac{d^2f(t)}{dt^2} = \frac{d^2x(t)}{dt^2}\Delta\theta + e^{-j\omega t_0}\left(x'(t_0)+j \omega \, x(t_0) \right) -e^{-j\omega t_F}\left(x'(t_F)+j\omega \, x(t_F) \right)$$ $$ \Rightarrow \frac{d^2x(t)}{dt^2}\Delta\theta = \frac{d^2f(t)}{dt^2} + e^{-j\omega t_F}\left(x'(t_F)+j\omega \, x(t_F) \right)-e^{-j\omega t_0}\left(x'(t_0)+j\omega\, x(t_0) \right) \,\,\,\texttt{(Eq. 16)}$$ which can be used to work with the 2nd derivative within the compact support, avoiding the problems at its edges (here the boundary delta terms are already written through their transforms). With this, allow the following abuse of notation: $$\begin{array}{c} f(t_0) = \lim\limits_{t \to t_0^+} f(t) = x(t_0) \\ f'(t_0) = \lim\limits_{t \to t_0^+} f'(t) = x'(t_0) \\ f(t_F) = \lim\limits_{t \to t_F^-} f(t) = x(t_F) \\ f'(t_F) = \lim\limits_{t \to t_F^-} f'(t) = x'(t_F) \\ \end{array}$$

Now, it is possible to treat the method of Eq. 14 and Eq. 16 as if it were a transform, defined as: $$\begin{array}{l l l} \mathring{\mathbb{F}}\{1\}_{(\omega)} & = & \mathbb{F}_{[t_0,\,t_F]}\{1\}_{(\omega)} =\displaystyle{ \int\limits_{t_0}^{t_F} e^{-j\omega t}\,dt} = \frac{j}{\omega}\cdot\left( e^{-j\omega t_F}- e^{-j\omega t_0}\right) \\ \mathring{\mathbb{F}}\{f(t)\}_{(\omega)} & = & \mathbb{F}_{[t_0,\,t_F]}\{f(t)\}_{(\omega)} =F(j\omega) \\ \mathring{\mathbb{F}}\{f'(t)\}_{(\omega)} & = & j\omega\,F(j\omega) + e^{-j\omega t_F}f(t_F)-e^{-j\omega t_0}f(t_0) \\ \mathring{\mathbb{F}}\{f''(t)\}_{(\omega)} & = & (j\omega)^2\,F(j\omega) + e^{-j\omega t_F}\left(f'(t_F)+j\omega f(t_F)\right)-e^{-j\omega t_0}\left(f'(t_0)+j\omega f(t_0)\right)\,\,\,\,\,\,\,\texttt{(Eq. 17)} \\ \end{array}$$ I haven't seen these transforms before, but probably they already exist, so please tell me how they are named so I can look for references; but from now on, just in case I accidentally found something new, let's call them "Herreros' Transforms" (yes, because of ego $\texttt{XD}$).
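Whatever their proper name, the derivative rules in Eq. 17 amount to integration by parts on the finite interval, so they can be checked numerically for any smooth $x(t)$; a minimal sketch (assuming NumPy/SciPy) checking the first-derivative rule with $f(t)=\cos^2(\pi t/2)$ on $[0,1]$:

```python
import numpy as np
from scipy.integrate import quad

t0, tF = 0.0, 1.0
f  = lambda t: np.cos(np.pi * t / 2) ** 2
df = lambda t: -(np.pi / 2) * np.sin(np.pi * t)        # f'(t)

def wft(g, w, a, b):
    """Windowed transform Int_a^b g(t) e^{-jwt} dt."""
    re = quad(lambda t: g(t) * np.cos(w * t), a, b)[0]
    im = quad(lambda t: -g(t) * np.sin(w * t), a, b)[0]
    return re + 1j * im

for w in [0.7, 3.0, 11.5]:
    direct = wft(df, w, t0, tF)                        # transform of f' computed directly
    rule   = (1j * w * wft(f, w, t0, tF)               # Eq. 17 rule for the first derivative
              + np.exp(-1j * w * tF) * f(tF) - np.exp(-1j * w * t0) * f(t0))
    print(w, direct, rule)                             # the two columns should agree
```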

As said before, this transform will be useful to avoid the discontinuity at the edges of the compact support of time-limited functions, but also, something interesting happens when it is applied to ordinary linear differential equations:

Let $y(t)$ be a function defined by the following equation with initial condition at time $t_i$, let $a, b, c \in \mathbb{C}$ be arbitrary constants, and let us apply this "new" transform, taking advantage of the fact that it inherits the linearity of the Fourier transform: $$\begin{array}{r c l} y'+by+c & = & 0,\,\,\,\,\,y(t_i), \,\,\,\,\,\,\Bigg/ \,\,\mathring{\mathbb{F}}\{\,\,\}\\ \mathring{\mathbb{F}}\{y'\}+b\,\mathring{\mathbb{F}}\{y\}+\mathring{\mathbb{F}}\{c\} & = & 0\\ j\omega\,Y(j\omega)+e^{-j\omega t_F}y(t_F)-e^{-j\omega t_0}y(t_0)+b\,Y(j\omega)+\frac{jc}{\omega}\,\left( e^{-j\omega t_F}- e^{-j\omega t_0}\right) & = & 0\\ Y(j\omega)(j\omega+b)+e^{-j\omega t_F}\left(y(t_F)+\frac{jc}{\omega}\right)-e^{-j\omega t_0}\left(y(t_0)+\frac{jc}{\omega}\right) & = & 0\\ Y(j\omega) & = & \displaystyle{\frac{e^{-j\omega t_0}\left(y(t_0)+\frac{jc}{\omega}\right) - e^{-j\omega t_F}\left(y(t_F)+\frac{jc}{\omega}\right)}{(j\omega+b)}}\\ Y(j\omega) & = & \displaystyle{\frac{e^{-j\omega t_F}\left(j\omega\,y(t_F)-c\right)- e^{-j\omega t_0}\left(j\omega\,y(t_0)-c\right)}{\omega\,(\omega-jb)}} \quad \texttt{(Eq. 18)} \end{array}$$
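
As a hedged illustration of Eq. 18 (the constants $b=1/2$, $c=1$, the interval $[-1,1]$ and $y(t_0)=2$ are picked by me just for this test), the formula can be compared against the direct finite-interval Fourier integral of the exact solution of $y'+by+c=0$:

```python
# Sketch (test case chosen by me): Eq. 18 should reproduce the Fourier transform of the
# time-limited solution of y' + b*y + c = 0 on [t0, tF], with no convolution needed.
import numpy as np
from scipy.integrate import quad

b, c = 0.5, 1.0
t0, tF, y0 = -1.0, 1.0, 2.0
y = lambda t: (y0 + c/b) * np.exp(-b*(t - t0)) - c/b      # exact solution with y(t0) = y0

def finite_ft(g, w):
    re = quad(lambda t: np.real(g(t)*np.exp(-1j*w*t)), t0, tF)[0]
    im = quad(lambda t: np.imag(g(t)*np.exp(-1j*w*t)), t0, tF)[0]
    return re + 1j*im

for w in (0.5, 2.0, 7.3):
    direct = finite_ft(y, w)                               # int_{t0}^{tF} y(t) e^{-j w t} dt
    eq18 = (np.exp(-1j*w*t0)*(y(t0) + 1j*c/w)
            - np.exp(-1j*w*tF)*(y(tF) + 1j*c/w)) / (1j*w + b)
    print(w, abs(direct - eq18))                           # should be ~1e-12
```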

The same procedure can also be done for second-order linear ordinary differential equations: $$\begin{array}{r c l} y'' +ay'+by+c & = & 0,\,\,\,\,\,y(t_i), y'(t_i), \,\,\,\,\,\,\Bigg/ \,\,\mathring{\mathbb{F}}\{\,\,\}\\ \mathring{\mathbb{F}}\{y''\}+a\,\mathring{\mathbb{F}}\{y'\}+b\,\mathring{\mathbb{F}}\{y\}+\mathring{\mathbb{F}}\{c\} & = & 0 \end{array}$$ $$(j\omega)^2\,Y(j\omega) + e^{-j\omega t_F}\left(y'(t_F)+j\omega y(t_F)\right)-e^{-j\omega t_0}\left(y'(t_0)+j\omega y(t_0)\right) +ja\omega\,Y(j\omega)+ae^{-j\omega t_F}y(t_F)-ae^{-j\omega t_0}y(t_0)+bY(j\omega)+\frac{jc}{\omega}\,\left( e^{-j\omega t_F}- e^{-j\omega t_0}\right) = 0 $$ $$ Y(j\omega)(-\omega^2+ja\omega+b)+e^{-j\omega t_F}\left(y'(t_F)+y(t_F)(j\omega+a)+\frac{jc}{\omega}\right)-e^{-j\omega t_0}\left(y'(t_0)+y(t_0)(j\omega+a)+\frac{jc}{\omega}\right) = 0 $$ $$\begin{array}{r c l} Y(j\omega) & = & \displaystyle{\frac{e^{-j\omega t_0}\left(y'(t_0)+y(t_0)(j\omega+a)+\frac{jc}{\omega}\right) - e^{-j\omega t_F}\left(y'(t_F)+y(t_F)(j\omega+a)+\frac{jc}{\omega}\right)}{(ja\omega-\omega^2+b)}}\\ Y(j\omega) & = & \displaystyle{\frac{e^{-j\omega t_F}\left(\omega\,y'(t_F)+\omega\,y(t_F)(j\omega+a)+jc\right)-e^{-j\omega t_0}\left(\omega\,y'(t_0)+\omega\,y(t_0)(j\omega+a)+jc\right)}{\omega\,(\omega^2-ja\omega-b)}} \quad \texttt{(Eq. 19)} \end{array}$$ Here the amazing thing (at least for me) is that the $Y(j\omega)$ obtained by Eq. 18 and Eq. 19 are actually the Fourier transforms of the time-limited versions, with domain $[t_0,\,t_F]$, of the solution functions $y(t)$ for the initial values $y(t_i),\,y'(t_i)$, obtained simply by a formula, without needing to evaluate a CONVOLUTION, which is MUCH easier, at least for me: given $y(t)$, I take its derivatives, form its linear differential equation, and then just use the formulas of Eq. 18 or Eq. 19.
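
A similar sketch can be run for Eq. 19, using $y(t)=\sin^2\left(\frac{t\pi}{2}\right)$ on $[-\frac{3}{4},\frac{1}{4}]$ (one of the cases mentioned below); the identification $y''+\pi^2 y-\frac{\pi^2}{2}=0$, i.e. $a=0$, $b=\pi^2$, $c=-\pi^2/2$, is my own and is only used here for the test:

```python
# Rough numerical check of Eq. 19 (ODE coefficients a, b, c identified by me for
# y(t) = sin^2(pi t / 2), which satisfies y'' + pi^2 y - pi^2/2 = 0).
import numpy as np
from scipy.integrate import quad

a, b, c = 0.0, np.pi**2, -np.pi**2/2
t0, tF = -0.75, 0.25
y  = lambda t: np.sin(np.pi*t/2)**2
yp = lambda t: (np.pi/2)*np.sin(np.pi*t)           # derivative of sin^2(pi t / 2)

def finite_ft(g, w):
    re = quad(lambda t: np.real(g(t)*np.exp(-1j*w*t)), t0, tF)[0]
    im = quad(lambda t: np.imag(g(t)*np.exp(-1j*w*t)), t0, tF)[0]
    return re + 1j*im

for w in (0.8, 2.5, 9.1):                          # avoid w = pi, where b - w^2 = 0
    direct = finite_ft(y, w)
    num = (np.exp(-1j*w*t0)*(yp(t0) + y(t0)*(1j*w + a) + 1j*c/w)
           - np.exp(-1j*w*tF)*(yp(tF) + y(tF)*(1j*w + a) + 1j*c/w))
    eq19 = num / (1j*a*w - w**2 + b)
    print(w, abs(direct - eq19))                   # should be ~1e-12
```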

Also note that for these inhomogeneous linear differential equations for $y(t)$, if $b\neq 0$, their solution is of the form $y(t)=x(t)-c/b$, with $x(t)$ the solution of the homogeneous version $x'+bx=0$ or $x''+ax'+bx=0$ in each case. In these scenarios, since $y(t_0)=x(t_0)-c/b$ and $y(t_F)=x(t_F)-c/b$, and by the linearity of the Fourier transform, I can write the solutions of Eq. 18 and Eq. 19 using the same forms for $Y(j\omega)$ as: $$Y(j\omega) = X(j\omega)\Big|_{c=0}-\frac{jc}{b\omega}\left(e^{-j\omega t_F}-e^{-j\omega t_0}\right)$$ which is useful for comparing the results with the solutions delivered by Wolfram Alpha.

I have already tested these formulas with the functions $y(t)=2\,e^{-\frac{t}{2}},\,t\in [-1,1];\,$ $y(t)=\sin^2\left(\frac{t\pi}{2}\right),\,t\in [-\frac{3}{4},\frac{1}{4}];\,$ $y(t)=e^{-t}\sin(t)+\frac{1}{2},\,t\in [-\pi,4\pi];\,$ $y(t)=\pi e^{-5t},\,t\in [-\pi,-1];\,$ and $y(t)=6\,e^{-t}\cos\left(\frac{t\pi}{2}\right),\,t\in [-3,-2];\,$ and the formulas have worked perfectly, even for domains that do not contain $0$ (a possible issue that could have arisen because of the definitions of the Dirac delta function used), so I am quite confident they work at least for "traditional" functions (a formal proof is still required, and in math "weird functions" could break the assumptions, but for me these checks are enough, and I can't make anything more elaborate than these explanations).

Thinking now about the conditions that will make unlimited-bandwidth signals have a bounded slew rate, I think it is interesting to analyze the case of Eq. 19, since second-order linear differential equations are in widespread use as approximations, as in the harmonic oscillator: their solutions in closed form are already known, but as an example of what can be done, let us look at Eq. 15 using the result of Eq. 19; then we will have that: $$ \max_t |x' \Delta\theta| \leq \frac{1}{2\pi} \int\limits_{-\infty}^\infty \left|\mathring{\mathbb{F}}\{ y'(t)\}_{(\omega)} \right|d\omega $$ with $\mathring{\mathbb{F}}\{ y'(t)\}_{(\omega)} = j\omega\,Y(j\omega) + e^{-j\omega t_F}y(t_F)-e^{-j\omega t_0}y(t_0)$ and $Y(j\omega)$ given by Eq. 19.

So, if we pay attention to Eq. 19, since the angular frequency is defined with $\omega \in \mathbb{R}$, at least in the scenario where the $y'$ term is present (so $a\neq 0$), and when $y(t_0)=y(t_F)=y'(t_0)=y'(t_F)=0$, the integrand will decay like $1/(\omega^2+\text{non-zero terms})$, so the solutions will have a bounded slew rate. I have extended this topic in this and this other question.

So far, with these results I have convinced myself that there really is a physical law hiding in the conditions under which unlimited-bandwidth signals have a bounded slew rate, but I don't have enough knowledge to analyze the convergence of this kind of integral, so I hope someone could take this work, find those conditions, and share them here, since from them I believe better upper bounds could be found, and with them I could at least build many new things.

Nevertheless, as an example, a divergent Eq. 15 upper bound does not directly mean that the function has an unbounded maximum slew rate, which can be seen with the function $y(t)=c_1+t\,c_2,\,\,t_0 \leq t \leq t_F$, which satisfies the equation $y' = c_2$, a constant: using Eq. 18, its upper bound will be the integral of a trigonometric function divided by $\omega$, so the result is infinite, while instead $\max_t |x' \Delta\theta| = |c_2|$, so its slew rate is actually bounded.
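
A small numerical illustration of this remark (the constants $c_2$, the interval, the frequency cutoffs $W$ and the grid are my own choices): the Eq. 15-type bound computed up to a cutoff $W$ keeps growing roughly like $\log W$, while the true slew rate stays at $|c_2|$:

```python
# Sketch (my own constants): for y(t) = c1 + c2*t on [t0, tF] the slew rate is just |c2|,
# but the bound (1/2pi) * int |F_ring{y'}| dw diverges, since |F_ring{y'}(w)| ~ 1/|w|.
import numpy as np
from scipy.integrate import trapezoid

c2 = 3.0
t0, tF = -1.0, 1.0
half = (tF - t0)/2.0

for W in (1e2, 1e3, 1e4):
    w = np.arange(1e-3, W, 0.01)                      # positive frequencies (integrand is even)
    # |F_ring{y'}(w)| = |c2 * int_{t0}^{tF} e^{-j w t} dt| = 2*|c2|*|sin(half*w)|/|w|
    integrand = 2*abs(c2)*np.abs(np.sin(half*w))/w
    bound = 2*trapezoid(integrand, w)/(2*np.pi)       # factor 2 adds the negative-frequency half
    print(W, bound)                                   # grows like log(W), while max|y'| = 3
```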

Also note that the transforms of Eq. 17 allow extending the traditional properties of the solutions of linear differential equations to these time-limited functions, so I wonder if this allows working with time-limited functions in general, or if it can be extended to any kind of time-limited differential equation, as if they formed a mathematical space like the space of bump functions $C_c^\infty$, so I leave the question here. I don't know enough about this topic, but thinking of how probability spaces are built from càdlàg CDFs, maybe it is possible to think of them as piecewise functions once their border discontinuities are removed.

But unfortunately, I don't believe the transforms of Eq. 17 are a method to solve time-limited differential equations; instead, they are a method to extract time-limited "pieces" of an already determined solution: if I have a differential equation with a determined solution $y(t)$ defined by its initial conditions at time $t_i$ (say $y(t_i)$, $y'(t_i)$,... etc.), then I can choose any points $(t_0, y(t_0))$ and $(t_F, y(t_F))$ which lie on that specific solution $y(t)$ and use them in the formulas shown above; but if I pick arbitrary numbers to fill $y(t_0)$ and $y(t_F)$, I believe the result will be garbage (not sure whether they would still lie on the specific $y(t)$ or not; maybe an iterative method to find them could be made, but I am not looking for that).

A "true" time-limited differential equation, since its solution has compact support, cannot have an analytic solution, so it can't be solved through a power series, nor be represented by a linear differential equation; the transforms of Eq. 17 will only remove the discontinuity at its domain borders after the solution has already been found (and if its values at the borders are finite). Nevertheless, maybe they could be useful to find terms for making some "matchings" among the equations' constants (I hope). I believe that, as an example of a "true" time-limited differential equation, one can think of things like: $$ \frac{y''}{y'}+\frac{y'}{y}+\frac{2t+1}{t^2}=0,\,\,t_i = \frac{1}{1+\log(2)}\approx 0.59 ,\,\, y(t_i)=1,\,\, y'(t_i)=-(\log(2)+1)^2 \approx -2.86 $$ which has as solution $$ y(t)= \sqrt{e^{\frac{1-t}{t}}-1} $$ that lives in the reals only for $t \in (0,1]$.
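
For what it is worth, a quick numerical check (step size and sample points chosen by me) that this $y(t)$ really satisfies the equation and the stated values at $t_i$:

```python
# Sanity check that y(t) = sqrt(exp((1-t)/t) - 1) satisfies y''/y' + y'/y + (2t+1)/t^2 = 0
# on (0, 1), using central finite differences (h and the sample points are my choices).
import numpy as np

y = lambda t: np.sqrt(np.exp((1 - t)/t) - 1)

def residual(t, h=1e-4):
    yp  = (y(t + h) - y(t - h))/(2*h)                 # central-difference y'
    ypp = (y(t + h) - 2*y(t) + y(t - h))/h**2         # central-difference y''
    return ypp/yp + yp/y(t) + (2*t + 1)/t**2

ti = 1/(1 + np.log(2))
print(y(ti), (y(ti + 1e-6) - y(ti - 1e-6))/2e-6)      # ~1 and ~-(1+ln 2)^2 ~ -2.866
for t in (0.3, 0.59, 0.8, 0.95):
    print(t, residual(t))                             # residuals should be close to 0
```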

I don't know if these results are already known, or if I accidentally discovered them; in the latter case, please tell me so I can look for someone who can formalize them (thinking of mathematicians I met at the university many years ago), and if you would like to take this introduction further yourself, I hope you consider me as last author; it could help me if I try to get back into research. My profile is here.

If an unemployed engineer from a third-world country, with too much free time, can discover this much by himself here, it means that there is actually a lot to do related to this problem… I hope I have motivated you to try to find these $\text{mysterious conditions}\,\mathbb{X}$, prove the validity of the methods and bounds above, use these tools to work with time-limited functions (maybe avoiding convolutions), and extend them; as an example, maybe the transform $\mathring{\mathbb{F}}\{\,\}$ could be used to find new constants for the Kallman-Rota or the Landau-Kolmogorov-Hadamard inequalities.

I believe that at least any time-limited function for which: $$ \int\limits_{-\infty}^\infty \left|\mathring{\mathbb{F}}\left\{f'(t) \right\}_{(\omega)}\right| d\omega < \infty $$ will have a bounded maximum slew rate; these are the $\text{mysterious conditions}\,\mathbb{X}$ I keep looking for here.

For all your comments and help, thank you very much, and especially to user @LL3.14.

Joako

This is the second part of the question, with an introduction and motivation about the topic - I have added a new answer with actual results I have found here.

I am the author of the question, extending it here because I ran out of space.

The Fourier series of a periodic function $x(t)$ with period $T$ is given by: $$\begin{array}{r c l} sx(t) & = & \sum\limits_{k=-\infty}^{\infty} a_k \, e^{j \omega t},\, \omega = k\,\omega_0,\, \omega_0 = \frac{2\pi}{T} \\ \text{with}\,\,\,a_k & = & \frac{1}{T}\int\limits_T\,x(t)\,e^{-j \omega t}\, dt \\ \end{array}$$ When starting to solve the problem, I really started from here: what is the worst possible "basic scenario" of infinite slew rate, or "jumps", extendable to any other situation? I believe it is the rectangular function with an arbitrary amplitude, since any weird function could, in the limit, be made of "infinitely-thin steps" (here, transformed into delta functions), and any possible slew rate can be obtained by changing its height. But to use the Fourier series, instead of working with the rectangular function, I will work with the symmetric square wave, whose first period is defined by (following the notation of Chapter 4 of [1]): $$x(t) = \begin{cases} A\,,\,\text{if}\,\,\,0 \leq |t| \leq T_1 \\ 0\,,\,\text{if}\,\,T_1 < |t| \leq T \end{cases} $$ For this signal, the Fourier coefficients are given by: $$\begin{array}{r c l} a_k & = & \frac{2A}{T}\cdot\frac{\sin(\omega T_1)}{\omega} \\ \Rightarrow sx(t) & = & \frac{2A}{T}\sum\limits_{k=-\infty}^{\infty} \frac{\sin(\omega T_1)}{\omega}\cdot e^{j \omega t} \\ \end{array}$$ Now, to study the maximum possible rate of change, I will truncate the series of the square wave at its components with $|k| \leq N$ ($N>0$), and take its derivative with respect to $t$: $$\begin{array}{r c l} sx_N(t) & = & \frac{2A}{T} \sum\limits_{k=-N}^{N} \frac{\sin(\omega T_1)}{\omega}\cdot e^{j \omega t} \\ \Rightarrow y_N(t) = \frac{d}{dt}sx_N(t) & = & j\frac{2A}{T} \sum\limits_{k=-N}^{N} \sin(\omega\,T_1)\, e^{j \omega t} \\ \end{array}$$ Finding the maximum rate of change is equivalent to studying $\max_t\,|y_N(t)|$, and for this we can expand $y_N(t)$ using $\sin(x) = \frac{1}{2j}(e^{jx}-e^{-jx})$: $$\begin{array}{r c l} y_N(t) & = & \frac{A}{T} \sum\limits_{k=-N}^{N} e^{j \omega t}\cdot \left( e^{j \omega T_1} - e^{-j \omega T_1} \right)\\ & = & \frac{A}{T}\cdot \left( \underbrace{\sum\limits_{k=-N}^{N} e^{j \omega (t+T_1)}}_{\text{Dirichlet Kernel}} -\underbrace{\sum\limits_{k=-N}^{N} e^{j \omega (t-T_1)}}_{\text{Dirichlet Kernel}} \right)\\ & = & \frac{A}{T}\cdot \left( \frac{\sin\left((2N+1)\cdot\frac{\omega_0}{2}\cdot(t+T_1)\right)}{\sin\left( \frac{\omega_0}{2}\cdot(t+T_1)\right)}-\frac{\sin\left((2N+1)\cdot\frac{\omega_0}{2}\cdot(t-T_1)\right)}{\sin\left( \frac{\omega_0}{2}\cdot(t-T_1)\right)}\right)\,\,\,\,\texttt{(Eq. 9)} \\ \end{array}$$ This function $\sin((2N+1)\,x)/\sin(x)$, named the "Dirichlet kernel" [15], is an even periodic function whose principal period "looks like" a high-frequency sinc function, with a main lobe that attains a maximum value of $(2N+1)$ [16]. Since both Dirichlet kernels of Eq. 9
cannot attain their maximum magnitude at the same time, I will work with the limit case $t \to -T_1$: $$ \Rightarrow y_N^* = \frac{A}{T}\cdot \left( 2N+1-\frac{\sin\left((2N+1)\cdot\omega_0 T_1\right)}{\sin\left( \omega_0 T_1\right)}\right) $$ Since the remaining term is negative, the maximum possible value of $y_N^*$ will be attained at the minimum value of $\sin((2N+1)\,x)/\sin(x)$: graphically, it can be seen that this minimum is attained on the first negative lobes, which move along the curve $-1/\sin(x)$ when changing $N$; then, setting an equality for the first negative lobe to the right ($x>0$): $$ \frac{\sin\left((2N+1)\,x\right)}{\sin(x)} = -\frac{1}{\sin(x)} \Rightarrow \sin\left((2N+1)\,x\right) = -1 \Rightarrow (2N+1)\,x = \frac{3\pi}{2} \Rightarrow x^*= \frac{3\pi}{2\,(2N+1)} $$ Now, using the small-angle approximation $\sin(x) \approx x$ and replacing with $x^*$, I can make an approximation for the minimum value: $$ \min_{0<x<2\pi}\left\{\frac{\sin\left((2N+1)\,x\right)}{\sin(x)}\right\} \approx -\frac{1}{\sin(x^*)} \approx -\frac{1}{x^*} = - \frac{2\,(2N+1)}{3\pi} = y^*$$ where the actual minimum is "a bit lower" than $y^*$. With this, I can make a lower bound for the maximum rate of change: $$\Rightarrow y_N^{LB} = \frac{A}{T}\cdot\left(2N+1+\frac{2\,(2N+1)}{3\pi}\right)= \frac{A}{T}\cdot\left(1+\frac{2}{3\pi}\right)\cdot\left(2N+1\right) < \max_t |y_N(t)|$$ Here, if $N \to \infty$ then $\max_t |y_N(t)| \to \infty$, and this is why "infinite bandwidth" signals could achieve an infinite maximum rate of change.

Similarly, since the amplitude of the negative lobe of the Dirichlet kernel is always smaller than the main-lobe amplitude, an upper bound for the maximum rate of change can be made as: $$\Rightarrow y_N^{UB} = \frac{2A}{T}\cdot\left(2N+1\right) > \max_t |y_N(t)|$$ From here, no matter how large $N$ is, if the signal is band-limited, then the signal will have a finite maximum rate of change. Now, thinking about what would happen if I suppress a "symmetric" band of frequencies (the positive and the corresponding negative frequencies), starting at an arbitrary index $c$ and ending at the index $d-c+1$, and using that: $$ \sum\limits_{k = c}^{d-c+1} b_k = \sum\limits_{k = 1}^{d-c+1} b_k -\sum\limits_{k = 1}^{c-1} b_k $$ the estimated maximum rate of change of this "cropped signal" will be given by: $$ \begin{array}{r c l} y_N^\text{cut} (t) & = & \frac{A}{T}\cdot\left(1+\frac{2}{3\pi}\right)\cdot\left\{2N+1-\left[2\cdot(d-c+1)+1-\left(2\cdot(c-1)+1 \right)\right] \right\}\\ & = & \frac{A}{T}\cdot\left(1+\frac{2}{3\pi}\right)\cdot\left\{2N+4c-2d-3\right\}\\ \end{array}$$ So, it doesn't matter how many components I take out of the Fourier series: in the $N \to \infty$ scenario the maximum rate of change can still be infinite, even if I choose $d \gg 0$ and the Fourier coefficients have already decayed to "near-zero values" long before (because of the Riemann-Lebesgue lemma) and I am adding just "almost-zero" values $\ll 0.1$; since there are still infinitely many of them, they could add up to a constant (bounded maximum rate of change), or add up to infinity (unbounded maximum rate of change). This is why I believe these $\text{mysterious conditions}\,\mathbb{X}$ will be (totally) related to the decay of the Fourier spectrum.
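
To see these two bounds next to the actual maximum, here is a rough numerical comparison; the grid, the values of $N$, and the choice $\omega_0 T_1 = \frac{3\pi}{2(2N+1)}$ (the near-worst-case pulse width assumed by the lower-bound argument above) are my own:

```python
# Sketch (grid and parameters chosen by me): compare max_t |y_N(t)| from Eq. 9 with the
# lower bound (A/T)(1+2/(3*pi))(2N+1) and the upper bound (2A/T)(2N+1) derived above.
import numpy as np

A, T = 1.0, 2*np.pi
w0 = 2*np.pi/T                                        # = 1 with this choice of T

for N in (5, 20, 80):
    x_star = 3*np.pi/(2*(2*N + 1))                    # w0*T1 used in the lower-bound argument
    T1 = x_star/w0
    k = np.arange(1, N + 1)
    t = np.concatenate([np.linspace(-T/2, T/2, 50001), [-T1, T1]])
    # real form of the derivative series: y_N(t) = -(4A/T) * sum_k sin(k w0 T1) sin(k w0 t)
    yN = -(4*A/T) * np.sin(np.outer(t, k*w0)) @ np.sin(k*w0*T1)
    lb = (A/T)*(1 + 2/(3*np.pi))*(2*N + 1)            # claimed lower bound
    ub = (2*A/T)*(2*N + 1)                            # claimed upper bound
    print(N, round(lb, 3), round(np.max(np.abs(yN)), 3), round(ub, 3))   # expect lb < max < ub
```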

Unfortunately, as an engineer I don't have the mathematical toolbox to analyze the decay of functions, and so far, neither the cleverness to find a "good" upper bound for the maximum rate of change of time-limited functions with bounded slew rate, nor approximations of it: I already tried using $A_0 = \int_{-\infty}^{\infty} |F(j\omega)|d\omega \rightarrow A_0\cdot \int_{-\infty}^{\infty} \left|\omega \frac{F(j\omega)}{A_0}\right|d\omega = A_0 \cdot E_F[\omega]$, a weighted expected value, and didn't find any bound; I also tried to think of the exponential and polynomial parts of $\int_{-\infty}^{\infty} j\omega\, e^{j\omega t} F(j\omega)\,d\omega$ as a Gamma function through a change of variables, unsuccessfully, and right now I am stuck trying to find an approximation through the stationary-phase method here.

After two months and one and a half notebooks of dead ends I am out of ideas, but it was really interesting to find that known properties kept popping up through my attempts to find a solution, and I hope that beginners can get interested in these questions and learn as much as I did writing this, about integral norms, the intuition behind the total variation, compactly supported and bump functions, different definitions of the Fourier transform with finite integration limits, etc., so please share it with your teachers or department coworkers to see if they get involved in finding a solution.

And for those of you who are mathematicians or physicists, maybe the question seems obvious; but believe me, it isn't for engineers. Realizing that really smart people have run after this question before, I believe that the successful results of Weierstrass finding continuous functions that are nowhere differentiable, Brownian motion having infinite total variation, fractals, the Fabius function which is smooth but nowhere analytic, the success of topology in explaining things by being as general as possible, and the previous result that unlimited-bandwidth signals could have an infinite maximum rate of change, have moved mathematicians and scientists away from studying these more specific signals which, besides being time-limited, also have bounded slew rate. Finding these $\text{mysterious conditions}\,\mathbb{X}$ could lead to bounds useful for engineers (as exist for band-limited functions, or maybe through optimization), and even more, if physical phenomena could be described under these conditions, then a physical law would also have been found (to keep physics discussions out of here I left the physical motivation and possible applications in this question): like the property that says that "if a function is continuous and compactly supported, then it is bounded" [16], it would be awesome to find that if a function follows the $\text{mysterious conditions}\,\mathbb{X}$ then its maximum rate of change is bounded by $\text{(insert bound here)}$.

Or conversely, given my limited mathematical knowledge, and since I proved to myself that time-limited functions with bounded maximum slew rate do exist (it wasn't obvious to me), at least for me, assuming that "because infinite-bandwidth signals could achieve infinite max slew rate" $\Rightarrow$ "there are no conditions under which time-limited functions achieve bounded max slew rates" (so the examples are just "happy coincidences") would be falling into a logical fallacy (I believe it is named hasty generalization). So, if you could prove that these $\text{mysterious conditions}\,\mathbb{X}$ are nonexistent, that would also be great, since then I could move on from trying to solve this problem.

I hope you can join me in working on this question, so if you are taking this seriously, please share with me which department you are from, so I can start following your results. And if you believe that nothing can be done, I can tell you that an incredibly smart person has already proved (I think) that, at least for one-variable real compactly-supported functions with $f(t_0) = f(t_F) \neq 0$, the upper bound of Eq. 2 will always diverge (you can check this here), and unbelievably it was done in half an hour!!... so I am very hopeful that interesting results can be achieved with your help. For reading this far, thank you very much.

Joako

This is all idealistic math. Try searching the web for the approximation quality of a measured function under the Fourier transform, to gather ideas about what the real problems of the FT are in real-world applications. For example, the step function cannot be Fourier-transformed cleanly, and there is overshoot, among other issues. WolframAlpha can do nice FTs more easily than this community.

Generalization from the unit step function is really hard and has been done by various authors; putting it all together is worth more than a bounty. The FT has two approaches, infinite or finite interval, and the variants of continuous or discrete spectrum. A continuous spectrum is more flexible for the function being transformed, but the discrete spectrum is closer to real measurements. The parameters of an FT problem set (the length of the interval, the sampling rate, and the curvature changes of the function under measurement) are subject to many multivariate, multiscale, multidimensional analyses, since all internal and external conditions may change during measurement and the FT.

Answers within the scope of this community should focus on questions that address subranges of the full problem set.

  • Thanks for the answer. Actually I asked, I think, a naive question based on intuition, and it has grown a lot because I have actually been finding some results. About the realization of the FT, as an electrical engineer I have seen a few issues, like the noise due to the discretization of pixels in an LCD screen, which acts as a holographic grating generating copies of the laser beam. The question is indeed for the "ideal world", and only focused on time-limited functions (so it is just one sample for all time), and the bounty is because I want to know whether this $\mathring{\mathbb{F}}$ transform already exists or not. – Joako Nov 16 '21 at 20:45
  • By time-limited, I mean that the step function is not really "made"; it is just there to model that the function has a start and an end in time, and it is problematic in the Fourier domain, as you pointed out. That is why I am working with this transform, which allows one to avoid those issues.... I hope this better explains why I am asking this; I have learned a lot here so far through the comments of the community. – Joako Nov 16 '21 at 20:48