I am trying to understand notation for which I cannot find an explanation in the book... It says: Since $$ (f_1(\mathbf{x}), \ldots, f_n(\mathbf{x})) = A (\mathbf{x} - \mathbf{x}_0) + O(|\mathbf{x} - \mathbf{x}_0 |^2) $$ we have $$ |\mathbf{x} - \mathbf{x}_0| \leq \| A^{-1} \| (|(f_1(\mathbf{x}), \ldots, f_n(\mathbf{x}))| + C|\mathbf{x} - \mathbf{x}_0 |^2 ). $$ Here $A$ is an invertible real matrix, $|\cdot |$ denotes the $L^2$ norm on $\mathbb{R}^n$, and $\mathbf{x}_0$ is a fixed vector. What exactly does $\| A^{-1}\|$ mean in this context? Thank you for the clarification.
-
To @JohnnyT: Which book are you reading? – Anton Vrdoljak Dec 27 '23 at 13:08
-
Context matters here; surely the reference defines the notation it uses. There are several "natural" norms on matrix spaces. Given the use of the $L^2$ norm on $\mathbb R^n$, perhaps they mean something analogous on matrix spaces, but, as I say, the book will state somewhere exactly what they mean. – lulu Dec 27 '23 at 13:14
-
It is Hörmander: The Analysis of Linear Partial Differential Operators I. I just cannot find the definition... – Johnny T. Dec 27 '23 at 13:21
-
Most likely $\| A^{-1} \|$ denotes the operator norm of $A^{-1}$. – jd27 Dec 27 '23 at 13:22
-
@jd27 When you hover over the "Add comment" button, it displays a tooltip. "Use comments to ask for more information or suggest improvements. Avoid answering questions in comments." – Martin Brandenburg Dec 27 '23 at 13:27
2 Answers
Most likely it is the induced norm (also called the operator norm or spectral norm) arising from the $L^2$ norm on vectors,
$$\lVert B \rVert = \sup_{\lvert \mathbf x \rvert = 1} \lvert B \mathbf x \rvert.$$
It can be computed explicitly as the square root of the largest eigenvalue of $B^T B$.
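As a quick numerical illustration (my own NumPy sketch, not part of the original answer), the spectral norm returned by `np.linalg.norm(B, ord=2)` agrees with the eigenvalue characterization above:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))  # an arbitrary 3x3 matrix for illustration

# Spectral (operator) norm: sup of |Bx| over unit vectors x
op_norm = np.linalg.norm(B, ord=2)

# Equivalently, the square root of the largest eigenvalue of B^T B
eigs = np.linalg.eigvalsh(B.T @ B)  # eigenvalues in ascending order
assert np.isclose(op_norm, np.sqrt(eigs[-1]))
```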

I am sure that Hörmander has defined which matrix norm $\lVert B \rVert$ he uses in his book.
But certainly any matrix norm satisfying $$\lvert B \cdot \mathbf x \rvert \le \lVert B \rVert \cdot \lvert \mathbf x \rvert \tag{1}$$ gives the desired result. Condition $(1)$ says that the matrix norm $\lVert - \rVert$ is consistent with the vector norm $\lvert - \rvert$.
Given any vector norm, the operator norm $$\lVert B \rVert = \sup \left\{ \frac{\lvert B \cdot \mathbf x \rvert}{\lvert \mathbf x \rvert} \mid \mathbf x \ne 0 \right\}$$ always has this property.
However, there are many other examples of consistent matrix and vector norms. I guess that Hörmander uses the Euclidean norm (aka Frobenius norm) $$\lVert B \rVert = \sqrt{\sum_{i,j} b_{ij}^2} .$$
This norm is submultiplicative, which means that $$\lVert B \cdot C \rVert \le \lVert B \rVert \cdot \lVert C \rVert. $$
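Submultiplicativity is easy to check numerically (a hedged sketch of my own; `np.linalg.norm` with no `ord` argument returns the Frobenius norm for matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))

# Frobenius norm is NumPy's default matrix norm (ord=None)
assert np.linalg.norm(B @ C) <= np.linalg.norm(B) * np.linalg.norm(C)
```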
Taking any matrix norm, we can define an associated vector norm by $$\lvert \mathbf x \rvert_{ass} = \left \lVert M(\mathbf x,n) \right \rVert .$$ Here we regard $\mathbf x$ as a column vector and form the matrix $M(\mathbf x,n)$ having $\mathbf x$ in all $n$ columns.
If the matrix norm is submultiplicative, then it is clearly consistent with its associated vector norm.
Working with the Euclidean matrix norm gives $$\lvert \mathbf x \rvert_{ass} = \sqrt n \, \lvert \mathbf x \rvert .$$ Therefore $$\lvert B \cdot \mathbf x \rvert = \frac{1}{\sqrt n} \lvert B \cdot \mathbf x \rvert_{ass} \le \frac{1}{\sqrt n} \lVert B \rVert \cdot \lvert \mathbf x \rvert_{ass} = \lVert B \rVert \cdot \lvert \mathbf x \rvert .$$
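The resulting consistency inequality $\lvert B \cdot \mathbf x \rvert \le \lVert B \rVert \cdot \lvert \mathbf x \rvert$ for the Frobenius norm can likewise be verified numerically (again my own sketch, not part of the answer):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# Frobenius matrix norm is consistent with the Euclidean vector norm:
# |Bx| <= ||B||_F * |x|
assert np.linalg.norm(B @ x) <= np.linalg.norm(B) * np.linalg.norm(x)
```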
