2

The example is given below:

[image: the worked example from the book]

But I do not understand the details of calculating $\phi_{BB}(\alpha_{v})$, could anyone explain this for me please?

The definition of $\phi_{BB}(\alpha_{v})$ is given below:

[image: the book's definition of $\phi_{BB}(\alpha_{v})$]

EDIT: I mean, how does the given definition of the linear transformation affect the matrix?

Intuition
  • 3,269

5 Answers

3

With the equations of $\alpha_v$:

Let $\:w={}^{\mathrm t\mkern-1.5mu}(x, y,z)$. The coordinates of $v\times w$ are obtained as the cofactors of the determinant (along the first row):

$$\begin{vmatrix} \vec i&\vec j&\vec k \\ a_1&a_2 & a_3 \\ x&y&z \end{vmatrix} \rightsquigarrow \begin{pmatrix} a_2z-a_3y\\a_3x-a_1z \\a_1y-a_2x \end{pmatrix}=\begin{pmatrix} 0&-a_3&a_2\\a_3& 0 &-a_1 \\ -a_2 &a_1&0 \end{pmatrix}\begin{pmatrix} x \\y\\z \end{pmatrix}$$
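A quick way to double-check this identity is to verify it symbolically. Here is a short SymPy sketch (just a verification aid, not part of the derivation above):

```python
# Check symbolically that the skew-symmetric matrix reproduces v x w.
import sympy as sp

a1, a2, a3, x, y, z = sp.symbols('a1 a2 a3 x y z')
v = sp.Matrix([a1, a2, a3])
w = sp.Matrix([x, y, z])

M = sp.Matrix([[  0, -a3,  a2],
               [ a3,   0, -a1],
               [-a2,  a1,   0]])

print(M * w - v.cross(w))  # Matrix([[0], [0], [0]])
```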

Bernard
  • 175,478
  • what about if we have 4 $2 \times 2$ matrices ..... I will insert the link of the question in a comment. – Intuition Sep 19 '19 at 03:42
2

The details probably come in the proof of Theorem 8.1 (which you should read).

Let $B = (v_1,\dots,v_n)$ and $D = (w_1,\dots,w_k)$ be the given bases. Suppose that $\alpha\in\operatorname{Hom}(V,W)$. For each $i$ in $1,\dots,n$ there exist scalars $\phi_{1i},\dots,\phi_{ki} \in F$ such that $$ \alpha(v_i) = \phi_{1i}w_1 + \phi_{2i}w_2 + \dots + \phi_{ki} w_k $$ Set $\Phi_{BD}(\alpha)$ to be the $k\times n$ matrix whose $(j,i)$ entry is $\phi_{ji}$; that is, the $i$-th column of $\Phi_{BD}(\alpha)$ records the coordinates of $\alpha(v_i)$ with respect to $D$.

Now we come to angryavian's suggestion. Here $V = W = \mathbb{R}^3$, and $B = D = (e_1,e_2,e_3)$. Moreover, $\alpha(w) = v \times w$ for a fixed $v = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}$. So you need to find the coefficients of $\alpha(e_1)$, $\alpha(e_2)$ and $\alpha(e_3)$ in the basis $(e_1,e_2,e_3)$.

1

The first column of the matrix is $v \times \begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix}$, the second column is $v \times \begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix}$, and the third is $v \times \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}$.

angryavian
  • 89,882
1

If $B = \{e_1,\dots,e_n\}$ and $D = \{f_1,\dots,f_m\}$ and $T$ is a linear transformation, then $\Phi_{BD}(T)$ is obtained by applying $T$ to each element of $B$ and writing the result in terms of $f_1,\dots,f_m$. That is, if

$$ T(e_j) = \sum_{i=1}^m a_{i,j}f_i, $$

then the $j$-th column of $\Phi_{BD}(T)$ is

$$ \begin{bmatrix} a_{1,j} \\ a_{2,j} \\ \vdots \\ a_{m,j} \end{bmatrix}. $$

For example, $\alpha_v(e_1) = v \times e_1 = [0,a_3,-a_2]^T = 0e_1 + a_3e_2 -a_2e_3$ so the first column of $\Phi_{BB}(\alpha_v)$ is $[0,a_3,-a_2]^T$.
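If it helps, here is a small SymPy sketch of the same column-by-column recipe (my own illustration, assuming the standard basis $B = (e_1, e_2, e_3)$, not part of the answer):

```python
# The j-th column of Phi_BB(alpha_v) is v x e_j written in the standard basis.
import sympy as sp

a1, a2, a3 = sp.symbols('a1 a2 a3')
v = sp.Matrix([a1, a2, a3])
e = [sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0]), sp.Matrix([0, 0, 1])]

columns = [v.cross(ej) for ej in e]   # alpha_v(e_j) = v x e_j
Phi = sp.Matrix.hstack(*columns)      # assemble the columns into the matrix
print(Phi)  # Matrix([[0, -a3, a2], [a3, 0, -a1], [-a2, a1, 0]])
```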

Trevor Gunn
  • 27,041
1

Part of the problem is that Proposition 8.1 is not a definition. It doesn't tell you what $\Phi_{BD}$ is, or how to compute it. It simply asserts existence.

It's also not particularly well-stated as a proposition, since it asserts the existence of a family of isomorphisms based on pairs of bases $(B, D)$ on $V$ and $W$ respectively, but doesn't specify any way in which said isomorphisms differ. If you could find just one (out of the infinitely many) isomorphisms between $\operatorname{Hom}(V, W)$ and $M_{k \times n}(F)$ (call it $\phi$), then letting $\Phi_{BD} = \phi$ would technically satisfy the proposition, and constitute a proof!

Fortunately, I do know what the proposition is getting at. There is a very natural map $\Phi_{BD}$, taking a linear map $\alpha : V \to W$, to a $k \times n$ matrix.

The fundamental, intuitive idea behind this map is the idea that linear maps are entirely determined by their action on a basis. Let's say you have a linear map $\alpha : V \to W$, and a basis $B = (v_1, \ldots, v_n)$ of $V$. That is, every vector $v \in V$ can be expressed uniquely as a linear combination of the vectors $v_1, \ldots, v_n$. If we know the values of $\alpha(v_1), \ldots, \alpha(v_n)$, then we essentially know the value of $\alpha(v)$ for any $v$, through linearity. The process involves first finding the unique $a_1, \ldots, a_n \in F$ such that $$v = a_1 v_1 + \ldots + a_n v_n.$$ Then, using linearity, $$\alpha(v) = \alpha(a_1 v_1 + \ldots + a_n v_n) = a_1 \alpha(v_1) + \ldots + a_n \alpha(v_n).$$

As an example of this principle in action, let's say that you had a linear map $\alpha : \Bbb{R}^2 \to \Bbb{R}^3$, and all you knew about $\alpha$ was that $\alpha(1, 1) = (2, -1, 1)$ and $\alpha(1, -1) = (0, 0, 4)$. What would be the value of $\alpha(4, 2)$?

To solve this, first express $$(4, 2) = 3(1, 1) + 1(1, -1)$$ (note that this linear combination is unique, since $((1, 1), (1, -1))$ is a basis for $\Bbb{R}^2$, and we could have done something similar for any vector, not just $(4, 2)$). Then, $$\alpha(4, 2) = 3\alpha(1, 1) + 1 \alpha(1, -1) = 3(2, -1, 1) + 1(0, 0, 4) = (6, -3, 7).$$

There is a converse to this principle too: if you start with a basis $(v_1, \ldots, v_n)$ for $V$, and pick an arbitrary list of vectors $(w_1, \ldots, w_n)$ from $W$ (not necessarily a basis), then there exists a unique linear transformation $\alpha : V \to W$ such that $\alpha(v_i) = w_i$. So, you don't even need to assume an underlying linear transformation exists! Just map the basis vectors wherever you want in $W$, without restriction, and there will be a (unique) linear map that maps the basis in this way.
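(Side note, not from the book: the $\alpha(4, 2)$ computation above is easy to replicate numerically, which can be a handy sanity check. A NumPy sketch of the two steps, find the coefficients, then use linearity:)

```python
# Recover alpha(4, 2) knowing only alpha(1, 1) and alpha(1, -1).
import numpy as np

B = np.array([[1.0, 1.0],
              [1.0, -1.0]]).T              # basis vectors (1,1), (1,-1) as columns
coeffs = np.linalg.solve(B, [4.0, 2.0])    # (4,2) = 3*(1,1) + 1*(1,-1)

alpha_on_basis = np.array([[2.0, -1.0, 1.0],   # alpha(1, 1)
                           [0.0,  0.0, 4.0]])  # alpha(1, -1)
print(coeffs)                   # [3. 1.]
print(coeffs @ alpha_on_basis)  # [ 6. -3.  7.]
```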

That is, if we fix a basis $B = (v_1, \ldots, v_n)$ of $V$, then we can make a bijective correspondence between the linear maps from $V$ to $W$, and lists of $n$ vectors in $W$. The map $$\operatorname{Hom}(V, W) \to W^n : \alpha \mapsto (\alpha(v_1), \ldots, \alpha(v_n))$$ is bijective. This is related to the $\Phi$ maps, but we still need to go one step further.

Now, let's take a basis $D = (w_1, \ldots, w_m)$ of $W$. That is, each vector in $W$ can be uniquely written as a linear combination of $w_1, \ldots, w_m$. So, we have a natural map taking a vector $$w = b_1 w_1 + \ldots + b_m w_m$$ to its coordinate column vector $$[w]_D = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}.$$ This map is an isomorphism between $W$ and $F^m$; we lose no information if we choose to express vectors in $W$ this way.

So, if we can express linear maps $\alpha : V \to W$ as a list of vectors in $W$, we could just as easily write this list of vectors in $W$ as a list of coordinate column vectors in $F^m$. Instead of thinking about $(\alpha(v_1), \ldots, \alpha(v_n))$, think about $$([\alpha(v_1)]_D, \ldots, [\alpha(v_n)]_D).$$ Equivalently, this list of $n$ column vectors could be thought of as a matrix: $$\left[\begin{array}{c|c|c} & & \\ [\alpha(v_1)]_D & \cdots & [\alpha(v_n)]_D \\ & & \end{array}\right].$$ This matrix is $\Phi_{BD}(\alpha)$! The procedure can be summed up as follows:

  1. Compute $\alpha$ applied to each basis vector in $B$ (i.e. compute $\alpha(v_1), \ldots, \alpha(v_n)$), then
  2. Compute the coordinate column vector of each of these transformed vectors with respect to the basis $D$ (i.e. $[\alpha(v_1)]_D, \ldots, [\alpha(v_n)]_D$), and finally,
  3. Put these column vectors into a single matrix.

Note that step 2 typically takes the longest. For each $\alpha(v_i)$, you need to find (somehow) the scalars $b_{i1}, \ldots, b_{im}$ such that $$\alpha(v_i) = b_{i1} w_1 + \ldots + b_{im} w_m$$ where $D = (w_1, \ldots, w_m)$ is the basis for $W$. How to solve this will depend on what $W$ consists of (e.g. $k$-tuples of real numbers, polynomials, matrices, functions, etc), but it will almost always reduce to solving a system of linear equations in the field $F$.
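For instance, when $W = F^m$ consists of column vectors, step 2 is literally a linear system: put the basis $D$ as the columns of a matrix and solve. A NumPy sketch with a made-up basis and vector (both hypothetical, purely for illustration):

```python
# Coordinates of alpha(v_i) with respect to a basis D of R^3:
# solve  [d1 | d2 | d3] b = alpha(v_i)  for the coordinate column b.
import numpy as np

D = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])       # hypothetical basis d1, d2, d3 as columns
alpha_vi = np.array([2.0, 3.0, 1.0])  # some transformed basis vector alpha(v_i)

b = np.linalg.solve(D, alpha_vi)      # the coordinate column [alpha(v_i)]_D
print(b)                              # [2. 1. 0.]  -> one column of Phi_BD(alpha)
```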

As for why we represent linear maps this way, I think you'd better read further in your textbook. It essentially comes down to the fact that, given any $v \in V$, $$[\alpha(v)]_D = \Phi_{BD}(\alpha) \cdot [v]_B,$$ which reduces the (potentially complex) process of applying an abstract linear transformation on an abstract vector $v \in V$ down to simple matrix multiplication in $F$. I discuss this (with different notation) in this answer, but I suggest looking through your book first. Also, this answer has a nice diagram, but different notation again.


So, let's get into your example. In this case, $B = D = ((1, 0, 0), (0, 1, 0), (0, 0, 1))$, a basis for $V = W = \Bbb{R}^3$. We have a fixed vector $w = (w_1, w_2, w_3)$ (which is $v$ in the question, but I've chosen to change it to $w$ and keep $v$ as our dummy variable). Our linear map is $\alpha_w : \Bbb{R}^3 \to \Bbb{R}^3$ such that $\alpha_w(v) = w \times v$. Let's follow the steps.

First, we compute $\alpha_w(1, 0, 0), \alpha_w(0, 1, 0), \alpha_w(0, 0, 1)$: \begin{align*} \alpha_w(1, 0, 0) &= (w_1, w_2, w_3) \times (1, 0, 0) = (0, w_3, -w_2) \\ \alpha_w(0, 1, 0) &= (w_1, w_2, w_3) \times (0, 1, 0) = (-w_3, 0, w_1) \\ \alpha_w(0, 0, 1) &= (w_1, w_2, w_3) \times (0, 0, 1) = (w_2, -w_1, 0). \end{align*}

Second, we need to write these vectors as coordinate column vectors with respect to $B$. Fortunately, $B$ is the standard basis; we always have, for any $v = (a, b, c) \in \Bbb{R}^3$, $$(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) \implies [(a, b, c)]_B = \begin{bmatrix} a \\ b \\ c\end{bmatrix}.$$ In other words, we essentially just transpose these vectors to columns, giving us, $$\begin{bmatrix} 0 \\ w_3 \\ -w_2\end{bmatrix}, \begin{bmatrix} -w_3 \\ 0 \\ w_1\end{bmatrix}, \begin{bmatrix} w_2 \\ -w_1 \\ 0\end{bmatrix}.$$

Last step: put these in a matrix:

$$\Phi_{BB}(\alpha_w) = \begin{bmatrix} 0 & -w_3 & w_2 \\ w_3 & 0 & -w_1 \\ -w_2 & w_1 & 0 \end{bmatrix}.$$
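As a final sanity check (not in the book, just a quick numerical sketch), this matrix really does implement $v \mapsto w \times v$, i.e. $\Phi_{BB}(\alpha_w)\,[v]_B = [\alpha_w(v)]_B$:

```python
# Check numerically that Phi_BB(alpha_w) v = w x v for random w and v.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(3)
v = rng.standard_normal(3)

Phi = np.array([[  0.0, -w[2],  w[1]],
                [ w[2],   0.0, -w[0]],
                [-w[1],  w[0],   0.0]])

print(np.allclose(Phi @ v, np.cross(w, v)))  # True
```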

Theo Bendit
  • 50,900
  • what about if we have 4 $2 \times 2$ matrices? what will be the second step and what will be the dimension of $\Phi_{BB}$ in this case? – Intuition Sep 19 '19 at 03:17
  • @hopefully Well, the second step really depends on the elements of the codomain $W$, not so much the dimensions of $V$ and $W$. It really comes from the fact that $D$ is a basis for $W$; in order for this to be true, there must be a proof that $D$ spans $W$, and in that proof must be instructions for how to express $\alpha(v_1), \ldots, \alpha(v_n)$ as linear combinations in terms of $D$. But, the details of this proof will depend on the specific vector space (and perhaps, the basis as well). I can't really say anything more specifically without a specific problem. – Theo Bendit Sep 19 '19 at 04:04
  • 1
    @hopefully Now that I've seen (and answered) your latest question, I think I see what you mean. – Theo Bendit Sep 19 '19 at 04:21