
I've recently been trying to find analogues between linear algebra ideas on $\mathbb R^2$ and $\mathbb C$, such as how the function $\textrm{Re}$ on $\mathbb C$ is equivalent to the linear operator given by the projection matrix $\begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}$ on $\mathbb R^2$.

I now want to do the same thing with complex multiplication. One constraint is that I really want to find a matrix that gives complex multiplication that is not a function of one of the arguments. With the cross-product, for example, we can write $u \times v = Av$ but $A$ depends on $u$. I want to find a linear operator $A$ that encodes complex multiplication independent of the arguments, just like the projection matrix for $\textrm{Re}$.

At first I was seeking a linear operator $A : \mathbb R^2 \times \mathbb R^2 \to \mathbb R^2$, but I don't know how to do this with a matrix. The only things I can think of are $Av$ or $u^T A v$, but the first has the wrong domain and the second has the wrong codomain.

My next idea was to consider $A: \mathbb R^4 \to \mathbb R^2$, so essentially I'd just stack the two vectors, but the first output coordinate would then require $ac-bd = r_1a + r_2b + r_3c + r_4d$, which is impossible for fixed $r_i$ independent of the arguments.

Am I wrong about this being linear? I know it's at least bilinear, but did I make a mistake here?

To summarize all of this, I want to see if there's a way to represent complex multiplication using a fixed matrix $A$ between real-valued vector spaces.
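As a quick numeric sanity check of the reasoning above (a sketch in Python; the helper name `cmul` is mine, purely for illustration): complex multiplication is bilinear in the pair of arguments, but it is *not* linear as a map on the stacked $\mathbb R^4$ vector, because scaling the stacked vector scales both factors at once.

```python
import numpy as np

def cmul(u, v):
    # complex multiplication on R^2 vectors:
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    a, b = u
    c, d = v
    return np.array([a * c - b * d, a * d + b * c])

u = np.array([1.0, 2.0])  # 1 + 2i
v = np.array([3.0, 4.0])  # 3 + 4i

# bilinear: scaling one argument scales the result
assert np.allclose(cmul(2 * u, v), 2 * cmul(u, v))

# not linear on the stacked vector (u, v) in R^4:
# scaling the stacked vector by 2 scales *both* arguments,
# so the product scales by 4, not 2
assert np.allclose(cmul(2 * u, 2 * v), 4 * cmul(u, v))
```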

alfalfa

2 Answers


I think the cross product is a good analogy here: you essentially want a map that you feed two vectors and that returns a vector. If the map is to be bilinear, it must be given by a three-index object: working in the basis $\mathbf{e}_i$, the inputs are $\mathbf{a} = \sum_i \mathbf{e}_i a_i $ and $\mathbf{b} = \sum_i \mathbf{e}_i b_i $, and the product will be of the form $$ \mathbf{m}(\mathbf{a},\mathbf{b}) = \sum_i \mathbf{e}_i m_i(\mathbf{a},\mathbf{b}) = \sum_{i,j,k} \mathbf{e}_i M_{ijk} a_j b_k. $$

This is precisely what happens with the cross product: we have the formula $$ (\mathbf{a}\times \mathbf{b})_i = \sum_{j,k} \epsilon_{ijk} a_j b_k, $$ where $\epsilon_{ijk}$ is $1$ if $ijk$ is an even permutation of $123$, $-1$ if an odd permutation, and $0$ otherwise.

We can do the same thing to construct the complex product by looking at the basis $(\mathbf{1},\mathbf{i})$: we have $$ \mathbf{m}(\mathbf{1},\mathbf{1}) = \mathbf{1} + 0\mathbf{i} \\ \mathbf{m}(\mathbf{1},\mathbf{i}) = \mathbf{m}(\mathbf{i},\mathbf{1}) = 0\mathbf{1}+1\mathbf{i} \\ \mathbf{m}(\mathbf{i},\mathbf{i}) = -1 \mathbf{1} + 0\mathbf{i}, $$ from which we find $$ M_{111}=1, \quad M_{211} = 0 \\ M_{112}=0, \quad M_{212} = 1 \\ M_{121}=0, \quad M_{221} = 1 \\ M_{122}=-1, \quad M_{222} = 0, $$ a $2 \times 2 \times 2$ array of numbers. Also, notice that $\mathbf{m}(\mathbf{a},\cdot)$ is a map $\mathbb{R}^2 \to \mathbb{R}^2 $, which is represented by a matrix, with elements $\sum_{j} M_{ijk}a_j$.
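The construction above can be checked numerically (a sketch in Python/NumPy; the indexing is zero-based, so $M_{111}$ becomes `M[0, 0, 0]`):

```python
import numpy as np

# The 2x2x2 array M in the basis (1, i):
# m(a, b)_i = sum_{j,k} M[i, j, k] a_j b_k
M = np.zeros((2, 2, 2))
M[0, 0, 0] = 1   # M_111 = 1
M[1, 0, 1] = 1   # M_212 = 1
M[1, 1, 0] = 1   # M_221 = 1
M[0, 1, 1] = -1  # M_122 = -1

def m(a, b):
    # contract the three-index array against both inputs
    return np.einsum('ijk,j,k->i', M, a, b)

a = np.array([1.0, 2.0])  # 1 + 2i
b = np.array([3.0, 4.0])  # 3 + 4i

# (1 + 2i)(3 + 4i) = -5 + 10i
assert np.allclose(m(a, b), [-5.0, 10.0])

# m(a, .) is the 2x2 matrix with entries sum_j M[i, j, k] a_j
A = np.einsum('ijk,j->ik', M, a)
assert np.allclose(A @ b, m(a, b))
```

Note that the fixed object encoding the product is the three-index array `M`; only after feeding it one argument do you get an ordinary matrix.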

One can determine properties of the product from the array: for example, if $M_{ijk}=M_{ikj}$, the product is commutative (i.e. $\mathbf{m}(\mathbf{a},\mathbf{b}) = \mathbf{m}(\mathbf{b},\mathbf{a})$). Associativity $\mathbf{m}(\mathbf{a},\mathbf{m}(\mathbf{b},\mathbf{c}))=\mathbf{m}(\mathbf{m}(\mathbf{a},\mathbf{b}),\mathbf{c})$ corresponds to $$ \sum_{k} M_{ijk} M_{klm} = \sum_k M_{ikm} M_{kjl}. $$ Both of these hold for the complex multiplication product, but not for the cross product, which is instead anticommutative and satisfies the Jacobi identity $$ \mathbf{a} \times (\mathbf{b} \times \mathbf{c}) + \mathbf{c} \times (\mathbf{a} \times \mathbf{b}) + \mathbf{b} \times (\mathbf{c} \times \mathbf{a}) = 0. $$
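Both index identities can be verified directly on the arrays (a sketch in Python/NumPy, reusing the $2\times2\times2$ array from above and the Levi-Civita symbol $\epsilon_{ijk}$):

```python
import numpy as np

# complex-multiplication array M in the basis (1, i)
M = np.zeros((2, 2, 2))
M[0, 0, 0] = 1
M[1, 0, 1] = 1
M[1, 1, 0] = 1
M[0, 1, 1] = -1

# commutativity: M_ijk = M_ikj
assert np.allclose(M, M.transpose(0, 2, 1))

# associativity: sum_k M_ijk M_klm = sum_k M_ikm M_kjl
lhs = np.einsum('ijk,klm->ijlm', M, M)
rhs = np.einsum('ikm,kjl->ijlm', M, M)
assert np.allclose(lhs, rhs)

# the Levi-Civita symbol (cross product) is instead antisymmetric
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1   # even permutations of 123
    eps[i, k, j] = -1  # odd permutations of 123
assert np.allclose(eps, -eps.transpose(0, 2, 1))
```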

Chappers

Fix a complex number $z=a+ib$. Indeed, multiplication by $z$ yields a linear map $\mathbb{R}^2\to\mathbb{R}^2$, and so, it can be expressed by a matrix. To construct the matrix, as always, one should check the images of the standard basis vectors.

The first basis vector represents $1$, and $z\cdot1=z=a+ib$. The second standard basis vector represents $i$, and $z\cdot i=-b+ia.$ Hence, the matrix you are looking for is$$M_z=\left(\begin{array}{cc}a&-b\\b&a\end{array}\right).$$
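This matrix representation is easy to verify numerically (a sketch in Python/NumPy; the helper name `M_z` follows the answer's notation):

```python
import numpy as np

def M_z(a, b):
    # matrix of multiplication by z = a + ib acting on R^2
    return np.array([[a, -b],
                     [b,  a]])

# (1 + 2i)(3 + 4i) = -5 + 10i
w = M_z(1, 2) @ np.array([3, 4])
assert np.allclose(w, [-5, 10])

# composing multiplications is multiplication by the product:
# M_z M_w = M_{zw}
assert np.allclose(M_z(1, 2) @ M_z(3, 4), M_z(-5, 10))
```

The second assertion reflects the fact that $z \mapsto M_z$ is a ring homomorphism from $\mathbb C$ into $2\times2$ real matrices, though as the comments below note, $M_z$ itself still depends on one of the arguments.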

Amitai Yuval
  • It seems to me that OP is looking for a universal matrix, independent from $z$. – user 1987 May 25 '17 at 15:25
  • thank you for the answer, but as @user1987 mentioned I really was hoping to find a fixed operator that doesn't depend on either argument – alfalfa May 25 '17 at 16:53