Can we invert a vector like we do with matrices, and why?
I didn't see in any linear algebra course the concept of the "vector inverse", and I was wondering if there is any such thing, and if not, why.
The inverse of an object $a$ under some binary operation $@ : \mathbb S \times \mathbb S \to \mathbb S$ with identity $e$ is the unique object $a^{-1}$ such that $a\ @\ a^{-1} = a^{-1}\ @\ a = e$. The identity $e$ itself must be such that for any object $b$, $b\ @\ e = e\ @\ b = b$.
Vector addition has an obvious inverse: since adding vectors is just adding their components in whatever basis you like, the additive inverse of $v$ has the negatives of those components. But that's simply $-v$.
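A minimal numerical sketch of this additive case (assuming NumPy; the particular vector is arbitrary), checking the identity/inverse definition above for vector addition:

```python
import numpy as np

v = np.array([3.0, -1.0, 2.0])
e = np.zeros(3)      # the additive identity: the zero vector
v_inv = -v           # the additive inverse: negate each component

# check the definition of identity and inverse for vector addition
assert np.allclose(v + e, v) and np.allclose(e + v, v)
assert np.allclose(v + v_inv, e) and np.allclose(v_inv + v, e)
```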
Scalar multiplication has no sensible vector inverse, because the inputs are necessarily from two different groups.
Dot product doesn't provide inverse elements, because the result of a dot product isn't a vector at all. While it is possible to come up with a vector $b$ such that $a\cdot b=1$, there are infinitely many such vectors: if $a\cdot c=0$, then $a\cdot(b+c)=1$ as well.
Cross product doesn't provide inverse elements either: for any non-zero vector $a$, many different values of $b$ give the same $c$ in $a\times b = c$, since adding any multiple of $a$ to $b$ leaves the product unchanged.
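A quick illustration of the non-uniqueness in both cases (assuming NumPy; the vectors are arbitrary):

```python
import numpy as np

a = np.array([2.0, 0.0, 0.0])
b = np.array([0.5, 1.0, 0.0])      # a . b = 1
c = np.array([0.0, 7.0, -3.0])     # a . c = 0

# dot product: both b and b + c act as "inverses" of a
assert np.isclose(np.dot(a, b), 1.0)
assert np.isclose(np.dot(a, b + c), 1.0)

# cross product: adding any multiple of a to b leaves a x b unchanged
assert np.allclose(np.cross(a, b + 5.0 * a), np.cross(a, b))
```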
Linear transformation doesn't provide inverses for the vector: again, the inputs must necessarily be from two different groups, a vector and a matrix.
Componentwise multiplication $(a,b,c)(d,e,f)=(ad,be,cf)$ does provide an inverse for many vectors (take the reciprocal of each component), but it has huge hunks of non-invertible values (anywhere one of the components is $0$), and the inverse changes based on your choice of basis.
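A small sketch of this componentwise (Hadamard) inverse, assuming NumPy; note how it breaks down as soon as any component is zero:

```python
import numpy as np

one = np.ones(3)                  # identity for componentwise multiplication
v = np.array([2.0, -4.0, 0.5])
v_inv = 1.0 / v                   # reciprocal of each component

assert np.allclose(v * v_inv, one)

w = np.array([2.0, 0.0, 0.5])     # a zero component: no finite inverse
print(1.0 / w)                    # -> [0.5 inf 2.]  (with a divide-by-zero warning)
```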
Wedge product produces a result in a different group: the wedge of an $m$-vector $u$ and an $n$-vector $v$ is an $(m+n)$-vector.
Outer product results in a matrix, which is in a different group.
Tensor product results in a tensor, which is, you guessed it, a different group.
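For instance, here is a quick check (assuming NumPy; the vectors are arbitrary) that the outer product of two 3-vectors already lives in a different space, namely the $3\times 3$ matrices, so no identity or inverse can exist among the vectors themselves:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

M = np.outer(u, v)     # outer / rank-1 tensor product of two vectors
print(M.shape)         # -> (3, 3): a matrix, not another 3-vector
```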
Clifford product: this one's interesting. It works like the dot and cross products combined, so at first it looks like it won't do any better, but it's actually better than both: the product $vv$ of a vector with itself has only a scalar part (the dot part is non-zero and the wedge part is zero), and if you divide by that to form $v^{-1} = \frac{v}{v\cdot v}$ you get a vector in the same direction as $v$ whose magnitude is the reciprocal, so $v^{-1}v = 1$; since we're in a land where scalars and vectors combine, this works perfectly. Also, unlike the infinite freedom the dot and cross products gave us, there isn't any freedom here, because any adjustment to the inverse changes the bivector (cross-product-like) part of the product. I don't know Clifford algebra terribly well, though, so I don't know how well this works on things that are already scalar-and-vector combinations...
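A rough numerical check of that Clifford inverse for an ordinary 3-vector (a sketch assuming NumPy, not a full Clifford-algebra implementation): the geometric product of two vectors splits into a scalar (dot) part and a bivector (wedge) part, and the wedge part of parallel vectors vanishes, so $v^{-1}v$ reduces to the scalar $1$.

```python
import numpy as np

v = np.array([3.0, -1.0, 2.0])
v_inv = v / np.dot(v, v)           # v^{-1} = v / (v . v)

# geometric product v^{-1} v = (v^{-1} . v) + (v^{-1} ^ v):
# the scalar part is 1 and the bivector part vanishes,
# so the product is exactly the scalar 1.
assert np.isclose(np.dot(v_inv, v), 1.0)         # scalar part
assert np.allclose(np.cross(v_inv, v), 0.0)      # bivector part (via the 3D cross product)
```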
Those are the ones I can think of. I'm sure others know more operations than I do.
In some applications (e.g. extrapolation methods), one often considers what is called the Samelson inverse of a vector:
$$\mathbf v^{(-1)}=\frac{\bar{\mathbf v}}{\bar{\mathbf v}\cdot\mathbf v}=\frac{\bar{\mathbf v}}{\|\mathbf v\|^2}$$
(where the bar denotes complex conjugation), which can be easily shown to satisfy $\mathbf v^{(-1)}\cdot\mathbf v=\mathbf v\cdot\mathbf v^{(-1)}=1$ (the Moore-Penrose conditions in vector garb). (As usual, $\mathbf v\neq\mathbf 0$.)
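A quick numerical verification for a complex vector (a sketch assuming NumPy; `np.vdot` conjugates its first argument, so `np.vdot(v, v)` is $\bar{\mathbf v}\cdot\mathbf v=\|\mathbf v\|^2$):

```python
import numpy as np

v = np.array([1.0 + 2.0j, -3.0j, 2.0 - 1.0j])

v_samelson = np.conj(v) / np.vdot(v, v)    # conj(v) / ||v||^2

# the (unconjugated) dot product with v gives exactly 1
assert np.isclose(np.dot(v_samelson, v), 1.0)
```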
Some references that use this inverse include this, this, and this.
It depends on what you mean by "inverse". The inverse with respect to the addition operation is just the negative of the vector, $-v$, since trivially $v + (-v) = 0$.
Perhaps this definition of the inverse vector will help you:
An inverse rectilinear vector $\bar{a}'$ is a vector that is co-directed with (points in the same direction as) a vector $\bar{a}$ and whose magnitude satisfies $$ |\bar{a}'|=\dfrac{1}{|\bar{a}|}. $$ Its projections onto the coordinate axes are given by $$ a'_{x}=\dfrac{a_{x}}{a_{x}^{2}+a_{y}^{2}+a_{z}^{2}},\quad a'_{y}=\dfrac{a_{y}}{a_{x}^{2}+a_{y}^{2}+a_{z}^{2}},\quad a'_{z}=\dfrac{a_{z}}{a_{x}^{2}+a_{y}^{2}+a_{z}^{2}}. $$ Examples of solving problems with this vector can be found at https://en.wikipedia.org/wiki/Talk:Cross_product#Cross_product_does_not_exist and https://doi.org/10.5539/jmr.v9n5p71
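Those component formulas amount to $\bar{a}' = \bar{a}/|\bar{a}|^2$; a short check (assuming NumPy) that the result is co-directed with $\bar{a}$ and has reciprocal magnitude:

```python
import numpy as np

a = np.array([3.0, -1.0, 2.0])

a_prime = a / np.dot(a, a)     # a'_i = a_i / (a_x^2 + a_y^2 + a_z^2)

# reciprocal magnitude ...
assert np.isclose(np.linalg.norm(a_prime), 1.0 / np.linalg.norm(a))
# ... and the same direction as a
assert np.allclose(a_prime / np.linalg.norm(a_prime), a / np.linalg.norm(a))
```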
With this definition, inverse vectors support the same operations as conventional vectors (addition, multiplication by a scalar, dot product, cross product). The dot and cross products then make it possible to define a vector division (an analog of division in arithmetic), so a vector can be moved across an equals sign just as in ordinary algebra. This allows solving problems that previously could not be solved, for example recovering the force from the torque in coordinate-vector form.
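As a hedged sketch of that last example (my own illustration, not taken from the cited paper): if $\boldsymbol\tau = \mathbf r \times \mathbf F$ and $\mathbf F$ is perpendicular to $\mathbf r$, the force can be recovered as $\mathbf F = \boldsymbol\tau \times \mathbf r / |\mathbf r|^2$, i.e. a cross product with the inverse vector $\mathbf r' = \mathbf r/|\mathbf r|^2$; any component of $\mathbf F$ along $\mathbf r$ is lost, since it never contributes to the torque.

```python
import numpy as np

r = np.array([0.0, 0.0, 2.0])        # lever arm
F = np.array([3.0, -1.0, 0.0])       # force, chosen perpendicular to r
tau = np.cross(r, F)                 # torque tau = r x F

r_inv = r / np.dot(r, r)             # the inverse vector r' = r / |r|^2
F_recovered = np.cross(tau, r_inv)   # "divide" the torque by r

assert np.allclose(F_recovered, F)
```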