The operation $x \mapsto \frac{1}{x}$ has the property that performing it twice (on nonzero real numbers) gives back $x$, and performing it on the number $1$ leaves it invariant (which also happens to be true of $-1$). "Negation" is another such operation (one that leaves $0$ invariant instead of $1$).
The "reciprocal vectors" question describes an operation on triples of independent vectors in 3-space with the property that performing it twice brings you back to the same triple of independent vectors. In that sense, it's a little bit analogous to the $x \mapsto \frac{1}{x}$ operation on the real line. And if you regard the standard basis as a particularly nice triple of vectors (because when you stack its vectors up into a $3 \times 3$ matrix, you get the identity), this operation leaves that 'special' triple invariant. (I believe that it leaves invariant any positively-oriented orthonormal basis, but I could be wrong --- I didn't look all that closely.)
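As a numerical sketch of that "twice brings you back" property --- assuming the operation in question is the standard reciprocal-vector construction $A = \frac{b \times c}{a \cdot (b \times c)}$, with $B$ and $C$ defined cyclically --- here is a NumPy version you could check yourself:

```python
import numpy as np

def reciprocal_triple(a, b, c):
    """Reciprocal vectors: A = (b x c) / [a, b, c], and cyclically.

    [a, b, c] = a . (b x c) is the scalar triple product, which is
    nonzero exactly when a, b, c are linearly independent.
    """
    vol = np.dot(a, np.cross(b, c))
    A = np.cross(b, c) / vol
    B = np.cross(c, a) / vol
    C = np.cross(a, b) / vol
    return A, B, C

# Applying the operation twice returns the original triple.
a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([2.0, 0.0, 1.0])
A, B, C = reciprocal_triple(a, b, c)
a2, b2, c2 = reciprocal_triple(A, B, C)
print(np.allclose(a2, a), np.allclose(b2, b), np.allclose(c2, c))
# → True True True

# The standard basis is a fixed point of the operation.
e1, e2, e3 = np.eye(3)
E1, E2, E3 = reciprocal_triple(e1, e2, e3)
print(np.allclose(E1, e1) and np.allclose(E2, e2) and np.allclose(E3, e3))
# → True
```

The example triple here is arbitrary; any three linearly independent vectors would do.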
Note that this is an operation on triples of vectors. If you have a single vector $a$, but lack $b$ and $c$, you can't even define $A$. So it's certainly not very strongly analogous to taking reciprocals of single numbers on the real line.
One last point: there are situations in which you have a list of nonzero numbers, and it makes sense to invert each one of them. This essentially never comes up in linear algebra, so it doesn't get a name there, but it does come up in applications where we work with lists of numbers, so it appears in some programming languages. In MATLAB, for instance, if `a` is a list of (nonzero) numbers, then `1 ./ a` is the list whose $i$th element is the reciprocal of the $i$th element of `a`. When might this be useful? Suppose that the $i$th element of `a` is the number of cars (uniformly randomly) crossing a "measuring point" on road $i$ in each hour. Then the $i$th element of `1 ./ a` is the expected waiting time for a car on road $i$, in hours.
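The same elementwise reciprocal works in NumPy, where scalar division broadcasts over an array (a close analogue of MATLAB's `1 ./ a`; the traffic rates below are made up for illustration):

```python
import numpy as np

# Cars per hour crossing the measuring point on each road
# (hypothetical rates, chosen just for this example).
cars_per_hour = np.array([4.0, 12.0, 0.5])

# Elementwise reciprocal: the NumPy analogue of MATLAB's 1 ./ a.
# Division by an array broadcasts, inverting each entry.
waiting_hours = 1.0 / cars_per_hour

# road 0: 0.25 h, road 1: about 0.083 h, road 2: 2 h
print(waiting_hours)
```

So a road seeing 4 cars per hour has an expected wait of a quarter hour, matching the interpretation in the paragraph above.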