Is there a way to represent a matrix as a scalar without losing information? I know I can use something like the Frobenius norm, but that does not account for changes in element positions. For example:

The Frobenius norm of the 1x3 matrix (2, 1, 0) is equal to the Frobenius norm of a matrix like (1, 2, 0), yet the two matrices are different. So is there a way to represent a matrix by a scalar that also detects these position changes?
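To make the example concrete, here is a quick check in R (the printed matrices below suggest R, so I use it throughout): both orderings yield the same Frobenius norm.

    # Same Frobenius norm (sqrt(5) ~ 2.236) despite the swapped entries
    norm(matrix(c(2, 1, 0), nrow = 1), type = "F")
    norm(matrix(c(1, 2, 0), nrow = 1), type = "F")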
UPDATE: My practical problem here is that I have a measurement process that outputs two matrices for each measurement. Both are square, symmetric, integer matrices.
First one looks like:
         [,1] [,2] [,3] [,4]
    [1,]    0    1    1    0
    [2,]    1    0    0    1
    [3,]    1    0    0    0
    [4,]    0    1    0    0
Second one:
         [,1] [,2] [,3] [,4]
    [1,]    0    0    1    0
    [2,]    0    0    0    1
    [3,]    1    0    0    0
    [4,]    0    1    0    0
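For reference, the two matrices can be entered in R like this (the names A and B are just mine, for the snippets below):

    # First and second measurement matrices from above (names are mine)
    A <- matrix(c(0, 1, 1, 0,
                  1, 0, 0, 1,
                  1, 0, 0, 0,
                  0, 1, 0, 0), nrow = 4, byrow = TRUE)
    B <- matrix(c(0, 0, 1, 0,
                  0, 0, 0, 1,
                  1, 0, 0, 0,
                  0, 1, 0, 0), nrow = 4, byrow = TRUE)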
Now, suppose I have a chain of measurements and need to detect an "anomalous" measurement (say, one where the matrix elements change radically in position and/or value). To tackle this, I want to somehow convert each matrix to a single value and then perform anomaly detection on the resulting chain of scalars. I thought about using the norm, but the positioning information is lost, as described above.
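To illustrate what I mean by "sensitive to position", here is a naive sketch I toyed with: weight each cell by a fixed position-dependent coefficient before summing, so that moving an entry changes the scalar. This is only an illustration of the requirement, not a solution; such a weighted sum is not injective, so distinct matrices can still collide.

    # Sketch only: position-weighted sum. W assigns a distinct weight to
    # each cell, so rearranging entries changes the score, but collisions
    # between different matrices are still possible.
    position_score <- function(M) {
      W <- outer(seq_len(nrow(M)), seq_len(ncol(M)),
                 function(i, j) i + nrow(M) * (j - 1))
      sum(W * M)
    }
    position_score(A)  # 41
    position_score(B)  # 34 -- the moved/changed entries show up in the score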