
The gradient of a scalar (or vector) function gives the vector (or tensor) pointing in the direction of greatest change. I am looking for the inverse concept: the direction of least change.

Negating the gradient vector/tensor will obviously not work, since that corresponds to the direction of greatest negative change (steepest descent).

I am aware that this direction has to be orthogonal to the gradient. However, there are infinitely many vectors orthogonal to the gradient in 3D and higher. This is where I am stuck.

The intent is to apply this to a regular grid via finite differences, as in the sketch below.
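
For concreteness, here is a minimal sketch of that setup (the field and grid resolution are made up for illustration; NumPy's `np.gradient` implements the central finite differences involved):

```python
# Minimal sketch of the setup: a scalar field sampled on a regular grid,
# with its gradient estimated by central finite differences (np.gradient).
import numpy as np

# Hypothetical example field f(x, y, z) on a 32^3 grid.
axis = np.linspace(-1.0, 1.0, 32)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
f = np.sin(np.pi * x) * np.cos(np.pi * y) + z ** 2

spacing = axis[1] - axis[0]              # uniform grid step
gx, gy, gz = np.gradient(f, spacing)     # one component array per axis

# At each voxel, (gx, gy, gz) points in the direction of greatest increase;
# the question is how to single out one direction of least change from the
# (N-1)-dimensional subspace orthogonal to it.
```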

  • How about putting the function inside some monotonically decreasing function? Like $f(g(x)) = 1/g(x)$ or $f(g(x)) = -\log(g(x))$. – mathreadler Aug 19 '19 at 13:24
  • "I am aware that this direction has to be orthogonal to the gradient. However there are infinitely many vectors orthogonal to the gradient in 3D+. This is where I am stuck." Why are you stuck? That's all there is to it. You pick one of the infinitely many directions and go with it. Unless you have any other requirements you want from your direction, I don't really see what we can help you with. – Arthur Aug 19 '19 at 13:24
  • Yes, as @Arthur mentions, it will be a whole tangent plane of dimension $N-1$. You will need additional information or constraints to choose one particular direction in this subspace over another. One way would be a higher-order polynomial approximation, for example a second-order Taylor polynomial instead of a plane. – mathreadler Aug 19 '19 at 13:33
  • Thank you for the tips. What I can't convince myself of is this: think of a scalar 3D grid. We take one voxel and compute its gradient via finite differences from its adjacent voxels, yielding a vector. In this case the tangent space would form a disk orthogonal to the gradient vector, centered at the base of the vector. I cannot believe that every direction I take on this disk has the same amount of change, since the grid will have varying values in the region in which that disk is defined. – demiralp Aug 19 '19 at 13:42
  • @ganeshie8 I accidentally modified your comment. Would it not be $f(x,y,z) = s$ in 3D, leading to gradient vectors with 3 components? Apologies, tangent space is the wrong word: normal plane. I meant a unit disk drawn on the normal plane. – demiralp Aug 19 '19 at 13:51
  • @demiralp your previous comment cleared it up. I too am finding it hard to believe that in every direction on the plane orthogonal to the gradient, the change in $f$ is least. Btw good question :) – AgentS Aug 19 '19 at 13:54
  • @demiralp You are correct, it won't, because first-order partial derivatives do not capture all the information in a neighbourhood, only a first-order approximation. But the best we can do with the limited measurements provided by first partial derivatives is the gradient direction. – mathreadler Aug 19 '19 at 13:58
  • I am looking into potential use of the Hessian to extract this information. Will update. – demiralp Aug 19 '19 at 14:47
  • Whether or not you want to believe it, not only is the absolute rate of change of a differentiable function in directions orthogonal to the gradient minimal, the function is in fact stationary in those directions. However, it looks like you might be trying to visualize this result in relation to level sets of the function, to which the function's gradient is normal. That's a different situation. I don't really see how the Hessian would be relevant in either case except at a critical point, since the approximation to the change is dominated by the linear terms of the Taylor series. – amd Aug 20 '19 at 00:32 (A short derivation of this stationarity claim follows the thread.)
  • It depends on how close. A second-order approximation, even if ad hoc, can often give more valuable information; in BFGS optimization, for example, it can give much faster convergence than a first-order gradient/Taylor expansion alone. – mathreadler Aug 20 '19 at 08:02
  • I think I figured out a solution. The trick is to compute the structure tensor (i.e. the second moment matrix, see https://en.wikipedia.org/wiki/Structure_tensor ) from the gradient, by multiplying it with its transpose (an outer product). The eigenvector corresponding to the maximum eigenvalue of the structure tensor is the direction of largest change, and coincides with the gradient direction when all other eigenvalues are zero. Otherwise, the eigenvector corresponding to the minimum eigenvalue is the direction of least change. – demiralp Aug 20 '19 at 12:34
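
To make the stationarity claim in amd's comment concrete (this derivation is an editorial addition, not part of the thread): for a differentiable $f$ and a unit vector $\mathbf{u}$,

$$D_{\mathbf{u}} f(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot \mathbf{u} = 0 \quad \text{whenever } \mathbf{u} \perp \nabla f(\mathbf{x}),$$

and expanding to second order,

$$f(\mathbf{x} + h\mathbf{u}) = f(\mathbf{x}) + \tfrac{1}{2}\,h^2\,\mathbf{u}^{\mathsf T} H(\mathbf{x})\,\mathbf{u} + O(h^3),$$

so to first order all directions orthogonal to the gradient change $f$ equally (not at all), and any distinction between them appears only at second order, through the Hessian $H$.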

1 Answer


I figured out a solution. The trick is to compute the structure tensor (i.e. the second moment matrix, see https://en.wikipedia.org/wiki/Structure_tensor ) from the gradient field: the outer product of the gradient with itself at each point, averaged over a local neighbourhood. The eigenvector corresponding to the minimum eigenvalue is the direction of least change. A sketch of the computation follows.
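
This is a sketch under stated assumptions, not a definitive implementation: it assumes a 3D scalar field on a regular grid with unit spacing, and the window size, the box smoothing, and the helper name `least_change_direction` are illustrative choices, not part of the answer above.

```python
# Sketch of the structure-tensor approach: per-voxel outer product of the
# gradient with itself, averaged over a local window, then eigendecomposed.
import numpy as np
from scipy.ndimage import uniform_filter

def least_change_direction(f, window=3):
    """Per-voxel unit vector of least change for a 3D scalar field `f`."""
    g = np.stack(np.gradient(f), axis=-1)      # gradient field, shape (..., 3)

    # Structure tensor S = g g^T at each voxel, box-averaged over `window`.
    S = g[..., :, None] * g[..., None, :]      # shape (..., 3, 3)
    for i in range(3):
        for j in range(3):
            S[..., i, j] = uniform_filter(S[..., i, j], size=window)

    # eigh returns eigenvalues in ascending order, so column 0 of the
    # eigenvector matrix belongs to the minimum eigenvalue.
    _, v = np.linalg.eigh(S)                   # S is symmetric at each voxel
    return v[..., :, 0]                        # direction of least change
```

Note that without the neighbourhood averaging, $S = \nabla f \, \nabla f^{\mathsf T}$ is rank one at each voxel: every direction orthogonal to the gradient has eigenvalue zero, and the tie discussed in the comments remains. The local averaging is what breaks that tie and singles out one direction of least change.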