
Image reference for the question: object space vs world space (image from the Cg Tutorial).

The D3D9 API got us used to world matrices.

However, if you use a separate world matrix, you have to do an extra matrix multiply in the shader, and its result is identical for every vertex in the draw call.

Hence the OpenGL convention of concatenating the modelling and viewing matrices into a single matrix (GL_MODELVIEW = View * World).

Which approach is better, and why?

bobobobo

3 Answers


No. At no time should you ever have an explicit world matrix in your shader.

A detailed explanation of why can be found here, but the short version is really very simple: you never need it, and it can kill your floating-point precision.

If your world space is too big, then a camera that is far from the origin can cause floating point precision problems.

World space is nothing more than an intermediary between model space and camera space: a place where you can express the camera and all other objects in the same coordinate system. But all you use it for is to generate a world-to-camera matrix, which you then apply to each model-to-world matrix to create model-to-camera matrices.

You can deal with precision problems in C++ by using doubles instead of floats for matrix computations. You can convert these back to floats before uploading them to the shader.

So why would you ever need an explicit world-space transform in your shader? You need one in your source code, yes. But in your shader? What would you do with it that you can't do in camera space?

Lighting can be done in camera space just as easily as world space; all you have to do is transform your light positions/directions into camera space. After all, camera space has the same scale as world space. You do this transformation once per frame per light; hardly a performance burden even on the CPU.

So there is absolutely no point in ever exposing your shaders to an explicit world space transform. It's just an intermediate step that you fold into your matrices.

Nicol Bolas

You wouldn't do the extra matrix multiply in your shader. The trick is that you do the matrix multiplication once per frame on the CPU, then upload the final result to your vertex shader. That leaves you with one position-by-matrix multiplication per vertex, irrespective of whether you keep world and view separate or concatenated.

Maximus Minimus

In many cases, you want the world-space position in the vertex shader anyway, for other purposes. For example, you need it to compute the view vector that you pass down to the pixel shader to evaluate specular.

The local-to-world matrix is also needed to transform tangent vectors and normal vectors[1] into world space for shading, assuming that you do shading in world space (you might do it in tangent space, in which case you'd need a different set of matrices).

So IMO, it makes sense to have two matrices: local-to-world and world-to-clip. The latter is the product of the view matrix and the projection matrix. Pass both to the vertex shader, and do the multiplications like:

float4 posWorld = mul(posLocal, matLocalToWorld);
float4 posClip  = mul(posWorld, matWorldToClip);

[1] (So long as you don't have nonuniform scaling. In that case normals must be transformed by the inverse transpose of the local-to-world matrix.)

Nathan Reed