
My lecturer for a computer graphics (raytracing) paper has stated that 'It is easier to apply the inverse transform to the world than it is to apply the transform to the camera.' The example given was that it is easier to shift all objects in a scene 5 units to the left than it is to shift the camera 5 units to the right.

This seems to imply that it's easier to iterate through and transform multiple objects in a scene than it is to apply a single transformation to the camera.

Is this correct, and if so, why is this the case?

Laserbreath
  • Yes, this is correct. The iteration is happening when you multiply a camera matrix with every point you want to project. In short, this is because you ultimately need to transform to screen coordinates. You could certainly project points/things to another plane in 3d space, positioned elsewhere, without applying the inverse of the camera to the objects. But you ultimately need it back in screen resolution. – Andrew Wilson Apr 05 '19 at 04:01
  • @AndrewWilson: "But ultimately need it back in screen resolution" That's only true if you're doing rasterization. You don't need "screen resolution" if you're doing raytracing; you merely need to know the ray direction for a particular pixel, which isn't difficult either way. – Nicol Bolas Apr 05 '19 at 04:24
  • For raytracing it's not 'easier'. But it might have some precision benefits. – Yakov Galka May 06 '19 at 05:02

1 Answer


That's actually incorrect. You can transform every ray of your camera if you wish (and numerous implementations do so). There are some advantages and disadvantages to each method: for example, if you have more rays than vertices you end up doing more transformations, but you avoid the cache misses incurred by iterating over all vertices.
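The equivalence of the two approaches can be sketched numerically. The snippet below (a minimal illustration, assuming a translation-only camera and a unit-length ray direction; the function names are hypothetical) shows that a ray-sphere intersection returns the same hit distance whether we translate the camera or apply the inverse translation to the object:

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    # Nearest positive hit distance t along the ray, or None on a miss.
    # Assumes `direction` is unit length, so the quadratic's a == 1.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

sphere_center = [5.0, 0.0, 10.0]
radius = 1.0
direction = [0.0, 0.0, 1.0]

# Option A: transform the camera (shift it 5 units to the right).
t_camera = ray_sphere_t([5.0, 0.0, 0.0], direction, sphere_center, radius)

# Option B: apply the inverse transform to the world
# (shift the sphere 5 units to the left; the camera stays at the origin).
shifted_center = [sphere_center[0] - 5.0, sphere_center[1], sphere_center[2]]
t_world = ray_sphere_t([0.0, 0.0, 0.0], direction, shifted_center, radius)

print(t_camera, t_world)  # identical hit distances
```

With a full camera matrix the same idea holds: transforming the ray origin and direction by the camera-to-world matrix is equivalent to transforming every object by its inverse.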

lightxbulb