I’m using a 3D framework that lets me draw 3D things onto a canvas. I can successfully render 3D objects on a 2D canvas using the camera’s perspective matrix, the view matrix, and each 3D object’s model matrix.
The world space has its own units, obviously, and gets rendered into the (-1,-1) to (1,1) space, so I have to translate/multiply/perspective-divide the (x, y) coordinates to create the perspective and fit things inside my canvas. I think I’m doing it right: I can see the camera focus in the middle of the screen and the axes/origin in a comfortable position (z going up, x to my right, y to my back/left).
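As a sanity check on that perspective divide and the NDC-to-canvas mapping, here is a minimal sketch; plain column-major 4x4 flat arrays, and the function names are illustrative, not from either framework:

```javascript
// Multiply a point by a column-major 4x4 matrix and do the perspective
// divide by w (the divide mentioned above).
function projectPoint(m, [x, y, z]) {
  const w = m[3] * x + m[7] * y + m[11] * z + m[15];
  return [
    (m[0] * x + m[4] * y + m[8] * z + m[12]) / w,
    (m[1] * x + m[5] * y + m[9] * z + m[13]) / w,
  ];
}

// NDC (-1..1 on both axes) to canvas pixels; y is flipped because the
// canvas origin is top-left while NDC y points up.
function ndcToCanvas([nx, ny], width, height) {
  return [(nx + 1) * 0.5 * width, (1 - ny) * 0.5 * height];
}
```

With an identity matrix, `projectPoint` returns the point unchanged, and NDC (0, 0) lands in the middle of the canvas, which matches the "camera focus in the middle of the screen" observation above.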
Let’s say I have a rectangular face (RF) in my 3D space that gets projected onto the screen. I also have another screen element (PIC) that is also rectangular and has the same aspect ratio as RF. Obviously the size of PIC is in pixels, rendered on the screen. PIC’s coordinate system has its origin at the top/left, positive x going right, positive y going down, and -z pointing forward. I can apply 3D transform matrices to PIC (the second framework I’m using allows that). I can also specify the origin when doing the transform; I think that translates PIC before applying the transform.
I’d like to find the transform matrix I can apply to PIC so that it gets rendered in exactly the same position on the screen as RF is rendered by the other framework.
I could try common sense: rotate PIC the way RF is rotated relative to the camera, move it to the ‘middle of the screen’, and zoom it out to fit the RF projection somehow (it’s not clear to me how exactly). I would also need to adjust the transform matrix to match the perspective from the first framework.
I’m sure there’s a clean way to do this using the coordinates and matrices I already have, but I’d need a deeper understanding of how the math works here :).
I could use:

- model matrices: modelMatrix, modelViewMatrix, and their inverses
- the view matrix
- the camera matrix (camera model matrix)
- the camera perspective matrix
- camera properties: fov, position, rotation relative to the world
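Those matrices already relate to each other in a fixed way; a hedged sketch of that chain, using plain column-major 4x4 flat arrays (the multiply below stands in for whatever the framework provides, and `inverse()` is left to it):

```javascript
// mat4Multiply(a, b) computes a * b for column-major 4x4 flat arrays.
function mat4Multiply(a, b) {
  const out = new Array(16).fill(0);
  for (let c = 0; c < 4; c++)
    for (let r = 0; r < 4; r++)
      for (let k = 0; k < 4; k++)
        out[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
  return out;
}

// How the listed matrices relate:
//   viewMatrix      = inverse(cameraModelMatrix)
//   modelViewMatrix = viewMatrix * modelMatrix
//   clipPoint       = perspectiveMatrix * modelViewMatrix * worldPoint
// e.g. const modelView = mat4Multiply(viewMatrix, modelMatrix);
```

The key point is that the exact chain that puts RF on the screen is known, so the transform for PIC only has to reproduce it, prefixed by whatever maps PIC’s pixel rectangle onto RF’s rectangle.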
There is similar work in the three.js CSS3DRenderer, which does basically the same thing (applying a transform to an HTML DIV element to render it ‘in the same perspective projection’ used to render things with WebGL), but I couldn’t use it as-is and I’d like to understand how it works.
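The core idea behind CSS3DRenderer can be sketched roughly like this; this is a simplification, not three.js’s actual source, and `fovDegrees`/`canvasHeight` are assumed inputs. The projection goes onto the container via CSS `perspective`, and the model-view matrix goes onto the element via `matrix3d`, with the y row flipped because CSS y points down:

```javascript
// The container gets `perspective: <px>`, derived from the camera fov:
// the distance from the eye to the screen plane, in pixels.
function cssPerspectivePx(fovDegrees, canvasHeight) {
  return 0.5 * canvasHeight / Math.tan(0.5 * fovDegrees * Math.PI / 180);
}

// The element gets `transform: matrix3d(...)` built from the column-major
// model-view matrix; the second row (indices 1, 5, 9, 13) is negated to
// convert GL's y-up convention to CSS's y-down.
function toCssMatrix3d(m) {
  const e = m.slice();
  [1, 5, 9, 13].forEach(i => (e[i] = -e[i]));
  return `matrix3d(${e.join(',')})`;
}
```

In other words, the browser performs the perspective divide itself once `perspective` is set on the parent, so only the model-view part needs to be baked into the element’s matrix.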
Again, the question is: what transform matrix would get PIC rendered on screen exactly as RF is rendered?
Even though it could do the trick, the question/solution from "Finding the Transform matrix from 4 projected points (with Javascript)" is different, because it doesn’t handle the normals of the cube face. This question is about a face in 3D space that can be oriented either way: front or back toward the camera. Basically, here we have a 3D transform, while the other question/solution is about 2D projections.
About mapping PIC to RF: that’s one way of putting it, maybe. What I actually need is to match the result of transform(PIC) to the perspective projection of RF onto the screen.
– smike Sep 26 '19 at 18:06

- apply perspective
- take PIC_in_world to camera space
- PIC_center_scaled -> PIC_in_world
- move PIC to center, scale to the 2x2 size of the projection screen
- ?
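Composing the steps in that list (read bottom to top) into one matrix would look roughly like this sketch; all helper names are illustrative and the matrices are plain column-major 4x4 flat arrays:

```javascript
// Minimal column-major 4x4 helpers.
const identity = () => [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];

function mul(a, b) {
  const o = new Array(16).fill(0);
  for (let c = 0; c < 4; c++)
    for (let r = 0; r < 4; r++)
      for (let k = 0; k < 4; k++)
        o[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
  return o;
}

function translation(x, y, z) { const m = identity(); m[12] = x; m[13] = y; m[14] = z; return m; }
function scaling(x, y, z)     { const m = identity(); m[0] = x; m[5] = y; m[10] = z; return m; }

// "move PIC to center, scale to the 2x2 size of the projection screen":
// map the pixel rect (0..w, 0..h) to a 2x2 quad centered at the origin,
// flipping y because pixel y goes down while world y goes up.
function picCenterScaled(wPx, hPx) {
  return mul(scaling(2 / wPx, -2 / hPx, 1), translation(-wPx / 2, -hPx / 2, 0));
}

// Then the remaining steps, using the matrices the question already has:
// const m = mul(perspectiveMatrix,
//           mul(viewMatrix,
//           mul(rfModelMatrix, picCenterScaled(w, h))));
```

With this ordering, PIC’s top-left pixel (0, 0) maps to (-1, 1), so the normalized PIC occupies exactly the quad that RF’s model matrix then carries into the world.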