
I’m using a 3D framework that allows me to draw 3D things onto a canvas. I can successfully render 3D objects by drawing them on a 2D canvas using the camera perspective matrix, the view matrix and the 3D objects’ model matrices.
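To make that concrete, this is roughly how I understand the projection chain (a minimal sketch with my own helper names and a row-major, column-vector convention; none of this is the framework’s actual API):

    // clip = perspectiveMatrix * viewMatrix * modelMatrix * localVertex
    function mulMat4Vec4(m, v) {
      // m is a 4x4 matrix as nested arrays [row][col], v is [x, y, z, w]
      const out = [0, 0, 0, 0];
      for (let r = 0; r < 4; r++) {
        out[r] = m[r][0] * v[0] + m[r][1] * v[1] + m[r][2] * v[2] + m[r][3] * v[3];
      }
      return out;
    }

    function projectVertex(perspectiveMatrix, viewMatrix, modelMatrix, vertex /* [x, y, z] */) {
      const local = [vertex[0], vertex[1], vertex[2], 1];
      const world = mulMat4Vec4(modelMatrix, local);   // model space -> world
      const eye   = mulMat4Vec4(viewMatrix, world);    // world -> camera space
      return mulMat4Vec4(perspectiveMatrix, eye);      // camera space -> clip (still homogeneous)
    }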

The world space has its own units, obviously, and gets rendered into the (-1,-1) → (1,1) space, so I have to translate/multiply/perspective-divide the (x, y) coordinates to create the perspective and fit things inside my canvas. I think I’m doing it right: I can see the camera focus in the middle of the screen and the axes/origin in a comfortable position (z going up, x to my right, y to my back/left).
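For reference, this is the divide and NDC → canvas mapping I mean (again only a sketch; the y-flip and the canvas size are assumptions about my own setup, not part of the framework):

    function clipToCanvas(clip /* [x, y, z, w], e.g. from projectVertex above */, canvasWidth, canvasHeight) {
      const ndcX = clip[0] / clip[3];   // perspective divide -> [-1, 1]
      const ndcY = clip[1] / clip[3];
      return {
        x: (ndcX * 0.5 + 0.5) * canvasWidth,          // -1..1 -> 0..width
        y: (1 - (ndcY * 0.5 + 0.5)) * canvasHeight    // flip: NDC y is up, canvas y is down
      };
    }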

Let’s say I have a rectangular face (RF) in my 3D space that gets projected onto the screen. I also have another screen element (PIC) that’s also rectangular and has the same aspect ratio as RF. Obviously the size of the PIC is in pixels, as it is rendered on the screen. The coordinate system of the PIC has its origin at the top/left, the positive x and y axes going right and down, and -z pointing forward. I can apply 3D transform matrices to the PIC (the second framework that I’m using allows me to do that). I can also specify the origin when doing the transform; I think that translates the PIC before applying the transform.
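My assumption about how that transform origin works (I haven’t verified this against the second framework’s source) is the usual sandwich of translations:

    // effective = T(origin) * M * T(-origin): move the chosen origin to (0,0,0),
    // apply the transform, then move it back.
    function mat4Multiply(a, b) {                 // row-major [row][col], returns a * b
      const out = [[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]];
      for (let r = 0; r < 4; r++)
        for (let c = 0; c < 4; c++)
          for (let k = 0; k < 4; k++) out[r][c] += a[r][k] * b[k][c];
      return out;
    }

    function translation(tx, ty, tz) {
      return [[1,0,0,tx],[0,1,0,ty],[0,0,1,tz],[0,0,0,1]];
    }

    function withOrigin(m, ox, oy, oz) {
      return mat4Multiply(translation(ox, oy, oz),
             mat4Multiply(m, translation(-ox, -oy, -oz)));
    }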

I’d like to find out the transform matrix that I can apply to the PIC to get it rendered on the screen in exactly the same position where the RF gets rendered by the other framework.

I could try to use common sense: rotate PIC the way RF is rotated in relation to the camera, move it to the ‘middle of the screen’ and zoom it out to fit the RF projection somehow (it’s not clear to me exactly how). I would also need to adjust the transform matrix to account for the perspective from the first framework.

I’m sure there’s a clean way to do this using the coordinates and matrices that I already have, but I would need a deeper understanding of how the math works here :).

I could use: the model matrices (modelMatrix, modelViewMatrix and their inverses), the view matrix, the camera matrix (the camera’s model matrix), the camera perspective matrix, and the camera properties (fov, position, rotation in relation to the world).

There is similar work done in the three.js framework, in CSS3DRenderer, which is basically doing the same thing (applying a transform to an HTML DIV element to render it ‘in the same perspective projection’ used to render things with WebGL), but I couldn’t use it as it is and I’d like to understand how things work.
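As far as I can tell, the core of that idea is the following (a simplified sketch of my understanding, not the actual CSS3DRenderer source; the real renderer also deals with the y-flip and the element’s own origin, which I’m leaving out here):

    // 1) Give the container a CSS `perspective` distance that matches the camera fov.
    //    With a vertical fov, the distance d satisfies tan(fov/2) = (height/2) / d.
    function cssPerspective(fovDegrees, viewportHeightPx) {
      return 0.5 * viewportHeightPx / Math.tan(0.5 * fovDegrees * Math.PI / 180);
    }

    // 2) Turn a 4x4 matrix (here row-major [row][col]) into a CSS matrix3d() string,
    //    which expects its 16 values in column-major order.
    function toMatrix3d(m) {
      const cols = [];
      for (let c = 0; c < 4; c++)
        for (let r = 0; r < 4; r++) cols.push(m[r][c]);
      return `matrix3d(${cols.join(',')})`;
    }

If I read it correctly, the per-element transform is then something like the camera’s inverse world matrix multiplied by the element’s world matrix, written out as a matrix3d() string, with the container’s perspective doing the divide.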

Again, the question is: what’s the transform matrix that would get PIC rendered on screen the same way RF is rendered?

Even though it could do the trick, the question/solution from Finding the Transform matrix from 4 projected points (with Javascript) is different because it doesn’t handle the normals of the cube face. This question is about a face in 3D space that can be oriented either way, front or back towards the camera. Basically, here we have a 3D transform, while the other question/solution is about 2D projections.

smike
  • What you describe as "PIC" is commonly known in 3D frameworks as a "texture", and you want to "map" it onto the "RF" rectangle. Lots of 3D tutorials are available on the web. If you want to do it all on your own, do you have access to the view and projection matrices used for the RF? – Ripi2 Sep 26 '19 at 16:13
  • I have access to all the matrices and the code for the first framework, and the sources too.

    About mapping the PIC to RF, that's one way of saying it, maybe. Actually I need to match the result of transform(PIC) to the perspective projection of RF onto the screen.

    – smike Sep 26 '19 at 18:06
  • If you use some graphics implementation (OpenGL, DirectX) then there are commands that make your life easy. If not, then you need to transform the PIC coordinates to view space, rotate/move/scale it so it lays out exactly onto the RF, and then apply the same projection as you did for the RF. Or you can follow the same approach as "textures", that is, work with projected texture coordinates and retrieve, for each pixel (most similar to a "fragment" in OGL parlance) in RF, the color from the PIC. – Ripi2 Sep 26 '19 at 23:41
  • Thanks Ripi2, would that be:
    • M4_identity * perspMatrix — apply the perspective
    • viewMatrix — take PIC_in_world to camera space
    • modelViewMatrixInv — PIC_center_scaled -> PIC_in_world
    • M4_identity.scaled(2 / width, 2 / height).translated(screen.w/2, screen.h/2) — move PIC to the center and scale it to the 2x2 size of the projection screen?
    (I've sketched this chain out after the comment thread below.)
    – smike Sep 27 '19 at 02:08
  • I've just tried the formula from the previous comment. Something's wrong there; even the rect model translation doesn't work properly. The rotation looks almost OK. If I keep the perspective multiplication I can't see anything on the screen; if I remove it, PIC gets positioned at the world center as it should and the rotation apparently works. I would upload screenshots but apparently I can't do that here. – smike Sep 28 '19 at 00:37
  • @amd thank you, I've just adapted the code with the 2D projection and it works as expected. The issue is that there is too much computation going on, and I think using the precomputed matrices that I already have could improve the performance. – smike Sep 30 '19 at 04:54
  • @Ripi2, could you add an answer with the matrix computed similarly to the texture mapping technique you've mentioned in the previous comment? – smike Oct 04 '19 at 15:40
  • amd, Yanior Weg, nmasanta, José Carlos Santos, Ak19, please be serious. This is not a duplicate of the other question: I'm asking for a 3D transform, the other one is about a 2D transform. – smike Oct 07 '19 at 19:59
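For reference, here is the chain discussed in the comments above, composed into a single matrix. It is only a sketch under my own assumptions (row-major matrices, column vectors, a fitPicToRF step that scales PIC's pixel rectangle into RF's local rectangle); none of these helpers come from either framework, and the divide by w still has to happen afterwards.

    function mat4Multiply(a, b) {                 // row-major [row][col], returns a * b
      const out = [[0,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]];
      for (let r = 0; r < 4; r++)
        for (let c = 0; c < 4; c++)
          for (let k = 0; k < 4; k++) out[r][c] += a[r][k] * b[k][c];
      return out;
    }

    // total = ndcToScreen * perspectiveMatrix * viewMatrix * modelMatrix * fitPicToRF
    //   fitPicToRF:        PIC pixels (origin top-left) -> RF's local rectangle
    //   modelMatrix:       RF local -> world
    //   viewMatrix:        world -> camera space
    //   perspectiveMatrix: camera space -> clip space
    //   ndcToScreen:       e.g. scale by (w/2, -h/2, 1) and translate by (w/2, h/2, 0);
    //                      because its offsets sit in the w column, it still maps
    //                      correctly after the later divide by w.
    function composePicTransform(ndcToScreen, perspectiveMatrix, viewMatrix, modelMatrix, fitPicToRF) {
      return [ndcToScreen, perspectiveMatrix, viewMatrix, modelMatrix, fitPicToRF].reduce(mat4Multiply);
    }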

0 Answers