2

A scene can be pre-rendered from a single camera position.

Can a scene be efficiently rendered for a fixed set of positions and lines that the camera can move through?

Consider games like escape room games where the camera only assumes fixed locations. Can the transition between those positions also be rendered more efficiently than frame-by-frame?

Suppose I want the player to move freely along a fixed linear (or curved) track. How can I show the maximum render quality?

Elliot JJ
  • 133
  • 4
  • Yes, no, possibly. Not as normal physically based 3D, but traditional animation has done this for ages. It's a trick, though, that relies on a non-physically-correct image and a stylized look and feel. Hard to say anything useful, as the question is a bit open-ended – joojaa Nov 26 '17 at 15:14
  • Sure, you can pre-render the entire sequence of images along some dense sampling of the path, what's the problem? – Rahul Nov 26 '17 at 19:36
  • @Rahul, is it any faster than Render Time x number of samples? – Elliot JJ Nov 26 '17 at 19:43
  • 1
    What "efficiency" are you worried about? Pre-rendering image sequences has a big up-front cost per scene but a tiny cost for the client. Are you trying to reduce the amount of data you need to give the client, or the amount of up-front work (e.g. because the scene is generated afresh each time)? – Dan Hulme Nov 27 '17 at 10:17
  • I'd look into making BSP trees that were fourth dimensional - the fourth dimension being time. – Alan Wolfe Dec 01 '17 at 20:40
  • Bleeding edge question :) There is work currently going on with Google Seurat, but unfortunately not too much info out yet. See e.g. https://www.roadtovr.com/preview-google-seurat-ilm-xlab-mobile-vr-rendering/ – Mikkel Gjoel Dec 06 '17 at 13:32
  • I'd think rendering images with depth buffers and using real time height field raymarching for reconstruction would be a decent solution. of course, you might miss some information between "samples" – Sebastián Mestre Dec 29 '17 at 15:54

1 Answer

1

Would Diego Nehab et al.'s "Accelerating Real-Time Shading with Reverse Reprojection Caching" be the sort of thing you are looking for?

Paper Link and/or Slide presentation from Graphics Hardware 2007

IIRC, the idea is to reuse the previous frame's rendered image in the current frame rather than re-running a potentially expensive shader for every pixel: each new pixel is reprojected into the previous frame, and if the cached sample belongs to the same surface, its shaded value is reused.
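To make the idea concrete, here is a minimal CPU-side sketch of reverse reprojection caching in Python/NumPy. It is an illustration only, not the paper's GPU implementation: it assumes you already know each pixel's world-space surface position (in a real renderer this comes from the G-buffer), projects it with the previous frame's view-projection matrix, and reuses the cached color when the cached depth agrees. All names (`reproject_cache`, `eps`, etc.) are my own.

```python
import numpy as np

def reproject_cache(prev_color, prev_depth, world_pos, prev_view_proj, eps=1e-3):
    """Reverse reprojection caching sketch (after Nehab et al. 2007).

    prev_color: H x W x 3 cached shaded colors from the previous frame.
    prev_depth: H x W cached NDC depths from the previous frame.
    world_pos:  H x W x 3 world-space positions of the surfaces visible
                in the *current* frame.
    prev_view_proj: 4x4 view-projection matrix of the previous frame.

    Returns (color, hit_mask). Pixels where hit_mask is False were not
    found in the cache (off-screen or occluded) and must be shaded anew.
    """
    h, w, _ = world_pos.shape
    # Project current surface points into the previous frame's clip space.
    p = np.concatenate([world_pos, np.ones((h, w, 1))], axis=-1)
    clip = p @ prev_view_proj.T
    ndc = clip[..., :3] / clip[..., 3:4]
    # NDC [-1, 1] -> integer pixel coordinates in the cached frame.
    px = np.round((ndc[..., 0] + 1) * 0.5 * (w - 1)).astype(int)
    py = np.round((ndc[..., 1] + 1) * 0.5 * (h - 1)).astype(int)
    inside = (px >= 0) & (px < w) & (py >= 0) & (py < h)
    px_c = np.clip(px, 0, w - 1)
    py_c = np.clip(py, 0, h - 1)
    # Depth test: the cached sample must lie on the same surface,
    # otherwise the point was occluded in the previous frame.
    same_surface = np.abs(prev_depth[py_c, px_c] - ndc[..., 2]) < eps
    hit = inside & same_surface
    color = np.where(hit[..., None], prev_color[py_c, px_c], 0.0)
    return color, hit
```

For a camera that barely moves between fixed positions (the escape-room case in the question), most pixels hit the cache and only the misses need full shading; in practice you would also periodically refresh cached pixels so view-dependent shading does not go stale.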

At the conference, I did ask Diego if he'd considered doing 'bidirectional' reuse (i.e. rendering frames out of order somewhat akin to h.264) but I can't recall his answer. I think in your proposed use case it might be of value.

Simon F
  • 4,241
  • 12
  • 30