
I'm new to both gamedev and Blender, and there's something I can't shake:

In Blender, a single render (even using the more advanced Cycles renderer) can take up to 45 seconds on my machine. But in a game, you can have amazing graphics rendered continuously, multiple times per second, in real time.

So what's the disconnect between how "slow" Blender's renders seem to be and how game engines achieve real-time (or near-real-time) rendering?

Vaillancourt
smeeb
  • Real-time rendering is a huge topic in itself; there are many books written about it (including "Real-Time Rendering"). And renderers like Cycles work completely differently from the 3D renderers in game engines – you can't really compare them. – UnholySheep Feb 03 '17 at 12:53
  • @UnholySheep Of course you can compare them. How else would anyone explain the difference, to answer the question? – user985366 Feb 03 '17 at 22:12
  • Blender has to do everything; a video game's engine only has to do a limited set of things that were selected because they could be made sufficiently performant. If some rendering task is hard, Blender tries to optimize it as best it can, while a video game engine simply omits it. – Nat Feb 03 '17 at 23:28
  • Is it possible that Blender does not use the GPU in your system? Besides Blender not making many approximations, the difference between CPU and GPU performance can be devastating for certain tasks. – Martin Ueding Feb 04 '17 at 10:37
  • @MartinUeding 45 seconds for a single frame is nothing unusual for Blender if the scene is complex or the settings are particularly high, even with GPU rendering. – Pharap Feb 04 '17 at 18:22
  • Btw, there is a Blender Stack Exchange! blender.stackexchange.com – 10 Replies Feb 04 '17 at 22:26
  • @10Replies But this question would not be topical on that site. – GiantCowFilms Feb 04 '17 at 22:29
  • @GiantCowFilms I think it could be worded in a way that is on topic. How about "What shortcuts does the BGE take to get real-time rendering?" or something of the sort. – 10 Replies Feb 04 '17 at 22:30
  • @10Replies: While the OP does mention Blender, the question essentially boils down to why real-time game engines seem to render 3D scenes faster than approximately-photo-realistic 3D renderers (such as Blender, but also many others). Note that this is also the question answered by the accepted answer. With that in mind, I agree the question is more on-topic here on [gamedev.se], where questions about general game development technology can be asked, rather than on [blender.se], where questions are more specific to Blender in particular. – O. R. Mapper Feb 04 '17 at 23:22
  • @O.R.Mapper Hence why I didn't flag for migration. I just thought it might be useful to the OP. – 10 Replies Feb 04 '17 at 23:23
  • There are some additional answers on the Blender SE site that might be helpful. – A C Feb 05 '17 at 00:01
  • Is it possible that Blender does not use the GPU in your system? Blender can be configured to use the GPU, and it does give a speed improvement. However, it doesn't take a 45-minute render down to 1/60 of a second. :) – Nick Gammon Feb 05 '17 at 07:34
  • For some graphical examples, see for example DOOM (2016) - Graphics Study: How a Frame is Rendered, as well as the links at the bottom of the article. – Joel Purra Feb 05 '17 at 11:10
  • I guess the secret here is that amazing doesn't have to be precise. There are fast approximations for the math used in 3D rendering, like InvSqrt. – Dmitry Grigoryev Feb 06 '17 at 17:37
  • How do you create a scene in Blender in the first place? Using a real-time rendering engine. That's how I explain it to myself. Also note that you can create and render a scene in Blender on generic hardware, but most games would struggle without a dedicated graphics card. Also, compare the CUDA/Cycles "Rendered" mode of working in Blender with generic, non-CUDA, CPU-based rendering. Essentially, non-real-time rendering is about the end result justifying the means, while real-time rendering is more about getting there in the first place. – Ate Somebits May 13 '19 at 13:48

2 Answers


Real-time rendering, even modern real-time rendering, is a grab-bag of tricks, shortcuts, hacks and approximations.

Take shadows for example.

We still don't have a completely accurate and robust mechanism for rendering real-time shadows from an arbitrary number of lights and arbitrarily complex objects. We do have multiple variants of shadow-mapping techniques, but they all suffer from the well-known problems with shadow maps, and even the "fixes" for those are really just collections of work-arounds and trade-offs. (As a rule of thumb, if you see the terms "depth bias" or "polygon offset" in anything, it's not a robust technique.)
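
To make the "depth bias" remark concrete, here is a minimal sketch (hypothetical names, not any real engine's API) of the core shadow-map test. The bias is the fudge factor that trades shadow acne (surfaces incorrectly shadowing themselves due to limited depth precision) against "peter-panning" (shadows detaching from objects) — a work-around, not a fix:

```python
# Sketch of a shadow-map lookup, assuming a depth map was already
# rendered from the light's point of view.

def in_shadow(shadow_map, light_space_xy, fragment_depth, bias=0.005):
    """Return True if the fragment is occluded from the light."""
    stored_depth = shadow_map[light_space_xy]  # nearest depth the light saw
    # Without the bias, finite depth-map precision makes surfaces
    # shadow themselves (shadow acne).
    return fragment_depth - bias > stored_depth

# Toy usage: a one-texel "shadow map" storing depth 0.5.
shadow_map = {(0, 0): 0.5}
print(in_shadow(shadow_map, (0, 0), 0.6))  # fragment behind the occluder
print(in_shadow(shadow_map, (0, 0), 0.5))  # same surface: bias suppresses acne
```

Tuning `bias` per scene is exactly the kind of fragile trade-off the answer is describing.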

Another example of a technique used by real-time renderers is precalculation. If something (e.g. lighting) is too slow to calculate in real time (and this can depend on the lighting system you use), we can pre-calculate it and store it; then we can use the pre-calculated data at run time for a performance boost that often comes at the expense of dynamic effects. This is a straight-up memory-vs-compute trade-off: memory is often cheap and plentiful, compute is often not, so we burn the extra memory in exchange for a saving on compute.
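
A tiny illustration of that memory-vs-compute trade-off (illustrative code, standing in for real baking such as lightmaps): pay once, offline, to fill a lookup table, then answer each per-frame query with a cheap array read instead of a full computation:

```python
import math

TABLE_SIZE = 256
# "Offline" step: bake a sine table (a stand-in for any expensive
# quantity, e.g. baked lighting). Cost is paid once, stored in memory.
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(angle):
    """Approximate sin(angle) with a table lookup instead of computing it."""
    index = int(angle / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SINE_TABLE[index]

# Close enough for many real-time purposes, at the cost of 256 floats:
print(abs(fast_sin(1.0) - math.sin(1.0)) < 0.03)
```

The dynamic-effects cost shows up the same way in real engines: baked data is only valid for the scene configuration it was baked from.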

Offline renderers and modelling tools, on the other hand, tend to focus more on correctness and quality. Also, because they work with dynamically changing geometry (such as a model you're in the middle of building), they must often recalculate things, whereas a real-time renderer works with a final version that has no such requirement.

Maximus Minimus
  • Another point to mention is that the amount of computation used to generate all the data a game will need to render views of an area quickly may be orders of magnitude greater than the amount of computation required to render one view. If rendering views of an area would take one second without any precalculation, but some precalculated data could cut that to 1/100 second, spending 20 minutes on the precalculations could be useful if views will be needed in a real-time game; but if one just wants a ten-second 24fps movie, it would have been much faster to spend four minutes generating the 240 required views at a rate of one per second. – supercat Feb 03 '17 at 16:53
  • @supercat And because of this your renders are pretty much free of hassle and you gain much control over the process. You could use a game engine to render... if you were ready to sacrifice features. But as you said, it's not worth it. – joojaa Feb 05 '17 at 07:18
  • One striking example of this that I can recall is the original Quake engine (~1996), which was able to achieve relatively mind-blowing real-time 3D graphics on very limited machines using combinations of extremely time-consuming pre-calculation techniques. BSP trees and pre-rendered lighting effects were generated ahead of time; designing a level for that engine typically involved hours (usually overnight) of waiting for map compilation tools to finish. The trade-off was, essentially, decreased rendering times at the expense of authoring time. – Jason C Feb 07 '17 at 05:54
  • (The original Doom engine [1993] had similar precalculations. Marathon may have as well, but I don't recall, I remember building Marathon levels but I can't remember what was involved.) – Jason C Feb 07 '17 at 06:02
  • I focussed on lighting and shadows here, and raytracing naturally falls out of that discussion. There are other examples, some of which also suggest raytracing (refraction/reflection), some of which don't (order-independent translucency). – Maximus Minimus Feb 07 '17 at 06:22

The current answer has done a very good job of explaining the general issues involved, but I feel it misses an important technical detail: Blender's Cycles render engine is a different type of engine from the kind most games use.

Typically games are rendered by iterating through all the polygons in a scene and drawing them individually. This is done by 'projecting' the polygon coordinates through a virtual camera in order to produce a flat image. The reason this technique is used for games is that modern hardware is designed around this technique and it can be done in realtime to relatively high levels of detail. Out of interest, this is also the technique that was employed by Blender's previous render engine before the Blender Foundation dropped the old engine in favour of the Cycles engine.
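
The 'projection' step described above can be sketched in a few lines (a deliberately stripped-down pinhole-camera model; real pipelines use 4x4 matrices, clipping, and a viewport transform):

```python
# Project a camera-space 3D point onto a flat image plane.

def project(vertex, focal_length=1.0):
    """Map a camera-space point (x, y, z) to 2D image-plane coordinates."""
    x, y, z = vertex
    if z <= 0:
        raise ValueError("vertex is behind the camera")
    # Perspective divide: points farther away land closer to the centre.
    return (focal_length * x / z, focal_length * y / z)

# A triangle is rasterized by projecting its three vertices like this,
# then filling in the pixels between them (omitted here).
print(project((1.0, 2.0, 4.0)))  # -> (0.25, 0.5)
```

GPUs are built to run exactly this kind of per-vertex transform, plus the pixel fill, massively in parallel — which is why this approach hits real-time rates.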

[Image: polygon rendering]

Cycles, on the other hand, is what is known as a raytracing engine. Instead of iterating over the polygons and rendering them individually, it casts virtual rays of light out into the scene (one for every pixel in the final image), bounces each ray off several surfaces, and then uses that data to decide what colour the pixel should be. Raytracing is a very computationally expensive technique, which makes it impractical for real-time rendering, but it is used for rendering images and videos because it provides extra levels of detail and realism.
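
The core of the "cast a ray per pixel" idea is an intersection test like the following (a minimal sketch against a single sphere; a real path tracer such as Cycles also bounces each ray many times and averages many samples per pixel, which is where the cost explodes):

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic).
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearest intersection
    return t if t > 0 else None

# One "pixel": a ray from the origin straight down the z axis
# toward a unit sphere centred at z = 5.
print(hit_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```

Multiply a test like this by every pixel, every object, every bounce, and every sample, and the gap between 45 seconds per frame and 60 frames per second stops being mysterious.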

[Image: raytracing]


Please note that my brief descriptions of raytracing and polygon rendering are highly stripped down for the sake of brevity. If you wish to know more about the techniques I recommend that you seek out an in-depth tutorial or book as I suspect there are a great many people who have written better explanations than I could muster.

Also note that there are a variety of techniques involved in 3D rendering and some games do actually use variations of raytracing for certain purposes.

Pharap
  • +1 for a very good point; I deliberately didn't go down the rabbit hole of raytracing vs rasterization, so it's great to have this as a supplemental. – Maximus Minimus Feb 03 '17 at 19:28
  • This answer gets more to the heart of the difference. Game engines perform rasterization (forward or deferred) while offline renderers (like Blender, Renderman, etc.) perform ray-tracing. Two completely different approaches to drawing an image. – ssell Feb 03 '17 at 19:31
  • @LeComteduMerde-fou As gamedev is aimed at game developers, I felt a supplemental technical explanation would be of benefit to the more technically inclined reader. – Pharap Feb 03 '17 at 19:33
  • It's worth pointing out that Blender will probably have more engines in future, including a realtime PBR engine called EEVEE. – z0r Feb 06 '17 at 05:29
  • I wonder when hardware will turn from rasterization to ray tracing. I believe that ray tracing is an embarrassingly parallel problem, which, given that silicon is not speeding up but just increasing in amount, seems likely the future of rendering. – Chii Feb 06 '17 at 10:54
  • @ssell True, but it's not just about ray-tracing – even without ray-tracing, even with GPU rendering, Blender's rendering is usually much more detailed and slower. This mostly has to do with the focus on correctness – better texture filtering and resolution, anti-aliasing, lighting, shadow mapping, Z-accuracy, quads, bi-directional surfaces, large polygon counts, higher-resolution output, accurate bump-mapping, lack of pre-calculated maps, morphing, accurate kinematics... it's a long list of features that game engines lack or fake their way through. – Luaan Feb 06 '17 at 12:33
  • @Chii https://www.imgtec.com/blog/real-time-ray-tracing-on-powervr-gr6500-ces-2016/ -- And I don't recall the manufacturer now but I do remember there being scalable (but expensive and not great at the time) specialized real-time raytracing hardware at least a decade ago, a bit ahead of its time though. Also, apparently (inferred from that blog) there is work on hybrid ray-tracing / rasterization engines as well. – Jason C Feb 07 '17 at 06:05
  • @Pharap FYI, "rasterization engines" (or "rasterizing engines", sometimes) are the official name of the first engine you described, as contrasted with "ray-tracing". See also the issue-ridden wikipedia article, https://en.wikipedia.org/wiki/Rasterisation. – Jason C Feb 07 '17 at 06:09
  • @Chii I misremembered. I was thinking of ART VPS; it was just acceleration, not real-time. – Jason C Feb 07 '17 at 06:12
  • It has been pointed out that Cycles' "path" tracing works from the illuminated object to find the sources of illumination that can affect it. It uses a random technique which can produce dreaded "fireflies" when adjacent pixels "see" different things. The other standard way to do the same thing, "ray" tracing, goes the opposite way – tracing from a light-source to determine what it hits. A major difference between them, therefore, is how they handle indirect lighting – where light bounces off something else on its way to a particular destination. "Rasterizing" is something else again. – Mike Robinson Jan 25 '19 at 16:12