11

More out of curiosity than for any practical purpose, I'm wondering what is lacking from older GPUs that causes them to be unable to support Vulkan.

I know some argue that certain hardware, such as nVidia's Fermi series of GPUs, could support Vulkan but that the vendors simply won't implement it for business reasons. Even disregarding that, though, there is obviously some most-recent generation of hardware that genuinely cannot support Vulkan, and I'm curious what it lacks. Take, say, nVidia's Tesla GPUs, AMD's TeraScale (VLIW) GPUs, or Intel's Gen8 hardware.

I'm mostly curious to know whether the features in Vulkan that preclude support for these GPUs are somewhat "peripheral" to the API (like, say, required support for some particular texture format that the API could just as well have omitted, but required anyway to establish a reasonable baseline), or whether they are truly central to the Vulkan model.

Dolda2000
  • Is your question about older hardware or recent hardware? The title says "older" but your actual question text seems like you're asking more about recent models. I suspect the answer will be different between the two. – user1118321 May 04 '17 at 06:07
  • I believe that some of the required features (including compute! and coarse occlusion queries) and some of the required limits may be out of reach. – ratchet freak May 04 '17 at 10:37
  • @user1118321: I mostly mean the most recent GPUs that don't support Vulkan. By "older" I just meant "old enough to not support it". – Dolda2000 May 04 '17 at 12:06
  • @Dolda2000: I don't think the question is well-founded. You list three pieces of hardware, but you assume that they lack Vulkan implementations for non-"business reasons". Do you have a basis for such an assumption? – Nicol Bolas May 04 '17 at 14:45
  • @NicolBolas: Sorry, it seems I didn't make myself clear. I really just meant whatever most recent hardware technically cannot support Vulkan. Those three specific architectures were just my best guess as to what that might be, but I'm entirely open to having guessed wrong. – Dolda2000 May 04 '17 at 17:37
  • @Dolda2000: But you already know why GL 3.x/D3D10-class hardware can't support Vulkan: because they don't have very obvious hardware features that Vulkan requires. So the only hardware you seem to be talking about is GL 4.x/D3D11-class hardware that doesn't have Vulkan implementations. – Nicol Bolas May 04 '17 at 18:08
  • @NicolBolas What are these "very obvious hardware features"? I think this is an excellent question. Also, I'm not sure I follow that GL 3.x-incapable hardware can't support Vulkan, since there exist GPUs that support Vulkan yet can't support GL 3.2 because they lack geometry shaders. – aces May 05 '17 at 01:52
  • @aces: The "very obvious features" in question would be compute shaders and image load/store. Obvious due to the fact that it's right there in the Vulkan specification that these are not optional. The only GPUs that can provide these yet aren't GL 3.x capable would be mobile GPUs. But they don't support OpenGL of any version; they support OpenGL *ES* versions. – Nicol Bolas May 05 '17 at 02:10
  • @NicolBolas You should write this as an answer :), as well as explain what hardware changes are needed to support these. – aces May 05 '17 at 02:31
  • It also raises the question of in what way, e.g., Tesla GPUs don't support compute shaders. Since they support CUDA, why can they not support Vulkan compute shaders? I.e., what actual hardware feature do they lack that prevents them from running compute shaders? – Dolda2000 May 05 '17 at 11:57
  • @Dolda2000 But don't they in turn (theoretically) lack all the other things apart from compute shaders, like all the features related to rasterization and whatever graphics stuff? – Christian Rau Oct 01 '17 at 18:06
  • @ChristianRau: I don't know, that's why I'm asking. :) That being said, though, I'm not aware of any rasterization features that Vulkan requires which haven't been in OpenGL for a long time. – Dolda2000 Oct 02 '17 at 00:54
  • Sure, but do Teslas even support (reasonably modern) OpenGL to begin with? Afterall, they're deliberately non-graphics. – Christian Rau Jun 17 '19 at 11:07
  • @ChristianRau: I was referring to the Tesla microarchitecture, not the workstation GPU brand (blame nVidia for the confusion). The Tesla architecture was used from the GeForce 8xxx series up to the 3xx series. – Dolda2000 Jun 19 '19 at 21:33

1 Answer

7

So, I'll start by saying that I'm not a driver developer, but I have read many comments and docs on this subject.

First of all, Khronos's own slides from the announcement were mentioning a rather vague "any OpenGL ES 3.1 GPU" (or desktop GL 4.X) as the requirement, which suggests that something added around that mark is the secret. Compute shaders are definitely [at least one of] those things: this is confirmed here by an Intel developer, and in the spec itself, which allows no "unsupported" value for maxComputeWorkGroupSize and contains this likely smoking gun:

If an implementation exposes any queue family that supports graphics operations, at least one queue family of at least one physical device exposed by the implementation must support both graphics and compute operations.
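To see what that guarantee means in practice, here is a minimal C sketch (error handling mostly elided; an illustration of the query, not production code) that enumerates every physical device's queue families and reports which of them advertise graphics and compute support:

```c
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

int main(void)
{
    /* Minimal instance; no layers or extensions are needed just to probe. */
    VkInstanceCreateInfo ici = {0};
    ici.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    VkInstance inst;
    if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) {
        fprintf(stderr, "no Vulkan instance available\n");
        return 1;
    }

    uint32_t ndev = 0;
    vkEnumeratePhysicalDevices(inst, &ndev, NULL);
    VkPhysicalDevice *devs = malloc(ndev * sizeof *devs);
    vkEnumeratePhysicalDevices(inst, &ndev, devs);

    for (uint32_t i = 0; i < ndev; i++) {
        uint32_t nfam = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(devs[i], &nfam, NULL);
        VkQueueFamilyProperties *fam = malloc(nfam * sizeof *fam);
        vkGetPhysicalDeviceQueueFamilyProperties(devs[i], &nfam, fam);
        for (uint32_t j = 0; j < nfam; j++) {
            VkQueueFlags f = fam[j].queueFlags;
            /* Per the quote above: if any family advertises GRAPHICS, some
               family (possibly the same one) must also advertise COMPUTE. */
            printf("device %u, family %u: graphics=%u compute=%u\n", i, j,
                   (unsigned)!!(f & VK_QUEUE_GRAPHICS_BIT),
                   (unsigned)!!(f & VK_QUEUE_COMPUTE_BIT));
        }
        free(fam);
    }
    free(devs);
    vkDestroyInstance(inst, NULL);
    return 0;
}
```

On any conformant implementation that lists a graphics-capable family, at least one printed line will show both bits set; a driver for hardware without compute simply cannot satisfy that.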

In addition to that, though, there could be other mandatory capabilities (such as indirect draws or image load/store, mentioned in the comments above too), but I have some difficulty interpreting the document. With the exception of robustBufferAccess, it seems that just about everything else big you could imagine is optional (from anisotropic filtering, to geometry and tessellation shaders, to any number format other than good old float32).
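The same contrast is visible at the API level: the feature toggles are queryable booleans that an implementation may leave at VK_FALSE, while the compute limits always carry a meaningful minimum. A small sketch, assuming a `VkPhysicalDevice` handle `dev` (e.g. one of the `devs[i]` from the previous snippet):

```c
/* Query the coarse feature toggles and the compute limits for `dev`. */
VkPhysicalDeviceFeatures feat;
VkPhysicalDeviceProperties props;
vkGetPhysicalDeviceFeatures(dev, &feat);
vkGetPhysicalDeviceProperties(dev, &props);

printf("robustBufferAccess: %u (the one required feature)\n", feat.robustBufferAccess);
printf("geometryShader:     %u (optional)\n", feat.geometryShader);
printf("tessellationShader: %u (optional)\n", feat.tessellationShader);
printf("samplerAnisotropy:  %u (optional)\n", feat.samplerAnisotropy);
printf("shaderFloat64:      %u (optional)\n", feat.shaderFloat64);

/* The compute limits, by contrast, have no "not supported" value: every
   conformant implementation must report at least 128x128x64 here. */
printf("maxComputeWorkGroupSize: %ux%ux%u\n",
       props.limits.maxComputeWorkGroupSize[0],
       props.limits.maxComputeWorkGroupSize[1],
       props.limits.maxComputeWorkGroupSize[2]);
```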

Even in this hypothetically simplest case, though, it's not all fine and dandy. Not having virtual memory will make your life hell, for example, and the lack of other architectural features may basically nullify whatever theoretical speed advantage the API has.

These links should also answer your questions about ATi cards. It's likely the same story for nVidia (aside from perhaps some microbenchmarks, I'm not sure how workable Fermi's two-years-delayed, bare-bones DX12 support eventually turned out to be). As for Intel, I don't know what you are talking about: Gen8 has been supported since day one, and in fact even Gen7 shakily works now.

Going even further back, I guess the question then becomes "what's so special about compute shaders, again?"

And, well, the sticking point probably isn't the specific feature itself (it's not as though you couldn't do some sort of general-purpose computation even before the unified shader model), but rather the whole GPU architecture needed to support it. Check this for some backstory.

Before DX11 raised the bar, for example, consider that you couldn't even run more than a single "task" on a Tesla GPU without a full context switch (with comparable limitations likely existing for TeraScale 1 and Gen6 as well). And may God save us from the days when pixel and vertex units were separate hardware.

Fun fact: it turns out that if you are a low-end but kind-of-modern mobile GPU (rather than an old desktop card from a time when certain concepts hadn't even been invented yet), engineers can cut you down so much that even sporting just D3D feature level 9_3 is still potentially enough to support Vulkan.

EDIT: for similarly twisted reasons, it can also happen that while a GPU is only GLES 2.0 compliant (and compute shaders themselves aren't supported at the moment, even though the architecture would be capable of them), a restricted version of Vulkan is still possible. It doesn't give you everything, but it can still bring benefits over just the "old APIs".

mirh