The Second Annual Unofficial OpenGL Feature Awards!

I hereby hand out the following awards:

We (Finally) Did What We Said We Were Gonna Award

The conformance test suite.

I’ll just quote Tychus Findlay: “Hell, it’s about time!”

One Little Mistake Award

ARB_multi_bind

This was good functionality, until I saw glBindImageTextures. I can’t really think of a more useless way to specify that. It applies the “defaults”, based on the texture target. Which means that an array texture will always be bound layered.

OK, to be fair, you can use texture views to effectively create single textures whose defaults match what you would pass to glBindImageTexture. And then you just bind them all in one go.
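A rough sketch of that workaround (the texture names and the layer index here are invented for illustration): carve a single-layer view out of an array texture, so that glBindImageTextures’ “defaults” bind exactly the layer you wanted.

```c
GLuint arrayTex;   /* assume: an existing GL_TEXTURE_2D_ARRAY with GL_RGBA8 storage */
GLuint layerView;

glGenTextures(1, &layerView);
/* View layer 3 of the array as a plain 2D texture. */
glTextureView(layerView, GL_TEXTURE_2D, arrayTex, GL_RGBA8,
              0, 1,   /* minlevel, numlevels */
              3, 1);  /* minlayer, numlayers */

/* Now the multi-bind defaults (level 0, non-layered) do the right thing. */
GLuint images[] = { layerView };
glBindImageTextures(0, 1, images);
```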

3D Labs Is Really, Really Dead Award

ARB_enhanced_layouts

So, not only can we specify uniform locations in the shader, we can even specify packing behavior. To the point that we can steal components from other vectors and make them look just like other variables.

That one must be a nightmare to implement. I hope the ARB has a really comprehensive conformance test for it…

Oh, and this also wins Most Comprehensive Extension Award. It lets us steal components from other interface elements, specify the uniform/storage block layout, define locations for interface block members, and define transform feedback parameters directly in the shader.
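To illustrate the scope of it, here is a sketch of those qualifiers in GLSL (variable names are made up, and this is just my reading of the extension, not copied from the spec):

```glsl
// Component stealing: pack a vec3 and a float into one input location.
layout(location = 0)                in vec3 position;   // uses components x, y, z
layout(location = 0, component = 3) in float pointSize; // steals component w

// Explicit offsets for uniform block members.
layout(std140) uniform Params {
    layout(offset = 0)  vec4  color;
    layout(offset = 16) float intensity;
};

// Transform feedback configured entirely in the shader.
layout(xfb_buffer = 0, xfb_offset = 0) out vec4 capturedPos;
```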

Is OpenGL Still Open? Award

ARB_bindless_texture

So. NVIDIA comes out with NV_bindless_texture. And unlike bindless attributes and pointers in GLSL, they actually patent this.

And now it’s an ARB extension. It’s not core… but it’s not a proprietary extension. Yet anyone who implements it will almost certainly be stepping on US20110242117, and therefore must pay whatever NVIDIA says they have to pay. Unless NVIDIA has some agreement with the ARB, granting anyone a license to implement ARB_bindless_texture without paying a fee.
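For context, the bindless flow looks roughly like this (a valid texture `tex` with immutable storage and sampling state is assumed, and `samplerUniformLoc` is an invented uniform location):

```c
/* Get a 64-bit handle and make the texture resident; no binding needed. */
GLuint64 handle = glGetTextureHandleARB(tex);
glMakeTextureHandleResidentARB(handle);

/* The handle can now be handed to a shader, e.g. via a uniform,
   and sampled without the texture ever being bound. */
glUniformHandleui64ARB(samplerUniformLoc, handle);

/* When finished, release residency. */
glMakeTextureHandleNonResidentARB(handle);
```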

The really disconcerting part is that the patent encumbrance issue… isn’t mentioned in the spec. Other extensions like EXT_texture_compression_s3tc mention their patent issues. But not this one.

Last Kid Picked for Baseball Award

EXT_direct_state_access

When bindless texturing gets the nod from the ARB, and this doesn’t, something interesting is happening behind the scenes. How much does the ARB not want this in core GL, for them to deal with sparse and bindless textures first?
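For anyone who hasn’t followed it: what DSA buys you is editing an object without disturbing binding state. A minimal sketch (texture name, dimensions, and pixel data assumed):

```c
/* Classic bind-to-edit: clobbers whatever was bound to the unit. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* DSA: the same operation, addressed directly, no binding required. */
glTextureSubImage2DEXT(tex, GL_TEXTURE_2D, 0, 0, 0, w, h,
                       GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```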

Then again, apparently NVIDIA wants to support DSA so badly that they may be updating the DSA extension with new stuff… and not telling anyone else who’s implementing it. If true, that’s not playing fair, guys. There’s clearly some kind of huge fight happening around this functionality within the ARB.

So I hope nobody’s holding their breath on this one.

Fragmenting The World Award

ARB_compute_variable_group_size

I understand why ARB_bindless_texture and ARB_sparse_texture aren’t core. That reason being (besides the patent issues) that we don’t want IHVs to have to say that &lt;insert hardware here&gt; can do 4.3, but not 4.4. There are lower classes of 4.x hardware that just can’t do this stuff. So we leave them as extensions until such time as the ARB decides that the market penetration of higher-end hardware is sufficient to incorporate them.

Or until we finally decide to have 5.0 (i.e., when Microsoft decides to go to D3D 12).

But compute_variable_group_size really seems like something any 4.x-class hardware should be able to handle. Something similar goes for ARB_indirect_parameters.
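The whole of the extension fits in one call (counts and sizes here are placeholders): the compute shader declares `layout(local_size_variable) in;`, and the local group size is chosen at dispatch time instead of being baked into the shader.

```c
/* Work group counts first, then the local group size, picked right now. */
glDispatchComputeGroupSizeARB(numGroupsX, numGroupsY, 1,
                              groupSizeX, groupSizeY, 1);
```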

Hey, That’s Actually Useful Now Award

ARB_buffer_storage

This extension finally gives you a reason to use glFlushMappedBufferRange, one of the more useless functions from ARB_map_buffer_range. With persistent mapping, you can keep a buffer mapped indefinitely and simply synchronize yourself with the server by flushing the ranges you’ve written.
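The “map once, flush ranges” pattern looks roughly like this (buffer size, offset, and byte count are invented for illustration):

```c
GLsizeiptr size = 1 << 20;
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferStorage(GL_ARRAY_BUFFER, size, NULL,
                GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT);

void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                             GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                             GL_MAP_FLUSH_EXPLICIT_BIT);

/* Write data into ptr at some offset, then tell GL about it. */
glFlushMappedBufferRange(GL_ARRAY_BUFFER, offset, bytesWritten);

/* The buffer stays mapped; a fence sync still guards reuse of
   any range the GPU may currently be reading. */
```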

You Were Right Award

Um, me. Admittedly, it’s old, but I still called it: immutable buffer object storage, with better, enforced behavior. I even called the name (which admittedly is derivative and therefore obvious). Though to be fair, the whole “render while mapped” was not something I predicted.

I was going to say that it seemed odd that not specifying GL_DYNAMIC_STORAGE_BIT still allowed you to map the buffer for writing. But I did see a notation that glBufferStorage will fail if you use GL_MAP_WRITE_BIT without also specifying GL_DYNAMIC_STORAGE_BIT. And of course, you can’t map an immutable buffer for writing without GL_MAP_WRITE_BIT.

It doesn’t have all of the bits I would have liked to see. But it has all the bits the IHV’s wanted (they even said so in the “issues” section). So I’ll call that “good enough.”

Oh, and as for the complaints about GL_CLIENT_STORAGE_BIT being a hint… then don’t use it. Remember: the problem with the current buffer object setup isn’t merely that they’re hints (that contributes, but that alone isn’t the full problem). It’s that the hints don’t really communicate what you are going to do with the buffer. Buffer Storage lets you do that.

You describe exactly how you intend to use it. The bits answer the important questions up-front: will I write to it, will I map it for reading or writing, do I want OpenGL to access it while it’s mapped, etc. And the API enforces every single one of these methods of use.
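A sketch of that enforcement, as I read the extension (buffer assumed bound, size and data invented): if you don’t ask for server-side updates up-front, the API refuses them later.

```c
/* Note: no GL_DYNAMIC_STORAGE_BIT requested. */
glBufferStorage(GL_ARRAY_BUFFER, size, NULL,
                GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT);

/* Allowed: we said we would map it for writing. */
void *p = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                           GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT);

/* Rejected with GL_INVALID_OPERATION: we never promised
   server-side updates, so glBufferSubData is off the table. */
glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);
```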

I have no idea why they even bothered to put GL_CLIENT_STORAGE_BIT there, since that doesn’t describe how you will use the buffer. But as the issue rightly stated, drivers will ultimately just ignore the hint.

So encourage them to do so by ignoring it yourself.