8

Whenever I make semantic or syntax errors in OpenGL, I either get a black screen or the program crashes. I looked up on the internet how to do error handling in OpenGL, and in the documentation I found glGetError(). To my understanding, glGetError() will return one or more error codes if I call it after calling a "normal" OpenGL function, provided I have made some error. I also have to call it in a while loop to retrieve all the errors.

I have multiple problems with this. First, I can't find the line number on which the error occurred; second, it doesn't tell me exactly why the error occurred (not a big problem, though); and lastly, I have to include a while loop after each OpenGL function call.

Here's an example:

unsigned int buffer = 0;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(float), vertices, GL_STATIC_DRAW); 

Now if I write it with glGetError(), it would look something like

unsigned int err = 0;
glGenBuffers(1, &buffer);
while( !(err = glGetError()) ){
    std::cout << err;
}  
glBindBuffer(GL_ARRAY_BUFFER, buffer);
while( !(err = glGetError()) ){
    std::cout << err;
}  
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(float), vertices, GL_STATIC_DRAW); 
while( !(err = glGetError()) ){
    std::cout << err;
}  

Surely this isn't the way to go, right? The source code looks so ugly.

How do I do proper error handling with OpenGL?

My graphics card supports OpenGL 4.4, and I am using C++.

Nicol Bolas
user8277998
  • I partially answered to a similar question recently. See https://computergraphics.stackexchange.com/a/5773/182 – Julien Guertault Dec 06 '17 at 02:45
  • shouldn't that read while ( err = glGetError() )? – johannes_lalala Feb 06 '20 at 13:54
  • It returns GL_NO_ERROR which I assumed to have value 0, when there is no error, that's why I had != there. In any case it should be while((err = glGetError()) != GL_NO_ERROR). – user8277998 Feb 21 '20 at 17:42
  • I like the way The Cherno wrote his error handling code and he goes through it really clearly. https://www.youtube.com/watch?v=FBbPWSOQ0-w&list=PLlrATfBNZ98foTJPJ_Ev03o2oq3-GGOS2&index=10 . I will however be using glDebugMessageCallback once I can figure it out:) – Sam Keightley Jul 10 '21 at 08:43

3 Answers

13

Yes, there is a better way! OpenGL 4.3 and later support the glDebugMessageCallback API, which lets you register a function in your app that GL will call whenever it wants to issue a warning or error. In this function you can do whatever you like, such as setting a breakpoint in the debugger or printing the error to a log file. This way you only need to set up the callback once during initialization, which is much nicer than putting glGetError calls everywhere.
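
Here's a minimal sketch of hooking it up, assuming an OpenGL 4.3+ context and an already-initialized loader such as glad or GLEW (the callback name onGLDebugMessage is just an illustration):

#include <iostream>

// GLAPIENTRY (or APIENTRY, depending on your GL headers) supplies the calling
// convention the driver expects on Windows; it expands to nothing elsewhere.
void GLAPIENTRY onGLDebugMessage(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar* message, const void* userParam)
{
    // Print every message; you could also filter on severity or set a breakpoint here.
    std::cerr << "GL debug: " << message << std::endl;
}

// During initialization, once the (debug) context is current:
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // deliver messages on the thread/call that caused them
glDebugMessageCallback(onGLDebugMessage, nullptr);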

To turn on this functionality, your GL context also needs to be a "debug context", which you request by setting a flag when you create the context. The details depend on which OS / windowing system / framework you're using. A debug context might be slower (on the CPU) than a regular context, but that's the price you pay for more detailed debugging info, kinda like turning optimizations off in the compiler.
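
For example, with GLFW (just one possibility; SDL, Qt, WGL/GLX all have equivalent flags) the request looks roughly like this:

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GLFW_TRUE); // ask for a debug context
GLFWwindow* window = glfwCreateWindow(800, 600, "debug context", nullptr, nullptr);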

See also this wiki page for more information. There's a bunch of other debugging-related functionality. For instance, you can give human-readable names to textures, buffers, framebuffers, etc. using glObjectLabel, and those names will then (I assume) be used in error messages. There are also APIs for turning off certain messages or categories of messages, in case you get spammed by warnings that you don't care about or some such.
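
For example, labeling the buffer from the question might look like this (the label string is arbitrary):

glObjectLabel(GL_BUFFER, buffer, -1, "quad vertex positions"); // -1 means the label is null-terminated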

Nathan Reed
8

While Debug Output is good, and manual glGetError usage is adequate, it's often better to employ a more dedicated tool for finding exactly where OpenGL errors came from.

RenderDoc is probably the most up-to-date tool for this process, but there are quite a few in various states of functionality. These tools can also give you a detailed log of every OpenGL call you've made.

Nicol Bolas
1

If you can't go to OpenGL 4.3 (or your implementation doesn't support glDebugMessageCallback), you can simplify your code in a number of ways.

The first is to move the calls to glGetError() into a function like this:

void checkGLError()
{
    GLenum err;
    // Drain every pending error flag; glGetError() returns GL_NO_ERROR once the queue is empty.
    while ((err = glGetError()) != GL_NO_ERROR) {
        std::cout << err << std::endl;
    }
}

That reduces your code to:

unsigned int buffer = 0;
glGenBuffers(1, &buffer);
checkGLError();
glBindBuffer(GL_ARRAY_BUFFER, buffer);
checkGLError();
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(float), vertices, GL_STATIC_DRAW); 
checkGLError();

However, calling glGetError() frequently can hurt performance, so it's best to conditionalize it so it only gets called in debug builds by doing something like this:

#if DEBUG
#define checkGLError() debugCheckGLError()
#else
#define checkGLError()
#endif

Then name the actual function debugCheckGLError(). It will get called in debug builds but not release builds.
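
If you also want the file and line number the question asked about, one common variation is to wrap each GL call in a macro that passes __FILE__ and __LINE__ through. This is only a sketch; GL_CHECK and this two-argument debugCheckGLError are illustrative names, not part of OpenGL:

#include <iostream>

void debugCheckGLError(const char* file, int line)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR) {
        // Print the error code in hex (e.g. 0x502 for GL_INVALID_OPERATION) plus the call site.
        std::cout << "GL error 0x" << std::hex << err << std::dec
                  << " at " << file << ":" << line << std::endl;
    }
}

#if DEBUG
#define GL_CHECK(call) do { call; debugCheckGLError(__FILE__, __LINE__); } while (0)
#else
#define GL_CHECK(call) call
#endif

Usage then looks like GL_CHECK(glGenBuffers(1, &buffer)); and a release build compiles down to the bare call.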

user1118321
  • I used the debug macro with the other answer. Thanks. – user8277998 Dec 04 '17 at 16:23
  • You can go even further with the macro and do something like this: https://github.com/bkaradzic/bgfx/blob/8d471959eb3cbc16c0e7fdac25efcf842abd2ad1/src/renderer_gl.h#L994 so your code just becomes: GL_CHECK(glGenBuffers(1, &buffer)); and reports everything you need. – Julien Guertault Dec 06 '17 at 02:47
  • I don't understand while( !(err = glGetError()) ). Why the !? Shouldn't it be while( err = glGetError() ) so that it loops and outputs error messages? Otherwise, isn't it an infinite loop that repeatedly displays 0 on the first successful call? – Wyck Mar 04 '20 at 05:49
  • @Wyck Yes, you are correct. I've updated it to make it more explicit by comparing it to the proper error value. – user1118321 Mar 07 '20 at 21:07