Get ready to be underwhelmed. This entire section is dedicated to information that is no longer current or valid. While this might seem worthless (and you're welcome to skip to Part 2), this section contains what I think many OpenGL tutorials lack: perspective.
While the pun is intended, it is important to note that OpenGL has been a living API for 22 years, with changes constantly being proposed and adopted. Unsurprisingly, over the years, failed attempts at future-proofing have left behind bits of unnecessary or redundant code with little to no current use. When code like this is discovered in a software project, it is labeled as deprecated.
Code marked deprecated is usually removed after a few version iterations of the project, and for this reason should be used with caution. In 2008, with the launch of OpenGL version 3.0, portions of the API were officially marked as deprecated. While this should have served as a useful way to encourage developers to practice better API usage, it has instead fragmented the existing resources on OpenGL and made it nearly impossible to learn the correct usage of the API.
Which leads me to this article: an overview of the design of the original OpenGL API – version 1.0. Along the way, I will point out the sections of the API that should never again see the light of day, as well as some that still hold strong in modern GL. Many of the API's quirks stem from these founding concepts, so hopefully this preface will clear up many common points of confusion for those who would normally dive directly into the modern API.
The [Deprecated] Pipeline
A tenet of graphics programming is understanding the rendering pipeline. Conceptually, if something is being drawn to the screen, a pipeline exists in some shape or form. At its core, the pipeline is simply a description of data flow that starts with user (read: developer) input and ends with pixels on the screen.
Data (Input) -> Processing -> Screen
Here is the OpenGL 1.0 pipeline:
As you can see, at the top is the input vertex data, and the output, pixel data, is on the bottom. All the boxes are stages of the pipeline. These stages are configured through the OpenGL API.
Calls that change the pipeline state – glTexImage2D(), for example – are buffered internally, and the hardware driver then consumes the commands asynchronously. Buffering the commands avoids synchronizing the GPU and CPU on every function call.
However, when accessing data from the API, all previously buffered commands must be processed while the CPU waits for the GPU to catch up and return whatever API state you requested. This is especially bad if the GPU was just given a lot of work:
```c
glMatrixMode(GL_MODELVIEW);
glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
...
glBegin(GL_TRIANGLE_FAN); ... glEnd();
glBegin(GL_TRIANGLE_FAN); ... glEnd();

// Transfer the rendered pixels into a texture
char *data = malloc(WIDTH * HEIGHT * 3);
glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, WIDTH, HEIGHT, 0,
             GL_RGB, GL_UNSIGNED_BYTE, data);
```
glReadPixels() will block, waiting on the GPU to get through all of its commands so the API can hand you the current data. This is called a sync point, and it is a really bad thing for real-time graphics applications. Successive versions of OpenGL introduced many mechanisms – including framebuffer objects, memory mapping, and pixel pack/unpack buffers – to reduce the number of sync points necessary in production code. Accessing state (especially glGetError()) is still useful while debugging, however.
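This buffering behavior can be modeled in plain C. The sketch below is not GL code – every name and type in it is invented for illustration – but it shows why a state query has to drain the whole queue before it can answer:

```c
#include <assert.h>

/* Toy model of a GL command stream: the "CPU" appends commands,
   and the "GPU" consumes them lazily. A state query (glReadPixels,
   glGet*, ...) must first wait out every buffered command. */
enum { MAX_CMDS = 64 };

static int queue[MAX_CMDS];
static int pending  = 0;  /* commands buffered, not yet executed */
static int executed = 0;  /* commands the "GPU" has finished */

static void submit(int cmd)
{
    queue[pending++] = cmd;  /* cheap: no synchronization needed */
}

static int query_state(void)
{
    /* Sync point: block until the queue is empty, then answer. */
    while (pending > 0) {
        executed++;
        pending--;
    }
    return executed;
}
```

Every submit() is cheap, but the first query_state() after a burst of work pays for all of it at once – exactly the situation in the glReadPixels() example above.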
Another problem with OpenGL's command buffering is its choke point: driver overhead. In early OpenGL, it was common to specify every vertex's attributes with individual function calls – glVertex(), glColor(), etc. – between glBegin() and glEnd(). While this works, it shoves a lot of commands into the command stream. Drawing every mesh this way clogs the buffer, forcing the CPU to wait for the GPU to finish its work before it can issue more commands.
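For example, drawing a single colored triangle in immediate mode costs a separate call per attribute, per vertex. This fragment assumes a current GL context (created with GLUT or similar):

```c
/* One function call per attribute, per vertex. */
glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f);  glVertex3f(-1.0f, -1.0f, 0.0f);
    glColor3f(0.0f, 1.0f, 0.0f);  glVertex3f( 1.0f, -1.0f, 0.0f);
    glColor3f(0.0f, 0.0f, 1.0f);  glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();
```

Multiply that by thousands of triangles per mesh, every frame, and the overhead adds up quickly.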
Even the first version of GL had a solution for this: display lists. You can see them as an entire stage of the pipeline above. Display lists were essentially compiled lists of commands, stored and executed with a single call. Storing all your meshes as display lists would significantly reduce the amount of data going into the command stream.
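A display list records those per-vertex calls once and replays them with one command; a minimal sketch, again assuming a current GL context:

```c
/* Record the commands once, at load time. */
GLuint mesh = glGenLists(1);
glNewList(mesh, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    /* ... per-vertex glColor()/glVertex() calls ... */
    glEnd();
glEndList();

/* Replay the whole recording each frame with a single call. */
glCallList(mesh);
```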
However, even with this boost, since all the data is still provided sparsely throughout the command stream, there is no way to further accelerate drawing by streaming and caching vertex data. Thus, GL evolved to provide buffer objects as a way of passing predictable, densely-packed data to the GPU.
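For contrast, here is the shape of the buffer-object path that later versions settled on (core since GL 1.5, shown here with the fixed-function vertex-array API); a sketch, with a current GL context assumed:

```c
/* Upload a dense, predictable block of vertex data once... */
GLfloat verts[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

/* ...then draw from it with a handful of calls per frame. */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
glDrawArrays(GL_TRIANGLES, 0, 3);
```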
Textures have been around for as long as OpenGL, so there are a lot of ragged edges in the API. Textures are generated like all other objects in GL: a call to glGenTextures() creates handles to texture objects. These objects can be bound to different texture targets – 1D or 2D, and later 3D or cubemap – and given texture data. In GL 1.0, there is only one texture unit.
Later versions were quick to fix this: multitexturing and glActiveTexture() became core in GL 1.3. Now every texture unit has an independent set of targets to bind to. Because no part of the pipeline was programmable, textures were applied through a simple texture environment, which let a texture alpha-blend with, or multiply its colors into, the underlying color.
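A typical setup looks like the sketch below; pixels, WIDTH, HEIGHT, and detail_tex are assumed to be defined elsewhere, and a current GL context is assumed:

```c
/* Create a handle and bind it to the 2D target. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, WIDTH, HEIGHT, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);

/* Fixed-function texture environment: multiply the texel
   into the underlying color. */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

/* With multitexturing (GL 1.3+), each unit has its own bindings. */
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, detail_tex);
glActiveTexture(GL_TEXTURE0);
```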
In GL 1.0, fragment operations were the only way to achieve special rendering techniques like reflections and fog-of-war. Even today, stencil-testing, depth-testing, alpha-testing, and blending operations are configured with this API. Even with programmable pipelines, many of these operations have to be enabled through the fixed-function glEnable() API. All fixed-function pipeline operations can be enabled and disabled, and many are disabled by default.
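A few of those fragment operations, configured the same way today as in legacy GL (a current context is assumed):

```c
/* Per-fragment tests are off by default and toggled with
   glEnable()/glDisable(), then configured with dedicated calls. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);                 /* keep the nearest fragment */

glEnable(GL_BLEND);                   /* standard alpha blending */
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_EQUAL, 1, 0xFF);     /* draw only where stencil == 1 */

glEnable(GL_ALPHA_TEST);              /* legacy-only: discard fragments */
glAlphaFunc(GL_GREATER, 0.5f);
```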
The desire for more control over fragment operations became one of the big pushes towards programmable shading.
Vertex data in GL 1.0 could only be manipulated by the fixed-function matrix stack. There were multiple stacks, separated into modes, for transforming different data.
glMatrixMode(GL_MODELVIEW) would switch matrix operations to affect the vertex position data, while GL_TEXTURE would affect the vertex's texture coordinates. Finally, GL_PROJECTION allowed the user to configure an orthographic or perspective projection.
Matrices can be saved and restored by pushing and popping them from the stack. This was commonly used for parenting transforms: calling glPushMatrix() followed by glMultMatrixf() would concatenate the child's transform onto the parent's, and a later glPopMatrix() would restore the matrix to the parent's transform.
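Putting the stack operations together, a simple parent/child hierarchy might look like this. The draw*() helpers are hypothetical stand-ins for mesh-drawing code, and a current GL context is assumed:

```c
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 5.0f, 0.0f);          /* parent's transform */
drawParent();

glPushMatrix();                          /* save the parent's matrix */
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);  /* child = parent * rotation */
    drawChild();
glPopMatrix();                           /* back to the parent's transform */

drawSibling();                           /* drawn relative to the parent again */
```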
The other push for programmable shading came from the desire for more control in vertex transformation, leading to vertex shaders.
An Evolving API
I have given a few examples throughout where some short-sighted feature of GL was adapted or expanded into a whole new and useful feature. This is the magic of OpenGL – constantly evolving and extending to stay modern and practical.
The next article will explore the modern OpenGL pipeline and its features. Many of these features facilitate fine-grained control, lower driver overhead, or asynchronous data transfer – issues GL has been fighting since its beginning.
- SongHo.ca - a great reference for legacy OpenGL
- OpenGL 1.0 - the actual specification
- Image credit (top) - glprogramming.com