C++ Rendering Engine I – Abstracting the Render Device

Design Choices

Before we start coding, let’s make some design choices.

Handles or Objects

The first decision to make is whether to use C or C++ for the render device. With C, I would represent each render device object with an integer handle. With C++, I would use object pointers.

In this article, we are choosing C++ and objects. We will not be using many object-oriented features in the render device, so this choice is purely one of style. You can go with straight C just as easily and with the same level of expressiveness.
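To make the difference concrete, here is a rough sketch of the two styles. The names (TextureHandle, Texture2D, CreateTexture2D) are placeholders for illustration, not a final API:

    // C style: opaque integer handles, all operations are free functions.
    #include <cstdint>

    typedef std::uint32_t TextureHandle;

    TextureHandle CreateTexture2D(int width, int height);
    void          DestroyTexture2D(TextureHandle handle);

    // C++ style: objects handed out by the render device as pointers.
    class Texture2D;   // opaque to the caller

    class RenderDevice
    {
    public:
        Texture2D* CreateTexture2D(int width, int height);
        void       DestroyTexture2D(Texture2D* texture);
    };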

Stateful Engines and Leaky Abstractions

Since our render device wraps a stateful API (OpenGL), it can end up being quite a “leaky” abstraction. If we put member functions on our render device objects such as Texture2D, PixelShader, and VertexBuffer, we can end up changing the underlying OpenGL state whenever we update the contents of these objects. This “leaks” the object-oriented abstraction and requires the programmer using the render device to be aware of these pitfalls.

For example, if we added an Update() function to Texture2D that is responsible for updating the texture contents, we would need to bind the underlying OpenGL texture to an active texture slot using glActiveTexture() and glBindTexture(). The programmer calling this function may expect whatever Texture2D they had previously bound to that slot to remain bound, and be surprised when that is not the case.
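Here is a minimal sketch of what such an Update() could look like and where it leaks. The member names and the hard-coded texture unit 0 are assumptions made for illustration:

    // Assumes an OpenGL context and function loader are already set up.
    #include <GL/gl.h>

    class Texture2D
    {
    public:
        // Uploads new pixel data. Side effect: rebinds this texture to
        // texture unit 0, clobbering whatever the caller had bound there.
        void Update(const void* pixels, int width, int height)
        {
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, m_id);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            // The previously bound texture is NOT restored here --
            // the underlying OpenGL state change leaks out.
        }

    private:
        GLuint m_id = 0;
    };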

We have two choices here: 1) whenever we change the global OpenGL state in order to modify an object, save and restore that state to what it was previously, or 2) flush all relevant draw states on every draw call of the render device.
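A sketch of option 1 applied to the same hypothetical Update(): query the caller’s binding with glGetIntegerv(), do our work, then put the binding back, at the cost of an extra driver query.

    void Texture2D::Update(const void* pixels, int width, int height)
    {
        // Option 1: save the caller's binding, do our work, then restore it.
        GLint previous = 0;
        glActiveTexture(GL_TEXTURE0);
        glGetIntegerv(GL_TEXTURE_BINDING_2D, &previous);

        glBindTexture(GL_TEXTURE_2D, m_id);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        // Put things back the way the caller left them.
        glBindTexture(GL_TEXTURE_2D, static_cast<GLuint>(previous));
    }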

Ultimately, we will make this choice on a case-by-case basis resulting in a mixed approach. This can increase the learning curve for programmers, so a good balance must be struck.

Abstraction Translation and Performance

When performing any operation, such as initializing a Texture2D’s pixel format, we will have to translate the pixel format description from our render device’s abstraction to OpenGL’s.

This translation can slightly impact performance if done too frequently inside inner draw loops of your rendering engine.

We prefer to perform this translation only when resources are acquired, and we can arrange for resource acquisition to happen mostly at startup, on major scene transitions, or on rendering context changes.
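As a sketch, a translation helper along these lines can be called once when a texture is created rather than inside a draw loop. The PixelFormat enum and its mapping here are illustrative assumptions:

    // Hypothetical render device pixel formats.
    enum class PixelFormat { RGBA8, RGB8, Depth24Stencil8 };

    // Translate once, at resource acquisition time, not per frame.
    GLenum ToGLInternalFormat(PixelFormat format)
    {
        switch (format)
        {
        case PixelFormat::RGBA8:           return GL_RGBA8;
        case PixelFormat::RGB8:            return GL_RGB8;
        case PixelFormat::Depth24Stencil8: return GL_DEPTH24_STENCIL8;
        }
        return GL_RGBA8;   // unreachable fallback to satisfy the compiler
    }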

Take, for example, the draw commands that use glDrawArrays(). Instead of translating from our own render device enum to OpenGL’s enum for primitive type on every call, we can abstract the draw operation into an object where this translation has already taken place.
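One possible shape for such a command object, with the enum translation done once in the constructor. The class and enum names are illustrative only:

    // Hypothetical render device primitive types.
    enum class PrimitiveType { Triangles, TriangleStrip, Lines };

    class DrawCommand
    {
    public:
        DrawCommand(PrimitiveType type, GLint first, GLsizei count)
            : m_first(first), m_count(count)
        {
            // Translate the render device enum to the OpenGL enum once,
            // when the command is created -- not on every draw.
            switch (type)
            {
            case PrimitiveType::Triangles:     m_glType = GL_TRIANGLES;      break;
            case PrimitiveType::TriangleStrip: m_glType = GL_TRIANGLE_STRIP; break;
            case PrimitiveType::Lines:         m_glType = GL_LINES;          break;
            }
        }

        void Execute() const { glDrawArrays(m_glType, m_first, m_count); }

    private:
        GLenum  m_glType = GL_TRIANGLES;
        GLint   m_first  = 0;
        GLsizei m_count  = 0;
    };

At draw time, executing the command is then a single glDrawArrays() call with no translation work left to do.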

Creating objects that encapsulate commands like this comes with some advantages and disadvantages. This encapsulation lets us produce commands in our rendering system ahead of time. Sorting and processing these commands can occur separately and even in parallel on another thread. Ultimately, this is where a rendering engine ends up if it is to be highly performant.

A disadvantage is the proliferation of objects that can occur. Wrapping glDrawArrays() in an object can seem like overkill when you are programming directly against the render device. We will mitigate this disadvantage by grouping render state changes into objects by functionality and frequency of update.

For example, we can make a single object called DepthStencilState that encapsulates all state related to depth and stencil testing. Additionally, a RasterState object will encapsulate everything related to the rasterizer.
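A sketch of what those groupings might look like. The exact fields are a guess at a reasonable minimum, not a complete list:

    // Everything related to depth and stencil testing, changed as one unit.
    struct DepthStencilState
    {
        bool   depthTestEnabled   = true;
        bool   depthWriteEnabled  = true;
        GLenum depthFunc          = GL_LESS;
        bool   stencilTestEnabled = false;
        GLenum stencilFunc        = GL_ALWAYS;
    };

    // Everything related to the rasterizer.
    struct RasterState
    {
        GLenum cullMode    = GL_BACK;
        GLenum frontFace   = GL_CCW;
        GLenum polygonMode = GL_FILL;
    };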

As an alternative to encapsulating glDrawArrays calls using an object, we can implement several different draw methods for each primitive type (e.g. DrawTriangles(), DrawTriangleStrip(), and so on).
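That alternative could be as simple as the sketch below, with the OpenGL enum baked into each method. Again, these signatures are illustrative only:

    class RenderDevice
    {
    public:
        void DrawTriangles(GLint first, GLsizei count)
        {
            glDrawArrays(GL_TRIANGLES, first, count);
        }

        void DrawTriangleStrip(GLint first, GLsizei count)
        {
            glDrawArrays(GL_TRIANGLE_STRIP, first, count);
        }
        // ...one method per primitive type the engine needs.
    };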

We will mix and match these choices throughout the render device.
