C++ Rendering Engine I – Abstracting the Render Device

Additional Features

We can now clear the color buffer and render triangles in normalized device coordinates using a vertex and pixel shader.

Let’s round out the functionality a bit. On this page, we will implement support for index buffers, shader uniforms, textures, raster states, and depth/stencil states.

The results of everything on this page can be found at this commit: 9661cc0

Index Buffers

Index buffers are straightforward, and quite a bit simpler than vertex buffers. They carry only indices, which come in one of three unsigned widths: 8-bit, 16-bit, or 32-bit.

For context, here is the application skeleton that the index buffer code plugs into.

#include "render_device/platform.h"
 
int main()
{
	platform::InitPlatform();
 
	platform::PLATFORM_WINDOW_REF window =
		platform::CreatePlatformWindow(800, 600, "Triangle");
	if(!window)
	{
		platform::TerminatePlatform();
		return -1;
	}
 
	// ...
 
	while(platform::PollPlatformWindow(window))
	{
		// ...
 
		platform::PresentPlatformWindow(window);
	}
 
	// ...
 
	platform::TerminatePlatform();
 
	return 0;
}
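The index buffer interface itself follows the same resource pattern the render device already uses. Here is a sketch of what the additions might look like; the `IndexType` enum, `IndexBuffer` class, and `CreateIndexBuffer`/`DestroyIndexBuffer`/`SetIndexBuffer` names are my assumptions, not necessarily the repository's actual identifiers:

```cpp
#include <cstddef>

namespace render {

// Supported index widths; all unsigned (names assumed, not from the repo)
enum IndexType
{
	INDEXTYPE_UNSIGNED_BYTE = 0,  // 8-bit indices
	INDEXTYPE_UNSIGNED_SHORT,     // 16-bit indices
	INDEXTYPE_UNSIGNED_INT        // 32-bit indices
};

// Size in bytes of one index of the given type
inline std::size_t IndexTypeSize(IndexType type)
{
	switch(type)
	{
	case INDEXTYPE_UNSIGNED_BYTE:  return 1;
	case INDEXTYPE_UNSIGNED_SHORT: return 2;
	default:                       return 4;
	}
}

// Encapsulates an index buffer, following the same pattern as the
// other render device resources
class IndexBuffer
{
public:

	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~IndexBuffer() {}

protected:

	// protected default constructor to ensure these are never created
	// directly
	IndexBuffer() {}
};

// Plausible additions to RenderDevice (sketch only):
//   virtual IndexBuffer *CreateIndexBuffer(long long size,
//       const void *data = nullptr) = 0;
//   virtual void DestroyIndexBuffer(IndexBuffer *indexBuffer) = 0;
//   virtual void SetIndexBuffer(IndexBuffer *indexBuffer) = 0;

} // end namespace render
```

The create/destroy/set triple mirrors the vertex buffer API, which keeps the abstraction uniform across resource types.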

Shader Uniforms

To render geometry in anything other than normalized device coordinates, we need the ability to bind shader uniform variables.

We will query shader uniforms by name to obtain an interface that lets us bind values of different types, such as int, float, float array, and 4×4 float matrix.

For context, here is the shader and pipeline interface that the uniform support extends.

namespace render {
 
// Encapsulates a vertex shader
class VertexShader
{
public:
 
	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~VertexShader() {}
 
protected:
 
	// protected default constructor to ensure these are never created
	// directly
	VertexShader() {}
};
 
// Encapsulates a pixel shader
class PixelShader
{
public:
 
	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~PixelShader() {}
 
protected:
 
	// protected default constructor to ensure these are never created
	// directly
	PixelShader() {}
};
 
// Encapsulates a shader pipeline
class Pipeline
{
public:
 
	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~Pipeline() {}
 
protected:
 
	// protected default constructor to ensure these are never created
	// directly
	Pipeline() {}
};
 
// Encapsulates the render device API.
class RenderDevice
{
public:
 
	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~RenderDevice() {}
 
	// Create a vertex shader from the supplied code
	// code is assumed to be GLSL for now
	virtual VertexShader *CreateVertexShader(const char *code) = 0;
 
	// Destroy a vertex shader
	virtual void DestroyVertexShader(VertexShader *vertexShader) = 0;
 
	// Create a pixel shader from the supplied code
	// code is assumed to be GLSL for now
	virtual PixelShader *CreatePixelShader(const char *code) = 0;
 
	// Destroy a pixel shader
	virtual void DestroyPixelShader(PixelShader *pixelShader) = 0;
 
	// Create a linked shader pipeline given a vertex and pixel shader
	virtual Pipeline *CreatePipeline(VertexShader *vertexShader,
		PixelShader *pixelShader) = 0;
 
	// Destroy a shader pipeline
	virtual void DestroyPipeline(Pipeline *pipeline) = 0;
 
	// Set a shader pipeline as active for subsequent draw commands
	virtual void SetPipeline(Pipeline *pipeline) = 0;
};
 
// Creates a RenderDevice
RenderDevice *CreateRenderDevice();
 
// Destroys a RenderDevice
void DestroyRenderDevice(RenderDevice *renderDevice);
 
} // end namespace render
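A by-name uniform interface of the kind described above could look like the sketch below. The `PipelineParam` class, its `SetAs*` methods, and the `GetParam` addition are my assumptions about the shape of the interface, not the repository's actual code; the small `ExampleFloatParam` subclass exists only to illustrate the contract:

```cpp
namespace render {

// Handle to a single named shader uniform, looked up by name from a
// pipeline; lets the caller bind values of different types
class PipelineParam
{
public:

	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~PipelineParam() {}

	virtual void SetAsInt(int value) = 0;
	virtual void SetAsFloat(float value) = 0;
	virtual void SetAsMat4(const float *value) = 0; // 16 floats, column-major
	virtual void SetAsFloatArray(const float *values, int count) = 0;

protected:

	// protected default constructor to ensure these are never created
	// directly
	PipelineParam() {}
};

// Plausible addition to Pipeline (sketch only):
//   virtual PipelineParam *GetParam(const char *name) = 0;

// Minimal concrete subclass used here only to illustrate the contract
class ExampleFloatParam : public PipelineParam
{
public:

	ExampleFloatParam() : value(0.0f) {}

	void SetAsInt(int v) override { value = static_cast<float>(v); }
	void SetAsFloat(float v) override { value = v; }
	void SetAsMat4(const float *) override {}
	void SetAsFloatArray(const float *values, int count) override
	{
		if(count > 0) value = values[0];
	}

	float value; // last bound scalar, for demonstration
};

} // end namespace render
```

Returning a param handle from a name lookup lets the application cache the handle once and rebind values cheaply every frame, instead of paying for a string lookup per bind.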

Textures

For textures, I am going to avoid dealing with image file formats in this article so we can stay focused on the render device abstraction. Our test image will instead be embedded directly in the code as an array.

Also, we are going to assume all incoming textures are three-component RGB values with 8 bits per component, packed into 32-bit values: red occupies the least significant byte, and the most significant byte is unused. Later, we will look at other texture formats.

To actually use a texture, we must do two things: 1) bind an integer to a sampler2D shader uniform parameter, and 2) bind the texture to a texture slot using the same integer.

For context, here is how the application creates and binds the shader pipeline; the 2D texture support builds on the same conventions.

#include "render_device/platform.h"
 
#include "render_device/render_device.h"
 
const char *vertexShaderSource = "#version 410 core\n"
	"layout (location = 0) in vec3 aPos;\n"
	"void main()\n"
	"{\n"
	"   gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);\n"
	"}\0";
const char *pixelShaderSource = "#version 410 core\n"
	"out vec4 FragColor;\n"
	"void main()\n"
	"{\n"
	"   FragColor = vec4(1.0f, 0.5f, 0.2f, 1.0f);\n"
	"}\n\0";
 
int main()
{
	platform::InitPlatform();
 
	platform::PLATFORM_WINDOW_REF window =
		platform::CreatePlatformWindow(800, 600, "Triangle");
	if(!window)
	{
		platform::TerminatePlatform();
		return -1;
	}
 
	render::RenderDevice *renderDevice = render::CreateRenderDevice();
 
	render::VertexShader *vertexShader =
		renderDevice->CreateVertexShader(vertexShaderSource);
 
	render::PixelShader *pixelShader =
		renderDevice->CreatePixelShader(pixelShaderSource);
 
	render::Pipeline *pipeline =
		renderDevice->CreatePipeline(vertexShader, pixelShader);
 
	renderDevice->DestroyVertexShader(vertexShader);
	renderDevice->DestroyPixelShader(pixelShader);
 
	// ...
 
	while(platform::PollPlatformWindow(window))
	{
		renderDevice->SetPipeline(pipeline);
 
		// ...
 
		platform::PresentPlatformWindow(window);
	}
 
	// ...
 
	renderDevice->DestroyPipeline(pipeline);
 
	platform::TerminatePlatform();
 
	return 0;
}
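A texture interface following the article's pattern, plus a helper that packs a color into the 32-bit layout described above (red in the least significant byte, most significant byte unused), might look like this sketch. The `Texture2D`, `CreateTexture2D`, `SetTexture2D`, and `PackRGB` names are illustrative assumptions:

```cpp
#include <cstdint>

namespace render {

// Encapsulates a 2D texture, following the same pattern as the other
// render device resources
class Texture2D
{
public:

	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~Texture2D() {}

protected:

	// protected default constructor to ensure these are never created
	// directly
	Texture2D() {}
};

// Pack an 8-bit-per-component RGB color into a 32-bit texel with red in
// the least significant byte and the most significant byte unused
inline std::uint32_t PackRGB(std::uint8_t r, std::uint8_t g, std::uint8_t b)
{
	return static_cast<std::uint32_t>(r) |
		(static_cast<std::uint32_t>(g) << 8) |
		(static_cast<std::uint32_t>(b) << 16);
}

// Plausible additions to RenderDevice (sketch only):
//   virtual Texture2D *CreateTexture2D(int width, int height,
//       const void *data = nullptr) = 0;
//   virtual void DestroyTexture2D(Texture2D *texture2D) = 0;
//   // bind to the slot whose integer was set on the sampler2D uniform
//   virtual void SetTexture2D(unsigned int slot, Texture2D *texture2D) = 0;

} // end namespace render
```

The slot number passed to the set call must match the integer bound to the `sampler2D` uniform, which is exactly the two-step binding the article describes.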

Raster States

For raster states, we will group all related states into a single, encapsulating object. We do this because all of these states are likely to change at the same time. Also, changing any one of these states is likely to have the same impact on the graphics pipeline as changing all of them at once.

Note that the default arguments to CreateRasterState are the actual default values for a newly instantiated RenderDevice; they double as documentation of the default state values.

For context, here is the vertex buffer and vertex array interface; the raster state interface follows the same conventions.

// ...
 
// Encapsulates a vertex buffer
class VertexBuffer
{
public:
 
	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~VertexBuffer() {}
 
protected:
 
	// protected default constructor to ensure these are never created
	// directly
	VertexBuffer() {}
};
 
// Encapsulates a vertex buffer semantic description
class VertexDescription
{
public:
 
	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~VertexDescription() {}
 
protected:
 
	// protected default constructor to ensure these are never created
	// directly
	VertexDescription() {}
};
 
// Encapsulates a collection of vertex buffers and their semantic descriptions
class VertexArray
{
public:
 
	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~VertexArray() {}
 
protected:
 
	// protected default constructor to ensure these are never created
	// directly
	VertexArray() {}
};
 
// Describes a vertex element's type
enum VertexElementType
{
	VERTEXELEMENTTYPE_BYTE = 0,
	VERTEXELEMENTTYPE_SHORT,
	VERTEXELEMENTTYPE_INT,
 
	VERTEXELEMENTTYPE_UNSIGNED_BYTE,	
	VERTEXELEMENTTYPE_UNSIGNED_SHORT,
	VERTEXELEMENTTYPE_UNSIGNED_INT,
 
	VERTEXELEMENTTYPE_BYTE_NORMALIZE,
	VERTEXELEMENTTYPE_SHORT_NORMALIZE,
	VERTEXELEMENTTYPE_INT_NORMALIZE,
 
	VERTEXELEMENTTYPE_UNSIGNED_BYTE_NORMALIZE,	
	VERTEXELEMENTTYPE_UNSIGNED_SHORT_NORMALIZE,
	VERTEXELEMENTTYPE_UNSIGNED_INT_NORMALIZE,
 
	VERTEXELEMENTTYPE_HALF_FLOAT,
	VERTEXELEMENTTYPE_FLOAT,
	VERTEXELEMENTTYPE_DOUBLE
};
 
// Describes a vertex element within a vertex buffer
struct VertexElement
{
	// location binding for vertex element
	unsigned int index;
 
	// type of vertex element
	VertexElementType type;
 
	// number of components
	int size;
 
	// number of bytes between successive occurrences of this element
	// (leave zero to assume tight packing, i.e. size times the size of
	// the element type)
	int stride;
 
	// offset where first occurrence of this vertex element resides in
	// the buffer
	long long offset;
};
 
// Encapsulates the render device API.
class RenderDevice
{
public:
 
	// ...
 
	// Create a vertex buffer
	virtual VertexBuffer *CreateVertexBuffer(long long size,
		void *data = nullptr) = 0;
 
	// Destroy a vertex buffer
	virtual void DestroyVertexBuffer(VertexBuffer *vertexBuffer) = 0;
 
	// Create a vertex description given an array of VertexElement structures
	virtual VertexDescription *CreateVertexDescription(unsigned int numVertexElements,
		const VertexElement *vertexElements) = 0;
 
	// Destroy a vertex description
	virtual void DestroyVertexDescription(VertexDescription *vertexDescription) = 0;
 
	// Create a vertex array given an array of vertex buffers and associated vertex
	// descriptions; the arrays must be the same size.
	virtual VertexArray *CreateVertexArray(unsigned int numVertexBuffers,
		VertexBuffer **vertexBuffers, VertexDescription **vertexDescriptions) = 0;
 
	// Destroy a vertex array
	virtual void DestroyVertexArray(VertexArray *vertexArray) = 0;
 
	// Set a vertex array as active for subsequent draw commands
	virtual void SetVertexArray(VertexArray *vertexArray) = 0;
};
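Grouped raster state, with defaults spelled out in the create call as the article describes, could be sketched as follows. The enum values, the `RasterState` class, and the defaults shown (chosen to mirror conventional OpenGL state: back-face culling of counter-clockwise-front geometry, filled polygons) are my assumptions, not the repository's actual values:

```cpp
namespace render {

// Front-face winding order
enum Winding
{
	WINDING_CW = 0,
	WINDING_CCW
};

// Which face(s) to cull
enum Face
{
	FACE_FRONT = 0,
	FACE_BACK,
	FACE_FRONT_AND_BACK
};

// Polygon fill mode
enum RasterMode
{
	RASTERMODE_POINT = 0,
	RASTERMODE_LINE,
	RASTERMODE_FILL
};

// Encapsulates the grouped rasterizer states
class RasterState
{
public:

	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~RasterState() {}

protected:

	// protected default constructor to ensure these are never created
	// directly
	RasterState() {}
};

// Plain description struct illustrating the assumed defaults
struct RasterStateDesc
{
	bool cullEnabled = true;
	Winding frontFace = WINDING_CCW;
	Face cullFace = FACE_BACK;
	RasterMode rasterMode = RASTERMODE_FILL;
};

// Plausible additions to RenderDevice (sketch only):
//   virtual RasterState *CreateRasterState(bool cullEnabled = true,
//       Winding frontFace = WINDING_CCW, Face cullFace = FACE_BACK,
//       RasterMode rasterMode = RASTERMODE_FILL) = 0;
//   virtual void DestroyRasterState(RasterState *rasterState) = 0;
//   virtual void SetRasterState(RasterState *rasterState) = 0;

} // end namespace render
```

Baking the defaults into the create signature means a caller who writes `CreateRasterState()` gets exactly the device's initial state, which is the self-documenting property the article calls out.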

Depth/Stencil States

Again, we group all depth and stencil states since they are closely related and likely to change with the same frequency and performance impact.

As with raster states, the default arguments for CreateDepthStencilState serve as documentation of the default state values.

For context, here is how the application creates, binds, and destroys its vertex data.

// ...
 
int main()
{
	// ...
 
	float vertices[] = {
		-0.5f, -0.5f, 0.0f, // left  
		 0.5f, -0.5f, 0.0f, // right 
		 0.0f,  0.5f, 0.0f  // top   
	}; 
 
	render::VertexBuffer *vertexBuffer = renderDevice->CreateVertexBuffer(sizeof(vertices), vertices);
 
	render::VertexElement vertexElement = { 0, render::VERTEXELEMENTTYPE_FLOAT, 3, 0, 0, };
	render::VertexDescription *vertexDescription = renderDevice->CreateVertexDescription(1, &vertexElement);
 
	render::VertexArray *vertexArray = renderDevice->CreateVertexArray(1, &vertexBuffer, &vertexDescription);
 
	// ...
 
	while(platform::PollPlatformWindow(window))
	{
		// ...
 
		renderDevice->SetVertexArray(vertexArray);
 
		// ...
 
		platform::PresentPlatformWindow(window);
	}
 
	// ...
 
	renderDevice->DestroyVertexArray(vertexArray);
	renderDevice->DestroyVertexDescription(vertexDescription);
	renderDevice->DestroyVertexBuffer(vertexBuffer);
 
	// ...
 
	platform::TerminatePlatform();
 
	return 0;
}
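A grouped depth/stencil state in the same style could be sketched as below. The `Compare` enum, the `DepthStencilState` class, and the defaults (depth testing on with a less-than compare, stencil off, mirroring a typical 3D renderer setup) are assumptions for illustration, not the repository's actual values:

```cpp
namespace render {

// Comparison functions shared by the depth and stencil tests
enum Compare
{
	COMPARE_NEVER = 0,
	COMPARE_LESS,
	COMPARE_EQUAL,
	COMPARE_LEQUAL,
	COMPARE_GREATER,
	COMPARE_NOTEQUAL,
	COMPARE_GEQUAL,
	COMPARE_ALWAYS
};

// Encapsulates the grouped depth and stencil states
class DepthStencilState
{
public:

	// virtual destructor to ensure subclasses have a virtual destructor
	virtual ~DepthStencilState() {}

protected:

	// protected default constructor to ensure these are never created
	// directly
	DepthStencilState() {}
};

// Plain description struct illustrating the assumed defaults
struct DepthStencilStateDesc
{
	bool depthEnabled = true;
	bool depthWriteEnabled = true;
	Compare depthCompare = COMPARE_LESS;
	bool stencilEnabled = false;
	Compare stencilCompare = COMPARE_ALWAYS;
};

// Plausible additions to RenderDevice (sketch only):
//   virtual DepthStencilState *CreateDepthStencilState(
//       bool depthEnabled = true, bool depthWriteEnabled = true,
//       Compare depthCompare = COMPARE_LESS,
//       bool stencilEnabled = false,
//       Compare stencilCompare = COMPARE_ALWAYS) = 0;
//   virtual void DestroyDepthStencilState(DepthStencilState *state) = 0;
//   virtual void SetDepthStencilState(DepthStencilState *state) = 0;

} // end namespace render
```

Sharing one `Compare` enum between the depth and stencil tests keeps the two halves of the grouped state consistent.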

Cube Sample

There exists a sample in the GitHub project, called Cube, that demonstrates a textured cube drawn using some of the functionality from this page. All of the code thus far is available at this commit: 9661cc0.

You can rotate the cube with the left mouse button, and zoom the camera in and out with the horizontal scroll wheel.

Following is the relevant code.

// ...
 
class RenderDevice
{
public:
 
	// ...
 
	// Clear the default render target's color buffer to the specified RGBA
	// values
	virtual void ClearColor(float red, float green, float blue, float alpha) = 0;
 
	// Draw a collection of triangles using the currently active shader pipeline
	// and vertex array data
	virtual void DrawTriangles(int offset, int count) = 0;
};
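As a sanity check on the DrawTriangles parameters the cube sample would use: a cube has 6 faces, each drawn as 2 triangles of 3 vertices, so a non-indexed draw covers 36 vertices. The constant names below are illustrative only:

```cpp
// A cube drawn as triangles: 6 faces, 2 triangles per face,
// 3 vertices per triangle
constexpr int kCubeFaces = 6;
constexpr int kTrianglesPerFace = 2;
constexpr int kVerticesPerTriangle = 3;
constexpr int kCubeVertexCount =
	kCubeFaces * kTrianglesPerFace * kVerticesPerTriangle; // 36

// Hypothetical usage inside the render loop:
//   renderDevice->DrawTriangles(0, kCubeVertexCount);
```

With an index buffer, the vertex buffer could instead hold just the cube's 8 unique corners (or 24 vertices when per-face normals and texture coordinates are needed), with the 36-entry index buffer doing the expansion.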

Next Steps

At this point, we have stood up a render device abstraction that we can build upon. Of course, we are missing blend states, uniform buffers, offscreen render targets, and other shaders (e.g. tessellation and geometry). I would implement those features in roughly that order. Also, our buffer data so far has always been static. We need to add support for dynamic updates.

If you want to be notified when these features get added to my GitHub project, please star it. These additions will be made even before I write more about it in this article. To be continued…

6 thoughts on “C++ Rendering Engine I – Abstracting the Render Device”

        1. Andy Post author

          It certainly has been a while.

          I am currently working on a Vulkan engine (one at work and one at home). Soon, I will write about the choices I am making to support both OpenGL and Vulkan abstractions without sacrificing performance or expressiveness with Vulkan.

  1. Mathew

    Thank you so much; words can’t explain how great it is to finally find something on this topic that is relevant and makes sense. I haven’t yet gotten to implementing the shader uniforms or the texture part of the abstraction, but with the knowledge I’ve gained so far, I now understand that the possibilities are truly endless.

    I honestly think I’ve never truly grasped abstraction until today.

    I now understand that in order to create an API agnostic rendering engine, things need to be broken down, but not to the point where you end up defining methods that are specific to each API.

    When I first saw that you created a “VertexShader” and a “PixelShader” class, I was so confused, thinking to myself, “But wait a minute… those two things could just be grouped together as one ‘Shader’ class,” and then as soon as I started to implement the design myself… something clicked inside of me! Those two classes are of course better off separate, because we as programmers are horrible at seeing the future.

    Once again, I am thrilled to have found these articles.

    Thank you!

    1. Andy Post author

      Thank you so much for the encouragement. A year later, it really is about time to do the next article in the series.

      My recent work is in task-based multithreading. Perhaps it is time for a particle system post.

