On renderer API abstraction

Jack Claw was in development between 2006-2008 and was released in the Humble Frozenbyte Bundle for the community to play and build upon.
alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

On renderer API abstraction

Postby alt_turo » Fri Jun 03, 2011 1:34 pm

Currently there are two separate renderers, storm3d and stormgl. This is bad because there is significant code duplication and there are some bugs in the OpenGL renderer. On the other hand there are slightly more comments in stormgl.

So this mess should be refactored. The sanest way is like this:

There's some kind of renderer abstraction which abstracts the low-level graphics API, be it OpenGL, Direct3D or something else. This level has things like textures, shaders, framebuffers and index/vertex buffers. The game logic has no idea about this abstraction.

There's a Storm3D module which the game talks to and which deals with the renderer abstraction. This level has things like models, materials and lights.

In this thread we discuss what this renderer abstraction should look like.
Turo Lamminen
Alternative Games

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Fri Jun 03, 2011 1:59 pm

First the renderer object itself.

Direct3D requires there to be a device object. All graphics calls go through this object. On OpenGL there are just global functions but there must be an OpenGL rendering context.

There are several ways we can abstract this:

1. Have a Renderer object which looks a lot like a Direct3D device object. Storm3D creates it and keeps it around. All graphics calls go through this object.

2. Have a global object. Storm3D creates it and stores it in a global pointer. New texture/shader/buffer objects can then be created with their own constructors and access the global object when necessary. The drawback is that you can accidentally try to create an object before the renderer has been initialized. This could be implemented with a Singleton if necessary but it's still essentially the same thing.

3. Have a global object through which all other graphics objects must be created. A hybrid of 1 and 2 with the disadvantages of both and the advantages of neither.
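
To make option 1 concrete, here's a minimal hypothetical sketch of what such a Renderer object could look like. None of these are existing Storm3D types:

Code:

#include <cstddef>
#include <memory>

class Texture;
class Shader;
class VertexBuffer;

class Renderer {
public:
    virtual ~Renderer() {}

    // All resource creation goes through the renderer, so nothing
    // can be created before Storm3D has initialized one.
    virtual std::shared_ptr<Texture> createTexture(int width, int height) = 0;
    virtual std::shared_ptr<Shader> createShader(const char *source) = 0;
    virtual std::shared_ptr<VertexBuffer> createVertexBuffer(std::size_t bytes) = 0;

    // Drawing goes through the renderer too.
    virtual void draw(VertexBuffer &buffer, Shader &shader) = 0;
};

// One concrete implementation per API.
class RendererGL : public Renderer { /* wraps the OpenGL context */ };
class RendererD3D : public Renderer { /* wraps the Direct3D device */ };

The nice property is that you can't create a resource without holding a Renderer, so the "created before the renderer is initialized" problem of option 2 can't happen.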
Turo Lamminen
Alternative Games

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Fri Jun 03, 2011 2:12 pm

State management.

Things like depth and stencil write/test modes, alpha blending modes, culling modes etc.

These are almost exactly alike on OpenGL and Direct3D 9. You call a function to set a state variable and that's it. There are slight differences in handling culling mode/triangle winding. On OpenGL alpha-to-coverage requires an extension, on Direct3D it requires horrible hacks.

On Direct3D 10/11 things are very different. You first need to create a state block. There are different kinds of state blocks, like blend state, depth-stencil state and rasterizer state. When you want to change state you activate one of the state blocks you created previously. Some state has also been removed, like alpha testing: if alpha testing needs to be done you have to do it yourself in the shader.

The current code has been written for the Direct3D 9 model. If we want Direct3D 10/11 we need to either:

1. Have the renderer object manage state blocks and present a Direct3D 9 -like interface to Storm. This might be slow. There's no sane way to handle alpha testing so it must be moved to the shaders.

2. Refactor the code to use state blocks. On Direct3D 9 and OpenGL we would emulate these by manually calling the proper state functions. This is a lot easier than trying to juggle state blocks behind Storm3D's back but requires massive refactoring of Storm3D.

In either case alpha testing should be moved to the shaders. But first we need to rewrite the shaders to avoid (even more) combinatorial explosion.
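
To make option 2 a bit more concrete, here's a hypothetical sketch (none of this is existing code): Storm3D fills in small state structs up front, and on Direct3D 9/OpenGL the backend applies them by touching only the individual states that actually changed:

Code:

// D3D10-style state blocks, emulated on the older APIs by diffing
// against the currently active state.
struct BlendState {
    bool enable;
    int srcFactor;   // e.g. GL_SRC_ALPHA / D3DBLEND_SRCALPHA
    int dstFactor;   // e.g. GL_ONE_MINUS_SRC_ALPHA / D3DBLEND_INVSRCALPHA
};

class StateManager {
public:
    // On Direct3D 10/11 this would just activate a pre-created state
    // object. On Direct3D 9/OpenGL we set only the states that changed.
    void setBlendState(const BlendState &s) {
        if (s.enable != current.enable) {
            // glEnable/glDisable(GL_BLEND), or
            // SetRenderState(D3DRS_ALPHABLENDENABLE, ...)
        }
        if (s.srcFactor != current.srcFactor || s.dstFactor != current.dstFactor) {
            // glBlendFunc(...), or
            // SetRenderState(D3DRS_SRCBLEND / D3DRS_DESTBLEND, ...)
        }
        current = s;
    }

private:
    BlendState current = {};
};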
Turo Lamminen
Alternative Games

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Fri Jun 03, 2011 2:28 pm

Shader management.

There's a separate thread for deciding shader language. The following considers the code in the renderer.

HLSL, Cg and OpenGL asm shaders can be used by setting each shader separately. Vertex and fragment shaders can be freely mixed and matched with each other and even, with some limitations, with the fixed-function pipeline.

GLSL shaders are different. The individual shaders are linked together into a program object and when that object is activated it sets all the shaders at once. This has the advantage that the driver can sometimes optimize the combination more than with separate shaders. The disadvantage is that you need to think about your shaders in a very different way, and it might lead to some combinatorial explosion.

There's a rather new extension which allows free mix-and-match of GLSL shaders. However it wasn't quite thought through: http://www.g-truc.net/post-0348.html . It's also an extension so it's not guaranteed to be there.

It's also possible to present an abstraction to Storm3D which makes it look like we have separate shaders and do the linking inside the renderer object. However this might lead to pauses if some shader needs to be linked in the middle of the action. That could be avoided by prelinking the necessary shader combinations, but then we would have to figure out what those are.
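
That prelinking approach could be as simple as a program cache keyed on the shader pair, so each combination is linked at most once; prelinking would then just mean walking the known combinations at load time and asking the cache for each one. A hypothetical sketch:

Code:

// Storm3D sets vertex and fragment shaders separately; the GLSL
// backend looks up (or links) the combined program object in a cache.
#include <map>
#include <utility>

class ProgramCache {
public:
    unsigned get(unsigned vertexShader, unsigned fragmentShader) {
        std::pair<unsigned, unsigned> key(vertexShader, fragmentShader);
        std::map<std::pair<unsigned, unsigned>, unsigned>::iterator it = cache.find(key);
        if (it != cache.end())
            return it->second;

        unsigned program = linkProgram(vertexShader, fragmentShader);
        cache[key] = program;
        return program;
    }

private:
    unsigned linkProgram(unsigned vs, unsigned fs) {
        // glCreateProgram + glAttachShader + glLinkProgram go here;
        // returns the program object name.
        (void)vs; (void)fs;
        return 0;
    }

    std::map<std::pair<unsigned, unsigned>, unsigned> cache;
};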
Turo Lamminen
Alternative Games

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Fri Jun 03, 2011 2:55 pm

Shader parameters.

I'm writing from memory here and haven't entirely thought this thing through. Correct me if I'm wrong.

Direct3D 9 has global state for vertex shaders and pixel shaders separately. With asm shaders you need to know the register index. With HLSL you first query the shader for a parameter's index into these tables. When you update, you should update the whole thing at once. Matrices are not represented as matrices but as three or four contiguous vectors. Can't access fixed function state.

OpenGL asm has both global and shader local state for vertex and pixel shaders separately. Almost like above except update of multiple parameters at once requires a separate extension. Also possible to access fixed function state. In the current code this is used with some matrices.

OpenGL GLSL has only shader object specific parameters, no globals. Parameters are shared between vertex and pixel shaders. Vectors and matrices are updated with different functions. Not possible to change multiple parameters with single call.

Direct3D 10/11 does not have global shader state. Every shader has "parameter blocks": you create a buffer with the values you want and then attach it to the shader. One buffer can be attached to multiple shaders. This allows you to easily have things like "per-frame state" and "per-object state" and only update what needs to be updated.

On OpenGL this is possible if you have ARB_uniform_buffer_object or OpenGL 3.1. The older EXT_bindable_uniform is not enough. GLSL is also required; asm and Cg do not support this.

The current code is a big mess. Parameters are referred to by their index and not by name, since they don't have names. Parameters are also not categorized by their update frequency.
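
Something like the following hypothetical interface would address both complaints: parameters get names, and they are grouped by update frequency, so a D3D10/uniform-buffer backend can map each group to one buffer while a D3D9/asm backend resolves the names to register indices at shader load time:

Code:

#include <string>

// Grouping by update frequency so backends only upload what changed.
enum UpdateFrequency {
    PerFrame,    // view/projection matrices, time, fog...
    PerMaterial, // colors, texture scales...
    PerObject    // world matrix, per-instance data...
};

class ShaderParameters {
public:
    virtual ~ShaderParameters() {}

    virtual void setFloat4(UpdateFrequency group, const std::string &name,
                           const float value[4]) = 0;
    virtual void setMatrix(UpdateFrequency group, const std::string &name,
                           const float matrix[16]) = 0;

    // Upload a whole group at once: one buffer update on D3D10/11 or
    // GL with uniform buffers, one constant range update on D3D9.
    virtual void commit(UpdateFrequency group) = 0;
};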
Turo Lamminen
Alternative Games

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Fri Jun 03, 2011 3:08 pm

Vertex buffers.

On Direct3D 9 shader inputs are bound by semantic. The name of the parameter is irrelevant. Vertex buffers have a format attached to them which tells what parameters this buffer has.

On OpenGL shader inputs are bound by name; there are no semantics, only generic shader attributes. On Nvidia these alias the fixed-function attributes; on ATI they don't. Vertex buffers have no attached format: the user is responsible for binding the buffer and then setting the attribute pointers to the correct addresses inside it.

On Direct3D 10/11 there is a separate input layout state. It's created from a shader input description and a vertex buffer layout description, so we need to know which shaders and which vertex buffers will be used together when the input layout is created.

Our code mostly has vertex buffers. There is some description of their format somewhere in the code. Some places use D3D renderUp (drawing from user-memory pointers, DrawPrimitiveUP style), which has been removed from D3D10. Some OpenGL code uses raw gl*Pointer calls.
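
One way to cover all three APIs would be a single vertex format description that carries both a semantic (for the D3D backends) and a name (for GLSL), which each backend then translates into its own terms: a D3D9 vertex declaration, a D3D10/11 input layout, or glVertexAttribPointer calls. A hypothetical sketch:

Code:

#include <string>
#include <vector>

enum Semantic { Position, Normal, TexCoord0, Color };

struct VertexElement {
    Semantic semantic;    // used by the D3D backends
    std::string name;     // used by the GLSL backend ("position" etc.)
    int components;       // 1-4 floats
    int offset;           // byte offset inside one vertex
};

struct VertexFormat {
    std::vector<VertexElement> elements;
    int stride;           // size of one vertex in bytes
};

A vertex buffer would carry its VertexFormat, and the D3D10/11 backend could cache input layouts keyed on (format, vertex shader) pairs, since it needs both to create one.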
Turo Lamminen
Alternative Games

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Fri Jun 03, 2011 3:35 pm

On framebuffers.

Direct3D lets you create textures and pretty freely bind them as render targets as long as the format is compatible. The targets can be different sizes as long as the depth-stencil surface is at least as large. You can mix user-created and system-created targets with each other. Having a null color target requires hacks, and directly accessing the depth target as a texture requires hacks too.

OpenGL requires a separate frame buffer object (FBO). The FBO is bound and then textures are bound to its attachment points. You can't mix user-created and system-created targets with each other. Having a null color target mostly works. Direct access to the depth target works with EXT_gpu_shader4, otherwise not. With the EXT framebuffer extension all targets must be the same size. There are no guaranteed supported formats, so finding a working format can be somewhat hard. Blits are a separate extension. The ARB extension fixes all of these problems but might not be supported on older GPUs. Even then it's still pretty easy to get an incomplete framebuffer error.

Current interface:
You can (attempt to) set any texture as a render target. There are minimal error checking facilities; in case of incompatibility things just don't render correctly. The Direct3D renderer renders to the front buffer and then blits to a texture. It has better AA support, but that is broken on the latest Nvidia drivers.

Proposed interface:
A framebuffer object which is created through the main renderer object. This object creates the textures bound to it. If you want to share a target with another FBO there's a separate function for that. You can set the framebuffer as a render target; you can't bind arbitrary textures as render targets. This eliminates the incomplete framebuffer error: if the framebuffer was successfully created then it's valid.
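
A hypothetical sketch of that interface, ignoring antialiasing for a moment:

Code:

#include <memory>

class Texture;

class Framebuffer {
public:
    virtual ~Framebuffer() {}

    // The framebuffer created this texture itself; no ad hoc binding.
    virtual std::shared_ptr<Texture> getColorTexture() = 0;

    // Sharing a target with another framebuffer is explicit.
    virtual void shareDepthTarget(Framebuffer &other) = 0;
};

// On the main renderer object (extending the earlier Renderer sketch):
class Renderer {
public:
    virtual ~Renderer() {}

    // Returns null on failure; a non-null framebuffer is known complete.
    virtual std::shared_ptr<Framebuffer>
    createFramebuffer(int width, int height, bool withDepth) = 0;

    // Takes a whole framebuffer, never an arbitrary texture.
    virtual void setRenderTarget(Framebuffer *target) = 0; // null = window
};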

What about antialiasing? Behind the scenes there is either a renderbuffer with antialiased storage or, on newer hardware, an antialiased texture. Neither can be sampled directly as a conventional texture; a separate AA resolve blit to a normal texture is needed. An antialiased texture allows an explicit resolve in the shader, but the current engine has no need for that.

Should it be possible to get a texture from a framebuffer? Probably yes, so that non-AA targets are easy to use. What about AA targets? Can the texture be queried, or is it necessary to create a separate framebuffer into which we blit to resolve the AA and then query that one for the texture? How do we ensure that the blit happens when necessary? How do we ensure that the framebuffer is not used as a (writable) target and a source at the same time?

And someone needs to go through the code and document the different render targets, their size, format, creation point and uses both as target and source. This is not an easy task. Expect significant SAN loss.
Turo Lamminen
Alternative Games

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Fri Jun 03, 2011 3:52 pm

Odds and sods.

On Direct3D the device capabilities can be queried as soon as the device object has been created. On OpenGL/SDL the video mode must be set first to create the OpenGL context. It is NOT safe to first create a 1x1 "fake" window just to query capabilities. On some computers all the following SetVideoMode calls will fail. I have no idea why.

On Direct3D the available video modes can be queried through the IDirect3D object, which is created before the device object. On SDL the modes can be queried as soon as SDL is initialized.
Turo Lamminen
Alternative Games

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Fri Jun 03, 2011 4:00 pm

Input handling.

There is some philosophical debate as to whether input handling is part of the rendering subsystem or not. Things to consider:

When using SDL the input should be queried in the same thread where rendering takes place. Not sure how hard a requirement this is. Let's play it safe.

On Windows input is a mess. DirectInput (which we currently use) is officially deprecated. The recommended way is to have a message loop for mice/keyboards and XInput for controllers.

It might be possible to use XInput with SDL/OpenGL as long as we use SDL 1.2 which does not do this itself.

It's possible to do window management and input with SDL but use Direct3D for rendering. There are minor problems with msvcrt.dll.

If we want multiple mice/keyboards on Linux we have to wait until the SDL folks fix their support for them, and then add support for SDL 1.3 because the API has changed. Or we toss out the SDL input code and do all the X input stuff ourselves. I won't do that and I don't recommend it for anyone else either.
Turo Lamminen
Alternative Games

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Fri Jun 03, 2011 6:59 pm

Texture sampling state

Direct3D uses separate texture samplers. Sampler state like filters, mipmapping and anisotropy is configured per sampler unit.

Classic OpenGL has sampler state as part of the texture. OpenGL 3.3 and later has sampler objects, which are also available as an extension, but their support is not guaranteed.

The current code is a mess. I propose the following:

Use a Direct3D-style abstraction. If OpenGL sampler objects are available, use them. If not, update texture state when necessary. And only when necessary. We need to shadow this state in the texture objects to avoid unnecessary changes.
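
A hypothetical sketch of the fallback path, with the sampler state shadowed in the texture object:

Code:

struct SamplerState {
    int minFilter;
    int magFilter;
    int maxAnisotropy;

    bool operator!=(const SamplerState &other) const {
        return minFilter != other.minFilter || magFilter != other.magFilter
            || maxAnisotropy != other.maxAnisotropy;
    }
};

class Texture {
public:
    // Without GL sampler objects, sampler state is applied through the
    // texture, but only when it differs from the last state set.
    void applySamplerState(const SamplerState &s) {
        if (s != shadowed) {
            // bind the texture, glTexParameteri(...) for changed fields
            shadowed = s;
        }
    }

private:
    SamplerState shadowed = {};
};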
Turo Lamminen
Alternative Games

dublindan
Posts: 8
Joined: Wed Apr 20, 2011 6:16 am

Re: On renderer API abstraction

Postby dublindan » Sun Jun 05, 2011 10:46 pm

alt_turo wrote:This could be implemented with a Singleton if necessary

Please don't do this.

alt_turo wrote:In either case alpha testing should be moved to the shaders.

Agreed.

alt_turo wrote:GLSL shaders are different. The individual shaders are linked together into a program object and when that object is activated it sets all the shaders at once.

For HLSL, Cg and OpenGL asm shaders, I would personally handle them the same way as GLSL. I would abstract this into a shader object, where vertex, fragment and geometry shaders are specified, and then it will internally either specify the shaders separately or combine them into a single shader program (in the case of GLSL). You do lose some flexibility in the case of HLSL, Cg and OpenGL asm shaders, but at least the interface is consistent with that of GLSL.
If you want to regain the flexibility, then I would go with the pre-linking approach you specified. Since these shader combinations won't need to be created mid-level, prelinking them during level-loading should be fine. I'm not sure how this is any more complicated than the HLSL method. Why is determining what the combinations are any harder than specifying the combinations for HLSL? Or is each object able to specify its own shaders (meaning that the combinations depend on the objects in the level)? If that's the case, I would ask whether you need that level of flexibility.

alt_turo wrote:On OpenGL/SDL the video mode must be set first to create the OpenGL context. It is NOT safe to first create a 1x1 "fake" window just to query capabilities. On some computers all the following SetVideoMode calls will fail. I have no idea why.

If moving to SDL 1.3, is it possible to create two windows (since SDL 1.3 supports multiple windows) - one "fake" window to query the capabilities and a second real one?


alt_turo wrote:When using SDL the input should be queried in the same thread where rendering takes place. Not sure how hard a requirement this is. Let's play it safe.

The renderer shouldn't handle input, but on the other hand the render thread should, so we can be sure not to run into this issue... In some code I wrote a few months ago, I gathered input events before rendering and then passed them to the real input handling code (in another thread) to process (map key bindings to actions).
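
A minimal sketch of that pattern, assuming the render thread owns the SDL event pump (names are hypothetical):

Code:

#include <deque>
#include <mutex>

#include <SDL.h>

static std::mutex inputMutex;
static std::deque<SDL_Event> inputQueue;

// Render thread: pump events at the top of the frame and queue them.
void pumpInput()
{
    SDL_Event event;
    while (SDL_PollEvent(&event)) {
        std::lock_guard<std::mutex> lock(inputMutex);
        inputQueue.push_back(event);
    }
}

// Input/game thread: drain the queue and do the real processing
// (mapping key bindings to actions etc.) off the render thread.
void processInput()
{
    std::deque<SDL_Event> events;
    {
        std::lock_guard<std::mutex> lock(inputMutex);
        events.swap(inputQueue);
    }
    for (std::deque<SDL_Event>::const_iterator it = events.begin();
         it != events.end(); ++it) {
        // handle *it here
    }
}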

Personally, I would just use SDL (1.3 would be my choice) regardless of whether rendering is done by DirectX or not, but I'm not going to recommend either way because I have no real-world experience here (I've always used SDL + OpenGL in my projects).

I don't really have an opinion on the other points raised, in most cases because I don't have the experience needed to really comment on them (like I said above, I don't have any real DirectX experience).

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Mon Jun 06, 2011 12:02 pm

dublindan wrote:
This could be implemented with a Singleton if necessary

Please don't do this.

I know singletons are evil and personally avoid them whenever possible but I wanted to raise the option.

dublindan wrote:For HLSL, Cg and OpenGL asm shaders, I would personally handle them the same way as GLSL. I would abstract this into a shader object, where vertex, fragment and geometry shaders are specified, and then it will internally either specify the shaders separately or combine them into a single shader program (in the case of GLSL). You do lose some flexibility in the case of HLSL, Cg and OpenGL asm shaders, but at least the interface is consistent with that of GLSL.
If you want to regain the flexibility, then I would go with the pre-linking approach you specified. Since these shader combinations won't need to be created mid-level, prelinking them during level-loading should be fine. I'm not sure how this is any more complicated than the HLSL method. Why is determining what the combinations are any harder than specifying the combinations for HLSL?

Because the current code is a mess. Take a look at Storm3D_ShaderManager.cpp. This controls which vertex shader is used when rendering geometry. The different rendering phases (basic geometry, spotlights, shadow maps) use different pixel shaders which are set devil only knows where. When rendering the effects (distortion, glow, procedural effects) the shaders are usually set together.

dublindan wrote:If moving to SDL 1.3, is it possible to create two windows (since SDL 1.3 supports multiple windows) - one "fake" window to query the capabilities and a second real one?

Can we move universally to SDL 1.3? It's not stable yet, its API is not stable yet and almost no one ships it. So we'd have to ship our own version with any binaries, and anyone wanting to compile from source would first have to get it. Even then I'm not sure the two-window approach works.

I'd like to have SDL 1.3 as an option. At compile time if SDL_VERSION >= 1.3 then we should do the 1.3 thing and otherwise the 1.2 thing. It shouldn't affect too many places in the code.
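
Roughly like this; a hypothetical sketch, and since the SDL 1.3 API is still in flux the exact calls may differ (SDL_VERSION_ATLEAST is a real SDL macro):

Code:

#include <SDL.h>

void createGLWindow(int width, int height, bool fullscreen)
{
#if SDL_VERSION_ATLEAST(1, 3, 0)
    // SDL 1.3: window and GL context are created separately.
    Uint32 flags = SDL_WINDOW_OPENGL | (fullscreen ? SDL_WINDOW_FULLSCREEN : 0);
    SDL_Window *window = SDL_CreateWindow("Storm3D",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
        width, height, flags);
    SDL_GL_CreateContext(window);
#else
    // SDL 1.2: setting the video mode creates the context too.
    Uint32 flags = SDL_OPENGL | (fullscreen ? SDL_FULLSCREEN : 0);
    SDL_SetVideoMode(width, height, 0, flags);
#endif
}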

dublindan wrote:Personally, I would just use SDL (1.3 would be my choice) regardless of whether rendering is done by DirectX or not, but I'm not going to recommend either way because I have no real-world experience here (I've always used SDL + OpenGL in my projects).

It is possible to use the SDL+Direct3D combination but it has a few problems. The biggest one is that the standard SDL.dll links against msvcrt.dll, so you have to be careful about which runtime library you specify to MSVC. Otherwise you get at least a whole bunch of warnings.
Turo Lamminen
Alternative Games

AndySmile
Posts: 12
Joined: Sat Jun 18, 2011 8:32 pm

Re: On renderer API abstraction

Postby AndySmile » Mon Jun 20, 2011 11:01 pm

hi @all,

I'm just wondering: why do we actually have a DirectX renderer? I mean, the graphics quality is great, but it's not as high-end as Crysis 2 or something, so I guess we could keep that quality using just the OpenGL renderer. That way we would only have to take care of one renderer that runs on all three platforms. Or does the engine currently use some specific DirectX features that aren't supported in OpenGL yet?

For shaders, we could use Cg by Nvidia; that way we could use the same shaders on all three platforms (I'm not sure whether it's actually supported on Linux, but I'd guess so). That would make shader handling more comfortable. What do you think?

So, do you want to redesign the entire engine, or just the renderer parts?

See ya
Andy

Urfoex
Posts: 50
Joined: Fri Apr 15, 2011 11:14 am

Re: On renderer API abstraction

Postby Urfoex » Tue Jun 21, 2011 12:55 am

AndySmile wrote:why do we actually have a DirectX renderer? [...]


There are some replies to that question that should answer yours as well:
viewtopic.php?f=19&t=3570#p14991
+-----------------------------------------------------------------\
| Debian testing 64Bit on
| * AMD Phenom x4 905e (4x2500Mhz)
| * 6GB Ram
| * AMD/ATI Radeon HD4770 (fglrx)
+-----------------------------------------------------------------/

AndySmile
Posts: 12
Joined: Sat Jun 18, 2011 8:32 pm

Re: On renderer API abstraction

Postby AndySmile » Tue Jun 21, 2011 1:14 am

Indeed, it answers my questions. Thanks for the link. I just thought that since it deals with the render pipeline of the Storm engine, this kind of discussion belongs here. But no matter, now I know more ;).

Is there a place where we collect all these decisions? Like a TDD and a GDD? Who'll make the main decisions anyway?

See ya
Andy =)

alt_turo
Posts: 195
Joined: Mon Dec 13, 2010 11:06 am

Re: On renderer API abstraction

Postby alt_turo » Mon Jun 27, 2011 1:31 pm

AndySmile wrote:Is there a place where we collect all these decisions?

Not right now. I suppose there should be some kind of README or TODO file where these things are collected. Feel free to collect them :)

Like a TDD and a GDD? Who'll make the main decisions anyway?

Whoever writes the code. As long as you're willing to write code and not break anything else you can pretty much do whatever you want. I'll review any patches and merge the ones I think are OK. And I suppose the FB guys have the final veto right.
Turo Lamminen
Alternative Games

hubrobin
Posts: 1
Joined: Tue Aug 16, 2011 12:54 am

Re: On renderer API abstraction

Postby hubrobin » Tue Aug 16, 2011 1:02 am

alt_turo wrote:Use a Direct3D-style abstraction. If OpenGL sampler objects are available, use them. If not, update texture state when necessary. And only when necessary. We need to shadow this state in the texture objects to avoid unnecessary changes.


This sounds like a good idea. I think that sampler state is really a property of the sampling operation rather than of the texture.

What about the vertex input layout abstraction? Are you going to use a system like the one Direct3D 10 uses?

