Multiple Render Targets


I'm currently developing a game with Orx Engine and I have to say I'm very impressed by the overall quality of the code and design of the engine!

However, since I'm implementing something similar to 3D deferred rendering in 2D, I was wondering whether there's any support for multiple render targets within the engine;
I looked for it in the source but haven't found anything so far.

Long story short: Is there any kind of support for MRT at the moment?
My idea was to add something like orxDisplay_SetDestinationBitmap with an array of bitmaps instead of just one, but I wanted to know whether this feature is already planned.

(As for my project, I'll probably post about it in a week in the right section because I think this engine deserves more attention :) )


  • jimjim
    edited March 2013
    Welcome to this forum :) Are you talking about rendering to texture? If so, you can find a tutorial about it here
  • edited March 2013
    Yes, MRT specifically is a technique that allows you to render to multiple textures at the same time with a specific shader.
    I'm using simple render-to-texture for the moment, but rendering would be faster if MRT were supported.
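    For context, here's what the shader side of MRT looks like: the fragment shader writes one value per attached target. This is a hypothetical sketch (the varying names are made up, and it's not code from this thread), held as a C string the way a host program would pass it to glShaderSource():

```c
/* Hypothetical GLSL 1.x MRT fragment shader, stored as a C string.
   gl_FragData[i] writes to the i-th attached color buffer. */
static const char *pcMRTFragment =
    "varying vec4 vColor;\n"
    "varying vec3 vNormal;\n"
    "void main()\n"
    "{\n"
    "  gl_FragData[0] = vColor;             /* color buffer  */\n"
    "  gl_FragData[1] = vec4(vNormal, 1.0); /* normal buffer */\n"
    "}\n";
```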
  • jimjim
    edited March 2013
    Hmm, I've never needed MRT myself, but you can use custom shaders with orx. I don't know whether the tutorial does exactly what you're looking for; otherwise, wait for Iarwain
  • edited March 2013
    Yes, I've seen the tutorial you were referring to, but my problem is different, and MRT will undoubtedly improve my performance, as I won't need to draw every object twice with different textures (I need to render color and normals into two different buffers).
  • edited April 2013
    Hi subr3v and welcome here!

    First of all, thanks for your appreciation.

    As for your question, sorry for the delay as I was out all day, but I'm glad jim was able to start helping you. :)

    The short answer is that there's no out-of-the-box support for MRT in orx. There are a couple of reasons for this.

    The first one is that, as you've probably noticed, the orxDisplay API is heavily oldschool 2D-oriented. Most of it was written over a decade ago and it shows its age. I'm in the process of making it evolve into a more current-hardware-friendly API, but it's a lengthy process, as I still want it to make sense for older architectures.

    The second reason is that MRT isn't available on some hardware/APIs, such as OpenGL ES/iOS/Android. I'm still not sure whether I should simply echo a warning message in that case or whether, in addition to it, I should also provide a way to "simulate" MRT by iterating over the render targets behind the scenes (which wouldn't give you the performance one might expect from using MRT in the first place, but at least it'd be functional with a single code path, though I'd have to rewrite custom shaders in that case).

    At the moment, the most straightforward way of using MRT that comes to my mind is to either use your own FBO and attach both your targets to it, or to re-use the one created by orx and attach your extra target to COLOR1, ...

    As you've noticed, calling orxDisplay_SetTargetBitmap() with a bitmap that's not the screen will set up the FBO with your bitmap as the target in COLOR0.
    After this, you simply need to add your own render target to that FBO.

    You can get the texture ID of a bitmap by calling orxDisplay_GetBitmapID() (unless you want to use a texture that you create yourself, but in that case you won't be able to access it via orx for easy debug rendering of the normals via another viewport or by saving it using the console, for example).

    Along the same lines, if you need a different vertex shader than the one orx uses, or geometry/tessellation shaders, you can replace the whole internal rendering pipeline by listening to the render *_START events, issuing your own calls and returning orxSTATUS_FAILURE to prevent orx from doing the default rendering.
    A call to orxDisplay_SetVideoMode(orxNULL) will then reset all the internals.
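    That interception pattern could be sketched like this; the handful of orx types and constants used are stubbed locally so the snippet stands alone (real code would #include "orx.h" and register the handler with orxEvent_AddHandler(orxEVENT_TYPE_RENDER, ...) instead):

```c
/* Stubbed stand-ins for the orx types used below (real code: #include "orx.h"). */
typedef enum { orxSTATUS_SUCCESS, orxSTATUS_FAILURE } orxSTATUS;
typedef struct { unsigned int eID; } orxEVENT;
enum { orxRENDER_EVENT_OBJECT_START, orxRENDER_EVENT_OBJECT_STOP };

/* Render event handler: take over the drawing of an object by issuing your
   own display calls, then return FAILURE so orx skips its default rendering. */
static orxSTATUS RenderEventHandler(const orxEVENT *_pstEvent)
{
  if(_pstEvent->eID == orxRENDER_EVENT_OBJECT_START)
  {
    /* Issue your own orxDisplay_* calls here (custom vertex/geometry/
       tessellation pipeline)... */
    return orxSTATUS_FAILURE; /* ...and prevent the default rendering. */
  }

  /* Let orx handle every other event normally. */
  return orxSTATUS_SUCCESS;
}
```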

    Let us know if you have any problems.
  • edited April 2013
    Thanks for the welcome! :)

    I have followed your suggestion and I'm currently using the existing FBO by attaching a new texture and calling glDrawBuffers accordingly. It works like a charm!

    Also, the fact that MRT was so easy to implement is only possible thanks to the excellent design of the engine!

    I'll post about my project in the next few days and let you see what I'm doing with MRTs :P

    Unrelated question: when is the partitioning optimization of the render plug-in coming?
  • edited April 2013
    Glad it worked right away! :)

    [strike]Btw, you shouldn't have to issue the glDrawBuffers call yourself, it should get called automatically upon any context change (shader, render target, etc...).[/strike]

    EDIT: Read too fast and/or wasn't completely awake so please ignore this! ^^

    I'm looking forward to learning more about your project. :)

    Btw, if you feel like it (no pressure!), would you consider writing a tutorial about MRT on the wiki?

    As for the render optimization, there are two parts:
    - the partitioner, which is actually on hold at the moment, as I still haven't found a good compromise and am still reading about partitioning in Battlefield 3
    - the batch gathering phase in the render plugin itself; I started working on it 2 weeks ago but stumbled upon an issue I need to solve (namely, should I maintain an ordered list of objects at all times, or should I just keep an ordered list of proxies in the render plugin, which means I'd also have to find an efficient way of notifying that a proxy needs to be re-sorted)

    Lastly, Lydesik asked me to add a feature for object grouping/layering; I'm still not 100% sure which way I'm going to go, and that is likely to affect any decision about the batch gathering phase, so... :)

    In any case I'll let you know when it's done. Do you have performance issues at the moment? If so, how many objects do you have? In how many batches are they rendered? You can check that in the profiler screen, that would be the number of calls to orxDisplay_DrawArrays().
  • edited April 2013
    I'll definitely write a tutorial! But for now I'm very busy, so it may take a few weeks.

    As for the optimization: I'm not having performance problems directly related to partitioning (at least with the tilemap, which had a lot of objects) because I wrote a custom rendering routine by overriding the object_start_event.

    But since you asked me to look at the DrawArrays count, I've noticed that batching is not performed when you're using a shader to render an object (and that's the case for all of my objects). I've looked into the source code and found out that you reset the last used bitmap when using a shader; is there a reason for that?
  • edited April 2013
    No worries, take the time you need for the tutorial, it's greatly appreciated.

    As for shader-batching, it's actually something I implemented only 10 days ago, so I guess you're not using the latest version from the hg repo. :)

    As you can see in the code, it was a bit more complex than just not resetting the last used bitmap. As the code is fairly new, let me know if you run into any issues with it.

    PS: If you're using custom parameters for your shader, orx won't be able to batch anything as it'll need to update the uniforms between each call.
  • edited April 2013
    Awesome! I'm going to compile the latest version then!
    (I was lazily using the precompiled one :P)

    Does the batching work if I use shader_start, then a number of bitmap transforms, and then shader_end? Even if I set a custom param during the start?

    Thanks again for the awesome support and engine!
  • edited April 2013
    It should, as long as you're using the same context (texture, blending, filtering).

    Thanks again for your appreciation! :)

    Btw, I might soon-ly add native MRT for architectures that support it + warning message for those that don't.
    Setting up everything from config has a certain appeal to it. ;)
  • edited April 2013
    I managed to compile it and there's a huge performance gain: from 8.5 ms to 0.6! :)

    Native MRT support would be nice, since at the moment my implementation only works on Windows (I need to replace wglGetProcAddress with the other OSes' equivalents).

    I'm posting a bonus screen here since you helped me a lot :)
    I'll make a full post in the next days.

  • edited April 2013
    Good, I'm glad it gave nice results in your case! :)

    You also get the latest features by using the hg repo, such as the resource module (examples on how to use it here).

    Nice screen! Looking forward to seeing more! :)
  • edited April 2013
    I'll try the resource module in the next few days, thanks for mentioning it :P
    We're also preparing a small teaser/trailer of the game, so I'll make a topic once we have that.
  • edited April 2013
    Btw, I thought I'd have the time to implement MRT last weekend, but I was wrong.
    So that it doesn't slip too much, I created that issue, if you want to track progress on it. :)
  • edited June 2013
    Sorry for the long delay but I finally got the opportunity to implement support for MRT on windows/mac/linux.

    Instead of using the config property Viewport.Texture, you can now use Viewport.TextureList and give it a list of textures that will then be transmitted to orxDisplay_SetDestinationBitmaps().

    All the textures must have the same size. You can also query the config value Display.DrawBufferNumber at runtime to know how many destination textures you can use at once.
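    A config sketch of that setup (the section and texture names are hypothetical; only the TextureList property comes from this post):

```ini
; Viewport rendering color and normals in one pass.
; All the listed textures must have the same size.
[GBufferViewport]
TextureList = ColorBuffer # NormalBuffer
```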

    I've only rapidly tested the feature so if you see any weird behavior, please let me know! :)