orxDisplay_DrawMesh, vertex attributes, z-buffer

edited January 2015 in Help request
Hi,

I've been using orxDisplay_DrawMesh to draw my soft bodies for a while. It works quite well, but now I've started investigating how much I can leverage shaders with it. In particular, is there any way of passing custom vertex attributes to my fragment shaders? As far as I can see in the orx source code, the vertex shaders are hard-coded and don't seem to support such customization.

The only workaround I could come up with is to abuse the vertex colors. Since I'm writing the fragment shader, I simply override the default behavior that multiplies the vertex colors with the object texture, which gives me 4 floats to transfer useful information to my shaders. Is there any problem with this workaround? Also, can you think of any way to transfer more than 4 numbers (on a vertex-by-vertex basis)?
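
For reference, here's roughly the packing side of that workaround (untested sketch; it assumes the current orxDISPLAY_VERTEX layout with fX/fY/fU/fV/stRGBA and the orx2RGBA helper, so adapt to the actual headers). Each value has to be remapped into [0, 1] and gets quantized to 8 bits, which is part of why I'd like real vertex attributes:

    #include "orx.h"

    /* Packs 4 custom per-vertex floats (each remapped to [0, 1] beforehand) into
     * the vertex color; my fragment shader then reads them back from the
     * interpolated color instead of using it as a tint. */
    static void PackCustomAttributes(orxDISPLAY_VERTEX *_pstVertex,
                                     orxFLOAT _fA, orxFLOAT _fB,
                                     orxFLOAT _fC, orxFLOAT _fD)
    {
      /* Quantizes each value to 8 bits, so precision is limited to ~1/255 per channel */
      _pstVertex->stRGBA = orx2RGBA((orxU8)(_fA * 255.0f),
                                    (orxU8)(_fB * 255.0f),
                                    (orxU8)(_fC * 255.0f),
                                    (orxU8)(_fD * 255.0f));
    }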

Another problem I'm trying to solve is handling self-occlusions gracefully. The best way of doing that would be to set the z-position in the fragment shader, but as far as I can tell, OpenGL ES fragment shaders don't support that. The next best thing would be the ability to define the z-position along with the vertex positions, but the orxDisplay_DrawMesh API only accepts x and y coordinates; the z coordinate is inherited from the object. My last resort would be to sort the triangles manually to handle self-occlusions reasonably, but I'd rather avoid that unless necessary.

All of that said, I'm really amazed at how much mileage I'm getting out of Orx and how little I need to fight it, even though I'm using it for something it wasn't designed for. I guess that speaks to how well it was designed :)

Comments

  • edited January 2015
    Hi,

    Yes, support for custom vertex shaders would be great and will probably be added, hopefully before the end of the year. My current difficulty is finding a nice, user-friendly way to formalize the attributes and handle their binding.

    It's part of my plan to also add meshes as first-class citizens, in addition to just "quads". Those might also become the first steps toward supporting 3D (but that is not planned for this year ;)).

    I'm glad you still enjoy orx even when pushing it outside its comfort zone. :)
  • edited January 2015
    Woops, missed a question: for now I can't think of any tricks to pass more than 4 per-vertex values.

    Also, I'll probably refactor the render/display connection to handle object-camera-viewport transformations on the GPU instead of the CPU. But that shouldn't be too hard to do.
  • edited January 2015
    I'll be one of the early testers of the custom vertex shaders then :) Meshes as first-class citizens would be very nice; that would eliminate the zero-area triangles I'm sending right now.

    About the orxDisplay_DrawMesh API: would it be too much trouble to accept an fZ value there and add it to the object's z value?
  • edited January 2015
    iarwain wrote:
    Also, I'll probably refactor the render/display connection to handle object-camera-viewport transformations on the GPU instead of the CPU. But that shouldn't be too hard to do.

    I guess that'll be happening in the vertex shader, right? So that shouldn't interfere with my fragment shaders. I only need the vertex shaders to pass attributes.
  • edited January 2015
    enobayram wrote:
    I'll be one of the early testers of the custom vertex shaders then :) Meshes as first-class citizens would be very nice; that would eliminate the zero-area triangles I'm sending right now.

    About the orxDisplay_DrawMesh API: would it be too much trouble to accept an fZ value there and add it to the object's z value?

    I too would welcome meshes as first-class citizens (as I mentioned in another thread). But I don't know if we should try to improve orxDisplay_DrawMesh(); I would rather concentrate on the new Mesh API. I know it can sometimes be a pain waiting for something you need now.
  • edited January 2015
    Trigve wrote:
    I too would welcome meshes as first-class citizens (as I mentioned in another thread). But I don't know if we should try to improve orxDisplay_DrawMesh(); I would rather concentrate on the new Mesh API. I know it can sometimes be a pain waiting for something you need now.

    Hmm, I'm starting to think I have no idea what you're talking about when you say "the new Mesh API". Could you provide any pointers? In my case, orxDisplay_DrawMesh is a perfect fit: I don't actually have meshes, but I'm using the function to draw an irregularly shaped soft body that changes its shape (including the number of vertices) every frame.
  • edited January 2015
    enobayram wrote:
    Hmm, I'm starting to think I have no idea what you're talking about when you say "the new Mesh API". Could you provide any pointers? In my case, orxDisplay_DrawMesh is a perfect fit: I don't actually have meshes, but I'm using the function to draw an irregularly shaped soft body that changes its shape (including the number of vertices) every frame.

    I was referring to iarwain's post about meshes as first-class citizens. That is, if this gets implemented, I think there will be new functions for mesh handling (create, etc.).
  • edited January 2015
    enobayram wrote:
    About the orxDisplay_DrawMesh API: would it be too much trouble to accept an fZ value there and add it to the object's z value?

    I'd need to modify the vertex format (there's no depth as we speak) and activate the depth test as well. It can be done before everything else happens, I think; I'll look into it before starting the bigger task (meshes as first-class citizens).

    However there's no "object" Z value when calling this function, it'd still be completely in screen space, you could do the addition yourself with the object retrieved from the render event.


    As for the "new" API, it'd more likely be changes in orxGraphic (+associated config and potential wrapper in orxObject), not in orxDisplay itself (beside the communication structure: orxDISPLAY_TRANSFORM).
  • edited January 2015
    iarwain wrote:
    However there's no "object" Z value when calling this function, it'd still be completely in screen space, you could do the addition yourself with the object retrieved from the render event.

    I hadn't noticed that the vertices had no Z value; since the API doesn't ask for it, I just assumed you were internally passing the object's Z value translated to screen space. How do you currently handle occlusion then? Does it work simply due to the drawing order?
  • edited January 2015
    Yes, the sorting is done beforehand, in the render plugin. It's based on the object's Z value and on some other properties (shader, smoothing, ...) in order to facilitate batching in the display plugin.
    Then the draw calls are issued in a back-to-front order.

    The whole sorting process needs to be optimized, especially if we want to add an early-Z rejection later on, which would considerably boost the GPU performance on some hardware.
  • edited January 2015
    I see. I think I can minimize my current problem with some rudimentary sorting.
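
    For the record, this is the kind of rudimentary sorting I have in mind (untested sketch; MyTriangle and fDepth are my own bookkeeping on top of the simulation, not orx types): sort the triangles back-to-front by a per-triangle depth estimate, then emit them, with the usual degenerate stitching, into the vertex list passed to orxDisplay_DrawMesh.

        #include <stdlib.h>
        #include "orx.h"

        typedef struct MyTriangle
        {
          orxDISPLAY_VERTEX astVertex[3]; /* already filled with x/y/u/v/color */
          orxFLOAT          fDepth;       /* per-triangle depth estimate, larger = further away */
        } MyTriangle;

        static int CompareTriangles(const void *_pA, const void *_pB)
        {
          const MyTriangle *pstA = (const MyTriangle *)_pA;
          const MyTriangle *pstB = (const MyTriangle *)_pB;

          /* Back-to-front: deeper triangles first, so nearer ones are drawn over them */
          return (pstA->fDepth < pstB->fDepth) - (pstA->fDepth > pstB->fDepth);
        }

        /* Usage: qsort(astTriangle, u32TriangleCount, sizeof(MyTriangle), CompareTriangles);
         * then flatten the sorted triangles into the vertex array for orxDisplay_DrawMesh */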