Help with fog shader

edited April 2014 in Help request
Hey guys! New to orx, and so far loving it.

I have a bit of a problem. I'm trying to make a fog shader work (objects fade into the background), and to do this I need (I think) to access the color values of the texture being passed to the shader, but when I try to access it I get:
[ASSERT] [orxDisplay.c:orxDisplay_GLFW_SetShaderBitmap():4465] [ASSERT] : <(_pstValue != sstDisplay.pstScreen) && "Can't use screen bitmap as texture parameter (ID <0>) for fragment shader.">

Here's my shader block:
[FogShader]
ParamList	=	texture # fogAmt # fogStart # fogEnd # fogColor
fogAmt		=	1.0
fogStart	=	0.0
fogEnd		=	200.0
fogColor	=	(.149, .145, .141)
Code		=	"
void main()
{
  float fog_coord, factor;

  vec4 fog_color  =  vec4(fogColor.rgb, 1.0);
  // Problem line:
  vec4 tex_color  =  texture2D(texture, gl_TexCoord[0].st);
  float fog_dist  =  fogEnd - fogStart;

  fog_coord = abs(gl_FragCoord.z / gl_FragCoord.w) - fogStart;
  fog_coord = clamp(fog_coord, 0.0, fog_dist);
  factor    = (fog_dist - fog_coord) / fog_dist;
  factor    = clamp(factor, 0.0, 1.0);

  gl_FragColor	=	mix(tex_color, fog_color, (1.0 - factor) * fogAmt);

}
"

I'm fairly new to shaders, textures, pipelines and all that, so any pointers you could give would really help.

Thanks!

EDIT: Forgot to mention that this shader is being applied to the Viewport, not individual objects.

Comments

  • edited April 2014
    So I checked out the tutorials in the source (as opposed to the Wiki ones) and found this:
    [Viewport]
    ...
    TextureList = ViewportTexture
    
    [FogShader]
    ParamList	=	texture # fogAmt # fogStart # fogEnd # fogColor
    texture		=	ViewportTexture
    ...
    

    Now I'm getting
    [18:35:52] [DISPLAY] [orxTexture.c:orxTexture_CreateFromFile():770] Failed to load bitmap [ViewportTexture] and link it to texture.

    Am I going in the right direction, or is this wrong?

    Thanks!
  • edited April 2014
    Hi orthecreedence and welcome here! :)

    Yes, you're going in the right direction!

    Shaders on viewports/screen changed last year and I might not have updated the tutorials accordingly, sorry.

    Before the modification, when no explicit input texture was given to a viewport shader, orx would grab the content of the destination (usually the screen, as in your case), copy it back to a texture and use that texture behind the scenes as an input. It was convenient for the user, but not a very efficient pipeline due to the screen -> texture copy (not even mentioning the pipeline stall/sync on the GPU).

    This isn't supported anymore: if one wants to apply a shader to a whole viewport, one now has to explicitly render the original content to a texture and reuse that texture as the shader's input. This yields much better performance and is almost as straightforward to set up as the previous approach: almost all that's required is a few additional config lines.

    I've updated my compositing tutorial if you want to see a more complex example than what I'm going to write down here.

    The first thing you need to set up is multiple viewports, as you'll need at least two of them now. This part actually requires a small code change, but it's the only one. I like to do it in a more or less generic way, i.e. my viewport list and its order of creation are controlled by config.
    Here's what I usually do:

    Config:
    [Main]
    ViewportList = GameViewport # ScreenViewport
    

    Code:
    orxSTATUS orxFASTCALL Init()
    {
      // Pushes Main config section
      orxConfig_PushSection("Main");

      // For all viewports listed in config
      for(orxS32 i = 0, iCounter = orxConfig_GetListCounter("ViewportList");
          i < iCounter;
          i++)
      {
        // Creates it
        orxViewport_CreateFromConfig(orxConfig_GetListString("ViewportList", i));
      }

      // Pops config section
      orxConfig_PopSection();

      // Done!
      return orxSTATUS_SUCCESS;
    }
    

    That will create both GameViewport (rendered first) and ScreenViewport (rendered second).

    Your GameViewport here is what you currently have minus the shader and with a small addition: a destination texture (where we're going to render all your game's content).
    Here's the line you need:
    [GameViewport]
    TextureList = GameTexture ; The name doesn't matter as long as you reuse the same one as the shader input
    

    As "GameTexture" isn't a texture that already exists, orx will create a new one (and handle its deletion as well when the viewport gets deleted) with the same size as your viewport. If you didn't specify any size for your viewport, your display size will be used, which I assume is what you want.

    Ok, now that we're rendering to a texture in memory, there are a couple of neat things we can do for debugging purposes; I'll come back to that later.

    You now need to render that texture to the screen while applying your shader to it. That's what the second viewport, ScreenViewport, is for. No need to link it to a camera as we're not going to render any world objects: we're simply going to apply a shader and blit the resulting quad onto the destination (in this case the screen; however, we can daisy-chain viewports to textures for more advanced compositing effects).
    [ScreenViewport]
    ; No TextureList -> rendering to screen
    ShaderList = FogShader ; Using the fog shader, that's all we need
    ; You might also want to set up a background color, a blend mode or whatever else your game needs here
    
    [FogShader]
    ; Here the only change we're going to do is use the GameTexture as input for your parameter "texture"
    texture = GameTexture
    

    And that's all. I typed all this directly in the forum, so lemme know if something isn't working as expected.

    Now, for debugging purposes, you might want to capture the original content of GameTexture. There are a few ways of doing this, the most convenient being to save it to a file using the console.
    To do this, open the console by pressing the ` key, and type:
    texture.find GameTexture
    texture.save <tab> game.png
    The <tab> key press will autocomplete the previous result, i.e. what texture.find GameTexture returned: the GUID of GameTexture.
    Now you can open game.png and see what the texture's content was when the command was executed from the console.

    Now let's say you want to continuously display its content in a corner of your screen. The way I do that is with an extra dev.ini config file that only exists locally on my computer and is only loaded when developing my game. If your main config file is Game.ini, simply add this line at the end of it:
    @@dev.ini@@
    

    If the file exists, it'll be loaded and you can potentially override any previously defined config in it. In our case, we'll add a new viewport, rendered last:

    dev.ini:
    [Main]
    ViewportList = GameViewport # ScreenViewport # DebugViewport
    
    [DebugViewport]
    RelativeSize = (0.25, 0.25, 1.0)
    RelativePosition = top right
    ShaderList = @
    Code = "void main() {gl_FragColor = texture2D(texture, gl_TexCoord[0].xy);}"
    ParamList = texture
    texture = GameTexture
    

    Now you'll have a debug thumbnail (1/16th of the screen area) showing the content of GameTexture at the top right of your screen at all times. This can come in handy when doing advanced shader composition, such as shadows, for example.

    Let me know if you have any questions! :)
  • edited April 2014
    orthecreedence wrote:
    Now I'm getting
    [18:35:52] [DISPLAY] [orxTexture.c:orxTexture_CreateFromFile():770] Failed to load bitmap [ViewportTexture] and link it to texture.

    Am I going in the right direction, or is this wrong?

    Thanks!

    So yes, that's the exact direction you should take, and this warning is normal in debug: it lets you know there wasn't any "ViewportTexture" available on disk, so an empty one gets created automatically by the viewport.

    I'll try to inhibit that warning message in this particular case in the future.
  • edited April 2014
    Now, looking more closely at your fog shader, I don't think you'll get the result you're expecting, unfortunately.
    You're relying on the Z coordinate; however, the rendering itself being 2D, you'll always get the same value there.
    If you can give us more details about the exact kind of effect you want to achieve (a mockup/screenshot would help), maybe we can help you with that.
    As all your objects are 2D anyway, depending on the result you want, simply applying a color to the objects on the fly based on their depth might work (you can also play with the blend mode if need be).
  • edited April 2014
    Awesome!! Thanks for the quick response. I managed to get the shader loading by splitting my viewports up and loading them via a loop as you suggested. That all makes sense now.

    I guess my final question (hopefully) is this: Am I grabbing the "z" value for the object properly in the shader?
    fog_coord = abs(gl_FragCoord.z / gl_FragCoord.w) - fogStart;
    fog_coord = clamp(fog_coord, 0.0, fog_dist);
    factor    = (fog_dist - fog_coord) / fog_dist;
    factor    = clamp(factor, 0.0, 1.0);
    

    In other words, is gl_FragCoord.z the correct way to go here or is there another way to grab the depth of the fragment?

    Also, the pre-shader texture debug window is awesome, I'll set that up.
  • edited April 2014
    Posted my last reply before seeing yours.

    So my goal is to have objects set up along the Z axis (much like the "scrolling" tutorial) such that the further they are from the camera, the more they fade into the background ("fog") color.

    So in another game I was working on (by hand, no engine), I was using almost exactly the same FogShader to get the effect I wanted. I honestly don't fully understand how it works, but I'm guessing it was using depth testing to grab the Z coords out of the texture.

    Is there a better way of doing this?
  • edited April 2014
    My pleasure!

    Well, I can think of two ways to solve your problem right now.
    One would be to modify orx itself to actually send the Z coordinate to the vertex buffer. It hasn't been done yet for minor performance reasons: omitting one coordinate saved some bandwidth, but that's becoming less and less relevant.
    Anyway, I'd like to add a feature in the mid-term future called an early-Z pass, which should bring a good perf boost on the GPU for people using many, many objects/layers, and that will require sending the Z coordinate for vertices anyway.
    This is not too hard to do but might require some familiarity with OpenGL rendering and orx itself.

    The other option would be to not use a shader but to tint the objects themselves, as I suggested in my previous post.
    You can hook custom code into object rendering by listening to the orxRENDER_EVENT_OBJECT_START event of type orxEVENT_TYPE_RENDER.
    In its payload, you'll find all the information about the object currently being rendered, and you can alter its tint with orxObject_SetColor() (see the sketch below).
    You can compute the color you want the same way it was previously done in the shader.
    Now, if your objects never move along the Z axis and neither does your camera, you can simply do this computation once when an object gets created (this time by listening to orxOBJECT_EVENT_CREATE/orxEVENT_TYPE_OBJECT).
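    Here's a minimal sketch of the render event approach, assuming the camera sits at Z = 0 and that the fog parameters are mirrored on the code side (FOG_START, FOG_END and svFogColor are illustrative names, not part of orx):
    #define FOG_START orx2F(0.0f)   // Illustrative values matching the shader params
    #define FOG_END   orx2F(200.0f)

    static orxVECTOR svFogColor; // Fog color, e.g. read from config at init

    static orxSTATUS orxFASTCALL RenderEventHandler(const orxEVENT *_pstEvent)
    {
      // Only reacts to the pre-render event of each object
      if(_pstEvent->eID == orxRENDER_EVENT_OBJECT_START)
      {
        // The object being rendered is the event's sender
        orxOBJECT  *pstObject = orxOBJECT(_pstEvent->hSender);
        orxVECTOR   vPos;
        orxCOLOR    stColor;
        orxFLOAT    fFactor;

        // Computes a fog factor from the object's depth, mirroring the shader
        orxObject_GetWorldPosition(pstObject, &vPos);
        fFactor = orxCLAMP((vPos.fZ - FOG_START) / (FOG_END - FOG_START), orxFLOAT_0, orxFLOAT_1);

        // Tints the object toward the fog color; as the tint is multiplied with
        // the object's texture, this only approximates the shader's mix()
        orxVector_SetAll(&stColor.vRGB, orxFLOAT_1);
        stColor.fAlpha = orxFLOAT_1;
        orxVector_Lerp(&stColor.vRGB, &stColor.vRGB, &svFogColor, fFactor);
        orxObject_SetColor(pstObject, &stColor);
      }

      return orxSTATUS_SUCCESS;
    }

    You'd register it once, typically in Init(), with orxEvent_AddHandler(orxEVENT_TYPE_RENDER, RenderEventHandler);.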

    If anything I just wrote doesn't look clear, let me know and I'll be happy to get into more details; however, it won't be before tonight or tomorrow, as I have to run some errands for now. :)
  • edited April 2014
    Thanks for the response, that all makes sense. I'll weigh my options and see what works best (probably your original idea of pre-coloring the objects would be the simplest). Although it does sound kind of fun to gain a better understanding of what's happening under the hood by digging through the orx code a bit =].
  • edited April 2014
    I like it when people dig through the code itself: it often helps spot hard-to-see bugs, and some people have provided very astute suggestions in the past as well.
    Not even mentioning those who then contribute to it (like the whole Android version, texture compression support, etc...). =)
  • edited April 2014
    Hey, before I go off on a wild tangent, does this sound right?

    Looking through the code (mainly the GLFW display plugin, but I'm guessing it's similar for the others), would passing in depth be a matter of increasing the orxDISPLAY_KU32_VERTEX_BUFFER_SIZE multiplier from 4 to 6, implementing a "fZ" value in orxDISPLAY_GLFW_VERTEX, calculating the Z value for each sstDisplay.astVertexList entry, and updating glVertexPointer(2, ...) to glVertexPointer(3, ...)? Would I have to update the matrix struct at all or is it possible to do this with just vX/vY?

    Unless I'm way off, I wouldn't mind taking a crack at this (with guidance from you, of course). I think having depth values in the shaders would make things like DoF/fog/other depth-related effects a lot simpler. I could tie it into the config system as well (unless you're planning on forcing Z into the coords eventually anyway, then might as well make it the default).
  • edited April 2014
    Yes, that's basically it for the second half of the task: handling the vertex format.
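    To make that concrete, here's a hypothetical sketch of the vertex side (based on the names you quoted; the fZ field and the pointer size are the only changes):
    typedef struct __orxDISPLAY_GLFW_VERTEX_t
    {
      orxFLOAT  fX, fY, fZ; /* Position, now with a depth component */
      orxFLOAT  fU, fV;     /* Texture coordinates */
      orxRGBA   stRGBA;     /* Vertex color */
    } orxDISPLAY_GLFW_VERTEX;

    /* glVertexPointer now reads 3 position components per vertex */
    glVertexPointer(3, GL_FLOAT, sizeof(orxDISPLAY_GLFW_VERTEX), &(sstDisplay.astVertexList[0].fX));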

    The calculation of the value itself, though, would go in the render plugin.
    Objects already have a Z value; you only need to normalize it in camera space, something like (Object.Z - Camera.Z) / (Camera.Far - Camera.Near).
    Then you need to modify the orxDISPLAY_TRANSFORM structure and add an fDstZ field that will contain the value you just calculated.
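    In rough code terms (a sketch only, assuming the render plugin already has the object, the camera and the transform at hand):
    orxVECTOR vObjectPos, vCameraPos;
    orxAABOX  stFrustum;

    orxObject_GetWorldPosition(pstObject, &vObjectPos);
    orxCamera_GetPosition(pstCamera, &vCameraPos);
    orxCamera_GetFrustum(pstCamera, &stFrustum);

    /* Normalizes the object's Z over the camera's near/far range (near = vTL.fZ, far = vBR.fZ) */
    stTransform.fDstZ = orxCLAMP((vObjectPos.fZ - vCameraPos.fZ) / (stFrustum.vBR.fZ - stFrustum.vTL.fZ), orxFLOAT_0, orxFLOAT_1);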

    On the display plugin side, you then have to pass that value to the vertex buffer, as you mentioned. No need to update the matrix itself, as the world -> camera transformation has already happened in the render plugin.

    The additional annoyance will then be dealing with the different object groups, as they act as "layers" and you might want to separate the Z values of objects based on those groups. I.e., if a camera renders objects from 4 distinct groups, you might want to partition the 0 - 1 range into 4 separate subranges: 0 - 0.25 for the first group, 0.25 - 0.5 for the second, and so on (see the one-liner below). Shouldn't be too bad, but that's definitely step two. ;)
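    For the record, something along these lines (u32GroupIndex and u32GroupCount are illustrative names):
    /* Packs each group into its own Z subrange: group 0 -> [0, 0.25], group 1 -> [0.25, 0.5], etc. */
    orxFLOAT fPartitionedZ = (orxU2F(u32GroupIndex) + fDstZ) / orxU2F(u32GroupCount);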
  • edited April 2014
    Well cool, that went over my head =].

    I'll look over the render plugin and see if I can make sense of it and correlate what you said with some actual changes. Also, as far as rendering groups/layers go, are you saying that objects in groups appearing first in the list would render behind objects in groups listed after them, even if their Z value is closer to the camera? I'm confused about that.

    Thanks for hand-holding me here.
  • edited April 2014
    Well, the way it works is that Z order is respected within a group. However, groups are rendered in the sequence defined by the Camera.GroupList config property (in their order of declaration).
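    For example (the group names here are made up):
    [Camera]
    GroupList = Background # Default # Foreground ; Rendered in this order, Z-sorted within each group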

    I've recently tried to explain the group concept and what it implies in this forum thread; let me know if that makes sense.

    A more detailed explanation has been written on the orx-dev group (don't hesitate to subscribe, that's a good place to learn about new features or discuss enhancements).

    As always, never hesitate when you have a question: the community being rather small, there isn't an abundance of tutorials out there, and orx is rather feature-packed, even if it doesn't show at first. ;)