Crash when calling orxShader_AddFloatParam on iOS

drp
edited September 2011 in Help request
Hi!

I'm playing around with orx (thanks for the project BTW) and shaders but I'm crashing.

I have a really simple fragment shader that crashes on iOS (it runs perfectly fine on Win32). I believe the offending line is where I invoke orxShader_AddFloatParam(), like this:
// If I don't call this line on iOS, it doesn't crash
// "0" because the object has only one shader, "blur" (see below)
coin->AddShaderFloatParam(0, "blurSize", blur);

which calls:
void Object::AddShaderFloatParam(int shaderIndex, const char* paramName, float paramValue) const
{
    const orxSHADERPOINTER* shaderPointer = reinterpret_cast<const orxSHADERPOINTER *>(_orxObject_GetStructure(orxobject_, orxSTRUCTURE_ID_SHADERPOINTER));
    const orxSHADER* shader = orxShaderPointer_GetShader(shaderPointer, shaderIndex);
    const orxSTATUS shaderResult = orxShader_AddFloatParam(const_cast<orxSHADER*>(shader), paramName, 0, &paramValue);
    assert(orxSTATUS_SUCCESS == shaderResult); // While debugging, the result is success
}

and the shader code is a simple pseudo Gaussian blur:
[blur]
Code = "
uniform float blurSize;

// Implements a simple pseudo Gaussian blur (9-tap horizontal pass)

void main()
{
	vec4 sum = vec4(0.0);

	sum += texture2D(texture, vec2(gl_TexCoord[0].x - 4.0 * blurSize,	gl_TexCoord[0].y))	* 0.05;
	sum += texture2D(texture, vec2(gl_TexCoord[0].x - 3.0 * blurSize,	gl_TexCoord[0].y))	* 0.09;
	sum += texture2D(texture, vec2(gl_TexCoord[0].x - 2.0 * blurSize,	gl_TexCoord[0].y))	* 0.12;
	sum += texture2D(texture, vec2(gl_TexCoord[0].x - blurSize,			gl_TexCoord[0].y))	* 0.15;
	sum += texture2D(texture, vec2(gl_TexCoord[0].x,					gl_TexCoord[0].y))	* 0.16;
	sum += texture2D(texture, vec2(gl_TexCoord[0].x + blurSize,			gl_TexCoord[0].y))	* 0.15;
	sum += texture2D(texture, vec2(gl_TexCoord[0].x + 2.0 * blurSize,	gl_TexCoord[0].y))	* 0.12;
	sum += texture2D(texture, vec2(gl_TexCoord[0].x + 3.0 * blurSize,	gl_TexCoord[0].y))	* 0.09;
	sum += texture2D(texture, vec2(gl_TexCoord[0].x + 4.0 * blurSize,	gl_TexCoord[0].y))	* 0.05;

	gl_FragColor = sum;
}"
ParamList = texture

So I was wondering if you guys can think of any reason why it is crashing on iOS but not on Win32?

Also, I'm getting GL error 0x502 (GL_INVALID_OPERATION).

Cheers,
Diego

Comments

  • edited September 2011
    drp wrote:
    Hi!

    Hi Diego!
    I'm playing around with orx (thanks for the project BTW) and shaders but I'm crashing.

    I have a really simple fragment shader that crashes on iOS (it runs perfectly fine on Win32). I believe the offending line is where I invoke orxShader_AddFloatParam(), like this:

    I'm actually more surprised it works at all on Windows; explanation below. :)
    // If I don't call this line on iOS, it doesn't crash
    // "0" because the object has only one shader, "blur" (see below)
    coin->AddShaderFloatParam(0, "blurSize", blur);
    

    which calls:
    void Object::AddShaderFloatParam(int shaderIndex, const char* paramName, float paramValue) const
    {
        const orxSHADERPOINTER* shaderPointer = reinterpret_cast<const orxSHADERPOINTER *>(_orxObject_GetStructure(orxobject_, orxSTRUCTURE_ID_SHADERPOINTER));
        const orxSHADER* shader = orxShaderPointer_GetShader(shaderPointer, shaderIndex);
        const orxSTATUS shaderResult = orxShader_AddFloatParam(const_cast<orxSHADER*>(shader), paramName, 0, &paramValue);
        assert(orxSTATUS_SUCCESS == shaderResult); // While debugging, the result is success
    }
    

    First of all, and it's a detail, you might want to use the macro orxOBJECT_GET_STRUCTURE() to get components from an object, as it will do a runtime check to make sure the component is the correct one and that you're not casting to an incorrect structure. More generally, for each orxSTRUCTURE child there's a helper macro that will do the cast along with some runtime checking in debug (for example, orxSTRUCTURE_ID_SHADERPOINTER -> orxSHADERPOINTER() is the casting macro). In release, no runtime check is done, so there's no cost penalty for using them.
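
    For instance, a quick sketch of what that looks like (assuming orxobject_ is your orxOBJECT*; the variable name is mine):
    // Typed retrieval, with a runtime check in debug builds
    orxSHADERPOINTER *pstShaderPointer = orxOBJECT_GET_STRUCTURE(orxobject_, SHADERPOINTER);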

    The orxShader_Add*Param() functions need to be called before a shader is actually compiled (as stated in their doxygen/header comments). You generally don't have to bother, as it's done automatically for you when parameters are defined in config. Calling them manually can be useful if you want to generate new shaders on the fly at runtime with modified code, but even in that case I'd recommend writing a config entry instead and creating the shader from config, as you'll then be sure no step is forgotten or misused.
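
    For reference, creating a shader from config is then a one-liner (a sketch, using the "blur" section below as the config ID):
    // All params declared in the config section are added before the shader gets compiled
    orxSHADER *pstShader = orxShader_CreateFromConfig("blur");

    Let's go back to our case.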
    and the shader code is a simple pseudo Gaussian blur:
    [blur]
    Code = "
    uniform float blurSize;
    
    // Implements a simple pseudo Gaussian blur (9-tap horizontal pass)
    
    void main()
    {
    	vec4 sum = vec4(0.0);
    
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x - 4.0 * blurSize,	gl_TexCoord[0].y))	* 0.05;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x - 3.0 * blurSize,	gl_TexCoord[0].y))	* 0.09;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x - 2.0 * blurSize,	gl_TexCoord[0].y))	* 0.12;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x - blurSize,			gl_TexCoord[0].y))	* 0.15;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x,					gl_TexCoord[0].y))	* 0.16;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x + blurSize,			gl_TexCoord[0].y))	* 0.15;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x + 2.0 * blurSize,	gl_TexCoord[0].y))	* 0.12;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x + 3.0 * blurSize,	gl_TexCoord[0].y))	* 0.09;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x + 4.0 * blurSize,	gl_TexCoord[0].y))	* 0.05;
    
    	gl_FragColor = sum;
    }"
    ParamList = texture
    

    So I was wondering if you guys can think of any reason why it is crashing on iOS but not on Win32?

    Also, I'm getting GL error 0x502 (GL_INVALID_OPERATION).

    Cheers,
    Diego

    It's probably not crashing on Windows because the first uniform param likely gets 0 for its ID there, but that's not mandated by the spec, and I'm pretty sure the OpenGL ES implementations on iOS don't use that scheme.
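
    To illustrate, here's roughly what this boils down to at the GL level (a sketch of generic GL behavior, not orx's actual code; uiProgram and fBlurSize are hypothetical):
    // A uniform that was never compiled into the program isn't "active":
    GLint iLocation = glGetUniformLocation(uiProgram, "blurSize"); // returns -1
    // Per the spec, glUniform* with location -1 is silently ignored, but a stale or
    // out-of-range location raises GL_INVALID_OPERATION, ie. the 0x502 you're seeing
    glUniform1f(iLocation, fBlurSize);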

    If you want to use a shader in orx, don't declare the parameters in the body of the code; define them only in the ParamList. Here's what you'd need to write:

    Config:
    
    [blur]
    Code = "
    // Implements a simple pseudo Gaussian blur (9-tap horizontal pass)
    void main()
    {
    	vec4 sum = vec4(0.0);
    
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x - 4.0 * blurSize,	gl_TexCoord[0].y))	* 0.05;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x - 3.0 * blurSize,	gl_TexCoord[0].y))	* 0.09;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x - 2.0 * blurSize,	gl_TexCoord[0].y))	* 0.12;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x - blurSize,			gl_TexCoord[0].y))	* 0.15;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x,					gl_TexCoord[0].y))	* 0.16;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x + blurSize,			gl_TexCoord[0].y))	* 0.15;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x + 2.0 * blurSize,	gl_TexCoord[0].y))	* 0.12;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x + 3.0 * blurSize,	gl_TexCoord[0].y))	* 0.09;
    	sum += texture2D(texture, vec2(gl_TexCoord[0].x + 4.0 * blurSize,	gl_TexCoord[0].y))	* 0.05;
    
    	gl_FragColor = sum;
    }"
    ParamList = texture # blurSize
    blurSize  = 2.0
    

    In this case, both texture and blurSize will be defined for you and linked back to orx's core. We specified 2.0 as the default value for blurSize, so orx knows it's a float. If you need a vec3, simply use a vector as the default value; if there's no default value, or the default is the name of a texture, the parameter will be considered a texture.
    The keyword 'screen' can be used to get the current content of the screen from the framebuffer. It's costly, as the whole GPU pipeline needs to be flushed and a sync has to happen between CPU and GPU, not to mention the cost of transferring the actual data back from the GPU, so it's better not to overuse it. But it does exist.
    If a default value isn't provided, as I said, the parameter will be considered a texture, and its value will depend on the type of structure carrying the shader.
    If it's an object, the current animation frame's texture will be used (or the graphic's, if there's no running animation). If it's a viewport that holds the shader, the texture associated with the viewport will be used (ie. the screen, unless you use the viewport to do offscreen rendering to a texture).
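
    A quick illustration of those rules (hypothetical names):
    [MyShader]
    Code      = "[...]"
    ParamList = offset # intensity # diffuse # source
    offset    = (1, 2, 3) ; <= vec3, from the vector default
    intensity = 0.5       ; <= float, from the float default
    diffuse   = MyTexture ; <= sampler2D, bound to the texture named MyTexture
    ; source has no default value: it's a sampler2D bound to the carrier's own texture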

    If you don't need to modify the value of a shader parameter at runtime, simply add the config property UseCustomParam = false; that'll remove unnecessary polling code and make the whole thing slightly faster (probably unnoticeably, but heh...).
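
    That's a one-line addition to the shader's section:
    [blur]
    UseCustomParam = false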

    Now, how do you modify the value at runtime? It's easy, simply listen to an event!

    The event's type is orxEVENT_TYPE_SHADER and the ID is orxSHADER_EVENT_SET_PARAM. These events are fired for each parameter of a shader every time the shader is about to be used.
    The payload contains the parameter's default value, and it's up to you to replace it with a new one.

    In your case, if you don't want to change the value of blurSize, there's no need to listen to that event. Otherwise, simply do:

    Code:
    // Somewhere in your init
    orxEvent_AddHandler(orxEVENT_TYPE_SHADER, ShaderEventHandler);
    
    [...]
    
    orxSTATUS orxFASTCALL ShaderEventHandler(const orxEVENT *_pstEvent)
    {
      orxSHADER_EVENT_PAYLOAD *pstPayload;
    
      // Checks
      orxASSERT(_pstEvent->eID == orxSHADER_EVENT_SET_PARAM);
    
      // Gets payload
      pstPayload = (orxSHADER_EVENT_PAYLOAD *)_pstEvent->pstPayload;
    
      // Our param?
      if(!orxString_Compare("blurSize", pstPayload->zParamName))
      {
        // Updates its value
        pstPayload->fValue = MyChangingBlurSize;
      }
    
      return orxSTATUS_SUCCESS;
    }
    

    As I wrote the code directly in the forum, it might contain syntax errors and not compile; just let me know. =)

    Also, I'll update the orxShader_Add*Param() functions to return orxSTATUS_FAILURE if the shader has already been compiled; that should help in the future.

    If anything's unclear or you need any other info, feel free to ask!

    Cheers!
  • edited September 2011
    Oh I forgot to mention the parameter arrays!

    If you give a list as the default value of a shader parameter, it'll generate an array whose size is the number of elements in your list, instead of a regular scalar variable. Example:
    [MyShader]
    Code = "[...]"
    
    ParamList = floats # textures # vectors
    
    vectors  = (0, 0, 0) # (1, 1, 1)
    floats   = 3 # 4 # 5.5 # 6
    textures = screen # MyFirstTexture # MyOtherTexture
    

    Will automatically generate those uniform variables for the shader:
    uniform vec3      vectors[2];
    uniform float     floats[4];
    uniform sampler2D textures[3];
    

    That's all. :)

    EDIT:

    If you need a random value at init time instead of declaring an array, simply use an indirection:
    [MyShader]
    Code      = "[...]"
    ParamList = floats # float
    
    floats      = 3 # 4 # 5 ; <= This generates an array of size 3 whose default value is {3, 4, 5}
    float       = @MyShader.RandomFloat ; <= This generates a single float param whose default value is randomly 3, 4 or 5 (see below)
    RandomFloat = 3 # 4 # 5
    
  • drp
    edited September 2011
    You are the man, thanks a lot for the complete answer! I will give it a shot and let you know! :laugh:
  • edited September 2011
    My pleasure! I admit I didn't do a good job of explaining things in the tutorials, so I try to be as responsive as possible on the forum for people who have questions. :)

    I think orx's biggest shortcoming is definitely not a lack of features but a lack of detailed docs. It's getting better thanks to those who have contributed to the wiki, though. :)

    Let me know if you encounter any other issues!
  • drp
    edited September 2011
    Thanks dude, I confirm it worked!
  • edited September 2011
    Excellent, thanks for letting me know!