// If I don't call this line on iOS, it doesn't crash
// "0" because the object has only one shader, "blur" (see below)
coin->AddShaderFloatParam(0, "blurSize", blur);
void Object::AddShaderFloatParam(int shaderIndex, const char* paramName, float paramValue) const
{
    const orxSHADERPOINTER* shaderPointer = reinterpret_cast<const orxSHADERPOINTER*>(_orxObject_GetStructure(orxobject_, orxSTRUCTURE_ID_SHADERPOINTER));
    const orxSHADER* shader = orxShaderPointer_GetShader(shaderPointer, shaderIndex);
    const orxSTATUS shaderResult = orxShader_AddFloatParam(const_cast<orxSHADER*>(shader), paramName, 0, &paramValue);
    assert(orxSTATUS_SUCCESS == shaderResult); // While debugging, the result is success
}
[blur]
Code = "
uniform float blurSize;
// Implements a simple Gaussian blur (3x3 kernel)
void main()
{
    vec4 sum = vec4(0.0);
    sum += texture2D(texture, vec2(gl_TexCoord[0].x - 4.0 * blurSize, gl_TexCoord[0].y)) * 0.05;
    sum += texture2D(texture, vec2(gl_TexCoord[0].x - 3.0 * blurSize, gl_TexCoord[0].y)) * 0.09;
    sum += texture2D(texture, vec2(gl_TexCoord[0].x - 2.0 * blurSize, gl_TexCoord[0].y)) * 0.12;
    sum += texture2D(texture, vec2(gl_TexCoord[0].x - blurSize, gl_TexCoord[0].y)) * 0.15;
    sum += texture2D(texture, vec2(gl_TexCoord[0].x, gl_TexCoord[0].y)) * 0.16;
    sum += texture2D(texture, vec2(gl_TexCoord[0].x + blurSize, gl_TexCoord[0].y)) * 0.15;
    sum += texture2D(texture, vec2(gl_TexCoord[0].x + 2.0 * blurSize, gl_TexCoord[0].y)) * 0.12;
    sum += texture2D(texture, vec2(gl_TexCoord[0].x + 3.0 * blurSize, gl_TexCoord[0].y)) * 0.09;
    sum += texture2D(texture, vec2(gl_TexCoord[0].x + 4.0 * blurSize, gl_TexCoord[0].y)) * 0.05;
    gl_FragColor = sum;
}"
ParamList = texture
Comments
Hi Diego!
I'm actually more surprised it works at all on Windows; explanations below.
First of all, and it's a detail, you might want to use the macro orxOBJECT_GET_STRUCTURE() to get components from an object, as it will do a runtime check to make sure the component is the correct one and that you're not casting to an incorrect structure. More generally, for each orxSTRUCTURE child there's a helper macro that will cast and do some runtime checks in debug (for example, for orxSTRUCTURE_ID_SHADERPOINTER, orxSHADERPOINTER() is the casting macro). In release no runtime check is done, so there's no cost penalty for using them.
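For instance, the lookup in your wrapper could be written like this (a quick sketch reusing your orxobject_ and shaderIndex):

// Same lookup, but the macro does the cast and a debug-time type check for you
orxSHADERPOINTER* shaderPointer = orxOBJECT_GET_STRUCTURE(orxobject_, SHADERPOINTER);
const orxSHADER* shader = orxShaderPointer_GetShader(shaderPointer, shaderIndex);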
The orxShader_Add*Param() functions need to be called before a shader is actually compiled (as stated in their doxygen doc/header comments). You generally don't have to bother, as it's done automatically for you when the parameters are defined in config. It can be useful if you want to generate new shaders on the fly at runtime with modified code, but even in that case I'd recommend writing a config entry and creating the shader from config, as you'll be sure no step gets forgotten or misused. Let's go back to our case.
It's probably not crashing on Windows because the first uniform parameter probably has 0 as its ID, but that's not mandatory, and I'm pretty sure the OpenGL ES implementations on iOS don't use that scheme.
If you want to use a shader in orx, don't define the parameters in the body of the code, only in the ParamList. Here's what you'd need to write:
Config:
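Something along these lines (a sketch of what's described below; your original taps stay the same, only the uniform declaration moves out of the code body):

[blur]
Code = "
// texture and blurSize are declared and bound by orx from ParamList, not in the code body
void main()
{
    vec4 sum = vec4(0.0);
    sum += texture2D(texture, vec2(gl_TexCoord[0].x - blurSize, gl_TexCoord[0].y)) * 0.15;
    sum += texture2D(texture, vec2(gl_TexCoord[0].x, gl_TexCoord[0].y)) * 0.16;
    sum += texture2D(texture, vec2(gl_TexCoord[0].x + blurSize, gl_TexCoord[0].y)) * 0.15;
    // ... the other taps from your original shader, unchanged ...
    gl_FragColor = sum;
}"
ParamList = texture # blurSize
blurSize = 2.0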
In this case, both texture and blurSize will be defined for you and linked back to orx's core. We specified 2.0 as the default value for blurSize, so orx knows it's a float. If you need a vec3, simply use a vector as the default value; if there's no default value, or if the default value is the name of a texture, the parameter will be considered a texture.
The keyword 'screen' is used to get the current content of the screen from the framebuffer. It's costly, as the whole GPU pipeline needs to be flushed and a sync has to happen between CPU and GPU, not to mention the cost of transferring the actual data back from the GPU, so it's better not to abuse it. But it does exist.
If a default value isn't provided, as I said, the parameter will be considered to be a texture, and its value will depend on what type of structure is carrying the shader.
If it's an object, the current animation frame's texture will be used (or the graphic's one if there's no running animation). If it's a viewport that holds the shader, the texture associated with the viewport will be used (i.e. the screen, unless you use the viewport to do offscreen rendering to a texture).
If you don't need to modify the value of the shader parameter at runtime, simply add the config property UseCustomParam = false; that'll remove unnecessary polling code and make the whole thing slightly faster (probably unnoticeably, but heh...).
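In the section above, that's just one extra line:

[blur]
; ... Code, ParamList and blurSize as before ...
UseCustomParam = false ; removes the per-parameter polling, no runtime overrides for this shader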
Now, how do you modify the value at runtime? It's easy, simply listen to an event!
The event's type is orxEVENT_TYPE_SHADER and the ID is orxSHADER_EVENT_SET_PARAM. Those events will be fired for each parameter of a shader every time the shader is about to be used.
The payload will contain the parameter's default value and it's up to you to replace it with a new one.
In your case if you don't want to change the value of blurSize, no need to listen to that event. Otherwise, simply do:
Code:
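A minimal sketch of such a handler (it assumes the usual orxSHADER_EVENT_PAYLOAD fields zParamName and fValue, and the 0.01 value is just a placeholder):

// Fired once per shader parameter every time the shader is about to be used
orxSTATUS orxFASTCALL ShaderEventHandler(const orxEVENT *_pstEvent)
{
    if(_pstEvent->eID == orxSHADER_EVENT_SET_PARAM)
    {
        orxSHADER_EVENT_PAYLOAD *pstPayload = (orxSHADER_EVENT_PAYLOAD *)_pstEvent->pstPayload;

        // Only override the parameter we're interested in
        if(orxString_Compare(pstPayload->zParamName, "blurSize") == 0)
        {
            pstPayload->fValue = orx2F(0.01f); // replace the default with whatever value you need
        }
    }

    return orxSTATUS_SUCCESS;
}

// Somewhere during init:
orxEvent_AddHandler(orxEVENT_TYPE_SHADER, ShaderEventHandler);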
As I wrote the code directly in the forum, it might contain some syntax errors and not compile; just let me know if so.
Also, I'll update the orxShader_Add*Param() functions to return orxSTATUS_FAILURE if the shader has already been compiled; that should help in the future.
If anything's unclear or you need any other info, feel free to ask!
Cheers!
If you give a list as the default value of a shader parameter, it'll generate an array whose size is the number of elements in your list, instead of a regular simple variable. Example:
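For instance (the parameter name here is made up):

[blur]
ParamList = texture # offsets
offsets = 0.1 # 0.2 # 0.3 ; three elements -> an array of size 3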
Will automatically generate those uniform variables for the shader:
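Roughly this, alongside the usual sampler for the texture parameter:

uniform sampler2D texture;
uniform float offsets[3];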
That's all.
EDIT:
If you need random values at init instead of declaring an array, simply use an indirection:
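Something like this (a sketch; the section and key names are made up, and it relies on @ indirection together with orx picking a random element when a list is read as a single value):

[blur]
ParamList = texture # blurSize
blurSize = @BlurValues.Random ; indirection, resolved when the value is read

[BlurValues]
Random = 0.05 # 0.1 # 0.2 ; a single random element gets picked, no array is generated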
I think orx's biggest shortcoming is definitely not a lack of features but a lack of detailed docs. It's getting better thanks to those who have contributed to the wiki, though.
Let me know if you encounter any other issues!