I've been studying how to work with shaders after getting recommendations to use tiling solutions.
Below is a small test configuration that includes a shader. I am sure some of the questions will show that I am a complete newbie in the graphics area.
[Object]
Graphic = Player
Position = (16.0, 3.0, 0.2)
ShaderList = TileShader
[Player]
Texture = data/player.png;
[TileShader]
ParamList = tileAtlas # atlasIndex; # atlasScale
tileAtlas = data/texture_atlas.png;
atlasIndex = (1.0, 1.0);
;atlasScale = (0.5, 0.5);
Code = "
void main(void) {
vec2 atlasScale = vec2(0.5, 0.5); // It should be an external variable like atlasIndex
gl_FragColor=texture2D(tileAtlas, atlasScale * atlasIndex + gl_TexCoord[0].xy * atlasScale);
}"
The player.png file is just a 16x16 green rectangle. texture_atlas.png is a collection of 4 images, each 16x16, arranged in a 2x2 array, so the png itself is 32x32. The atlasScale value of 0.5 represents the compression ratio of 2 images per row and per column, so the valid atlasIndex values are 0.0 and 1.0 on the X and Y axes.
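If my math is right, with atlasScale = (0.5, 0.5) and atlasIndex = (1.0, 1.0) the sampled coordinates run from 0.5 * 1.0 + 0.0 * 0.5 = 0.5 up to 0.5 * 1.0 + 1.0 * 0.5 = 1.0 on both axes, i.e. exactly one 16x16 quadrant of the atlas.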
Questions
Q: Is the vector data type always a 3D vector in orx?
If the following line does not use the .xy swizzle, the code generates this error message:
Couldn't compile fragment shader:
ERROR: 0:13: '*' does not operate on 'vec2' and 'vec3'
atlasScale * atlasIndex + gl_TexCoord[0].xy * atlasScale);
It appears that orx always generates 3D vectors. I guess it simplifies the number of data types to deal with.
Q: Shader coordinate system appears to be reversed compared to screen coordinates
That probably shows that I am an absolute beginner
The point of origin (0,0) for the window screen seems to be the top left corner. At the same time, the shader's (or texture's) coordinate system seems to start at the bottom left corner in the following line:
gl_FragColor=texture2D(tileAtlas, atlasScale * atlasIndex + gl_TexCoord[0].xy * atlasScale);
This one really got me confused.
Q: Shaders don't work if Object has no texture defined
The shader in my example stops working if my object does not define any texture at all. It makes sense, because the Object then has no size defined. The orx Graphic has a TextureSize attribute, but it is only used together with TextureCorner and a defined texture.
It seems pointless that I have to define a texture for the shader to work when I don't plan on using its data. I am guessing it is necessary and is part of the processing pipeline at the OpenGL level.
Q: Impact of object coordinates on tile map resolution
If each orx object is represented in the shader by coordinates from 0.0 to 1.0, then I would have to configure each object with its own individual shader. That seems to defeat the purpose of a tile map.
So I would have to build a single object with indexes into the atlas texture. That will work, but it seems to complicate per-tile effects such as selection. I am not sure how to pass a collection of indexes from the orx configuration into a shader.
I must be missing something here.
Comments
Don't worry about that, there are no stupid questions, and someone else might benefit later from the questions you asked today!
First of all, and a bit unrelated, I recommend reading about the resource system that has been added to orx recently. You can find a tutorial here, and I plan on covering it on the wiki as well as on the orx-dev google group.
Basically you can define the places where your resources (sound, texture, config, or your own files) will be searched for, in order of priority. That makes it easy to patch a game later on (simply add a new place to look for resources at a higher priority) or to support multiple platforms with different variations of your assets (by modifying the places).
In this case, you could have something along these lines (adjust the storage list to your actual layout):
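[Resource]
Texture = data # .

[Player]
Texture = player.png

With this, the Graphic refers to player.png by name only and orx looks it up in the listed storages, in order of priority.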
Yes, it is.
As you noted yourself, it's simply for the convenience of having a single type.
Only 3 types of shader parameters are allowed: vectors (3 components), floats and samplers (i.e. textures). By supporting only those 3 types, I can easily infer which type each defined parameter is without having to ask the user to actually specify the type.
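For illustration, a shader section could declare one parameter of each type (the names here are made up); the type is simply deduced from the shape of the value:

[MyShader]
ParamList = myFloat # myVector # myTexture
myFloat = 0.5
myVector = (1.0, 0.0, 0.0)
myTexture = data/texture_atlas.png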
Yes, GLSL isn't kind on mixing different types, and some GLSL compilers are stricter than others. So, just in case, be as pedantic as possible by always specifying correct "types" and swizzles.
That is entirely true. At some point I wanted to parse the given shaders and automatically reverse the coordinates of samplers behind the scenes, but that's a lot of work for a very small improvement. Now that you know how it works, you shouldn't have this problem anymore.
That's true. As you noted, without a Graphic/Texture there's no size, and even if there were a size, we don't process objects that don't have any visuals in the rendering pipeline, for efficiency's sake.
You don't actually need to create a 16x16 texture just to have your shader displayed. You can use the internal texture named 'pixel' and add a scale to your object:
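For example, reusing your section names (the Scale value is only an assumption, there to turn the 1x1 'pixel' texture into a 16x16 quad):

[Player]
Texture = pixel

[Object]
Graphic = Player
Position = (16.0, 3.0, 0.2)
Scale = (16, 16, 1)
ShaderList = TileShader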
Also, by default a shader will use its owner's Texture for its undefined parameters:
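For instance (a sketch with made-up names), a texture parameter that appears in ParamList but is given no value will be bound to its owner's texture:

[MyShader]
ParamList = ownerTexture # atlasIndex
atlasIndex = (1.0, 1.0)
; ownerTexture has no value here, so the owner's own Texture will be used for it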
I think I lost you on that one, to be honest. At first I thought your shader was for your tiled background (i.e. probably a unique object), but you're using it for your player, so I'm not sure what the intent is.
Anyway, there are a few things you might not know about how shaders work in orx, and maybe one of those pieces of info will help in your current situation:
1 - The original owner's texture coordinates will be given to the shader behind the scenes. For example, if you have:
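Something along these lines (reconstructed so that it matches the numbers below; the section and parameter names are placeholders):

[MyGraphic]
Texture = myTexture.png
TextureCorner = (16, 16, 0)
TextureSize = (8, 8, 0)

[MyShader]
ParamList = myTexture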
Then orx will generate extra parameters behind the scenes for your texture. They'll be called <NameOfYourTexture>_left, _top, _right and _bottom. In this case, we'll have:
myTexture_left, myTexture_top, myTexture_right and myTexture_bottom. Those will contain normalized values matching the original coordinates defined in the Graphic. Let's say myTexture is 32x32, we'd then get:
myTexture_left = 16 out of 32 => 0.5
myTexture_top = 32 - 16 out of 32 => 0.5
myTexture_right = 16 + 8 out of 32 => 0.75
myTexture_bottom = 32 - (16 + 8) out of 32 => 0.25 (it has been inverted for you as you can see, same for _top, but in this example it doesn't show)
2 - You can define shader parameters on the fly if you specify the config property UseCustomParam = true in your shader.
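In config, that's just one extra property on your shader (values copied from your test config):

[TileShader]
ParamList = tileAtlas # atlasIndex
tileAtlas = data/texture_atlas.png
atlasIndex = (1.0, 1.0)
UseCustomParam = true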
When UseCustomParam is set, an event of type orxEVENT_TYPE_SHADER with the ID orxSHADER_EVENT_SET_PARAM will be fired for every parameter, and its payload will contain the name of the param and its default value. In your event handler you can then modify that value if need be, and it'll get used by the shader.
However, when UseCustomParam is defined, those objects can't be batched at rendering time, so it might be a bit more expensive (it wouldn't really matter unless you have hundreds of them, but it's good to know). See my test playground code, orxBounce, for an example of how to set those shader parameters on the fly.
3 - You can define parameters as arrays.
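For example, a parameter given a list of values in config (the name is made up):

[MyShader]
ParamList = offsetList
offsetList = (0.0, 0.0, 0.0) # (0.5, 0.0, 0.0) # (0.0, 0.5, 0.0)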
That will actually create a shader variable of type:
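Something like this on the GLSL side, assuming the 3-element list above:

uniform vec3 offsetList[3];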
Of course those values are like any other values and can be modified on the fly if needed (you'll get one event call per element of an array). Please note that arrays have a fixed size, defined by the number of elements in the config.
4 - There are a couple of keywords for the parameters:
Lemme know if none of this is useful for you and you still need more information.
Cheers!
Well, if I had to do a tiled map via shaders, I'd store my grids in textures and would only provide the shader with these params: the grid of the current level, the texture atlas, the frame coordinates in grid space and the frame size.
All the calculations would then be done in the shader.
Only the frame coordinates would change regularly.
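For what it's worth, the parameter list of such a shader could look roughly like this (every name and value here is made up):

[TileMapShader]
ParamList = levelGrid # tileAtlas # frameCoords # frameSize
levelGrid = data/level_grid.png
tileAtlas = data/texture_atlas.png
frameCoords = (0.0, 0.0, 0.0)
frameSize = (20.0, 15.0, 0.0)
UseCustomParam = true

With UseCustomParam set, frameCoords could then be updated every frame from an event handler while everything else stays static.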