[Object]
Graphic = OnePixel
Position = (0., 0., 0.2)
Scale = (600., 600., 1)
ShaderList = HexShader
[OnePixel]
Texture = pixel
[HexShader]
ParamList = mousePoint
UseCustomParam = true
mousePoint = (0.5, 0.5, 0.)
Code = "
void main()
{
  vec3 p = gl_TexCoord[0].xyz; // current point (normalized texture coordinates)
  // Lights up a small white square around the position passed in through mousePoint
  if (abs(mousePoint.x - p.x) < 0.005 && abs(mousePoint.y - p.y) < 0.005) {
    gl_FragColor = vec4(1., 1., 1., 1.);
  } else {
    discard;
  }
}
"
// shaderRes is the result in the shader coordinates
// point is the screen coordinate to translate
// origin is the left bottom corner of the texture with (0,0) texture coordinate
// size is the size of the texture
orxVECTOR* hemScreenIntoShader(orxVECTOR *shaderRes, orxVECTOR *point, orxVECTOR *origin, orxVECTOR *size) {
  orxVECTOR pointToOrigin;

  /* Vector from the texture origin to the point, with Y flipped (screen Y grows downwards) */
  pointToOrigin.fX = point->fX - origin->fX;
  pointToOrigin.fY = origin->fY - point->fY;
  pointToOrigin.fZ = origin->fZ;

  /* Normalizes it against the texture size to get shader (texture) coordinates */
  orxVector_Div(shaderRes, &pointToOrigin, size);
  //orxLOG("shaderRes: %f, %f.", shaderRes->fX, shaderRes->fY);

  return shaderRes;
}
The actual origin calculation is a bit boring:
/* Creates object */
grid = orxObject_CreateFromConfig("Object");
orxObject_GetWorldPosition(grid, &gridPos);

/* World-space size = object size * scale (component-wise) */
orxVECTOR size, scale;
orxObject_GetSize(grid, &size);
orxObject_GetScale(grid, &scale);
orxVector_Mul(&gridSize, &scale, &size);

/* Shader origin = bottom-left corner of the object, i.e. the point with texture coordinate (0,0) */
orxVector_Set(&shaderOrigin, gridPos.fX, gridPos.fY + gridSize.fY, gridPos.fZ);

orxLOG("grid size: %f, %f, %f", gridSize.fX, gridSize.fY, gridSize.fZ);
orxLOG("shader origin: %f, %f, %f", shaderOrigin.fX, shaderOrigin.fY, shaderOrigin.fZ);
Comments
You will observe that the coordinate passed into the shader is (0.9, 0.9), but it is displayed at the very top-right corner of the application window, so I can't quite understand why it is 0.9 instead of 1.0.
My best guess is that view transforms are responsible for changing the coordinate space.
I believe you're mistaken about what gl_TexCoord[0] actually is.
It's the normalized coordinate of your texel within your texture, which normally doesn't map to any screen coordinate.
What you want is the coordinates of your fragment (ie. pixel) in your display/window. This is given by the internal variable gl_FragCoord.
Also notice that gl_FragCoord isn't normalized.
In your case, you can simply flip the Y coordinate of your mouse position (something like MousePos.y = ScreenHeight - MousePos.y) and test it against gl_FragCoord in your shader.
There shouldn't be any need of any other calculations.
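For instance, the shader body could become something like this (a rough sketch; the 3-pixel threshold is arbitrary, and mousePoint is now assumed to be passed in pixels with its Y already flipped on the C side):

void main()
{
  // gl_FragCoord is in window pixels, with the origin at the bottom-left
  if (abs(mousePoint.x - gl_FragCoord.x) < 3. && abs(mousePoint.y - gl_FragCoord.y) < 3.) {
    gl_FragColor = vec4(1., 1., 1., 1.);
  } else {
    discard;
  }
}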
As a side note (but not really relevant to your case), whenever you have an array of samplers (aka textures) in your shader, orx will create arrays for _top, _left, _bottom and _right with the same name and size.
Ex: MyTexture[3] -> MyTexture_left[3], etc...
Here is my little test project, .c and .ini files, to demo mouse tracking with shaders. It simply displays a small red rectangle around the mouse pointer.
INI:
C:
One small remark though: in order to get the ScreenHeight parameter, you query the config system.
The issue with that is: if you change your resolution at runtime (by calling orxDisplay_SetVideoMode()), the config value won't change and will only reflect the initial value.
You might want to call orxDisplay_GetScreenSize() to query the current resolution instead.
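Something along these lines (a quick sketch, tying it to the Y flip mentioned above):

orxFLOAT screenWidth, screenHeight;
orxVECTOR mousePos;

/* Queries the current resolution rather than the (possibly stale) config value */
orxDisplay_GetScreenSize(&screenWidth, &screenHeight);

/* Flips the mouse Y so it can be compared against gl_FragCoord */
orxMouse_GetPosition(&mousePos);
mousePos.fY = screenHeight - mousePos.fY;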