Hello again
I'm currently learning how to use GLSL shaders and I would like to write a lighting shader, as seen in the lighting tutorial, but with the light position relative to some object in orx instead of the mouse position.
So I'm looking for the inverse of orxRender_GetWorldPosition().
Do I have to implement it myself?
I'm worried it would be hard to simulate AutoScroll and DepthScale correctly on my own.
EDIT:
I've found a topic on this forum stating that implicit parameters are given to the shader for every texture painted to the viewport (I think).
But I don't really understand what
"Each shader will have implicit parameters created for every texture defined for it."
means.
Comments
Hi there!
Well, there's orxRender_GetScreenPosition() that would fit the bill pretty closely.
That being said, this will not match objects using the AutoScroll feature. If you really need an object's screen position in that case, you'll have to listen to the orxRENDER_EVENT_OBJECT_START event, extract the render frame (pstRenderFrame) from its payload, and call something like orxFrame_GetPosition(pstRenderFrame, orxFRAME_SPACE_GLOBAL, &vPosition).
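To make that concrete, here's a minimal handler sketch. It's untested: the payload type orxRENDER_EVENT_PAYLOAD and its pstRenderFrame field are assumptions that may differ between orx versions, and "LightObject" is a hypothetical object name.

```c
#include "orx.h"

static orxVECTOR svLightScreenPos;

static orxSTATUS orxFASTCALL RenderEventHandler(const orxEVENT *_pstEvent)
{
  if(_pstEvent->eID == orxRENDER_EVENT_OBJECT_START)
  {
    orxOBJECT *pstObject = orxOBJECT(_pstEvent->hSender);

    /* "LightObject" is a hypothetical name for the object carrying the light */
    if((pstObject != orxNULL)
    && (orxString_Compare(orxObject_GetName(pstObject), "LightObject") == 0))
    {
      /* Payload layout is an assumption; check the render event payload of your orx version */
      orxRENDER_EVENT_PAYLOAD *pstPayload = (orxRENDER_EVENT_PAYLOAD *)_pstEvent->pstPayload;

      orxFrame_GetPosition(pstPayload->pstRenderFrame, orxFRAME_SPACE_GLOBAL, &svLightScreenPos);
    }
  }

  /* Never return orxSTATUS_FAILURE here, or rendering might be bypassed entirely */
  return orxSTATUS_SUCCESS;
}

/* Registration, e.g. in Init():
   orxEvent_AddHandler(orxEVENT_TYPE_RENDER, RenderEventHandler); */
```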
Those parameters tell you the UV boundaries for your object (they will not be 0 - 1 if you use texture atlases/spritesheets).
Let's say your texture parameter is named texparam; you'll then have implicit parameters named texparam_left, texparam_right, texparam_top and texparam_bottom that define the actual coordinates in texture space (aka UVs) used for your object.
Lemme know if this still isn't clear.
I think I've understood the UV part, but if I try to listen to the orxRENDER_EVENT_OBJECT_START event with this code, I get a black screen and no event.
Also, I don't know how to distinguish between the different objects being rendered.
Would _pstEvent->hRecipient hold the orxObject that gets to be rendered?
Why is the frame returned by this event in screen coordinates, while the frame returned by this function is in world coordinates?
Well, make sure your event handler doesn't return orxSTATUS_FAILURE when it receives the other render events, otherwise you might bypass rendering entirely.
Always returning orxSTATUS_SUCCESS by default should take care of that, unless your problem lies elsewhere.
Yes, the object is also available as _pstEvent->hSender. You can easily access it with orxOBJECT *pstObject = orxOBJECT(_pstEvent->hSender);
You can then compare it to a stored pointer, to its GUID (safer in case of deleted objects), or to its name if you're not interested in one specific instance.
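The GUID comparison could be sketched like this (untested; orxStructure_GetGUID() is assumed to be available as in recent orx versions, and su64LightGUID is a hypothetical variable stored when the object was created):

```c
/* Stored once, e.g. at creation time:
   su64LightGUID = orxStructure_GetGUID(pstLight); */
static orxU64 su64LightGUID;

static orxBOOL IsLightObject(const orxEVENT *_pstEvent)
{
  orxOBJECT *pstObject = orxOBJECT(_pstEvent->hSender);

  /* Comparing GUIDs is safer than comparing raw pointers,
     since a pointer can be recycled after the object is deleted */
  return ((pstObject != orxNULL)
       && (orxStructure_GetGUID(pstObject) == su64LightGUID))
       ? orxTRUE : orxFALSE;
}
```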
The frame used by the object belongs to that object; it contains the object's own coordinates (local, relative to its parent, and global, in the world).
The frame contained in the event is a render frame, i.e. the frame that has been processed by the renderer to display the object on screen. It therefore depends not only on the object but also on the viewport/camera couple that currently "sees" that object, and thus cannot be stored directly within the object (imagine having more than one viewport/camera seeing that object: that would result in different render frames).
Now everything works fine.