In my game there is a GUI list with a slider bar; I'm not sure of the proper way to describe it, but you can see the image below.
Right now the list contains four items, as in the image, with a slider bar on the right of the list.
Users can drag those items upward and downward in the list; the effect is similar to what you get in some iPhone applications like Gmail.
Therefore, I need two viewports to draw them: one is the whole game viewport, the same size as the screen, while the other is only for the list.
But with two viewports, the objects will be drawn twice. I once checked in the render event whether each object needed to be drawn, but that feels complicated. Is it possible to add this feature to the system?
Something like:
[ObjectTemplate]
OwnerViewport = ....
If OwnerViewport matches the current viewport, the object gets drawn; otherwise it doesn't.
Another question: if part of an object is outside the viewport but still on the screen, the system still shows it. How do I keep it from being shown?
Comments
In the viewport for the list, you can use a different camera pointing at a completely different part of the world space. That part of the world space only contains your list items, so only those will get displayed. An easy way of doing that without touching the (x, y) coordinates is by playing with the z values. You can say that your list items are in space with a z between -1 and 0 while your world has objects with a z between 0 and 1. The list camera will have its frustum look only at the negative Zs while the world camera will look only at the positive Zs.
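For illustration, here's a hedged config sketch of that setup (section names, sizes and positions are invented, not from the original post; each camera sees world z in the range [Position.z + FrustumNear, Position.z + FrustumFar]):

```ini
; World camera: sits at z = -1, so it sees world z in [0, 1]
[WorldCamera]
FrustumWidth  = 800
FrustumHeight = 600
FrustumNear   = 1.0
FrustumFar    = 2.0
Position      = (0.0, 0.0, -1.0)

; List camera: sits at z = -2, so it sees world z in [-1, 0]
[ListCamera]
FrustumWidth  = 200
FrustumHeight = 400
FrustumNear   = 1.0
FrustumFar    = 2.0
Position      = (0.0, 0.0, -2.0)

[GameViewport]
Camera = WorldCamera

[ListViewport]
Camera       = ListCamera
RelativeSize = (0.25, 0.66, 0.0)
```

With this split, the same (x, y) coordinates can be reused for list items and world objects; only the z value decides which camera (and thus which viewport) picks them up.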
That will give you the same result as the OwnerViewport and allows you to decide whether some objects need to be drawn by more than one viewport or not. No need to listen to the render event either.
That being said, why use two viewports? You can easily achieve the same thing by listening to the render start event of your first list object (which can be a dummy that doesn't display anything and is only there so you can get the event if need be) and then calling something like:
orxDisplay_SetBitmapClipping(orxDisplay_GetScreenBitmap(), ...) with the correct coordinates. That will make sure that no list item outside these coordinates gets displayed.
In this case you can even give all your list items a common parent (your slider?) that you simply move vertically to the right place and let orx do the rest.
When your last list item has been displayed, you can reset the bitmap clipping so that closer objects don't get cut.
I'm not sure I get the last question. Do you mean that if there's a viewport smaller than the screen and an object is partially visible in this viewport, then the whole object gets displayed anyway? That shouldn't happen, as we're using the viewport coordinates to call the SetBitmapClipping function, so that a glScissor call is made behind the scenes. If it doesn't work this way, it's a bug.
I read a thread about cameras at https://forum.orx-project.org/discussion/1215
and tested the attachment tdomhan posted. I am wondering why you designed UseParentSpace, which translates coordinates into a (0, 1) box (if I haven't misunderstood it). I think the pixel setting is good enough, but I guess you must have your own reasons. Why and where do we need to use parent space, and how does it perform? Thank you.
But how do I set two cameras with one viewport? Besides, if I set ParentCamera, the list object will follow the camera's movement, when in fact it shouldn't move.
Could you give me an example? The simpler the better.
There's no performance impact, as the relative coordinates are translated to actual pixel ones upon creation.
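As a hedged illustration of what parent space means (object names and sizes invented): with UseParentSpace enabled, a child's position is expressed in its parent's (0, 1) box instead of pixels, and gets converted once at creation time:

```ini
[Panel]
; A 200x400 pixel container
Size      = (200, 400, 0)
ChildList = Item

[Item]
UseParentSpace = true
; Halfway across, a quarter of the way down the parent:
; resolved to a (100, 100) pixel offset when the object is created
Position = (0.5, 0.25, 0.0)
```

The benefit is that the same child layout keeps working if the parent is resized, which is handy for GUI elements; if you only ever work at a fixed resolution, plain pixel positions are indeed just as good.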
Using two cameras was only an option when using two viewports.
I meant using a virtual "slider" as the parent of all the list items, for example. You simply move that dummy parent object up and down and let the clipping display only the items you want. No need to move the list items individually.
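A hedged sketch of that setup (all names and positions invented): the list items are children of one dummy object, and scrolling is just moving the dummy:

```ini
[Slider]
; Dummy parent: no Graphic, so it displays nothing itself
ChildList = Item1 # Item2 # Item3 # Item4

[Item1]
Graphic  = ItemGraphic
Position = (0.0, 0.0, 0.0)

[Item2]
Graphic  = ItemGraphic
Position = (0.0, 50.0, 0.0)
```

Moving Slider vertically (e.g. with orxObject_SetPosition()) then moves all the items at once, and the bitmap clipping hides whatever ends up outside the list area.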
I am developing the game and, at the same time, building my game structure and GUI framework for orx. I may release it after the list is finished.