grid mining

Hi, one thing I've always wanted to explore is a grid-like mining/crafting style game, along the lines of Terraria. I'm curious if there are any general recommendations on how to approach the world grid, and maybe some orx-specific hints as well! :smile: I know it isn't a simple subject, so I'm not expecting instructions on how to do it, just looking for ideas.

I don't think it is feasible to generate an orxOBJECT for each block in the grid, for performance reasons. I was thinking of keeping my world in a large 2D array of bytes, with the values representing what type of block is at that location (1 = dirt, 2 = grass, 3 = rock, etc.). For rendering, I'd loop through the visible portion of the array (dependent on player position) and render rectangles according to the values in the array. I actually did something like that a few years back with my own home-grown stuff. Any hint as to how I could render out textures in orx directly like that?
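
For illustration, here is roughly what I have in mind (hypothetical names, nothing orx-specific): the world as a flat byte array, plus a helper that computes the window of tiles to draw around the camera.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch only: one byte per tile (0 = air, 1 = dirt,
// 2 = grass, 3 = rock), stored row-major in a flat array.
constexpr int WORLD_W = 256, WORLD_H = 128; // world size, in tiles
constexpr int VIEW_W  = 32,  VIEW_H  = 18;  // visible tiles

std::vector<uint8_t> gWorld(WORLD_W * WORLD_H, 0);

inline uint8_t TileAt(int col, int row)
{
    return gWorld[row * WORLD_W + col];
}

// Window of tile coordinates to draw around the camera, clamped
// to the world bounds ([x0, x1) x [y0, y1)).
struct Window { int x0, y0, x1, y1; };

Window VisibleWindow(int camCol, int camRow)
{
    Window w;
    w.x0 = std::max(0, camCol - VIEW_W / 2);
    w.y0 = std::max(0, camRow - VIEW_H / 2);
    w.x1 = std::min(WORLD_W, w.x0 + VIEW_W);
    w.y1 = std::min(WORLD_H, w.y0 + VIEW_H);
    return w;
}
```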

The other thing to handle would be collision. I feel like that may also require some custom collision handling rather than using the orx body. Since I'd know where the player is on the grid, I really only have to check the few values in the array that are near the player. I know collision can be tricky, but I think I can handle doing that part.
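
Something like this is what I mean by only checking the nearby cells (again just a sketch with made-up names, not orx code):

```cpp
#include <cstdint>

constexpr int GRID_W = 64, GRID_H = 64; // world size, in tiles
constexpr int TILE_PX = 20;             // pixels per tile

uint8_t gTiles[GRID_W * GRID_H]; // 0 = empty, nonzero = solid

bool IsSolid(int col, int row)
{
    // treat out-of-bounds cells as solid so the player can't leave the world
    if(col < 0 || col >= GRID_W || row < 0 || row >= GRID_H) return true;
    return gTiles[row * GRID_W + col] != 0;
}

// Test a player AABB (pixel coordinates) against only the few tiles
// it actually overlaps.
bool CollidesWithWorld(float x, float y, float w, float h)
{
    int c0 = (int)(x / TILE_PX), c1 = (int)((x + w - 1) / TILE_PX);
    int r0 = (int)(y / TILE_PX), r1 = (int)((y + h - 1) / TILE_PX);
    for(int r = r0; r <= r1; r++)
        for(int c = c0; c <= c1; c++)
            if(IsSolid(c, r)) return true;
    return false;
}
```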

So, I'm curious whether that sounds like a reasonable way to go, or if there are other well-known approaches. Probably the biggest unknown for me is how to render the textures directly without using orxOBJECTs (unless there is a workable approach using them).

Thanks for any comments!


Comments

  • I think your approach of managing your own separate data is the way to go.

    That being said, for the rendering part, you can do something inspired by the shader-based tilemap rendering I demonstrated in https://github.com/iarwain/tilemap

    Regarding the physics, you could manage it yourself as well, or simply use the Edge part/shape and attach it either to your full world, or to subparts of it if your world is too big.

    Another option would be to use orxOBJECTs as well but to only keep the ones around the players within a given radius alive, ie. streaming them based on player's position. This way you can retain the usefulness of orxOBJECTs while keeping their count manageable so as to not impact performance.
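
    The streaming bookkeeping can be sketched without any orx calls; assuming square cells and a square "radius", the diff between the previous and the desired live set tells you which objects to create (e.g. with orxObject_CreateFromConfig) and which to delete:

    ```cpp
    #include <set>
    #include <utility>

    using Cell = std::pair<int, int>; // (col, row)

    // All cells within a square radius of the player's cell.
    std::set<Cell> CellsAround(int playerCol, int playerRow, int radius)
    {
        std::set<Cell> cells;
        for(int r = playerRow - radius; r <= playerRow + radius; r++)
            for(int c = playerCol - radius; c <= playerCol + radius; c++)
                cells.insert({c, r});
        return cells;
    }

    // Cells entering the radius get an object created, cells leaving it
    // get their object deleted.
    void DiffSets(const std::set<Cell>& prev, const std::set<Cell>& next,
                  std::set<Cell>& toCreate, std::set<Cell>& toDelete)
    {
        for(const Cell& c : next) if(!prev.count(c)) toCreate.insert(c);
        for(const Cell& c : prev) if(!next.count(c)) toDelete.insert(c);
    }
    ```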

    The advantage of the streaming approach is that you can easily handle advanced things like spawners, localized ambient sounds or hierarchies of objects with a single unified system.

  • Great, thanks for the ideas. I will check into both the tilemap demo and the orxOBJECT streaming idea and see where it takes me!

  • Just took a quick look at the tilemap example; that looks really promising. I will have to do some studying to understand how it works. Thanks.

  • This morning I added a quick modification so I can run around the map WASD-style. Visually, this looks really good and crisp when I run around, better than when I've tried similar things using regular orxOBJECTs. I'm assuming the shader technique may be the key to getting such good visual performance.

    My next test will be to make a set with slightly smaller tiles to see what I can get away with regarding performance...and to understand how things are working.

    Hope I can find the time to experiment with this!

  • I'm glad the shader gives you a good starting base.
    That being said, depending on your map size, the shader approach might require too much bandwidth to update with the currently available API, as you can't do a partial texture update. I can add a function for that in the coming days, though.

    That being said, I think the best approach would still be a mix between shader and objects while keeping your own world data structure.

  • For example, looking at Terraria, their largest maps are 8400 x 2400 tiles, which would amount to a roughly 40 MB texture with the current shader, and you definitely wouldn't want to upload 40 MB to the GPU for each map update.
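
    As a sanity check on that figure, here is the arithmetic (assuming 2 bytes per tile, since the shader uses 16-bit tile IDs):

    ```cpp
    #include <cstddef>

    // Size of the map texture in bytes: one tile ID per tile.
    std::size_t MapTextureBytes(std::size_t cols, std::size_t rows,
                                std::size_t bytesPerTile)
    {
        return cols * rows * bytesPerTile;
    }
    ```

    8400 x 2400 tiles x 2 bytes = 40,320,000 bytes, i.e. a bit over 40 MB.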

  • I hadn't gotten that far, but I assumed that for a large world I'd have to load chunks of data as the player moves around, keeping the actual map itself smaller, maybe a size similar to your tile example (similar to what you mentioned about streaming the orxOBJECTs, which I took to mean just creating and deleting them as needed based on player location).

    Or maybe you are saying that with the partial update feature the map itself could be very large, and for rendering we'd only update the GPU with a clip rectangle as needed to display the area around the player? If so, that sounds useful.
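
    To make sure I understand the clip-rectangle idea, I'd compute something like this (purely illustrative, not orx API):

    ```cpp
    #include <algorithm>

    // Update rectangle (in map-texture pixels) around the player, clamped
    // to the texture bounds, so only a small region is re-uploaded.
    struct Rect { int x, y, w, h; };

    Rect UpdateRect(int playerCol, int playerRow, int texW, int texH,
                    int halfSpan)
    {
        int x0 = std::max(0, playerCol - halfSpan);
        int y0 = std::max(0, playerRow - halfSpan);
        int x1 = std::min(texW, playerCol + halfSpan);
        int y1 = std::min(texH, playerRow + halfSpan);
        return { x0, y0, x1 - x0, y1 - y0 };
    }
    ```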

    In any case, I'm looking forward to learning more, this is pretty interesting stuff. It will take me a while just to get a little more control over your tile example.

  • The map itself, in texture format, can be quite large.
    It's only local updates that aren't supported at the moment, as you'd need to re-upload the entire texture every time.

    Also, regarding the tilemap example, most of the code is there only to convert from the format used by @sausage's Tiled exporter to a texture-based format that can be used by the shader. This part is likely not necessary for you, unless you use said exporter.

    At the core of the method, there are only 2 main components:

    • the map data in texture form
    • the shader to render that data

    Another advantage of this approach, depending on your needs, is that it's trivial to have the map wrap around, as is the case in the example.
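
    The per-fragment lookup the shader does can be shown as plain C++ (a CPU-side sketch, not the actual GLSL): read the two bytes stored in the map texel, recombine them into a tile ID, then use that ID to locate the tile inside the tileset.

    ```cpp
    #include <cstdint>

    // Recombine the two bytes stored in a map texel into a 16-bit tile ID.
    inline uint16_t DecodeTileId(uint8_t hi, uint8_t lo)
    {
        return (uint16_t)((hi << 8) | lo);
    }

    // Top-left pixel of that tile inside a tileset laid out as a grid
    // that is 'tilesPerRow' tiles wide.
    struct Pixel { int x, y; };

    Pixel TilesetOrigin(uint16_t id, int tilesPerRow, int tileSize)
    {
        return { (id % tilesPerRow) * tileSize, (id / tilesPerRow) * tileSize };
    }
    ```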

  • Sounds good, thank you. For now I'm going to see what I can do with a map size similar to your example.

  • Last night I was able to get my own test tileset running with 20x20 pixel tiles and a map stored in my array of tile indices. I generated a little scrolling-style landscape, complete with a couple of randomly dug "caves". The nice thing is how fast and crisp the graphics look; I'm really glad you pointed me to the shader idea. I can easily see how I can use this for a generated and changeable landscape and use normal orxOBJECTs for the rest (I dropped in a parallax mountain background and a single tree as orxOBJECTs for a test). This weekend I'm going to see how feasible my idea of handling the physics on my own will be. If that works, it would certainly be nice to someday be able to use the "local update" concept you refer to, which isn't available at the moment. If I could combine this strategy with a large map and keep the same kind of performance, it would really be great!

    I think I've got the basic concept of how the shader works, using a texture to represent the map itself in pixels, though the shader code itself is pretty fuzzy for me at this point. I can't imagine having > 65536 tiles in a set, so it seems pretty scalable in that regard. Very cool stuff, thanks a lot!

  • My pleasure. If you have questions about the shader code, I'll be happy to answer them. You can also shift the compromise between the number of tiles available in a single set and the size of the texture map (right now it's 16 bits per tile ID, i.e. 65536 tiles in a set, but this can easily be changed).
    You can also improve on this example with multiple layers or animated ones, for example.

    Lastly, the partial bitmap update has been available since yesterday, but in its own branch until I can find someone to confirm that I didn't break anything on iOS or Android, which I can't test myself at the moment: https://github.com/orx/orx/tree/b-setpartialbitmapdata

    The new function's called orxDisplay_SetPartialBitmapData.

  • Awesome, thank you. What is the correct way to use the new branch? I downloaded and unzipped it, then used init from the new folder to create a new project, but the new function does not seem to be available, at least it isn't appearing in IntelliSense. Did I miss a step in setting it up? I was trying to keep it separate from my original orx installation.

  • Nevermind, I hadn't run setup. I have the new function now.

  • If you download it, you'll also need to run setup.bat, which will update your environment to point to this installation of orx rather than your original one, then compile orx itself.
    I'm not sure it's worth the trouble; you can easily switch from one branch to the other with git checkout.
    Then you'll simply need to recompile orx.

  • Tried out the update function and it seems to work great. I'm using it to toggle a single tile back and forth between two tile types with a keyboard event, updating a single pixel in the texture, and it works with no problem. I'm including my UpdateMapTexture function here to make sure I'm using it correctly.

    I do notice a strange behavior: the first 20 times I use it, it causes a big lag in motion on the texture. This is noticeable if I'm scrolling the texture while toggling my block. After the 20th time, it completely smooths out and I can scroll and toggle as fast as I want with no noticeable interruption. I know that sounds flaky, but I've tested it many times and it literally starts working perfectly after the 20th call. It isn't a timing thing: I can wait a minute after starting the program, then test the toggle, and every time it starts working smoothly after 20. I don't think there is anything in my code that could be related to that, but I'll keep experimenting.

    In any case, let me know if this looks like reasonable usage. pu8Data is allocated once initially (in another function) and deleted upon closing the level (returning to the main menu). I'm always updating just a single tile with it for now.

    void UpdateMapTexture(orxU32 row, orxU32 col) {

        orxASSERT(pu8Data);

        // pu8Data is a small staging buffer allocated at level load;
        // each 32-bit texture pixel packs two 16-bit tile IDs, so both
        // tiles sharing the pixel at (col / 2, row) get rewritten
        orxU8  *pu8Value = pu8Data;
        orxU32  u32Index;

        // first tile ID of the pixel pair, stored high byte first
        u32Index    = WorldArray[GetIndex(row, col - 1)];
        *pu8Value++ = (u32Index & 0xFF00) >> 8;
        *pu8Value++ = u32Index & 0xFF;

        // second tile ID of the pair
        u32Index    = WorldArray[GetIndex(row, col)];
        *pu8Value++ = (u32Index & 0xFF00) >> 8;
        *pu8Value++ = u32Index & 0xFF;

        // upload only the single updated pixel (a 1x1 region)
        orxDisplay_SetPartialBitmapData(pstBitmap, pu8Data, col / 2, row, 1, 1);
    }
    
  • Mmh, the behavior you describe is definitely weird. Was it a debug or a release build?
    Could you look at the profiler (only in profile or debug) to see if anything weird appears there?
    You can toggle the profiler with the config property ShowProfiler in the Render section.
    You can use ScrollLock to display a vertical slice of each frame, Space to pause/unpause the profiler and left/right to scrub on the past few hundred frames.

  • I'm not sure exactly how to interpret the profiler, but it looks pretty much the same whether it's stuttering or not. It behaves the same in the release build. I'll keep messing around to see if I can get more clues.

  • You're looking for spikes with the orxDisplay* functions, mostly.
    It could be a stall in the driver while waiting for the texture to be available; after a few requests, the driver might act on it and duplicate the texture or something like that. It'd all be happening in the OpenGL driver, so these are only conjectures at this point.

  • I do see something going on just after the event fires: the following frame appears to take 65ms, but I can't tell what exactly is spiking. Here are 3 frames. In the first you can see the event fire, I think; the next one is 65ms, but I don't see where it shows up in the functions listed; and the next frame is back down to 8ms, if I'm reading it correctly.

  • I should probably mention that my world is pretty big: I was trying 8400x2400 just to see what I could get away with. The smaller I make the world, the less noticeable it is: 2000x2000 has a 24ms gap, 1000x1000 a 16ms gap. In all cases, the gap goes away after 20 calls.

  • For now I've put in a workaround that calls the partial update function 20 times on the first update. That seems to solve the problem for all practical purposes.

  • I really think your OpenGL driver notices the access pattern and then changes how that texture buffer is handled internally.
    The gap you see in the profiler is likely around the call to orxDisplay_SetPartialBitmapData. You can add profiler markers in your function to verify this hypothesis:

    • orxPROFILER_PUSH_MARKER("MyFunction") at the beginning of your function
    • orxPROFILER_POP_MARKER() at the end of it

    Are you working on a laptop or a desktop?

  • Yes, my update function that calls orxDisplay_SetPartialBitmapData now shows up and is indeed the culprit. I guess that isn't surprising, but verified. By the way, thanks again for adding in this new function so quickly...your level of support and attentiveness is pretty unusual!

    I am on a laptop with a GTX 1060. I do have a desktop with a better graphics card at a different location, I'll check the behavior on it later today.

    At least now I have a tiny bit of experience using the profiler.

  • Physics experiment successful! I made a 6x6 grid of "blocker" orxObjects surrounding the player, snap their positions to the world grid on update, and set them solid or not depending on which tile they are over. I was fairly amazed to see this works quite well, and I can use the orx collision system for the tilemap. Here is my collision cocoon. Feel free to laugh at my stand-in graphics:
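
    The snapping logic boils down to something like this (illustrative only; the actual positioning goes through orxObject calls):

    ```cpp
    #include <vector>

    constexpr int TILE_SIZE = 20; // pixels per tile
    constexpr int COCOON    = 6;  // 6x6 blockers around the player

    struct GridPos { int col, row; };

    // Grid cells covered by the blocker cocoon, centered on the player.
    std::vector<GridPos> BlockerCells(float playerX, float playerY)
    {
        int baseCol = (int)(playerX / TILE_SIZE) - COCOON / 2;
        int baseRow = (int)(playerY / TILE_SIZE) - COCOON / 2;
        std::vector<GridPos> cells;
        for(int r = 0; r < COCOON; r++)
            for(int c = 0; c < COCOON; c++)
                cells.push_back({baseCol + c, baseRow + r});
        return cells;
    }
    ```

    Each blocker is then placed at (cell.col * TILE_SIZE, cell.row * TILE_SIZE) and made solid or not from the world array.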

  • Been having a lot of fun with this. I just noticed these messages when I run my project in VS, so I thought I'd let you know in case they are worrisome:

  • Hey @funemaker, sorry for the delay, it's been a busy day on my side.

    Does your laptop have a dual/hybrid GPU? If so, you'll need to add this somewhere in your main source file:

    #ifdef __orxWINDOWS__
    
    #include "windows.h"
    extern "C"
    {
      __declspec(dllexport) DWORD NvOptimusEnablement                = 1;
      __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
    }
    
    #endif // __orxWINDOWS__
    

    This will make sure that your laptop is going to be using your dedicated GPU.
    Note that those symbols need to be exported in the executable file, they can't be in a linked DLL.

    I'm glad you appreciate the level of support we have. That's something I really enjoy with this project, even when I'm not available there's usually always someone around (even more so on the chat at https://gitter.im/orx/orx) to help those who need it. The community, albeit small, is very nice and helpful, which in turn keeps me motivated. :)

    I like your approach of handling the physics, it's quite clever and I'm glad it's working the way you want. Are you moving them individually or with a single parent? It doesn't change much but I'm wondering which approach would be more practical.

    Regarding your messages, they're all coming from internal windows components and I'm rather sure they're not linked to orx. I'm not sure on how to help you with those, sorry.

  • Hi, I don't think the GTX 1060 is dual or hybrid. In any case, I dropped in that code but the behavior is the same; I'm just using the workaround now. Still haven't tried the desktop yet.

    I was moving the blockers individually with a loop (only snapping the starting location once, then looping through them all), but using a parent does indeed sound a little simpler. I may change it over.

    There do seem to be situations where I try to make a blocker solid while it is overlapping the player body, and that seems to throw the player somewhere far away from the camera, or something bad happens, because all I see is the viewport background. It doesn't happen often, so I'm trying to figure out how/why it happens.

    Understood about those messages. I think they may have always been there.

  • The GTX 1060 itself wouldn't be dual/hybrid, the duality would be at the laptop level.
    It's common nowadays for laptops to have a low power GPU integrated with the CPU, usually an Intel one, and a separate, more powerful, dedicated GPU, usually an ATI/NVidia one.

    Regarding the collisions, I'd suggest a different approach as well: instead of using the actual dynamic simulation of Box2D, I'd rely on some raycasts and position your player accordingly.

    @lydesik made a test project with this concept a few years ago, you might want to check it: https://bitbucket.org/loki666/boxbot/src/default/

  • I'll check out that raycasting technique, thanks!

    For some reason I can't seem to get tile indices greater than around 60 in my tileset to work reliably. They sometimes appear correctly, but occasionally render some other tiny part of the tileset, stretched out; it took me a while to even understand what I was looking at. If my tileset is small, they all work consistently. I'm sure I've got something messed up in my code somewhere, but it is strange that the affected tile renders correctly 90% of the time and then sometimes as a screwy-looking stretch of pixels.

    I've probably messed around enough today! I'm very happy to have the ability to generate a really large map and have it work so nicely.

  • Ah, the tile index limitation is weird. If you have some tile/map textures for me, I can check that on my side as well.

    You can also save textures' content back to disk for inspection, if that can help, even for textures that have been created at runtime. For that, you can use the texture.save <texture_name> command in the console.

    I'm glad you're making nice progress with your systems in any case!
