grid mining

Hi, one thing I've always wanted to explore is a grid-like mining/crafting game...along the lines of Terraria. I'm curious if there is any general recommendation on how to approach the world grid, and maybe orx-specific hints as well! :smile: I know it isn't a simple subject so I'm not expecting instructions on how to do it, just looking for ideas.

I don't think it is feasible to generate a bunch of orxOBJECTs for each block in the grid, for performance reasons. I was thinking of keeping my world in a large 2D array of bytes, with the values representing what type of block is at that location (1 = dirt, 2 = grass, 3 = rock, etc.). For rendering, I'd loop through the visible portion of the array (dependent on player position) and render rectangles according to the values in the array. I've actually done something like that a few years back with my own home-grown stuff. Any hint as to how I could render out textures in orx directly like that?
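To make the idea concrete, here's a minimal sketch of that layout in plain C (all names and sizes are hypothetical): a flat byte array indexed row-major, and a render loop that only visits the visible window of tiles.

```c
#include <stdio.h>

#define WORLD_W 64
#define WORLD_H 32

/* One byte per block: 0 = air, 1 = dirt, 2 = grass, 3 = rock */
static unsigned char World[WORLD_W * WORLD_H];

/* Row-major index into the flat world array */
static unsigned int GetIndex(unsigned int u32Row, unsigned int u32Col)
{
    return u32Row * WORLD_W + u32Col;
}

/* Visits only the window of tiles visible around the camera */
static void RenderVisible(unsigned int u32CamRow, unsigned int u32CamCol,
                          unsigned int u32ViewRows, unsigned int u32ViewCols)
{
    for (unsigned int r = u32CamRow; r < u32CamRow + u32ViewRows && r < WORLD_H; r++)
    {
        for (unsigned int c = u32CamCol; c < u32CamCol + u32ViewCols && c < WORLD_W; c++)
        {
            unsigned char u8Block = World[GetIndex(r, c)];
            if (u8Block != 0)
            {
                /* Here you'd draw a rectangle/tile for u8Block at (r, c) */
                printf("tile %u at (%u, %u)\n", u8Block, r, c);
            }
        }
    }
}
```

The `printf` stands in for whatever actually draws a tile; the point is only the indexing and the clipped loop.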

The other thing to handle would be collision. I feel like that also may require some custom collision handling rather than using the orx body. Since I'd know where the player is on the grid, I really only have to check a few of the values in the array that are near the player for collision. I know collision can be tricky, but think I can handle doing that part.
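For what it's worth, the "check a few nearby cells" idea might look something like this (plain C sketch; the world array, tile size, and function names are all hypothetical):

```c
#include <stdbool.h>

#define WORLD_W 16
#define WORLD_H 16

static unsigned char World[WORLD_H][WORLD_W]; /* 0 = empty, non-zero = solid */

#define TILE_SIZE 20.0f /* pixels per tile, hypothetical */

/* Returns true if the tile containing world position (fX, fY) is solid.
   Note: (int) truncates toward zero; a robust version would use floorf
   for positions just left of / above the origin. */
static bool IsSolidAt(float fX, float fY)
{
    int iCol = (int)(fX / TILE_SIZE);
    int iRow = (int)(fY / TILE_SIZE);
    if (iCol < 0 || iCol >= WORLD_W || iRow < 0 || iRow >= WORLD_H)
    {
        return true; /* treat out-of-world as solid */
    }
    return World[iRow][iCol] != 0;
}

/* Checks the player's bounding box corners against the grid */
static bool PlayerCollides(float fLeft, float fTop, float fW, float fH)
{
    return IsSolidAt(fLeft, fTop)
        || IsSolidAt(fLeft + fW, fTop)
        || IsSolidAt(fLeft, fTop + fH)
        || IsSolidAt(fLeft + fW, fTop + fH);
}
```

Corner checks like this are enough for a box smaller than a tile; a larger box would need to sample points along its edges as well.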

So, I'm curious if that sounds like a way to go, or if there are other well-known approaches? Probably the biggest unknown for me is how to render the textures directly without using orxOBJECTs (unless there is a workable approach using them).

thanks for any comment!


  • I think your approach of managing your own separate data is the way to go.

    That being said, for the rendering part, you can do something inspired by the shader-based tilemap rendering I demonstrated in

    Regarding the physics, you could manage it yourself as well, or simply use the Edge part/shape and attach it either to your full world or to subparts of it if your world is too big.

    Another option would be to use orxOBJECTs as well, but to only keep the ones within a given radius of the player alive, ie. streaming them based on the player's position. This way you can retain the usefulness of orxOBJECTs while keeping their count manageable so as to not impact performance.

    The advantage of the streaming approach is that you can easily handle advanced things like spawners, localized ambient sounds or hierarchies of objects with one single unified system.
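The streaming idea above can be sketched in plain C; here the `Alive` flags and the two counters stand in for hypothetical orxOBJECT creation/deletion calls:

```c
#include <stdbool.h>
#include <stdlib.h>

#define WORLD_W 100
#define WORLD_H 100

static bool Alive[WORLD_H][WORLD_W]; /* whether a cell currently has a live object */
static int  s_iCreated, s_iDeleted;  /* counters standing in for create/delete calls */

/* Streams objects: keeps cells within iRadius of (iPlayerRow, iPlayerCol) alive */
static void StreamObjects(int iPlayerRow, int iPlayerCol, int iRadius)
{
    for (int r = 0; r < WORLD_H; r++)
    {
        for (int c = 0; c < WORLD_W; c++)
        {
            bool bInRange = (abs(r - iPlayerRow) <= iRadius)
                         && (abs(c - iPlayerCol) <= iRadius);

            if (bInRange && !Alive[r][c])
            {
                Alive[r][c] = true;  /* an orxObject_CreateFromConfig() would go here */
                s_iCreated++;
            }
            else if (!bInRange && Alive[r][c])
            {
                Alive[r][c] = false; /* deleting the object would go here */
                s_iDeleted++;
            }
        }
    }
}
```

Scanning the whole world each call is only for clarity; a real version would visit just the cells near the old and new player positions.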

  • Great, thanks for the ideas. I will check into both the tilemap demo and the orxOBJECT streaming idea and see where it takes me!

  • Just took a quick look at the tile map...that looks really promising. I will have to do some studying to understand how it works. Thanks.

  • This morning I added a quick modification so I can run around the map WASD style. Visually, this looks really good and crisp when I run around...better than when I've tried similar things using regular orxOBJECTs. I'm assuming the shader technique may be the key to getting such good visual performance.

    My next test will be to make a set with slightly smaller tiles to see what I can get away with regarding performance...and to understand how things are working.

    Hope I can find the time to experiment with this!

  • I'm glad the shader gives you a good starting base.
    That being said, depending on your map size, the shader approach might require too much bandwidth to update with the currently available API, as you can't do a partial texture update, but I can add a function for that in the coming days.

    Still, I think the best approach would be a mix between shader and objects while keeping your own world data structure.

  • edited March 2020

    For example, looking at Terraria, their largest maps are 8400 x 2400, which would amount to a 40MB texture with the current shader, and you definitely wouldn't want to upload 40MB to the GPU for each map update.
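For reference, the arithmetic behind that figure, assuming two 16-bit tile indices packed per RGBA pixel as in the tilemap example (the function name is hypothetical):

```c
/* Returns the byte size of the packed map texture for a u32W x u32H tile map,
   with two 16-bit tile indices stored in each 4-byte RGBA pixel */
static unsigned int GetMapTextureSize(unsigned int u32W, unsigned int u32H)
{
    unsigned int u32TexW = (u32W + 1) / 2; /* two tiles per pixel, rounded up */
    return u32TexW * u32H * 4;             /* 4 bytes per pixel (sizeof(orxRGBA)) */
}
```

For 8400 x 2400 that gives 4200 * 2400 * 4 = 40,320,000 bytes, i.e. the ~40MB per full upload mentioned above.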

  • I hadn't gotten that far, but I assumed for a large world I'd have to load in chunks of data somehow as the player moves around in the world, and keep the actual map itself smaller, maybe a size similar to your tile example (similar to what you mention about streaming the orxOBJECTs, which I took to mean just creating and deleting them as needed based on player location).

    Or maybe you are saying that, with the partial update feature, the map itself could be very large, but for rendering we'd only update the GPU with a clip rectangle as needed for displaying the area around the player? If so, that sounds useful.

    In any case, I'm looking forward to learning more, this is pretty interesting stuff. It will take me a while just to get a little more control over your tile example.

  • The map itself, in texture format, can be quite large.
    It's only local updates that aren't supported at the moment, as you'd need to re-upload the entire texture every time.

    Also, regarding the tilemap example, most of the code is there only to convert from the format used by @sausage's Tiled exporter to a texture-based format that can be used by the shader. This part is likely not necessary to you, unless you use said exporter.

    At the core of the method, there are only 2 main components:

    • the map data in texture form
    • the shader to render that data

    Another advantage of this approach, depending on your needs, is that it's trivial to have the map wrap around, as is the case in the example.
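To spell out the map-data format: each RGBA pixel of the map texture holds two 16-bit tile indices, so pixel (x, y) covers tiles (2x, y) and (2x + 1, y), each stored high byte first. A small packing sketch in plain C (names hypothetical):

```c
/* Packs a row-major array of 16-bit tile indices (u32W x u32H tiles) into an
   RGBA byte buffer where each pixel stores two indices, big-endian per index.
   pu8Out must hold ((u32W + 1) / 2) * u32H * 4 bytes. */
static void PackMap(const unsigned short *pu16Tiles,
                    unsigned int u32W, unsigned int u32H,
                    unsigned char *pu8Out)
{
    for (unsigned int j = 0; j < u32H; j++)
    {
        for (unsigned int i = 0; i < u32W; i++)
        {
            unsigned short u16Index = pu16Tiles[j * u32W + i];
            *pu8Out++ = (unsigned char)(u16Index >> 8);   /* high byte */
            *pu8Out++ = (unsigned char)(u16Index & 0xFF); /* low byte */
        }
        if (u32W & 1) /* pad odd widths so each row ends on a full pixel */
        {
            *pu8Out++ = 0;
            *pu8Out++ = 0;
        }
    }
}
```

This mirrors the layout the shader then unpacks on the GPU side.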

  • Sounds good, thank you. For now I'm going to see what I can do with a map size similar to your example.

  • Last night I was able to get my own test tile set running with 20x20 pixel tiles and a map stored in my array of tile indexes. I generated a little scrolling landscape complete with a couple of randomly dug "caves"...the nice thing is how fast and crisp the graphics look, and I'm really glad you pointed me to the shader idea. I can easily see how I can use this for a generated and changeable landscape, and use normal orxOBJECTs for the rest (I dropped in a parallax mountain background and a single tree as orxOBJECTs for a test). This weekend I'm going to see how feasible my idea of handling the physics on my own will be. If that works, it would certainly be nice to someday be able to use the "local update" concept you refer to, which isn't available at the moment. If I could combine this strategy with a large map and keep the same kind of performance, it would really be great!

    I think I've got the basic concept of how the shader works, using a texture to represent the map itself in pixels - though the shader code itself is pretty fuzzy for me at this point. I can't imagine having > 65536 tiles in a set, so it seems pretty scalable in that regard. Very cool stuff...thanks a lot!

  • My pleasure. If you have questions about the shader code, I'll be happy to answer them. You can also shift the compromise between the number of tiles available inside a single set and the size of the texture map (right now it's using 16 bits per tile ID, ie. 65536 tiles in a set, but this can easily change).
    You can also improve on this example with multiple layers or animated ones, for example.

    Lastly, the partial bitmap update has been available since yesterday, but in its own branch until I can find someone who confirms that I didn't break anything on iOS or Android, which I can't test myself at the moment:

    The new function's called orxDisplay_SetPartialBitmapData.

  • Awesome, thank you. What is the correct way to use the new branch? I downloaded and unzipped it, then used init from the new folder to create a new project, but the new function does not seem to be available, at least it's not appearing in IntelliSense. Did I miss a step in setting it up? I was trying to keep it separate from my original orx installation.

  • Nevermind, I hadn't run setup. I have the new function now.

  • If you download it, you'll also need to run setup.bat which will update your environment to point to this installation of orx and not your original one, then compile orx itself.
    I'm not sure it's worth the trouble, you can easily switch from one branch to the other with git checkout.
    Then you'll simply need to recompile orx.

  • Tried out the update function and it seems to work great. I'm using it to test updating a single tile by toggling it back and forth between two tiles with a keyboard event. I'm updating a single pixel in the texture and it seems to work with no problem. I'm including my UpdateMapTexture function here to make sure I'm using it correctly.

    I do notice a strange behavior in that the first 20 times I use it, it causes a big lag in motion on the texture...this is noticeable if I'm scrolling the texture while toggling my block. After the 20th time, it completely smooths out and I can scroll and toggle as fast as I want with no noticeable interruption. I know that sounds flaky, but I've tested it many times and it literally starts working perfectly after the 20th call. It isn't a timing thing: I can wait a minute after starting the program, then test the toggle, and every time it starts working smoothly after 20. I don't think there is anything in my code that could be related to that, but I'll keep experimenting.

    In any case, let me know if this looks like reasonable usage. pu8Data is allocated once initially (in another function) and deleted upon closing the level (returning to the main menu). I'm always updating just a single tile with it for now.

    void UpdateMapTexture(orxU32 row, orxU32 col) {
        orxU8  *pu8Value;
        orxU32  u32Index;
        // The pixel at x = col / 2 stores two tiles: columns (col / 2) * 2 and
        // (col / 2) * 2 + 1, so both indices must be packed, high byte first
        // (pu8Data and pstBitmap live at module level)
        orxU32 u32BaseCol = (col / 2) * 2;
        pu8Value = pu8Data;
        u32Index = WorldArray[GetIndex(row, u32BaseCol)];
        *pu8Value++ = (u32Index & 0xFF00) >> 8;
        *pu8Value++ = u32Index & 0xFF;
        u32Index = WorldArray[GetIndex(row, u32BaseCol + 1)];
        *pu8Value++ = (u32Index & 0xFF00) >> 8;
        *pu8Value++ = u32Index & 0xFF;
        // Uploads the single modified pixel
        orxDisplay_SetPartialBitmapData(pstBitmap, pu8Data, col / 2, row, 1, 1);
    }
  • Mmh, the behavior you describe is definitely weird. Was it with a debug or a release build?
    Could you look at the profiler (only in profile or debug) to see if anything weird appears there?
    You can toggle the profiler with the config property ShowProfiler in the Render section.
    You can use ScrollLock to display a vertical slice of each frame, Space to pause/unpause the profiler and left/right to scrub on the past few hundred frames.

  • I'm not sure exactly how to interpret the profiler, but it looks pretty much the same when it has the stuttering and when it doesn't. It behaves the same in the release build. I'll keep messing around to see if I can get more clues.

  • You're looking for spikes with the orxDisplay* functions, mostly.
    As it could be a stall in the driver while waiting for the texture to be available. And after a few requests, the driver might act upon it and duplicate the texture or something like this. It'd all be happening in the OpenGL drivers, so it's only conjecture at this point.

  • I do see something going on just after the event fires...the following frame appears to take 65ms, but I can't tell what exactly is spiking. Here are 3 frames...in the first you can see the event fire, I think; the next one is 65ms, but I don't see where it shows up in the functions listed; and the next frame is back down to 8ms, if I'm reading it correctly.

  • I probably should mention that my world is pretty big...I was trying 8400x2400 just to see what I could get away with. It seems the smaller I make the world, the less noticeable the gap: 2000x2000 has a 24ms gap, 1000x1000 a 16ms gap. In all cases, the gap goes away after 20 calls.

  • For now I've put in a workaround that calls the partial update function 20 times on the first update. It seems to solve the problem for all practical purposes.

  • I really think your OpenGL driver notices the access pattern before changing how that texture buffer is handled internally.
    The gap you see in the profiler is likely around the call to orxDisplay_SetPartialBitmapData. You can add profiler markers in your function to verify this hypothesis:

    • orxPROFILER_PUSH_MARKER("MyFunction") at the beginning of your function
    • orxPROFILER_POP_MARKER() at the end of it

    Are you working on a laptop or a desktop?

  • Yes, my update function that calls orxDisplay_SetPartialBitmapData now shows up and is indeed the culprit. I guess that isn't surprising, but verified. By the way, thanks again for adding in this new function so quickly...your level of support and attentiveness is pretty unusual!

    I am on a laptop with a GTX 1060. I do have a desktop with a better graphics card at a different location, I'll check the behavior on it later today.

    At least now I have a tiny bit of experience using the profiler.

  • Physics experiment successful! I made a 6x6 grid of "blocker" orxOBJECTs surrounding the player, snapped their positions to the world grid on update, and set them to solid or not depending on what tile they are over. I was fairly amazed to see this works quite well and I can use the orx collision system for the tile map. Here is my collision cocoon. Feel free to laugh at my stand-in graphics:

  • Been having a lot of fun with this. I just noticed these messages when I run my project in VS, so thought I'd let you know in case it is worrisome:

  • Hey @funemaker, sorry for the delay, it's been a busy day on my side.

    Does your laptop have a dual/hybrid GPU? If so, you'll need to add this somewhere in your main source file:

    #ifdef __orxWINDOWS__
    #include "windows.h"
    extern "C"
    {
      _declspec(dllexport) DWORD NvOptimusEnablement                = 1;
      _declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
    }
    #endif // __orxWINDOWS__

    This will make sure that your laptop is going to be using your dedicated GPU.
    Note that those symbols need to be exported in the executable file, they can't be in a linked DLL.

    I'm glad you appreciate the level of support we have. That's something I really enjoy with this project: even when I'm not available, there's almost always someone around (even more so on the chat) to help those who need it. The community, albeit small, is very nice and helpful, which in turn keeps me motivated. :)

    I like your approach of handling the physics, it's quite clever and I'm glad it's working the way you want. Are you moving them individually or with a single parent? It doesn't change much but I'm wondering which approach would be more practical.

    Regarding your messages, they're all coming from internal Windows components and I'm rather sure they're not linked to orx. I'm not sure how to help you with those, sorry.

  • Hi, I don't think the GTX 1060 is dual. In any case, I dropped in that code but the behavior is the same. I'm just using the workaround for now. Still haven't tried on the desktop yet.

    I was moving the blockers individually with a loop (only snapping the starting location once, then looping through them all), but using a parent does indeed sound a little simpler. I may change it over.

    There do seem to be situations where I try to make a blocker solid while it is overlapping the player body, and that seems to throw the player somewhere far away from the camera, or something bad happens, because all I see is the viewport background. It doesn't happen often, so I'm trying to figure out how/why it does happen.

    Understood about those messages...I think they may have always been there.

  • The GTX 1060 itself wouldn't be dual/hybrid, the duality would be at the laptop level.
    It's common nowadays for laptops to have a low power GPU integrated with the CPU, usually an Intel one, and a separate, more powerful, dedicated GPU, usually an ATI/NVidia one.

    Regarding the collisions, I'd suggest a different approach as well: instead of using the actual dynamic simulation of Box2D, I'd rely on some raycasts and position your player accordingly.

    @lydesik made a test project with this concept a few years ago, you might want to check it:

  • I'll check out that raycasting technique, thanks!

    For some reason I can't seem to get tiles with indices greater than around 60 in my tileset to work reliably...they sometimes appear correctly, but occasionally render some other tiny part of the tileset, stretched out...it took me a while to even understand what I was looking at. In any case, if my tile set is small, they all work consistently...I'm sure I've got something messed up in my code somewhere, but it is strange that the affected tile renders correctly 90% of the time, and then sometimes as a screwy looking stretch of pixels.

    I've probably messed around enough today! I'm very happy to have the ability to generate a really large map and have it work so nicely.

  • Ah, the tile index limitation is weird. If you have some tile/map textures for me, I can check that on my side as well.

    You can also save textures' content back to disk for inspection if that can help, even textures that have been created at runtime. For that you can use the <texture_name> command inside the console.

    I'm glad you're making nice progress with your systems in any case!

  • I can confirm that on my desktop, the stuttering issue does not occur with the partial update call like it does on my laptop. I'll just leave the workaround in since, though a little ugly, it doesn't hurt anything as far as I can tell.

    For the tile issue, after looking a bit this morning, I'm more convinced it is something I've screwed up when I generate the map. I may ask you to check the tileset out if I later decide differently, but for now I need to look into what I've done a little more carefully.

  • Saving off my map texture and looking at it is helping to diagnose. It seems there is something wrong with the way I'm pulling the tile indexes from my array and putting them into the map texture...it works fine in almost all cases, but when certain tiles are next to each other, something goes wrong in the map texture. I'm sure this has to do with storing two tiles in each pixel, but I don't see my error yet. I'll try later today and see if I can find the problem.

  • Hmm, I can't seem to understand what the problem is, but when a single pixel in the map texture shares two particular tiles, it won't display correctly. Those tiles display correctly on their own in other parts of the map, but if they share a pixel in a certain order, it doesn't work. As an example, if tile index 217 is stored with 62, the display comes out messed up on one of the tiles. If tile 64 is stored with 217, something similar happens (but not the reverse: 62 with 217 is OK, and 217 with 64 is OK). Again, I know that sounds a little questionable. I might try to put together a very simple reproducible scenario...that may flush out a problem in my code, or better illuminate what exactly is going wrong. Here is the code that builds the bitmap from my WorldArray variable. I don't see what I've done wrong. I've logged out the values for the areas where the trouble is (where the pixel shares those particular tiles) and it all looks as it should to me. Here is my function, really just barely modified from the original tilemap example.

    orxTEXTURE *LoadMyMap(const orxSTRING _zMapName, const TileSet *_pstTileSet)
    {
        orxVECTOR   vSize, vMapSize, vScreenSize;
        orxU32      u32BitmapWidth, u32BitmapHeight, i, j;
        orxBITMAP  *pstBitmap;
        orxTEXTURE *pstTexture;
        orxU8      *pu8Data, *pu8Value;

        // Pushes its config section
        orxConfig_PushSection(_zMapName);

        // Gets its size (tiles)
        orxConfig_GetVector("Size", &vSize);

        // Adjusts map size
        vMapSize.fX = vSize.fX;
        vMapSize.fY = vSize.fY;
        vMapSize.fZ = orxMath_Ceil(vSize.fX / orx2F(2.0f)) * orx2F(2.0f);

        // Computes texture size (using 2 bytes per index as we have less than 65536 tiles in the set)
        u32BitmapWidth  = (orxF2U(vSize.fX) + 1) / 2;
        u32BitmapHeight = orxF2U(vSize.fY);

        // Creates bitmap
        pstBitmap = orxDisplay_CreateBitmap(u32BitmapWidth, u32BitmapHeight);

        // Creates texture
        pstTexture = orxTexture_Create();

        // Links them together
        orxTexture_LinkBitmap(pstTexture, pstBitmap, _zMapName, orxTRUE);

        // Upgrades map to become its own graphic
        orxConfig_SetString("Texture", orxTexture_GetName(_pstTileSet->pstTexture));
        orxConfig_SetString("Pivot", "center");

        // Setups the shader on the map itself, with all needed parameters
        orxConfig_SetString("Code", "@MapShader");
        orxConfig_SetString("ParamList", "@MapShader");
        orxConfig_SetVector("CameraSize", &svCameraSize);
        orxConfig_SetVector("MapSize", &vMapSize);
        orxConfig_SetVector("TileSize", &_pstTileSet->vTileSize);
        orxConfig_SetVector("SetSize", &_pstTileSet->vSize);
        orxConfig_SetString("Map", _zMapName);
        orxDisplay_GetScreenSize(&vScreenSize.fX, &vScreenSize.fY);
        vScreenSize.fZ = orxFLOAT_0;
        orxConfig_SetVector("Resolution", &vScreenSize);
        orxConfig_SetVector("CameraPos", &orxVECTOR_0);
        orxConfig_SetVector("Highlight", &orxVECTOR_0);

        // Allocates bitmap data
        pu8Data = (orxU8 *)orxMemory_Allocate(u32BitmapWidth * u32BitmapHeight * sizeof(orxRGBA), orxMEMORY_TYPE_TEMP);

        // For all rows
        for (j = 0, pu8Value = pu8Data; j < orxF2U(vSize.fY); j++)
        {
            // For all columns
            for (i = 0; i < orxF2U(vSize.fX); i++)
            {
                orxU16 u16Index;

                // Pulls the index from the WorldArray
                u16Index = WorldArray[GetIndex(j, i)];

                // Stores it over two bytes
                *pu8Value++ = (u16Index & 0xFF00) >> 8;
                *pu8Value++ = u16Index & 0xFF;
            }

            // Zeroes padding bytes
            if (orxF2U(vSize.fX) & 1)
            {
                *pu8Value++ = 0;
                *pu8Value++ = 0;
            }
        }

        // Updates texture with indices
        orxDisplay_SetBitmapData(pstBitmap, pu8Data, u32BitmapWidth * u32BitmapHeight * sizeof(orxRGBA));

        // Frees the temporary buffer
        orxMemory_Free(pu8Data);
        pu8Data = orxNULL;

        // Pops config section
        orxConfig_PopSection();

        // Done!
        return pstTexture;
    }
  • I think I have some evidence from my map. To demonstrate, I'm generating a 10x10 grid of tiles. If the even columns are index 62 and odd columns are 217, then the tiles (stone vs grass) appear as expected:

    You can see both types of tiles are rendered ok.

    But if I reverse that so even columns are 217 and odd are 62, every other tile is messed up:

    Maybe something is going wrong in the shader? My knowledge of the shader code is too weak to know.

  • At first sight I can't really tell what your issue is, sorry. However if you were to send me a zip with your project I could have a look either tonight or tomorrow.

  • Thanks for that offer. There is certainly no hurry. I think I should probably clean up my project and maybe strip it down a bit to get a simple reproduction of the behavior...just doing that may help me come across a problem, and it would be friendlier to look at if I send it to you. I'll let you know how it goes.

  • Here is a pretty simple demo project with everything stripped out except a tileset, map based on my array, and a problem 10x10 grid. This project actually behaves differently from run to run, each time with different tiles messed up, so I'm really confused. Same behavior on both laptop and desktop.

    Let me know if you see anything obvious. Thanks!

  • Thanks for sending this repro case project.

    The weird thing, as you mentioned, is that the issue is not consistent. I've narrowed it down to the index stored in Blue/Alpha not being correctly interpreted depending on the value stored in the other half of the pixel, and only for a range of values too.

    For example, having 62 in both RG and BA of a pixel would work perfectly, however, as you mentioned, having 248 for RG and 62 for BA would bring that issue.

    That doesn't make much sense to me, to be fair. In any case, I found a workaround in making sure the computed indices are rounded after unpacking, which should have been the case since the beginning to avoid any imprecisions.

    In order to fix your shader, replace the lines:

      // Computes index
      float index = 255.0 * ((256.0 * value.x) + value.y);

    with:

      // Computes index
      float index = round(255.0 * ((256.0 * value.x) + value.y));

    and it should now be working correctly (at least it did for me).

  • Wonderful, thanks so much for finding that! It indeed works like a charm now. My debugging led to the shader but I wasn't completely confident, so I'm glad to know I was headed in the right direction. I need to spend a bit of time working with shader code so I don't have such a weak spot. I'm curious if there are any good techniques for debugging shaders? Usually for a problem like the rounding thing, in normal CPU code, I'd be able to examine the values in a debugger to find out what is happening. Probably no way to do that with a shader?

  • I'm glad it's now working for you as well!

    You can debug shader code using some GPU tools such as RenderDoc and similar. However their OpenGL support greatly varies from one tool to the other.

    In the end I tend to only manually examine the code and check the content of textures, including the result (you can dump the screen's content like any other regular texture with orx).

  • Thanks, I'll check that out. Being able to dump the textures is definitely a help. I went ahead and redid the physics with my own version as you suggested, which turns out to be pretty easy using the map grid. It seems to be working well, so I think all the questions I had in this thread are pretty well handled. I may come back with more when I try some lighting experiments, but I'll start another thread for that. I've checked out the lighting tutorial so I'll start from there and see what I can do...trying to add some torch or lantern type lights at some point.
    thanks again

  • My pleasure, I'm glad you're making good progress.
    Don't hesitate if/when you have new questions!
