grid mining


Comments

  • I can confirm that on my desktop, the stuttering issue does not occur with the partial update call like it does on my laptop. I'll just leave the workaround in since, though a little ugly, it doesn't hurt anything as far as I can tell.

    For the tile issue, after looking a bit this morning, I'm more convinced it is something I've screwed up when I generate the map. I may ask you to check the tileset out if I later decide differently, but for now I need to look into what I've done a little more carefully.
    thanks

  • Saving off my map texture and looking at it is helping to diagnose. It seems there is something wrong with the way I'm pulling the tile indexes from my array and putting them into the texture...it works fine in almost all cases, but when certain tiles are next to each other, something goes wrong in the map texture...I'm sure this has to do with storing two tiles in each pixel, but I don't see my error yet. I'll try later today and see if I can find the problem.

  • Hmm, I can't seem to understand what the problem is, but when a single pixel in the map texture shares two particular tiles, it won't display correctly. Those tiles display correctly on their own in other parts of the map, but if they are shared in a pixel in a certain order it doesn't work. As an example, if tile index 217 is stored with 62, the display comes out messed up on one of the tiles. If tile 64 is stored with 217, something similar happens. (But not the reverse: 62 with 217 is ok and 217 with 64 is ok.) Again, I know that sounds a little questionable. I might try to put together a very simple reproducible scenario; that may flush out a problem in my code, or better illuminate what exactly is going wrong. Here is the code that builds the bitmap from my WorldArray variable. I don't see what I've done wrong. I've logged out the values for the areas where the trouble is (where the pixel shares those particular tiles) and it all looks as it should to me. Here is my function, really just barely modified from the original tilemap example.

    orxTEXTURE *LoadMyMap(const orxSTRING _zMapName, const TileSet *_pstTileSet)
    {
        // Locals (WorldArray, GetIndex and svCameraSize live at file scope)
        orxVECTOR   vSize, vMapSize, vScreenSize;
        orxBITMAP  *pstBitmap;
        orxTEXTURE *pstTexture;
        orxU32      u32BitmapWidth, u32BitmapHeight, i, j;
        orxU16      u16Index;
        orxU8      *pu8Data, *pu8Value;

        // Pushes its config section
        orxConfig_PushSection(_zMapName);

        // Gets its size (tiles)
        orxConfig_GetVector("Size", &vSize);

        // Adjusts map size (fZ holds the width padded to an even tile count)
        vMapSize.fX = vSize.fX;
        vMapSize.fY = vSize.fY;
        vMapSize.fZ = orxMath_Ceil(vSize.fX / orx2F(2.0f)) * orx2F(2.0f);

        // Computes texture size (using 2 bytes per index as we have less than 65536 tiles in the set)
        u32BitmapWidth  = (orxF2U(vSize.fX) + 1) / 2;
        u32BitmapHeight = orxF2U(vSize.fY);

        // Creates bitmap
        pstBitmap = orxDisplay_CreateBitmap(u32BitmapWidth, u32BitmapHeight);
        orxASSERT(pstBitmap);

        // Creates texture
        pstTexture = orxTexture_Create();
        orxASSERT(pstTexture);

        // Links them together
        orxTexture_LinkBitmap(pstTexture, pstBitmap, _zMapName, orxTRUE);

        // Upgrades map to become its own graphic
        orxConfig_SetString("Texture", orxTexture_GetName(_pstTileSet->pstTexture));
        orxConfig_SetString("Pivot", "center");

        // Sets up the shader on the map itself, with all needed parameters
        orxConfig_SetString("Code", "@MapShader");
        orxConfig_SetString("ParamList", "@MapShader");
        orxConfig_SetVector("CameraSize", &svCameraSize);
        orxConfig_SetVector("MapSize", &vMapSize);
        orxConfig_SetVector("TileSize", &_pstTileSet->vTileSize);
        orxConfig_SetVector("SetSize", &_pstTileSet->vSize);
        orxLOG("_pstTileSet->vSize.fY: %f", _pstTileSet->vSize.fY);
        orxConfig_SetString("Map", _zMapName);
        orxDisplay_GetScreenSize(&vScreenSize.fX, &vScreenSize.fY);
        orxConfig_SetVector("Resolution", &vScreenSize);
        orxConfig_SetVector("CameraPos", &orxVECTOR_0);
        orxConfig_SetVector("Highlight", &orxVECTOR_0);

        // Allocates bitmap data
        pu8Data = (orxU8 *)orxMemory_Allocate(u32BitmapWidth * u32BitmapHeight * sizeof(orxRGBA), orxMEMORY_TYPE_TEMP);
        orxASSERT(pu8Data);

        // For all rows
        for (j = 0, pu8Value = pu8Data; j < orxF2U(vSize.fY); j++)
        {
            // For all columns
            for (i = 0; i < orxF2U(vSize.fX); i++)
            {
                // Pulls the index from the WorldArray
                u16Index = WorldArray[GetIndex(j, i)];

                // Debug: logs indices in the troublesome area
                if ((j >= 100) && (j < 110) && (i >= 101) && (i < 111))
                {
                    orxLOG("index: %u", u16Index);
                }

                // Stores it over two bytes (high byte first)
                *pu8Value++ = (u16Index & 0xFF00) >> 8;
                *pu8Value++ = u16Index & 0xFF;
            }

            // Zeroes padding bytes on odd-width maps
            if (orxF2U(vSize.fX) & 1)
            {
                *pu8Value++ = 0;
                *pu8Value++ = 0;
            }
        }

        // Updates texture with indices
        orxDisplay_SetBitmapData(pstBitmap, pu8Data, u32BitmapWidth * u32BitmapHeight * sizeof(orxRGBA));

        // Frees the staging memory
        orxMemory_Free(pu8Data);
        pu8Data = orxNULL;

        // Pops config section
        orxConfig_PopSection();

        // Done!
        return pstTexture;
    }
    
  • I think I have some evidence of my claim...in my map, I'm generating a 10x10 grid of tiles to demonstrate. If the even columns are index 62 and odd columns are 217, then the tiles (stone vs grass) appear as expected:


    You can see both types of tiles are rendered ok.

    But if I reverse that so even columns are 217 and odd are 62, every other tile is messed up:


    Maybe something is going wrong in the shader? My knowledge of shader code is too weak to tell.

  • At first sight I can't really tell what your issue is, sorry. However if you were to send me a zip with your project I could have a look either tonight or tomorrow.

  • Thanks for that offer. There is certainly no hurry. I think I should probably clean up my project and maybe strip it down a bit to get a simple reproduction of the behavior; just doing that may help me come across a problem, and it would be friendlier to look at if I send it to you. I'll let you know how it goes.

  • Here is a pretty simple demo project with everything stripped out except a tileset, map based on my array, and a problem 10x10 grid. This project actually behaves differently from run to run, each time with different tiles messed up, so I'm really confused. Same behavior on both laptop and desktop.

    Let me know if you see anything obvious. Thanks!

    https://app.box.com/s/x8t88ej001hivrh7e1oys9op92r0g8yl


    Thanks for sending this repro case project.

    The weird thing, as you mentioned, is that the issue is not consistent. I've narrowed it down to the index stored in Blue/Alpha not correctly being interpreted depending on the value stored in the other half of the pixel, and only for a range of values too.

    For example, having 62 in both RG and BA of a pixel would work perfectly, however, as you mentioned, having 248 for RG and 62 for BA would bring that issue.

    That doesn't make much sense to me, to be fair. In any case, I found a workaround: making sure the computed indices are rounded after unpacking, which should have been the case from the beginning to avoid any imprecision.

    In order to fix your shader, replace the lines:

      // Computes index
      float index = 255.0 * ((256.0 * value.x) + value.y);
    

    with

      // Computes index
      float index = round(255.0 * ((256.0 * value.x) + value.y));
    

    and it should now be working correctly (at least it did for me).

  • Wonderful, thanks so much for finding that! It indeed works like a charm now. My debugging led me to the shader but I wasn't completely confident, so I'm glad to know I was headed in the right direction. I need to spend a bit of time working with shader code so I don't have such a weak spot. I'm curious whether there are any good techniques for debugging shaders? For a problem like this rounding issue, in normal CPU code I'd examine the values in a debugger to find out what is happening. Is there no way to do that with a shader?

  • I'm glad it's now working for you as well!

    You can debug shader code using some GPU tools such as RenderDoc and similar. However their OpenGL support greatly varies from one tool to the other.

    In the end I tend to just manually examine the code and check the content of textures, including the result (you can dump the screen's content like any other regular texture with orx).

  • Thanks, I'll check that out. Being able to dump the textures is definitely a help. I went ahead and redid the physics with my own version as you suggested, which turns out to be pretty easy using the map grid. It seems to be working well, so I think all the questions I had in this thread are pretty well handled. I may come back with more when I try some lighting experiments, but I'll start another thread for that. I've checked out the lighting tutorial, so I'll start from there and see what I can do; I'm hoping to add some torch or lantern type lights at some point.
    thanks again

  • My pleasure, I'm glad you're making good progress.
    Don't hesitate if/when you have new questions!
