Hi, I've been trying to use the [Resource] WatchList feature, but it's giving me trouble.
I've tried to use it for hot-swapping my textures, but any image file that's not ridiculously small gets corrupted when I try to change it at runtime.
I've dug into the problem a bit and realized that orxDisplay_LoadBitmap (src/display/orxDisplay.c:385) is being called too soon. I think as soon as I click export in GIMP and the data starts being written to disk, orxDisplay_LoadBitmap gets called and receives incomplete image data. For .bmp files this manifests as a texture that's missing parts of the image; for .png files, I get
[ASSERT] [orxDisplay.c:orxDisplay_GLFW_LoadBitmapData():1091] [ASSERT] : <(s64Size > 0) && (s64Size < 0xFFFFFFFF)>
I've further confirmed my "loading too soon" hypothesis by first saving the image under a different name and then renaming it to the texture's filename. Done this way, there was no trouble reloading the texture.
All of this happened on Ubuntu 14.04 with a 30.2kB 200x84 .png image.
The way the watch works is by monitoring the modified timestamp of a file and comparing it with the one in cache. Have you tried saving the file with software other than GIMP?
To my understanding, the modified timestamp shouldn't change until fflush() or fclose() is called by the code that saves the image. At least that's how it works on Windows.
Now, as a workaround, the only thing I can think of would be to start a timer when a modified timestamp is detected, and reset this timer every time the timestamp changes again. Only when the timer has expired would we reload the asset. Not the best plan, but it might work.
If you think of a cleaner way to achieve the same thing, please let me know!
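For what it's worth, the timer idea needs very little state per watched file: remember the last timestamp seen and when it last changed, then reload only once the timestamp has been stable for a configurable delay. A minimal C sketch (all names and the delay value are hypothetical, not actual orx code):

```c
#include <time.h>

#define STABLE_DELAY 1.0 /* seconds; would come from the [Resource] config section */

typedef struct
{
  time_t stLastMTime;   /* Last modification time we saw */
  double dChangeTime;   /* Wall-clock time at which it last changed */
  int    bDirty;        /* A change is pending, waiting to stabilize */
} WatchEntry;

/* Called every poll with the file's current mtime and the current time;
 * returns 1 when the asset should be (re)loaded */
int ShouldReload(WatchEntry *pstEntry, time_t stMTime, double dNow)
{
  if(stMTime != pstEntry->stLastMTime)
  {
    /* Timestamp moved: remember it and (re)start the timer */
    pstEntry->stLastMTime = stMTime;
    pstEntry->dChangeTime = dNow;
    pstEntry->bDirty      = 1;
    return 0;
  }
  if(pstEntry->bDirty && (dNow - pstEntry->dChangeTime >= STABLE_DELAY))
  {
    /* Timestamp has been stable long enough: reload now */
    pstEntry->bDirty = 0;
    return 1;
  }
  return 0;
}
```

Every write that bumps the timestamp pushes the reload back, so a slow export only triggers a single reload once the file has gone quiet.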
iarwain wrote: Experimenting a bit more, I've come to the conclusion that it's probably not gimp's fault. Here's what I've tried:
- Instead of renaming, I've tried to copy a file (much larger than 30kB) overwriting the currently shown texture, there was no problem.
- In order to further eliminate a possible copy-on-write going on at the OS level, I've also tried to convert a .jpg to a .bmp and convert it back, overwriting the texture, still no problem.
- But then I wanted to see what would happen if a write took a significant amount of time, so I wrote a simple program that reads a source file, opens the target file, copies half of it, sleeps 2 seconds, then copies the rest and closes the file. This time Orx read a corrupt file again, even though the actual file was intact at the end.
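A sketch of such a slow writer (hypothetical names; the original slept 2 seconds, here the delay is a parameter so it can be varied): it copies the first half of the file, waits, then copies the rest, so the target spends the whole delay in a half-written state while the final content stays identical to the source.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int SlowCopy(const char *zSrc, const char *zDst, unsigned int uSeconds)
{
  FILE *pstIn, *pstOut;
  char *pBuffer;
  long lSize, lHalf;
  int iResult = -1;

  pstIn = fopen(zSrc, "rb");
  if(pstIn == NULL) return -1;
  pstOut = fopen(zDst, "wb");
  if(pstOut == NULL) { fclose(pstIn); return -1; }

  /* Reads the whole source into memory */
  fseek(pstIn, 0, SEEK_END);
  lSize = ftell(pstIn);
  fseek(pstIn, 0, SEEK_SET);
  pBuffer = malloc((size_t)lSize);
  if((pBuffer != NULL) && (fread(pBuffer, 1, (size_t)lSize, pstIn) == (size_t)lSize))
  {
    lHalf = lSize / 2;

    /* Writes the first half and flushes it so it actually hits the file */
    fwrite(pBuffer, 1, (size_t)lHalf, pstOut);
    fflush(pstOut);

    /* During this window, a naive watcher sees a truncated file */
    sleep(uSeconds);

    /* Writes the remainder; the file is only complete after this */
    fwrite(pBuffer + lHalf, 1, (size_t)(lSize - lHalf), pstOut);
    iResult = 0;
  }

  free(pBuffer);
  fclose(pstIn);
  fclose(pstOut);
  return iResult;
}
```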
I remember reading a discussion of this problem a while ago, and someone said that the only "reliable" way is to check the file a few times with a delay, and call it changed only when successive reads return the same content. Admittedly, that someone was working on code whose primary function was keeping track of files, and maybe we don't need to complicate the implementation that much for a development-only feature. I think the timer-based solution would be enough, especially if the delay can be configured in the [Resource] section.
Should I open an issue on bitbucket?
We'd then have to open exclusive write access; when we get it, we'd open read access, and when the read is done we'd release the write access. It feels slightly overkill and I'm not sure how badly it'd impact performance (the resource watch happens on the main thread). But if the simple solution isn't good enough, we can always turn to this one.
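On POSIX, the closest thing to that handshake is probably an advisory flock(): it only guards against writers that also take the lock (so it wouldn't actually stop GIMP unless GIMP cooperated; Windows share modes are the real equivalent of the exclusive-access idea), but it shows the shape of the scheme. A hypothetical sketch, not orx code:

```c
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

/* Tries to grab an exclusive advisory lock on the file. On success,
 * returns a descriptor the caller may read from, then must pass to
 * ReleaseLock(); on failure (lock held elsewhere, i.e. a cooperating
 * write is presumably in progress), returns -1. */
int TryLockForRead(const char *zPath)
{
  int iFd = open(zPath, O_RDONLY);
  if(iFd < 0) return -1;
  if(flock(iFd, LOCK_EX | LOCK_NB) != 0)
  {
    /* Someone else holds the lock: back off and retry next poll */
    close(iFd);
    return -1;
  }
  return iFd;
}

void ReleaseLock(int iFd)
{
  flock(iFd, LOCK_UN);
  close(iFd);
}
```

The non-blocking LOCK_NB flag matters here: since the watch runs on the main thread, the check must fail fast rather than stall the frame waiting for a writer to finish.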
I was also thinking of trying to find all file descriptors opened on a given file, but in my search I was only able to find those either globally or per-process, which would make the search potentially expensive.
As you said, enobayram, it's a debug feature so if we can weed out most issues with simple and fast code, I'm all for this. So yes, please open an issue on bitbucket for me. I'll look into it when I'm done with WebP and resource leak tracking.
I don't think it would be too slow; as it's a development-only feature, the performance should be OK. Also, you could always do it on another thread. In fact, I think it should be done on another thread, which would then only signal to the main thread which files have been modified.
If things end up in a separate thread, that wouldn't be a problem. Unfortunately the watch system currently runs on the main thread, and given that it's a dev feature, I don't really have any high-priority task for moving it elsewhere.
I did end up with a similar problem in Sublime Text when keeping a .png image open in a tab and re-exporting it with graphviz/dot. Once in a while, the tab would refresh with a "can't load image" message instead of the actual content. But apparently Sublime is smart enough to retry a bit later, and the file ends up being displayed eventually. I'll try to be slightly more proactive than that with orx.
That would mean we'd need to suppress the resource load assertions while watching files, though. As it is, it would bail out (for a .png) in debug mode on the first try.
How would you see the additional size-related logic?
I assume that you're periodically going through the files in the WatchList and checking whether any of them has a more recent modified timestamp than the one recorded the last time we loaded it. And I assume further that this is causing trouble, since the modified timestamp is only updated when the file is fopen()ed.
So, while the write to the file is still in progress, we load it in an incomplete state, and we never load it again later, since the file is never fopen()ed again and its modified timestamp is never updated, even though the file has been appended to.
That's the only explanation I could come up with for why we're loading half-empty textures and not reloading them once they've been written to further.
As a solution to this, I've suggested that we periodically check the file size as well as the modified-timestamp, and reload the file if any of the two changes. Since additional content will have been appended to the file since the last time we've checked, this will enable us to eventually load the final contents. This might even be what Sublime Text is doing.
The way it'd work would be to sometimes reload during a write, but then reload again later once the size has changed?
I see two problems with this:
- at some point we'll be reloading stale data, and that can lead to undefined results
- some programs (I don't know whether it's still common or not) used to reserve the full size of the file on disk before writing its content, to make sure there's enough room at the destination
Granted with much larger storage used nowadays, that approach isn't really necessary anymore, but that doesn't mean it's not used by some.
I might have misinterpreted your suggestion, but if I haven't, I think I prefer the timer-based approach.
About reserving the size: I didn't know about that, and it completely invalidates my proposal...
In any case, I totally agree that your timer-based approach is better, I just thought it might be harder to implement, so I wanted to suggest an alternative that only requires an extra if statement.
Shouldn't be too bad to add the timer logic in the coming days.
Nice work as always!