orxanimeditor

edited November 2014 in Projects - Tools
I've successfully built and tested this thing. It was created by Enis Bayramoglu (enobayram), who isn't active at the moment. It looks pretty cute, if anyone is interested. All Java code.

Comments

  • edited November 2014
    Nice!

    Do you think you could upload the .jar somewhere? I'll add it to the download page of the bitbucket project.
  • edited November 2014
    Doh. I just realized it was already there all this time. I should have checked earlier.
    https://bitbucket.org/orx/animationeditor/downloads/OrxAnimationEditor.jar
  • edited December 2014
    I might not be active on the forums but I'll gladly accept feature requests and bug reports :)
  • edited December 2014
    I only opened this application for the first time today.

    Wow. Some decent effort went into this.

    enobayram, I have a list of ideas for you that would make this editor much more flexible for working with existing projects. I'll pop you a PM.

    I think it really deserves some love.
  • edited December 2014
    Nice to see you around, enobayram! :)
  • edited December 2014
    I've actually regularly lurked the forums all this time :) but I'm also glad to be interacting with you again.

Sausage has shown a great deal of love for the animation editor, and came up with a nice set of suggestions. I'm trying to implement them as I find the time (and the energy). The toughest (and the coolest) one is to use the orx config module itself to parse the .ini files of existing projects, so that people can use the editor as a drop-in tool. We've been discussing various approaches, but one common obstacle is whether there are precompiled binaries for all platforms and both bit sizes (32 and 64). In particular, I couldn't be sure whether there are Win64 binaries. I know Win64 can run 32-bit executables, but a 64-bit Java virtual machine can't load 32-bit DLLs.

    BTW, can orx config write back to the source .ini files? Also, can I query where exactly a configuration value comes from? I mean, say I have a config section X that inherits from section Y and receives the field F from there. Can orx config tell me that F of X comes from Y?
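To make the question concrete, here's roughly the lookup I have in mind, as plain Python. This is only a toy model of section inheritance; the section/field names and the function are invented, not the real orx API:

```python
# Toy model of orx-style config section inheritance (not the real orx API).
# Each section may name a parent; a field lookup walks up the chain and
# reports which section actually provided the value.

sections = {
    "Y": {"parent": None, "fields": {"F": "42"}},
    "X": {"parent": "Y", "fields": {}},  # X inherits F from Y
}

def get_value_origin(section, field):
    """Return (value, origin_section), or (None, None) if absent."""
    while section is not None:
        data = sections[section]
        if field in data["fields"]:
            return data["fields"][field], section
        section = data["parent"]
    return None, None

value, origin = get_value_origin("X", "F")  # F of X actually comes from Y
```

So the editor would want the equivalent of that `origin` result from orx.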

    Cheers!
  • edited December 2014
    Well I'm glad to learn that you were never far then! :)

    I saw your commits related to Sausage's suggestions but haven't had the opportunity to really look into it yet (probably not before coming back from vacation).

There are no precompiled 64-bit binaries, but that's simply because I haven't installed a Visual Studio capable of producing them yet; there shouldn't be any problem compiling them (after all, it works on Linux and OSX). I can look into it when I come back, as I'll probably install the VS 2013 Community edition and add all the appropriate binaries, including 64-bit.

    Orx can write back to the originating .ini file (it's a parameter in the orxConfig_Save() function).
However, note that you'll lose any special indentation and comments.

Orx currently can tell you if a value is inherited or not (orxConfig_IsInheritedValue()), but it won't tell you the actual source section. I can easily add such an accessor if you need it (the info is available, it's just not exposed).

    Cheers! =)

    iarwain
  • edited January 2015
    Happy new year!

It'd be great if the Win64 orx.dll were available and up-to-date online. Then the editor could just download the right file based on the currently running installation. I'm planning to access the functionality in the orx binary through Java Native Access, so I won't have to compile native binaries for every platform.
iarwain wrote:
Orx can write back to the originating .ini file (it's a parameter in the orxConfig_Save() function).
However, note that you'll lose any special indentation and comments.

Will I lose them on the modified line or the entire file?

iarwain wrote:
I can easily add such an accessor if you need it (the info is available, it's just not exposed).

That'd be great if it's not too much trouble. I think that information should be exposed to the user of the editor somehow.

    Cheers!
  • edited January 2015
    I'll try to have the win64 binaries up in a week or so.

As for the .ini file, you don't lose the content, just the comments and indentation; and yes, on the whole file (the whole file is rewritten). If your file doesn't contain any manual modifications, that shouldn't really matter.
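To illustrate with a standalone Python sketch (not orx's actual implementation, and the file format here is simplified): a naive parse-then-save pass keeps the values but drops comments and custom indentation, which is essentially the behavior described above.

```python
# Round-tripping an .ini-style file through parse + save: values survive,
# comments and hand-written indentation do not.

original = """; my hand-written comment
[Section]
    Key   = Value   ; inline note
"""

def parse(text):
    config, section = {}, None
    for line in text.splitlines():
        line = line.split(";")[0].strip()  # comments are discarded here
        if not line:
            continue
        if line.startswith("["):
            section = line.strip("[]")
            config[section] = {}
        else:
            key, value = (part.strip() for part in line.split("=", 1))
            config[section][key] = value
    return config

def save(config):
    lines = []
    for section, fields in config.items():
        lines.append(f"[{section}]")
        lines += [f"{key} = {value}" for key, value in fields.items()]
    return "\n".join(lines) + "\n"

rewritten = save(parse(original))  # no comments, canonical indentation
```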

    I'll add the accessor next week, when back from vacation. Can you open an issue for that on bitbucket and assign it to me?
  • edited January 2015
    As you probably already know, the accessor was added last week.
    I added Win64 binaries (and builds to the build machines) yesterday, using the VS2013 setup.
    There's no permanent binary online, but if you tell me where you'd like them to be, I can make sure the nightly builds are sent there. The current Win64 nightly build can be fetched here: http://sourceforge.net/projects/orx/files/orx/nightly/orx-dev-vs2013-64-nightly-2015-01-12.zip/download (the link only works for 24h, till the next nightly pass is done).
  • edited January 2015
    Hi, first of all, thanks a lot for adding the accessor so fast. It'll be very useful once I get back to the animation editor.

As for the 64-bit binaries, I've started to think that downloading the appropriate orx binary based on the user's environment is probably not a good idea. First of all, the config handling of the editor will be deeply coupled to the orx version I compile it against, so downloading at runtime has no functional benefit such as the ability to choose an orx version. Another reason is that it'll probably be much easier for me to simply pack all 6 binaries (3 platforms x 2 bitnesses) into the editor .jar file.

So, since the binaries will be packed into the .jar at compile time, there isn't much need for automation. I'll add a step to the build instructions: download all 6 precompiled binaries and extract them somewhere the build script can collect them from and add them to the .jar.

In conclusion, the way they're currently organized on SourceForge is perfectly fine.

    BTW, just out of curiosity, is there any reason you're distributing the development bundles separately as vs2012, vs2013 etc.? Since orx is a C library, I'd expect the binaries from different compilers on the same platform to be compatible with each other. Am I missing something?
  • edited January 2015
    I see.

    As for the multiple versions, you'd expect such compatibility but that wasn't always the case, especially between vs2005 and vs2008.
    I haven't checked more recent versions but I got in the habit of doing this and it simplifies the building/packaging process as well.
  • edited January 2015
    Hi Everyone!

    The Orx Animation Editor needs YOU! :)

As you know, these days I'm attempting to call the orxConfig functions from Java, so that I can use them to parse the .ini files of existing projects. I'm trying to do that through a tool called JNAerator, which parses your C headers and emits pure Java files that can call into your C binary, without requiring you to compile any extra native glue binaries (unlike JNI). In theory, the emitted .java files are platform-independent, and they should be able to call the functions from binaries compiled for the currently running platform. In the specific case of Orx, though, I'm worried that this might not work, since Orx uses billions of compile-time switches, which in effect make the headers themselves platform-dependent.
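For the curious, Python's ctypes works on the same principle as JNA: the managed runtime dlopens a C library and calls into it directly, with no compiled glue in between. A tiny self-contained illustration of such a glue-free native call, using libc's strlen rather than orx (a POSIX-only sketch):

```python
import ctypes

# On POSIX, CDLL(None) opens the running process itself, whose symbols
# include libc's -- so strlen() is reachable with zero generated glue code,
# which is exactly the appeal of the JNA/JNAerator approach for Java.
libc = ctypes.CDLL(None)
libc.strlen.restype = ctypes.c_size_t
libc.strlen.argtypes = [ctypes.c_char_p]

length = libc.strlen(b"orx")  # a direct native call from managed code
```

The catch, as with JNA, is that the declared signatures must exactly match what the binary was compiled with, which is where platform-dependent headers hurt.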

Anyway, it seems that I've managed to call the orxConfig functions from Java on my development machine (Linux x64), and I've prepared an orxjnatest.jar file that contains the orx binaries for all the desktop platforms, as well as a simple Java class that tests the relevant orxConfig functions.

I've uploaded the orxjnatest jar to this link along with a test.ini file that the test tries to read from. I'd be glad if you could run it on your platform and see if it works. If it does, it will create pop-ups that say "testval read as float is: 45.0", "testval read as S32 is 45", and similar ones for U32, S64, U64 and String.

    Thanks!
  • edited January 2015
    Just tried on OSX 10.9 64bit and it worked flawlessly! Nice work. :)
  • edited January 2015
Thanks for testing it iarwain! I'm glad it worked on OSX. I've also managed to try it on Win 8 64-bit, and it works there as well. However, just as I feared, it failed miserably on 32-bit Linux. That's probably due to the fact that orx changes the function signatures based on platform-dependent compile-time switches.

I've not given up on JNAerator yet though, since it works so well when it does work. I'll try to preprocess the orx headers, manually defining key symbols to mimic a 32-bit Linux environment, and then run JNAerator on the result, obtaining a 32-bit-compatible interface into the orx binary. Then I'll make sure the right interface gets used at runtime. This will probably be tricky, since the Java signatures will change this time. Still, it sounds like a nice challenge :)
  • edited January 2015
    I'll try it on Win7 64-bit tonight as well, but I don't expect any surprise. ;)

    Regarding the compile flags, I can modify orxDecl.h so that you could manually override the calling conventions, hence removing some of the differences between platforms. You'd need to compile orx yourself with your own defines for orxFASTCALL, orxSTDCALL and orxCDECL, but that shouldn't be a problem.
  • edited January 2015
    iarwain wrote:
    Regarding the compile flags, I can modify orxDecl.h so that you could manually override the calling conventions, hence removing some of the differences between platforms. You'd need to compile orx yourself with your own defines for orxFASTCALL, orxSTDCALL and orxCDECL, but that shouldn't be a problem.

    Thanks for the offer, but that would defeat the goal of not having to compile any native binaries. You know how native binaries complicate the build system for a cross-platform project. I'll try to run JNAerator on the orx headers, mimicking a Win32 environment through predefined symbols, since that's the most demanding platform. I'll then try to combine the generated 32-bit and 64-bit java interfaces under a more general interface. Is orxDecl.h the only place that influences the function signatures and the calling conventions?
  • edited January 2015
    Yes, everything should be in orxDecl.h.
    As for compiling the binaries, we could use orx's build machines to provide them to you if need be.
  • edited January 2015
    iarwain wrote:
    As for compiling the binaries, we could use orx's build machines to provide them to you if need be.
    Great, thanks, that would be a nice fallback strategy :)
  • edited January 2015
By the way, I have a tangentially related question. The last time I checked the downloads page (probably well over a year ago), I don't remember seeing the Linux binaries. When I saw them this time, I was pleasantly surprised. May I ask what steps you've taken to make sure that the binaries work across distributions? Or DO THEY work across distributions? Did they just work for me because I have a pretty standard distribution (i.e. Ubuntu 14.04)?
  • edited January 2015
I think that if you use ABI-compatible binaries, they should work on most Linux distributions (provided you have all the dependencies installed, of course).
  • edited January 2015
    Trigve wrote:
    I think that if you use ABI compactible binaries it would work on most linux distributions (of course if you've all the dependencies installed).
Thanks for the answer, but I wonder how you make sure that they are ABI compatible. The Linux64 orx binary is just 4.2MB, and it probably relies on a lot of shared objects being on the load path. Some of these OS dependencies may or may not be ABI compatible across distros, and some of them might only guarantee a certain subset of their API to be ABI compatible. You can, for instance, specially prepare your binaries to remove symbols found only in newer versions of glibc, to be able to run them on very old systems. I'm curious about the steps iarwain thought would be enough to support a reasonable range of distros.
  • edited January 2015
    Mmh, weird, the linux libraries should have been there, even a year ago. They might have been released a day or two later than other platforms (as the whole process took me 6h of work), but in the end everything should have been available.

I do not do anything at orx's level, really. However, when releasing Little Cells, in addition to a small script that would create a desktop entry and select the correct architecture between x86 & x64, I'd package the extra dependencies as well:
    - libstdc++
    - libsndfile
    - libopenal
    - libgcc_s

    That's about it.
  • edited April 2015
    Hi Iarwain,

I've tried to get the JNAerator-based Java-Orx interface working for a while, but I could never get it to work properly on 32-bit systems. So I've decided to either (1) compile orx with Emscripten to JavaScript and run the result inside Java's standard JavaScript interpreter or (2) use SWIG to generate interface code. After seeing that knolan's orxEditor also needs similar bindings (for Python), I've thought maybe it's best to go with SWIG and generate bindings for any language we like.

    If we decide to do it this way, I'd need to get the SWIG generated C/C++ sources compiled for orx's targets (only the desktop ones initially). Are you still willing to use the build server for this purpose?

    If so, how would you like to proceed? In the end, we'll have a SWIG interface definition file and a script to generate all the sources for the bindings. The generation needs to be done once, but the sources need to be compiled for each platform.

    For example; for python, SWIG will generate:
    * orx_python_bindings.cxx
    * orx.py

    Then we need to compile orx_python_bindings.cxx for each platform, and package and distribute the binaries along with orx.py.
  • edited April 2015
    enobayram wrote:
    Hi Iarwain

    Hi Enobayram! :)
I've tried to get the JNAerator-based Java-Orx interface working for a while, but I could never get it to work properly on 32-bit systems. So I've decided to either (1) compile orx with Emscripten to JavaScript and run the result inside Java's standard JavaScript interpreter or (2) use SWIG to generate interface code. After seeing that knolan's orxEditor also needs similar bindings (for Python), I've thought maybe it's best to go with SWIG and generate bindings for any language we like.

    If we decide to do it this way, I'd need to get the SWIG generated C/C++ sources compiled for orx's targets (only the desktop ones initially). Are you still willing to use the build server for this purpose?

    Of course!
    If so, how would you like to proceed? In the end, we'll have a SWIG interface definition file and a script to generate all the sources for the bindings. The generation needs to be done once, but the sources need to be compiled for each platform.

    For example; for python, SWIG will generate:
    * orx_python_bindings.cxx
    * orx.py

    Then we need to compile orx_python_bindings.cxx for each platform, and package and distribute the binaries along with orx.py.

That's an excellent question. I'd love to package and distribute wrappers, and not only for Python, but I haven't given much thought to package naming, nor to where the SWIG scripts should live in the hierarchy. Maybe under the code/build folder? I have 0 experience with SWIG, so I don't know if there's any requirement at this level.

    I'll be happy to modify the buildbot script once we have something working.
  • edited April 2015
Wow, SWIG has been as cooperative as usual! I've already completed the config bindings for Java and Python. The interface is not very natural to the target languages ATM, in that you need to write C-like code:

    Python example:
    v = orxVECTOR()
    orxConfig_GetVector("key",v)
    

    while it would have been much nicer (and easily accomplished with SWIG) to write:
    v = orxConfig_GetVector("key")
    

But at least SWIG has done all the grunt work of crossing the language borders. I guess we can improve the bindings for each language over time, but we have something to work with for now.
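For the record, the nicer form can always be layered on top later with a thin shim over the C-style call. A sketch with the SWIG-generated symbols stubbed out (the orxVECTOR and orxConfig_GetVector below are fakes standing in for the generated module):

```python
# Stand-ins for the SWIG-generated symbols, just to show the wrapping idea.
class orxVECTOR:
    def __init__(self):
        self.fX = self.fY = self.fZ = 0.0

def orxConfig_GetVector(key, v):
    # Fake: the real generated function would fill v from the loaded config.
    v.fX, v.fY, v.fZ = 1.0, 2.0, 3.0

def get_vector(key):
    """Pythonic wrapper: return the vector instead of filling an out-param."""
    v = orxVECTOR()
    orxConfig_GetVector(key, v)
    return v

v = get_vector("key")  # the one-liner form from the example above
```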

    BTW, the current SWIG interface description file is so simple that it could be used for any of the other languages that SWIG supports (including clisp, csharp, d, go, lua, ocaml, php, ruby and others).

Creating bindings for the rest of Orx will probably take a bit longer, since exposing callback registration to the target language is somewhat trickier, but it's no big deal.

    iarwain wrote:
    Of course!
    Great! :)
    That's an excellent question. I'd love to package and distribute wrappers, and not only for Python, but I haven't given much thoughts about package naming nor where the SWIG scripts should be in the hierarchy. Maybe under the code/build folder? I have 0 experience with SWIG, so I don't know if there's any requirement at this level.

I guess I know one half of the equation and you know the other, so let's discover together, shall we? :)

    I've attached a zip https://forum.orx-project.org/uploads/legacy/fbfiles/files/orxbinding.zip containing the following files:
    /
     CMakeLists.txt
     orx.i # The swig interface definition file
     test.ini # a small ini file for test purposes
     test.py # a small python test script
     cmake/
           FindORX.cmake # A cmake find file to find ORX (used by CMakeLists.txt)
     build/
           orxPYTHON_wrap.cxx # The python wrapper generated by SWIG
           orxJAVA_wrap.cxx # The java wrapper generated by SWIG
           orx.py # The python module generated by SWIG
           *.java # The java class files for the SWIG generated java module
    

    As you might have noticed, I've included some files from my cmake build folder "build" in case you'd like to try it without installing cmake or SWIG, but here's how you'd do it from scratch:
    mkdir build
    cd build
    cmake .. -DGENERATE_PYTHON_BINDINGS=TRUE 
             -DGENERATE_JAVA_BINDINGS=TRUE 
             -DORX_DIR=<path_to_orx_root_folder>
    make
    python ../test.py
    

    If you'd like to just try the pre-generated sources I've sent you, please compile the .cxx files using something similar to how cmake does it (the names of the binaries are important etc.):

    For Python:
    /usr/bin/c++   -D_orx_EXPORTS -fPIC -I<ORX_DIR>/dev-linux64/include -I/usr/include/python2.7 -o orxPYTHON_wrap.cxx.o -c orxPYTHON_wrap.cxx
    
    /usr/bin/c++  -fPIC    -shared -Wl,-soname,_orx.so -o _orx.so orxPYTHON_wrap.cxx.o  -L<ORX_DIR>/orx-1.6/dev-linux64/lib -lorx -Wl,-rpath,<ORX_DIR>/dev-linux64/lib
    

    I'll be happy to modify the buildbot script once we have something working.

    My notes about the buildbot script:
    1. I think SWIG should only be run in one place, and the generated .cxx files should be compiled on all the build slaves. Running SWIG on different computers runs the risk of generating slightly different interfaces, which will be a big problem since the users of the binding must see a single cross-platform, say, .py file.
    2. We should gather all the compiled binaries and package them into a single library for the target language. I've tried this for Java and it works quite nicely. In the end you get an innocent looking .jar that contains everything for every platform. Naturally, this step will be quite language-dependent.
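Since a .jar is just a zip archive, the packaging step could look roughly like this (a Python sketch with made-up file names and contents, standing in for the real build script):

```python
import io
import zipfile

# Pretend these are the six per-platform orx binaries gathered from the
# build slaves (contents faked; the natives/ layout is illustrative only).
binaries = {
    "natives/linux32/liborx.so": b"...",
    "natives/linux64/liborx.so": b"...",
    "natives/win32/orx.dll": b"...",
    "natives/win64/orx.dll": b"...",
    "natives/osx32/liborx.dylib": b"...",
    "natives/osx64/liborx.dylib": b"...",
}

# A .jar is a zip file: write every binary into one archive, so the editor
# (or binding) can extract the right one for the running platform.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as jar:
    for name, data in binaries.items():
        jar.writestr(name, data)

packed = zipfile.ZipFile(io.BytesIO(buffer.getvalue())).namelist()
```

The language-dependent part is mostly the layout convention inside the archive and the load-time extraction, not the archiving itself.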
  • edited April 2015
    Nice work!

    I did try to play around a bit with SWIG at about the same time, two days ago, but it was my first contact with it so I had a very blunt approach.

    Here's the .i I wrote, which contains some windows-specific defines that I thought would be given to the command line instead (as well as the inclusion of windows.i, which should be conditional upon said defines).

    https://forum.orx-project.org/uploads/legacy/fbfiles/files/orx-a5be451010123579863dcf5e8f8c1664.zip

There's no language-specific idiom, but, aside from some warnings, it looked like it was able to generate valid wrappers for the languages I tried (python, lua and go).

    I was also thinking of excluding all the "private" API in orx, in all the .h files, from __orxEXTERN__ to help with the process.

    Now regarding the build steps you mentioned:

1- if we want to generate the wrappers on a single build machine, they'll have to be part of the hg repository and regenerated every time the headers change, very similar to the way the doxygen doc is currently maintained.

2- this is a bit more problematic as the build machines are not up all the time (the OSX/iOS ones are actually almost never up) and doing such inter-dependencies in buildbot is rather tricky, albeit not infeasible. Also, when you say:
    We should gather all the compiled binaries and package them into a single library for the target language.
    How do you package windows/osx/linux binaries into a single library? Into a single package, I could see, but into a single library, I'm not sure how it works.
A first step could be to have separate packages per target architecture (i.e. windows/osx/linux for all the languages), like it's apparently done by some other libraries (I just checked SFML and that's the approach they've taken)?
  • edited April 2015
    iarwain wrote:
    Here's the .i I wrote, which contains some windows-specific defines that I thought would be given to the command line instead (as well as the inclusion of windows.i, which should be conditional upon said defines).

Nice try for a first attempt :) Even though SWIG is quite smart, it still requires some hand-holding. For instance, it's smart enough to wrap a (char *) as a string in the target language, but it really doesn't know what to make of a char **. That's why I have the bit that goes:
    %inline %{
    orxSTATUS orxConfig_SetListString(const orxSTRING key, std::vector<std::string> list) {
      std::vector<const orxSTRING> pointers;
      for(int i=0; i<list.size(); ++i) {
        pointers.push_back(list[i].c_str());
      }
      return orxConfig_SetListString(key, pointers.data(), pointers.size());
    }
    %}
    

Because SWIG knows what to do with a vector<string> (thanks to %include "std_vector.i").

    Also, I've been reluctant to show it orxDecl.h and orxType.h directly, as that, in my mind, runs the risk of leaking something platform dependent to the generated wrappers. I instead want it to use the broadest types in the wrappers by lying to it about #defines such as orxFLOAT. In the end, the C compiler will see the true #defines for each platform, and compile the wrappers correctly.

    By the way, why did you need to include windows.i? In general, I think we need to keep the .i files completely platform-independent. Think about this: almost all the target languages are platform agnostic. So, f.x. if we're going to generate Python bindings, those bindings should work exactly the same way on all the platforms. In the end, the user's Python codebase will see a single orx.py, and it shouldn't matter which platform was used to generate that file. A single orx.py, multiple _orx.{so,dll,dylib}s. The only way to make that work, is to generate the wrappers on a single platform, and compile the very same generated .cxx on all the platforms (and make sure that it does compile).
There's no language-specific idiom, but, aside from some warnings, it looked like it was able to generate valid wrappers for the languages I tried (python, lua and go).

Staying language-agnostic has the huge benefit of being able to generate bindings for any language; we can also, in time, focus on some languages and make the bindings more natural via conditionally included interface code.
    I was also thinking of excluding all the "private" API in orx, in all the .h files, from __orxEXTERN__ to help with the process.

    Can you give a specific example of a function you'd like excluded? One option is to %ignore them individually, but if we can state a pattern in the function signature, we might also be able to ignore them all at once.
1- if we want to generate the wrappers on a single build machine, they'll have to be part of the hg repository and regenerated every time the headers change, very similar to the way the doxygen doc is currently maintained.

2- this is a bit more problematic as the build machines are not up all the time (the OSX/iOS ones are actually almost never up) and doing such inter-dependencies in buildbot is rather tricky, albeit not infeasible.

So we have two challenges: getting the wrapper .cxxs into the build machines, and getting the binaries out of them. I guess we can manage the getting-in bit by making one build machine upload the wrappers to a common repository and having the others pull from there. Another option could be to let each of them run SWIG independently, while making sure that they generate the exact same wrappers.

How would you feel about handling the getting-out bit "manually"? I mean, it could be triggered manually for each release, once we know that all the slaves have uploaded their binaries.
    How do you package windows/osx/linux binaries into a single library? Into a single package, I could see, but into a single library, I'm not sure how it works.
A first step could be to have separate packages per target architecture (i.e. windows/osx/linux for all the languages), like it's apparently done by some other libraries (I just checked SFML and that's the approach they've taken)?

    Sorry, by a "single library", I meant a single package. Or leaving terms aside, I'd want a single entity, that works across platforms. I've just checked SFML, and they indeed have separate packages per architecture (which I dislike) for Python. On the other hand, their Java bindings are more like what I'd prefer. A single .jar file that contains the binaries for all the platforms.

    IMHO, providing separate packages could be inconvenient for the users of the binding. For instance, if I make a game in Python, I'd like my users to just download the game and play, without needing to install a library for their platform. I actually don't know if people distribute python programs this way, so, it may be irrelevant for python, but in Java, downloading a single .jar and running it by double-clicking on it is common practice.
  • edited April 2015
    enobayram wrote:
Nice try for a first attempt :) Even though SWIG is quite smart, it still requires some hand-holding. For instance, it's smart enough to wrap a (char *) as a string in the target language, but it really doesn't know what to make of a char **. That's why I have the bit that goes:
    %inline %{
    orxSTATUS orxConfig_SetListString(const orxSTRING key, std::vector<std::string> list) {
      std::vector<const orxSTRING> pointers;
      for(int i=0; i<list.size(); ++i) {
        pointers.push_back(list[i].c_str());
      }
      return orxConfig_SetListString(key, pointers.data(), pointers.size());
    }
    %}
    
    Ah, I see. It sounds weird to me that it can easily convert char * but has trouble handling char**. It's nitpicking, but you might want to do a reserve() before all the push_backs. :)
    Also, I've been reluctant to show it orxDecl.h and orxType.h directly, as that, in my mind, runs the risk of leaking something platform dependent to the generated wrappers. I instead want it to use the broadest types in the wrappers by lying to it about #defines such as orxFLOAT. In the end, the C compiler will see the true #defines for each platform, and compile the wrappers correctly.

    Mmh, which parts concern you precisely?
By the way, why did you need to include windows.i? In general, I think we need to keep the .i files completely platform-independent. Think about it: almost all the target languages are platform-agnostic. So, e.g., if we're going to generate Python bindings, those bindings should work exactly the same way on all platforms. In the end, the user's Python codebase will see a single orx.py, and it shouldn't matter which platform was used to generate that file. A single orx.py, multiple _orx.{so,dll,dylib}s. The only way to make that work is to generate the wrappers on a single platform, and compile the very same generated .cxx on all the platforms (and make sure that it does compile).

Windows.i allows SWIG to gracefully handle all the calling conventions, declspec() tags, etc.
I was thinking of making its inclusion conditional on the __orxWINDOWS__ define. But if you'd rather redefine all the relevant content manually, I don't see any problem with that either.
Staying language-agnostic has the huge benefit of being able to generate bindings for any language; we can also, in time, focus on some languages and make the bindings more natural via conditionally included interface code.

I do think supporting target-language idioms will prove beneficial for the end users, when we can. That being said, I'm not the target audience, as it's unlikely I'll use any of those bindings myself. :)
    Can you give a specific example of a function you'd like excluded? One option is to %ignore them individually, but if we can state a pattern in the function signature, we might also be able to ignore them all at once.

Well, doing it via __orxEXTERN__ also benefits users consuming the C/C++ includes directly, not just the wrappers. Things like orx<Module>_Setup/_Init/_Exit are good candidates: their intent is definitely to be private, not public.
    So we have two challenges; getting the wrapper.cxxs into the build machines, and getting the binaries out of them. I guess we can manage the getting in bit, by making one build machine upload the wrappers to a common repository and the others pulling from there. Another option could be to let each of them run SWIG independently, while making sure that they generate the exact same wrappers.

    I see no problem with storing the wrappers directly with the source itself, on the same repository.
    Sorry, by a "single library", I meant a single package. Or leaving terms aside, I'd want a single entity, that works across platforms. I've just checked SFML, and they indeed have separate packages per architecture (which I dislike) for Python. On the other hand, their Java bindings are more like what I'd prefer. A single .jar file that contains the binaries for all the platforms.

The separate architecture packages could be a first step though, as they're easier to put together.
    IMHO, providing separate packages could be inconvenient for the users of the binding. For instance, if I make a game in Python, I'd like my users to just download the game and play, without needing to install a library for their platform. I actually don't know if people distribute python programs this way, so, it may be irrelevant for python, but in Java, downloading a single .jar and running it by double-clicking on it is common practice.

That part can always be handled by developers themselves: they can get all the versions and ship whichever combination they want to their end users. Like the current linux32/64 packages: they are separate .zip files, but usually people making games will retrieve both and ship both versions with their game. It is an extra step for the developer, but only once (when they retrieve the package), and it could simplify the package generation, at least at first.

    Regarding the build slave, if you look at code/build/buildbot/install.txt, all the relevant steps should be there. Lemme know if you have any issues.

    In your case, the slave would be named orx-mac-slave-enobayram and the password would be: pallas.
  • edited April 2015
    iarwain wrote:
    Ah, I see. It sounds weird to me that it can easily convert a char * but has trouble handling a char **. It's nitpicking, but you might want to do a reserve() before all the push_backs. :)
    Well, a char * could mean many things, but in the vast majority of cases it points to a null-terminated string, so SWIG takes the liberty of assuming as much. Besides, a char * is all you need to properly access a null-terminated string. For a char **, though, is it a null-terminated list of null-terminated strings? Is it the address of a pointer to a single null-terminated string? Or is it what it is in this case? So SWIG doesn't attempt anything fancy when it sees a char ** by default. You can make it wrap a char ** however you wish with some SWIG-fu (you can define typemaps that tell SWIG how to map types), but in this case I didn't think it was worth it for a single function.

    As for the "reserve", I agree: it's such an easy and harmless optimization that there's no excuse not to do it here. Aside from that, this code is going to talk to Python :). Besides, we're constructing an extra vector<string> in the first place, and I don't think it's possible to avoid that at all, since most of the target languages keep their strings as Unicode and, worse, not even null-terminated.

    In general, I really ignore most performance considerations while writing language bindings, since the activity is excessively wasteful to begin with.
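    To make the discussion concrete, here's a rough sketch (plain C++, all names made up; this is not actual SWIG output or the orx API) of the kind of glue a wrapper ends up doing for a char ** list of strings, with the reserve() applied before the push_backs:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical helper: copy a counted char** list into a
// std::vector<std::string>, the kind of conversion a SWIG
// typemap would emit for a list-of-strings parameter.
// reserve() avoids reallocations during the push_backs.
std::vector<std::string> ToStringVector(const char **list, std::size_t count)
{
    std::vector<std::string> result;
    result.reserve(count);
    for (std::size_t i = 0; i < count; ++i)
    {
        result.push_back(list[i]); // each entry is a null-terminated string
    }
    return result;
}
```

The copy into std::string is hard to avoid, as discussed above, since the target language's strings usually aren't null-terminated byte arrays anyway.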
    Mmh, which parts concern you precisely?

    Well, I'm probably not as comfortable with the orx codebase as you are, so whenever I see a platform-specific #define I'm afraid that it'll cause SWIG to emit different wrappers on each platform. Besides, I'd prefer to have complete control over how SWIG wraps, say, orxFLOAT, so that the wrappers work correctly and similarly on each platform.
    Windows.i allows SWIG to gracefully handle all the calling conventions, declspec() tags, etc...
    I was thinking of having its inclusion conditional to the __orxWINDOWS__ define. But if you'd rather redefine all the relevant content manually, I don't see any problem with that either.

    I see, I guess Windows.i would be essential in a codebase that has declspecs and such all around the codebase, but thanks to your consistent use of macros, we should be able to avoid that problem without it. As I said, I'd prefer to stay away from anything that implies a platform dependency, so a "#define orxFASTCALL // empty" feels much more innocent since we know it'll work the same way on all platforms.
    Well, doing it via __orxEXTERN__ is also beneficial to users of the C/C++ includes directly, not just for the wrappers. Things like orx<Module>_Setup/_Init/_Exit are good candidates for that; their intent is definitely to be private, not public.

    Ah, case in point, if those are private functions, what's the officially recommended way of using the config module in isolation? I've discovered that you need to call the following first:
    orxModule_RegisterAll()
    orxModule_SetupAll()
    orxModule_Init(orxMODULE_ID_CONFIG)
    
    I see no problem with storing the wrappers directly with the source itself, on the same repository.
    So you mean you'd prefer to keep the generated code in the repository? That really does greatly simplify the getting-in problem :)
    The separate architecture could be a first step though, as it's easier to put together.
    ...
    That part can always be done by the developer themselves...
    I definitely agree, I hate it when I unnecessarily complicate things :/. As Einstein said, “A clever person solves a problem. A wise person avoids it.” (Conclusion: I'm definitely not wise.) We could even manually upload the cross-platform binding packages for chosen releases, no need to complicate the build setup.
    Regarding the build slave, if you look at code/build/buildbot/install.txt, all the relevant steps should be there. Lemme know if you have any issues.

    In your case, the slave would be named orx-mac-slave-enobayram and the password would be: pallas.
    Great, I'll set it up as soon as possible.
  • edited April 2015
    enobayram wrote:
    Well, a char * could mean many things, but in the vast majority of cases it points to a null-terminated string, so SWIG takes the liberty of assuming as much. Besides, a char * is all you need to properly access a null-terminated string. For a char **, though, is it a null-terminated list of null-terminated strings? Is it the address of a pointer to a single null-terminated string? Or is it what it is in this case? So SWIG doesn't attempt anything fancy when it sees a char ** by default. You can make it wrap a char ** however you wish with some SWIG-fu (you can define typemaps that tell SWIG how to map types), but in this case I didn't think it was worth it for a single function.

    To me, the only valid assumption would be a pointer to a null-terminated string. Which works fine in any of the actual cases.
    The potential extra entries in an array, if any, would be the responsibility of the programmer.
    If it were declared as a char*[n], that would be different, of course.
    So as long as it assumes a null-terminated string by default for a char *, it should also assume a pointer to a null-terminated string by default for a char **. Not saying those defaults can't be overridden by the programmer, but at least it'd be consistent, methinks. :)
    As for the "reserve", I agree: it's such an easy and harmless optimization that there's no excuse not to do it here. Aside from that, this code is going to talk to Python :). Besides, we're constructing an extra vector<string> in the first place, and I don't think it's possible to avoid that at all, since most of the target languages keep their strings as Unicode and, worse, not even null-terminated.

    I guess it depends on which kind of Unicode encoding they're using. If it's Go, for example, it'd be UTF-8, which is also what orx uses internally. I do not know what other languages use.
    You could avoid the internal heap allocation altogether by using a stack-allocation for storing the pointers before sending the list to orx.
    In general, I really ignore most performance considerations while writing language bindings, since the activity is excessively wasteful to begin with.

    It's probably the best option, yes, but who knows, sometimes a "death by a thousand paper cuts" can be alleviated a bit. In this case, though, I agree it doesn't really matter. :) However, the heap allocation could have been worth removing were this function used more often.
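    The stack-allocation idea could look roughly like this (a sketch; all names are hypothetical, and ConsumeList merely stands in for the orx call receiving the list; this is not the orx API):

```cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <vector>

// Stand-in for a C-style API that receives a list of strings.
// Here it just sums the string lengths so the sketch is checkable.
static std::size_t ConsumeList(const char **list, std::size_t count)
{
    std::size_t total = 0;
    for (std::size_t i = 0; i < count; ++i)
    {
        total += std::strlen(list[i]);
    }
    return total;
}

// Fixed capacity for the stack-allocated pointer buffer.
static const std::size_t MAX_ENTRIES = 64;

// The pointer array lives on the stack and merely borrows the
// callers' string storage via c_str(): no heap allocation, no copies.
std::size_t SendStrings(const std::vector<std::string> &strings)
{
    const char *buffer[MAX_ENTRIES];
    std::size_t count = (strings.size() < MAX_ENTRIES) ? strings.size() : MAX_ENTRIES;
    for (std::size_t i = 0; i < count; ++i)
    {
        buffer[i] = strings[i].c_str();
    }
    return ConsumeList(buffer, count);
}
```

The trade-off is the fixed capacity: a real wrapper would need a fallback (or an error) for lists longer than MAX_ENTRIES.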
    Well, I'm probably not as comfortable with the orx codebase as you are, so whenever I see a platform-specific #define I'm afraid that it'll cause SWIG to emit different wrappers on each platform. Besides, I'd prefer to have complete control over how SWIG wraps, say, orxFLOAT, so that the wrappers work correctly and similarly on each platform.

    I understand and it's totally fine with me to do a cherry-picking approach, I was just trying to learn more about SWIG and its limitations, it's really a brand new world to me. :)
    I see, I guess Windows.i would be essential in a codebase that has declspecs and such all around the codebase, but thanks to your consistent use of macros, we should be able to avoid that problem without it. As I said, I'd prefer to stay away from anything that implies a platform dependency, so a "#define orxFASTCALL // empty" feels much more innocent since we know it'll work the same way on all platforms.

    Sounds good to me!
    Ah, case in point, if those are private functions, what's the officially recommended way of using the config module in isolation? I've discovered that you need to call the following first:
    orxModule_RegisterAll()
    orxModule_SetupAll()
    orxModule_Init(orxMODULE_ID_CONFIG)
    

    The exhaustive sequence can be found in tools/orxCrypt/src/orxCrypt.c. In this case, I'd recommend not using the embedded version as you won't need any of the plugins, like display, for example. This way the library remains pretty light and should initialize very quickly as well.
    I definitely agree, I hate it when I unnecessarily complicate things :/. As Einstein said, “A clever person solves a problem. A wise person avoids it.” (Conclusion: I'm definitely not wise.) We could even manually upload the cross-platform binding packages for chosen releases, no need to complicate the build setup.

    Excellent, let's start with this approach then. We can always iterate and improve it over time, of course.
    Great, I'll set it up as soon as possible.

    Thanks a lot, that'll definitely be helpful! :)
  • edited April 2015
    iarwain wrote:
    I guess it depends on which kind of unicode encoding they're using. If it's Go, for example, it'd be UTF-8, which is also what orx is using internally. I do not know what other languages use.
    You could avoid the internal heap allocation altogether by using a stack-allocation for storing the pointers before sending the list to orx.

    You're right, that would be a worthy optimisation indeed, especially for relatively fast target languages such as Java or Go. Probably not worth it for Python though. Maybe, in time, we could specialise the wrappers for those functions for languages that need it.
    "death by a thousand paper cuts"
    I fell in love with that phrase :)
    I understand and it's totally fine with me to do a cherry-picking approach, I was just trying to learn more about SWIG and its limitations, it's really a brand new world to me. :)
    I hope you like SWIG. As a C/C++ lover, I've found that SWIG enables me to use C++ in circumstances where it would otherwise be impossible. In today's IT world, I think SWIG gives superpowers to a C++ programmer. In my experience, the best way to approach SWIG is to focus on what it can do, rather than what it can't. It almost always does the most dangerous and ugly bits automagically, while you can find clever solutions to fill in the gaps.
    Excellent, let's start with this approach then. We can always iterate and improve it over time, of course.
    How would you like to proceed? I can create a pull request for an initial version that only wraps the config module (as that's what the current potential users, me included, need). We could try to get the build setup going with that, and wrap the rest of the orx headers over time. CAUTION, AMBITIOUS DREAM AHEAD: one day, we could even package orx with a Python interpreter, along with the wrappers, and build a cross-platform Python environment for developing mobile games! That would be very much appreciated in the Python community. END OF AMBITIOUS DREAM.

    How would you like the code to be organised? I can put all the SWIG stuff under code/build/bindings and create a cross-platform CMake build setup to generate and compile the bindings; CMake + SWIG works well.

    What branch would you like to keep these in?

    About the build slave: I've tried to set it up as a system service, so it should be up as long as my dev machine is on (and that would be the case on most weekdays), but as a Mac noob there's a good chance I didn't set it up correctly, so could you confirm that it's working?
  • edited April 2015
    enobayram wrote:
    I hope you like SWIG. As a C/C++ lover, I've found that SWIG enables me to use C++ in circumstances where it would otherwise be impossible. In today's IT world, I think SWIG gives superpowers to a C++ programmer. In my experience, the best way to approach SWIG is to focus on what it can do, rather than what it can't. It almost always does the most dangerous and ugly bits automagically, while you can find clever solutions to fill in the gaps.

    I've heard very good things on SWIG for quite some time now, I just never got into it as my favourite high-level scripting language isn't supported: Rebol. :)
    How would you like to proceed? I can create a pull request for an initial version that only wraps the config module (as that's what the current potential users, me included, need). We could try to get the build setup going with that, and wrap the rest of the orx headers over time. CAUTION, AMBITIOUS DREAM AHEAD: one day, we could even package orx with a Python interpreter, along with the wrappers, and build a cross-platform Python environment for developing mobile games! That would be very much appreciated in the Python community. END OF AMBITIOUS DREAM.

    How would you like the code to be organised? I can put all the SWIG stuff under code/build/bindings and create a cross-platform CMake build setup to generate and compile the bindings; CMake + SWIG works well.

    What branch would you like to keep these in?

    A pull request on a new branch, named, say, SWIG, sounds good to me. /code/build/bindings sounds like a good place as well. We'll merge everything back in the default branch when it's stable.

    Do we really need CMake? What tasks will it run precisely? If it's the SWIG command line invocation, I believe that can be handled directly in buildbot or through a python script (I'm just trying to avoid adding new dependencies on the slaves unless they're mandatory :)).
    About the build slave: I've tried to set it up as a system service, so it should be up as long as my dev machine is on (and that would be the case on most weekdays), but as a Mac noob there's a good chance I didn't set it up correctly, so could you confirm that it's working?

    Your slave is up and running, you can see its status here:

    http://buildbot.orx-project.org:8010/buildslaves/orx-mac-slave-enobayram

    It did compile two Mac versions fine (not the iOS ones, but I'm sure it's another format change, as your Mac is running 10.10 with a newer Xcode than mine). However, it has since had other problems: apparently, on the latest builds, it doesn't find hg in the path anymore. Here's the log of the last failed attempt:

    http://buildbot.orx-project.org:8010/builders/mac/builds/8/steps/hg/logs/stdio

    Maybe hg isn't in the PATH (PATH=/usr/bin:/bin:/usr/sbin:/sbin) ?

    I'll try to update to 10.10 locally and see if I can get the newer version of xcode to run on my old macbook in the coming days.
  • edited April 2015
    iarwain wrote:
    I've heard very good things on SWIG for quite some time now, I just never got into it as my favourite high-level scripting language isn't supported: Rebol. :)

    I had never heard of Rebol before; I've checked it out a bit and it definitely seems interesting. However, it also seems very similar to LISP. Why do you prefer it over LISP? Is it because Rebol has an interpreter more suitable for embedding?
    A pull request on a new branch, named, say, SWIG, sounds good to me. /code/build/bindings sounds like a good place as well. We'll merge everything back in the default branch when it's stable.

    Great!
    Do we really need CMake? What tasks will it run precisely? If it's the SWIG command line invocation, I believe that can be handled directly in buildbot or through a python script (I'm just trying to avoid adding new dependencies on the slaves unless they're mandatory :)).

    I'm a bit confused at this point, I think this paragraph is in conflict with our previous agreement of including the SWIG generated wrappers in the repository. In that case, the slaves wouldn't need to call SWIG anyway, so the slaves wouldn't even depend on SWIG. In any case, I can easily drop cmake for this setup. It would probably create more problems than it solves.
    Your slave is up and running, you can see its status here:
    ...
    I'll try to update to 10.10 locally and see if I can get the newer version of xcode to run on my old macbook in the coming days.

    I think I've fixed my slave. As I said, I'm a noob at mac, and I wasn't very successful at setting the slave up as a system service running as an isolated user. Now I've fixed it, so the slave should be up whenever my computer is on.

    By the way, fixing the slave was harder since I had to wait for the build to be triggered by some means. Is there any way I can trigger a build the next time I need to fix anything? If so, I could attempt to fix the iOS build myself.
  • edited April 2015
    enobayram wrote:
    I had never heard about Rebol before, I've checked it a bit and it definitely seems interesting. However, it also seems very similar to LISP, why do you prefer it over LISP? Is it because Rebol has an interpreter more suitable for embedding?

    It's actually much higher level and simpler than LISP. For example, the source of the "read" function can be a file as well as a URL or even a block/object. It makes writing scripts very easy and fast, no matter which resources one is handling.
    It supports file paths, URLs, dates, etc. as first-class citizens.
    But it does have some strong similarities with LISP, such as homoiconicity (i.e. data and code are stored in the same way).

    There's now some work done by the Rebol community on Red, a Rebol-based language suited for both low- and high-level development. The language is still in its infancy: http://www.red-lang.org/p/about.html
    I'm a bit confused at this point, I think this paragraph is in conflict with our previous agreement of including the SWIG generated wrappers in the repository. In that case, the slaves wouldn't need to call SWIG anyway, so the slaves wouldn't even depend on SWIG. In any case, I can easily drop cmake for this setup. It would probably create more problems than it solves.

    Mmh, I guess that's probably because we don't share exactly the same vision for the process. Here's in details how I thought we could do it (very similar to how the Doxygen doc is maintained):

    - When a .h file is modified and/or the .i files are
    - Buildbot will trigger a "SWIG" build on one of the capable slaves
    - This build would create the new wrappers by calling SWIG
    - And it would then commit/push the wrapper changes to the repository

    Does it sound reasonable? How would you see the whole process?
    I think I've fixed my slave. As I said, I'm a noob at mac, and I wasn't very successful at setting the slave up as a system service running as an isolated user. Now I've fixed it, so the slave should be up whenever my computer is on.

    By the way, fixing the slave was harder since I had to wait for the build to be triggered by some means. Is there any way I can trigger a build the next time I need to fix anything? If so, I could attempt to fix the iOS build myself.

    Ah, thanks! Regarding the build triggering, yes, there's a way to do it; I'll send you the credentials by PM. Once logged in with them on the master, you can request new builds.
  • edited April 2015
    iarwain wrote:
    It's actually much higher level and simpler than LISP. For example, the source of the "read" function can be a file as well as a URL or even a block/object. It makes writing scripts very easy and fast, no matter which resources one is handling.
    It supports file paths, URLs, dates, etc. as first-class citizens.
    But it does have some strong similarities with LISP, such as homoiconicity (i.e. data and code are stored in the same way).

    There's now some work done by the Rebol community on Red, a Rebol-based language suited for both low- and high-level development. The language is still in its infancy: http://www.red-lang.org/p/about.html

    You've definitely put Rebol and Red on my radar, though I have my sights set on Haskell these days. Incidentally, Haskell also suffers from a lack of SWIG support, though that's probably because object-oriented imperative programming is so alien to Haskell.

    I've a feeling that the SWIG CLISP back-end could be a very good starting point to write a Rebol back-end, since they're so similar topologically. That would probably still be a significant undertaking though.
    - When a .h file is modified and/or the .i files are
    - Buildbot will trigger a "SWIG" build on one of the capable slaves
    - This build would create the new wrappers by calling SWIG
    - And it would then commit/push the wrapper changes to the repository

    Does it sound reasonable? How would you see the whole process?

    That's great, I initially thought we'd manually commit the wrappers, but that's better. For now, I'll focus on generating complete bindings and writing some tests/examples for them.
    Ah, thanks! Regarding the build triggering, yes, there's a way to do it; I'll send you the credentials by PM. Once logged in with them on the master, you can request new builds.

    Thanks :)
  • edited April 2015
    enobayram wrote:
    You've definitely put Rebol and Red on my radar, though I have my sights set on Haskell these days. Incidentally, Haskell also suffers from a lack of SWIG support, though that's probably because object-oriented imperative programming is so alien to Haskell.

    I never had the courage to look deeply into Haskell; maybe that will change one day. :) Right now, I'm interested in Go as an evolution of C. I do not like C++, and even though D solved quite a few of C++'s shortcomings, I find that language needlessly complicated. Go, on the contrary, tried both to add meaningful features and to simplify C. For example, they added support for concurrency in a very elegant way without making the language itself much more complicated. They also added duck typing, which works much better than the rigid C++ OO approach, if you ask me.
    Lastly, fetching dependencies is integrated into the language/compiler itself. Imagine that orx were written in Go (or wrapped using SWIG, as it is); if one were to type, in their game.go file:
    import (
      "bitbucket.org/orx/orx"
    )
    

    When compiling their game, Go would sync the hg repository by itself and compile it for the target platform, unless it has already been compiled in the past, in which case it simply reuses it.
    That's great, I initially thought we'd manually commit the wrappers, but that's better. For now, I'll focus on generating complete bindings and writing some tests/examples for them.

    Excellent, I'll take care of the buildbot scripting part when you're done! =)
  • edited April 2015
    I find Go to be quite nice as well, although I'm mainly watching both Rust and Jai (the language Jonathan Blow is working on) and hoping for the future :) Until then I will continue my love of C!