lighting a large map

As a continuation of my tilemap experiments, I'm going to try to get some lighting going on it. First test will be to get a simple light like those in the lighting tutorial surrounding my player so I can only see a little bit of the underground world around the player.

I haven't done anything yet other than look over how the lighting tutorial works, but I have a general question about how to handle lights in a large map. So, the character light is easy because in my setup, the character is always at the center of the screen. I'd like to be able to place lanterns in the map to light up various areas. Of course, these light sources may or may not be on the visible part of the map. I think one way to do this is to look at any light sources in the world (these would probably be regular orx objects) and determine if their location is visible on the screen, and if so, update an array of available lights with the appropriate light color, position, radius, etc. This assumes a setup with a static array of lights similar to the tutorial approach. That would limit the number of lights in a scene to whatever size the array is, and would maybe require some prioritization if there are too many light sources in the scene.

Having never tried something like that, I'm wondering whether that's a sound approach, or whether there's some more standard way of handling this that makes more sense. Any advice or suggestions are welcome.


  • This will really depend on the kind of lighting you want to achieve.

    For example, do you want simple additive or alpha-blended flat lights? Or do you want to use bump/normal maps with a shader?

    If you go for simple lights, you can have regular objects on which you'll control the blend mode and the layer/depth to make sure they'll illuminate what you want.
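    For instance, a simple additive light could be a regular object along these lines (all names and the texture are made up for the example):

```ini
[LightGraphic]
Texture = light_halo.png    ; soft white radial gradient

[FlatLight]
Graphic   = LightGraphic
BlendMode = add             ; brightens whatever was rendered beneath it
Color     = (255, 220, 180) ; warm tint
```

    Placing such objects on a layer above the world (and below the UI) gives cheap flat lighting.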
    If you go for shader ones, you have a couple of different options there as well.
    You could go with point lights as in the tutorial, and feed the positions of the lights around you. As you mentioned, you'll have a limitation on density as you'll have a fixed array to store them.
    Or you could go with a multi-pass lighting, where lights would render to a separate "lighting" texture that you'll then use in a post-render pass to apply lighting. Something similar to the compositing tutorial that can be found on the wiki but with an actual lighting shader.
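    A minimal sketch of that lighting-texture setup in config might look like this (all names here are invented, and details will vary):

```ini
[LightViewport]
Camera          = LightCamera     ; a camera matching the main one, seeing only the light objects
TextureList     = LightingTexture ; offscreen texture the lights render to
BackgroundColor = (20, 20, 30)    ; ambient light level, cleared every frame
BackgroundAlpha = 1.0
```

    A post-render pass would then multiply LightingTexture over the scene texture, as in the compositing tutorial.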

    Similarly, do you want shadows? Which kind, pixel-perfect projected ones? Polygon-based projected ones? Simple multiply-blended flat ones?

    The same considerations apply there.

    All in all, if you go for something that would be akin to a deferred renderer, you can benefit from a multiple-render target (MRT) approach where you could output the colors to one texture and the normal/lighting information to another and have a post-processing shader combine everything at the end.
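    For reference, a hypothetical MRT setup in config could be sketched like this (texture names invented), with object shaders writing color to gl_FragData[0] and normals to gl_FragData[1]:

```ini
[GameViewport]
Camera      = GameCamera
TextureList = ColorTexture # NormalTexture ; two render targets filled in a single pass
```

    A final post-processing shader would then sample both textures to compute the lit result.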

  • I forgot to mention runtime polygon-based approaches, either for the lighting or the shadows.

    For Little Cells I implemented both, polygon-based lighting and pixel-perfect shader-based.
    The Android version runs the polygon-based one that's lighter on the GPU while the PC one runs the shader-based one that gives a more accurate result (especially given that our cells aren't perfectly circular).

    One example of this approach would be the lighting in Celeste:

    Thanks for the ideas on the various approaches. Your questions about what I want to achieve with the lighting are helpful to get me thinking. At the moment I'm mostly trying to develop some proficiency in, well, any technique...learning for learning's sake at this point. The shaders have intrigued me, so I think for now I'll see if I can get that going and where it takes me. Since I'm only thinking about simple lighting (not hitting walls, no shadows, etc.), perhaps I should try using the orx objects and blend modes, though I'm not sure at the moment how to go about it.

    Some years ago I used the multi-pass method in my own home-grown setup (I was, somewhat foolishly, trying to create my own engine while barely knowing what I was doing), using a separate lighting texture, rendering all the lights to it and applying it to the final render, so I do have a little familiarity with how that can work. In the end it would probably be good for me to try various things to learn how they compare and how to implement them in orx. I will check out the compositing tutorial.

    I do like the idea of more interesting lighting that interacts with what it illuminates...the page on Celeste is pretty helpful for that, and so far I've just looked at the Little Cells trailer, which looks really nice.

    Thanks for fielding the vague question on a huge topic!

  • edited April 2020

    I'm glad I brought you some food for thought.

    Regarding simple flat lighting, here's a wiki entry by @enobayram that goes a bit further by combining additive and multiplicative blend modes to achieve an interesting effect:

    Regarding Little Cells, the pixel-perfect shadow technique I used was inspired by

    Some of the changes I made were:

    • all the lights are rendered in a single pass
    • use MRT for rendering occluders both on the color buffer and the occluder buffer in a single pass
    • use 2048 rays per light for a more precise effect when covering the full 360° range
    • encode the distance in 3 color components, giving a full range of 16M pixels of distance (which was overkill, I know, I should have simply used 2 for better packing)
    • have two radii to describe the light coverage so as to control shadow falloff
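    To illustrate the distance packing mentioned above, here's a standalone sketch (not the actual Little Cells code) of encoding an integer distance into three 8-bit color channels:

```c
#include <stdint.h>

/* Pack a distance (in pixels) into three 8-bit channels, low byte first.
   Three channels give 2^24 = ~16.7M distinct values, hence the 16M range. */
void EncodeDistance(uint32_t dist, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (uint8_t)(dist & 0xFFu);
    *g = (uint8_t)((dist >> 8) & 0xFFu);
    *b = (uint8_t)((dist >> 16) & 0xFFu);
}

/* Recover the distance from the three channels */
uint32_t DecodeDistance(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint32_t)r | ((uint32_t)g << 8) | ((uint32_t)b << 16);
}
```

    In a shader, the same arithmetic would be done on normalized 0-1 channel values.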

    Almost everything's done in data/config, including setting up the pipeline and the MRT; the only things done in code were feeding the lights' positions and radii and creating the viewports based on a config list.

  • edited April 2020

    Ah, I forgot to mention an additional neat debugging trick. As the whole rendering pipeline is data driven, it's easy to modify it for debugging purposes.

    In the case of Little Cells, I had a dev.ini file that was included at the end of the main config file and would thus override/add new properties only during development (I wouldn't commit that file to source control, just have it locally on my machine).

    In it I would add extra viewports as overlays in the corner of the screen to display the content of my intermediate textures (occluder and lighting). It looked like this:

    ViewportList  += Debug1Viewport # Debug2Viewport

    [Debug1Viewport]
    Position        = (768, 0, 0)
    RelativeSize    = (0.25, 0.25, 1.0)
    BlendMode       = alpha
    ShaderList      = @
    Code            = "void main() {
      gl_FragColor = vec4(texture2D(texture, gl_TexCoord[0].xy).rgb, 0.2);
    }"
    ParamList       = texture
    texture         = LightingTexture

    [Debug2Viewport@Debug1Viewport]
    Position        = (512, 0, 0)
    texture         = OccluderTexture
    Great, thanks. I spent a little time this afternoon and implemented the offscreen texture approach (like the compositing tutorial) and it was pretty easy to set up. I have a player light and can place lights on my map wherever I want as I move around...the lighting doesn't look good yet, but I'll worry about that later; I'm just getting a feel for the approach.

    So, here's a specific question: my tile map has a horizon line above which the tiles are transparent, with a parallax scrolling background scene behind it. I'd like my light map to only affect the tile map, not the background scene behind it. Right now it is laid over the entire scene. I thought maybe I could handle that with the GroupList somehow; I feel like there is probably a simple way? Not sure if you can picture what I've set up, but it is pretty much a transplant of the way the compositing tutorial works.

  • edited April 2020

    There are probably a few different ways to handle this situation, but the easiest one that comes to mind right now would be to have two separate viewport/camera pairs for the background and the foreground, and use the group feature. If you use additive blending for the lights, make sure it won't modify the alpha component of the result, as the shader in the example below depends on it.

    Let's say you want two different groups in the back: Sky and Background, and three in the front: Game, Light and UI, all rendered in that order.

    You could then have:

    [BackViewport]
    Camera      = BackCamera
    TextureList = BackTexture

    [BackCamera]
    GroupList    = Sky # Background
    ParentCamera = FrontCamera

    [FrontViewport]
    Camera      = FrontCamera
    TextureList = FrontTexture

    [FrontCamera]
    GroupList = Game # Light # UI

    [CompositingViewport]
    ShaderList = @
    Code = "void main() {
      vec4 frontPixel, backPixel;
      frontPixel = texture2D(front, gl_TexCoord[0].xy);
      backPixel  = texture2D(back, gl_TexCoord[0].xy);
      gl_FragColor = mix(backPixel, frontPixel, frontPixel.a);
    }"
    ParamList = front # back
    front = FrontTexture
    back  = BackTexture

    ViewportList = BackViewport # FrontViewport # CompositingViewport

    Now you simply need to create all three viewports from the list above and set the appropriate Group to your objects:

    for(orxS32 i = 0, count = orxConfig_GetListCount("ViewportList"); i < count; i++)
      orxViewport_CreateFromConfig(orxConfig_GetListString("ViewportList", i));

    As I typed the code above directly in the forum, it might not run as-is, but you get the idea.

  • Thanks very much! I'll see if I can get a chance this week to try it out.

    Still haven't tried this yet, but I have a quick question about viewports. My project has one minor complication in that this tile scene is loaded from a main menu screen; currently everything shares the same main viewport. One solution may be to use orxViewport_Delete to remove the menu viewport before creating the 3 new viewports for the map and background layers, and similarly to delete the 3 map/background viewports and recreate the menu viewport when going from the map back to the main menu. I just want to check whether that is how you'd suggest switching around the viewports (and whether that is indeed what orxViewport_Delete is for). It looks like I could also add and remove the shader from the compositing viewport as needed and use it for both menu and map, though it seems easier to read if I keep separate viewports for the menu and the map level.

    You can delete/create viewports on the fly; it might be the easiest approach. It'll delete/create the intermediate textures as well, but the performance impact should be negligible.

    You can also enable/disable them instead, without having to bother changing their setup (camera, shader, ...).

    Lastly, in your case, you might not need to do anything if your menu is using the group UI in my example, as the other texture for the back will simply be entirely transparent/black.

  • Understood, thank you. Deleting the viewports and recreating them as needed seems to work well enough for my purpose right now.

    So, over lunch I tried to set the compositing up, but so far haven't succeeded. The Front and Back cameras needed size data, so I changed them to inherit it from my main camera (which is no longer used at this point):


    I think that is probably ok. The first thing I'm trying to do is get my foreground to show up, but it renders upside down. The foreground is just the tile map created with the shader from the tilemap example I started with (I'm not playing with the lights yet). I'm sure it is somehow related to that shader, as I was able to get a regular orxObject background to display normally. Any hint as to why it is upside down? I reproduced this in my simple tilemap project as well, to make sure it wasn't due to something else going on in my main project.

  • Ah yes, I only put the new relevant parts in my former post, not all the content, sorry.

    Regarding the upside-down rendering, I'd recommend dumping the intermediate textures to file (with the console) or displaying them in extra debug viewports, in order to check their content and see where the issue is.

    I did dump the textures: the main map texture (where the tile indexes are stored) looks normal, but the rendered texture associated with the viewport is indeed upside down. I solved the problem by changing the line in the map rendering shader so that it no longer reverses the y coordinate. I can't say I understand why it was there in the first place, but changing it fixes the upside-down business. Changed

    vec2 coord = mod((CameraPos.xy + vec2(gl_FragCoord.x, Resolution.y - gl_FragCoord.y) * ratio) / TileSize.xy, MapSize.xy);

    to

    vec2 coord = mod((CameraPos.xy + vec2(gl_FragCoord.x, gl_FragCoord.y) * ratio) / TileSize.xy, MapSize.xy);

    So I'll move on to the next issue and see if I can get it fixed up. Sadly, I may come back begging for help again...hopefully won't annoy!


  • Yep, I was about to suggest inverting Y in the shader, but it's always nice to find the actual source of it.

    In any case, given that OpenGL uses an upward Y axis, the cause is probably rooted in having a couple of cascaded shaders across multiple steps.

    Don't hesitate if you have more questions. :)

    I'm getting closer, but haven't tested the lighting layer yet. In any case, I want to mention this in case it isn't expected and/or others stumble upon it.

    For the foreground viewport, it seems necessary to include both a BackgroundColor line and a BackgroundAlpha = 0 line. If they are omitted, the foreground smears across the background when it moves (i.e., the player walking around). If you only have a background color, then that color renders in the transparent parts of the texture, which makes sense. But I'm not sure why the smearing occurs if neither is specified, or if you only have the BackgroundAlpha = 0 line with no color. In any case, it works fine with both lines there.
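    For reference, the two lines in question on the foreground viewport (name assumed) are simply:

```ini
[FrontViewport]
BackgroundColor = (0, 0, 0) ; clear the texture every frame...
BackgroundAlpha = 0.0       ; ...to fully transparent black
```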

  • Got it all working with the lights! Since my light map is on a separate viewport/texture, I modified the compositing shader to take in 3 textures, multiply the lights onto the foreground, then mix the fore and background as in your example. Works very nicely and I may have learned a thing or two.
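    For anyone curious, that modified compositing shader might look roughly like this (texture and parameter names are guesses, not the actual code):

```ini
[CompositingViewport]
ShaderList = @
Code = "void main() {
  vec4 frontPixel = texture2D(front, gl_TexCoord[0].xy);
  vec4 backPixel  = texture2D(back,  gl_TexCoord[0].xy);
  vec4 lightPixel = texture2D(light, gl_TexCoord[0].xy);
  frontPixel.rgb *= lightPixel.rgb; /* multiply lights onto the foreground */
  gl_FragColor = mix(backPixel, frontPixel, frontPixel.a); /* then mix as before */
}"
ParamList = front # back # light
front     = FrontTexture
back      = BackTexture
light     = LightingTexture
```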

    Thanks again for all the help.

  • Maybe the CreationTemplate.ini file will shed some light on that behavior:

    BackgroundColor = [Vector]; NB: If not set, the viewport won't erase any part of other viewports previously rendered this frame if there are overlaps;
    BackgroundAlpha = [Float]; NB: If BackgroundColor is set, this value will be used as alpha; Defaults to 1.0;

    What it doesn't say is that only the screen is automatically cleared between frames (as we'll never read from it).
    However, we don't do that by default with textures, as one of the uses of offscreen rendering is to do effects based on previous frames' content (like fading out trails, for example).
    When rendering to textures, one needs to explicitly ask for the clearing by setting a background color/alpha on the viewport.

    @sausage do you know if we address this point in the wiki somewhere to your knowledge?

  • Nice, I'm glad you got a working base setup.

    You should be able to build on top of this when/if the need for more complex lighting arises.

    Ah, it's laid out clearly there in the ini file...I always go to the wiki to look these things up, and now that I look, indeed it has the same information there! The documentation is good; I'm just usually impatient and try things until I find the right setting!

    I'm going to try to add some default lighting for the foreground above a certain height in the map so that the top of the map is more well lit and becomes dark as you descend. I think I have enough knowledge now to do that on my own. We will see. :smile:

  • No worries!

    Yes, you should be able to provide a depth parameter as well, based on the current position + the pixel's position, and use it to alter your lighting.

    I find both the CreationTemplate.ini and SettingsTemplate.ini files very useful; that's where I always go when I don't remember what properties are available or how they work in detail. :)

    Yes, I got the depth parameter working, with a nice transition between no lighting effect above a certain map y position and full lighting effect below a lower y position. The transition's start and end y are settable as parameters.
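    For anyone trying something similar, the transition can be expressed as a smoothstep between the two y parameters; a minimal sketch (names made up, assuming y grows downward):

```c
/* Returns 0 above yStart (no lighting effect), 1 below yEnd (full effect),
   with a smooth Hermite blend in between. Assumes yEnd > yStart. */
float LightingBlend(float y, float yStart, float yEnd)
{
    float t = (y - yStart) / (yEnd - yStart);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t * t * (3.0f - 2.0f * t); /* same formula as GLSL smoothstep */
}
```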

    I'm starting to get the power of the shaders and also appreciating how simple (well sort of) it is to get the multiple render textures running in orx. Really cool stuff.
