Friday, November 27, 2015

Global Illumination over Clipmaps

Global illumination is one of my favorite problems. It is probably the simplest, most effective way to make a 3D scene come to life.

Traditional painters were taught to just "paint the light", centuries before 3D graphics were a thing. They understood how light bounced off surfaces, picking up color along the way. They would even account for how the light changed as it traveled through air.

Going into realtime 3D graphics, we had to forget all of this. We could not just draw the light; it was computationally too expensive. We had to concentrate on rendering subjects made of surfaces and hack the illumination any way we could. Or we could bake the lighting, which looks pretty but leaves us with a static environment. A house entirely closed should be dark, yet opening a single window could make quite a difference. And what happens if you make a huge hole in the roof?

For sandbox games this is a problem. The game maker cannot know how deep someone will dig or if they will build a bonfire somewhere inside a building.

There are some good realtime global illumination solutions out there, but I kept looking for something simpler that would still do the trick. In this post I will describe a method that I consider good enough. I am not sure if this has been done before; please leave a link if you know of prior work.

This method was somewhat of an accident. While working on occlusion, I saw that determining what is visible from any point of view is a problem very similar to finding out how light moves. I will try to explain it using the analogy of a circuit.

Imagine there is an invisible circuit that connects every point in space to its neighboring points. For each point we also need to know a few physical properties: how transparent it is, and how it changes the light's direction and color.

Why use something like that? In our case it was something we were getting almost for free from the voxel data. We found we could not use every voxel, as that resulted in very large circuits. The good news was that we could simplify the circuit much the same way you collapse nodes in an octree. In fact, the circuit is just a dual structure superimposed on the octree.
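As a rough sketch, the circuit could be represented with a node per octree cell and links between neighbors. All the names here (`LightNode`, `Link`, `connect`) and the exact fields are illustrative assumptions on my part, not the engine's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    target: "LightNode"   # neighboring node light can travel to
    direction: tuple      # unit vector pointing at the target

@dataclass
class LightNode:
    transparency: float = 1.0                # 1.0 = air, 0.0 = solid
    color_filter: tuple = (1.0, 1.0, 1.0)    # per-channel tint applied in transit
    links: list = field(default_factory=list)

def connect(a: LightNode, b: LightNode, direction):
    # Only connect through non-solid space; solid cells block the circuit.
    if a.transparency > 0.0 and b.transparency > 0.0:
        a.links.append(Link(b, direction))
        b.links.append(Link(a, tuple(-c for c in direction)))

# Two air cells get linked; the solid cell stays disconnected.
air1, air2, rock = LightNode(), LightNode(), LightNode(transparency=0.0)
connect(air1, air2, (1.0, 0.0))
connect(air2, rock, (1.0, 0.0))
```

Collapsing sibling cells would then just merge their nodes and re-point the links at the coarser cell, mirroring how the octree itself is simplified.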

Consider the following scene:



The grey areas represent solid matter, white is air, and the black lines are an octree (a quadtree in this 2D illustration) that covers the scene at adaptive resolution.

The light circuit for this scene would be something like:



Red arrows show connections between points where light can travel freely.

Once you have this, you can feed light into any set of points and run the node-to-node light transfer simulation. Each link conducts light based on its direction and the light's direction. Each link also has the potential to change the light's properties: it can make the light bounce, change color, or be absorbed entirely.
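A toy version of that transfer step might look like this. The `Node` fields, the dot-product conductance rule, and the `seed` term are my own illustrative assumptions, not the actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    target: "Node"
    direction: tuple                 # unit vector toward the target

@dataclass
class Node:
    transparency: float              # 1.0 = air, 0.0 = solid
    light_dir: tuple = (0.0, -1.0)   # direction the light travels
    energy: float = 0.0
    seed: float = 0.0                # externally injected light (e.g. sunlight)
    links: list = field(default_factory=list)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def propagate(nodes, iterations=4):
    # Jacobi-style sweeps: each pass reads only the previous light state.
    for _ in range(iterations):
        incoming = {id(n): 0.0 for n in nodes}
        for n in nodes:
            for link in n.links:
                # A link conducts light based on how well its direction
                # aligns with the light's direction of travel.
                gain = max(0.0, dot(n.light_dir, link.direction))
                incoming[id(link.target)] += n.energy * gain * link.target.transparency
        for n in nodes:
            n.energy = n.seed + incoming[id(n)]

# A column of three cells under a vertical sun; the bottom one is solid.
top, mid, rock = Node(1.0), Node(1.0), Node(0.0)
top.seed = 1.0
top.links.append(Link(mid, (0.0, -1.0)))
mid.links.append(Link(rock, (0.0, -1.0)))
propagate([top, mid, rock])
```

After a few sweeps the light has flowed down the column of air cells, and the solid cell has absorbed everything that reached it.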

It turns out this converges after only a few iterations. Since the octree has to be updated only when the scene changes, you can run the simulation many times over the same octree, for instance when the sun moves or a dragon breathes fire.

To add sunlight we can seed the top nodes like this:



Here is how that looks after the simulation runs. This is a scene of a gorge in some sort of canyon, where the sunlight has only a narrow entrance:



The light nodes are rendered as two planes showing the light color and intensity.
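In code, the seeding step from the earlier diagram could be as simple as marking the open boundary cells as constant light sources. The `seed_sunlight` helper and the `seed`/`light_dir` fields are hypothetical names, assuming nodes shaped like those sketched above:

```python
from types import SimpleNamespace

def seed_sunlight(top_nodes, sun_dir, intensity=1.0):
    # Hypothetical helper: inject sunlight at the open top of the scene;
    # the propagation pass then carries it downward and around corners.
    for n in top_nodes:
        n.seed = intensity      # constant energy source at the boundary
        n.light_dir = sun_dir   # light travels along the sun direction

top_row = [SimpleNamespace(seed=0.0, light_dir=None) for _ in range(4)]
seed_sunlight(top_row, (0.0, -1.0))
```

Moving the sun would then just mean re-seeding and re-running the simulation over the unchanged octree.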

Here are other examples of feeding just sunlight into a complex scene. Yellow shows the energy picked up from the sunlight.



Taking light bounces into account is then easy. Unlike the sunlight, the bounced light is not seeded from outside; it is produced by the simulation.
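One way a bounce could be produced inside the simulation is to turn the light a surface absorbs into a new, tinted source. The albedo tint and the reflect-straight-back direction here are illustrative assumptions, not the engine's actual bounce rule:

```python
def bounce(energy, direction, albedo):
    # Light absorbed by a surface becomes a new light source, tinted by
    # the surface color and sent back into the scene.
    out_dir = tuple(-c for c in direction)
    out_energy = tuple(e * a for e, a in zip(energy, albedo))
    return out_energy, out_dir

# Pure yellow sunlight hitting a pure red surface: only the shared red
# channel survives the bounce.
red_light, up = bounce((1.0, 1.0, 0.0), (0.0, -1.0), (1.0, 0.0, 0.0))
```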

In the following image you can see the results of multiple light bounces. We made the sunlight pure yellow and all the surfaces bounce pure red:


You can see how the light probes (the boxes) are picking up red light from below. Here is the same setup for a different scene:



This is still a work in progress, but I like the fact that it takes a fraction of a second to compute a full light solution, regardless of how complex the scene is. Soon we will be testing this in a forest setting. I miss the green light coming through the canopies from those early radiosity days.

11 comments:

  1. This approach is very similar to light propagation volumes, except that you are diffusing the light through an octree rather than a uniform grid. As I recall (and I may have this a little wrong, as it has been a while since I have used them), the concept of LPV is quite simple and is implemented efficiently using low-resolution 3D textures. It works by dividing your scene (or view frustum) into a volume of cells into which the light sources are injected and encoded as (2-band?) spherical harmonics. The volume is then processed along each axis to propagate the light through the scene. Spherical harmonics are quite good at this, as you can convolve two together quite easily using the dot product. When you then come to render the image, you query the SH coefficients to compute your diffuse GI.

    A quick google brought up this document:
    http://www.crytek.com/download/Light_Propagation_Volumes.pdf

    Although I am sure that there are more modern implementations. There is a straightforward implementation by Tobias Franke, which you can find here:
    https://github.com/thefranke/dirtchamber

    ReplyDelete
    Replies
    1. Yes, you are correct, they are very similar. I saw the Crytek method long ago when it was published and kind of forgot about the regular-grid propagation part. No question it inspired this approach.

      I think they were doing only one bounce (not sure how it was determined this was enough). A key difference is that we do it over an octree; I think they had, or proposed, something similar to cascaded shadow maps to extend coverage.

      It is also not clear how Crytek gets the light transfer properties of space out of the polygon data. In our case that was the big win: starting with voxelized content makes it almost trivial.

      I need to read this paper again, thanks for posting the link!

      Delete
  2. This looks pretty good.
    Shouldn't the sunlight nodes start on the surface directly below instead of on the first row? (See the diagram.) It looks like this would produce gradients where there shouldn't be any.

    ReplyDelete
  3. I remember implementing something like this years ago, using a uniform grid, and it was very fast. The problem with my solution was that the propagation was all X, Y, Z axis-aligned, which caused hard artifacts in non-cube-shaped scenes. It worked well in modern building interiors but produced poor results on outdoor scenes and more curved architecture. I don't see any of these artifacts on the curved surfaces of the Sponza atrium scene in your screenshots, so maybe you're actually doing something different.

    Can your approach correctly propagate colored light reflected from colored surfaces? If it's voxel based, it seems like you would need to take some average color from the triangles/texels of the side of a voxel, which is less accurate than casting a ray and computing an exact triangle intersection + texture color at the intersection point. If colored lighting isn't supported, then it's more like ambient occlusion than global illumination.

    Also, what about partially transparent materials like water? It seems like that might require handling shades of gray between "light" and "dark" voxels, unless you can represent the transparent surface somehow.

    Does this scale to open world scenes, or is it limited to small areas? I suppose you could have the lighting computed in tiles or something and update it as the user moves through a large scene, as long as it's fast enough to keep up with the user's movement.

    ReplyDelete
    Replies
    1. It does scale very well to large open scenes; you just get coarser coverage over distant areas. You can see in the yellow castle screenshots how portions of the terrain in the background also get the light treatment.

      It can reflect off colored surfaces; it is all about how much granularity you would like to have in the octree. Larger cells mean more averaging but quicker results.

      Delete
  4. The light circuit you speak of reminds me of radiosity, but radiosity is a finite element method dealing with surfaces, and your method deals with voxel volumes. I had read about radiosity a long time ago and understood the iteration concept, but it wasn't until I took a class in applied mathematics, focusing on linear algebra, that it dawned on me that the iterative method was approximating the solution to a system of linear equations by Jacobi iteration. You are doing the same thing, solving a linear system by iteration.
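To make the commenter's point concrete, here is Jacobi iteration on a tiny diagonally dominant system. Each sweep reads only the previous iterate, just as one light-propagation pass reads only the previous light state:

```python
def jacobi(A, b, iterations=50):
    # Solve A x = b iteratively; converges when A is diagonally dominant.
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        # Each new component uses only the values from the previous sweep.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [2.0, 3.0]]
b = [9.0, 13.0]
x = jacobi(A, b)   # approaches the exact solution x = [1.4, 3.4]
```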

    ReplyDelete
  5. Is your algorithm robust to light bleeding? It is mentioned on page 21 of the linked article. The game Minecraft also suffers from light bleeding when using half height, stair shaped, and other such blocks, as the engine was really only meant for cubes.

    ReplyDelete
    Replies
    1. Light bleeding is a big problem, but it can be corrected for the most part, just like in the Crytek method, so I would not call it a deal-breaker.

      Delete
  6. You should look at voxel cone tracing global illumination by Cyril Crassin:
    https://research.nvidia.com/sites/default/files/publications/GIVoxels-pg2011-authors.pdf
    from
    https://research.nvidia.com/publication/interactive-indirect-illumination-using-voxel-cone-tracing

    And

    I believe you use distance fields with your voxels, right?
    https://forums.unrealengine.com/showthread.php?2421-Global-Illumination-alternatives/page5

    There are tricks you can do with distance fields to help with shadows, collisions (including soft collisions), ray tracing and global illumination, apparently.

    ReplyDelete
    Replies
    1. Thanks for the link to the voxel cone tracing paper. I think this was in UE4 at some point; not sure what its status is now. Can anyone provide more info?

      Distance fields cover only a subset of the content we see. That is definitely the case for terrain, but not so much for user-made edits or procedural architecture.

      Delete
    2. Voxel cone tracing is very costly (at least for polygon graphics). It is the kind of technology for next-gen video cards (9xx GTX series and up), but it produces stunning results.

      Delete