I've implemented the Cascaded Light Propagation Volumes algorithm (no indirect shadowing yet) for real-time diffuse global illumination detailed here and here. It works fine but I'm still trying to fix one artifact in particular.
Short summary
You may skip this if you already know how the algorithm works.
The algorithm stores lighting information as spherical harmonics (SH) in a 3D grid. The grid is initially filled by rendering an extended shadow map (a reflective shadow map) that includes color and normal information in addition to depth. The idea is that every pixel seen by the light source acts as a source of the first bounce of indirect illumination, so you store the required information alongside the ordinary depth buffer you use for shadow mapping and sample all of that data to initialize the 3D grid. The information in the grid is then propagated iteratively: in each iteration, every cell propagates its information to its 6 direct neighbours (left, right, up, down, front, back). To light the scene using the information in the grid, you apply a full-screen pass over your scene; for each rasterized pixel you have the world-space position of the rasterized surface available (e.g. from the G-buffers in deferred shading), so you know which cell of the grid a given pixel on screen belongs to.
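To make the propagation step concrete, here is a heavily simplified sketch (my own illustration, not code from the paper): each cell stores a single scalar intensity instead of SH coefficients, and per iteration pushes an equal share of it to its 6 axis-aligned neighbours.

```python
def propagate(grid, size):
    """One propagation iteration over a sparse grid.

    `grid` maps (x, y, z) cell coordinates to a scalar intensity.
    Each cell distributes its intensity equally to its 6 direct
    neighbours (left, right, up, down, front, back). A real LPV
    propagates SH-encoded *directional* intensity through each cell
    face instead of a single scalar.
    """
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    new = {}
    for (x, y, z), value in grid.items():
        new.setdefault((x, y, z), 0.0)  # the cell has given its light away
        share = value / 6.0
        for dx, dy, dz in offsets:
            nx, ny, nz = x + dx, y + dy, z + dz
            if 0 <= nx < size and 0 <= ny < size and 0 <= nz < size:
                new[(nx, ny, nz)] = new.get((nx, ny, nz), 0.0) + share
    return new
```

In the actual algorithm, the per-iteration results are also accumulated into a separate buffer, so light both travels outward and leaves energy behind in every cell it passes.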
This is working fine for the most part. Here are two images: one without simulated GI and just a hardcoded ambient term, and next to it one with the LPV algorithm. Notice the colored reflections on surfaces, the better depth detail, etc.
Problem
When looking up the cells during the lighting stage, trilinear interpolation (using the hardware texture filter) is used to smoothly interpolate the data between the center of a cell, its neighboring cells, and the actual looked-up texture coordinate. Essentially, this interpolation mimics the propagation of the lighting information from the center of a cell to the concrete pixels around that center where the information is looked up. This is required because otherwise the lighting would look very rough and ugly. However, since trilinear interpolation doesn't take into account the direction in which the lighting information encoded in a cell propagates (remember, it's stored as spherical harmonics), light can be incorrectly propagated to the looked-up pixel. For example, if the radiance encoded in the cell only propagates towards (1,0,0) ("to the right"), everything to the left of the cell center should receive less light than what is stored at the center, and everything to the right should receive more, but trilinear interpolation does not account for that.
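To show why the filtering is direction-blind, here is a small sketch (my own, using only the first two SH bands): a cell whose SH encodes radiance flowing towards +x evaluates very differently in the +x and −x directions, yet trilinear interpolation of the coefficients towards an empty neighbour scales all coefficients by the same factor regardless of which side of the cell the sample lies on.

```python
import math

def eval_sh(c, d):
    """Evaluate radiance encoded as 4 SH coefficients (bands 0 and 1)
    in unit direction d = (x, y, z). The band-1 coefficient ordering
    here is (x, y, z) for clarity; conventions vary."""
    k0 = 0.5 * math.sqrt(1.0 / math.pi)  # Y_0^0 constant
    k1 = 0.5 * math.sqrt(3.0 / math.pi)  # Y_1 constants
    return k0 * c[0] + k1 * (c[1] * d[0] + c[2] * d[1] + c[3] * d[2])

# A cell encoding radiance that mostly flows towards +x:
cell = [1.0, 1.0, 0.0, 0.0]

# Trilinear interpolation halfway towards an empty (all-zero) cell
# simply halves every coefficient -- the same result whether the
# sample point lies on the lit side or the shadowed side of the cell.
halfway = [0.5 * v for v in cell]
```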
This causes light to bleed incorrectly through walls when the cell sizes of the grid are big compared to the surfaces in the scene (big cells are necessary to propagate light far into the scene with as few propagation iterations as possible). This is what it looks like:
As you can see (from the shadow outlines at the top right), the scene is lit by a directional light source somewhere above the scene to the top left. And since there is only one cell separating the outside of the atrium and the inside, the light bleeds through and the wall to the left is incorrectly illuminated.
Actual question
The author suggests a form of manual anisotropic filtering to fix this. He gives a radiance gradient (I'm assuming of the SH coefficients sampled from the current cell) in the direction of the surface normal n as:
And states
Thus, by comparing the radiance directional derivative with the actual radiance direction, it can be calculated whether the radiance distribution starts further than its trilinear interpolation for this point.
My question(s):
In the equation, the function c(x) seems to be the SH coefficients at point x. So the radiance gradient seems to be computed like a normal numerical derivative, as the weighted difference of the SH coefficients at points x - (n/2) and x + (n/2). However, what is c(x) in my context? Currently I'm assuming that c(x) refers to the trilinearly interpolated coefficients at the surface location x, but I'm not sure at all, since I don't see how that is supposed to give you more information about the directional distribution of the SH coefficients.
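For reference, this is my current reading of it in code form (entirely my interpretation, not the author's implementation; all function names are made up): c(x) is the trilinearly interpolated SH coefficient vector, and the gradient is a central difference of that interpolation with a half-cell step along n on each side.

```python
import math

def lerp(a, b, t):
    """Componentwise linear interpolation of two coefficient vectors."""
    return [ai + (bi - ai) * t for ai, bi in zip(a, b)]

def sample_coeffs(grid, p):
    """Trilinear interpolation of SH coefficient vectors stored at
    integer cell centers; grid[x][y][z] is a list of coefficients and
    p is a position in cell units."""
    x0, y0, z0 = (math.floor(v) for v in p)
    fx, fy, fz = p[0] - x0, p[1] - y0, p[2] - z0
    def g(i, j, k):
        return grid[x0 + i][y0 + j][z0 + k]
    c00 = lerp(g(0, 0, 0), g(1, 0, 0), fx)
    c10 = lerp(g(0, 1, 0), g(1, 1, 0), fx)
    c01 = lerp(g(0, 0, 1), g(1, 0, 1), fx)
    c11 = lerp(g(0, 1, 1), g(1, 1, 1), fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)

def gradient_along_n(grid, p, n):
    """Central difference of the interpolated coefficients along the
    surface normal n, stepping half a cell to each side -- my reading
    of the paper's equation."""
    p_plus  = [pi + ni * 0.5 for pi, ni in zip(p, n)]
    p_minus = [pi - ni * 0.5 for pi, ni in zip(p, n)]
    plus, minus = sample_coeffs(grid, p_plus), sample_coeffs(grid, p_minus)
    return [a - b for a, b in zip(plus, minus)]
```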
And how is that gradient then used to change how the sampled lighting from the cell is applied to the surfaces, exactly? The author just writes "comparing the radiance directional derivative with the actual radiance direction", but this is pretty vague.
He mentions using a "central differencing scheme" and references these slides for the central differencing of SH coefficients; he also references this paper, which shows the derivation of the gradient, but right now I can't draw any useful conclusions from either.