Abstract
Convincing illumination is a major component of visual immersion in virtual worlds. For dynamic environments, the direct and indirect scattering of light must adapt to changes in both the geometry and the light sources, a computationally demanding task for typical untethered VR systems or mobile platforms. In this paper we explore a simple yet effective approach to generate real-time, responsive illumination effects due to the diffuse indirect scattering of light in the virtual space, based on the well-studied approach of instant radiosity and the notion of virtual point lights (VPLs), which simulate global illumination via direct lighting. However, contrary to the instant radiosity method, where VPLs are spawned on demand by tracing photons from the light sources, here the VPLs are statically defined and represent the geometric and material properties of the underlying environment. Dynamic updates to the VPLs’ intensity allow for fast reflected light estimation, taking into account light source visibility. We provide full details and evaluation of the methodology and easy-to-use source code to apply the technique in the Unity game engine.
1 Introduction
Physically correct or at least plausible illumination is one of the key components of visual immersion in virtual spaces. The simulation of detailed light transport in virtual environments is one of the longest-standing goals of 3D graphics. In the case of interactive rendering, where image buffer preparation must be completed in a small fraction of a second, developers resort to simplifications and approximations of correct, or even merely admissible, lighting in order to maintain a frame rate comfortable for the end user. These typically include the precalculation (or “baking”) of indirect and sometimes direct illumination, at least for static parts of a virtual scene, but also go as far as the aggressive elimination of shadow calculations and very crude shading calculations, especially in the case of low-end platforms. A typical untethered VR headset falls in the latter category and, while its stereo rendering, wide field of view, intuitive interaction and freedom of motion substantially improve immersion, its visuals often fall short of delivering a credible environment.
Dynamic lighting, especially when it involves a reaction to a user action, helps bind the user to the virtual environment, as it strengthens the cause and effect relationship within it. Furthermore, it produces imagery that is in line with expectation from real-life observations and naturally makes the environment more familiar and convincing.
In this paper, we attempt to enhance the experience and presence of a user of a typical VR application, by exploring a simple yet effective approach to generate real-time, responsive illumination effects due to the diffuse indirect scattering of light in the virtual space (see examples in Fig. 1). The approach is intended for use with mid- and low-end graphics hardware, such as untethered VR headsets and mobile GPUs in general, which have a limited practical support for dynamic lighting computations. Furthermore, this is a technique that can be used on top of an already implemented game engine, such as Unity (Unity Technologies 2022), combining the estimated indirect illumination with direct and indirect lighting computed otherwise, with no modification whatsoever to the graphics back end.
More specifically, we revisit the principle behind a well-known approach to global illumination, instant radiosity (Keller 1997), which was developed in a time when rendering hardware resources were limited and was further extended to support complex light simulations in many derivative works, e.g. Walter et al. (2005), Dong et al. (2009).
For non-metallic surfaces, most of the back-scattered light from their surface comes from the light that is transmitted through the surface interface and re-emerges after being selectively absorbed (and hence pigmented) by the subsurface medium. This diffuse illumination exhibits a near-uniform scattering of light. We are particularly interested in this type of light–surface interaction, since it constitutes a significant part of the indirect lighting in a typical environment and its uniform scattering simplifies many calculations in the underlying light transport model.
In the original method, in a preprocessing pass, a number of virtual photons are traced into the scene and when each one strikes a surface, a virtual point light (VPL) is created that acts as a new light source and represents the diffuse light reflected off the surface at the hit location. The process can be repeated recursively to model multiple light bounces. During rendering, each VPL is treated as a normal light source and indirect illumination is thus replaced by direct illumination from the computed VPLs. In derivative VPL-based global illumination methods, the VPL’s emissive attributes were configured to match the reflectance response of the surface’s material or extended to virtual spherical lights with directional emission (Hašan et al. 2009), to support reflective scattering on glossy finishes, leading to a good approximation of generalized light transport in many scenarios.
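The spawning step of instant radiosity can be summarized in a short sketch. The following Python fragment is illustrative only (it is not the paper's implementation); `scene_intersect` is a placeholder callback for any ray tracer, and all names are ours. It spawns single-bounce diffuse VPLs for an omnidirectional emitter, with each VPL's emission set to \(\rho /\pi\) times the incident radiance times the incidence cosine:

```python
import math, random

def spawn_vpls(light_pos, light_radiance, scene_intersect, num_photons, rng=random.Random(42)):
    """Spawn one diffuse VPL per photon at the first surface hit (single bounce).

    `scene_intersect(origin, direction)` is an assumed callback returning
    (hit_point, normal, albedo) or None; any real tracer can stand in for it.
    """
    vpls = []
    for _ in range(num_photons):
        # Uniformly sample a direction on the unit sphere (omni light assumed).
        z = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        hit = scene_intersect(light_pos, d)
        if hit is None:
            continue
        x, n, albedo = hit
        cos_theta = max(0.0, -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2]))
        # Diffuse VPL emission: rho/pi * incident radiance * cos(theta_x).
        emission = tuple(a / math.pi * light_radiance * cos_theta for a in albedo)
        vpls.append({"pos": x, "normal": n, "emission": emission})
    return vpls
```

Recursing on the hit points with directions sampled from the diffuse lobe would model further bounces, as the text describes.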
Instead of computing the location of multiple VPLs, which is a costly operation to perform in real-time, we let the user define the reflectance and scattering lobe at sparse locations in the environment in the form of “static VPLs”. At run time, the emission of these VPLs is dynamically adjusted to account for the actual direct light that reaches these locations. We provide two distinct diffuse global illumination solutions, that trade accuracy for versatility and can handle both animated lighting and geometry, accounting for color bleeding, incident light direction and approximate visibility. They are very lightweight and can be combined with other indirect illumination techniques, such as reflection and irradiance probes. We have successfully applied our approach to different virtual environment settings and have used the methodology to approximate real-time global illumination in an educational VR production with historical content, which has been thoroughly evaluated by end users of all ages and is currently used in exhibitions and courses.
2 Background and related work
2.1 Real-time global illumination
Today, modern graphics pipelines may support dynamically updated global illumination, either by exploiting the ray tracing hardware to directly compute indirect lighting per shaded fragment, using path tracing and real-time de-noising (Áfra 2024), or by performing high-fidelity radiance caching, updated via approximate (Hart 1996) or exact ray tracing (Eto et al. 2023). Despite the impressive visual quality offered by such solutions, the computational cost is prohibitive for low-end GPUs or rendering environments with very strict frame rate and latency constraints, such as virtual reality. Restricting the indirect illumination computations to diffuse-only scattering can significantly reduce the cost, by exploiting a sparse scheme of volumetric radiance caching (Greger et al. 1998) that can be updated in real time, as in Vardis et al. (2014). Precomputed radiance transfer variants (Sloan et al. 2002), such as Kristensen et al. (2005), which encode and store the energy exchange potential but resolve the actual received lighting interactively, are also popular, especially for localized light interactions and sky lighting. A different line of work attempts to capture near-field global illumination effects and highly directional scattering, such as glossy reflections, using screen-space ray marching (McGuire and Mara 2014). This approach has a low cost, but produces view-dependent artifacts and relies on a deferred shading pipeline (Akenine-Möller et al. 2018). Certain rendering solutions implemented in popular game engines, such as the Lumen system of the Unreal Engine (Epic Games 2024), combine many of the above techniques to deliver convincing interactive global illumination for games and other visually demanding experiences. Still, the overhead induced by all these approaches can push the time budget beyond the fast update requirements of VR.
This is why real-time graphics applications, such as computer games and VR productions often rely on pre-computed lighting via light maps, irradiance maps and reflection probes (Akenine-Möller et al. 2018). However, radical changes in the environment and lighting cannot be properly modeled and supported by “baked” lighting.
2.2 Instant radiosity
Given a typical punctual light source, as used for real-time updates in rendering pipelines and game engines, the instant radiosity approach first spawns a number of VPLs within the light’s emission lobe, by tracing photons from the source’s location towards the virtual environment and registering the closest hits \(\mathbf{x}\) (see Fig. 2—column 1). If area lights are supported, a Monte Carlo approach is used to approximate the contribution of the entire emitter surface by light samples (see “Appendix A”). New VPLs are generated at the hit locations in the form of hemispherical light sources, aligned with the normal vector of the surface \(\mathbf{n}\) at position \(\mathbf{x}\) (see Fig. 2—column 2). Assuming here a diffuse-only surface with base color (albedo) \(\rho (\mathbf{x})\), the emission of a VPL spawned at the first light bounce is:

\(L_{VPL}(\mathbf{x}) = \frac{\rho (\mathbf{x})}{\pi }\, L_e(\mathbf{y},\mathbf{y}\rightarrow \mathbf{x})\, \cos \theta _x \quad (1)\)
where \(L_e(\mathbf{y},\mathbf{y}\rightarrow \mathbf{x})\) is the radiance of the emitter (pre-multiplied by any weighting factor, if sampled) towards the VPL position \(\mathbf{x}\), \(\mathbf{y}\) is the light source emission location and \(\theta _x\) is the angle between the incident direction and the surface normal (VPL direction). Although instant radiosity primarily addresses diffuse inter-reflections, extensions of the method also set up and simulate light transport of glossy VPLs (Davidovič et al. 2010).
VPLs can be set up once and reused, or updated dynamically, when a change in the affected environment or the light source occurs. An update to the VPLs’ location requires the tracing of photons (rays) to determine the new position, normal and reflectance characteristics of each VPL. This is an expensive process, unsuitable for lightweight rendering cycles, unless some temporal amortization is utilized, which limits the responsiveness of the method.
In the rendering phase, indirect lighting is replaced by direct lighting from the VPLs, leading to a typical many-light rendering situation, where even hundreds or thousands of light sources must be iterated and sampled for lighting in order to shade the visible surfaces. Many modern rendering approaches have been proposed to address or mitigate the resulting rendering complexity, including tiled and clustered rendering architectures (Olsson et al. 2012) and reservoir sampling (Bitterli et al. 2020). Still, even for a small number of light sources, lighting passes in a typical rendering engine can become a bottleneck for VR devices and therefore, preemptive light culling is employed prior to shading. In effect, this means that very few VPLs are sustainable in practice.
In order to get a correct result, visibility of the reflected light (i.e. the VPLs here) must be accounted for, when applying the rendering equation to compute the contribution of incident light to any point in the environment. This is demonstrated in Fig. 2, in columns 3 through 5, where single-bounce indirect lighting due to VPLs with and without shadows enabled is compared to the path tracing estimator for the same setup. However, due to the complexity of computing visibility for all the resulting VPLs, e.g. using the shadow maps algorithm (Eisemann et al. 2011), we can disable shadow estimation for VPLs to trade accuracy for a significant increase in rendering performance.
Instant radiosity inspired an entire branch of rendering algorithms, including VPL image-space splatting (Dachsbacher and Stamminger 2006) and reflective shadow maps (Dachsbacher and Stamminger 2005). Given a rendering architecture that can efficiently handle many light sources, such as tiled or clustered rendering (Olsson et al. 2012), instant radiosity maps well to the rendering hardware, provided that indirect shadowing (i.e. shadows from the VPLs’ point of view) is not performed, which can significantly impact performance.
3 Method overview
Illustration of the two alternative approaches to approximate instant radiosity. Left: energy from the light position and emission cone is distributed to predefined static VPLs. Right: A phantom VPL is generated at the (approximate) intersection of a spotlight’s emission axis with the environment. Its emission characteristics are interpolated from the predefined static VPLs, which are only used as proxy surface points and not for lighting.
The main idea behind the approximate global illumination approach presented here is that, instead of dynamically spawning VPLs based on photon tracing, VPLs are predefined and attached on static and animated geometry, with their position, direction and reflectance determined beforehand. At run time, only their emissive intensity is modulated according to their spatial arrangement relative to the light source. VPLs with emission below a certain threshold are culled (disabled) and do not contribute to the shading computations. The idea is illustrated in Fig. 3—left.
A variant of the approach, suitable only for spotlights, which have a concentrated beam of light, is to perform a single fast ray intersection of the spotlight’s emission axis with either the geometry or simplified, proxy collision targets and generate a new VPL there. To decouple the emission characteristics of this VPL from the actual geometry and allow for fast VPL computation, the new, dynamic VPL adopts a weighted combination of the emission direction and encoded reflectance from the predefined static VPLs, as illustrated in Fig. 3—right. The benefit of this variant is the more accurate positioning of the source of reflected light in space and the drastic reduction in active light sources in the scene, since only a single dynamic VPL is present per spotlight, while all user-defined VPLs only act as surface proxies to estimate the dynamic VPLs’ attributes from.
It is worth noting that both variants can support indirect shadows in the form of shadow maps from the VPLs’ point of view, if rendering budget permits (see Fig. 4).
4 Method details
In the literature, and in compliance with the light transport calculations that are based on the rendering equation, VPL emission is calculated in terms of exitant radiance (see “Appendix”), which for purely diffuse surfaces is constant. However, here we will attempt to approximate computations leading to VPL radiant intensity, so that the resulting VPLs are compatible with the rendering pipeline of many typical game engines, such as Unity (Unity Technologies 2022). Next, an approximate visibility computation for the light reaching the statically defined VPLs is presented, followed by the phantom VPL variant of the method. Finally, we present a small extension to simulate secondary indirect light bounces, by computing the contribution of a suitably posed proxy fill light.
4.1 VPL intensity
Let a set \(\mathcal {V}\) of tuples \(V_i=(\mathbf{x}_i,\mathbf{n}_i,I_i)\) be the user-defined static hemispherical VPLs, which represent the surfaces’ “potential” to reflect light in a given direction. \(\mathbf{x}_i\) is the position of the VPL, \(\mathbf{n}_i\) is the axis of its symmetrical emission lobe and \(I_i\) the estimated radiant intensity of the VPL. All VPL attributes, apart from intensity, are user-specified at scene composition to better match the underlying geometry. Let also \(I(\mathbf{y}, \mathbf{y}\rightarrow \mathbf{x}_i)\) be the radiant intensity of a light source for which we generate VPLs, parameterized by the emission direction from its location \(\mathbf{y}\) towards the VPL’s position \(\mathbf{x}_i\). \(I(\cdot )\) is dependent on the light source type and emission profile and includes all factors, such as cosine-based attenuation for planar emitters (Lambert’s cosine law), IES profile intensity distribution, etc. The diffuse exitance of point \(\mathbf{x}_i\) due to light \(\mathbf{y}\) is given by Theoharis et al. (2008):

\(M(\mathbf{x}_i) = \rho (\mathbf{x}_i)\, E(\mathbf{x}_i) \quad (2)\)
where \(E(\mathbf{x}_i)\) is the irradiance at \(\mathbf{x}_i\) due to the light source. Assuming now that the VPL represents the uniform reflected light over the surface for the neighborhood of the VPL with area \(A_i\), we can approximate the VPL’s maximum intensity at normal direction \(\mathbf{n}_i\) as:

\(I_i = \frac{M(\mathbf{x}_i)\, A_i}{\pi } = \frac{w_i\, a\, \rho (\mathbf{x}_i)\, E(\mathbf{x}_i)}{\pi } \quad (3)\)
\(w_i\) is a unit-less weighting factor and a is a unit conversion factor so that \(A_i=w_i a\). This conversion helps us associate the impact of the VPL to indirect lighting according to a unit-less weight we can derive from the spatial relation of the VPLs.
Now accounting for Lambert’s cosine law for radiant intensity, the VPL’s output intensity in a direction \(\mathbf{l}\) is:

\(I_i(\mathbf{l}) = I_i\, \max (0, \mathbf{n}_i \cdot \mathbf{l}) \quad (4)\)
For the first variant of the proposed approach, at run time one simply has to update \(I_i\) for all VPLs according to the weights \(w_i\) and the current pose of the light source and then proceed to perform shading with the active VPLs. VPLs with \(I_i<I_{cull}\) are discarded. In our implementation, \(I_{cull} = 0.01\) W/sr.
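This per-frame update can be condensed into a minimal, language-agnostic sketch (Python here, not the paper's Unity C# script; all names, and the use of scalar albedo and intensity for brevity, are our own). For each static VPL it evaluates the irradiance from the light, scales it by the VPL's albedo and area weight, and culls VPLs below the stated threshold:

```python
import math

I_CULL = 0.01  # W/sr, culling threshold from the text

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def length(a): return math.sqrt(dot(a, a))

def update_static_vpls(vpls, light_pos, light_intensity_fn, area_unit=1.0, visibility_fn=None):
    """Per-frame update of the static VPLs' radiant intensity (first variant).

    Each VPL is a dict with 'pos', 'normal', 'albedo' and 'weight' (w_i).
    `light_intensity_fn(direction)` models I(y, y->x_i) of the source;
    `visibility_fn(x, y)` is the optional soft-blocker term of Sect. 4.2.
    """
    for vpl in vpls:
        to_vpl = sub(vpl["pos"], light_pos)
        d = length(to_vpl)
        w_dir = tuple(c / d for c in to_vpl)
        cos_theta = max(0.0, -dot(w_dir, vpl["normal"]))  # incidence cosine at x_i
        irradiance = light_intensity_fn(w_dir) * cos_theta / (d * d)  # E(x_i)
        if visibility_fn is not None:
            irradiance *= visibility_fn(vpl["pos"], light_pos)
        # I_i = w_i * a * rho * E / pi  (peak intensity of the cosine lobe)
        vpl["intensity"] = vpl["weight"] * area_unit * vpl["albedo"] * irradiance / math.pi
        vpl["active"] = vpl["intensity"] >= I_CULL
    return vpls
```

In the actual Unity implementation the same computation sets the spotlight intensity of each registered VPL once per frame.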
The last quantity that we need to determine is \(w_i\). Since VPLs are positioned by the user, there is no practical way to ensure a correct deterministic estimation of the surface they represent. In the case of adequate surface coverage, i.e. when enough VPLs have been placed to roughly cover all significant surfaces in the accessible environment, one could approximate a VPL’s area, and consequently \(w_i\), by computing the area of a disk whose radius is half the distance to the closest other VPL. Such a computation is expensive in an environment where VPL positions are expected to change, e.g. when VPLs are attached to moving objects, unless some form of temporally amortized estimation is performed (see implementation in Sect. 5.1). It is often preferable to encode \(w_i\) in a user-defined scalar parameter of the VPL source. In many implementations, we have access to such parameters, usually as lighting scaling factors. For instance, in Unity, where we have implemented this technique, we can employ a spotlight’s declared intensity as storage for \(w_i\). During scene processing, this value can be read and maintained. At run time, a new intensity is computed and applied to the light source, as per Eq. 3.
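The brute-force version of this nearest-neighbour weighting can be sketched as follows (Python for illustration; half the nearest-neighbour distance is used as the disk radius, matching the convention of the implementation in Sect. 5.1, and the unit conversion factor `area_unit` plays the role of \(a\)):

```python
import math

def estimate_weights(positions, area_unit=1.0):
    """Brute-force O(N^2) estimate of each static VPL's area weight w_i.

    A VPL is assumed to cover a disk whose radius is half the distance to
    its nearest neighbour, so A_i = pi * (d_min / 2)^2 and w_i = A_i / a.
    """
    weights = []
    for i, p in enumerate(positions):
        d_min = min(math.dist(p, q) for j, q in enumerate(positions) if j != i)
        weights.append(math.pi * (d_min * 0.5) ** 2 / area_unit)
    return weights
```

Neighbouring disks then touch rather than overlap, which keeps the summed represented area close to the covered surface.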
4.2 Approximate visibility
In instant radiosity, VPLs are generated on the first visible surface from their point of origin, so once established, they always correspond to some fraction of the incident, non-zero luminous energy. Since VPLs here are static and only represent the reflective potential of the surfaces, we must compute visibility to the light source in order to exchange energy with it. However, the VPLs here have no access to actual visibility information with respect to the light source and therefore, their configuration is optionally complemented with a set of approximate “soft” light blockers.
To intercept the light from the source, we set up a number of spherical light-attenuating blobs, which diminish the contribution of the light source to each VPL, according to the proximity of the connecting light path segment to the blob’s center. More precisely, if \(\mathbf{y}\) is the light source’s position, \(\mathbf{b}_j\) is the location of the j-th blocker with radius \(r_j\) and \(\mathbf{x}_i\) is the i-th VPL, the non-binary visibility function \(V(\mathbf{x}_i, \mathbf{y})\) is:

\(V(\mathbf{x}_i, \mathbf{y}) = \prod _j \min \left( 1, \frac{d_{seg}(\mathbf{x}_i, \mathbf{y}, \mathbf{b}_j)}{r_j}\right) \quad (5)\)
where \(d_{seg}( \mathbf{x}_0,\mathbf{x}_1,\mathbf{q})\) denotes the closest distance between a linear segment \((\mathbf{x}_0,\mathbf{x}_1)\) and a point \(\mathbf{q}\).
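The soft visibility term can be sketched in a few lines of Python (illustrative; the linear falloff inside each sphere is our assumption, since the exact attenuation profile is a design choice, and all names are ours):

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def d_seg(x0, x1, q):
    """Closest distance between segment (x0, x1) and point q."""
    seg = tuple(b - a for a, b in zip(x0, x1))
    t = dot(tuple(c - a for a, c in zip(x0, q)), seg) / max(dot(seg, seg), 1e-12)
    t = min(1.0, max(0.0, t))  # clamp to the segment extents
    closest = tuple(a + t * s for a, s in zip(x0, seg))
    return math.dist(closest, q)

def soft_visibility(x, y, blockers):
    """Product of per-blocker attenuations along the light path segment.

    `blockers` is a list of (center, radius) tuples; overlapping blockers
    multiply together and hence further reduce visibility.
    """
    v = 1.0
    for center, radius in blockers:
        v *= min(1.0, d_seg(x, y, center) / radius)
    return v
```

A segment passing through a blocker's center yields zero visibility, while a segment farther than one radius away is unaffected.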
The principle is shown in Fig. 5, top—left and demonstrated in the remaining insets of the figure. As Eq. 5 implies, the spherical blockers can overlap to further reduce visibility or approximate more complex occluders. Furthermore, light blockers can be dynamic objects themselves or dependent on other moving geometry. Light blockers need not be placed everywhere, as the goal is plausible rather than accurate lighting. Therefore, only the most experience-breaking indirect shadowing needs to be captured. This is demonstrated in the bottom row of Fig. 5; visibility of the light source from the moving static VPL is only reduced when the vehicle is behind large volumes.
Approximate visibility estimation. Top row, from left to right: the static VPL’s access to incident light is diminished by the interference of spherical blockers, set up to correspond to scene obstacles. The VPL in the first example inset is not occluded by the spherical blocker, whereas next, it is almost completely obscured. The rightmost inset shows light leaking, when the blocker is disabled. Middle row: Application example of the light blockers in a VR production. The VPL attached to the side of the truck transitions between different states of occlusion with respect to the directional sun light, as the vehicle passes near obstacles for which light blocking spheres have been set up. Bottom row: The manually positioned blockers for the example of the second row.
The approximate visibility estimation process is only relevant to the first variant of the method and not to the variant using a ray-traced phantom VPL, discussed next.
4.3 Phantom virtual point light
In this variant of the method, a single virtual point light, called a phantom VPL here, is actually computed by the intersection of the light beam’s axis with the scene. The initially defined static VPLs are disabled, and the phantom VPL takes their place. However, instead of performing an expensive ray-geometry intersection test to determine the hit point, a rough estimate of the ray intersection is computed, using user-defined crude geometry proxies, such as convex colliders or simplified meshes. This variant is appropriate only for spotlights, since no single VPL position can adequately represent reflected radiance from an omni-directional or directional light source. The benefit of using a phantom VPL, whose position is dynamically updated to track the spotlight’s beam, is the more accurate and focused light bounce.
Due to the fact that only the phantom VPL’s position \(\mathbf{x}\) is approximately determined via ray intersection, we have no access to surface and material information. Instead, the rest of the VPL parameters, i.e. the hemispherical VPL axis \(\mathbf{n}\), the surface base color \(\rho (\mathbf{x})\) and area weight w, are interpolated from the N static VPLs, according to inverse distance weighting with weights \(b_i\):

\(b_i = \frac{\Vert \mathbf{x} - \mathbf{x}_i\Vert ^{-2}}{\sum _{k=1}^{N} \Vert \mathbf{x} - \mathbf{x}_k\Vert ^{-2}} \quad (6)\)
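The interpolation can be sketched as follows (Python; the dictionary keys and the epsilon clamp that guards against a query point coinciding with a static VPL are this sketch's conventions, not the paper's):

```python
import math

def idw_attributes(x, static_vpls, eps=1e-6):
    """Interpolate phantom-VPL attributes from the static VPLs using
    inverse squared distance weights b_i (scalar albedo for brevity)."""
    raw = [1.0 / max(math.dist(x, v["pos"]) ** 2, eps) for v in static_vpls]
    total = sum(raw)
    b = [r / total for r in raw]
    # Blend the emission axis, base color and area weight.
    n = [sum(bi * v["normal"][k] for bi, v in zip(b, static_vpls)) for k in range(3)]
    n_len = math.sqrt(sum(c * c for c in n)) or 1.0
    return {
        "normal": tuple(c / n_len for c in n),  # renormalized blended axis
        "albedo": sum(bi * v["albedo"] for bi, v in zip(b, static_vpls)),
        "weight": sum(bi * v["weight"] for bi, v in zip(b, static_vpls)),
    }
```

Because the weights fall off with the squared distance, a phantom VPL near a static VPL effectively adopts that VPL's attributes, while elsewhere the blend varies smoothly, as noted in the text.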
Approximate direct visibility makes no sense in the case of the phantom VPL, since by definition, it should be at a location visible to the emitter. Additionally, due to interpolation of VPL parameters, the emission cone may not be perpendicular to the underlying surface geometry. However, due to the tightness of the interpolation factors (relative inverse squared distances), the VPL axis is well-behaved near defined static VPLs and smoothly varying everywhere else.
4.4 Approximate secondary bounces
In confined spaces or where the directly lit surface is close to other, indirectly illuminated ones, there is a very noticeable illumination feedback involving many surfaces, including the primarily lit area, due to secondary light bounces exchanging significant energy (see example in Fig. 6).
Example of one- and two-bounce approximate global illumination using the phantom VPL approach in an enclosed space (prison cell). Screenshots are captured from the VR production “Block 15: A Virtual Journey into a Grim Past” (Athens University 2024).
Extending the method to support higher-order light transport would involve implementing an energy exchange mechanism among the VPLs (first variant) or tracing additional VPLs (second variant). To avoid a computational overhead disproportionate to the visual impact, we approximately simulate this light interaction by setting up another phantom VPL to play the part of an imaginary second-bounce reflector. This phantom VPL is easily set up to either follow the primary phantom VPL or the weighted average of the static VPL constellation, according to the chosen method variant. More specifically, if static VPLs are used, the relative contributions of each one’s position, orientation and emission to the secondary bounce VPL are set to be proportional to its intensity:
where \(\rho _{ave}\) is a user-defined estimate of the average environment albedo, which depends on the materials used in the environment and \(d_{sec}\) is the desired distance of the secondary bounce VPL from the average static VPL position. This is also a user-defined parameter, roughly corresponding to the average distance between facing surfaces in the environment. Keep in mind that it is not crucial to carefully choose the values of \(\rho _{ave}\) and \(d_{sec}\), in order to achieve a convincing visual result.
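A rough sketch of the static-VPL-based secondary bounce could look as follows. We stress that every expression here is an illustrative assumption reconstructed from the description above (intensity-weighted averaging, an offset of \(d_{sec}\), scaling by \(\rho _{ave}\)), not the paper's exact formulas:

```python
import math

def secondary_bounce_vpl(static_vpls, rho_ave, d_sec):
    """Place an imaginary second-bounce VPL at the intensity-weighted mean
    of the static VPLs, pushed a distance d_sec along their blended axis
    and flipped to shine back towards the primarily lit surfaces."""
    total_i = sum(v["intensity"] for v in static_vpls) or 1.0
    w = [v["intensity"] / total_i for v in static_vpls]
    pos = [sum(wi * v["pos"][k] for wi, v in zip(w, static_vpls)) for k in range(3)]
    axis = [sum(wi * v["normal"][k] for wi, v in zip(w, static_vpls)) for k in range(3)]
    n = math.sqrt(sum(c * c for c in axis)) or 1.0
    axis = [c / n for c in axis]
    return {
        "pos": tuple(p + d_sec * a for p, a in zip(pos, axis)),
        "normal": tuple(-a for a in axis),   # faces the primary lit surfaces
        "intensity": rho_ave * total_i,      # crude second-bounce scaling
    }
```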
For the case of the phantom VPL variant, setting up the secondary bounce VPL is more intuitive, since it solely depends on the already computed single phantom VPL:
5 Implementation and evaluation
5.1 Implementation in Unity
We implemented the approximate global illumination approach in Unity (see Fig. 7), since this was the game engine of choice for the realization of our VR production. The production is a historical, story-driven, interactive experience about the infamous Block 15 of the Haidari concentration camp in Athens, the largest and most notorious German concentration camp in Nazi-occupied Greece during the Second World War (Benardou et al. 2022).
All static VPLs were implemented as Unity spotlights, whose emission characteristics were programmatically set up during application initialization and updated in every frame. In particular, any spotlight encountered within any active game object group named “VPLS” is registered as a static VPL. Likewise, omnidirectional (point) light sources found within any group named “BLOCKERS” are considered occluding primitives in the sense of Sect. 4.2.
To model the hemispherical, cosine-weighted outgoing intensity of the VPLs, VPL spotlights were configured to have a wide emission cone (170 degrees) and their intensity is modulated by a symmetrical cosine-weighted mask (a light “cookie” in Unity). The number of maximum pixel lights to render was increased from the default value of 2 to 6, to avoid culling important visible VPLs.
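One possible way to generate such a cosine-weighted cookie mask offline is sketched below (Python; the resolution, the mapping of the cone angle to the texture radius, and all names are assumptions of this sketch rather than the actual asset pipeline):

```python
import math

def cosine_cookie(size=64, cone_half_angle_deg=85.0):
    """Generate a square cosine-falloff mask usable as a light cookie.

    The half-angle of the 170-degree cone maps to the texture edge, so a
    texel's value approximates cos(theta) of the hemispherical lobe.
    """
    half = size / 2.0
    mask = []
    for py in range(size):
        row = []
        for px in range(size):
            # Radial distance from the cookie centre, normalised to [0, 1].
            r = math.hypot(px + 0.5 - half, py + 0.5 - half) / half
            theta = min(1.0, r) * math.radians(cone_half_angle_deg)
            row.append(max(0.0, math.cos(theta)))
        mask.append(row)
    return mask
```

The resulting 2D array would then be uploaded as a grayscale cookie texture on each VPL spotlight.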
The implementation optionally supports indirect occlusion, via low-resolution shadow maps, enabled for each VPL. Despite the low resolution of the shadow map, submitting the scene geometry for rendering N times, where N is the number of static VPLs, can be costly for large environments and many VPLs. Therefore, indirect shadowing is more suitable for the phantom VPL variant.
As discussed in Sect. 4.1, the weighting factors \(w_i\), which represent the surface area each VPL corresponds to, can be reasonably associated with the distance \(d_{min}\) to the closest other VPL, in the form of the area of a disk with radius \(d_{min}/2\). In our Unity script, we allow for the automatic estimation of the weights \(w_i\) and provide a simple switch to enable it through the component’s properties. Without any particular acceleration data structure, however, this operation has a complexity of \(O(N^2)\). We could opt to compute the weights once, during initialization, but this would render the weights irrelevant to the true spacing of the static VPLs once objects start to move in a dynamic environment. Instead, we temporally amortize the cost of the estimator by performing a single comparison and weight update per frame update cycle, i.e. requiring about \(N^2\) frames to fully update all weights. Obviously, one can choose to aggregate more computations per cycle, such as fully updating a single weight.
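The amortization scheme can be illustrated with a small stateful helper (Python sketch, not the Unity C# script; the class and key names are ours). Each call performs at most one pair comparison and commits a weight whenever a VPL's row of comparisons completes:

```python
import math

class AmortizedWeightEstimator:
    """Spread the O(N^2) nearest-neighbour search over many frames:
    one VPL-pair comparison per update() call, as described in Sect. 5.1."""

    def __init__(self, vpls, area_unit=1.0):
        self.vpls = vpls
        self.a = area_unit
        self.i, self.j = 0, 0          # current VPL and candidate neighbour
        self.d_min = math.inf

    def update(self):
        # Compare VPL i against candidate neighbour j (skip the self-pair).
        if self.j != self.i:
            d = math.dist(self.vpls[self.i]["pos"], self.vpls[self.j]["pos"])
            self.d_min = min(self.d_min, d)
        self.j += 1
        if self.j == len(self.vpls):   # row finished: commit w_i, move on
            self.vpls[self.i]["weight"] = math.pi * (self.d_min / 2.0) ** 2 / self.a
            self.i = (self.i + 1) % len(self.vpls)
            self.j = 0
            self.d_min = math.inf
```

Calling `update()` once per frame keeps the per-frame cost constant, at the price of weights that lag the true VPL spacing by up to \(N^2\) frames.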
The method code is implemented as a single Unity C# script that should be attached to the primary light source. It is possible to attach the script to multiple light sources, but this only makes sense for the phantom VPL case, since the VPL updates of the first variant globally affect the static VPLs themselves. The code is publicly available at https://github.com/cgaueb/fakeIR, under an MIT license.
5.2 Static VPL scene population
The method requires that the scene designer spend some time placing the light sources that represent the static VPLs in the environment and potentially attaching them to moving geometry. For a moderate scene and some familiarity with the concept of bounce lighting, manually placing static VPLs may take from 5 to about 20 minutes, given that static VPLs can be easily replicated and repositioned. For large or complex environments, though, this process requires proportionally more time.
To expedite the workflow and allow non-expert users to take advantage of the method, a simple Editor script is provided to automatically populate the scene with static VPLs. The static VPL population process attempts to discover scene locations that would be accessible to the light source(s). It simultaneously performs empty-space exploration, to position a moving virtual source, and photon tracing from it, thus avoiding the placement of static VPLs in obscured locations. The process can be repeated multiple times to accumulate static VPLs for different sources. Generated VPLs automatically adopt the surface position, orientation and albedo of the intersected geometry, respecting base color textures, if present.
Additionally, VPLs are clustered according to proximity, direction and reflectance, to avoid unnecessary overpopulation and undesirable behavior in automatic weighting (see Sect. 5.1). Controls for the overall VPL population, density (spacing), photon tracing depth and normal bias are provided, as shown in Fig. 8. Normal bias retracts the VPL position towards the inside of the hit surface, so that spurious bright spots caused by VPLs created very close to adjacent surfaces are eliminated.
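A simple greedy strategy conveys the idea of this clustering step (Python sketch; the greedy scheme, the thresholds and scalar albedo are our assumptions, not the Editor script's exact algorithm):

```python
import math

def cluster_vpls(vpls, pos_eps=0.5, cos_eps=0.9, albedo_eps=0.1):
    """Greedy clustering of generated VPLs by proximity, direction and
    reflectance, keeping one representative per cluster to avoid
    overpopulating the scene with near-duplicate VPLs."""
    reps = []
    for v in vpls:
        merged = False
        for r in reps:
            close = math.dist(v["pos"], r["pos"]) < pos_eps
            aligned = sum(a * b for a, b in zip(v["normal"], r["normal"])) > cos_eps
            similar = abs(v["albedo"] - r["albedo"]) < albedo_eps
            if close and aligned and similar:
                merged = True   # an existing representative covers this VPL
                break
        if not merged:
            reps.append(v)
    return reps
```

Only VPLs that agree in all three attributes collapse into one representative, so differently oriented or differently colored nearby surfaces keep separate VPLs.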
5.3 Evaluation
5.3.1 Performance
We measured the framerate, process time and GPU load of the application running with and without the GI computations on a Meta Quest 2 VR headset. The respective light sources for the VPLs and the blockers were also disabled in the application running with GI off. For the measurements, we relied on the profiler of the Meta Quest Developer Hub and the on-board statistics captured by the device, with detailed measurements recorded and analyzed in the accompanying Perfetto service.
To objectively measure the impact of our method, we executed the VR application with and without the relevant script and static VPLs enabled. The additional light sources and the VPL state update routine (called once per frame) did not incur a noticeable impact on the application when run on the VR headset. The process overhead was below the standard deviation of the measured CPU process time of the non-GI application version. Shading overhead was slightly increased, due to the additional lighting computations in the pixel shader, but did not impact the framerate significantly.
5.3.2 Quality
A qualitative comparison among the resulting images of two scenes, rendered with the diffuse global illumination estimator of the Unity engine and the two variants of our method, is presented in Fig. 9; a single frame of the camera path animation was frozen and Unity’s lightmap baking system was used for generating the (static) illumination (middle insets). The same instance was used for rendering the global illumination using our method (left and right insets). For completeness, we also provide a measurement of the perceptual difference of the two approximate GI variants against the baked diffuse indirect lighting of the reference image, using the method of Andersson et al. (2020).
Single-shot comparison of indirect lighting. Middle: reference 2-bounce path-traced GI using Unity's light simulator for baked lighting. Left: approximate GI using the static VPL variant. Right: approximate GI with a single phantom VPL. The example in the top row uses only a single light bounce in all variants, whereas the second includes two bounces. A perceptual difference map between the reference shot and the two approximate GI variants is provided, using the method of Andersson et al. (2020).
As ours is an approximate method, there are evident differences in the appearance of certain geometric features, noticeable even without the difference maps of Fig. 9. First, when light interaction involves detailed and closely positioned parts of the geometry, near-field color bleeding is not adequately captured. Second, when using the single-phantom-VPL approach, locations behind the VPL do not receive any first-bounce lighting. This is demonstrated in the top example of Fig. 9, where, in contrast to both the path-traced solution and the static VPL variant, there is a dark zone behind the VPL, here positioned on the statue. The effect is largely mitigated when the approximate secondary bounces are turned on, as in the second example.
When inspecting the rendered results of Fig. 9, one noticeable yet misleading difference is the presence of highlights due to indirect lighting in our method, where none exist in the Unity lightmapper results. This is because the VPLs, as implemented here using Unity lights, also contribute to the specular part of the local illumination model, whereas the irradiance stored in the light maps does not. Unity provides light probes for the specular term; however, lighting from these is greatly affected by their manually chosen position in the environment, so they were omitted from the tests.
Despite the difference noted above, all major light transport events are successfully captured by our method and, most importantly, dynamically updated. Although a side-by-side comparison with a more accurate, ground-truth approach would reveal discrepancies, in a practical real-time application we are primarily interested in plausible dynamic illumination, not necessarily a physically accurate one.
Furthermore, in Fig. 10, the effectiveness of the static VPL visibility is demonstrated in a stress test. A fully enclosed room with a single door that can dynamically swing open has been built, with static VPLs placed on the interior surfaces and on the door mesh. Both the door and the facade of the building also bear a number of spherical blockers to intercept sunlight (a directional light source) when the door is closed. The test confirms that, when the door is shut, the interior VPLs register no perceivable incident light and therefore emit none, as expected. As the door swings open, light floods the room in proportion to the unblocked doorway area.
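The blocker test can be illustrated with the following sketch; the data layout and function names are our own assumptions, and the actual implementation operates on Unity light sources and colliders:

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Ray-sphere intersection test (direction assumed normalized)."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    sq = np.sqrt(disc)
    # hit counts only if at least one intersection lies in front of the origin
    return (-b - sq) > 0.0 or (-b + sq) > 0.0

def vpl_incident_visibility(vpl_pos, light_dir, blockers):
    """Approximate visibility of a directional light from a static VPL:
    1 if no spherical blocker intercepts the shadow ray, else 0.
    Assumed layout: blockers = [(center, radius), ...]."""
    to_light = -np.asarray(light_dir, float)  # shadow ray towards the sun
    to_light = to_light / np.linalg.norm(to_light)
    for center, radius in blockers:
        if ray_hits_sphere(np.asarray(vpl_pos, float), to_light,
                           np.asarray(center, float), radius):
            return 0.0
    return 1.0
```

In the door stress test, the blockers attached to the door move with it, so the shadow rays of the interior VPLs become progressively unblocked as the door opens.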
Stress test of the approximate VPL visibility. A completely dark room is gradually lit as the door opens to let the directional light of the sun in. Left: the VPL and blockers setup in Unity. Red markers: stationary blockers. Green markers: blockers attached to the door. White spotlights: the static VPLs.
5.3.3 Variant comparison
As shown in Fig. 11, the phantom VPL approach produces more convincing, localized scattering for spotlight sources, since its position consistently follows the source beam. However, due to the interpolation of reflectance from the predefined static VPLs, a single value for reflected color is determined when the beam crosses over patches of different albedo, washing out and merging the hue of the reflected light. Conversely, the basic static VPL technique works inherently better for directional and point lights, where there is no single point of focus, or where moving reflective surfaces are involved.
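A possible sketch of the reflectance interpolation for the phantom VPL follows; inverse-distance weighting over the static VPLs is an assumption for illustration, as the exact kernel is not prescribed here:

```python
import numpy as np

def interpolate_phantom_vpl(hit_point, static_positions, static_albedos, eps=1e-6):
    """Blend the reflectance of a phantom VPL from the nearby static VPLs.
    Inverse-distance weighting is an assumed kernel; this is what washes
    out the hue when the beam straddles patches of different albedo."""
    d = np.linalg.norm(static_positions - hit_point, axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()
    return (w[:, None] * static_albedos).sum(axis=0)
```

When the beam hit lies exactly on a static VPL, the interpolated value converges to that VPL's albedo; between two VPLs of different color, the blend is a mixture of both hues, which matches the merging behavior described above.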
5.3.4 Comparison with instant radiosity
Since the proposed methodology is directly related to the basic instant radiosity method, it is natural to include a comparison with the latter. To this end, we built an interactive IR implementation, again using the scripting facilities of Unity. For omnidirectional and spotlight emitters, a user-defined number of photons is spawned and a single-bounce set of VPLs is traced into the scene. For directional light sources, photons are spawned outside the bounding box of the environment, tightly covering the entire scene from the incident direction. For simplicity, a uniform photon distribution is used in all cases. To improve temporal stability, all random directions and positions are precomputed at application launch and reused in every frame. For direct comparison purposes, we also use here the approximate secondary-bounce scheme of Sect. 4.4.
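The directional-light photon spawning of this IR baseline can be sketched as follows (illustrative Python with names of our own choosing; a fixed seed stands in for the precomputed random samples reused every frame):

```python
import numpy as np

def spawn_directional_photons(bbox_min, bbox_max, light_dir, n_photons, seed=0):
    """Precompute photon origins for a directional source: uniformly cover
    the scene's bounding sphere from the incident direction, just outside it.
    The fixed seed yields identical samples every frame (temporal stability)."""
    rng = np.random.default_rng(seed)
    bbox_min = np.asarray(bbox_min, float)
    bbox_max = np.asarray(bbox_max, float)
    center = 0.5 * (bbox_min + bbox_max)
    radius = 0.5 * np.linalg.norm(bbox_max - bbox_min)
    d = np.asarray(light_dir, float)
    d /= np.linalg.norm(d)
    # orthonormal basis (u, v) perpendicular to the light direction
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    # uniform samples on a disc that tightly covers the bounding sphere
    r = radius * np.sqrt(rng.random(n_photons))
    phi = 2.0 * np.pi * rng.random(n_photons)
    offsets = r[:, None] * (np.cos(phi)[:, None] * u + np.sin(phi)[:, None] * v)
    origins = center - d * (radius * 1.01) + offsets
    return origins, d
```

Each origin is then traced along the light direction into the scene to place a first-bounce VPL at the nearest hit.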
Photons for IR need to be traced down to the primitive level to obtain the material properties of each intersected triangle, look up the base color texture or material color and interpolate the normal vector at each hit point. Our method, on the other hand, only requires a single approximate intersection with the scene, and only in the case of the phantom VPL variant; this intersection can be performed using proxy collision geometry, typically available for general collision detection. All material attributes have already been defined and stored in the static VPLs, either by the user or by an automatic VPL placement preprocessing stage (see Sect. 5.2). The impact of the intersections on performance is first demonstrated in the Ruins and Cell examples of Fig. 12. In the Ruins example, both the phantom VPL and a single IR photon are traced using full mesh collisions, resulting in nearly identical performance. In the Cell example, however, the phantom VPL intersections are computed using proxy geometry. The Spindle and Road examples require multiple photons to even marginally capture the illumination effects, severely impacting performance, as indicated by the reported framerate measurements.
Additionally, the mechanism adopted for the static VPLs is more akin to a gathering process (the static VPL update) than a spawning one, as in standard IR. This means that our approach is inherently more stable and requires far fewer VPLs, especially for moving geometry. In the case of IR, the wide beam spread and the moving object of the Spindle example require 20 VPLs to adequately sample the geometry, and even then yield only a marginally temporally stable result. On the other hand, the environment subtended by the directional light source of the Road example cannot be sufficiently sampled even with 200 VPLs, leading to strong illumination irregularities. The issue is further accentuated when the renderer automatically culls light sources during rendering for efficiency, as Unity does.
Geometry and reflectance sampling is also generally an issue with typical IR methods when casting a small number of photons, since undersampling of the geometric detail and textures may lead to a poor representation of the true reflected light field. The problem is demonstrated in the Ruins example of Fig. 12: sparsely sampling the wall texture for the single-photon tracing results in color bleeding with a hue shifted relative to the overall neighborhood appearance, and in a VPL normal that constantly changes direction as the beam moves over the surface. Our method, which can spawn a large number of photons during preprocessing and cluster the resulting VPLs, provides a weighted contribution of the initial, dense reflection locations, not unlike hierarchical many-light approaches. The same holds for manual VPL placement, since the VPLs are by design representative of the nearby surfaces. Furthermore, in the case of the phantom VPL variant, where a single VPL is used, its values are smoothly interpolated over the static VPLs, drastically reducing undesirable artifacts.
A final remark on standard IR concerns its inability to coexist with baked lighting. VPLs need to contribute to both dynamically updated objects and static geometry, for which indirect light has been pre-computed. If the light source for which VPLs are traced already contributes to the static illumination, VPL lighting interferes with the baked illumination, producing incorrect (over-lit) results. This is demonstrated in the Road example of Fig. 12: traced VPLs illuminate the covered section of the road and the truck, for which baked sun and sky lighting have already been accounted.
5.3.5 User evaluation
In the 3 years that the VR application has been in development, we have regularly performed evaluations with diverse user groups (teenagers, people with limited experience in gaming or VR, older people, students, exhibition center visitors, etc.). The approximate global illumination feature was added in last year's iteration and has been evaluated by a total of 27 participants, mostly humanities students, of whom 15 had no previous VR experience, 7 had very limited experience, 3 identified as gamers and 2 were experienced users. The average age of participants was 26.6 years. The evaluation covered many aspects of the VR production, via an online questionnaire and semi-structured interviews. There were no questions targeting the particular effect, as it would have been too difficult to explain to a general audience; we instead emphasized the broader improvement of immersion due to lighting. In particular, to avoid drawing the evaluators' attention to the specific scenes where our method was used, and thereby biasing their responses, we split the question into two parts, as discussed below and summarized in Fig. 13.
First, we asked whether the illumination of the virtual spaces helped them immerse themselves in the grim nature of the scenario, taking the focus off the dynamically illuminated environment. To this, 85.1% responded that illumination was indeed impactful (answering "a lot" or "definitely"), while the rest considered it less important ("somewhat"). Next, we asked which of the independent scenes they thought was the most dramatic and immersive, and why. We left the justification as an open question to let users express their own opinion, since the questionnaire was a general one, also intended to evaluate the impact of other factors on the VR experience. In the production, the action takes place at 6 locations within the concentration camp, shown at the bottom of Fig. 13. Four of them were indicated by users as the most immersive, with the dark cell (Cell 8) receiving substantially more votes than the rest (see the left association matrix in Fig. 13). In summary, most users (61%) attributed this choice to a mixture of a sense of claustrophobia, dislocation and dramatic lighting. The latter, however, is known to artificially contribute to the other two and is deliberately used here to this effect.
Several scenes using the approximate global illumination methodology (see example figures) were subjected to a more thorough inspection by 4 computer graphics experts not associated with the development team. They were asked whether the resulting global illumination is plausible, whether the parameterization of the script made sense, and whether the entire workflow of manual VPL placement and light scripting was (a) tedious and (b) intuitive. They all found the results convincing and the methodology intuitive to work with, in terms of scene development. However, 3 out of 4 reported that manually placing the VPLs was more involved than expected, requiring several iterations to get the illumination coverage right. As it happened, the fourth participant was already using a similar workflow to complement the global illumination system in productions where specific (yet static) lighting conditions had to be attained; for him, it was a fluent process.
When presented with the automatic VPL placement tool, all computer graphics experts agreed that it vastly improved the overall workflow. However, all participants performed position adjustments and reflectance corrections. Three out of four started from the automatically provided solution and proceeded to simplify the lighting configuration, removing subjectively unimportant VPLs.
6 Conclusions
We presented a simple technique for real-time approximate diffuse global illumination, based on the well-known approach of instant radiosity. Instead of emitting photons from the light source(s) and evaluating their direct illumination to simulate indirect lighting, here the virtual point lights are statically defined and represent the geometric and material properties of the underlying environment, in a geometry-less manner. Dynamic updates of the VPLs or the instantiation of a single phantom VPL, drawing properties from the static VPLs, allow for fast reflected light estimation, including visibility checks. The method relies on the placement of VPLs at a few key locations in the environment, representative of the bulk of the major reflectors. The static VPLs can be manually positioned or automatically computed and potentially refined by hand, if required, as discussed in Sect. 5.2.
Overall, the overhead introduced by our approximate global illumination technique is small, making it well suited to untethered VR systems, which demand low latency and must handle stereo rendering at comfortable framerates. Its independence from any particular graphics pipeline (e.g. forward or deferred rendering) also makes it compatible with general mobile and web-based graphics implementations and platforms. One of its attractive features is that it can be combined with other global illumination techniques, as demonstrated in all test examples and in Fig. 14, where baked illumination, procedural lighting, low-frequency diffuse real-time light probe lighting, and screen-space reflection and refraction are combined with the phantom VPL technique to render an underwater environment for desktop VR.
Combination of our method with other illumination techniques to render an underwater environment. In addition to our technique for rendering the indirect lighting for the moving spotlight, the scene includes static diffuse indirect lighting via lightmaps, fake caustics and screen-space reflection and refraction.
Data availability
Not applicable.
Code availability
The code is publicly available at https://github.com/cgaueb/fakeIR, under an MIT license.
Materials availability
Not applicable.
References
Áfra AT (2024) Intel® Open Image Denoise. https://www.openimagedenoise.org
Akenine-Möller T, Haines E, Hoffman N, Pesce A, Iwanicki M, Hillaire S (2018) Real-time rendering, 4th edn. A K Peters, Natick
Andersson P, Nilsson J, Akenine-Möller T, Oskarsson M, Åström K, Fairchild MD (2020) A difference evaluator for alternating images. Proc ACM Comput Graph Interact Tech 3(2):15:1–15:23
Athens University of Economics and Business, Information Processing Lab. (2024). Block 15: a virtual journey into a grim past VR production. Retrieved July 30, 2024, from https://block15.aueb.gr/
Benardou A, Droumpouki AM, Papaioannou G (2022) First-person interactive experience of a concentration camp: the case of block 15. In: Benardou A, Droumpouki AM (eds) Difficult heritage and immersive experiences (first). Routledge, London
Bitterli B, Wyman C, Pharr M, Shirley P, Lefohn A, Jarosz W (2020) Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting. ACM Trans Graph 39(4). https://doi.org/10/gg8xc7
Dachsbacher C, Stamminger M (2005) Reflective shadow maps. In: Proceedings of the 2005 symposium on interactive 3D graphics and games, pp 203–231. https://doi.org/10.1145/1053427.1053460
Dachsbacher C, Stamminger M (2006) Splatting indirect illumination. In: Proceedings of the 2006 symposium on interactive 3D graphics and games, pp 93–100. https://doi.org/10.1145/1111411.1111428
Davidovič T, Křivánek J, Hašan M, Slusallek P, Bala K (2010) Combining global and local virtual lights for detailed glossy illumination. ACM Trans Graph. https://doi.org/10.1145/1882261.1866169
Dong Z, Grosch T, Ritschel T, Kautz J, Seidel H-P (2009) Real-time indirect illumination with clustered visibility. In: Vision, modeling, and visualization workshop
Eisemann E, Schwarz M, Assarsson U, Wimmer M (2011) Real-time shadows, 1st edn. A. K. Peters, Ltd., Natick
Epic Games (2024) Unreal engine (Version 5.4). https://www.unrealengine.com
Eto K, Meunier S, Harada T, Boissé G (2023) Real-time rendering of glossy reflections using ray tracing and two-level radiance caching. In: SIGGRAPH Asia 2023 technical communications. https://doi.org/10.1145/3610543.3626167
Greger G, Shirley P, Hubbard P, Greenberg D (1998) The irradiance volume. IEEE Comput Graph Appl 18(2):32–43. https://doi.org/10.1109/38.656788
Hart J (1996) Sphere tracing: a geometric method for the antialiased ray tracing of implicit surfaces. Vis Comput 12(10):527–545. https://doi.org/10.1007/s003710050084
Hašan M, Křivánek J, Walter B, Bala K (2009) Virtual spherical lights for many-light rendering of glossy scenes. In: ACM SIGGRAPH Asia 2009 papers. https://doi.org/10.1145/1661412.1618489
Kajiya JT (1986) The rendering equation. SIGGRAPH Comput Graph 20(4):143–150. https://doi.org/10.1145/15886.15902
Keller A (1997) Instant radiosity. In: SIGGRAPH ’97: proceedings of the 24th annual conference on computer graphics and interactive techniques, pp 49–56. https://doi.org/10.1145/258734.258769
Kristensen AW, Akenine-Möller T, Jensen HW (2005) Precomputed local radiance transfer for real-time lighting design. ACM Trans Graph 24(3):1208–1215. https://doi.org/10.1145/1073204.1073334
McGuire M, Mara M (2014) Efficient GPU screen-space ray tracing. J Comput Graph Tech 3(4):73–85
Olsson O, Billeter M, Assarsson U (2012) Clustered deferred and forward shading. In: Dachsbacher C, Munkberg J, Pantaleoni J (eds) Eurographics/ACM SIGGRAPH symposium on high performance graphics. The Eurographics Association. https://doi.org/10.2312/EGGH/HPG12/087-096
Sloan P-P, Kautz J, Snyder J (2002) Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. ACM Trans Graph 21(3):527–536. https://doi.org/10.1145/566654.566612
Theoharis T, Papaioannou G, Platis N, Patrikalakis NM (2008) Graphics & visualization–principles and algorithms. A K Peters, Natick
Unity Technologies (2022) Unity (Version 2022.1.9) Game development platform. https://unity.com/
Vardis K, Papaioannou G, Gkaravelis A (2014) Real-time radiance caching using chrominance compression. J Comput Graph Tech 3(4):111–131
Walter B, Fernandez S, Arbree A, Bala K, Donikian M, Greenberg DP (2005) Lightcuts: a scalable approach to illumination. ACM Trans Graph 24(3):1098–1107. https://doi.org/10.1145/1073204.1073318
Funding
This work has been supported by the Federal Republic of Germany Embassy in Greece, German—Greek Fund for the Future, Grant No. 11362501.
Contributions
Not applicable.
Ethics declarations
Conflict of interest
The author has no relevant financial or non-financial interests to disclose.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Supplementary file 1 (mp4 48192 KB)
Appendix A: emission of virtual point lights
To derive the light (outgoing radiance) that comes from a VPL in instant radiosity, we first need to calculate the radiance \(L(\mathbf{x}, \omega _o)\) reflected off any surface point \(\mathbf{x}\) towards an arbitrary direction \(\omega _o\), due to light \(L_e(\mathbf{y},\mathbf{y}\rightarrow \mathbf{x})\) from all points \(\mathbf{y}\) on a light emitter with area \(S_e\). For this purpose, we use the surface integral form (Theoharis et al. 2008) of the rendering equation (Kajiya 1986) and limit the integration domain to only extend over the surface of the light source:
\[L(\mathbf{x}, \omega _o) = \int _{S_e} f(\mathbf{x}, \omega _o, \mathbf{x}\rightarrow \mathbf{y})\, L_e(\mathbf{y},\mathbf{y}\rightarrow \mathbf{x})\, V(\mathbf{x},\mathbf{y})\, \frac{\cos \theta _x \cos \theta _y}{\Vert \mathbf{x}-\mathbf{y}\Vert ^2}\, \mathrm{d}A_y\]
where \(V(\cdot )\) is the visibility between the receiving point \(\mathbf{x}\) and the sample on the source surface \(\mathbf{y}\), \(f(\cdot )\) is the bidirectional reflectance distribution function and \(\theta _x\) and \(\theta _y\) are the angles between the connecting segment \((\mathbf{x},\mathbf{y})\) and the normal vector at \(\mathbf{x}\) and \(\mathbf{y}\), respectively.
We can simplify the above integral if we assume a uniformly scattering surface with no glossy or specular reflection, leading to a constant \(f(\cdot ) = \rho (\mathbf{x})/\pi\) and an output that is independent of \(\omega _o\), only affected by the surface base color \(\rho (\mathbf{x})\). Typically, the integral is estimated via Monte Carlo integration by drawing a given number of samples \(N\) on the emitter with distribution \(p(\cdot )\):
\[L(\mathbf{x}) \approx \frac{\rho (\mathbf{x})}{\pi } \frac{1}{N} \sum _{i=1}^{N} \frac{L_e(\mathbf{y}_i,\mathbf{y}_i\rightarrow \mathbf{x})\, V(\mathbf{x},\mathbf{y}_i)\, \cos \theta _x \cos \theta _{y_i}}{\Vert \mathbf{x}-\mathbf{y}_i\Vert ^2\, p(\mathbf{y}_i)}\]
Further assuming a punctual light source, as is often the approximation in real-time rendering, we drop the sum and the dependence on \(\theta _y\), although the emission direction still affects the illumination via \(L_e(\mathbf{y},\mathbf{y}\rightarrow \mathbf{x})\) (e.g. for spotlights):
\[L(\mathbf{x}) = \frac{\rho (\mathbf{x})}{\pi }\, L_e(\mathbf{y},\mathbf{y}\rightarrow \mathbf{x})\, V(\mathbf{x},\mathbf{y})\, \frac{\cos \theta _x}{\Vert \mathbf{x}-\mathbf{y}\Vert ^2}\]
For distant light sources, such as sunlight, we further drop the dependence on distance, since radiance is practically measured at the receiving surface, or near it:
\[L(\mathbf{x}) = \frac{\rho (\mathbf{x})}{\pi }\, L_e(\mathbf{y}\rightarrow \mathbf{x})\, V(\mathbf{x},\mathbf{y})\, \cos \theta _x\]
Now, given that a VPL has been traced as a photon from the light source at \(\mathbf{y}\) towards the nearest hit on the geometry, \(V(\cdot )\) is always 1, so the first-bounce diffuse VPL at location \(\mathbf{x}\) with normal vector \(\mathbf{n}\) has an emission equal to:
\[L_{\mathrm{VPL}}(\mathbf{x}) = \frac{\rho (\mathbf{x})}{\pi }\, L_e(\mathbf{y},\mathbf{y}\rightarrow \mathbf{x})\, \frac{\cos \theta _x}{\Vert \mathbf{x}-\mathbf{y}\Vert ^2}, \quad \cos \theta _x = \max \left( \mathbf{n}\cdot \frac{\mathbf{y}-\mathbf{x}}{\Vert \mathbf{y}-\mathbf{x}\Vert }, 0\right)\]
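The derivation in this appendix can be checked numerically. The sketch below (our own illustrative code) estimates the surface-integral form for a small square emitter via uniform Monte Carlo sampling (so \(p = 1/\text{area}\), with \(V = 1\)) and compares it against the punctual-light simplification, which should agree closely as the emitter shrinks:

```python
import numpy as np

def mc_area_light(x, n_x, light_center, light_normal, half_size,
                  Le, albedo, N=50000, seed=1):
    """Monte Carlo estimate of the diffuse reflected radiance from a square
    area light, uniformly sampled (p = 1/area), V assumed 1 (unoccluded)."""
    rng = np.random.default_rng(seed)
    # tangent basis spanning the emitter plane
    t = np.cross(light_normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(t) < 1e-6:
        t = np.cross(light_normal, [0.0, 1.0, 0.0])
    t /= np.linalg.norm(t)
    b = np.cross(light_normal, t)
    uv = rng.uniform(-half_size, half_size, size=(N, 2))
    y = light_center + uv[:, :1] * t + uv[:, 1:] * b
    area = (2 * half_size) ** 2
    seg = y - x
    dist2 = (seg * seg).sum(axis=1)
    w = seg / np.sqrt(dist2)[:, None]
    cos_x = np.clip(w @ n_x, 0.0, None)
    cos_y = np.clip(-(w @ light_normal), 0.0, None)
    integrand = Le * cos_x * cos_y / dist2
    return (albedo / np.pi) * area * integrand.mean()

def punctual_light(x, n_x, y, I, albedo):
    """Punctual simplification: L = (rho/pi) * I * cos(theta_x) / r^2."""
    seg = np.asarray(y) - np.asarray(x)
    dist2 = seg @ seg
    cos_x = max(np.dot(seg / np.sqrt(dist2), n_x), 0.0)
    return (albedo / np.pi) * I * cos_x / dist2
```

For an emitter facing the receiver head-on, the equivalent punctual intensity is \(I = L_e \cdot \text{area} \cdot \cos \theta _y\), and the two estimates match to within a fraction of a percent for a small emitter.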
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Papaioannou, G. Approximate dynamic global illumination for VR. Virtual Reality 29, 54 (2025). https://doi.org/10.1007/s10055-025-01114-3