I have been working on the Source Engine for the past few months with a wonderful team at Crowbar Collective on a project called Black Mesa. It is a remake of one of the best games of all time – Half-Life, which was originally released in 1998. One of the things I have been working on over the past couple of months is cascaded shadow maps.
Shadows are one of the most important aspects of making a virtual scene look realistic and making games more immersive. They provide key cues about object placement in the virtual world and can be pretty crucial to gameplay as well.
Source Engine (at least the version of the engine we have) doesn’t have a high-quality shadow system that works flawlessly on both static and dynamic objects. Even the static shadows from VRAD aren’t that great unless we use some crazy high resolutions for light maps, which greatly increases both the compile times and the map size. And shadows on models just don’t work properly, since models are all vertex lit. So we needed a dynamic, very high-quality shadow system, which is something very common nowadays in real-time rendering applications and games.
One of the most popular ways of implementing shadows is the shadow mapping algorithm. Cascaded Shadow Maps (CSM) extend that algorithm to generate high-quality shadows while avoiding the aliasing artifacts and other limitations of vanilla shadow mapping. For more details check this.
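The heart of a CSM setup is deciding where along the view frustum each cascade begins and ends. As a rough illustration (not the engine's actual code), here is the widely used "practical split scheme", which blends logarithmic and uniform split distances; the `blend` parameter and function name are my own:

```python
import math

def cascade_splits(near, far, cascade_count, blend=0.75):
    """Far-plane distance for each cascade, using the 'practical split
    scheme': a blend of logarithmic and uniform splits.
    blend=1.0 is fully logarithmic, blend=0.0 fully uniform."""
    splits = []
    for i in range(1, cascade_count + 1):
        t = i / cascade_count
        log_split = near * (far / near) ** t        # log distribution
        uni_split = near + (far - near) * t         # uniform distribution
        splits.append(blend * log_split + (1.0 - blend) * uni_split)
    return splits
```

Logarithmic splits concentrate shadow map resolution near the camera, where aliasing is most visible; the uniform term keeps distant cascades from becoming too large.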
Here’s a screenshot of one of the levels from the upcoming content update –
Working on a spherical harmonics maths code base to be used for lighting in my engine in the near future.
It’s been quite a while since I worked on my engine (or any other hardcore stuff, for that matter). I thought working on some maths would be a nice way to resume engine development.
Some screenshots of spherical harmonics visualizations generated from my code base –
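For reference, visualizations like these plot the real SH basis functions over the sphere. A minimal sketch of the first nine basis functions (bands 0–2), using the normalization constants common in graphics literature; this is illustrative, not my engine's code:

```python
def sh_basis_9(x, y, z):
    """First 9 real spherical harmonic basis functions (bands 0-2),
    evaluated at a unit direction (x, y, z)."""
    return [
        0.282095,                        # Y_0^0  (constant band)
        0.488603 * y,                    # Y_1^-1
        0.488603 * z,                    # Y_1^0
        0.488603 * x,                    # Y_1^1
        1.092548 * x * y,                # Y_2^-2
        1.092548 * y * z,                # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2^0
        1.092548 * x * z,                # Y_2^1
        0.546274 * (x * x - y * y),      # Y_2^2
    ]
```

Projecting a spherical function onto this basis (a weighted sum of samples) gives the 9 coefficients per color channel typically used for low-frequency lighting.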
Update – I uploaded the wrong screenshots by mistake earlier, so I have updated them.
Image Based Lighting is used to implement the ambient lighting that dynamic objects receive from the static objects in a game level. In most cases it is used for specular lighting, or reflections. In this process, lighting information at potential points of interest is stored in a special type of cube map called a light probe. These light probes are created from environment maps captured at the same locations, which are then blurred in some special way depending on the BRDF that will consume them at runtime. Each mipmap of a light probe contains a version of the environment map blurred by a different amount, depending on the roughness represented by that level. For example, if we use 11 mip levels (mip 0 through mip 10) to represent roughness from 0 to 1, then mip 0 will be blurred by a value representing roughness 0, mip 1 will represent 0.1, and so on; the last mip level, mip 10, will be blurred by a value representing a roughness of 1.0. This process is also called cubemap convolution. All of this is done as a pre-process, and the resulting light probes are fetched at runtime to enable reflections, or Image Based Lighting.
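Under the linear scheme just described, the mapping between roughness and mip level is a one-liner in each direction. A small sketch (function names are mine, and real renderers often use a non-linear mapping instead):

```python
def roughness_to_mip(roughness, mip_count):
    """Map a roughness in [0, 1] to a light-probe mip level, assuming
    each mip was pre-convolved for a linearly increasing roughness.
    With mip_count = 11 (mips 0..10), roughness 0.1 lands on mip 1."""
    return roughness * (mip_count - 1)

def mip_to_roughness(mip, mip_count):
    """Inverse mapping: the roughness to pre-convolve a given mip for."""
    return mip / (mip_count - 1)
```

At runtime the shader samples the probe with a trilinear fetch at the fractional mip returned by `roughness_to_mip`, blending between the two nearest pre-convolved levels.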
For the past few weeks, I have been working on Image Based Lighting. I have finally finished writing the features to generate dynamic cube maps and pre-convoluted probes for both diffuse (Lambert) and GGX-based specular.
There’s still a lot of work to do. Here’s a rough list of features (or problems) I will be working on next:
- Support for many IBL probes – I am thinking maybe 32
- Choosing the IBL probe (or probes) to be used for the current pixel being shaded – I am thinking of implementing something similar to point-light selection using tiled rendering. I can just replace the point-light calculation with cube map sampling for IBL. This way I can store all probes in a single array and bind all IBL probes to a single slot. I think this would be faster than finding the closest IBL probe for every object and passing it while shading. (Same benefits as deferred rendering over forward.)
- Parallax correction for local probes
- Realtime IBL probes? – I don’t know whether I would need this or SSR would be good enough.
- Fix seams at the lower MIP levels. I am not sure whether I should implement parabolic maps or some other technique like edge wrap.
- Improve pre-convolution performance by further randomizing the samples and using fewer samples.
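For the parallax-correction item on the list above, the usual approach is to intersect the reflection ray with a box proxy around the probe and re-aim the cube map lookup at the hit point. A hedged sketch of that idea (names are mine, and it assumes the shaded point lies inside the proxy box):

```python
def parallax_corrected_dir(pos, refl_dir, box_min, box_max, probe_pos):
    """Parallax-correct a reflection direction against a local probe's
    box proxy: intersect the reflection ray with the box, then aim the
    lookup direction from the probe's capture position to the hit point.
    All arguments are 3-tuples; refl_dir need not be normalized."""
    # Ray/AABB exit distance (slab method); assumes pos is inside the box.
    t_hit = float("inf")
    for axis in range(3):
        d = refl_dir[axis]
        if d > 1e-6:
            t = (box_max[axis] - pos[axis]) / d
        elif d < -1e-6:
            t = (box_min[axis] - pos[axis]) / d
        else:
            continue  # ray is parallel to this slab
        t_hit = min(t_hit, t)
    hit = tuple(pos[i] + refl_dir[i] * t_hit for i in range(3))
    return tuple(hit[i] - probe_pos[i] for i in range(3))
```

Without this correction, a local probe behaves as if captured at infinity, so reflections slide around as the camera moves; the box proxy pins them to the room's walls.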
I have been experimenting with tone mapping for a long time. I tried many different ways of implementing tone mapping & bloom, and I have now finalized this feature in the engine.
Tone mapping is a technique used in image processing and computer graphics to map one set of colors to another to approximate the appearance of high dynamic range images in a medium that has a more limited dynamic range.
I tried a few different global tone mapping operators with various ways of calculating scene luminance. I tried 3 major approaches (with a lot of variations in between) –
In the first approach, I calculated scene luminance by averaging the log-luminance values of the scene’s HDR render target. I did this by rendering a screen-space quad that computes log-luminance values for the scene, then used GenerateMips to average them automatically for me. I tried several tone mapping operators, including Reinhard, a filmic curve, and a modified filmic curve (Uncharted 2’s filmic curve). In the end, the modified filmic curve, with the ability to adjust the curve as needed, turned out to be the best tone mapping operator. The problem with this method is that a few extremely dark or bright spots in the scene can push the whole scene brighter or darker than the desired range.
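The two pieces of that first approach can be sketched as follows: the log-average (geometric mean) luminance that the mip chain converges to, and the "Uncharted 2" filmic curve with John Hable's published constants. This mirrors the idea, not the engine's actual shader code:

```python
import math

def log_average_luminance(luminances, delta=1e-4):
    """Geometric mean of pixel luminances via the average of log values --
    what repeatedly downsampling a log-luminance render target converges
    to. delta avoids log(0) on pure black pixels."""
    n = len(luminances)
    return math.exp(sum(math.log(delta + l) for l in luminances) / n)

def uncharted2_curve(x):
    """John Hable's 'Uncharted 2' filmic curve (the 'modified filmic
    curve'), with his published shoulder/toe constants."""
    A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def uncharted2_tonemap(x, white_point=11.2):
    """Normalize the curve so the chosen white point maps to 1.0."""
    return uncharted2_curve(x) / uncharted2_curve(white_point)
```

The constants A–F shape the toe and shoulder of the curve, which is what gives the "ability to adjust the curve as needed" mentioned above: tweaking them trades off shadow crush against highlight roll-off.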