For the past few months I have been working on the Source Engine with a wonderful team at Crowbar Collective on a project called Black Mesa. It is a remake of one of the best games of all time, Half-Life, which was originally released in 1998. One of the things I have been working on over the past couple of months is cascaded shadow maps.
Shadows are one of the most important aspects of making a virtual scene look realistic and making games more immersive. They provide key details of object placements in the virtual world and can be pretty crucial to gameplay as well.
The Source Engine (at least the version of the engine we have) doesn't have a high-quality shadow system that works flawlessly on both static and dynamic objects. Even the static shadows from VRAD aren't that great unless we use crazy high resolutions for lightmaps, which greatly increases both compile times and map size. And shadows on models just don't work properly since they are vertex lit. So we needed a dynamic, very high-quality shadow system, which is something very common nowadays in real-time rendering applications and games.
One of the most popular ways of implementing shadows is the shadow mapping algorithm. CSM, or Cascaded Shadow Maps, is an extension of that algorithm which generates high-quality shadows while avoiding the aliasing artifacts and other limitations of vanilla shadow mapping. For more details check this.
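To give a rough idea of how the cascades are set up, here is a minimal sketch (not our actual Source/Black Mesa code) of the widely used "practical split scheme", which blends logarithmic and uniform splits to decide how far into the view frustum each cascade reaches. The function name and the example parameters are purely illustrative assumptions.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical helper: computes the far distance of each cascade by blending a
// logarithmic and a uniform split distribution. nearZ/farZ are the camera range
// covered by shadows; lambda in [0,1] biases toward the logarithmic splits.
std::vector<float> ComputeCascadeSplits(int numCascades, float nearZ, float farZ, float lambda)
{
    std::vector<float> splits(numCascades);
    for (int i = 1; i <= numCascades; ++i)
    {
        float p        = static_cast<float>(i) / numCascades;
        float logSplit = nearZ * std::pow(farZ / nearZ, p); // logarithmic distribution
        float uniSplit = nearZ + (farZ - nearZ) * p;        // uniform distribution
        splits[i - 1]  = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
    return splits;
}

int main()
{
    // Example: 4 cascades covering 1..2000 units, biased toward the log splits.
    for (float d : ComputeCascadeSplits(4, 1.0f, 2000.0f, 0.75f))
        std::printf("cascade far plane: %.1f\n", d);
}
```

Each returned distance then becomes the far plane of one cascade's orthographic light projection, with the nearer cascades getting much higher effective shadow-map resolution.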
Here’s a screenshot of one of the levels from the upcoming content update –
One of the most popular volumetric lighting effects is SunShafts, also known as volumetric sun shadows or god rays.
An example from the real world (image taken from Wikimedia) –
Here's a screenshot from a video game, Crysis, which was the first game (for me) in which I saw SunShafts –
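For reference, a common way to get this effect is a screen-space radial blur: every pixel marches toward the sun's projected screen position and accumulates light from an occlusion buffer. Below is a small CPU-side sketch of that idea only (in a real game it lives in a pixel shader); SampleOcclusion and all the constants are placeholder assumptions, not our implementation.

```cpp
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Hypothetical stand-in for a texture fetch into a half-resolution occlusion
// buffer: bright (1.0) where the sky/sun is visible, dark (0.0) where geometry
// blocks it. Here it is just a disc around the sun so the example runs.
static float SampleOcclusion(Vec2 uv, Vec2 sunUV)
{
    float dx = uv.x - sunUV.x, dy = uv.y - sunUV.y;
    return (std::sqrt(dx * dx + dy * dy) < 0.1f) ? 1.0f : 0.0f;
}

// Per-pixel radial march toward the sun's projected screen position,
// accumulating unoccluded light with an exponential decay per sample.
static float GodRays(Vec2 uv, Vec2 sunUV, int numSamples, float density, float decay, float weight)
{
    Vec2 step = { (uv.x - sunUV.x) * density / numSamples,
                  (uv.y - sunUV.y) * density / numSamples };
    float illumination = 0.0f;
    float falloff = 1.0f;
    for (int i = 0; i < numSamples; ++i)
    {
        uv.x -= step.x;
        uv.y -= step.y;
        illumination += SampleOcclusion(uv, sunUV) * falloff * weight;
        falloff *= decay; // samples farther along the ray contribute less
    }
    return illumination;
}

int main()
{
    // Shade one pixel slightly off the sun at screen centre.
    float v = GodRays({0.6f, 0.55f}, {0.5f, 0.5f}, 64, 1.0f, 0.97f, 0.02f);
    std::printf("accumulated shaft intensity: %.3f\n", v);
}
```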
I'm working on a spherical harmonics maths code base to be used for lighting in my engine in the near future.
It's been quite a while since I worked on my engine (or any other hardcore stuff, for that matter). I thought working on some maths would be a nice way to resume engine development.
Some screenshots of spherical harmonics visualizations generated from my code base –
Update – I had uploaded the wrong screenshots by mistake, so they have now been replaced with the correct ones.
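For anyone curious what this kind of code base computes, here is a tiny illustrative sketch (not the code base itself) that evaluates the first three bands, i.e. the 9 coefficients most commonly used for diffuse lighting, of the real spherical harmonic basis for a unit direction.

```cpp
#include <cstdio>

// Evaluates the real spherical harmonic basis functions for bands 0-2 at a
// unit direction (x, y, z). The constants are the standard normalization
// factors for these bands; this is only a small example, not the engine code.
void EvalSH9(float x, float y, float z, float sh[9])
{
    sh[0] = 0.282095f;                         // l=0, m= 0
    sh[1] = 0.488603f * y;                     // l=1, m=-1
    sh[2] = 0.488603f * z;                     // l=1, m= 0
    sh[3] = 0.488603f * x;                     // l=1, m= 1
    sh[4] = 1.092548f * x * y;                 // l=2, m=-2
    sh[5] = 1.092548f * y * z;                 // l=2, m=-1
    sh[6] = 0.315392f * (3.0f * z * z - 1.0f); // l=2, m= 0
    sh[7] = 1.092548f * x * z;                 // l=2, m= 1
    sh[8] = 0.546274f * (x * x - y * y);       // l=2, m= 2
}

int main()
{
    float sh[9];
    EvalSH9(0.0f, 0.0f, 1.0f, sh); // direction straight up (+Z)
    for (int i = 0; i < 9; ++i)
        std::printf("sh[%d] = %f\n", i, sh[i]);
}
```

Projecting a lighting environment onto these basis functions and evaluating the result per vertex or per pixel is what makes SH a compact representation for low-frequency (diffuse) lighting.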
Image Based Lighting is used to implement the ambient lighting that dynamic objects receive from the static surroundings in a game level. In most cases, it is used for specular lighting or reflections. In this process, lighting information at potential points of interest is stored in a special type of cube map called light probes. These light probes are created from environment maps captured at the same locations, which are then blurred in a specific way depending on the BRDF [1] that will consume them at runtime. Each mip level of a light probe contains a version of the environment map blurred by a different amount, depending on the roughness represented by that level. For example, if we use 11 mip levels (mip 0 through mip 10) to represent roughness from 0 to 1, then mip 0 is blurred for roughness 0, mip 1 for roughness 0.1, and so on, with the last level, mip 10, blurred for roughness 1.0. This process is also called Cubemap Convolution [4][5]. All of this is done as a pre-process, and the resulting light probes are fetched at runtime to enable reflections or Image Based Lighting.
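As a concrete illustration of that roughness-per-mip idea, here is a small sketch; the function names and the simple linear mapping are assumptions for illustration, not the engine's actual convolution or lookup code.

```cpp
#include <cstdio>

// Roughness that a given mip of the probe was convolved for (linear mapping
// from mip index to the [0, 1] roughness range, as described above).
float MipToRoughness(int mip, int numMips)
{
    return static_cast<float>(mip) / static_cast<float>(numMips - 1);
}

// Inverse mapping used at runtime to pick the mip for a material's roughness.
float RoughnessToMip(float roughness, int numMips)
{
    return roughness * static_cast<float>(numMips - 1);
}

int main()
{
    const int numMips = 11; // mip 0 .. mip 10 covering roughness 0.0 .. 1.0
    for (int mip = 0; mip < numMips; ++mip)
        std::printf("mip %2d -> roughness %.1f\n", mip, MipToRoughness(mip, numMips));

    // A material with roughness 0.35 would sample between mip 3 and mip 4,
    // and trilinear filtering blends the two pre-convolved results.
    std::printf("roughness 0.35 -> mip %.1f\n", RoughnessToMip(0.35f, numMips));
}
```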
For the past few weeks, I have been working on Image Based Lighting. I have finally finished writing the features to generate dynamic cube maps and to generate pre-convolved probes for both diffuse (Lambert) and GGX-based specular.
There's still a lot of work to do. Here's a rough list of features (or problems) I will be working on next: