I joined the Blockade Labs Slack for some R’n’D and we got to talking about deriving lighting data from the AI-generated imagery. While working at VAU we had previously created a script that read a video feed and turned it into a lighting probe.
I figured this was no different. Maybe instead of matching the ambient lighting and adding multiple spotlights, there was a way to turn the actual AI-generated image into light probe data. That would make the realtime 3D meshes match the environments absolutely perfectly!
For this test, I chose a simple night-time kiosk location. As it has both dark and lit areas, it would be a perfect candidate for light probe baking.
The shadow pass painting is getting super easy. First I create a layer set to darken mode, then with a very soft brush at very low flow I color-pick nearby shadow areas and paint over all the directionally lit areas the player is likely to cast a shadow on.
Next, I duplicate the original painting and, using the “create clipping mask” command, nest the duplicate inside the shadow layer. I then desaturate it and apply a “high pass” filter so only the fine detail of the image remains. Finally, this layer is set to overlay mode, which brings all the lost detail back into the painted shadows.
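If you ever want to script this step instead of clicking through Photoshop, the whole layer stack boils down to three blend formulas. Here is a rough Python/Pillow sketch of the same math; the file names and blur radius are just placeholders, not part of the actual workflow.

```python
import numpy as np
from PIL import Image, ImageFilter

# Placeholder file names; any same-sized 8-bit RGB images will do.
base = np.asarray(Image.open("kiosk.png").convert("RGB"), np.float32) / 255.0
paint = np.asarray(Image.open("shadow_paint.png").convert("RGB"), np.float32) / 255.0

# Darken mode keeps the darker pixel, so the paint can only darken the image.
shadowed = np.minimum(base, paint)

# High pass: the image minus a blurred copy, re-centered on mid-gray.
gray = Image.open("kiosk.png").convert("L")
blurred = np.asarray(gray.filter(ImageFilter.GaussianBlur(4)), np.float32) / 255.0
detail = np.clip(np.asarray(gray, np.float32) / 255.0 - blurred + 0.5, 0.0, 1.0)
detail = detail[..., None]  # broadcast the single channel over RGB

# Overlay mode darkens below mid-gray and brightens above it,
# which pushes the recovered detail back into the painted shadows.
result = np.where(shadowed < 0.5,
                  2.0 * shadowed * detail,
                  1.0 - 2.0 * (1.0 - shadowed) * (1.0 - detail))

Image.fromarray((result * 255).astype(np.uint8)).save("shadow_pass.png")
```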
For the light probe based lighting, we will need a new texture: a fake HDR map for broadening the lighting’s dynamic range. We need to overdrive the bright areas for the probes to have more pop. Without this step the probe data is a bit bland, as the source material is not HDR.
For this special map, I will add a black layer on top of the image and set its opacity to 80%. Then I will take a duplicate of the original image, place it on top of the black layer, and using the blending options’ “blend if” feature I will expose only the bright areas of this layer. This broadens the dynamic range of the image by 6x, even though I will only store the file as an 8-bit PSD.
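Math-wise the stack is simple: the 80% black layer multiplies the base image by 0.2, and the blend-if sliders act like a soft luminance mask over the duplicate. A rough Python sketch of the idea, with guessed slider positions:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("kiosk.png").convert("RGB"), np.float32) / 255.0

# An 80%-opacity black layer compresses the base image to 20% brightness.
darkened = img * 0.2

# Approximate Photoshop's "blend if" sliders with a soft luminance mask:
# pixels below `lo` are hidden, above `hi` fully shown, smooth in between.
lum = img @ np.array([0.299, 0.587, 0.114], np.float32)
lo, hi = 0.6, 0.85  # assumed slider positions; tune per image
mask = np.clip((lum - lo) / (hi - lo), 0.0, 1.0)[..., None]

# Composite the bright areas of the original over the darkened base.
fake_hdr = darkened * (1.0 - mask) + img * mask

Image.fromarray((fake_hdr * 255).astype(np.uint8)).save("kiosk_fake_hdr.png")
```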
Again, the 3D vanishing point needs to be determined with fSpy. Once that is done, the file is ready to be imported to Blender 3D.
The modeling and texturing steps for this scene are the exact same ones I have to go through each time a new location is created. These take around an hour in total. This post covers the Blender step in more detail.
With a couple of clicks we have the 3D version of the scene in Unity, complete with the hand-painted shadow pass but no lights yet. This time we will do the lighting differently!
There is one spotlight in the scene, but its intensity is so low it is not visible on the surfaces. The only reason for this light is to provide shadow information for my custom shader.
The next step is to create a light probe group. This will be used to actually render the lights on the characters moving through the scene. There will not be much actual direct lighting in the scene. Almost all of the lighting will come from the painting.
This is where the fake HDR image will come in handy.
In engine, the fake HDR is set to the emissive channel of a lit texture with an intensity of 10. It looks absolutely horrible, but once baked to the lighting probes, it actually creates quite a pleasing light. I need to seriously exaggerate the emissive values to get harsh enough lighting for the probes, so it feels like direct light.
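As a side note on why this works at all: light probes store incoming light as a handful of spherical harmonic coefficients, so cranking the emissive mostly just scales those coefficients up until the brightest direction reads like a direct light. Here is a toy Python sketch of that projection with a made-up kiosk-like radiance function; it is an illustration of the principle, not Unity’s actual bake code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(n):
    # Uniformly distributed directions on the unit sphere.
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def sh_basis(d):
    # Real spherical harmonic basis for bands 0 and 1 (4 coefficients).
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    c0 = np.full_like(x, 0.2820948)  # Y_0^0
    return np.stack([c0, 0.4886025 * y, 0.4886025 * z, 0.4886025 * x], axis=1)

def emissive_radiance(d):
    # Toy stand-in for the painting: a bright "kiosk" lobe facing +x,
    # overdriven 10x the way the emissive material is in the engine.
    patch = np.clip(d[:, 0], 0.0, 1.0) ** 8  # narrow bright lobe
    return 10.0 * patch + 0.05               # plus a dim night ambient

n = 100_000
dirs = sample_sphere(n)
radiance = emissive_radiance(dirs)

# Monte Carlo projection: integrate radiance * basis over the sphere.
coeffs = (4.0 * np.pi / n) * (radiance[:, None] * sh_basis(dirs)).sum(axis=0)
print(coeffs)  # one scalar per SH coefficient (per color channel in a real probe)
```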
I hid the emissive version of the environment and placed some spheres in the scene to preview the probe-based lighting. It looks pretty good: the lights of the kiosk are casting light on the spheres and they get a soft bounce from the ground. The area behind the kiosk is sufficiently dark.
Here is a 3D sphere moving through the image-derived probe lighting.
It looks pretty good. It is not perfect, but it is totally usable for the game! The probe-based lighting marries the 3D meshes a lot better to the 2D image. It also allows the 2D image to actually cast light on the meshes without having to manually place lights on every emissive surface. This is a better-looking and more performant solution!