How to improve shadows?

I already applied a normal map and a specular map for better
pixel-based lighting and reflections. On curved objects, this helps
avoid the need for a high-poly mesh.

see: NormalsAndSpecMapApplied.jpg
Here a normal map is applied using a TBN (tangent-space) matrix,
together with a specular map for reflections (which contains zero intensity in this sample,
so the roughness is not applied either). I would use roughness this way:

importance = saturate(importance - roughness);

Do you think that is appropriate?

NormalMapWithFresnelSpecularReflection.jpg
Here only the normal map and “Fresnel Reflectance” are used
(as shown in 1.7. Tutorial 6 in OptiX_Quickstart_Guide_5.0.0.pdf).

Shadow_AnyHit.jpg
I tried “Shadowing Transparent” (as shown in 1.11. Tutorial 10),
but instead of “shadow_attenuation” I simply use the pixel color
from the related texture of this material.

However, there are still some issues with it:

  1. How can I remove the shadow of the cow (which appears again inside the shadow of the “boat”)?
  2. The shadow border is not smooth. How could I smooth it without more light sources? Is there any way?
  3. Color bleeding: is there a way without using a path tracer / photon mapper? How should I do that?
  4. I also have an ambient occlusion map for the material, but I don’t really know where to start
    when adding it to the lighting. Is it recommended to add it to a ray tracer at all? Any suggestions?


I use roughness this way: importance = saturate(importance - roughness); Do you think that is appropriate?

Roughness is a material appearance property which is hard to express in a Whitted ray tracer, especially in reflections of other objects.
What you’re doing does not really result in surface roughness from scattered rays; it just changes the threshold at which the Whitted renderer stops the recursion. That basically only works for the diffuse (Lambert) and specular materials present in your scene, because paths stop at diffuse hits anyway and continue on non-black specular materials.
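To make that concrete, here is a minimal standalone C++ sketch (assumed names, not the actual sample code) of what the subtraction effectively does: it only moves the point at which the recursion terminates.

```cpp
#include <algorithm>
#include <cassert>

// Toy sketch (assumed names, not the actual sample code): in a
// Whitted-style renderer, "importance" only gates whether tracing
// continues. Subtracting roughness lowers that threshold sooner, but
// no rays are scattered, so the surface does not actually look rougher.
float saturate(float x) { return std::min(std::max(x, 0.0f), 1.0f); }

bool continueRecursion(float importance, float roughness, float cutoff)
{
    float i = saturate(importance - roughness);
    return i > cutoff;  // below the cutoff the recursion simply stops
}
```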

I tried “Shadowing Transparent” (as shown in 1.11. Tutorial 10), but instead of “shadow_attenuation” I simply use the pixel color from the related texture of this material.

That’s not how opaque materials work. You mean if the pixel color were white, there wouldn’t be a shadow? That may work with alpha transparency, but not with the diffuse color.

1. How can I remove the shadow of the cow (which appears again inside the shadow of the “boat”)?

If the cow shouldn’t throw a shadow, use an any_hit program on the shadow ray type for the material on the cow which simply calls rtIgnoreIntersection() and nothing else. That will make the cow material invisible to shadow rays.

2. The shadow border is not smooth. How could I smooth it without more light sources? Is there any way?

Soft shadows in a ray tracer come from area lights and more rays shot to accumulate a shadow coverage value. Anything else would be hacky rasterizer solutions.
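As a rough illustration of that idea (a hypothetical standalone sketch, not OptiX code), the shadow coverage at a shading point can be estimated by shooting shadow rays toward many sample points on a square area light and averaging their visibility; the occluder here is a single hard-coded sphere.

```cpp
#include <cmath>
#include <cassert>

// Hypothetical sketch: average the visibility of shadow rays toward a
// grid of sample points on a 2x2 area light at height 10, occluded by
// one sphere. All names and the scene setup are made up for this demo.
struct Vec { float x, y, z; };

static Vec   sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does the segment from p to q hit the sphere (center c, radius r)?
static bool occluded(Vec p, Vec q, Vec c, float r)
{
    Vec d = sub(q, p);
    float len2 = dot(d, d);
    Vec oc = sub(p, c);
    float t = -dot(oc, d) / len2;          // closest-approach parameter
    if (t < 0.0f) t = 0.0f;                // clamp to the segment
    if (t > 1.0f) t = 1.0f;
    Vec closest = {p.x + t * d.x - c.x, p.y + t * d.y - c.y, p.z + t * d.z - c.z};
    return dot(closest, closest) < r * r;
}

// Average visibility over an n x n grid of light samples -> [0, 1].
float shadowCoverage(Vec p, Vec sphereC, float sphereR, int n)
{
    int visible = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            // regular grid on the light area (jitter omitted for brevity)
            Vec s = {-1.0f + 2.0f * (i + 0.5f) / n, 10.0f,
                     -1.0f + 2.0f * (j + 0.5f) / n};
            if (!occluded(p, s, sphereC, sphereR)) ++visible;
        }
    return float(visible) / float(n * n);
}
```

Values strictly between 0 and 1 are what form the penumbra; a single shadow ray toward a point light can only ever return 0 or 1, which is why the border stays hard.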

3. Color bleeding: is there a way without using a path tracer / photon mapper? How should I do that?

Not really. A Whitted ray tracer normally only does direct lighting. You would need to implement a proper light transport algorithm to get the correct result. Everything else would again be rasterizer tricks like rendering to cubemaps for reflections, irradiance caching, voxel lighting, etc.

4. I also have an ambient occlusion map for the material, but I don’t really know where to start when adding it to the lighting. Is it recommended to add it to a ray tracer at all? Any suggestions?

You can simply modulate images rendered without global illumination with your ambient occlusion result. It’s a limited global illumination effect not present in a Whitted renderer, which only does direct lighting.
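As a minimal sketch of that modulation (illustrative names; "aoSample" stands in for a texture lookup of your AO map at the hit point):

```cpp
#include <cassert>

// Minimal sketch of the suggested approach: scale the locally lit
// color by the baked ambient-occlusion value for the hit point.
struct Color { float r, g, b; };

Color applyAO(Color direct, float aoSample)   // aoSample in [0, 1]
{
    return { direct.r * aoSample, direct.g * aoSample, direct.b * aoSample };
}
```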

If you want to get all these things right, use a renderer implementation which supports global illumination.

Hi Detlef,
thank you very much for your detailed answer.

“Soft shadows in a ray tracer come from area lights”
So a light emitter object is required, as shown in the “Denoiser Sample” in the OptiX 5.0.0 SDK using the Path Tracer
(a material which calls “diffuseEmitter()” on ClosestHit). OK, that’s clear now.

“If you want to get all these things right, use a renderer implementation which supports global illumination”
Ok, so I looked at Path Tracer, BDPT, Photon Mapper, MLT and PBGI. There are so many different ones.

Path Tracer: “ideal for outdoor lighting and large direct light sources” (pixar.com)
When light in an interior scene comes through a glass window: […]“It can handle only the small fraction of light coming from the environment”[…] (“Comparison of Advanced Light Transport Methods”, Kaplanyan, SIGGRAPH 2013)

BDPT (“Bidirectional Path Tracing” / “VCM integrator”; VCM = “Vertex Connection and Merging”):
[…]VCM can resolve complicated indirect paths and caustic effects[…] (pixar.com)
VCM: “Light Transport Simulation with Vertex Connection and Merging” (SIGGRAPH Asia 2012)
On using BDPT, “some difficult paths are noisy or missing.” See: “Comparison of Advanced Light Transport Methods” (Kaplanyan), SIGGRAPH 2013.

Would you also recommend implementing a Bidirectional Path Tracer,
such as presented in “Vilem Otte: Efficient Implementation of Bi-directional Path Tracer on GPU”?

MLT: “Metropolis Light Transport”
[…]converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing[…] Iray[…]has an option for MLT[…] (from Wikipedia)
Csaba Kelemen 2002, “A Simple and Robust Mutation Strategy for the Metropolis Light Transport Algorithm”; “Metropolis Light Transport”, Eric Veach and Leonidas J. Guibas

And there’s Point-based Global Illumination (PBGI): “Point Based Approximate Color Bleeding with CUDA” (San Luis Obispo, 2013). (GPU surfel ray tracing)

So the best option seems to be a path tracer for outdoor scenes, and a bidirectional path tracer for complex scenes with caustics, which can then optionally switch MLT on/off, and VCM for high quality, right?

Well, yes, if it were just that simple. :-)
This will get increasingly difficult to implement with the complexity of the light transport algorithms.
What you just listed might take a few years to get right for a decent material system when starting from scratch.
If you want to be able to switch between light transport algorithms, you would need to plan ahead for the most complicated one and that’s going to be a frustrating experience for a beginner in that field.

I would recommend attacking this in order of complexity, starting with a path tracer, which is the easiest.
The optixPathTracer example only shows Lambert materials and a diffuse light source. It doesn’t get much simpler than that.
Even that gets complicated rather quickly when adding support for a more complex material system.
As an example of the complexity involved with different material types, in my current path tracer the integrator itself needs just 40 lines of code for opaque materials, another 40 for transparent materials with absorption, while adding homogeneous volume scattering needed 300 lines of pretty complex code.

My first one from 2009 was like 150 lines of device code.

Generally speaking, and in theory, yes. But the final choice also depends on your implementation and your scene and lighting. A (uni-directional) path tracer is simple, which means it is easier to implement correctly on GPU(s). More complicated light transport may need more render passes and use more GPU resources (memory and compute). That’s why we usually get lower rays/second from bidirectional and MLT, and the latter is even more complex; not many people can implement it correctly, so to make it easier, some variants of MLT have been introduced.

Note that, compared with a path tracer, bidirectional tries to bring light source information into the path generation, and MLT additionally tries to bring prior knowledge of previous paths into generating new paths. So the way to judge whether you want any of these techniques is to think about, and test, whether computing this additional information is cheaper than the wasted paths of a (uni-directional) path tracer.

Thank you both for the answers.
So I know where to start now.

I now successfully implemented a derivative of the path tracer with the Cornell scene from the “denoiser sample”:

  • glass from “advanced sample” + denoiser + environment map + some SDS-adjustments

The light source emission is much too high for the glass, and some artifacts are present when camera, glass, and light are at special angles:
See full_light_emission1.jpg

I solved this by reducing the light emission from the diffuse emitter object.
see ok1.jpg and ok2.jpg

I noticed that the artifacts only occur at camera angles where the rays either refract directly from the
eye through the glass into the light emitter, or are reflected off the glass directly into the light emitter.

Am I right that these artifacts are the reason why we need the Bidirectional Path Tracer for such a scene?

What do you think of the red caustics/reflections (reflected from the red cube side) in RedCaustics1.jpg and RedCaustics2.jpg? I think they are good enough. Or are they in any way visually unrealistic?





This shouldn’t be a problem of the light’s brightness. Your corruptions are exactly the symptom of errors from non-initialized variables in your device code or possible exceptions. Note that the corruptions are screen aligned rectangles matching the OptiX warp shape used for 2D launches on a single GPU.

Please try to debug with OptiX exceptions enabled and an exception program which indicates what exception code it hit. Especially look for stack overflow and invalid rays. I posted code showing how to do that on the forum before.
In case this is not an exception, check the rtPayload structures of the renderers you combined and unify them into one.

No. Specular materials cannot be connected that way because the probability that the incoming ray reflects or transmits into exactly that continuation direction is zero for such specular materials.

You might want to start with a scene with a sphere instead of the smaller box in that Cornell Box scene. Put it slightly above the ground and then try again. There should be a bright caustic underneath and darker colored ones on the opposite sides of the colored walls as shown in many images on the web.

Again it was this initialization issue. Now it works. Great! Thank you very much for your advice!

From the Whitted example I took the glass sphere and added it to the Cornell scene instead of the teapot.
Now the caustics seem to be OK. I noticed that they’re also very bright in a dark scene.
I’ve seen that in other scenes too after a search on a search engine. So that seems to be right?



Yes, these look ok.
Your index of refraction is possibly a little low to look like glass (1.33 is water, around 1.5 is glass, 2.42 is diamond).
For a standard glass material the Fresnel reflection in the normal direction (looking at the center of the sphere) is about 0.04 (4%) where at the silhouette it’s 1.0.

Since the light is brighter than the monitor can display, you could check with a tone-mapper if the reflections are always darker than when looking directly at the diffuse light, simply by turning down the exposure. If the light stays brighter your specular reflections would be ok.
Or as you did in the dark image, reduce the light brightness until it starts to fall below 1.0 on the final image. The reflections should darken accordingly. This is the case in your image. The reflection on the bigger sphere also gets darker nearer to the center which is due to the Fresnel falloff.
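The 4% number above can be checked with Schlick's approximation of the Fresnel reflectance (a standalone sketch, not the sample's code):

```cpp
#include <cmath>
#include <cassert>

// Schlick's approximation of Fresnel reflectance for an air/glass
// interface. At normal incidence (cosTheta = 1) it reduces to
// R0 = ((n1 - n2) / (n1 + n2))^2, which is ~0.04 for n = 1.5 glass;
// at grazing angles (cosTheta -> 0) it approaches 1.0.
float schlickFresnel(float n1, float n2, float cosTheta)
{
    float r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    return r0 + (1.0f - r0) * std::pow(1.0f - cosTheta, 5.0f);
}
```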

Thank you very much for the answer. I changed the IOR to 1.5 and also successfully added the glass teapot again. It works fine now on CUDA 9.0 in Full HD (see Glass1.jpg).
However, the transmittance is (0.1f, 0.63f, 0.3f), so the glass should be green, but it is not.
For exiting rays, Beer’s Law is applied as in the advanced glass sample:

transmittance = optix::expf( logf(unit_transmittance) * t_hit );

But the glass remains white. What could be missing?

In SD_DiffuseTextureInsteadOfColor.jpg I already replaced the right wall with a diffuse texture map from a .tif file. The shadow of the glass becomes a red tone, and the glass still has green transmittance. Not too bad, but is it realistic?



The absorption or scattering formulas are normally expressed in meters. To work correctly, the world coordinates of your scene would either need to match that (model with 1 unit == 1 meter), or you could adjust the formulas by adding a scaling parameter which transforms the traveled distance into the expected meter units.
This means if your glass objects are tiny right now, there won’t be much absorption. It’s an exponential effect that increases quickly with the traveled distance.
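That scaling idea could look like this (a sketch; sceneScale is an assumed extra parameter, not part of the original sample):

```cpp
#include <cmath>
#include <cassert>

// Beer-Lambert transmittance as in the glass sample, with an extra
// sceneScale factor (an assumption, not in the original code) that
// converts the traveled distance t_hit from world units into meters.
float beerLambert(float unitTransmittance, float t_hit, float sceneScale)
{
    return std::exp(std::log(unitTransmittance) * t_hit * sceneScale);
}
```

For example, with a unit transmittance of 0.63 (the green channel above), traveling 1 meter keeps 63% of the light, while a tiny distance keeps almost everything, which is why small or wrongly scaled objects stay nearly white.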

The teapot was already scaled 40x, so if 1 unit is 1 meter, a 40x bigger teapot should not be a tiny object. But the glass still remains white. And in the advanced glass sample it works without scaling (I use the same formula as in glass.cu). I tried small and big factors, but there was no difference.
The big sphere is at location xyz = 140.0f, 128.5f, 240.0f and has a radius of 128.0f.

However, I added this on each glass object:

prd.glossyHit = 1;
if (prd_radiance.depth == 0) prd_radiance.transmittance = forced_transmittance;

The forced_transmittance is the color I want as a low-pass filter for the glass color, so 0.0f blocks while 1.0f fully passes the color.
All other code is nearly the same as in glass.cu in the Advanced Sample.

I think the pictures in the attachment are OK. Could there be something wrong with that?


And I wondered about this,
in line 93 of path_trace_camera.cu in the advanced glass sample:

result += prd.reflectance * prd.radiance;

This line simply adds zero, because prd.radiance is not adjusted in any file of the sample; it is only zero-initialized in line 78.

So per ray in this sample, the result is only updated once (when prd.depth >= max_depth):

result += prd.reflectance * cutoff_color;

All diffuse colors are multiplied into the reflectance along the path,
in line 68 of diffuse.cu:

prd_radiance.reflectance *= Kd;

Is this the way reflection/refraction has to be handled in a path tracer?


Well, that’s because the radiance evaluation (direct lighting) of a specular material is black.
Specular materials will only modulate the throughput along the path by their surface color or volume absorption.

That path tracer implementation would be able to handle more than just specular materials, but doesn’t implement any.
Have a look at the optixVox example which does the opposite by only handling diffuse (Lambert) materials.
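To illustrate the accumulation scheme discussed above (illustrative names, not the sample's actual variables): the per-path throughput ("reflectance") is multiplied by the surface color at every bounce, and radiance picked up along the way is weighted by the current throughput.

```cpp
#include <cassert>

// Toy sketch of the path-tracer accumulation: throughput shrinks at
// each bounce (prd.reflectance *= Kd in diffuse.cu), and whatever
// radiance the path finally picks up is weighted by that throughput
// (result += prd.reflectance * prd.radiance in path_trace_camera.cu).
struct Color { float r, g, b; };

Color shadePath(const Color* bounceAlbedo, int numBounces, Color emitted)
{
    Color throughput = {1.0f, 1.0f, 1.0f};
    for (int i = 0; i < numBounces; ++i) {
        throughput.r *= bounceAlbedo[i].r;
        throughput.g *= bounceAlbedo[i].g;
        throughput.b *= bounceAlbedo[i].b;
    }
    return { throughput.r * emitted.r,
             throughput.g * emitted.g,
             throughput.b * emitted.b };
}
```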

Thank you very much for that clarification.

I thought “prd.attenuation” and “prd.reflectance” would be something different,
although obviously they have the same purpose. Now it works!
All the mesh-based objects now work fine with the scaling factor you suggested,
so I think I finally understood it right.
coloredGlass.jpg:
The right Whitted-based sphere is still transparent, although it has the same material as the teapot,
and the left Whitted-based sphere has the material from the blue mesh-based sphere.
So maybe another transmittance formula is required for these spheres.
I only use mesh-based objects, so this is not so important for me.

ColorGlassPM.jpg: same colored glass material applied to the Photon Mapper on a mesh-based sphere.

reflectionMaterial.jpg: back wall is a reflective material (using schlick function and importance sampling)

The absorption in the glass.cu shader is calculated when hitting a back face (sidenote: which means that implementation doesn’t support nested materials, and there is a comment about that in the sources).

If your spheres are actual parametric sphere primitives, maybe the backface check against the geometry normal is not correct in your code and no absorption happens.
The absorption calculation itself must be the same for all primitives. It’s just based on the distance traveled inside the volume.

finally I got it working!
Detlef, thank you very much!
I checked it again and again, and found that the normals in the intersection program were pointing in the wrong direction. As you said, “the backface check against the geometry normal is not correct”. That was exactly the reason. And I also removed the inner sphere.
Now it works fine (with a much smaller scaling factor (0.001) than on the mesh-based ones (factor 0.1)).

However, as you can see in the attachment, the sphere reflected in the mirror is still transparent, but I think that is because dielectrics are not yet supported.
But the green teapot is correctly reflected in green (as a small one) on the reflected sphere in the mirror and (as a big one) on the mirror itself. And the green sphere reflected in the mirror is also correctly visible on itself (due to refraction).

(The back wall of the Cornell box is still a simple mirror; and on the right green wall is the brown material from my roughness tests)

If the normals pointed in the wrong direction inside the intersection program, that would be scary.
The normals should always be defined on the front face inside the intersection program.

Inside the closest hit program you would flip the geometry and shading normals to the side from which your ray hits the surface, to get consistent results on both sides. At the same time you need to track which side you hit initially, to be able to support the ior (eta) calculation correctly for transmissions and, for example, thin-walled materials.
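A plain C++ stand-in for that convention (a hypothetical helper, not the CUDA faceforward built-in) might look like:

```cpp
#include <cassert>

// Sketch of the closest-hit convention described above: flip the
// normal to the side the ray arrives from, while remembering whether
// the ray entered or left the volume (needed for the eta ratio).
struct Vec { float x, y, z; };

static float dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the normal facing against the ray direction; "entering"
// tells the caller which side of the surface was hit originally.
Vec faceForward(Vec n, Vec rayDir, bool* entering)
{
    *entering = dot(n, rayDir) < 0.0f;   // front face hit from outside
    if (*entering) return n;
    return { -n.x, -n.y, -n.z };
}
```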

I tried again to add the inner sphere in such a way that its normals point inward, so that the same situation should be present as on the teapot mesh, where all normals of the inside triangles also point inward.
Caustics and transmittance work for the original sphere, but the reflection is still not green.
In glass closest hit I tried for the procedural spheres:

normal = faceforward( normal, ray.direction, normal );

Caustics are OK then as well, but it looks as if there is a second sphere inside it (and the reflections are still transparent).
So I removed it again.

And I added 2 more mirrors, because if the normals flipped from one reflection to the next, they should be OK on every 2nd reflection of a reflection. But they aren’t.
The mesh-based object is OK in all of them; my procedural one is not.
The procedural one also becomes more transparent when moving the camera closer to the object,
while the teapot always keeps the same green tone.

I additionally tried a simple sphere geometry with a baked normal map (not shown in the pictures), but obviously using high-poly meshes is even more efficient in the ray tracer than using maps. And since it is also mesh-based, that seems fine to me (hi-poly-sphere.jpg in attachment): caustics OK; no additional “inner sphere” effect; and the expected visual behaviour (the world is upside down in the sphere).

So I will use the mesh-based spheres and discard the procedural ones. I will move on with the MDL/materials/roughness.



If you put a sphere with normals pointing inwards into another sphere, hitting that inner sphere is leaving the outer sphere’s volume. Absorption happens with the outer sphere’s absorption coefficient along the current intersection distance. So far so good.
Now when continuing the path further inside the inner sphere, you will hit that again and since you made the front face point inwards, that would be a front face hit and no absorption calculation would happen in that simple glass path tracer example. It’s like air then.
Assuming you have another transmission on the inner sphere, that would become your current volume with the inner(!) sphere’s absorption coefficient and when hitting the outer sphere which would be a backface again, the absorption would happen with that inner sphere’s absorption coefficient, which is wrong if it’s different.

I said before that the glass material and the simple path tracer around it do not support nested materials, and that is exactly what’s going wrong with your scene setup.
Maybe take a piece of paper and a pencil and draw your paths and their absorption behaviour onto it and you’ll see. Absorption won’t work correctly with nested materials in that simple demo, independently of what the front face condition of the inner sphere is.
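For reference, the per-segment absorption itself is simple; the hard part with nested materials is knowing which medium each segment is inside. A toy sketch (not the sample's code) of attenuating a path segment by segment:

```cpp
#include <cmath>
#include <utility>
#include <vector>
#include <cassert>

// Illustrative sketch of why nesting needs bookkeeping: each path
// segment must be attenuated with the absorption coefficient of the
// medium the ray is currently inside (in a full renderer that medium
// would come from a stack pushed on entry and popped on exit).
// Each pair: (segment length, absorption coefficient sigma_a of the
// medium that segment travels through).
float transmissionAlongPath(const std::vector<std::pair<float, float>>& segments)
{
    float t = 1.0f;
    for (const auto& s : segments)
        t *= std::exp(-s.second * s.first);   // Beer-Lambert per segment
    return t;
}
```

If the backface-only rule picks the wrong coefficient for even one segment, the product above, and hence the glass color, comes out wrong.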

Not sure what’s wrong with your other parametric spheres. They should simply work the same way when not nesting materials. Geometry normal and shading normal on these would be identical and on the outside defining the front face, and absorption works the same way and must match for the same size of spheres.