Problems with DirectionalLight sources

Hi,

I have a problem: the parameters of my directional light are not set correctly.
Usually the variables should be in the following order:
float3 direction;
float3 v1;
float3 v2;
float3 color;

Unfortunately, my program uses the v1 variable as the color and the direction as an absolute position in the scene, while v2 and color do nothing. Is there any way to fix this?

Your post is lacking background about the actual code which has the problem.

If you say these variables are “in the following order”, do you mean they are inside a struct?
What do you mean by “usually”? Are you referring to some OptiX example?

Or do you simply have rtDeclareVariable() declarations with these names, and there is a name clash between variables used in different programs, which possibly have different variable lookup scopes and overwrite your variable contents?
In that case, simply rename your variables to make them unique.

Or do you mean there is no name clash anywhere, but the contents of the v1, v2, and color variables are not what you set them to?

Please be a little more specific and post the relevant code which doesn’t work.
Crucial are the variable declarations, the initialization on the host, and the usage in the device code.

The problem concerns the light source structs in the commonStructs.h header, which is included in many examples. I tried to set up a light source like in the examples, except that I use a DirectionalLight instead of a BasicLight. The following code is involved:

// Setup light source
m_light_buffer = m_context->createBuffer(RT_BUFFER_INPUT);
m_light_buffer->setFormat(RT_FORMAT_USER);
m_light_buffer->setElementSize(sizeof(DirectionalLight));
m_light_buffer->setSize(m_num_lights);

DirectionalLight* lights = reinterpret_cast<DirectionalLight*>( m_light_buffer->map() );
for (size_t l = 0; l < m_num_lights; l++) {
  lights[l] = makeLight( 1.0f / static_cast<float>(m_num_lights) );
}
m_light_buffer->unmap();

m_context[ "lights" ]->set( m_light_buffer );

DirectionalLight SimpleOptixScene::makeLight( const float bright_scale )
{
  DirectionalLight bl;
  bl.direction = make_float3( 1.0f,  0.0f, 0.0f );
  bl.v1        = make_float3( 1.0f,  1.0f, 0.0f );
  bl.v2        = make_float3( 1.0f, -1.0f, 0.0f );
  bl.color     = make_float3( 0.0f,  1.0f, 0.0f );
  // Note: bright_scale is currently unused.
  return bl;
}

commonStructs.h looks like this:

#pragma once

#include <optixu/optixu_vector_types.h>

typedef struct struct_BasicLight
{
#if defined(__cplusplus)
  typedef optix::float3 float3;
#endif
  float3 pos;
  float3 color;
  int    casts_shadow; 
  int    padding;      // make this structure 32 bytes -- powers of two are your friend!
} BasicLight;

struct TriangleLight
{
#if defined(__cplusplus)
  typedef optix::float3 float3;
#endif
  float3 v1, v2, v3;
  float3 normal;
  float3 emission;
};

struct DirectionalLight
{
#if defined(__cplusplus)
  typedef optix::float3 float3;
#endif
  float3 direction;
  // Basis vectors for sampling cone about direction
  float3 v1;
  float3 v2;

  float3 color;
};

I’m getting a yellow color (which is the value of v1) instead of the defined green color. Can you please explain how the directional light works? I would like to make a light cone along the x-axis, but I don’t know what v1 and v2 do exactly.

I don’t see anything wrong with that part of the code. The float3 fields themselves have 4-byte alignment, and the structure size is 48 bytes, which keeps each struct in an array aligned to float4 boundaries.
If this behaves the same with one light as with multiple lights, that would rule out struct misalignment as the cause.
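
If in doubt, a compile-time check on the host verifies the expected size; a minimal sketch, assuming a C++11 compiler:

#include <optixu/optixu_vector_types.h>
#include "commonStructs.h"

// 4 float3 fields = 48 bytes, a multiple of 16, so consecutive
// DirectionalLight structs in the buffer stay float4 aligned.
static_assert(sizeof(DirectionalLight) == 48, "unexpected DirectionalLight size");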

Personally, I wouldn’t define a directional light this way. This is just some example code and is actually not used in any shipping OptiX example as far as I can see.

The vectors v1 and v2 are presumably meant to hold the other two vectors of an orthonormal basis with the direction as the normal. That doesn’t make much sense for a directional light source, which has no anchor point, but it allows one to abuse this structure to implement a positional disc light, and that’s where things get hacky. Don’t do that.
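
For what it’s worth, such a basis could be built from the direction like this; just a sketch, and the choice of the helper up vector is arbitrary:

#include <optixu/optixu_math_namespace.h>

// Sketch: derive two vectors v1, v2 orthogonal to the normalized direction.
optix::float3 dir = optix::normalize(optix::make_float3(1.0f, 0.0f, 0.0f));
optix::float3 up  = (fabsf(dir.y) < 0.999f) ? optix::make_float3(0.0f, 1.0f, 0.0f)
                                            : optix::make_float3(1.0f, 0.0f, 0.0f);
optix::float3 v1  = optix::normalize(optix::cross(dir, up));
optix::float3 v2  = optix::cross(dir, v1); // unit length, since dir and v1 are orthonormal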

That said, if you don’t need parts of specific example code, just roll your own.

Now to the yellow color. You didn’t post the device code which reads the data inside the buffer.
Are you sure the yellow color comes from the light and isn’t just the “bad_color” used in some OptiX examples when an exception occurs?

I normally use magenta there because it doesn’t appear that often in real-world materials.
Additionally, to detect that an exception occurred, you should always run with an exception program which dumps the exception codes while debugging your program.
An example of an exception program which dumps the exception code, and how to enable it, can be found here:
https://devtalk.nvidia.com/default/topic/936762/?comment=4882450
Omit the host code line which limits the printing to a single launch index to get exceptions reported for your whole launch size.
When benchmarking, disable that code again; exception handling costs performance!
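
In essence, such an exception program looks like this; a sketch only, the buffer and file names are my assumptions, not taken from the linked post:

// Device side:
rtDeclareVariable(uint2, launch_index, rtLaunchIndex, );
rtBuffer<float4, 2> output_buffer;

RT_PROGRAM void exception()
{
  const unsigned int code = rtGetExceptionCode();
  rtPrintf("Exception 0x%X at launch index (%u, %u)\n", code, launch_index.x, launch_index.y);
  output_buffer[launch_index] = make_float4(1.0f, 0.0f, 1.0f, 1.0f); // magenta marker
}

// Host side: enable printing and exceptions and set the program.
m_context->setPrintEnabled(true);
m_context->setExceptionEnabled(RT_EXCEPTION_ALL, true);
m_context->setExceptionProgram(0, m_context->createProgramFromPTXFile("exception.ptx", "exception"));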

If the yellow color comes from the exception, that most often happens after changing the device code to use more recursion or more local variables, which can exceed the configured stack size.
If the exception code is a stack overflow, simply increase the value set with rtContextSetStackSize() to the smallest value which no longer triggers it.
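
With the C++ wrapper that is a single call on the host; the value below is only an example to tune:

// Example value; increase until the stack overflow exception no longer occurs.
m_context->setStackSize(2048);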

Then you say you “would like to make a light cone along the x-axis”.
A directional light doesn’t have a cone. For that you would need to implement a positional light source with a direction and a cone angle, i.e. a spot light.

You can find various implementations of spot lights on the web. Here’s one post explaining them for Cg:
http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter05.html
If your renderer is a Whitted-style ray tracer (no global illumination), you only need to port the fragment shading part to your closest hit program.
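
The core of it ports to a closest hit program along these lines; a sketch only, and the SpotLight field names spotDir and cosCutoff are assumptions:

// Sketch of the cone test for a spot light.
float3 L = normalize(light.spotPos - hit_point);      // surface point -> light
float  cosAngle = dot(-L, normalize(light.spotDir));  // angle against the spot axis
if (cosAngle >= light.cosCutoff) // cosCutoff = cosf(cone_half_angle): inside the cone
{
  // ... the usual diffuse/specular lighting, optionally with a falloff towards the cone edge ...
}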

Thanks for the help, I implemented my own light source to get the spot light working.

I also changed my exception color to magenta as you suggested.

Now I have another problem: how can I calculate the attenuation factor to make the lighting of objects distance dependent? The shorter the distance between the light source and an object gets, the stronger I would like the lighting to be. I saw some calculations in the closest hit program of tutorial5.cu, but since I’m loading my scene from .obj and .mtl files, I don’t know how to manipulate the color values. Here is my first try, which wasn’t successful.

RT_PROGRAM void pinhole_camera()
{
  size_t2 screen = output_buffer.size();

  float2 d = make_float2(launch_index) / make_float2(screen) * 2.f - 1.f;
  float3 ray_origin    = eye;
  float3 ray_direction = normalize(d.x*U + d.y*V + W);

  optix::Ray ray = optix::make_Ray(ray_origin, ray_direction, radiance_ray_type, scene_epsilon, RT_DEFAULT_MAX);

  PerRayData_radiance prd;
  prd.importance = 1.f;
  prd.depth = 0;

  rtTrace(top_object, ray, prd);

  for (int i = 0; i < lights.size(); ++i) {
    SpotLight light = lights[i];

    float3 hit_point = ray.origin + t_hit * ray.direction;
    float  D = length(light.spotPos - hit_point);

    prd.result = prd.result / (D * D); // wanted to make physically correct light propagation
  }

  output_buffer[launch_index] = make_color( prd.result );
}

The attenuation of point lights is quadratic with the distance. That’s the (D * D) in your code.
Your light loop is trying the right thing, just in the wrong place.

Currently you’re unconditionally attenuating the final radiance result of all lighting calculations, and that won’t work in general. Some lights might not have lit that surface point at all, because the vector to the light was on the opposite hemisphere of the current shading normal, or, in the case of spot lights, outside the lit cone, or because the light was shadowed by other objects in the scene.
None of these cases should receive any distance attenuation on top, because there was no incoming light to begin with.

Instead, the attenuation needs to be applied to the incoming light at the surface point from each individual light. That means it belongs in the actual lighting routine, which should be a similar loop somewhere inside the closest hit program in your code.
Then prd.result is the final radiance after rtTrace() returns and can be written directly into the output buffer.
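
A minimal sketch of how that could look inside the closest hit program; the SpotLight fields, Kd, and the payload layout are assumptions based on this thread, and the spot cone and shadow tests are left out for brevity:

#include <optix_world.h>
using namespace optix;

struct PerRayData_radiance { float3 result; float importance; int depth; };

rtDeclareVariable(Ray,    ray,   rtCurrentRay, );
rtDeclareVariable(float,  t_hit, rtIntersectionDistance, );
rtDeclareVariable(float3, shading_normal, attribute shading_normal, );
rtDeclareVariable(PerRayData_radiance, prd_radiance, rtPayload, );
rtDeclareVariable(float3, Kd, , ); // diffuse albedo, e.g. from the .mtl material
rtBuffer<SpotLight> lights;        // SpotLight as defined in your own header

RT_PROGRAM void closest_hit_radiance()
{
  const float3 hit_point = ray.origin + t_hit * ray.direction;
  const float3 N = normalize(rtTransformNormal(RT_OBJECT_TO_WORLD, shading_normal));

  float3 result = make_float3(0.0f);

  for (int i = 0; i < lights.size(); ++i) {
    SpotLight const& light = lights[i]; // reference, no copy

    float3 L = light.spotPos - hit_point;
    const float dist = length(L);
    L /= dist;                          // normalize the direction to the light

    const float nDl = dot(N, L);
    if (nDl > 0.0f) {                   // light lies on the correct hemisphere
      // ... spot cone test and shadow ray would go here ...
      result += Kd * light.color * nDl / (dist * dist); // quadratic distance attenuation
    }
  }
  prd_radiance.result = result;
}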

If “t_hit” has the semantic rtIntersectionDistance, that code shouldn’t even compile, because the ray generation program domain has no access to it. Please have a look at Table 5 “Semantic Variables” in the OptiX Programming Guide 3.9.1.

Also, a performance tip: SpotLight light = lights[i]; // This is a needless copy operation!
Write this instead: SpotLight const& light = lights[i];

Thank you again for your help. The problem is that I load the .obj files with OptiXMesh. I looked inside the code and saw that loadBegin_Geometry() creates default closest hit programs for the loaded materials.

How can I use my own closest hit program instead?

I tried to loop over the materials with getMaterialCount() and getMeshMaterialParams(i).name to set my closest hit program, but it doesn’t work because my name variable can’t be used as a Material object. Does the default closest hit program get overwritten when I use setClosestHitProgram()?

The pragmatic method would be to simply change that default closest hit program to what you need.
But you should also be able to exchange the default material used by OptiXMesh.

I always recommend a “Find in Files” search over all *.h;*.c;*.cpp files in your OptiX SDK installation. That often turns up code which does what you’re looking for.

In this case, searching for “OptiXMesh” finds a comment in sample6pp.cpp, in the function MeshViewer::initMaterial(), which shows how to generate a different material with custom closest hit and any hit programs depending on some global shader type.
If you then search for where that m_material is used in the same file, you’ll find MeshViewer::initGeometry(), which shows how to replace the default material with your custom material by calling loader.setOptiXMaterial().
That overwrites the m_optix_materials entries inside OptiXMeshImpl, which are finally set on the GeometryInstances when loader.loadFinish_Materials() is called.
If you search for “m_mesh.getOptiXMaterial( material_number );” you’ll find where that happens.
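
Pieced together, the replacement boils down to something like this; a sketch using the functions named above, with placeholder PTX file and program names:

// Create your own material with custom hit programs (file/program names are placeholders).
optix::Material my_material = m_context->createMaterial();
my_material->setClosestHitProgram(0, m_context->createProgramFromPTXFile("my_shading.ptx", "closest_hit_radiance"));
my_material->setAnyHitProgram(1, m_context->createProgramFromPTXFile("my_shading.ptx", "any_hit_shadow"));

// After loader.loadBegin_Geometry(filename), replace each default material entry.
for (int i = 0; i < static_cast<int>(loader.getMaterialCount()); ++i)
  loader.setOptiXMaterial(i, my_material);

loader.loadFinish_Materials(); // assigns the materials to the GeometryInstances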

Much too complicated for a beginner’s example, in my opinion.
Starting with a full-blown OBJ loader example which does many things behind your back to generate some baseline image won’t be the easiest path to understanding how to integrate OptiX into your own project.
Learning how to create an OptiX scene graph from scratch will definitely help you gain a thorough understanding of the necessary steps.
You could build that sample6pp example as a debug version and single-step through the code in the debugger to see what it actually does for the different shading modes.

My recommendations to start learning OptiX can be found in the links in this post:
https://devtalk.nvidia.com/default/topic/915251/?comment=4801716