rtContextValidate() throws error using textures

I am trying to use single-channel float textures. I create the texture object, then set the parameters RT_WRAP_CLAMP_TO_EDGE, RT_FILTER_LINEAR (min and mag), RT_FILTER_NONE (mipmap) and RT_TEXTURE_INDEX_ARRAY_INDEX. After linking the material object to a group instance connected to the context, I try to validate the context. It throws the following error:

terminate called after throwing an instance of 'optix::Exception'
  what():  Invalid value (Details: Function "RTresult _rtContextValidate(RTcontext)" caught exception: Unsupported combination of texture index, wrap and filter modes:  RT_TEXTURE_INDEX_ARRAY_INDEX, RT_WRAP_REPEAT, RT_FILTER_LINEAR
================================================================================
Backtrace:
	(0) () +0x711547
	(1) () +0x70dd31
	(2) () +0x1b647b
	(3) () +0x49bf5a
	(4) () +0x441faf
	(5) () +0x22d878
	(6) () +0x177917
	(7) rtContextValidate() +0x1e6
	(8) optix::ContextObj::validate() +0x2b
	(9) main() +0x287
	(10) __libc_start_main() +0xf0
	(11) _start() +0x29

================================================================================

The exception message states that the wrap mode is RT_WRAP_REPEAT. I even added a getWrapMode() call before the validate statement, which returns 1 (RT_WRAP_CLAMP_TO_EDGE). Can someone help me make sense of this?

The exact TextureSampler initialization code would have been helpful.

The error says that you cannot use unnormalized texture coordinates with wrap repeat.
Most likely one of the three available wrap modes is not set correctly.

Here is the code for texture sampler initialization:

optix::TextureSampler diffuseTex = context->createTextureSampler();
	diffuseTex->setWrapMode(0, RT_WRAP_CLAMP_TO_EDGE);
	diffuseTex->setWrapMode(1, RT_WRAP_CLAMP_TO_EDGE);
	diffuseTex->setFilteringModes(RT_FILTER_LINEAR, RT_FILTER_LINEAR, RT_FILTER_NONE);
	diffuseTex->setIndexingMode(RT_TEXTURE_INDEX_ARRAY_INDEX);
	diffuseTex->setReadMode(RT_TEXTURE_READ_ELEMENT_TYPE);
	diffuseTex->setMaxAnisotropy(1.f);
	diffuseTex->setBuffer(diffuseTexBuffer);
	optix_mesh.diffuse_texture = diffuseTex;
	optix_mesh.material["diffuse_texture"]->setTextureSampler(optix_mesh.diffuse_texture);

As can be seen, I set the wrap modes to clamp-to-edge.

There are three wrap modes. Please add diffuseTex->setWrapMode(2, RT_WRAP_CLAMP_TO_EDGE);
Assuming your buffer defines a 2D texture, that third wrap mode shouldn't actually be hit, but the validation check in OptiX is not special-cased per buffer type defining the kind of TextureSampler; it simply runs over all three wrap modes.
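For reference, here is the initialization posted above with that one line added (everything else unchanged; this assumes the same OptiX 6.x host API the snippet already uses):

```cpp
optix::TextureSampler diffuseTex = context->createTextureSampler();
diffuseTex->setWrapMode(0, RT_WRAP_CLAMP_TO_EDGE);
diffuseTex->setWrapMode(1, RT_WRAP_CLAMP_TO_EDGE);
// Third wrap mode; previously left at its default RT_WRAP_REPEAT,
// which triggered the validation error.
diffuseTex->setWrapMode(2, RT_WRAP_CLAMP_TO_EDGE);
diffuseTex->setFilteringModes(RT_FILTER_LINEAR, RT_FILTER_LINEAR, RT_FILTER_NONE);
diffuseTex->setIndexingMode(RT_TEXTURE_INDEX_ARRAY_INDEX);
diffuseTex->setReadMode(RT_TEXTURE_READ_ELEMENT_TYPE);
diffuseTex->setMaxAnisotropy(1.f);
diffuseTex->setBuffer(diffuseTexBuffer);
```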

Thanks, that solves my problem. I have another question if I may: what is the best way to communicate an array of floats between the intersection program and the closest hit program? Do I do it using rtDeclareVariable(), or do I create a GPU buffer?

More details, please. How big is that array?

The intersection program is the most frequently called program inside the ray tracer. It’s invoked whenever a ray hits an axis-aligned bounding box during the internal BVH traversal. This program must be as efficient as possible.

The communication between the intersection and anyhit/closesthit programs must happen through user-defined variables with user-defined semantic names behind the “attribute” keyword.
That is because the BVH is visited in traversal order, not in order along the ray.
This means OptiX must be able to store the attributes of the closest hit so far, and that is only possible via these attribute variables.
Matching of these attribute variables between the intersection and anyhit/closesthit program domains happens via the variable type and that semantic name.

[url]http://raytracing-docs.nvidia.com/optix/guide/index.html#programs#4124[/url]
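A minimal sketch of such a matching pair of declarations in OptiX 6 device code (the variable and semantic name shading_normal are made up for illustration); OptiX matches the two by the variable type and the semantic name behind the attribute keyword:

```cpp
// In the intersection program (.cu):
rtDeclareVariable(float3, shading_normal, attribute shading_normal, );

// In the anyhit/closesthit program (.cu) -- same type, same semantic name:
rtDeclareVariable(float3, shading_normal, attribute shading_normal, );
```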

If your array is small (e.g. my typical renderers have four float3 and one or two integer attributes), you can define a structure with the necessary data and make that your attribute variable.

If it is much bigger, you should think about structuring your intersection program attributes differently, so that the resulting array data can be calculated deferred inside the anyhit/closesthit invocations, which happen less often.

The array sizes are 12 floats and 12 ints, but I would still like to know what you mean by structuring my intersection program in a different way. Apart from these arrays I have other attributes (3 float3).

For example, when calculating the attributes of a triangle at the hit position, you need the primitive ID and the barycentric coordinates.
Instead of using these to calculate the geometric normal, shading normal, tangent, texture coordinate, etc. inside the intersection program, you could make the barycentrics (float2), the primitive ID (int) and the geometry ID (int) your only attributes, and use the geometry ID to access a buffer of bindless buffer IDs for the attribute and index buffers holding your vertex data.
That means a float2 plus an int2 would be enough attributes for a triangle intersection program.
Then use those to calculate the actual triangle data at the hit point deferred inside the anyhit/closesthit programs.
That is, move your current code between rtPotentialIntersection() and rtReportIntersection() into the anyhit and closesthit programs.
That should normally be faster because it happens less often.

The same applies to other geometric primitives. If the same closest hit program is to be used for different geometric primitives, that is, fed from different intersection programs (e.g. a sphere has no barycentric triangle coordinates), then you would need to adapt the intersection attributes accordingly to be able to detect which primitive type was actually intersected.
Note that all attributes which appear in an anyhit/closesthit program must be generated by all intersection programs which can call into that.

36 attribute slots sounds like a little too much, especially compared to the 4 in my example above.