What performance should I expect from OptiX Prime?

I have written code (based on the primeSimplePP example) that supplies a set of triangles and vertices, sets up a rectangular shoot grid, and launches rays. While it works, the speedup over an old, purely CPU-based ray tracer that I am using as a benchmark seems fairly modest. This is on a MacBook Pro (Retina, Mid 2012) with a 2.6 GHz Intel Core i7 and an NVIDIA GeForce GT 650M. My model has 900k triangles (with 500k vertices), and here are the numbers for shooting a 1700x1600 grid of rays, seeking the closest hit:

OptiX Prime: 1.09 million rays/sec
CPU: 0.25 million rays/sec

The time measured was how long it took to call and then return from Query::execute(). I was expecting something like a 10x-100x improvement over what my old double-precision CPU ray tracer delivers.
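For reference, the core of my code follows the primeSimplePP pattern. Below is a simplified sketch (the mesh and ray data are placeholders, error handling is omitted), with the timing wrapped around just the execute() call:

// Simplified sketch of my setup, based on primeSimplePP; the .obj mesh and
// the 1700x1600 ray grid are placeholders here.
#include <optix_prime/optix_primepp.h>
#include <chrono>
#include <cstdio>
#include <vector>

struct Ray { float ox, oy, oz, tmin, dx, dy, dz, tmax; };  // RTP_BUFFER_FORMAT_RAY_ORIGIN_TMIN_DIRECTION_TMAX
struct Hit { float t; int triId; float u, v; };            // RTP_BUFFER_FORMAT_HIT_T_TRIID_U_V

int main()
{
  using namespace optix::prime;

  std::vector<int>   indices;   // 3 indices per triangle, filled from the .obj file
  std::vector<float> vertices;  // 3 floats per vertex
  std::vector<Ray>   rays;      // one ray per cell of the shoot grid
  std::vector<Hit>   hits( rays.size() );

  Context context = Context::create( RTP_CONTEXT_TYPE_CUDA );

  Model model = context->createModel();
  model->setTriangles( indices.size() / 3,  RTP_BUFFER_TYPE_HOST, indices.data(),
                       vertices.size() / 3, RTP_BUFFER_TYPE_HOST, vertices.data() );
  model->update( 0 );  // builds the acceleration structure

  Query query = model->createQuery( RTP_QUERY_TYPE_CLOSEST );
  query->setRays( rays.size(), RTP_BUFFER_FORMAT_RAY_ORIGIN_TMIN_DIRECTION_TMAX,
                  RTP_BUFFER_TYPE_HOST, rays.data() );
  query->setHits( hits.size(), RTP_BUFFER_FORMAT_HIT_T_TRIID_U_V,
                  RTP_BUFFER_TYPE_HOST, hits.data() );

  // Timed region: just the execute() call, as described above.
  const auto t0 = std::chrono::high_resolution_clock::now();
  query->execute( 0 );
  const auto t1 = std::chrono::high_resolution_clock::now();

  const double seconds = std::chrono::duration<double>( t1 - t0 ).count();
  std::printf( "%g Mrays/sec\n", rays.size() / seconds * 1e-6 );
  return 0;
}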

Is there anything else I should be doing to improve performance? Break the shoot grid into smaller tiles, for instance (something like the sketch below)? I have been unable to find anything in the OptiX Prime C++ documentation or interface that says whether, or which, acceleration structure it uses.
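By tiling I mean something along these lines: an untested sketch that reuses the model, rays, and hits from the snippet above and simply runs the query in fixed-size chunks:

// Hypothetical tiling of the shoot grid: one query per chunk of rays
// instead of a single big launch (untested; chunk size chosen arbitrarily).
// std::min needs <algorithm>.
const size_t chunkSize = 256 * 1024;
for( size_t offset = 0; offset < rays.size(); offset += chunkSize )
{
  const size_t n = std::min( chunkSize, rays.size() - offset );

  Query q = model->createQuery( RTP_QUERY_TYPE_CLOSEST );
  q->setRays( n, RTP_BUFFER_FORMAT_RAY_ORIGIN_TMIN_DIRECTION_TMAX,
              RTP_BUFFER_TYPE_HOST, &rays[offset] );
  q->setHits( n, RTP_BUFFER_FORMAT_HIT_T_TRIID_U_V,
              RTP_BUFFER_TYPE_HOST, &hits[offset] );
  q->execute( 0 );
}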

Thanks.

Here is what running deviceQuery gives on my machine:

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GT 650M"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 1024 MBytes (1073414144 bytes)
( 2) Multiprocessors, (192) CUDA Cores/MP: 384 CUDA Cores
GPU Max Clock rate: 900 MHz (0.90 GHz)
Memory Clock rate: 2508 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 262144 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GT 650M
Result = PASS

What was the command line you started primeSimplePP with?

It defaults to CPU intersection tests; you would need the --context cuda --buffer cuda command-line options to let it run on the GPU.
(Find all its command line options inside the primeSimplePP.cpp main() function.)
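In your own application the equivalent is simply which context type you pass to Context::create(). A rough sketch (createPrimeContext is just a made-up helper name here, not part of the OptiX Prime API):

// Sketch: map a "--context cpu|cuda" style option to the Prime context type.
#include <optix_prime/optix_primepp.h>
#include <string>

optix::prime::Context createPrimeContext( const std::string& contextType )
{
  if( contextType == "cuda" )
    return optix::prime::Context::create( RTP_CONTEXT_TYPE_CUDA );  // GPU traversal
  return optix::prime::Context::create( RTP_CONTEXT_TYPE_CPU );     // CPU traversal
}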

According to your other post you couldn’t get that to work in your own application.
Does it work with the pre-compiled primeSimplePP example and the above options?

Thanks for the fast reply. I found two issues on my end that were causing problems. First, my .obj facet file was corrupted, so I fixed that. Second, looking at primeSimplePP.cpp:126, I noticed that its setTriangles() call is hard-coded to use RTP_BUFFER_TYPE_HOST. In my code I had been setting that to RTP_BUFFER_TYPE_CUDA_LINEAR, which I think led to the crash I reported in a different post.
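In case it helps anyone else, my understanding (possibly incomplete) is that the buffer type has to describe where the pointer actually lives, i.e. RTP_BUFFER_TYPE_CUDA_LINEAR is only valid for device pointers. Roughly:

// Host-side buffers, as primeSimplePP does it (indices/vertices are the
// same host vectors as in my earlier snippet):
model->setTriangles( indices.size() / 3,  RTP_BUFFER_TYPE_HOST, indices.data(),
                     vertices.size() / 3, RTP_BUFFER_TYPE_HOST, vertices.data() );

// To use RTP_BUFFER_TYPE_CUDA_LINEAR, the data would first have to live in
// device memory, e.g. (needs <cuda_runtime.h>; error checking omitted):
int*   d_indices  = 0;
float* d_vertices = 0;
cudaMalloc( &d_indices,  indices.size()  * sizeof( int ) );
cudaMalloc( &d_vertices, vertices.size() * sizeof( float ) );
cudaMemcpy( d_indices,  indices.data(),  indices.size()  * sizeof( int ),   cudaMemcpyHostToDevice );
cudaMemcpy( d_vertices, vertices.data(), vertices.size() * sizeof( float ), cudaMemcpyHostToDevice );

model->setTriangles( indices.size() / 3,  RTP_BUFFER_TYPE_CUDA_LINEAR, d_indices,
                     vertices.size() / 3, RTP_BUFFER_TYPE_CUDA_LINEAR, d_vertices );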

My ray-tracing throughput is now significantly higher, and I can see the performance difference between the CPU and CUDA contexts. Thanks.

Fwiw, the "--context cuda" argument will be the default for primeSimple in the next release of OptiX. Our samples shouldn't run on the CPU by default :)

H