Single geometry LBS animation

Hello and happy new year,

I am building an application which fuses 360° media with 3D content.
The 3D content is an animated mesh (linear blend skinning animation) consisting of a single mesh, i.e. a single geometry: it is not divided into separate geometries according to the skeleton bones, so I cannot animate the mesh through multiple transform nodes.
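(For reference, standard LBS computes each deformed vertex as v' = sum_j w_j * M_j * v, where the w_j are the per-vertex skinning weights and the M_j are the per-bone transforms of the current frame; since every vertex can be influenced by several bones at once, the mesh cannot simply be cut along bone boundaries.)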

The mesh is a captured performance which is produced by a 3D reconstruction pipeline described in:
“Alexiadis, D.S., Zioulis, N., Zarpalas, D. and Daras, P., 2018. Fast deformable model-based human performance capture and FVV using consumer-grade RGB-D sensors. Pattern Recognition, 79, pp.260-278.”

available here:
[url]http://vcl.iti.gr/vclNew/wp-content/uploads/2018/03/RGB-D_09-03-2018.pdf[/url]

As far as I understand, it would be somewhat difficult to divide the mesh into more geometries according to the skinning weights of each vertex (without “breaking” triangles); I tried this, without good results.

So I’d like to ask whether it is possible to implement the animation in device code, in the intersection program, i.e. transforming the vertices (and consequently the normals) when reading them from the vertex buffer.
Would such a scenario be possible, and what would the consequences be for the acceleration structure? Would I need to rebuild it on every animation frame?

A sample of the application I am working on can be seen here:
[url]https://drive.google.com/open?id=1khZLGTq90WJsy6o4_T4yHr7EC54Dodu-[/url]

Thank you in advance.

Hi Perukas,

You can update your entire mesh every frame; that shouldn’t be a problem unless your meshes are extremely large. It is normally possible to break your skinned mesh into multiple geometries, but it is tricky and might not be worth the effort. I would suggest updating the entire mesh first, as sketched below, and waiting until you observe a performance problem before trying to optimize it into smaller pieces.
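For illustration, here’s a minimal sketch of that per-frame full-mesh update with the OptiX 5.x C++ wrapper; the function and buffer names are placeholders, not anything from your scene:

[code]
// Minimal sketch (OptiX 5.x C++ wrapper). "vertexBuffer" and
// "skinnedPositions" are placeholder names.
#include <optixu/optixpp_namespace.h>
#include <cstring>
#include <vector>

// Overwrite the whole vertex buffer with this frame's CPU-skinned
// positions (3 floats per vertex).
void uploadSkinnedVertices(optix::Buffer vertexBuffer,
                           const std::vector<float>& skinnedPositions)
{
    void* dst = vertexBuffer->map();
    std::memcpy(dst, skinnedPositions.data(),
                skinnedPositions.size() * sizeof(float));
    vertexBuffer->unmap();
}
[/code]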

I’d advise against evaluating your animation as part of your intersection program. It’s technically possible, and not difficult, but it would be better to keep animation and intersection separate, for a couple of reasons: I don’t see much advantage to having them together, and you might even need to evaluate the rig outside of ray tracing. It complicates the intersection program, and you might still need a separate static-mesh intersection program too. Also, in the upcoming OptiX 6.0 release we will have a native triangle mesh API. If you keep your rig evaluation separate from your intersection code, it will be much easier to take advantage of the new mesh API when it becomes available.

You could do the rig evaluation in CUDA and share the mesh vertex & index buffers between CUDA and OptiX (http://raytracing-docs.nvidia.com/optix/guide/index.html#cuda#interoperability-with-cuda); a rough sketch of that route follows below. Or you may be able to evaluate the rig on the CPU, or, if you don’t have too much animation, save separate baked meshes in memory or into separate files.
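As a rough sketch of the CUDA route (assuming 4 bone influences per vertex and 3x4 row-major bone matrices; every name below is hypothetical, not from your project):

[code]
// Hypothetical LBS kernel: 4 bone influences per vertex, bone matrices
// stored as 3x4 row-major floats (12 floats per bone).
#include <cuda_runtime.h>
#include <optixu/optixpp_namespace.h>

__global__ void skinVertices(const float3* restPositions,
                             const int4*   boneIndices,  // 4 bones per vertex
                             const float4* boneWeights,  // matching weights
                             const float*  boneMatrices, // 12 floats per bone
                             float3*       outPositions,
                             int           numVertices)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= numVertices) return;

    const float3 p = restPositions[v];
    const int    b[4] = { boneIndices[v].x, boneIndices[v].y,
                          boneIndices[v].z, boneIndices[v].w };
    const float  w[4] = { boneWeights[v].x, boneWeights[v].y,
                          boneWeights[v].z, boneWeights[v].w };

    float3 out = make_float3(0.0f, 0.0f, 0.0f);
    for (int i = 0; i < 4; ++i)  // v' = sum_j w_j * M_j * v
    {
        const float* M = boneMatrices + 12 * b[i];
        out.x += w[i] * (M[0]*p.x + M[1]*p.y + M[2] *p.z + M[3]);
        out.y += w[i] * (M[4]*p.x + M[5]*p.y + M[6] *p.z + M[7]);
        out.z += w[i] * (M[8]*p.x + M[9]*p.y + M[10]*p.z + M[11]);
    }
    outPositions[v] = out;
}

// Host side: fetch the CUDA device pointer of the OptiX vertex buffer
// (device ordinal 0 here) and skin directly into it, no extra copy.
void skinIntoOptixBuffer(optix::Buffer vertexBuffer,
                         const float3* d_restPositions,
                         const int4*   d_boneIndices,
                         const float4* d_boneWeights,
                         const float*  d_boneMatrices,
                         int           numVertices)
{
    float3* d_verts = static_cast<float3*>(vertexBuffer->getDevicePointer(0));
    const int threads = 256;
    const int blocks  = (numVertices + threads - 1) / threads;
    skinVertices<<<blocks, threads>>>(d_restPositions, d_boneIndices,
                                      d_boneWeights, d_boneMatrices,
                                      d_verts, numVertices);
    cudaDeviceSynchronize();
}
[/code]

Normals would need the same treatment (transformed with the inverse-transpose of the blended matrix, or simply recomputed from the deformed triangles).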

No matter which path you choose, you will need to mark your mesh’s acceleration structure dirty, and also mark the mesh’s parent acceleration structure dirty.
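Concretely, with the C++ wrapper that is just (handle names are placeholders):

[code]
// After writing the new vertex positions for a frame:
geometryGroup->getAcceleration()->markDirty(); // the mesh's own accel
topGroup->getAcceleration()->markDirty();      // the parent group's accel
// Both are rebuilt automatically on the next context launch.
[/code]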


David.

Hi David,

Thank you for your prompt answer.
Even though I have read through the programming guide, I had always skipped over the CUDA interoperability section.
Following your suggestion, I will first try evaluating the mesh animation on the host side, updating the entire mesh, and see how it performs; then I will try implementing it in CUDA.
Thank you again, I will update the topic if any more issues occur.

Antonis

Hi,

So I implemented both the CPU and GPU versions (deforming the mesh vertices in CUDA using OptiX-CUDA interop, which was much easier than I thought it would be), and everything runs smoothly.

Thanks again for your recommendations David.