Using NVDEC to transfer non-image data

Hey Guys,

I want to transfer very large 3D objects to GPU memory, and I thought I might be able to speed this up by encoding the objects, transferring the packets to the device, decoding them there, and copying them into an OpenGL buffer.

I have looked at the NvDecodeGL sample, and two questions remain:

  1. Is it possible to encode the objects (maybe losslessly?) and reconstruct the original content? I know the compression ratio will be rather poor, but it might still be faster than copying the objects uncompressed.

  2. Can I seek to a specific frame, or jump between frames? Could I even copy the encoded packets to the device, leave them in device memory, and decode them only when needed?

I don’t really understand how the packets are transferred to the device.
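
From the NvDecodeGL sample, my best guess is that the compressed packets stay in host memory and are simply handed to the parser, and the decoder then writes the decoded frames into device memory. Roughly like this (simplified from the sample; parser, pEncodedData, encodedSize and pts are just placeholders from my side):

    // Hand one encoded packet (host memory) to the parser.
    // 'parser' is the CUvideoparser created with cuvidCreateVideoParser(),
    // as in the NvDecodeGL sample; the actual decode happens in the
    // pfnDecodePicture callback and its output lands in device memory.
    CUVIDSOURCEDATAPACKET packet = {};
    packet.payload      = pEncodedData;   // pointer to the bitstream packet on the host
    packet.payload_size = encodedSize;
    packet.flags        = CUVID_PKT_TIMESTAMP;
    packet.timestamp    = pts;
    cuvidParseVideoData(parser, &packet);

Is that roughly right, or am I missing a step?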

Any help would be appreciated!

It is not clear what exactly you mean by “3D objects”. Is this video content? NVENC can encode anything as long as you provide valid buffers, but it only makes sense to encode content with NVENC when it is actually video content. If you are talking about binary 3D object data, NVENC is not the right choice.

To answer your other question: NVENC on Pascal can encode standard H.264 and HEVC losslessly, and NVDEC can decode it too.
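
For reference, lossless is just a preset choice when you initialize the encoder; roughly like this (fragment only, assuming an encoder session has already been opened via nvEncOpenEncodeSessionEx() as in the NVENC SDK samples; check the nvEncodeAPI.h of your SDK version for the exact GUID names and any extra lossless config flags):

    // Fragment: initialize an already-opened NVENC session for lossless HEVC.
    // 'nvenc' is the NV_ENCODE_API_FUNCTION_LIST and 'hEncoder' the session
    // handle, both set up as in the NVENC SDK samples.
    NV_ENC_INITIALIZE_PARAMS initParams = {};
    initParams.version      = NV_ENC_INITIALIZE_PARAMS_VER;
    initParams.encodeGUID   = NV_ENC_CODEC_HEVC_GUID;
    initParams.presetGUID   = NV_ENC_PRESET_LOSSLESS_DEFAULT_GUID;
    initParams.encodeWidth  = width;
    initParams.encodeHeight = height;
    initParams.frameRateNum = 30;
    initParams.frameRateDen = 1;
    NVENCSTATUS status = nvenc.nvEncInitializeEncoder(hEncoder, &initParams);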

No, I have an OpenGL application and the 3D objects are quite big. I’m just looking into ways to make the transfer faster, and I thought that if I treat them like image data, encode them, transfer the (hopefully) smaller packets, decode them on the GPU, and copy them into an OpenGL buffer, the whole thing might be a bit faster.

Also, can I store the encoded packets in GPU memory and decode them only when needed? Maybe I could fit more objects into memory that way and, instead of transferring new ones, just decode them?

Why would NVENC not be the right choice? Do you think the files won’t get smaller? I’m just curious… And if you have any other suggestions, I am happy to try things out :)

[This is just a student project for me; I don’t know which other methods my supervisor has already tried out.]

To use NVENC you would have to transfer the uncompressed objects to the GPU to feed the encoder. Since that transfer is exactly what you are trying to avoid, the whole idea seems pointless.

You could implement your own CPU-based compression and then run a decompression kernel on the GPU, but these kinds of schemes rarely lead to any significant performance improvement.
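
For example, a trivial run-length scheme where the CPU stores (value, length) pairs plus precomputed output offsets and a kernel expands them on the device could look like this (just a sketch to illustrate the idea, not tuned in any way):

    // Sketch: expand CPU-side run-length-encoded data on the GPU.
    //   runValues[i]  = byte value of run i
    //   runLengths[i] = length of run i
    //   runOffsets[i] = exclusive prefix sum of runLengths (computed on the CPU)
    __global__ void rleExpand(const unsigned char* runValues,
                              const int* runLengths,
                              const int* runOffsets,
                              int numRuns,
                              unsigned char* out)
    {
        int run = blockIdx.x * blockDim.x + threadIdx.x;
        if (run >= numRuns) return;

        int start = runOffsets[run];
        int len   = runLengths[run];
        unsigned char v = runValues[run];
        for (int i = 0; i < len; ++i)   // one thread per run; only reasonable for short runs
            out[start + i] = v;
    }

    // Launch, with the run arrays already copied to the device:
    // int threads = 256;
    // int blocks  = (numRuns + threads - 1) / threads;
    // rleExpand<<<blocks, threads>>>(dValues, dLengths, dOffsets, numRuns, dOut);

Whether that ever beats a plain cudaMemcpy of the raw data depends entirely on how compressible the data is, which is exactly why these schemes rarely pay off.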

Oh no, that’s not what I meant, sorry if I wasn’t clear enough!

I want to compress the objects ahead of time and store them in a “video” file that gets loaded at runtime. I don’t have to create or modify the objects at runtime, just load them. I will try to compress some non-image data and see how much smaller it gets.
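
My test plan is simply to reinterpret the raw object bytes as 8-bit luma frames, pad the last frame, and feed those frames to the encoder offline; roughly like this (the frame size and the actual encoder call are placeholders):

    // Sketch: pack arbitrary binary data (e.g. vertex buffers) into
    // fixed-size 8-bit "grayscale" frames so a lossless encoder can
    // treat it as image data; chroma planes would just be constant.
    #include <algorithm>
    #include <cstring>
    #include <vector>

    std::vector<std::vector<unsigned char>>
    packIntoFrames(const unsigned char* data, size_t size, int width, int height)
    {
        const size_t frameBytes = static_cast<size_t>(width) * height;  // luma plane only
        std::vector<std::vector<unsigned char>> frames;

        for (size_t offset = 0; offset < size; offset += frameBytes) {
            std::vector<unsigned char> frame(frameBytes, 0);             // zero padding
            const size_t n = std::min(frameBytes, size - offset);
            std::memcpy(frame.data(), data + offset, n);
            frames.push_back(std::move(frame));                          // one encoder input frame
        }
        return frames;
    }

I just have to remember the original size somewhere so I can strip the padding again after decoding.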

The transfer from NVDEC to OpenGL is no problem. The only questions I had were whether it’s even possible to encode arbitrary data and decode it back to reconstruct the original, and whether there is a better way to get to a specific frame than starting from the beginning and skipping all the unwanted frames.
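
If there is no built-in way to seek, I could probably store my own index of (offset, size) per packet when creating the file, encode everything as intra frames so each packet decodes on its own, and then read and feed only the packet I need; something like this (very rough, PacketIndexEntry is just my own bookkeeping):

    // Sketch: random access via a self-made packet index, assuming
    // all frames are intra-coded so each packet decodes independently.
    #include <cstdio>
    #include <vector>

    struct PacketIndexEntry { long offset; size_t size; };

    std::vector<unsigned char> readPacket(std::FILE* f,
                                          const std::vector<PacketIndexEntry>& index,
                                          size_t frame)
    {
        std::vector<unsigned char> packet(index[frame].size);
        std::fseek(f, index[frame].offset, SEEK_SET);
        const size_t got = std::fread(packet.data(), 1, packet.size(), f);
        packet.resize(got);               // in case of a short read
        return packet;                    // this is what would go to the decoder/parser
    }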

Maybe I could split my objects into several video files and have multiple video sources? I will do some tests and see how it goes…

I thought maybe someone has done something similar before…