Decode video file while it is being written

Hi!

We have video files being written by network cameras. We would like to start decoding the files while they are being written (using Nvdecode).

We are able to start decoding the videos, but after a while it seems like Nvdecode decides that there are no more frames to decode, even though plenty more are arriving as the files are written continuously. We have limited our decoding speed so that we work slower than real time, so we should never catch up with the end of the files. Is it possible that Nvdecode determines the number of available frames once, at the start of decoding, and therefore fails to see frames appended later?

If anyone here has any ideas what is going on, we would very much appreciate the help. Is it possible that we are using inappropriate settings/parameters? (The ones we use are pretty much based on the Decode-sample project coming with the SDK).

Thanks in advance

“it seems like Nvdecode decides that there are no more frames to decode”

That’s a very vague trouble report! Specifically, what happens to the decoding process? Do you stop getting callbacks or what?

As long as you continue injecting valid data you should continue to get appropriate callbacks (i.e., NVDec does not count frames or anything like you mentioned). First thing to check is that the data is not being corrupted.

I had a similar issue when I started from the examples. Their use of a video source was the culprit. I think it does a seek to determine the length of the input file, and assumes the stream is done when it has read that many bytes. For a continuous stream, you have to use the parser directly: read your file/stream yourself and pass the bytes to cuvidParseVideoData by filling in a CUVIDSOURCEDATAPACKET that points at your data in memory.
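A minimal sketch of that reading side, for a file that is still growing. The key detail is clearing the stream's eof/fail state after a short read so the next attempt can pick up bytes the camera has appended in the meantime. The hand-off is a plain callback here, standing in for wrapping each chunk in a CUVIDSOURCEDATAPACKET (fields: payload, payload_size, flags) and calling cuvidParseVideoData(); the function name and structure are my own illustration, not SDK code:

```cpp
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <functional>
#include <vector>

// Read whatever bytes are currently available from a file that may still
// be growing, handing each chunk to `consume`. After a short read we
// clear eofbit/failbit so the caller can sleep and call us again later,
// at which point newly appended bytes will be picked up.
inline std::size_t drain_available(
    std::ifstream& in,
    const std::function<void(const uint8_t*, std::size_t)>& consume)
{
    std::vector<uint8_t> buf(64 * 1024);
    std::size_t total = 0;
    for (;;) {
        in.read(reinterpret_cast<char*>(buf.data()), buf.size());
        std::streamsize got = in.gcount();
        if (got > 0) {
            consume(buf.data(), static_cast<std::size_t>(got));
            total += static_cast<std::size_t>(got);
        }
        if (got < static_cast<std::streamsize>(buf.size())) {
            in.clear();  // drop eofbit/failbit so later reads can succeed
            break;       // caller polls and calls drain_available() again
        }
    }
    return total;
}
```

The caller just loops: drain, feed the parser, sleep briefly, drain again. Because the decoder only ever sees the bytes you inject, there is no up-front length check to run into.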

But how do I use the parser directly if I don’t use cuvidCreateVideoSource() to start a new thread? The NVDEC_VideoDecoder_API_ProgGuide says:
“The application thread (referred to as the primary thread) calls cuvidCreateVideoSource(), which spawns a de-multiplexer thread (referred to as the secondary thread).”
So cuvidCreateVideoSource() starts a new thread, and that thread calls cuvidParseVideoData(). The problem is: without the de-multiplexer thread, where do I get a thread to call cuvidParseVideoData() from?

You make your own thread if you need one (via pthreads, std::thread, or whatever). I don’t think you have to use a separate thread at all, if you don’t mind your main thread blocking; it’s up to you. I’ve used multiple parsers to handle multiple streams, each on its own thread that I create and manage myself. I don’t use cuvidCreateVideoSource(). Instead, I read the bytes from the source and call cuvidParseVideoData() directly. My stream happens to have frames delineated by my own headers, but if yours doesn’t, you might have to call it with chunks delineated by NAL unit boundaries. Those begin with a simple start-code byte sequence you can search for as you scan the input stream.
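To make the "make your own thread" part concrete, here is a hedged sketch of a feeder running on its own std::thread. The `parse` callback stands in for wrapping the bytes in a CUVIDSOURCEDATAPACKET and calling cuvidParseVideoData(); the class name and interface are mine, not the SDK's:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <functional>
#include <istream>
#include <thread>
#include <vector>

// Runs a feeder loop on its own std::thread: reads chunks from a stream
// and hands each one to `parse`. In a real decoder, `parse` would fill a
// CUVIDSOURCEDATAPACKET and call cuvidParseVideoData().
class FeederThread {
public:
    FeederThread(std::istream& in,
                 std::function<void(const uint8_t*, std::size_t)> parse)
        : worker_([this, &in, parse = std::move(parse)] {
              std::vector<uint8_t> buf(64 * 1024);
              while (!stop_.load()) {
                  in.read(reinterpret_cast<char*>(buf.data()), buf.size());
                  std::streamsize got = in.gcount();
                  if (got > 0) parse(buf.data(), static_cast<std::size_t>(got));
                  if (got == 0) break;  // exhausted; a live stream would wait and retry
              }
          }) {}

    ~FeederThread() { stop(); }

    // Wait for the feeder to drain the stream on its own.
    void join() {
        if (worker_.joinable()) worker_.join();
    }

    // Ask the feeder to quit early, then wait for it.
    void stop() {
        stop_.store(true);
        join();
    }

private:
    std::atomic<bool> stop_{false};  // declared before worker_ so it is ready first
    std::thread worker_;
};
```

For a file that is still being written, the `break` on a zero-length read would instead become a short sleep followed by `in.clear()` and another read attempt.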

[url]https://yumichan.net/video-processing/video-compression/introduction-to-h264-nal-unit/[/url]
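If you do end up chunking on NAL unit boundaries, the Annex B start codes described at that link (00 00 01, sometimes preceded by an extra zero byte) are straightforward to scan for. A minimal sketch of such a scanner (the function name is my own, not from any SDK):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Return the byte offsets at which Annex B NAL units begin, i.e. the
// positions of each 00 00 01 start code. When the three-byte code is
// preceded by an extra zero (the four-byte 00 00 00 01 form), the offset
// of that leading zero is reported instead, which is what matters when
// splitting the stream into per-NAL chunks.
inline std::vector<std::size_t> find_start_codes(const uint8_t* data,
                                                 std::size_t size)
{
    std::vector<std::size_t> offsets;
    if (size < 3) return offsets;
    for (std::size_t i = 0; i + 2 < size; ++i) {
        if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1) {
            // Fold a leading zero (four-byte start code) into this hit.
            std::size_t start = (i > 0 && data[i - 1] == 0) ? i - 1 : i;
            offsets.push_back(start);
            i += 2;  // a start code cannot overlap itself
        }
    }
    return offsets;
}
```

Each chunk you pass to the parser then runs from one reported offset to the next (or to the end of the buffered data, once you are sure no more bytes of that NAL unit are coming).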