TensorRT 3.0 run MNIST sample error: Assertion `engine' failed

Hi all,
After configuring CUDA and TensorRT, I compiled the sample successfully. However, when I run it, it fails with:

ERROR: cudnnEngine.cpp (55) - Cuda Error in initializeCommonContext: 4
sample_mnist: sampleMNIST.cpp:63: void caffeToGIEModel(const string&, const string&, const std::vector<std::basic_string<char> >&, unsigned int, nvinfer1::IHostMemory*&): Assertion `engine' failed.

The relevant code is:

void caffeToGIEModel(const std::string& deployFile,				// name for caffe prototxt
					 const std::string& modelFile,				// name for model 
					 const std::vector<std::string>& outputs,   // network outputs
					 unsigned int maxBatchSize,					// batch size - NB must be at least as large as the batch we want to run with
					 IHostMemory *&gieModelStream)    // output buffer for the GIE model
{
	// create the builder
	IBuilder* builder = createInferBuilder(gLogger);

	// parse the caffe model to populate the network, then set the outputs
	INetworkDefinition* network = builder->createNetwork();
	ICaffeParser* parser = createCaffeParser();
	const IBlobNameToTensor* blobNameToTensor = parser->parse(locateFile(deployFile, directories).c_str(),
															  locateFile(modelFile, directories).c_str(),
															  *network,
															  DataType::kFLOAT);

	// specify which tensors are outputs
	for (auto& s : outputs)
		network->markOutput(*blobNameToTensor->find(s.c_str()));

	// Build the engine
	builder->setMaxBatchSize(maxBatchSize);
	builder->setMaxWorkspaceSize(1 << 20);

	ICudaEngine* engine = builder->buildCudaEngine(*network);
	assert(engine);

	// we don't need the network any more, and we can destroy the parser
	network->destroy();
	parser->destroy();

	// serialize the engine, then close everything down
	gieModelStream = engine->serialize();
	engine->destroy();
	builder->destroy();
	shutdownProtobufLibrary();
}

Since the error mentions cudnnEngine, could there be a problem with my cuDNN install?
I installed cuDNN by copying the files into /usr/local/cuda/:

tar -xzvf cudnn-8.0-linux-x64-v7.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
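
For what it's worth, a quick sanity check that the copied files actually ended up readable by a non-root user (paths assume the default /usr/local/cuda layout used in the copy commands above; the helper name `check_readable` is just for this sketch):

```shell
# Hypothetical helper: report whether each given file exists and is readable
# by the current (non-root) user.
check_readable() {
    for f in "$@"; do
        if [ ! -e "$f" ]; then
            echo "missing: $f"
        elif [ ! -r "$f" ]; then
            echo "not readable: $f"
        else
            echo "ok: $f"
        fi
    done
}

# Paths from the copy commands above (default CUDA install layout):
check_readable /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn.so
```

If any file shows up as "not readable", re-running the `chmod a+r` step above should fix it.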

I got the identical error, even with the Python wrapper.

Executing the samples / Python scripts with sudo solved the problem for me!

I don’t have time at the moment to track down the reason…
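
If sudo makes the error go away, one common cause (this is an educated guess, not confirmed for this exact setup) is that the NVIDIA device nodes under /dev are either missing or not accessible to the unprivileged user — on some systems they are only created once a root process has initialized the driver. A quick hedged check (the helper name `check_nvidia_devs` is just for this sketch):

```shell
# Hypothetical helper: check whether the current user can read and write the
# NVIDIA device nodes. If they are missing, running any CUDA app once as
# root (or enabling the persistence daemon) typically creates them.
check_nvidia_devs() {
    found=0
    for dev in /dev/nvidia*; do
        [ -e "$dev" ] || continue
        found=1
        if [ -r "$dev" ] && [ -w "$dev" ]; then
            echo "ok: $dev"
        else
            echo "no access: $dev"
        fi
    done
    [ "$found" -eq 1 ] || echo "no NVIDIA device nodes found"
}

check_nvidia_devs
```

If this reports "no access" for your user, fixing the permissions (or udev rules) should let the samples run without sudo.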