TensorRT C++ API layers: how to add custom layers to an imported model?

Hello,
I went over the Developer Guide :: NVIDIA Deep Learning TensorRT Documentation samples, and I have two questions:

How do I extend an imported network with custom layers built through the API? In the sample code:

// Parse the Caffe model to populate the network, then set the outputs.
INetworkDefinition* network = builder->createNetwork();
ICaffeParser* parser = createCaffeParser();
// The plugin factory supplies IPlugin implementations for any custom
// layers the parser does not recognize.
parser->setPluginFactory(pluginFactory);

std::cout << "Begin parsing model..." << std::endl;
const IBlobNameToTensor* blobNameToTensor = parser->parse(
    locateFile(deployFile).c_str(),
    locateFile(modelFile).c_str(),
    *network,
    DataType::kFLOAT);
std::cout << "End parsing model..." << std::endl;

I can use the network object and create more layers by calling, for example:

network->addFullyConnected(*pool2->getOutput(0), 500, weightMap["ip1filter"], weightMap["ip1bias"]);
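(In that line, `pool2` and `weightMap` come from the API-built MNIST sample. When starting from a parsed network like the one above, the input tensor can instead be looked up by its Caffe blob name via the IBlobNameToTensor that parse() returned. A minimal sketch, assuming the TensorRT 4.x-era API from the samples; `imported` is a placeholder name, and `fcWeights`/`fcBias` are Weights structs built as in the second sketch further down:

// Sketch (not verbatim from the samples): look up the imported tensor by
// its Caffe blob name, then grow the network behind it with the C++ API.
// "pool2" is a placeholder; use the name of the blob you want to extend.
nvinfer1::ITensor* imported = blobNameToTensor->find("pool2");
// find() returns nullptr if the name is not a blob in the deploy file.

nvinfer1::IFullyConnectedLayer* fc =
    network->addFullyConnected(*imported, 500, fcWeights, fcBias);
fc->getOutput(0)->setName("my_output");
network->markOutput(*fc->getOutput(0)); // new layer becomes a network output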

How do I set the weights if my original Caffe2 model cannot be saved to a protobuf .pb file because of the custom layers, which I want to write manually with the C++ API?
Is it possible to initialize the model with randomly created weights?
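On both weight questions: as far as I can tell, TensorRT never takes ownership of weight memory. An nvinfer1::Weights struct simply points at a host buffer that you allocate and that must stay valid until the engine is built, so the values can come from anywhere, including a random initializer. A minimal sketch (the sizes are illustrative placeholders; `fcWeights`/`fcBias` are the structs used in the sketch above):

#include <cstdint>
#include <random>
#include <vector>
#include "NvInfer.h"

// Host-side buffers for the new layer; sizes are placeholders
// (nbOutputs = 500, flattened input size = 800 in this example).
std::vector<float> kernel(500 * 800);
std::vector<float> bias(500);

// Fill with random values; TensorRT just reads these buffers when the
// engine is built, so any initialization scheme works.
std::mt19937 rng(42);
std::normal_distribution<float> dist(0.0f, 0.01f);
for (float& w : kernel) w = dist(rng);
for (float& b : bias)   b = dist(rng);

// Wrap the buffers; they must remain alive until buildCudaEngine()
// returns, because TensorRT reads them during the build.
nvinfer1::Weights fcWeights{nvinfer1::DataType::kFLOAT, kernel.data(),
                            static_cast<int64_t>(kernel.size())};
nvinfer1::Weights fcBias{nvinfer1::DataType::kFLOAT, bias.data(),
                         static_cast<int64_t>(bias.size())};

If the trained weights exist in Caffe2 but just cannot round-trip through the protobuf files, the same mechanism applies: export them in any simple format (sampleMNISTAPI does this with a .wts text file read by its loadWeights helper into the weightMap above) and fill the buffers from that file. Random initialization builds and runs fine, but the engine's outputs will of course be meaningless, since TensorRT is inference-only and cannot train the weights.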

We created a new “Deep Learning Training and Inference” section in Devtalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth