Some layers have to be defined subject to the conditions above, especially when I want to use the inception layers in GoogLeNet.
I confirmed the same error condition with the CIFAR-10 example by changing conv3 from "bottom size: 8 / pad: 2, kernel: 5" to "bottom size: 8 / pad: 5, kernel: 10".
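For reference, a sketch of what that modified conv3 definition would look like in the CIFAR-10 quick prototxt (the surrounding field values such as `num_output` and the bottom/top names are taken from the standard cifar10_quick net and should be checked against your copy):

```prototxt
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  convolution_param {
    num_output: 64
    pad: 5           # was pad: 2
    kernel_size: 10  # was kernel_size: 5
    stride: 1
  }
}
```

With pad >= kernel-size/2 like this, the cuDNN v2 path fails while the native conv_layer.cu path still works.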
If I compile without cuDNN options, i.e. use conv_layer.cu, it works.
It seems like a bug in cuDNN v2.
One of the failure cases I tested is as follows (it worked successfully with cuDNN v1):
I have the same issue trying to run the deepdream dreamify.py script
with cuDNN v2 on Tegra4Linux (based on Ubuntu 14.04).
I tried to play a bit with the kernel size and other parameters…
But if I, for example, change kernel_size to 8 instead of 7, I get:
"net.cpp:860] Cannot copy param 0 weights from layer 'conv1/7x7_s2'; shape mismatch. Source param shape is 64 3 7 7 (9408); target param shape is 64 3 8 8 (12288)"
It also says
“To learn this layer’s parameters from scratch rather than copying from a saved net, rename the layer.”
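The shape mismatch in the first message is just the weight-blob size changing with the kernel: Caffe convolution weights are shaped (num_output, channels, kernel_h, kernel_w), so a different kernel_size means a different parameter count and the saved weights can no longer be copied. A quick check of the numbers from the error:

```python
# Conv weight-blob parameter count: (num_output, channels, kernel_h, kernel_w)
def conv_weight_count(num_output, channels, kh, kw):
    return num_output * channels * kh * kw

print(conv_weight_count(64, 3, 7, 7))  # 9408  -> source param shape in the saved net
print(conv_weight_count(64, 3, 8, 8))  # 12288 -> target param shape with kernel_size: 8
```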
But I don't know how to do this… How do I change the layer, and do I really need to?
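As I understand it, "rename the layer" just means editing the `name:` field in the deploy prototxt so it no longer matches any layer in the saved .caffemodel. A sketch (the new name is arbitrary; the other field values are from bvlc_googlenet's deploy file and should be verified against yours):

```prototxt
layer {
  name: "conv1/8x8_s2"   # renamed from "conv1/7x7_s2" so no weights are copied
  type: "Convolution"
  bottom: "data"
  top: "conv1/7x7_s2"    # keep the top name so downstream layers still connect
  convolution_param {
    num_output: 64
    pad: 3
    kernel_size: 8       # the changed kernel size
    stride: 2
  }
}
```

Note that the renamed layer then starts with fresh, untrained weights, so for deepdream this mainly silences the error rather than producing meaningful output.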
Anybody solved this issue in the meanwhile?
Without cuDNN everything works fine, both GPU and CPU computation…
I tried the same thing a few weeks ago and hoped it would be fixed, but it doesn't seem like it. And now cuDNN v3 has been released, so I don't think this will be fixed in cuDNN v2 anymore…
I was not able to solve this problem for the last week…
In the meanwhile I implemented a fallback logic via a script. If bvlc_googlenet is chosen, a Caffe build without cuDNN (still using the GPU) is used. If another model is selected (e.g. bvlc_alexnet, which works with most of the included layers), a Caffe build compiled with USE_CUDNN enabled is taken.
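A minimal sketch of that fallback logic (the binary paths and the shown model names are assumptions; adjust them to your install):

```shell
#!/bin/sh
# Pick a Caffe build depending on the selected model.
CAFFE_PLAIN=/opt/caffe-nocudnn/build/tools/caffe   # built without USE_CUDNN
CAFFE_CUDNN=/opt/caffe-cudnn/build/tools/caffe     # built with USE_CUDNN

select_caffe() {
  case "$1" in
    bvlc_googlenet)
      # cuDNN v2 fails on the inception layers; use the plain GPU build
      echo "$CAFFE_PLAIN" ;;
    *)
      echo "$CAFFE_CUDNN" ;;
  esac
}
```

You would then invoke `"$(select_caffe "$MODEL")" train ...` (or whatever command your pipeline runs) with the chosen binary.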
I will try to improve that by using cuDNN v1 later. People seem to be working successfully with that version…