Installing and Running JetPack 3.2: Caffe problem

Hi, I am building an image classifier model in DIGITS with JetPack 3.2.

It crashes with the following error:

Train Caffe Model Error
Initialized at 08:40:55 PM (1 second)
Running at 08:40:57 PM (0 seconds)
Error at 08:40:58 PM
(Total - 2 seconds)
ERROR: Check failed: status == CUDNN_STATUS_SUCCESS (3 vs. 0) CUDNN_STATUS_BAD_PARAM
Top shape: 32 3 224 224 (4816896)
Top shape: 32 (32)
Memory required for data: 19267712
Creating layer label_train-data_1_split
Creating Layer label_train-data_1_split
label_train-data_1_split ← label
label_train-data_1_split → label_train-data_1_split_0
label_train-data_1_split → label_train-data_1_split_1
label_train-data_1_split → label_train-data_1_split_2
Setting up label_train-data_1_split
Top shape: 32 (32)
Top shape: 32 (32)
Top shape: 32 (32)
Memory required for data: 19268096
Creating layer conv1/7x7_s2
Creating Layer conv1/7x7_s2
conv1/7x7_s2 ← data
conv1/7x7_s2 → conv1/7x7_s2
Cannot create cuDNN handle. cuDNN won’t be available.
Check failed: status == CUDNN_STATUS_SUCCESS (3 vs. 0) CUDNN_STATUS_BAD_PARAM
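The key line is "Cannot create cuDNN handle": Caffe's call to cudnnCreate() fails before any layer parameters matter, which usually points at a driver/CUDA/cuDNN mismatch or a GPU-access problem rather than a bad network definition. A quick way to reproduce that call outside Caffe (a rough sketch using ctypes; it only assumes libcudnn is on the loader path, and just reports what it finds):

```python
import ctypes
import ctypes.util

def probe_cudnn():
    """Try to load libcudnn and create a handle, as Caffe does at startup."""
    name = ctypes.util.find_library("cudnn")
    if name is None:
        return "libcudnn not found on the loader path"
    lib = ctypes.CDLL(name)
    handle = ctypes.c_void_p()
    # cudnnStatus_t cudnnCreate(cudnnHandle_t *handle); 0 == CUDNN_STATUS_SUCCESS
    status = lib.cudnnCreate(ctypes.byref(handle))
    return "cudnnCreate returned %d" % status

print(probe_cudnn())
```

If this also fails to create a handle, the problem is in the CUDA/cuDNN install itself, not in Caffe or DIGITS.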

I have libcudnn installed:

/var/lib/dpkg/info/libcudnn7.list
/var/lib/dpkg/info/libcudnn7-dev.md5sums
/var/lib/dpkg/info/libcudnn7.md5sums
/var/lib/dpkg/info/libcudnn7.triggers
/var/lib/dpkg/info/libcudnn7-dev.postinst
/var/lib/dpkg/info/libcudnn7.shlibs
/var/lib/dpkg/info/libcudnn7-dev.list
/var/lib/dpkg/info/libcudnn7-dev.prerm
/var/lib/dpkg/alternatives/libcudnn
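Since dpkg shows the packages registered, it may also be worth confirming that the runtime library and headers are actually visible to the linker and compiler (a rough sanity check; the paths are the usual Ubuntu/JetPack defaults and may differ on your system):

```shell
# Rough sanity check that cuDNN is visible beyond dpkg's records.
# Paths are the usual Ubuntu/JetPack defaults; adjust if yours differ.
status=ok
dpkg -l 2>/dev/null | grep -i cudnn || echo "no cudnn packages registered with dpkg"
ldconfig -p 2>/dev/null | grep libcudnn || echo "libcudnn not in the linker cache"
ls /usr/include/cudnn*.h /usr/include/*/cudnn*.h 2>/dev/null \
  || echo "cudnn headers not found (libcudnn7-dev provides them)"
echo "$status"
```

If the library is registered with dpkg but missing from the linker cache, running `sudo ldconfig` sometimes fixes it.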

Any ideas, please? If I am posting in the wrong thread, please let me know.

I tried reinstalling Caffe, but it still gives the same error:

ERROR: Check failed: status == CUDNN_STATUS_SUCCESS (3 vs. 0) CUDNN_STATUS_BAD_PARAM
Top shape: 32 3 224 224 (4816896)
Top shape: 32 (32)
Memory required for data: 19267712
Creating layer label_train-data_1_split
Cannot create cuDNN handle. cuDNN won’t be available.
Creating Layer label_train-data_1_split
label_train-data_1_split ← label
label_train-data_1_split → label_train-data_1_split_0
label_train-data_1_split → label_train-data_1_split_1
label_train-data_1_split → label_train-data_1_split_2
Setting up label_train-data_1_split
Top shape: 32 (32)
Top shape: 32 (32)
Top shape: 32 (32)
Memory required for data: 19268096
Creating layer conv1/7x7_s2
Creating Layer conv1/7x7_s2
conv1/7x7_s2 ← data
conv1/7x7_s2 → conv1/7x7_s2
Check failed: status == CUDNN_STATUS_SUCCESS (3 vs. 0) CUDNN_STATUS_BAD_PARAM
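For what it's worth, the shape and memory figures in the log are internally consistent, so the data pipeline looks fine and the failure is isolated to cuDNN's convolution setup. The accounting (assuming 4-byte single-precision blobs, which is what these numbers imply):

```python
# Reproduce the "Top shape" counts and "Memory required for data" totals
# from the log, assuming 4-byte single-precision blobs.
batch, channels, height, width = 32, 3, 224, 224

data_count = batch * channels * height * width   # "Top shape: 32 3 224 224"
label_count = batch                              # "Top shape: 32"
print(data_count)                                # 4816896

mem = (data_count + label_count) * 4             # bytes after the train-data layer
print(mem)                                       # 19267712

mem += 3 * label_count * 4                       # three tops of label_train-data_1_split
print(mem)                                       # 19268096
```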

I followed DIGITS/BuildCaffe.md at master · NVIDIA/DIGITS · GitHub
#56
Posted 01/21/2018 04:05 PM

francisdomoney

Caffe output log:

I0121 15:58:56.605885 6959 upgrade_proto.cpp:1044] Attempting to upgrade input file specified using deprecated ‘solver_type’ field (enum)': /home/frank/DIGITS/digits/jobs/20180121-155854-0770/solver.prototxt
I0121 15:58:56.606048 6959 upgrade_proto.cpp:1051] Successfully upgraded file specified using deprecated ‘solver_type’ field (enum) to ‘type’ field (string).
W0121 15:58:56.606055 6959 upgrade_proto.cpp:1053] Note that future Caffe releases will only support ‘type’ field (string) for a solver’s type.
I0121 15:58:56.783870 6959 caffe.cpp:197] Using GPUs 0
I0121 15:58:56.784118 6959 caffe.cpp:202] GPU 0: GeForce GTX 1080 Ti
E0121 15:58:57.157729 6959 common.cpp:128] Cannot create cuDNN handle. cuDNN won’t be available.
I0121 15:58:57.168172 6959 solver.cpp:48] Initializing solver from parameters:
test_iter: 1322
test_interval: 5948
base_lr: 0.01
display: 157
max_iter: 178440
lr_policy: “step”
gamma: 0.1
momentum: 0.9
weight_decay: 0.0001
stepsize: 58886
snapshot: 5948
snapshot_prefix: “snapshot”
solver_mode: GPU
device_id: 0
net: “train_val.prototxt”
type: “SGD”
I0121 15:58:57.168601 6959 solver.cpp:91] Creating training net from net file: train_val.prototxt
I0121 15:58:57.170284 6959 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer val-data
I0121 15:58:57.170328 6959 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer loss1/top-1
I0121 15:58:57.170333 6959 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer loss1/top-5
I0121 15:58:57.170363 6959 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer loss2/top-1
I0121 15:58:57.170368 6959 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer loss2/top-5
I0121 15:58:57.170397 6959 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer loss3/top-1
I0121 15:58:57.170403 6959 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer loss3/top-5
I0121 15:58:57.171790 6959 net.cpp:52] Initializing net from parameters:
state {
phase: TRAIN
}
layer {
name: “train-data”
type: “Data”
top: “data”
top: “label”
include {
phase: TRAIN
}
transform_param {
mirror: true
crop_size: 224
mean_value: 125.40289
mean_value: 138.01128
mean_value: 142.36745
}
data_param {
source: “/home/frank/DIGITS/digits/jobs/20180121-150421-c0d0/train_db”
batch_size: 32
backend: LMDB
}
}
layer {
name: “conv1/7x7_s2”
type: “Convolution”
bottom: “data”
top: “conv1/7x7_s2”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 3
kernel_size: 7
stride: 2
weight_filler {
type: “xavier”
std: 0.1
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “conv1/relu_7x7”
type: “ReLU”
bottom: “conv1/7x7_s2”
top: “conv1/7x7_s2”
}
layer {
name: “pool1/3x3_s2”
type: “Pooling”
bottom: “conv1/7x7_s2”
top: “pool1/3x3_s2”
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: “pool1/norm1”
type: “LRN”
bottom: “pool1/3x3_s2”
top: “pool1/norm1”
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: “conv2/3x3_reduce”
type: “Convolution”
bottom: “pool1/norm1”
top: “conv2/3x3_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.1
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “conv2/relu_3x3_reduce”
type: “ReLU”
bottom: “conv2/3x3_reduce”
top: “conv2/3x3_reduce”
}
layer {
name: “conv2/3x3”
type: “Convolution”
bottom: “conv2/3x3_reduce”
top: “conv2/3x3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 192
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “conv2/relu_3x3”
type: “ReLU”
bottom: “conv2/3x3”
top: “conv2/3x3”
}
layer {
name: “conv2/norm2”
type: “LRN”
bottom: “conv2/3x3”
top: “conv2/norm2”
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: “pool2/3x3_s2”
type: “Pooling”
bottom: “conv2/norm2”
top: “pool2/3x3_s2”
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: “inception_3a/1x1”
type: “Convolution”
bottom: “pool2/3x3_s2”
top: “inception_3a/1x1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3a/relu_1x1”
type: “ReLU”
bottom: “inception_3a/1x1”
top: “inception_3a/1x1”
}
layer {
name: “inception_3a/3x3_reduce”
type: “Convolution”
bottom: “pool2/3x3_s2”
top: “inception_3a/3x3_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 96
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.09
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3a/relu_3x3_reduce”
type: “ReLU”
bottom: “inception_3a/3x3_reduce”
top: “inception_3a/3x3_reduce”
}
layer {
name: “inception_3a/3x3”
type: “Convolution”
bottom: “inception_3a/3x3_reduce”
top: “inception_3a/3x3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3a/relu_3x3”
type: “ReLU”
bottom: “inception_3a/3x3”
top: “inception_3a/3x3”
}
layer {
name: “inception_3a/5x5_reduce”
type: “Convolution”
bottom: “pool2/3x3_s2”
top: “inception_3a/5x5_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 16
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.2
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3a/relu_5x5_reduce”
type: “ReLU”
bottom: “inception_3a/5x5_reduce”
top: “inception_3a/5x5_reduce”
}
layer {
name: “inception_3a/5x5”
type: “Convolution”
bottom: “inception_3a/5x5_reduce”
top: “inception_3a/5x5”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 32
pad: 2
kernel_size: 5
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3a/relu_5x5”
type: “ReLU”
bottom: “inception_3a/5x5”
top: “inception_3a/5x5”
}
layer {
name: “inception_3a/pool”
type: “Pooling”
bottom: “pool2/3x3_s2”
top: “inception_3a/pool”
pooling_param {
pool: MAX
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: “inception_3a/pool_proj”
type: “Convolution”
bottom: “inception_3a/pool”
top: “inception_3a/pool_proj”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 32
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.1
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3a/relu_pool_proj”
type: “ReLU”
bottom: “inception_3a/pool_proj”
top: “inception_3a/pool_proj”
}
layer {
name: “inception_3a/output”
type: “Concat”
bottom: “inception_3a/1x1”
bottom: “inception_3a/3x3”
bottom: “inception_3a/5x5”
bottom: “inception_3a/pool_proj”
top: “inception_3a/output”
}
layer {
name: “inception_3b/1x1”
type: “Convolution”
bottom: “inception_3a/output”
top: “inception_3b/1x1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3b/relu_1x1”
type: “ReLU”
bottom: “inception_3b/1x1”
top: “inception_3b/1x1”
}
layer {
name: “inception_3b/3x3_reduce”
type: “Convolution”
bottom: “inception_3a/output”
top: “inception_3b/3x3_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.09
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3b/relu_3x3_reduce”
type: “ReLU”
bottom: “inception_3b/3x3_reduce”
top: “inception_3b/3x3_reduce”
}
layer {
name: “inception_3b/3x3”
type: “Convolution”
bottom: “inception_3b/3x3_reduce”
top: “inception_3b/3x3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 192
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3b/relu_3x3”
type: “ReLU”
bottom: “inception_3b/3x3”
top: “inception_3b/3x3”
}
layer {
name: “inception_3b/5x5_reduce”
type: “Convolution”
bottom: “inception_3a/output”
top: “inception_3b/5x5_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 32
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.2
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3b/relu_5x5_reduce”
type: “ReLU”
bottom: “inception_3b/5x5_reduce”
top: “inception_3b/5x5_reduce”
}
layer {
name: “inception_3b/5x5”
type: “Convolution”
bottom: “inception_3b/5x5_reduce”
top: “inception_3b/5x5”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 96
pad: 2
kernel_size: 5
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3b/relu_5x5”
type: “ReLU”
bottom: “inception_3b/5x5”
top: “inception_3b/5x5”
}
layer {
name: “inception_3b/pool”
type: “Pooling”
bottom: “inception_3a/output”
top: “inception_3b/pool”
pooling_param {
pool: MAX
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: “inception_3b/pool_proj”
type: “Convolution”
bottom: “inception_3b/pool”
top: “inception_3b/pool_proj”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.1
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_3b/relu_pool_proj”
type: “ReLU”
bottom: “inception_3b/pool_proj”
top: “inception_3b/pool_proj”
}
layer {
name: “inception_3b/output”
type: “Concat”
bottom: “inception_3b/1x1”
bottom: “inception_3b/3x3”
bottom: “inception_3b/5x5”
bottom: “inception_3b/pool_proj”
top: “inception_3b/output”
}
layer {
name: “pool3/3x3_s2”
type: “Pooling”
bottom: “inception_3b/output”
top: “pool3/3x3_s2”
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: “inception_4a/1x1”
type: “Convolution”
bottom: “pool3/3x3_s2”
top: “inception_4a/1x1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 192
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4a/relu_1x1”
type: “ReLU”
bottom: “inception_4a/1x1”
top: “inception_4a/1x1”
}
layer {
name: “inception_4a/3x3_reduce”
type: “Convolution”
bottom: “pool3/3x3_s2”
top: “inception_4a/3x3_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 96
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.09
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4a/relu_3x3_reduce”
type: “ReLU”
bottom: “inception_4a/3x3_reduce”
top: “inception_4a/3x3_reduce”
}
layer {
name: “inception_4a/3x3”
type: “Convolution”
bottom: “inception_4a/3x3_reduce”
top: “inception_4a/3x3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 208
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4a/relu_3x3”
type: “ReLU”
bottom: “inception_4a/3x3”
top: “inception_4a/3x3”
}
layer {
name: “inception_4a/5x5_reduce”
type: “Convolution”
bottom: “pool3/3x3_s2”
top: “inception_4a/5x5_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 16
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.2
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4a/relu_5x5_reduce”
type: “ReLU”
bottom: “inception_4a/5x5_reduce”
top: “inception_4a/5x5_reduce”
}
layer {
name: “inception_4a/5x5”
type: “Convolution”
bottom: “inception_4a/5x5_reduce”
top: “inception_4a/5x5”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 48
pad: 2
kernel_size: 5
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4a/relu_5x5”
type: “ReLU”
bottom: “inception_4a/5x5”
top: “inception_4a/5x5”
}
layer {
name: “inception_4a/pool”
type: “Pooling”
bottom: “pool3/3x3_s2”
top: “inception_4a/pool”
pooling_param {
pool: MAX
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: “inception_4a/pool_proj”
type: “Convolution”
bottom: “inception_4a/pool”
top: “inception_4a/pool_proj”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.1
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4a/relu_pool_proj”
type: “ReLU”
bottom: “inception_4a/pool_proj”
top: “inception_4a/pool_proj”
}
layer {
name: “inception_4a/output”
type: “Concat”
bottom: “inception_4a/1x1”
bottom: “inception_4a/3x3”
bottom: “inception_4a/5x5”
bottom: “inception_4a/pool_proj”
top: “inception_4a/output”
}
layer {
name: “loss1/ave_pool”
type: “Pooling”
bottom: “inception_4a/output”
top: “loss1/ave_pool”
pooling_param {
pool: AVE
kernel_size: 5
stride: 3
}
}
layer {
name: “loss1/conv”
type: “Convolution”
bottom: “loss1/ave_pool”
top: “loss1/conv”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.08
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “loss1/relu_conv”
type: “ReLU”
bottom: “loss1/conv”
top: “loss1/conv”
}
layer {
name: “loss1/fc”
type: “InnerProduct”
bottom: “loss1/conv”
top: “loss1/fc”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 1024
weight_filler {
type: “xavier”
std: 0.02
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “loss1/relu_fc”
type: “ReLU”
bottom: “loss1/fc”
top: “loss1/fc”
}
layer {
name: “loss1/drop_fc”
type: “Dropout”
bottom: “loss1/fc”
top: “loss1/fc”
dropout_param {
dropout_ratio: 0.7
}
}
layer {
name: “loss1/classifier”
type: “InnerProduct”
bottom: “loss1/fc”
top: “loss1/classifier”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 12
weight_filler {
type: “xavier”
std: 0.0009765625
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “loss1/loss”
type: “SoftmaxWithLoss”
bottom: “loss1/classifier”
bottom: “label”
top: “loss1/loss”
loss_weight: 0.3
}
layer {
name: “inception_4b/1x1”
type: “Convolution”
bottom: “inception_4a/output”
top: “inception_4b/1x1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 160
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4b/relu_1x1”
type: “ReLU”
bottom: “inception_4b/1x1”
top: “inception_4b/1x1”
}
layer {
name: “inception_4b/3x3_reduce”
type: “Convolution”
bottom: “inception_4a/output”
top: “inception_4b/3x3_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 112
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.09
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4b/relu_3x3_reduce”
type: “ReLU”
bottom: “inception_4b/3x3_reduce”
top: “inception_4b/3x3_reduce”
}
layer {
name: “inception_4b/3x3”
type: “Convolution”
bottom: “inception_4b/3x3_reduce”
top: “inception_4b/3x3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 224
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4b/relu_3x3”
type: “ReLU”
bottom: “inception_4b/3x3”
top: “inception_4b/3x3”
}
layer {
name: “inception_4b/5x5_reduce”
type: “Convolution”
bottom: “inception_4a/output”
top: “inception_4b/5x5_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 24
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.2
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4b/relu_5x5_reduce”
type: “ReLU”
bottom: “inception_4b/5x5_reduce”
top: “inception_4b/5x5_reduce”
}
layer {
name: “inception_4b/5x5”
type: “Convolution”
bottom: “inception_4b/5x5_reduce”
top: “inception_4b/5x5”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 2
kernel_size: 5
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4b/relu_5x5”
type: “ReLU”
bottom: “inception_4b/5x5”
top: “inception_4b/5x5”
}
layer {
name: “inception_4b/pool”
type: “Pooling”
bottom: “inception_4a/output”
top: “inception_4b/pool”
pooling_param {
pool: MAX
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: “inception_4b/pool_proj”
type: “Convolution”
bottom: “inception_4b/pool”
top: “inception_4b/pool_proj”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.1
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4b/relu_pool_proj”
type: “ReLU”
bottom: “inception_4b/pool_proj”
top: “inception_4b/pool_proj”
}
layer {
name: “inception_4b/output”
type: “Concat”
bottom: “inception_4b/1x1”
bottom: “inception_4b/3x3”
bottom: “inception_4b/5x5”
bottom: “inception_4b/pool_proj”
top: “inception_4b/output”
}
layer {
name: “inception_4c/1x1”
type: “Convolution”
bottom: “inception_4b/output”
top: “inception_4c/1x1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4c/relu_1x1”
type: “ReLU”
bottom: “inception_4c/1x1”
top: “inception_4c/1x1”
}
layer {
name: “inception_4c/3x3_reduce”
type: “Convolution”
bottom: “inception_4b/output”
top: “inception_4c/3x3_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.09
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4c/relu_3x3_reduce”
type: “ReLU”
bottom: “inception_4c/3x3_reduce”
top: “inception_4c/3x3_reduce”
}
layer {
name: “inception_4c/3x3”
type: “Convolution”
bottom: “inception_4c/3x3_reduce”
top: “inception_4c/3x3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4c/relu_3x3”
type: “ReLU”
bottom: “inception_4c/3x3”
top: “inception_4c/3x3”
}
layer {
name: “inception_4c/5x5_reduce”
type: “Convolution”
bottom: “inception_4b/output”
top: “inception_4c/5x5_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 24
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.2
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4c/relu_5x5_reduce”
type: “ReLU”
bottom: “inception_4c/5x5_reduce”
top: “inception_4c/5x5_reduce”
}
layer {
name: “inception_4c/5x5”
type: “Convolution”
bottom: “inception_4c/5x5_reduce”
top: “inception_4c/5x5”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 2
kernel_size: 5
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4c/relu_5x5”
type: “ReLU”
bottom: “inception_4c/5x5”
top: “inception_4c/5x5”
}
layer {
name: “inception_4c/pool”
type: “Pooling”
bottom: “inception_4b/output”
top: “inception_4c/pool”
pooling_param {
pool: MAX
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: “inception_4c/pool_proj”
type: “Convolution”
bottom: “inception_4c/pool”
top: “inception_4c/pool_proj”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.1
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4c/relu_pool_proj”
type: “ReLU”
bottom: “inception_4c/pool_proj”
top: “inception_4c/pool_proj”
}
layer {
name: “inception_4c/output”
type: “Concat”
bottom: “inception_4c/1x1”
bottom: “inception_4c/3x3”
bottom: “inception_4c/5x5”
bottom: “inception_4c/pool_proj”
top: “inception_4c/output”
}
layer {
name: “inception_4d/1x1”
type: “Convolution”
bottom: “inception_4c/output”
top: “inception_4d/1x1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 112
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4d/relu_1x1”
type: “ReLU”
bottom: “inception_4d/1x1”
top: “inception_4d/1x1”
}
layer {
name: “inception_4d/3x3_reduce”
type: “Convolution”
bottom: “inception_4c/output”
top: “inception_4d/3x3_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 144
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.09
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4d/relu_3x3_reduce”
type: “ReLU”
bottom: “inception_4d/3x3_reduce”
top: “inception_4d/3x3_reduce”
}
layer {
name: “inception_4d/3x3”
type: “Convolution”
bottom: “inception_4d/3x3_reduce”
top: “inception_4d/3x3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 288
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4d/relu_3x3”
type: “ReLU”
bottom: “inception_4d/3x3”
top: “inception_4d/3x3”
}
layer {
name: “inception_4d/5x5_reduce”
type: “Convolution”
bottom: “inception_4c/output”
top: “inception_4d/5x5_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 32
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.2
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4d/relu_5x5_reduce”
type: “ReLU”
bottom: “inception_4d/5x5_reduce”
top: “inception_4d/5x5_reduce”
}
layer {
name: “inception_4d/5x5”
type: “Convolution”
bottom: “inception_4d/5x5_reduce”
top: “inception_4d/5x5”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 2
kernel_size: 5
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4d/relu_5x5”
type: “ReLU”
bottom: “inception_4d/5x5”
top: “inception_4d/5x5”
}
layer {
name: “inception_4d/pool”
type: “Pooling”
bottom: “inception_4c/output”
top: “inception_4d/pool”
pooling_param {
pool: MAX
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: “inception_4d/pool_proj”
type: “Convolution”
bottom: “inception_4d/pool”
top: “inception_4d/pool_proj”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.1
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4d/relu_pool_proj”
type: “ReLU”
bottom: “inception_4d/pool_proj”
top: “inception_4d/pool_proj”
}
layer {
name: “inception_4d/output”
type: “Concat”
bottom: “inception_4d/1x1”
bottom: “inception_4d/3x3”
bottom: “inception_4d/5x5”
bottom: “inception_4d/pool_proj”
top: “inception_4d/output”
}
layer {
name: “loss2/ave_pool”
type: “Pooling”
bottom: “inception_4d/output”
top: “loss2/ave_pool”
pooling_param {
pool: AVE
kernel_size: 5
stride: 3
}
}
layer {
name: “loss2/conv”
type: “Convolution”
bottom: “loss2/ave_pool”
top: “loss2/conv”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.08
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “loss2/relu_conv”
type: “ReLU”
bottom: “loss2/conv”
top: “loss2/conv”
}
layer {
name: “loss2/fc”
type: “InnerProduct”
bottom: “loss2/conv”
top: “loss2/fc”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 1024
weight_filler {
type: “xavier”
std: 0.02
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “loss2/relu_fc”
type: “ReLU”
bottom: “loss2/fc”
top: “loss2/fc”
}
layer {
name: “loss2/drop_fc”
type: “Dropout”
bottom: “loss2/fc”
top: “loss2/fc”
dropout_param {
dropout_ratio: 0.7
}
}
layer {
name: “loss2/classifier”
type: “InnerProduct”
bottom: “loss2/fc”
top: “loss2/classifier”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 12
weight_filler {
type: “xavier”
std: 0.0009765625
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “loss2/loss”
type: “SoftmaxWithLoss”
bottom: “loss2/classifier”
bottom: “label”
top: “loss2/loss”
loss_weight: 0.3
}
layer {
name: “inception_4e/1x1”
type: “Convolution”
bottom: “inception_4d/output”
top: “inception_4e/1x1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4e/relu_1x1”
type: “ReLU”
bottom: “inception_4e/1x1”
top: “inception_4e/1x1”
}
layer {
name: “inception_4e/3x3_reduce”
type: “Convolution”
bottom: “inception_4d/output”
top: “inception_4e/3x3_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 160
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.09
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4e/relu_3x3_reduce”
type: “ReLU”
bottom: “inception_4e/3x3_reduce”
top: “inception_4e/3x3_reduce”
}
layer {
name: “inception_4e/3x3”
type: “Convolution”
bottom: “inception_4e/3x3_reduce”
top: “inception_4e/3x3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 320
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
std: 0.03
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: “inception_4e/relu_3x3”
type: “ReLU”
bottom: “inception_4e/3x3”
top: “inception_4e/3x3”
}
layer {
name: “inception_4e/5x5_reduce”
type: “Convolution”
bottom: “inception_4d/output”
top: “inception_4e/5x5_reduce”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 32
kernel_size: 1
weight_filler {
type: “xavier”
std: 0.2
}
bias_filler {
type: “constant”
value: 0.2
}
}
}
layer {
name: "
I0121 15:58:57.172623 6959 layer_factory.hpp:77] Creating layer train-data
I0121 15:58:57.173887 6959 net.cpp:94] Creating Layer train-data
I0121 15:58:57.173905 6959 net.cpp:409] train-data → data
I0121 15:58:57.173933 6959 net.cpp:409] train-data → label
E0121 15:58:57.174237 6970 common.cpp:128] Cannot create cuDNN handle. cuDNN won’t be available.
I0121 15:58:57.174358 6970 db_lmdb.cpp:35] Opened lmdb /home/frank/DIGITS/digits/jobs/20180121-150421-c0d0/train_db
I0121 15:58:57.217375 6959 data_layer.cpp:78] ReshapePrefetch 32, 3, 224, 224
I0121 15:58:57.217448 6959 data_layer.cpp:83] output data size: 32,3,224,224
I0121 15:58:57.252684 6959 net.cpp:144] Setting up train-data
I0121 15:58:57.252715 6959 net.cpp:151] Top shape: 32 3 224 224 (4816896)
I0121 15:58:57.252722 6959 net.cpp:151] Top shape: 32 (32)
I0121 15:58:57.252727 6959 net.cpp:159] Memory required for data: 19267712
I0121 15:58:57.252763 6959 layer_factory.hpp:77] Creating layer label_train-data_1_split
E0121 15:58:57.253790 6979 common.cpp:128] Cannot create cuDNN handle. cuDNN won’t be available.
I0121 15:58:57.294332 6959 net.cpp:94] Creating Layer label_train-data_1_split
I0121 15:58:57.294363 6959 net.cpp:435] label_train-data_1_split ← label
I0121 15:58:57.294385 6959 net.cpp:409] label_train-data_1_split → label_train-data_1_split_0
I0121 15:58:57.294402 6959 net.cpp:409] label_train-data_1_split → label_train-data_1_split_1
I0121 15:58:57.294414 6959 net.cpp:409] label_train-data_1_split → label_train-data_1_split_2
I0121 15:58:57.294519 6959 net.cpp:144] Setting up label_train-data_1_split
I0121 15:58:57.294533 6959 net.cpp:151] Top shape: 32 (32)
I0121 15:58:57.294543 6959 net.cpp:151] Top shape: 32 (32)
I0121 15:58:57.294550 6959 net.cpp:151] Top shape: 32 (32)
I0121 15:58:57.294555 6959 net.cpp:159] Memory required for data: 19268096
I0121 15:58:57.294562 6959 layer_factory.hpp:77] Creating layer conv1/7x7_s2
I0121 15:58:57.294589 6959 net.cpp:94] Creating Layer conv1/7x7_s2
I0121 15:58:57.294596 6959 net.cpp:435] conv1/7x7_s2 ← data
I0121 15:58:57.294607 6959 net.cpp:409] conv1/7x7_s2 → conv1/7x7_s2
F0121 15:58:57.297176 6959 cudnn_conv_layer.cpp:243] Check failed: status == CUDNN_STATUS_SUCCESS (3 vs. 0) CUDNN_STATUS_BAD_PARAM
*** Check failure stack trace: ***
@ 0x7fbd16f255cd google::LogMessage::Fail()
@ 0x7fbd16f27433 google::LogMessage::SendToLog()
@ 0x7fbd16f2515b google::LogMessage::Flush()
@ 0x7fbd16f27e1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7fbd17569b40 caffe::CuDNNConvolutionLayer<>::Reshape()
@ 0x7fbd174f1de6 caffe::Net<>::Init()
@ 0x7fbd174f36b6 caffe::Net<>::Net()
@ 0x7fbd174d256a caffe::Solver<>::InitTrainNet()
@ 0x7fbd174d3977 caffe::Solver<>::Init()
@ 0x7fbd174d3d33 caffe::Solver<>::Solver()
@ 0x7fbd1749fc75 caffe::Creator_SGDSolver<>()
@ 0x40ba35 train()
@ 0x4086f8 main
@ 0x7fbd15970830 __libc_start_main
@ 0x408e69 _start
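One incidental check: the solver values near the top of that log are internally consistent DIGITS defaults, with a validation pass and a snapshot at the same interval and max_iter an exact multiple of it, so the crash is not caused by a malformed solver. Roughly (reading each interval as one epoch is my inference from the numbers, not something the log states):

```python
# Relationships among the DIGITS-generated solver values from the log.
test_interval = 5948     # iterations between validation passes
snapshot = 5948          # iterations between snapshots
max_iter = 178440

assert snapshot == test_interval        # snapshot once per validation pass
assert max_iter % test_interval == 0    # training ends on a pass boundary
print(max_iter // test_interval)        # 30, i.e. 30 epochs if one pass == one epoch
```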

OK. Building with CMake solved the problem, along with getting rid of the ARM branch.
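For anyone who lands here with the same error: the CMake route means building NVIDIA's Caffe fork out of tree instead of via the Makefile. A rough sketch of such a build follows (the flags and paths are assumptions on my part; DIGITS' BuildCaffe.md is the authoritative reference):

```shell
# Sketch of an out-of-tree CMake build of NVIDIA's Caffe fork.
# Flags and paths are assumptions; see DIGITS' BuildCaffe.md for the real steps.
git clone https://github.com/NVIDIA/caffe.git
cd caffe
mkdir -p build && cd build
cmake -DUSE_CUDNN=ON -DCMAKE_BUILD_TYPE=Release ..
make --jobs="$(nproc)"
```

`-DUSE_CUDNN=OFF` is also a useful diagnostic: if a cuDNN-free build trains fine, the problem is confined to the cuDNN install rather than CUDA as a whole.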

A video walkthrough of natively installing NVIDIA DIGITS on Ubuntu 18.04 LTS is available here:

-Cuda Education