train_lmdb: Cannot allocate memory

Hi all,

I need to train on 25K images for cat/dog classification.

As part of the process I need to create an LMDB file that stores all the serialized data. When the map_size I allocate exceeds the RAM size, lmdb.open() throws the error below:

Error: Traceback (most recent call last):
  File "create_lmdb.py", line 64, in <module>
    in_db = lmdb.open(train_lmdb, map_size=int(3e9))
lmdb.MemoryError: train_lmdb: Cannot allocate memory

Code snippet:

in_db = lmdb.open(train_lmdb, map_size=int(3e9))
with in_db.begin(write=True) as in_txn:
    for in_idx, img_path in enumerate(train_data):
        if in_idx % 6 == 0:
            continue
        img = cv2.imread(img_path, cv2.IMREAD_COLOR)
        img = transform_img(img, img_width=IMAGE_WIDTH, img_height=IMAGE_HEIGHT)
        if 'cat' in img_path:
            label = 0
        else:
            label = 1
        datum = make_datum(img, label)
        in_txn.put('{:0>5d}'.format(in_idx), datum.SerializeToString())
        print '{:0>5d}'.format(in_idx) + ':' + img_path
in_db.close()

I even created a paging (swap) file of size 4 GB, but the issue persists.
Please suggest how I can move forward.
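In case it helps others: map_size must fit in the process's virtual address space, and on a 32-bit ARM board a single 3 GB mapping can fail regardless of swap. One workaround is splitting the dataset across several smaller LMDBs. The sketch below only does the sizing arithmetic; the per-datum byte count and 227x227 image size are assumptions (not from the post) and should be replaced with your own numbers.

```python
# Hedged sketch: shard 25K images into multiple LMDBs, each with a map_size
# that stays well under the 32-bit mmap ceiling. All constants here are
# illustrative assumptions; measure a real serialized datum to calibrate.
import math

NUM_IMAGES = 25000
BYTES_PER_DATUM = 227 * 227 * 3 * 2   # raw pixels x2 as rough protobuf/LMDB overhead
SAFE_MAP_SIZE = int(1e9)              # conservative per-shard map_size

images_per_shard = SAFE_MAP_SIZE // BYTES_PER_DATUM
num_shards = math.ceil(NUM_IMAGES / images_per_shard)
print('images per shard: %d, shards needed: %d' % (images_per_shard, num_shards))
```

Each shard would then be opened with `lmdb.open(path_i, map_size=SAFE_MAP_SIZE)` and filled with its slice of the image list.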

Thanks
vijay

Hi,

GPU memory size is fixed; adding swap only increases the amount of CPU memory.
We are not sure which type of out-of-memory error you are hitting. You could allocate more swap and check whether the error changes.

We recommend training your model on a desktop GPU.
Jetson is designed for inference, and users who train on it usually run into storage and memory issues.
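Before adding more swap, it can help to confirm how much CPU memory and swap the board currently has. A minimal Linux-only sketch (not Jetson-specific, just reading /proc/meminfo):

```python
# Hedged sketch: report total RAM and swap from /proc/meminfo (values are in kB).
def meminfo():
    """Return /proc/meminfo numeric fields as a {name: kB} dict."""
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, _, rest = line.partition(':')
            fields = rest.split()
            if fields and fields[0].isdigit():
                info[key] = int(fields[0])
    return info

m = meminfo()
print('MemTotal:  %6d MB' % (m['MemTotal'] // 1024))
print('SwapTotal: %6d MB' % (m['SwapTotal'] // 1024))
```

If SwapTotal does not reflect the 4 GB paging file you created, the swap was likely never enabled (swapon).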

Thanks.