GPU cannot be used when calling from Java

Hi, I have a problem when using the NVIDIA Jetson AGX Xavier to run my deep learning model.
I have confirmed that all of the necessary models can run on the NVIDIA Jetson AGX Xavier, but the GPU is not used when the model is called from Java (only the CPU is used), and I do not know what causes this situation.

By the way, we do not use any frameworks in this case, only a GlassFish servlet.
Our architecture is as follows:

apache → (calls) → glassfish → (calls) → python → (calls) → inference model
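
To illustrate the chain: the Python layer here is just a small entry script that the servlet starts as a subprocess, passing it the uploaded image. The following is only a simplified sketch of that layer (the real script name, model path, and arguments are different):

# infer.py - hypothetical entry script the servlet would invoke:
#   python3 infer.py <image path>
import sys
import numpy as np
from PIL import Image
from keras.models import load_model

def main():
    image_path = sys.argv[1]                      # path handed over by the Java side
    model = load_model('model.h5')                # placeholder model path
    img = Image.open(image_path).convert('L').resize((64, 64))
    x = np.asarray(img, dtype='float32')[None, :, :, None] / 255.0
    print(model.predict(x))                       # the servlet reads the result from stdout

if __name__ == '__main__':
    main()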

When we use a terminal to call the Python script and run the inference model directly, the GPU is used as expected.

But when we run the inference model from Java (via GlassFish), the GPU is not used.

We want to understand why this issue happens. Is it caused by our architecture?

Thank you so much

Hi,

Is this topic a duplicate of 1065065?
https://devtalk.nvidia.com/default/topic/1065065/jetson-agx-xavier/can-not-start-the-glassfish-service-/

We suppose the Java call is just a wrapper and shouldn't make any difference in getting GPU resources.

Could you share more information about your issue?
What behavior do you see when the GPU is and is not used?
Do you measure it with tegrastats, or do you hit an error?
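
If it is not too much trouble, could you also run a small check like the one below from inside the Python process that GlassFish launches, and compare the output with a run from the terminal? A servlet is usually started with a different environment than an interactive shell, so this would show whether TensorFlow can see the GPU at all in that case. (This is just a suggested snippet, TF 1.x API; the script name is arbitrary.)

# gpu_check.py - print the environment and GPU visibility as seen by this process
import os
import tensorflow as tf
from tensorflow.python.client import device_lib

print('CUDA_VISIBLE_DEVICES =', os.environ.get('CUDA_VISIBLE_DEVICES'))
print('LD_LIBRARY_PATH =', os.environ.get('LD_LIBRARY_PATH'))
print('PATH =', os.environ.get('PATH'))
print('GPU available:', tf.test.is_gpu_available())          # TF 1.x helper
print([d.name for d in device_lib.list_local_devices()])     # should list /device:GPU:0 when the GPU is visible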

Thanks.

Hello.
No, they are different topics. I already solved the issue described in https://devtalk.nvidia.com/default/topic/1065065/jetson-agx-xavier/can-not-start-the-glassfish-service-/ by deleting the -client entry in the domain.xml file.

No error occurs when I run my inference model.

My command to run the inference model on the Xavier from macOS is:

curl -F 'apiKey=edge' -F 'image=@test.png' http://192.168.3.45:8080/NxWeb/v1/NxRead/all

I would like to share some screenshots of the Xavier's status to show why I believe the GPU is not used.

These screenshots show the Xavier working under the MAXN power mode:

We can see that all of the CPUs are running at almost full load, but the GPU does not appear to be doing any work during the process (most of the time the GPU usage stays at 0%; occasionally it rises to 16% or 25% for a moment and then drops back to 0%, so it looks like the GPU is only being used for screen updates).

When I changed the power mode to MODE30WALL, the result did not change.

Even when I changed the power mode to MODE10W, the result was still the same.

That is why I said the GPU is not used while the inference model is running.

Do you have any idea about this kind of issue?

Thank you

To add more information: when we use the python3 command to call the inference model on the Xavier directly, the GPU works well and the speed is good (GPU usage goes up to 100% during computation).

The commands we run on the Xavier are:

import keras
from keras.models import load_model
import numpy as np
import tensorflow as tf

# Limit GPU memory for this process and let the allocation grow on demand (TF 1.x API)
config = tf.ConfigProto(gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.1, allow_growth=True))
sess = tf.Session(config=config)
keras.backend.set_session(sess)

# Load the model and warm it up with dummy inputs of different widths
p_model = load_model('release/OCR_Engine_phone_keras.aiinside')
p_model.predict(np.zeros((1, 64, 64, 1)))
p_model.predict(np.zeros((1, 512, 64, 1)))
p_model.predict(np.zeros((1, 1024, 64, 1)))

But we need Java to call the Python script first, which then runs the inference model. Why can't the GPU be used when we call the inference model from Java (GlassFish)?

Hi

We want to report this issue to our internal team.
Could you help us prepare a sample so we can reproduce it in our environment?

1. A Python script and model that launches GPU inference (see the sketch below).
2. A Java app that launches the above Python script.
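
For item 1, something self-contained along the lines of the sketch below would be ideal. It uses a small dummy Keras model instead of your real one so no proprietary files are needed; the script name and model shape are just placeholders.

# repro.py - minimal TF 1.x / Keras script that should produce visible GPU load
import numpy as np
import keras
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
keras.backend.set_session(tf.Session(config=config))

# Tiny dummy model standing in for the real OCR model
model = Sequential([
    Conv2D(16, 3, activation='relu', input_shape=(64, 64, 1)),
    Flatten(),
    Dense(10, activation='softmax'),
])

# Run enough predictions that the GPU load shows up in tegrastats
for _ in range(100):
    model.predict(np.zeros((8, 64, 64, 1), dtype='float32'))
print('done')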

Thanks.

This issue has been solved.