I am using the Neural Network Toolbox for deep learning and I keep hitting the same problem when running classification. My DNN model has already been trained, but I receive the same error during classification, both on an HPC cluster with an NVIDIA GeForce 1080 and on my own machine with a GeForce 1080 Ti. The error is:

Error using nnet.internal.cnngpu.convolveForward2D
Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists, reset the GPU by calling 'gpuDevice(1)'.
Error in nnet.internal.cnn.layer.util.Convolution2DGPUStrategy/forward (line 14)
Error in nnet.internal.cnn.layer.Convolution2D/doForward (line 332)
Error in nnet.internal.cnn.layer.Convolution2D/forwardNormal (line 278)
Error in nnet.internal.cnn.layer.Convolution2D/predict (line 124)
Error in nnet.internal.cnn.DAGNetwork/forwardPropagationWithPredict (line 236)
Error in nnet.internal.cnn.DAGNetwork/predict (line 317)
Error in DAGNetwork/predict (line 426)
Error in DAGNetwork/classify (line 490)
Error in Guisti_test_script (line 56)
parallel:gpu:array:OOM

Has anyone faced the same problem before?
PS: my test data contains 15000 images.
Prashant Kumar answered on 2025-11-20:
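Before changing anything, it is worth checking how much memory the GPU actually has free, as the error message itself suggests. A minimal sketch (the variable name `g` is my own):

```matlab
% Inspect the currently selected GPU (requires Parallel Computing Toolbox)
g = gpuDevice;   % query the active device
fprintf('GPU: %s\n', g.Name);
fprintf('Available: %.2f GB of %.2f GB\n', ...
    g.AvailableMemory/1e9, g.TotalMemory/1e9);

% If memory looks fragmented or exhausted, reset the device.
% Note: this clears all gpuArray data, so save anything you need first.
reset(g);
```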
First, try reducing the memory footprint by processing one image per batch:

residualImage = activations(net, Iy, 41, 'MiniBatchSize', 1);

1) If that still runs out of memory, you can run the layer on the CPU instead:

residualImage = activations(net, Iy, 41, 'ExecutionEnvironment', 'cpu');
I think this problem is caused by the high resolution of some of the test images, e.g. the second image, "car2.jpg", which is 3504 x 2336.

2) A better solution is to use the GPU for low-resolution images and the CPU for high-resolution images, by replacing "residualImage = activations(net, Iy, 41)" with:
sx = size(Iy);
if sx(1) > 1000 || sx(2) > 1000  % try lower thresholds if it still fails, e.g. 500
    % Large image: run on the CPU to avoid GPU out-of-memory
    residualImage = activations(net, Iy, 41, 'ExecutionEnvironment', 'cpu');
else
    % Small image: keep the faster GPU path
    residualImage = activations(net, Iy, 41);
end
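Instead of hard-coding a size threshold, you could also just attempt the GPU path and fall back to the CPU when the out-of-memory error is thrown. A sketch, assuming the error identifier matches the `parallel:gpu:array:OOM` shown in the trace above (it may differ depending on release, so check `ME.identifier` on your machine):

```matlab
try
    % Fast path: run layer 41 on the GPU, one image per batch
    residualImage = activations(net, Iy, 41, 'MiniBatchSize', 1);
catch ME
    if strcmp(ME.identifier, 'parallel:gpu:array:OOM')
        % GPU ran out of memory for this image: retry on the CPU
        residualImage = activations(net, Iy, 41, ...
            'ExecutionEnvironment', 'cpu');
    else
        rethrow(ME);  % unrelated error: surface it
    end
end
```

This keeps the GPU speedup for the bulk of the 15000 images and only pays the CPU cost on the few oversized ones.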