OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Android 10
TensorFlow installed from (source or binary): source
TensorFlow version (or github SHA if from source): 2.9.2
GPU is not supported on Qualcomm Adreno 702 GPU device
On a Qualcomm Adreno 702 device, the TensorFlow Lite example Android app does not work and shows the toast "GPU is not supported on this device.".
Looking into the TensorFlow source code, we noticed that the path tensorflow_src/tensorflow/lite/experimental/acceleration/compatibility provides a way to add a GPU device to the compatibility list so that the GPU can be used.
1. We followed tensorflow_src/tensorflow/lite/experimental/acceleration/compatibility/README.md, but the GPU still does not work and the Android app crashes. We do not know how to resolve this.
Thank you very much for your reply.
Hi @megleo,
Hexagon is a specialized processor designed by Qualcomm for mobile devices. TensorFlow Lite supports the Hexagon delegate, which can be used to accelerate inference on devices that do not support GPUs or have limited GPU capabilities. Please refer to this document on the Hexagon Delegate. A minimal usage sketch follows below. Thank you!
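This is only a sketch, assuming the tensorflow-lite-hexagon dependency and the Hexagon shared libraries are bundled with the app; the model buffer and thread count are placeholders:

```kotlin
import android.content.Context
import org.tensorflow.lite.HexagonDelegate
import org.tensorflow.lite.Interpreter
import java.nio.MappedByteBuffer

// Sketch: try to attach the Hexagon delegate, and fall back to CPU threads
// if the Hexagon DSP is not present or not supported on this device.
fun createInterpreter(context: Context, modelBuffer: MappedByteBuffer): Interpreter {
    val options = Interpreter.Options()
    try {
        // HexagonDelegate throws UnsupportedOperationException when the
        // DSP is unavailable. Keep a reference and close() it in real code.
        options.addDelegate(HexagonDelegate(context))
    } catch (e: UnsupportedOperationException) {
        options.setNumThreads(4) // placeholder thread count
    }
    return Interpreter(modelBuffer, options)
}
```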
@synandi
Thanks for your reply.
Our embedded device has no DSP, but it does have an Adreno GPU, so we would like to use the GPU on our embedded devices. Using the Google example Android app for image classification, it shows "GPU is not supported". We wonder how to enable the GPU; we tried the following method, but it did not work.
In the README.md, it seems to describe how to add gpu_compatibility entries. We followed it, but the resulting gpu_compatibility.bin is about 117 KB, which is much larger than the original one, so the conversion may have failed.
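For context, the "GPU is not supported" message in the example app appears to come from the GPU delegate compatibility check. A minimal sketch, assuming the standard tensorflow-lite-gpu CompatibilityList API, of how that check gates GPU use:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate

// Sketch: consult the GPU delegate allowlist that ships with
// tensorflow-lite-gpu before creating a GpuDelegate.
fun buildInterpreterOptions(): Interpreter.Options {
    val options = Interpreter.Options()
    val compatList = CompatibilityList()
    if (compatList.isDelegateSupportedOnThisDevice) {
        // Device (and its GPU) is on the allowlist, so GPU inference is allowed.
        options.addDelegate(GpuDelegate(compatList.bestOptionsForThisDevice))
    } else {
        // Device is not on the allowlist; the example app reports
        // "GPU is not supported on this device" and runs on CPU.
        options.setNumThreads(4) // placeholder thread count
    }
    return options
}
```

If the device is missing from the allowlist bundled with tensorflow-lite-gpu, this check returns false even if the GPU could otherwise run the delegate, which would match the behaviour described above.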
Thanks
Hi @megleo,
Can you check the following lines in build.gradle (Module: app) after the gradle file is built?
```groovy
// Tensorflow lite dependencies
implementation 'org.tensorflow:tensorflow-lite-task-vision:0.4.0'
// Import the GPU delegate plugin Library for GPU inference
implementation 'org.tensorflow:tensorflow-lite-gpu:2.9.0'
implementation 'org.tensorflow:tensorflow-lite-gpu-delegate-plugin:0.4.0'
```
If not, you can add the project dependencies and enable GPU acceleration if available, as given here.
```groovy
dependencies {
    // Append the desired TensorFlow Lite release version to these artifacts.
    implementation 'org.tensorflow:tensorflow-lite'
    implementation 'org.tensorflow:tensorflow-lite-gpu'
}
```
If the GPU is not supported, we can set the number of threads used to run the model, as in the sketch below.
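A minimal sketch, assuming the Task library artifacts listed above and a placeholder model asset name, of enabling the GPU when available and otherwise falling back to a fixed thread count:

```kotlin
import android.content.Context
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.task.core.BaseOptions
import org.tensorflow.lite.task.vision.classifier.ImageClassifier

// Sketch: prefer the GPU delegate when the device is on the GPU allowlist,
// otherwise run on CPU with a fixed number of threads.
fun createClassifier(context: Context): ImageClassifier {
    val baseOptions = BaseOptions.builder()
    if (CompatibilityList().isDelegateSupportedOnThisDevice) {
        baseOptions.useGpu()
    } else {
        baseOptions.setNumThreads(4) // placeholder thread count
    }
    val options = ImageClassifier.ImageClassifierOptions.builder()
        .setBaseOptions(baseOptions.build())
        .setMaxResults(3)
        .build()
    // "model.tflite" is a placeholder asset name.
    return ImageClassifier.createFromFileAndOptions(context, "model.tflite", options)
}
```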
Thanks.