Latency estimation tool and investigation of neural networks inference on mobile GPU

Evgeny Ponomarev, Sergey Matveev, Ivan Oseledets, Valery Glukhov

Research output: Contribution to journal › Article › peer-review



Many deep learning applications are intended to run on mobile devices, and for many of them both accuracy and inference time matter. While the number of FLOPs is commonly used as a proxy for neural network latency, it is not always a reliable one. To obtain a better approximation of latency on a mobile CPU, the research community uses lookup tables of all possible layers, which requires only a small number of experiments. Unfortunately, this method does not transfer to a mobile GPU in a straightforward way and shows low precision there. In this work, we treat latency approximation on a mobile GPU as a data- and hardware-specific problem. Our main goal is to construct a convenient Latency Estimation Tool for Investigation (LETI) of neural network inference and to build robust and accurate latency prediction models for each specific task. To achieve this goal, we develop tools that provide a convenient way to conduct massive experiments on different target devices, with a focus on mobile GPUs. After collecting the dataset, one can train a regression model on the experimental data and use it for future latency prediction and analysis. We experimentally demonstrate the applicability of this approach on a subset of the popular NAS-Benchmark 101 dataset for two different mobile GPUs.
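The two estimation strategies the abstract contrasts can be sketched as follows. This is a minimal illustration, not the paper's implementation: all layer names, timings, and the choice of a 1-D least-squares fit are hypothetical placeholders for the lookup-table and regression approaches described above.

```python
# 1) Lookup-table approach (works reasonably on a mobile CPU):
#    pre-measure every possible layer once, then estimate a network's
#    latency as the sum of its table entries.
LAYER_LATENCY_MS = {
    "conv3x3_32": 1.8,   # hypothetical per-layer measurements
    "conv1x1_64": 0.6,
    "maxpool_2x2": 0.2,
}

def lookup_estimate(layers):
    """Sum pre-measured per-layer latencies (the CPU-style proxy)."""
    return sum(LAYER_LATENCY_MS[name] for name in layers)

# 2) Regression approach (the direction LETI takes for mobile GPUs):
#    collect (feature, measured latency) pairs on the target device and
#    fit a predictive model. Here: ordinary 1-D least squares with
#    FLOPs as the single feature, purely for illustration.
def fit_least_squares(flops, latency_ms):
    """Return (slope, intercept) of the best-fit line."""
    n = len(flops)
    mx = sum(flops) / n
    my = sum(latency_ms) / n
    sxx = sum((x - mx) ** 2 for x in flops)
    sxy = sum((x - mx) * (y - my) for x, y in zip(flops, latency_ms))
    slope = sxy / sxx
    return slope, my - slope * mx

def predict(model, flops):
    """Predict latency (ms) for a given FLOP count."""
    slope, intercept = model
    return slope * flops + intercept
```

In practice, the abstract argues that a model trained on per-device measurements (the second approach) tracks mobile-GPU latency better than either raw FLOPs or a layer-wise lookup table, since GPU latency is data- and hardware-specific.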

Original language: English
Article number: 104
Issue number: 8
Publication status: Published - Aug 2021


  • Inference
  • Latency
  • Mobile GPU
  • Neural architecture search


