Deep learning tools are used extensively in vision applications across most industries, from facial recognition on mobile devices to Tesla's self-driving cars. However, choosing the right tools is paramount when working on these applications, as they require in-depth knowledge and specialization. After reading this article, you'll understand what deep learning tools are, why […]
How to use OpenCV with OpenVINO
Guest post by Aleksandr Voron. To keep the OpenVINO™ Toolkit focused on optimizing and deploying inference, we no longer include OpenCV and DL Streamer in our distribution packages. But not to worry! Both OpenCV and DL Streamer continue to work with OpenVINO, and in this blog post we will explain how. In the 2022.1 release of OpenVINO™, OpenCV became an optional […]
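As a minimal sketch of what this decoupling looks like in practice, assuming OpenCV is installed separately via pip and hypothetical local files model.xml and input.jpg, OpenCV can handle image I/O and preprocessing while OpenVINO runs the inference:

```python
# Assumes: pip install openvino opencv-python
# "model.xml" and "input.jpg" are hypothetical local files.
import cv2
import numpy as np
from openvino.runtime import Core

# OpenCV, installed separately, handles image loading and preprocessing.
image = cv2.imread("input.jpg")
blob = cv2.resize(image, (224, 224)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)

# OpenVINO runs the inference.
core = Core()
compiled = core.compile_model(core.read_model("model.xml"), "CPU")
result = compiled([blob])[compiled.output(0)]
```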
How to run YOLOv4 model inference using OpenVINO and OpenCV on ARM
The Deep Learning Inference Engine backend from the Intel OpenVINO toolkit is one of the supported OpenCV DNN backends. As mentioned in the previous post, support for ARM CPUs was recently added to the Inference Engine via a dedicated ARM CPU plugin. Let's review how the OpenCV DNN module can leverage the Inference Engine and this plugin […]
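As a rough sketch, assuming hypothetical yolov4.cfg and yolov4.weights files next to the script, switching the DNN module over to the Inference Engine backend takes just two calls; on ARM, inference is then dispatched to the ARM CPU plugin:

```python
import cv2

# Load YOLOv4 from Darknet files (hypothetical local paths).
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")

# Route inference through the OpenVINO Inference Engine backend.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

image = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
```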
Deep learning inference in OpenVINO on ARM
We are pleased to announce that the Deep Learning Inference Engine backend in the OpenCV DNN module can now run inference on ARM CPUs. Previously, the Inference Engine supported only Intel hardware: CPUs, iGPUs, FPGAs, and VPUs. Recently, a new ARM CPU plugin was published on GitHub. This plugin allows the Inference Engine to run DL […]
OpenVINO: Merging Pre- and Post-processing into the Model
We have already discussed several ways to convert your DL model into OpenVINO in previous blogs (PyTorch and TensorFlow). Let’s try something more advanced now.
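To give a flavor of what "merging" means here, below is a minimal sketch using OpenVINO's PrePostProcessor API; the model path and the ImageNet-style mean/scale constants are placeholders. The normalization and layout conversion get baked into the model graph itself, so the application can feed raw u8 NHWC images directly:

```python
from openvino.preprocess import PrePostProcessor
from openvino.runtime import Core, Layout, Type

core = Core()
model = core.read_model("model.xml")  # hypothetical IR model path

ppp = PrePostProcessor(model)
# What the application will actually send: u8 images in NHWC layout.
ppp.input().tensor().set_element_type(Type.u8).set_layout(Layout("NHWC"))
# What the model expects internally.
ppp.input().model().set_layout(Layout("NCHW"))
# Bake type conversion and normalization into the model graph.
ppp.input().preprocess() \
    .convert_element_type(Type.f32) \
    .mean([123.675, 116.28, 103.53]) \
    .scale([58.395, 57.12, 57.375])
model = ppp.build()

compiled = core.compile_model(model, "CPU")
```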
Running TensorFlow model inference in OpenVINO
How a trained TensorFlow model can be deployed and run with the OpenVINO Inference Engine
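As a high-level sketch (all paths here are assumptions), the usual flow converts the trained TensorFlow model to OpenVINO IR with the Model Optimizer, then runs it through the Python runtime:

```python
# Convert the TensorFlow SavedModel to IR first (run once in a shell):
#   mo --saved_model_dir saved_model/ --output_dir ir/
# mo ships with the openvino-dev package; file names below are hypothetical.
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("ir/saved_model.xml"), "CPU")

# Dummy input matching the model's declared input shape.
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
```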
OpenVINO model optimization
Are you looking for a fast way to run neural network inference on Intel platforms? Then the OpenVINO toolkit is exactly what you need. It provides a large number of optimizations that enable blazingly fast inference on CPUs, VPUs, integrated graphics, and FPGAs. In the previous post, we learned how to prepare and run DNN models […]
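As one illustration of the kinds of optimization on offer (file names are placeholders, and flags vary somewhat between OpenVINO versions), a model can be compressed to FP16 at conversion time and the runtime asked to tune itself for throughput:

```python
# Conversion-time optimization (run once in a shell; paths are hypothetical):
#   mo --input_model model.onnx --data_type FP16 --output_dir ir/
from openvino.runtime import Core

core = Core()
model = core.read_model("ir/model.xml")

# Runtime optimization: let OpenVINO choose stream/thread settings.
compiled = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})

# The bundled benchmark_app tool can measure the effect, e.g.:
#   benchmark_app -m ir/model.xml -d CPU -hint throughput
```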
How to Speed Up Deep Learning Inference Using OpenVINO Toolkit
Nowadays, many ground-breaking solutions based on neural networks are developed daily, and more people are adopting the technique to solve problems such as voice recognition in everyday life. Because of recent advances in computing and the growing trend of using neural networks in production environments, there is a significant focus on having such […]