We are happy to announce that the Embedded Vision Alliance selected OpenVINO™ toolkit as the 2019 Developer Tool of the Year!
OpenVINO™ toolkit core components were updated to the 2019 R1.1 baseline:
- The Deep Learning Deployment Toolkit changes:
- VPU-Myriad plugin source code is now available in the repository! This plugin is aligned with Intel® Movidius™ Myriad™ X Development Kit R7 release.
- Inference Engine build instructions updated for Linux*, Raspbian* Stretch OS, Windows* and macOS* systems. Added support for Microsoft Visual Studio* 2019.
- Parallelism schemes switched from OpenMP* to Threading Building Blocks (TBB) to increase performance in multi-network scenarios. Most customer deployment pipelines execute several networks in combination, and TBB delivers the best performance in such use cases (testing showed up to a 3.5x improvement). Threading Building Blocks (TBB) is now used by default. To build the Inference Engine with OpenMP* threading instead, set the -DTHREADING=OMP option.
- Added support for many new operations in the ONNX*, TensorFlow* and MXNet* frameworks. Topologies like Tiny YOLO v3, full DeepLab v3, and bi-directional LSTMs can now be run using the Deep Learning Deployment Toolkit for optimized inference.
- Try it now: https://github.com/opencv/dldt
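As a minimal sketch of selecting the threading backend at build time (directory layout and make parallelism are illustrative; see the repository's build instructions for the authoritative steps):

```shell
# Clone the Deep Learning Deployment Toolkit and build the Inference Engine.
git clone https://github.com/opencv/dldt.git
cd dldt/inference-engine
mkdir build && cd build

# Default build uses Threading Building Blocks (TBB).
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j8

# To switch to OpenMP* threading instead, pass the THREADING option:
cmake -DCMAKE_BUILD_TYPE=Release -DTHREADING=OMP ..
make -j8
```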
- Open Model Zoo changes:
- Added more than 10 new pre-trained models, including action recognition encoder/decoder, text recognition and instance segmentation networks, to cover newer use cases. A few models with binary weights were introduced to further boost performance for these networks. The full list of models and demos is available here.
- Model Downloader configuration file is extended to support several new public models. Run the script with the --print_all option to see the available topologies.
- Try it now: https://github.com/opencv/open_model_zoo
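A quick sketch of the Model Downloader workflow, assuming a local clone of the open_model_zoo repository (the script's location and the model name shown are illustrative and may vary by release):

```shell
# List every topology the Model Downloader knows about.
python3 downloader.py --print_all

# Download a single model by name into a local directory
# (model name is an example; pick one from the --print_all output).
python3 downloader.py --name face-detection-retail-0004 -o ./models
```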
Note: The Intel® Distribution of OpenVINO™ toolkit is still available as a free commercial product. Documentation for the latest releases is available here: https://docs.openvinotoolkit.org/
We are pleased to highlight OpenVINO™ toolkit extensions – standalone projects that build a software ecosystem around OpenVINO™ toolkit.
- GStreamer* Video Analytics plugins bring Deep Learning inference capabilities to the open-source GStreamer* framework and help developers build highly efficient and scalable video analytics applications. The solution:
- Extracts insights from video stream(s) using object detection, classification and recognition CNN models and sends this metadata to an application or a cloud service for further processing
- Leverages hardware acceleration for media and inference operations and heterogeneous execution across Intel CPU and GPU
- Provides a flexible mechanism to quickly construct video analytics pipelines from highly optimized building blocks
- Deployable on both edge devices and cloud infrastructure, including deployment in Docker containers
- Can be used for various applications such as video surveillance and security, smart city, retail analytics, ad insertion and others
- Try it now: https://github.com/opencv/gst-video-analytics
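As an illustrative sketch of how the plugins compose with standard GStreamer* pipeline syntax (the model files and element properties here are assumptions; consult the repository's samples for exact usage):

```shell
# Decode a video file, run detection and classification on each frame,
# overlay the results, and render to screen.
gst-launch-1.0 \
  filesrc location=input.mp4 ! decodebin ! videoconvert ! \
  gvadetect model=face-detection.xml device=CPU ! \
  gvaclassify model=emotions-recognition.xml device=CPU ! \
  gvawatermark ! videoconvert ! autovideosink
```

Swapping `device=CPU` for `device=GPU` on individual elements is how the heterogeneous execution mentioned above is expressed per pipeline stage.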
- OpenVINO™ Model Server is a flexible, high-performance inference-serving component for artificial intelligence models. It makes it easy to deploy new algorithms and AI experiments while keeping the same server architecture and APIs as TensorFlow Serving. It provides out-of-the-box integration with models supported by OpenVINO™ toolkit and allows frameworks such as AWS SageMaker to serve AI models with OpenVINO™ toolkit. OpenVINO™ Model Server supports various kinds of model storage: local filesystem, GCS, S3 and MinIO. It can be easily deployed in a Docker container and scaled in Kubernetes or in Kubeflow pipelines.
- Try it now: https://github.com/IntelAI/OpenVINO-model-server
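A hedged sketch of a single-model Docker deployment (the image name, paths, and start command follow the general pattern from the project's README; the model path and name are placeholders — verify all of them against the repository before use):

```shell
# Start the model server in a container, serving one model over gRPC on port 9001.
# /path/to/models on the host should contain the model's IR files (.xml/.bin)
# in numbered version subdirectories, e.g. /path/to/models/my_model/1/.
docker run -d \
  -v /path/to/models:/opt/ml \
  -p 9001:9001 \
  intelaipg/openvino-model-server \
  /ie-serving-py/start_server.sh ie_serving model \
  --model_path /opt/ml/my_model --model_name my_model --port 9001
```

Because the server exposes the TensorFlow Serving gRPC API, existing TensorFlow Serving clients can send Predict requests to it without modification.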
- OpenVINO™ Training Extensions provide a convenient environment to train Deep Learning models and convert them using OpenVINO™ Toolkit for optimized inference.