The Renesas TVM is the extension package of the Apache TVM Deep Learning Compiler for Renesas DRP-AI accelerators, powered by EdgeCortix MERA™. The TVM is a software framework that translates neural networks to run on the Renesas MPUs. While the AI Translator (see section above) can translate ONNX models to the DRP-AI hardware, it is restricted to its set of supported AI operations, which limits the number of AI models it can handle. The TVM Translator expands the number of supported AI models for the RZV processors (currently RZV2L, RZV2MA, RZV2M). The TVM translates ONNX models by partitioning the network between the DRP-AI and the CPU: operations the DRP-AI supports run on the accelerator, and the remainder runs on the CPU.
This is the TVM software framework based on the Apache TVM. The TVM includes Python support libraries and sample scripts. The Python scripts follow the Apache TVM framework API found here.
Official TVM Translator GitHub repo
Installation Directions Readme
Currently Supported Renesas MPUs
Generated Output
BSP Supported Versions
The required VLP, DRP-AI Driver, and DRP-AI Translator versions are listed below. Board Support Package requirements can be found on the TVM repo page here.
The RZV2H AI SDK is the recommended BSP; it includes the VLP and the DRP-AI Driver.
[Table: supported VLP, DRP-AI Driver, and DRP-AI Translator versions per BSP]
A TVM application needs the following files.
This is the TVM runtime library, a pre-compiled library for the RZV2M and RZV2MA BSPs. It is included in the Renesas TVM repository.
These are the files generated using the tutorial scripts included in the repository.
Neural networks require preprocessing of the input images before an inference can be run. This can involve resizing, cropping, and format conversion to match the input source to the expected inference input. In addition, inference does not operate on raw RGB pixel values; instead, the image must be converted to floating point and normalized. These preprocessing operations increase the total inference time when done on the CPU. The Renesas TVM provides DRP-AI binaries to accelerate this process (a CPU-side sketch of these steps follows the note below).
The DRP-AI Preprocessing library must operate as follows
NOTE: This library is optional. It is provided to accelerate preprocessing.
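For reference, here is a minimal CPU-side sketch of the preprocessing steps described above, assuming OpenCV and NumPy. The 224x224 input size and the ImageNet mean/std values are illustrative assumptions, not Renesas-specified parameters:

    import cv2
    import numpy as np

    def preprocess(image_path, width=224, height=224):
        # Read and resize the image to the model's expected input size.
        img = cv2.imread(image_path)                 # BGR, uint8
        img = cv2.resize(img, (width, height))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # format conversion
        # Convert uint8 pixels to float and normalize per channel
        # (ImageNet mean/std, assumed for illustration).
        img = img.astype(np.float32) / 255.0
        mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
        std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
        img = (img - mean) / std
        return img.transpose(2, 0, 1)[np.newaxis]    # HWC -> NCHW batch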
These are the C++ wrapper files that utilize the preprocessor binaries.
These files contain the application's C++ API functions for loading and running the TVM runtime.
TVM Application Development
DRP-AI TVM is a tool that generates AI runtime executables from trained AI models for Renesas' RZ/V Series MPUs. Below is an overview of the Renesas DRP-AI TVM Tool.
A summary of the TVM API implementation can be found in DRP-AI TVM Application Development.
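The tutorial scripts build on the generic Apache TVM API; as orientation, below is a minimal sketch of that generic compile flow. The input name and shape, file names, and cross-compiler are illustrative assumptions, and the actual Renesas scripts add DRP-AI-specific partitioning and code generation on top of this:

    import onnx
    import tvm
    from tvm import relay

    # Load the trained ONNX model and convert it to TVM's Relay IR.
    onnx_model = onnx.load("resnet50.onnx")
    mod, params = relay.frontend.from_onnx(
        onnx_model, shape={"input": (1, 3, 224, 224)})

    # Cross-compile for the RZ/V's 64-bit Arm CPU; the Renesas scripts
    # additionally assign supported subgraphs to the DRP-AI.
    target = tvm.target.Target("llvm -mtriple=aarch64-linux-gnu")
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
    lib.export_library("deploy.so", cc="aarch64-linux-gnu-gcc")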
The link below shows the performance of several AI vision models converted using the TVM and tested on the RZV2MA. The TVM is capable of converting models exported in the standard ONNX format, PyTorch Image Models (TIMM), and PyTorch class models.
Official DRP-AI TVM Performance
RZ/V Web Performance Application
This application runs AI algorithms, translated using the Renesas TVM, on the RZ/V products. The image, AI output, and performance metrics are viewed on a web client (i.e., a PC browser). Source code running on the RZ/V and the web hosting software can be found in the link above. A list of AI models implemented for the demo can be found here.
This demo demonstrates how to write an application that implements a DRP-AI TVM generated model. The demo uses ResNet50.
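The demo itself is a C++ application; as a quick orientation to what loading and running a TVM-compiled model involves, here is a minimal sketch using the generic Apache TVM Python runtime (the module path, input name, and input shape are assumptions):

    import numpy as np
    import tvm
    from tvm.contrib import graph_executor

    # Load the compiled library produced by the compile step.
    lib = tvm.runtime.load_module("deploy.so")
    module = graph_executor.GraphModule(lib["default"](tvm.cpu(0)))

    # Feed one preprocessed image and run a single inference.
    module.set_input("input", np.zeros((1, 3, 224, 224), dtype="float32"))
    module.run()
    scores = module.get_output(0).numpy().flatten()
    print("Top-1 class index:", int(scores.argmax()))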
These are the models Renesas has tested with the Renesas DRP-AI TVM Tool.
DRP-AI is designed for feed-forward neural networks.
Loops or recursive layer types like RNN, LSTM and GRU cannot be mapped to DRP-AI.
Additionally, the quantization for DRP-AI3 requires that every node input has its own initializer (weight, bias, etc.).
These structures can be checked with a short Python script before compilation.
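A minimal sketch of such a check, assuming only the onnx package; it flags recurrent/loop operators and initializers shared by more than one node (file name assumed):

    from collections import Counter
    import onnx

    def analyze(path):
        graph = onnx.load(path).graph

        # 1) Loops and recurrent layers cannot be mapped to DRP-AI.
        unsupported = {"RNN", "LSTM", "GRU", "Loop", "Scan"}
        for node in graph.node:
            if node.op_type in unsupported:
                print(f"Unsupported op for DRP-AI: {node.op_type} ({node.name})")

        # 2) DRP-AI3 quantization: each initializer should feed one node only.
        init_names = {init.name for init in graph.initializer}
        uses = Counter(i for node in graph.node
                       for i in node.input if i in init_names)
        for name, count in uses.items():
            if count > 1:
                print(f"Initializer '{name}' is shared by {count} nodes")

    analyze("model.onnx")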
To analyze the subgraph processing of DRP-AI TVM, please refer to the profiling guide.
The ONNX files of the subgraphs assigned to DRP-AI are saved in the temp subdirectory after the AI model is compiled.
If the neural network is split into many pieces, please have a look at:
Reduce DRP-AI TVM network split of TensorFlow models with "onnx-simplifier"
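For reference, a minimal sketch of typical onnx-simplifier usage (file names are assumptions); the simplified model can then be recompiled to check whether the number of subgraphs drops:

    import onnx
    from onnxsim import simplify

    # Simplify the exported model to remove redundant structures that
    # can cause extra DRP-AI/CPU subgraph splits.
    model = onnx.load("model.onnx")
    model_simplified, check = simplify(model)
    assert check, "simplified model failed validation"
    onnx.save(model_simplified, "model_simplified.onnx")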
Please note that TVM uses the maximum number of threads that it can and therefore may utilize the CPU significantly without performance benefits.
If the neural network is mainly processed on DRP-AI, consider changing the environment variable TVM_NUM_THREADS (default = number of CPUs).
There are cases where the quad-core CPU load has been reduced from 320% to 23%.
Simply define the environment variable TVM_NUM_THREADS before the application is started.
For example:
    export TVM_NUM_THREADS=2
    ./<start_app>