Overview

DRP-AI TVM on RZ/V series

The Renesas DRP-AI TVM is an extension package of the Apache TVM deep learning compiler for Renesas DRP-AI accelerators, powered by EdgeCortix MERA™. The TVM is a software framework that translates neural networks to run on Renesas MPUs. While the AI Translator (see section above) can translate ONNX models to the DRP-AI hardware, it is restricted to the AI operations it supports, which limits the number of supported AI models. The TVM translator expands the number of supported AI models for the RZ/V processors (currently RZ/V2L, RZ/V2M, RZ/V2MA, and RZ/V2H). The TVM translates ONNX models by partitioning the generated network into subgraphs that are delegated between the DRP-AI and the CPU.

This TVM software framework is based on the Apache TVM. It includes Python support libraries and sample scripts. The Python scripts follow the Apache TVM framework API found here.

  • The TVM provides the following:
    • A wider range of supported AI networks that can run on the DRP-AI and the CPU.
    • Translation of AI models from ONNX files (see the compile sketch after this list).
    • Translation of AI models from PyTorch PT saved models. (For other supported AI software frameworks, see Apache TVM.)
    • Translation of models to run on the CPU only. This allows models to run on RZ/G.
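
A minimal sketch of the generic Apache TVM Relay compile flow that the DRP-AI TVM sample scripts build on; the file name, input name, and input shape are placeholders, and the actual DRP-AI compile scripts in the repo add the DRP-AI backend target on top of this flow:

import onnx
import tvm
from tvm import relay

# Load the trained model exported in ONNX format (placeholder file name).
onnx_model = onnx.load("model.onnx")

# Map the model's input name to its shape (assumed NCHW input here).
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Build for a generic 64-bit Arm CPU target; the DRP-AI compile scripts
# pair a CPU target like this with the DRP-AI backend.
target = "llvm -mtriple=aarch64-linux-gnu"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Export the deployable runtime module with a cross compiler.
lib.export_library("deploy.so", cc="aarch64-linux-gnu-gcc")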


Official TVM Translator GitHub repo

Installation Directions Readme

Currently Supported Renesas MPUs

  • Renesas RZ/V2L Evaluation Board Kit
  • Renesas RZ/V2M Evaluation Board Kit
  • Renesas RZ/V2MA Evaluation Board Kit
  • Renesas RZ/V2H Evaluation Board Kit

Generated Output

  • DRP-AI + CPU
  • CPU only


BSP Supported Versions

The required VLP, DRP-AI Driver, and DRP-AI Translator versions are listed below. Board Support Package requirements can be found on the TVM repo page here.

The RZ/V2H AI SDK is the recommended BSP; it includes the VLP and the DRP-AI Driver.


| TVM    | VLP (RZ/V2L)    | VLP (RZ/V2M)    | VLP (RZ/V2MA)   | AI SDK (RZ/V2H) | DRP-AI Driver (RZ/V2L, M, MA) | DRP-AI Translator (RZ/V2L, M, MA) | DRP-AI Translator i8 (RZ/V2H) |
|--------|-----------------|-----------------|-----------------|-----------------|-------------------------------|-----------------------------------|-------------------------------|
| v2.3.1 | v3.0.6          | v3.0.4 or later | v3.0.4 or later | v4.00 or later  | v7.40 or later                | v1.84                             | v1.02 or later                |
| v2.2.1 | v3.0.4 or later | v3.0.4 or later | v3.0.4 or later | v3.0.0          | v7.40 or later                | v1.84                             | v1.01                         |
| v2.2.0 | v3.0.4 or later | v3.0.4 or later | v3.0.4 or later | v3.0.0          | v7.40 or later                | v1.83                             | v1.01                         |
| v2.1.0 | v3.0.4 or later | v3.0.4          | v3.0.4          | v3.0.0          | v7.40 or later                | v1.82                             | v1.01                         |
| v1.1.1 | v3.0.4          | v3.0.4          | v3.0.4          | na              | v7.40                         | v1.82                             | na                            |
| v1.1.0 | v3.0.2          | v1.3.0 update1  | v1.1.0 update1  | na              | v7.30 (V2L, M) / v7.31 (V2MA) | v1.82                             | na                            |
| v1.04  | v3.0.2          | v1.3.0          | v1.1.0          | na              | v7.30                         | v1.81                             | na                            |
| v1.03  | NS              | NS              | v1.0.0          | na              | v7.20                         | v1.80                             | na                            |

Getting Started

DRP-AI TVM is a tool that generates AI runtime executables from trained AI models for Renesas' RZ/V series MPUs. Below is an overview of the Renesas DRP-AI TVM tool.

(Diagram: overview of the DRP-AI TVM tool flow)

Renesas DRP-AI TVM Guides

  • RZ/V2H Rapid Evaluation (Yolo) Wiki
  • RZ/V2 Evaluation (Yolo) Wiki
  • RZ/V2 YoloV5

TVM Application Development

A summary of the TVM API implementation can be found in DRP-AI TVM Application Development.
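
As a rough illustration of the runtime side, the sketch below runs a compiled module with the standard Apache TVM graph executor in Python. On-target DRP-AI applications are typically written against the C++ runtime provided in the repo; the file name and input name here are placeholders:

import numpy as np
import tvm
from tvm.contrib import graph_executor

# Load the compiled module produced by the compile step (placeholder name).
lib = tvm.runtime.load_module("deploy.so")
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))

# Feed a preprocessed input tensor, run inference, and read the result.
module.set_input("input", np.zeros((1, 3, 224, 224), dtype="float32"))
module.run()
out = module.get_output(0).numpy()
print(out.shape)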

AI Model Performance

The link below shows the performance of several AI vision models converted using the TVM and tested on the RZ/V2MA. The TVM can convert models from exported trained models in the standard ONNX format, PyTorch Image Models (TIMM), and PyTorch class models.

Official DRP-AI TVM Performance
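
For reference, a PyTorch or TIMM model is typically exported to ONNX before translation. A minimal sketch, assuming a pretrained TIMM ResNet50 and placeholder file name and shapes:

import timm
import torch

# Create a pretrained TIMM model and switch it to inference mode.
model = timm.create_model("resnet50", pretrained=True)
model.eval()

# Export to ONNX so the TVM can translate it (placeholder file name).
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet50.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=12)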

DRP-AI TVM Sample Applications

RZ/V Web Performance Application

This application runs AI algorithms, translated using the Renesas TVM, on the RZ/V products. The image, AI output, and performance are viewed on a web client (i.e., a PC browser). The source code running on the RZ/V and the web hosting software can be found in the link above. A list of AI models implemented for the demo can be found here.


RZ/V TVM Application Example

This demo shows how to write an application that runs a DRP-AI TVM generated model. The demo uses ResNet50.

Tested AI Models

These are the models Renesas has tested with the Renesas DRP-AI TVM tool.

Renesas TVM Models

DRP-AI TVM subgraph profiling ("DRP-AI" or "CPU" processing)

To analyze the subgraph processing of DRP-AI TVM, please refer to the profiling guide.

DRP-AI TVM profiling

The ONNX files of the subgraphs assigned to the DRP-AI are saved in the temp subdirectory after the AI model is compiled.
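
A quick way to see how the model was partitioned is to inspect those files. A minimal sketch, assuming the temp subdirectory described above (the exact layout may differ between TVM versions):

import glob
import onnx

# Print the node count and operator set of each DRP-AI subgraph.
for path in sorted(glob.glob("temp/*.onnx")):
    graph = onnx.load(path).graph
    ops = sorted({node.op_type for node in graph.node})
    print(f"{path}: {len(graph.node)} nodes, ops = {ops}")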


If the neural network is split into many pieces, please have a look at:

Reduce DRP-AI TVM network split of TensorFlow models with "onnx-simplifier" 
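
A minimal sketch of applying onnx-simplifier from Python before compiling; the file names are placeholders:

import onnx
from onnxsim import simplify

# Simplify the exported model, which can reduce the number of
# DRP-AI/CPU subgraph splits (placeholder file names).
model = onnx.load("model.onnx")
model_simp, ok = simplify(model)
assert ok, "onnx-simplifier validation check failed"
onnx.save(model_simp, "model_simplified.onnx")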


Please note that TVM uses the maximum number of threads it can and therefore may utilize the CPU significantly without any performance benefit.
If the neural network is mainly processed on the DRP-AI, please evaluate changing the environment variable TVM_NUM_THREADS (default = number of CPUs).
There are cases where the quad-core CPU load has been reduced from 320% to 23%.

Simply define the environment variable TVM_NUM_THREADS before the application is started.

For example:

export TVM_NUM_THREADS=2
./<start_app>

