
General Information

The following example shows how to evaluate AI models for the RZ/V2H. The compiled model will run on the DRP-AI (INT8). The inference output may not be accurate, because additional training is needed to further calibrate the AI model.

For this demo we use the Darknet YOLOv2 VOC model.


Supported Devices (INT8 DRP-AI IP):

  • RZ/V2H

Requirements:

  • RZ/V2H EVK
  • RZ/V2H AI SDK
  • DRP-AI TVM Translator (follow the RZ/V2H installation guide)
  • DRP-AI TVM Translator Docker image (follow the RZ/V2H Docker installation guide)

TVM Translation

Step 1) Start Docker Container

mkdir data
docker run -it --name drp-ai_tvm_v2h_container_${USER} -v $(pwd)/data:/drp-ai_tvm/data drp-ai_tvm_v2h_image_${USER}


Step 2) Preparation - set up the following to access the TVM compile scripts.

# Add the following paths to use the TVM scripts
export PYTHONPATH=/drp-ai_tvm/tvm/python:/drp-ai_tvm/tutorials/

# Create symbolic links to the following bash scripts in the working directory.
# These are required to run the TVM translator.
ln -s /drp-ai_tvm/tutorials/run_* .
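As a quick sanity check before running any tutorial script, the exported PYTHONPATH can be verified; this is a sketch using the container paths from the step above.

```shell
# Sketch: verify both TVM script directories are on PYTHONPATH
# (paths as used inside the drp-ai_tvm container).
export PYTHONPATH=/drp-ai_tvm/tvm/python:/drp-ai_tvm/tutorials/
case ":$PYTHONPATH:" in
  *":/drp-ai_tvm/tvm/python:"*) echo "PYTHONPATH OK" ;;
  *) echo "PYTHONPATH is missing the TVM python dir" >&2 ;;
esac
```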

Step 3) Download the Darknet YOLOv2 VOC model (cfg and weights)

wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov2-voc.cfg
wget https://pjreddie.com/media/files/yolov2-voc.weights
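A partial or failed download will only surface later during conversion, so a quick check catches it early. This is a sketch; the file names come from the wget commands above.

```shell
# Sketch: verify the cfg/weights downloads exist and are non-empty
check_downloads() {
    for f in yolov2-voc.cfg yolov2-voc.weights; do
        if [ ! -s "$f" ]; then
            echo "missing or empty: $f" >&2
            return 1
        fi
    done
}
```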


Step 4) Copy the yolo.ini configuration file used by the provided conversion scripts

cp /drp-ai_tvm/how-to/sample_app/docs/object_detection/yolo/yolo.ini .


Step 5) Convert the Darknet model to a PyTorch file

python3 ../scripts/convert_to_pytorch.py yolov2


Step 6) Convert the PyTorch file to an ONNX file

python3 ../scripts/convert_to_onnx.py yolov2


Step 7) Make a copy of the TVM script compile_onnx_model_quant.py.

cp /drp-ai_tvm/tutorials/compile_onnx_model_quant.py /drp-ai_tvm/tutorials/compile_onnx_model_yolov2.py


Step 8) Modify the pre-processing section of the python script compile_onnx_model_yolov2.py

Before
mean    = [0.485, 0.456, 0.406]
stdev   = [0.229, 0.224, 0.225]

After
config.shape_in     = [1, 480, 640, 3]
mean    = [0.0, 0.0, 0.0]
stdev   = [1.0, 1.0, 1.0]

sed script to make the changes
# Configure the TVM DRP-AI preprocessor camera/image input shape
# (the \(\s*\) group preserves the line's original indentation so the
# edited line stays valid Python)
sed -i 's/^\(\s*\)config.shape_in.*$/\1config.shape_in = [1, 480, 640, 3]/' /drp-ai_tvm/tutorials/compile_onnx_model_yolov2.py

# Change the default TVM DRP-AI preprocessor mean and stdev to the YOLOv2 values
sed -i 's/mean\s*=\s*\[[0-9., ]*\]/mean = [0.0, 0.0, 0.0]/' /drp-ai_tvm/tutorials/compile_onnx_model_yolov2.py
sed -i 's/stdev\s*=\s*\[[0-9., ]*\]/stdev = [1.0, 1.0, 1.0]/' /drp-ai_tvm/tutorials/compile_onnx_model_yolov2.py
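Before editing the real script, the same substitutions can be dry-run against a scratch file; this is a sketch in which the relevant lines of compile_onnx_model_quant.py are mocked, not the actual file content.

```shell
# Sketch: rehearse the sed substitutions on a mocked copy of the
# pre-processing lines (real file lives in /drp-ai_tvm/tutorials).
cat > /tmp/compile_mock.py <<'EOF'
    config.shape_in = [1, 224, 224, 3]
    mean    = [0.485, 0.456, 0.406]
    stdev   = [0.229, 0.224, 0.225]
EOF

# The \(\s*\) group preserves the original indentation
sed -i 's/^\(\s*\)config.shape_in.*$/\1config.shape_in = [1, 480, 640, 3]/' /tmp/compile_mock.py
sed -i 's/mean\s*=\s*\[[0-9., ]*\]/mean = [0.0, 0.0, 0.0]/' /tmp/compile_mock.py
sed -i 's/stdev\s*=\s*\[[0-9., ]*\]/stdev = [1.0, 1.0, 1.0]/' /tmp/compile_mock.py

# Show the edited result
cat /tmp/compile_mock.py
```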



Step 9) Using the script modified in steps 7 and 8, translate the ONNX model to a DRP-AI TVM model.

python3 compile_onnx_model_yolov2.py \
./d-yolov2.onnx \
-t $SDK \
-d $TRANSLATOR \
-c $QUANTIZER \
-i input1 \
-s 1,3,416,416 \
-o yolov2_onnx \
-v 100

Minimum required arguments:

  • ./d-yolov2.onnx - the input ONNX file
  • -t - path to the Yocto SDK
  • -d - path to the DRP-AI Translator tool
  • -c - path to the DRP-AI Quantization tool
  • -i - name of the model's input node
  • -s - inference input shape
  • -o - output directory
  • -v - version of TVM
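The compile command depends on several environment variables, so a small guard can fail fast when one is unset instead of erroring mid-translation. This is a sketch; SDK, TRANSLATOR and QUANTIZER are the variable names used in the command above.

```shell
# Sketch: fail fast if any tool path required by the compile step is unset.
check_compile_env() {
    rc=0
    for v in SDK TRANSLATOR QUANTIZER; do
        if [ -z "$(printenv "$v")" ]; then
            echo "error: $v is not set" >&2
            rc=1
        fi
    done
    return $rc
}
```

Run `check_compile_env && python3 compile_onnx_model_yolov2.py ...` so the lengthy translation only starts once the environment is complete.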

