#AI Challenge Camp Stop 2# Converting an ONNX Model to RKNN

This post was last edited by jianghelong on 2024-5-10 16:30.

1. Environment Setup

(1) Install Anaconda (skip if already installed)

./Anaconda3-2023.07-2-Linux-x86_64.sh

Press Enter, scroll through the license text, and type yes followed by Enter to start the installation. When it finishes, type yes again to initialize anaconda3.

(2) Create a virtual environment

# Reboot, or run `source ~/.bashrc`, to enter the anaconda environment
source ~/.bashrc
# Create an environment; here it is named toolkit2_1.6 with Python 3.8
conda create -n toolkit2_1.6 python=3.8
# Activate the environment
conda activate toolkit2_1.6

(3) Install the pinned dependencies and the whl file

# Clone the toolkit2 source
git clone https://github.com/airockchip/rknn-toolkit2
# Configure the pip mirror
pip3 config set global.index-url https://mirror.baidu.com/pypi/simple
# Install the pinned dependencies (pick the requirements file matching your Python version)
cd rknn-toolkit2
pip3 install -r packages/requirements_cp38-2.0.0b0.txt
# Install the whl file matching your Python and rknn_toolkit2 versions
pip3 install packages/rknn_toolkit2-2.0.0b0+9bab5682-cp38-cp38-linux_x86_64.whl

2. Conversion Code

One image from the MNIST dataset was selected here for quantization calibration.

from rknn.api import RKNN


if __name__ == '__main__':
    # Create the RKNN object
    rknn = RKNN(verbose=True)

    print('--> Config model')
    rknn.config(mean_values=[[0]], std_values=[[1]], target_platform='rv1106')
    print('done')

    # Load the onnx model
    print('--> Loading model')
    ret = rknn.load_onnx(model="new.onnx")
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build the model (quantized, calibrated with the samples listed in dataset.txt)
    print('--> Building model')
    ret = rknn.build(do_quantization=True, dataset="dataset.txt")
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export the rknn model
    print('--> Export rknn model')
    ret = rknn.export_rknn("best.rknn")
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    rknn.release()
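The `dataset.txt` passed to `rknn.build` is just a text file listing one calibration sample path per line. A minimal sketch of producing it (the file names here are assumptions, and the sample is random data standing in for a real MNIST digit; rknn-toolkit2 also accepts `.npy` arrays alongside common image formats):

```python
import numpy as np

# Fabricate one 28x28 single-channel sample as a stand-in for an MNIST digit;
# in practice, save a real image (or array) from the dataset instead.
sample = np.random.randint(0, 256, size=(28, 28, 1), dtype=np.uint8)
np.save("sample_0.npy", sample)

# dataset.txt lists one calibration file path per line.
with open("dataset.txt", "w") as f:
    f.write("sample_0.npy\n")
```

More calibration samples (one path per line) generally give the quantizer a better picture of the activation ranges than a single image.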

The output log is as follows:

I rknn-toolkit2 version: 2.0.0b0+9bab5682
--> Config model
done
--> Loading model
I Loading : 100%|██████████████████████████████████████████████████| 7/7 [00:00<00:00, 29448.47it/s]
done
--> Building model
D base_optimize ...
D base_optimize done.
D 
D fold_constant ...
D fold_constant done.
D 
D correct_ops ...
D correct_ops done.
D 
D fuse_ops ...
D fuse_ops results:
D     replace_reshape_gemm_by_conv: remove node = ['/Reshape', '/linear_layer/Gemm'], add node = ['/linear_layer/Gemm_2conv', '/linear_layer/Gemm_2conv_reshape']
D     fold_constant ...
D     fold_constant done.
D fuse_ops done.
D 
D sparse_weight ...
D sparse_weight done.
D 
I GraphPreparing : 100%|████████████████████████████████████████████| 8/8 [00:00<00:00, 6011.18it/s]
I Quantizating : 100%|██████████████████████████████████████████████| 8/8 [00:00<00:00, 1291.55it/s]
D 
D quant_optimizer ...
D quant_optimizer results:
D     adjust_relu: ['/conv_layer2/conv_layer2.1/Relu', '/conv_layer1/conv_layer1.1/Relu']
D quant_optimizer done.
D 
W build: The default input dtype of 'input.1' is changed from 'float32' to 'int8' in rknn model for performance!
                       Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of '21' is changed from 'float32' to 'int8' in rknn model for performance!
                      Please take care of this change when deploy rknn model with Runtime API!
I rknn building ...
I RKNN: [16:27:56.472] compress = 0, conv_eltwise_activation_fuse = 1, global_fuse = 1, multi-core-model-mode = 7, output_optimize = 1, layout_match = 1, enable_argb_group = 0
I RKNN: librknnc version: 2.0.0b0 (35a6907d79@2024-03-24T02:34:11)
D RKNN: [16:27:56.473] RKNN is invoked
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNExtractCustomOpAttrs
D RKNN: [16:27:56.475] <<<<<<<< end: rknn::RKNNExtractCustomOpAttrs
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNSetOpTargetPass
D RKNN: [16:27:56.475] <<<<<<<< end: rknn::RKNNSetOpTargetPass
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNBindNorm
D RKNN: [16:27:56.475] <<<<<<<< end: rknn::RKNNBindNorm
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNAddFirstConv
D RKNN: [16:27:56.475] <<<<<<<< end: rknn::RKNNAddFirstConv
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNEliminateQATDataConvert
D RKNN: [16:27:56.475] <<<<<<<< end: rknn::RKNNEliminateQATDataConvert
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNTileGroupConv
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNTileGroupConv
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNTileFcBatchFuse
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNTileFcBatchFuse
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNAddConvBias
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNAddConvBias
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNTileChannel
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNTileChannel
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNPerChannelPrep
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNPerChannelPrep
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNBnQuant
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNBnQuant
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNFuseOptimizerPass
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNFuseOptimizerPass
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNTurnAutoPad
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNTurnAutoPad
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNInitRNNConst
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNInitRNNConst
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNInitCastConst
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNInitCastConst
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNMultiSurfacePass
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNMultiSurfacePass
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNReplaceConstantTensorPass
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNReplaceConstantTensorPass
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNSubgraphManager
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNSubgraphManager
D RKNN: [16:27:56.476] >>>>>> start: OpEmit
D RKNN: [16:27:56.476] <<<<<<<< end: OpEmit
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNLayoutMatchPass
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[input.1]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[/conv_layer1/conv_layer1.1/Relu_output_0]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[/conv_layer1/conv_layer1.2/MaxPool_output_0]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[/conv_layer2/conv_layer2.1/Relu_output_0]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[/conv_layer2/conv_layer2.2/MaxPool_output_0]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[/linear_layer/Gemm_2conv_output]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(0), tname:[21]
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNLayoutMatchPass
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNAddSecondaryNode
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNAddSecondaryNode
D RKNN: [16:27:56.476] >>>>>> start: OpEmit
D RKNN: [16:27:56.476] finish initComputeZoneMap
D RKNN: [16:27:56.476] <<<<<<<< end: OpEmit
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNProfileAnalysisPass
D RKNN: [16:27:56.476] node: Reshape:/linear_layer/Gemm_2conv_reshape, Target: NPU
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNProfileAnalysisPass
D RKNN: [16:27:56.477] >>>>>> start: rknn::RKNNOperatorIdGenPass
D RKNN: [16:27:56.477] <<<<<<<< end: rknn::RKNNOperatorIdGenPass
D RKNN: [16:27:56.477] >>>>>> start: rknn::RKNNWeightTransposePass
W RKNN: [16:27:56.477] Warning: Tensor /linear_layer/Gemm_2conv_reshape_shape need paramter qtype, type is set to float16 by default!
W RKNN: [16:27:56.477] Warning: Tensor /linear_layer/Gemm_2conv_reshape_shape need paramter qtype, type is set to float16 by default!
D RKNN: [16:27:56.477] <<<<<<<< end: rknn::RKNNWeightTransposePass
D RKNN: [16:27:56.477] >>>>>> start: rknn::RKNNCPUWeightTransposePass
D RKNN: [16:27:56.477] <<<<<<<< end: rknn::RKNNCPUWeightTransposePass
D RKNN: [16:27:56.477] >>>>>> start: rknn::RKNNModelBuildPass
D RKNN: [16:27:56.479] <<<<<<<< end: rknn::RKNNModelBuildPass
D RKNN: [16:27:56.479] >>>>>> start: rknn::RKNNModelRegCmdbuildPass
D RKNN: [16:27:56.479] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.479]                                                         Network Layer Information Table                                                     
D RKNN: [16:27:56.479] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.479] ID   OpType           DataType Target InputShape                               OutputShape            Cycles(DDR/NPU/Total)    RW(KB)       FullName        
D RKNN: [16:27:56.479] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.479] 0    InputOperator    INT8     CPU    \                                        (1,1,28,28)            0/0/0                    0            InputOperator:input.1
D RKNN: [16:27:56.479] 1    ConvRelu         INT8     NPU    (1,1,28,28),(16,1,3,3),(16)              (1,16,28,28)           2279/7056/7056           1            Conv:/conv_layer1/conv_layer1.0/Conv
D RKNN: [16:27:56.479] 2    MaxPool          INT8     NPU    (1,16,28,28)                             (1,16,14,14)           2546/0/2546              12           MaxPool:/conv_layer1/conv_layer1.2/MaxPool
D RKNN: [16:27:56.479] 3    ConvRelu         INT8     NPU    (1,16,14,14),(32,16,3,3),(32)            (1,32,14,14)           2318/3744/3744           7            Conv:/conv_layer2/conv_layer2.0/Conv
D RKNN: [16:27:56.479] 4    MaxPool          INT8     NPU    (1,32,14,14)                             (1,32,7,7)             1273/0/1273              6            MaxPool:/conv_layer2/conv_layer2.2/MaxPool
D RKNN: [16:27:56.479] 5    Conv             INT8     NPU    (1,32,7,7),(10,32,7,7),(10)              (1,10,1,1)             2824/784/2824            16           Conv:/linear_layer/Gemm_2conv
D RKNN: [16:27:56.479] 6    Reshape          INT8     NPU    (1,10,1,1),(2)                           (1,10)                 7/0/7                    0            Reshape:/linear_layer/Gemm_2conv_reshape
D RKNN: [16:27:56.479] 7    OutputOperator   INT8     CPU    (1,10)                                   \                      0/0/0                    0            OutputOperator:21
D RKNN: [16:27:56.479] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.479] <<<<<<<< end: rknn::RKNNModelRegCmdbuildPass
D RKNN: [16:27:56.479] >>>>>> start: rknn::RKNNFlatcModelBuildPass
D RKNN: [16:27:56.479] Export Mini RKNN model to /tmp/tmppq9867f1/check.rknn
D RKNN: [16:27:56.479] >>>>>> end: rknn::RKNNFlatcModelBuildPass
D RKNN: [16:27:56.480] >>>>>> start: rknn::RKNNMemStatisticsPass
D RKNN: [16:27:56.480] ----------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.480]                                                  Feature Tensor Information Table                                          
D RKNN: [16:27:56.480] ------------------------------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] ID  User           Tensor                                      DataType  DataFormat   OrigShape    NativeShape    |     [Start       End)       Size
D RKNN: [16:27:56.480] ------------------------------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] 1   ConvRelu       input.1                                     INT8      NC1HWC2      (1,1,28,28)  (1,1,28,28,1)  | 0x00006640 0x000069c0 0x00000380
D RKNN: [16:27:56.480] 2   MaxPool        /conv_layer1/conv_layer1.1/Relu_output_0    INT8      NC1HWC2      (1,16,28,28) (1,1,28,28,16) | 0x000069c0 0x00009ac0 0x00003100
D RKNN: [16:27:56.480] 3   ConvRelu       /conv_layer1/conv_layer1.2/MaxPool_output_0 INT8      NC1HWC2      (1,16,14,14) (1,1,14,14,16) | 0x00009ac0 0x0000a700 0x00000c40
D RKNN: [16:27:56.480] 4   MaxPool        /conv_layer2/conv_layer2.1/Relu_output_0    INT8      NC1HWC2      (1,32,14,14) (1,2,14,14,16) | 0x00006640 0x00007ec0 0x00001880
D RKNN: [16:27:56.480] 5   Conv           /conv_layer2/conv_layer2.2/MaxPool_output_0 INT8      NC1HWC2      (1,32,7,7)   (1,2,7,7,16)   | 0x00007ec0 0x00008540 0x00000680
D RKNN: [16:27:56.480] 6   Reshape        /linear_layer/Gemm_2conv_output             INT8      NC1HWC2      (1,10,1,1)   (1,1,1,1,16)   | 0x00006640 0x00006650 0x00000010
D RKNN: [16:27:56.480] 7   OutputOperator 21                                          INT8      UNDEFINED    (1,10)       (1,10)         | 0x000066c0 0x00006700 0x00000040
D RKNN: [16:27:56.480] ------------------------------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] -------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.480]                                     Const Tensor Information Table                        
D RKNN: [16:27:56.480] ---------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] ID  User     Tensor                                 DataType  OrigShape    |     [Start       End)       Size
D RKNN: [16:27:56.480] ---------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] 1   ConvRelu conv_layer1.0.weight                   INT8      (16,1,3,3)   | 0x00000000 0x00000240 0x00000240
D RKNN: [16:27:56.480] 1   ConvRelu conv_layer1.0.bias                     INT32     (16)         | 0x00000240 0x000002c0 0x00000080
D RKNN: [16:27:56.480] 3   ConvRelu conv_layer2.0.weight                   INT8      (32,16,3,3)  | 0x000002c0 0x000014c0 0x00001200
D RKNN: [16:27:56.480] 3   ConvRelu conv_layer2.0.bias                     INT32     (32)         | 0x000014c0 0x000015c0 0x00000100
D RKNN: [16:27:56.480] 5   Conv     linear_layer.weight                    INT8      (10,32,7,7)  | 0x000015c0 0x00005300 0x00003d40
D RKNN: [16:27:56.480] 5   Conv     linear_layer.bias                      INT32     (10)         | 0x00005300 0x00005380 0x00000080
D RKNN: [16:27:56.480] 6   Reshape  /linear_layer/Gemm_2conv_reshape_shape INT64     (2)          | 0x00005380*0x000053c0 0x00000040
D RKNN: [16:27:56.480] ---------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] ----------------------------------------
D RKNN: [16:27:56.480] Total Internal Memory Size: 16.1875KB
D RKNN: [16:27:56.480] Total Weight Memory Size: 20.9375KB
D RKNN: [16:27:56.480] ----------------------------------------
D RKNN: [16:27:56.480] <<<<<<<< end: rknn::RKNNMemStatisticsPass
I rknn buiding done.
done
--> Export rknn model
done
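The two `W build` warnings in the log matter at deploy time: the model's input and output tensors are int8, not float32. The float-to-int8 mapping is standard asymmetric affine quantization; a minimal numpy sketch (the scale and zero-point values below are illustrative assumptions — the real ones are chosen during calibration and stored inside the rknn model):

```python
import numpy as np

# Illustrative quantization parameters; the toolkit picks the real
# scale/zero_point during calibration and embeds them in the model.
scale, zero_point = 0.5, -3

def quantize(x):
    # float -> int8: round(x / scale) + zero_point, clamped to the int8 range
    return np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)

def dequantize(q):
    # int8 -> float: (q - zero_point) * scale
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([0.0, 1.0, 10.0], dtype=np.float32)
q = quantize(x)
print(q)              # the int8 codes
print(dequantize(q))  # approximately the original float values
```

When feeding the deployed model through the Runtime API, the input buffer must already be in this int8 layout, and the int8 outputs must be dequantized the same way before interpreting the scores.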

The resulting rknn model is attached.
 

Attachment: new.onnx (81.8 KB)

Reply (2024-5-11 07:29):

Which image did you select for quantization? I don't see the image.

Reply (original poster):

This is the image I used for quantization.