#AI Challenge Camp Stop 2# Converting an ONNX Model to RKNN

This post was last edited by wakojosin on 2024-4-28 23:35

Introduction

ONNX is an open model format. With ONNX you can train a model in whichever machine learning framework you prefer, export it to one unified format, and then deploy it on any platform that supports ONNX.
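
For example, a model trained in PyTorch can be exported to ONNX roughly like this (a minimal sketch with an assumed LeNet-style network and file name, not my exact training script; rknn-toolkit2 recommends opset 19, as its log further down points out):

import torch
import torch.nn as nn

# Assumed LeNet-style network; replace this with your own trained model
model = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
    nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, 10),
)
model.eval()

# LeNet expects a 1x32x32 grayscale input
dummy_input = torch.randn(1, 1, 32, 32)

# opset 19 is what rknn-toolkit2 1.6.0 recommends; use a lower opset if your
# PyTorch version does not support it (my model ended up at opset 17)
torch.onnx.export(model, dummy_input, 'lenet.onnx', opset_version=19)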

RKNN is Rockchip's model format. Rockchip's toolkit can convert an ONNX model to RKNN, and it also supports quantization, inference, performance and memory evaluation, quantization accuracy analysis, and model encryption. The toolkit, rknn-toolkit2, is hosted on GitHub.

 

Installation

After downloading rknn-toolkit2 with git, you will find the requirements files and the wheel packages under rknn-toolkit2/packages; just install the one matching your Python version with pip.

I used the Tsinghua mirror. If you run into errors like the following during installation, try a different PyPI mirror.

ERROR: Could not find a version that satisfies the requirement tf-estimator-nightly==2.8.0.dev2021122109 (from tensorflow) (from versions: none)
ERROR: No matching distribution found for tf-estimator-nightly==2.8.0.dev2021122109

To switch mirrors temporarily:

pip install tensorflow==2.8.0 -i https://pypi.mirrors.ustc.edu.cn/simple/

I eventually finished the installation with https://pypi.doubanio.com/simple.

To verify the installation: python -c 'from rknn.api import RKNN'

If no error is reported, the installation succeeded.

 

Model Conversion

The code for this step can be adapted from the examples in the toolkit's examples directory.

import argparse

from rknn.api import RKNN


def onnx2rknn(onnx_model, rknn_model, dataset=None):
    # Create RKNN object
    rknn = RKNN(verbose=True)
    
    # pre-process config
    print('--> config model')
    rknn.config(target_platform='rv1106')
    print('done')
    
    # Load model
    print('--> Loading model')
    ret = rknn.load_onnx(model=onnx_model)
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')
    
    # Build model
    print('--> Building model')
    ret = rknn.build(do_quantization=True, dataset=dataset)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')
    
    # Export rknn model
    print('--> Export rknn model')
    ret = rknn.export_rknn(rknn_model)
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')
    
    rknn.release()

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("fonnx", help="onnx file path.")
    parser.add_argument("frknn", help="rknn file path.")
    parser.add_argument("dataset", help="dataset path.")
    args = parser.parse_args()
    print("onn_file:", args.fonnx)
    print("rknn_file:", args.frknn)
    print("dataset:", args.dataset)
    onnx2rknn(args.fonnx, args.frknn, args.dataset)
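
The dataset argument passed to rknn.build() is a plain-text calibration list used for quantization: each line is the path of one sample image. In the run below I simply reused the resnet50v2 example's dataset.txt; normally you would point it at images that match your own model's input. A minimal sketch for generating such a list (the folder name and sample count are assumptions):

import glob

# Collect up to 100 calibration samples (assumed folder of digit images)
image_paths = sorted(glob.glob('calib_images/*.png'))[:100]

# rknn-toolkit2 expects one sample path per line
with open('dataset.txt', 'w') as f:
    f.write('\n'.join(image_paths) + '\n')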

Log output for reference:

python onnx2rknn.py lenet.onnx lenet.rknn examples/onnx/resnet50v2/dataset.txt
onn_file: lenet.onnx
rknn_file: lenet.rknn
dataset: examples/onnx/resnet50v2/dataset.txt
W __init__: rknn-toolkit2 version: 1.6.0+81f21f4d
--> config model
done
--> Loading model
W load_onnx: It is recommended onnx opset 19, but your onnx model opset is 17!
W load_onnx: Model converted from pytorch, 'opset_version' should be set 19 in torch.onnx.export for successful convert!
Loading : 100%|██████████████████████████████████████████████████| 10/10 [00:00<00:00, 19239.93it/s]
W load_onnx: The config.mean_values is None, zeros will be set for input 0!
W load_onnx: The config.std_values is None, ones will be set for input 0!
done
--> Building model
I base_optimize ...
I base_optimize done.
I 
I fold_constant ...
I fold_constant done.
I 
I correct_ops ...
I correct_ops done.
I 
I fuse_ops ...
I fuse_ops results:
I     replace_flatten_gemm_by_conv: remove node = ['/6/Flatten', '/7/Gemm'], add node = ['/7/Gemm_2conv', '/7/Gemm_2conv_reshape']
I     swap_reshape_relu: remove node = ['/7/Gemm_2conv_reshape', '/8/Relu'], add node = ['/8/Relu', '/7/Gemm_2conv_reshape']
I     convert_gemm_by_conv: remove node = ['/9/Gemm'], add node = ['/9/Gemm_2conv_reshape1', '/9/Gemm_2conv', '/9/Gemm_2conv_reshape2']
I     unsqueeze_to_4d_relu: remove node = [], add node = ['/10/Relu_0_unsqueeze0', '/10/Relu_0_unsqueeze1']
I     convert_gemm_by_conv: remove node = ['/11/Gemm'], add node = ['/11/Gemm_2conv_reshape1', '/11/Gemm_2conv', '/11/Gemm_2conv_reshape2']
I     fuse_two_reshape: remove node = ['/7/Gemm_2conv_reshape', '/9/Gemm_2conv_reshape2', '/10/Relu_0_unsqueeze1']
I     remove_invalid_reshape: remove node = ['/9/Gemm_2conv_reshape1', '/10/Relu_0_unsqueeze0', '/11/Gemm_2conv_reshape1']
I     fold_constant ...
I     fold_constant done.
I fuse_ops done.
I 
I sparse_weight ...
I sparse_weight done.
I 
GraphPreparing : 100%|████████████████████████████████████████████| 12/12 [00:00<00:00, 3997.75it/s]
Quantizating : 100%|███████████████████████████████████████████████| 12/12 [00:00<00:00, 569.73it/s]
I 
I quant_optimizer ...
I quant_optimizer results:
I     adjust_relu: ['/10/Relu', '/8/Relu', '/4/Relu', '/1/Relu']
I quant_optimizer done.
I 
W build: The default input dtype of 'input.1' is changed from 'float32' to 'int8' in rknn model for performance!
                       Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of '22' is changed from 'float32' to 'int8' in rknn model for performance!
                      Please take care of this change when deploy rknn model with Runtime API!
I rknn building ...
I RKNN: [23:30:53.007] compress = 0, conv_eltwise_activation_fuse = 1, global_fuse = 1, multi-core-model-mode = 7, output_optimize = 1,enable_argb_group=0 ,layout_match = 1, pipeline_fuse = 0
I RKNN: librknnc version: 1.6.0 (585b3edcf@2023-12-11T07:56:14)
D RKNN: [23:30:53.008] RKNN is invoked
D RKNN: [23:30:53.014] >>>>>> start: rknn::RKNNExtractCustomOpAttrs
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNExtractCustomOpAttrs
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNSetOpTargetPass
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNSetOpTargetPass
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNBindNorm
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNBindNorm
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNAddFirstConv
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNAddFirstConv
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNEliminateQATDataConvert
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNEliminateQATDataConvert
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNTileGroupConv
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNTileGroupConv
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNTileFcBatchFuse
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNTileFcBatchFuse
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNAddConvBias
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNAddConvBias
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNTileChannel
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNTileChannel
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNPerChannelPrep
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNPerChannelPrep
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNBnQuant
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNBnQuant
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNFuseOptimizerPass
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNFuseOptimizerPass
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNTurnAutoPad
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNTurnAutoPad
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNInitRNNConst
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNInitRNNConst
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNInitCastConst
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNInitCastConst
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNMultiSurfacePass
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNMultiSurfacePass
D RKNN: [23:30:53.015] >>>>>> start: rknn::RKNNReplaceConstantTensorPass
D RKNN: [23:30:53.015] <<<<<<<< end: rknn::RKNNReplaceConstantTensorPass
D RKNN: [23:30:53.015] >>>>>> start: OpEmit
D RKNN: [23:30:53.016] <<<<<<<< end: OpEmit
D RKNN: [23:30:53.016] >>>>>> start: rknn::RKNNLayoutMatchPass
I RKNN: [23:30:53.016] AppointLayout: t->setNativeLayout(64), tname:[input.1]
I RKNN: [23:30:53.016] AppointLayout: t->setNativeLayout(64), tname:[/1/Relu_output_0]
I RKNN: [23:30:53.016] AppointLayout: t->setNativeLayout(64), tname:[/2/MaxPool_output_0]
I RKNN: [23:30:53.016] AppointLayout: t->setNativeLayout(64), tname:[/4/Relu_output_0]
I RKNN: [23:30:53.016] AppointLayout: t->setNativeLayout(64), tname:[/5/MaxPool_output_0]
I RKNN: [23:30:53.016] AppointLayout: t->setNativeLayout(64), tname:[/8/Relu_output_0_before]
I RKNN: [23:30:53.016] AppointLayout: t->setNativeLayout(64), tname:[/10/Relu_output_0_shape4]
I RKNN: [23:30:53.016] AppointLayout: t->setNativeLayout(64), tname:[22_conv]
D RKNN: [23:30:53.016] <<<<<<<< end: rknn::RKNNLayoutMatchPass
D RKNN: [23:30:53.016] >>>>>> start: rknn::RKNNAddSecondaryNode
D RKNN: [23:30:53.016] <<<<<<<< end: rknn::RKNNAddSecondaryNode
D RKNN: [23:30:53.016] >>>>>> start: OpEmit
D RKNN: [23:30:53.016] finish initComputeZoneMap
D RKNN: [23:30:53.016] <<<<<<<< end: OpEmit
D RKNN: [23:30:53.016] >>>>>> start: rknn::RKNNProfileAnalysisPass
D RKNN: [23:30:53.016] node: Reshape:/11/Gemm_2conv_reshape2, Target: NPU
D RKNN: [23:30:53.016] <<<<<<<< end: rknn::RKNNProfileAnalysisPass
D RKNN: [23:30:53.016] >>>>>> start: rknn::RKNNOperatorIdGenPass
D RKNN: [23:30:53.016] <<<<<<<< end: rknn::RKNNOperatorIdGenPass
D RKNN: [23:30:53.016] >>>>>> start: rknn::RKNNWeightTransposePass
W RKNN: [23:30:53.019] Warning: Tensor /11/Gemm_2conv_reshape2_shape need paramter qtype, type is set to float16 by default!
W RKNN: [23:30:53.019] Warning: Tensor /11/Gemm_2conv_reshape2_shape need paramter qtype, type is set to float16 by default!
D RKNN: [23:30:53.019] <<<<<<<< end: rknn::RKNNWeightTransposePass
D RKNN: [23:30:53.019] >>>>>> start: rknn::RKNNCPUWeightTransposePass
D RKNN: [23:30:53.019] <<<<<<<< end: rknn::RKNNCPUWeightTransposePass
D RKNN: [23:30:53.019] >>>>>> start: rknn::RKNNModelBuildPass
D RKNN: [23:30:53.028] RKNNModelBuildPass: [Statistics]
D RKNN: [23:30:53.028] total_regcfg_size     :      6216
D RKNN: [23:30:53.028] total_diff_regcfg_size:      6256
D RKNN: [23:30:53.028] <<<<<<<< end: rknn::RKNNModelBuildPass
D RKNN: [23:30:53.028] >>>>>> start: rknn::RKNNModelRegCmdbuildPass
D RKNN: [23:30:53.028] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [23:30:53.028]                                                                 Network Layer Information Table                                                                 
D RKNN: [23:30:53.028] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [23:30:53.028] ID   OpType           DataType Target InputShape                               OutputShape            DDRCycles    NPUCycles    MaxCycles    TaskNumber   RW(KB)       FullName        
D RKNN: [23:30:53.028] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [23:30:53.028] 0    InputOperator    INT8     CPU    \                                        (1,1,32,32)            0            0            0            0/0          0            InputOperator:input.1
D RKNN: [23:30:53.028] 1    ConvRelu         INT8     NPU    (1,1,32,32),(6,1,5,5),(6)                (1,6,28,28)            2321         19600        19600        1/0          1            Conv:/0/Conv    
D RKNN: [23:30:53.028] 2    MaxPool          INT8     NPU    (1,6,28,28)                              (1,6,14,14)            2546         0            2546         1/0          12           MaxPool:/2/MaxPool
D RKNN: [23:30:53.028] 3    ConvRelu         INT8     NPU    (1,6,14,14),(16,6,5,5),(16)              (1,16,10,10)           1829         2800         2800         1/0          9            Conv:/3/Conv    
D RKNN: [23:30:53.028] 4    MaxPool          INT8     NPU    (1,16,10,10)                             (1,16,5,5)             325          0            325          1/0          1            MaxPool:/5/MaxPool
D RKNN: [23:30:53.028] 5    ConvRelu         INT8     NPU    (1,16,5,5),(120,16,5,5),(120)            (1,120,1,1)            8045         3200         8045         1/0          48           Conv:/7/Gemm_2conv
D RKNN: [23:30:53.028] 6    ConvRelu         INT8     NPU    (1,120,1,1),(84,120,1,1),(84)            (1,84,1,1)             1798         384          1798         1/0          10           Conv:/9/Gemm_2conv
D RKNN: [23:30:53.028] 7    Conv             INT8     NPU    (1,84,1,1),(10,84,1,1),(10)              (1,10,1,1)             195          64           195          1/0          1            Conv:/11/Gemm_2conv
D RKNN: [23:30:53.028] 8    Reshape          INT8     NPU    (1,10,1,1),(2)                           (1,10)                 7            0            7            1/0          0            Reshape:/11/Gemm_2conv_reshape2
D RKNN: [23:30:53.028] 9    OutputOperator   INT8     CPU    (1,10)                                   \                      0            0            0            0/0          0            OutputOperator:22
D RKNN: [23:30:53.028] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [23:30:53.028] <<<<<<<< end: rknn::RKNNModelRegCmdbuildPass
D RKNN: [23:30:53.028] >>>>>> start: rknn::RKNNFlatcModelBuildPass
D RKNN: [23:30:53.029] Export Mini RKNN model to /tmp/tmpujiow0wl/dumps/main_graph.mini.rknn
D RKNN: [23:30:53.029] >>>>>> end: rknn::RKNNFlatcModelBuildPass
D RKNN: [23:30:53.029] >>>>>> start: rknn::RKNNMemStatisticsPass
D RKNN: [23:30:53.029] ---------------------------------------------------------------------------------------------------------------------------------
D RKNN: [23:30:53.029]                                            Feature Tensor Information Table                                
D RKNN: [23:30:53.029] -----------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [23:30:53.029] ID  User           Tensor                   DataType  DataFormat   OrigShape    NativeShape    |     [Start       End)       Size
D RKNN: [23:30:53.029] -----------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [23:30:53.029] 1   ConvRelu       input.1                  INT8      NC1HWC2      (1,1,32,32)  (1,1,32,32,1)  | 0x00012880 0x00012c80 0x00000400
D RKNN: [23:30:53.029] 2   MaxPool        /1/Relu_output_0         INT8      NC1HWC2      (1,6,28,28)  (1,1,28,28,16) | 0x00012c80 0x00015d80 0x00003100
D RKNN: [23:30:53.029] 3   ConvRelu       /2/MaxPool_output_0      INT8      NC1HWC2      (1,6,14,14)  (1,1,14,14,16) | 0x00015d80 0x000169c0 0x00000c40
D RKNN: [23:30:53.029] 4   MaxPool        /4/Relu_output_0         INT8      NC1HWC2      (1,16,10,10) (1,1,10,10,16) | 0x00012880 0x00012ec0 0x00000640
D RKNN: [23:30:53.029] 5   ConvRelu       /5/MaxPool_output_0      INT8      NC1HWC2      (1,16,5,5)   (1,1,5,5,16)   | 0x00012ec0 0x00013080 0x000001c0
D RKNN: [23:30:53.029] 6   ConvRelu       /8/Relu_output_0_before  INT8      NC1HWC2      (1,120,1,1)  (1,8,1,1,16)   | 0x00012880 0x00012900 0x00000080
D RKNN: [23:30:53.029] 7   Conv           /10/Relu_output_0_shape4 INT8      NC1HWC2      (1,84,1,1)   (1,6,1,1,16)   | 0x00012900 0x00012960 0x00000060
D RKNN: [23:30:53.029] 8   Reshape        22_conv                  INT8      NC1HWC2      (1,10,1,1)   (1,1,1,1,16)   | 0x00012880 0x00012890 0x00000010
D RKNN: [23:30:53.029] 9   OutputOperator 22                       INT8      UNDEFINED    (1,10)       (1,10)         | 0x00012900 0x00012940 0x00000040
D RKNN: [23:30:53.029] -----------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [23:30:53.029] ----------------------------------------------------------------------------------------------------
D RKNN: [23:30:53.029]                                  Const Tensor Information Table                    
D RKNN: [23:30:53.029] ------------------------------------------------------------------+---------------------------------
D RKNN: [23:30:53.029] ID  User     Tensor                        DataType  OrigShape    |     [Start       End)       Size
D RKNN: [23:30:53.029] ------------------------------------------------------------------+---------------------------------
D RKNN: [23:30:53.029] 1   ConvRelu 0.weight                      INT8      (6,1,5,5)    | 0x00000000 0x00000280 0x00000280
D RKNN: [23:30:53.029] 1   ConvRelu 0.bias                        INT32     (6)          | 0x00000280 0x00000300 0x00000080
D RKNN: [23:30:53.029] 3   ConvRelu 3.weight                      INT8      (16,6,5,5)   | 0x00000300 0x00001c00 0x00001900
D RKNN: [23:30:53.029] 3   ConvRelu 3.bias                        INT32     (16)         | 0x00001c00 0x00001c80 0x00000080
D RKNN: [23:30:53.029] 5   ConvRelu 7.weight                      INT8      (120,16,5,5) | 0x00001c80 0x0000d800 0x0000bb80
D RKNN: [23:30:53.029] 5   ConvRelu 7.bias                        INT32     (120)        | 0x0000d800 0x0000dc00 0x00000400
D RKNN: [23:30:53.029] 6   ConvRelu 9.weight                      INT8      (84,120,1,1) | 0x0000dc00 0x00010600 0x00002a00
D RKNN: [23:30:53.029] 6   ConvRelu 9.bias                        INT32     (84)         | 0x00010600 0x00010900 0x00000300
D RKNN: [23:30:53.029] 7   Conv     11.weight                     INT8      (10,84,1,1)  | 0x00010900 0x00010cc0 0x000003c0
D RKNN: [23:30:53.029] 7   Conv     11.bias                       INT32     (10)         | 0x00010cc0 0x00010d40 0x00000080
D RKNN: [23:30:53.029] 8   Reshape  /11/Gemm_2conv_reshape2_shape INT64     (2)          | 0x00010d40*0x00010d80 0x00000040
D RKNN: [23:30:53.029] ------------------------------------------------------------------+---------------------------------
D RKNN: [23:30:53.029] ----------------------------------------
D RKNN: [23:30:53.029] Total Internal Memory Size: 16.3125KB
D RKNN: [23:30:53.029] Total Weight Memory Size: 67.375KB
D RKNN: [23:30:53.029] ----------------------------------------
D RKNN: [23:30:53.029] <<<<<<<< end: rknn::RKNNMemStatisticsPass
I rknn buiding done.
done
--> Export rknn model
done

 

Attachments:

lenet.onnx (242.6 KB)

lenet.rknn (82.44 KB)


Reply (2024-4-30 07:27): Nice, thanks for sharing all of this; good to know the code can be adapted from the examples in the examples directory.

 
 
