#AI Challenge Camp Stop 2# RKNN model conversion and simulated inference for the RV1106

This post was last edited by NNTK_NLY on 2024-5-29 20:41

1. Environment setup is not covered again here; see https://bbs.eeworld.com.cn/thread-1280038-1-1.html

 

2. Model conversion

    # Imports assumed to come from the full script
    from rknn.api import RKNN

    # Create the RKNN object
    rknn = RKNN(verbose=True)

    # Pre-process config
    print('--> Config model')
    rknn.config(mean_values=[[28]], std_values=[[28]], target_platform='rv1106')
    print('done')

    # Load model
    print('--> Loading model')
    ret = rknn.load_onnx(model='mnist.onnx')

    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build model
    print('--> Building model')
    ret = rknn.build(do_quantization=True, dataset='./data.txt', rknn_batch_size=1)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export rknn model
    print('--> Export rknn model')
    ret = rknn.export_rknn('mnist.rknn')
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')
    # Release
    rknn.release()
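For reference, the mean_values/std_values passed to rknn.config describe the normalization the converted model applies to its input, i.e. (pixel - mean) / std per channel. A quick numpy check of what [[28]], [[28]] does to a single-channel 8-bit input (illustration only, not part of the original script):

    # Illustration only: mean_values=[[28]], std_values=[[28]] implies (pixel - 28) / 28
    import numpy as np

    pixels = np.array([0, 28, 128, 255], dtype=np.float32)
    print((pixels - 28.0) / 28.0)   # -> [-1.0, 0.0, ~3.57, ~8.11]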

 

3. Next, run simulated inference on the model

    # Imports and softmax helper assumed to come from the full script
    import cv2
    import numpy as np

    def softmax(x):
        e = np.exp(np.asarray(x, dtype=np.float32) - np.max(x))
        return e / e.sum()

    # Set inputs: load the test digit, resize to 28x28, convert to grayscale,
    # then expand to NHWC shape (1, 28, 28, 1)
    img = cv2.imread('8.png')
    img = cv2.resize(img, (28, 28))
    #cv2.imwrite('2r.jpg', img)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img = np.expand_dims(img, 0)
    img = np.expand_dims(img, 3)

    # Init runtime environment
    print('--> Init runtime environment')

    ret = rknn.init_runtime()
    if ret != 0:
        print('Init runtime environment failed!')
        exit(ret)
    print('done')

    # Inference
    print('--> Running model')
    outputs = rknn.inference(inputs=[img])

    #Post Process
    print('--> PostProcess')
    with open('./synset.txt', 'r') as f:
        labels = [l.rstrip() for l in f]

    scores = softmax(outputs[0])
    # print the top-5 inferences class
    scores = np.squeeze(scores)
    a = np.argsort(scores)[::-1]
    print('-----TOP 5-----')
    for i in a[0:5]:
        print('[%d] score=%.2f class="%s"' % (i, scores[i], labels[i]))
    print('done')

    # Release
    rknn.release()

Note: if you want to run the simulated inference, do not release the rknn object at the end of step 2 (model conversion); defer the release below until after inference:

    # Release
    rknn.release()
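To make the combined flow concrete, here is a minimal sketch (using the same file names as above: mnist.onnx, data.txt, 8.png) that converts the model and runs simulator inference with a single RKNN instance, releasing it only once at the very end:

    # Minimal sketch: conversion + simulator inference with one RKNN instance,
    # following the code shown in steps 2 and 3 above.
    import cv2
    import numpy as np
    from rknn.api import RKNN

    def softmax(x):
        e = np.exp(np.asarray(x, dtype=np.float32) - np.max(x))
        return e / e.sum()

    rknn = RKNN(verbose=True)
    rknn.config(mean_values=[[28]], std_values=[[28]], target_platform='rv1106')
    assert rknn.load_onnx(model='mnist.onnx') == 0
    assert rknn.build(do_quantization=True, dataset='./data.txt', rknn_batch_size=1) == 0
    assert rknn.export_rknn('mnist.rknn') == 0

    # Keep the same instance alive for simulator inference
    assert rknn.init_runtime() == 0          # no target -> simulator
    img = cv2.cvtColor(cv2.resize(cv2.imread('8.png'), (28, 28)), cv2.COLOR_BGR2GRAY)
    img = np.expand_dims(np.expand_dims(img, 0), 3)   # NHWC (1, 28, 28, 1)
    outputs = rknn.inference(inputs=[img])
    print(np.argmax(softmax(np.squeeze(outputs[0]))))

    rknn.release()                            # release only once, at the end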

 

4. Model conversion log

I rknn-toolkit2 version: 2.0.0b0+9bab5682
--> Config model
done
--> Loading model
I It is recommended onnx opset 19, but your onnx model opset is 10!
I Model converted from pytorch, 'opset_version' should be set 19 in torch.onnx.export for successful convert!
I Loading : 100%|██████████████████████████████████████████████████| 9/9 [00:00<00:00, 16897.38it/s]
done
--> Building model
D base_optimize ...
D base_optimize done.
D
D fold_constant ...
D fold_constant done.
D
D correct_ops ...
D correct_ops done.
D
D fuse_ops ...
D fuse_ops results:
D     replace_reshape_gemm_by_conv: remove node = ['/Reshape', '/fc1/Gemm'], add node = ['/fc1/Gemm_2conv', '/fc1/Gemm_2conv_reshape']
D     swap_reshape_relu: remove node = ['/fc1/Gemm_2conv_reshape', '/Relu_2'], add node = ['/Relu_2', '/fc1/Gemm_2conv_reshape']
D     convert_gemm_by_conv: remove node = ['/fc2/Gemm'], add node = ['/fc2/Gemm_2conv_reshape1', '/fc2/Gemm_2conv', '/fc2/Gemm_2conv_reshape2']
D     fuse_two_reshape: remove node = ['/fc1/Gemm_2conv_reshape']
D     remove_invalid_reshape: remove node = ['/fc2/Gemm_2conv_reshape1']
D     fold_constant ...
D     fold_constant done.
D fuse_ops done.
D
D sparse_weight ...
D sparse_weight done.
D
I GraphPreparing : 100%|██████████████████████████████████████████| 10/10 [00:00<00:00, 4456.81it/s]
I Quantizating : 100%|████████████████████████████████████████████| 10/10 [00:00<00:00, 1337.34it/s]
D
D quant_optimizer ...
D quant_optimizer results:
D     adjust_relu: ['/Relu_2', '/Relu_1', '/Relu']
D quant_optimizer done.
D
W build: The default input dtype of 'input' is changed from 'float32' to 'int8' in rknn model for performance!
                       Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of 'output' is changed from 'float32' to 'int8' in rknn model for performance!
                      Please take care of this change when deploy rknn model with Runtime API!
I rknn building ...
I RKNN: [18:11:22.305] compress = 0, conv_eltwise_activation_fuse = 1, global_fuse = 1, multi-core-model-mode = 7, output_optimize = 1, layout_match = 1, enable_argb_group = 0
I RKNN: librknnc version: 2.0.0b0 (35a6907d79@2024-03-24T02:34:11)
D RKNN: [18:11:22.308] RKNN is invoked
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNExtractCustomOpAttrs
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNExtractCustomOpAttrs
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNSetOpTargetPass
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNSetOpTargetPass
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNBindNorm
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNBindNorm
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNAddFirstConv
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNAddFirstConv
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNEliminateQATDataConvert
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNEliminateQATDataConvert
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNTileGroupConv
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNTileGroupConv
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNTileFcBatchFuse
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNTileFcBatchFuse
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNAddConvBias
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNAddConvBias
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNTileChannel
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNTileChannel
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNPerChannelPrep
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNPerChannelPrep
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNBnQuant
D RKNN: [18:11:22.318] <<<<<<<< end: rknn::RKNNBnQuant
D RKNN: [18:11:22.318] >>>>>> start: rknn::RKNNFuseOptimizerPass
D RKNN: [18:11:22.319] <<<<<<<< end: rknn::RKNNFuseOptimizerPass
D RKNN: [18:11:22.319] >>>>>> start: rknn::RKNNTurnAutoPad
D RKNN: [18:11:22.319] <<<<<<<< end: rknn::RKNNTurnAutoPad
D RKNN: [18:11:22.319] >>>>>> start: rknn::RKNNInitRNNConst
D RKNN: [18:11:22.319] <<<<<<<< end: rknn::RKNNInitRNNConst
D RKNN: [18:11:22.319] >>>>>> start: rknn::RKNNInitCastConst
D RKNN: [18:11:22.319] <<<<<<<< end: rknn::RKNNInitCastConst
D RKNN: [18:11:22.319] >>>>>> start: rknn::RKNNMultiSurfacePass
D RKNN: [18:11:22.319] <<<<<<<< end: rknn::RKNNMultiSurfacePass
D RKNN: [18:11:22.319] >>>>>> start: rknn::RKNNReplaceConstantTensorPass
D RKNN: [18:11:22.319] <<<<<<<< end: rknn::RKNNReplaceConstantTensorPass
D RKNN: [18:11:22.319] >>>>>> start: rknn::RKNNSubgraphManager
D RKNN: [18:11:22.319] <<<<<<<< end: rknn::RKNNSubgraphManager
D RKNN: [18:11:22.319] >>>>>> start: OpEmit
D RKNN: [18:11:22.319] <<<<<<<< end: OpEmit
D RKNN: [18:11:22.319] >>>>>> start: rknn::RKNNLayoutMatchPass
I RKNN: [18:11:22.319] AppointLayout: t->setNativeLayout(64), tname:[/Relu_output_0]
I RKNN: [18:11:22.319] AppointLayout: t->setNativeLayout(64), tname:[/pool/MaxPool_output_0]
I RKNN: [18:11:22.319] AppointLayout: t->setNativeLayout(64), tname:[/Relu_1_output_0]
I RKNN: [18:11:22.319] AppointLayout: t->setNativeLayout(64), tname:[/pool_1/MaxPool_output_0]
I RKNN: [18:11:22.319] AppointLayout: t->setNativeLayout(64), tname:[/fc1/Gemm_output_0_new]
I RKNN: [18:11:22.319] AppointLayout: t->setNativeLayout(64), tname:[output_conv]
I RKNN: [18:11:22.319] AppointLayout: t->setNativeLayout(0), tname:[output]
D RKNN: [18:11:22.319] <<<<<<<< end: rknn::RKNNLayoutMatchPass
D RKNN: [18:11:22.319] >>>>>> start: rknn::RKNNAddSecondaryNode
D RKNN: [18:11:22.319] <<<<<<<< end: rknn::RKNNAddSecondaryNode
D RKNN: [18:11:22.319] >>>>>> start: OpEmit
D RKNN: [18:11:22.319] finish initComputeZoneMap
D RKNN: [18:11:22.320] <<<<<<<< end: OpEmit
D RKNN: [18:11:22.320] >>>>>> start: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [18:11:22.320] <<<<<<<< end: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [18:11:22.320] >>>>>> start: rknn::RKNNProfileAnalysisPass
D RKNN: [18:11:22.320] node: Reshape:/fc2/Gemm_2conv_reshape2, Target: NPU
D RKNN: [18:11:22.320] <<<<<<<< end: rknn::RKNNProfileAnalysisPass
D RKNN: [18:11:22.320] >>>>>> start: rknn::RKNNOperatorIdGenPass
D RKNN: [18:11:22.320] <<<<<<<< end: rknn::RKNNOperatorIdGenPass
D RKNN: [18:11:22.320] >>>>>> start: rknn::RKNNWeightTransposePass
W RKNN: [18:11:22.331] Warning: Tensor /fc2/Gemm_2conv_reshape2_shape need paramter qtype, type is set to float16 by default!
W RKNN: [18:11:22.331] Warning: Tensor /fc2/Gemm_2conv_reshape2_shape need paramter qtype, type is set to float16 by default!
D RKNN: [18:11:22.331] <<<<<<<< end: rknn::RKNNWeightTransposePass
D RKNN: [18:11:22.331] >>>>>> start: rknn::RKNNCPUWeightTransposePass
D RKNN: [18:11:22.331] <<<<<<<< end: rknn::RKNNCPUWeightTransposePass
D RKNN: [18:11:22.331] >>>>>> start: rknn::RKNNModelBuildPass
D RKNN: [18:11:22.336] <<<<<<<< end: rknn::RKNNModelBuildPass
D RKNN: [18:11:22.336] >>>>>> start: rknn::RKNNModelRegCmdbuildPass
D RKNN: [18:11:22.336] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [18:11:22.336]                                                         Network Layer Information Table
D RKNN: [18:11:22.336] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [18:11:22.336] ID   OpType           DataType Target InputShape                               OutputShape            Cycles(DDR/NPU/Total)    RW(KB)       FullName
D RKNN: [18:11:22.336] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [18:11:22.336] 0    InputOperator    INT8     CPU    \                                        (1,1,28,28)            0/0/0                    0            InputOperator:input
D RKNN: [18:11:22.336] 1    ConvRelu         INT8     NPU    (1,1,28,28),(32,1,3,3),(32)              (1,32,28,28)           4429/7056/7056           2            Conv:/conv1/Conv
D RKNN: [18:11:22.336] 2    MaxPool          INT8     NPU    (1,32,28,28)                             (1,32,14,14)           5092/0/5092              24           MaxPool:/pool/MaxPool
D RKNN: [18:11:22.336] 3    ConvRelu         INT8     NPU    (1,32,14,14),(64,32,3,3),(64)            (1,64,14,14)           6131/7488/7488           24           Conv:/conv2/Conv
D RKNN: [18:11:22.336] 4    MaxPool          INT8     NPU    (1,64,14,14)                             (1,64,7,7)             2546/0/2546              12           MaxPool:/pool_1/MaxPool
D RKNN: [18:11:22.336] 5    ConvRelu         INT8     NPU    (1,64,7,7),(128,64,7,7),(128)            (1,128,1,1)            65865/12544/65865        396          Conv:/fc1/Gemm_2conv
D RKNN: [18:11:22.336] 6    Conv             INT8     NPU    (1,128,1,1),(10,128,1,1),(10)            (1,10,1,1)             252/64/252               1            Conv:/fc2/Gemm_2conv
D RKNN: [18:11:22.336] 7    Reshape          INT8     NPU    (1,10,1,1),(2)                           (1,10)                 7/0/7                    0            Reshape:/fc2/Gemm_2conv_reshape2
D RKNN: [18:11:22.336] 8    OutputOperator   INT8     CPU    (1,10)                                   \                      0/0/0                    0            OutputOperator:output
D RKNN: [18:11:22.336] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [18:11:22.337] <<<<<<<< end: rknn::RKNNModelRegCmdbuildPass
D RKNN: [18:11:22.337] >>>>>> start: rknn::RKNNFlatcModelBuildPass
D RKNN: [18:11:22.337] Export Mini RKNN model to /tmp/tmphfyahgcw/check.rknn
D RKNN: [18:11:22.337] >>>>>> end: rknn::RKNNFlatcModelBuildPass
D RKNN: [18:11:22.337] >>>>>> start: rknn::RKNNMemStatisticsPass
D RKNN: [18:11:22.337] ---------------------------------------------------------------------------------------------------------------------------------
D RKNN: [18:11:22.337]                                            Feature Tensor Information Table   
D RKNN: [18:11:22.337] -----------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [18:11:22.337] ID  User           Tensor                   DataType  DataFormat   OrigShape    NativeShape    |     [Start       End)       Size
D RKNN: [18:11:22.337] -----------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [18:11:22.337] 1   ConvRelu       input                    INT8      NC1HWC2      (1,1,28,28)  (1,1,28,28,1)  | 0x00069000 0x00069380 0x00000380
D RKNN: [18:11:22.337] 2   MaxPool        /Relu_output_0           INT8      NC1HWC2      (1,32,28,28) (1,2,28,28,16) | 0x00069380 0x0006f580 0x00006200
D RKNN: [18:11:22.337] 3   ConvRelu       /pool/MaxPool_output_0   INT8      NC1HWC2      (1,32,14,14) (1,2,14,14,16) | 0x0006f580 0x00070e00 0x00001880
D RKNN: [18:11:22.337] 4   MaxPool        /Relu_1_output_0         INT8      NC1HWC2      (1,64,14,14) (1,4,14,14,16) | 0x00069000 0x0006c100 0x00003100
D RKNN: [18:11:22.337] 5   ConvRelu       /pool_1/MaxPool_output_0 INT8      NC1HWC2      (1,64,7,7)   (1,4,7,7,16)   | 0x0006c100 0x0006ce00 0x00000d00
D RKNN: [18:11:22.337] 6   Conv           /fc1/Gemm_output_0_new   INT8      NC1HWC2      (1,128,1,1)  (1,8,1,1,16)   | 0x00069000 0x00069080 0x00000080
D RKNN: [18:11:22.337] 7   Reshape        output_conv              INT8      NC1HWC2      (1,10,1,1)   (1,1,1,1,16)   | 0x00069080 0x00069090 0x00000010
D RKNN: [18:11:22.337] 8   OutputOperator output                   INT8      UNDEFINED    (1,10)       (1,10)         | 0x00069040 0x00069080 0x00000040
D RKNN: [18:11:22.337] -----------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [18:11:22.337] -----------------------------------------------------------------------------------------------------
D RKNN: [18:11:22.337]                                  Const Tensor Information Table               
D RKNN: [18:11:22.337] -------------------------------------------------------------------+---------------------------------
D RKNN: [18:11:22.337] ID  User     Tensor                         DataType  OrigShape    |     [Start       End)       Size
D RKNN: [18:11:22.337] -------------------------------------------------------------------+---------------------------------
D RKNN: [18:11:22.337] 1   ConvRelu conv1.weight                   INT8      (32,1,3,3)   | 0x00000000 0x00000480 0x00000480
D RKNN: [18:11:22.337] 1   ConvRelu conv1.bias                     INT32     (32)         | 0x00000480 0x00000580 0x00000100
D RKNN: [18:11:22.337] 3   ConvRelu conv2.weight                   INT8      (64,32,3,3)  | 0x00000580 0x00004d80 0x00004800
D RKNN: [18:11:22.337] 3   ConvRelu conv2.bias                     INT32     (64)         | 0x00004d80 0x00004f80 0x00000200
D RKNN: [18:11:22.337] 5   ConvRelu fc1.weight                     INT8      (128,64,7,7) | 0x00004f80 0x00066f80 0x00062000
D RKNN: [18:11:22.337] 5   ConvRelu fc1.bias                       INT32     (128)        | 0x00066f80 0x00067380 0x00000400
D RKNN: [18:11:22.337] 6   Conv     fc2.weight                     INT8      (10,128,1,1) | 0x00067380 0x00067880 0x00000500
D RKNN: [18:11:22.337] 6   Conv     fc2.bias                       INT32     (10)         | 0x00067880 0x00067900 0x00000080
D RKNN: [18:11:22.337] 7   Reshape  /fc2/Gemm_2conv_reshape2_shape INT64     (2)          | 0x00067900*0x00067940 0x00000040
D RKNN: [18:11:22.337] -------------------------------------------------------------------+---------------------------------
D RKNN: [18:11:22.337] ----------------------------------------
D RKNN: [18:11:22.337] Total Internal Memory Size: 31.5KB
D RKNN: [18:11:22.337] Total Weight Memory Size: 414.312KB
D RKNN: [18:11:22.337] ----------------------------------------
D RKNN: [18:11:22.337] <<<<<<<< end: rknn::RKNNMemStatisticsPass
I rknn buiding done.
done
--> Export rknn model
done
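Note the two "W build" warnings in the log above: for performance, the default input and output dtypes of the exported rknn model are changed from float32 to int8. When deploying with the Runtime API on the board, the raw int8 output therefore has to be dequantized (or the runtime asked for float output). A rough sketch of the standard affine dequantization, with made-up scale and zero-point values purely for illustration (the real values come from the model's output tensor attributes at deployment time):

    # Hypothetical illustration of affine dequantization for an int8 output tensor.
    import numpy as np

    def dequantize(q, scale, zero_point):
        # real_value = scale * (quantized_value - zero_point)
        return scale * (np.asarray(q, dtype=np.float32) - zero_point)

    q_out = np.array([-128, -90, 0, 64, 127], dtype=np.int8)   # example int8 logits
    print(dequantize(q_out, scale=0.12, zero_point=-6))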

 

5. Simulated inference log

--> Init runtime environment
I Target is None, use simulator!
done
--> Running model
W inference: The 'data_format' is not set, and its default value is 'nhwc'!
I GraphPreparing : 100%|██████████████████████████████████████████| 12/12 [00:00<00:00, 4513.64it/s]
I SessionPreparing : 100%|████████████████████████████████████████| 12/12 [00:00<00:00, 1368.27it/s]
--> PostProcess
-----TOP 5-----
[8] score=1.00 class="8 8"
[3] score=0.00 class="3 3"
[9] score=0.00 class="9 9"
[2] score=0.00 class="2 2"
[6] score=0.00 class="6 6"
done

 

6. Files in the model conversion project directory

 

 

7. Contents of data.txt

./2.png
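data.txt is the calibration dataset list passed to rknn.build(dataset=...); each line is the path of one sample image. Only a single image (./2.png) is used here, and quantization calibration generally becomes more representative with more samples. A small sketch that generates such a list (the ./calib/ folder of extra PNG samples is hypothetical):

    # Sketch: write a calibration list file for rknn.build(dataset=...),
    # one image path per line. ./calib/ is a hypothetical sample folder.
    from pathlib import Path

    with open('data.txt', 'w') as f:
        for p in sorted(Path('./calib').glob('*.png')):
            f.write(f'./{p.as_posix()}\n')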

 

8. Contents of synset.txt

0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
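Each line of synset.txt is the label string for one class index, which is why the TOP-5 output above prints class="8 8" for index 8. A quick check of the label lookup used in the post-processing step:

    # Quick check of the label lookup used in the post-processing step above.
    with open('./synset.txt', 'r') as f:
        labels = [l.rstrip() for l in f]

    print(labels[8])   # -> "8 8", matching the class string in the TOP-5 log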

  

9. Image used for simulated inference:

 

 

10. GitHub project link

Link hidden; log in or register on the forum to view it.

 

 

 

 

 

 

Reply 1: I wonder whether the OP ran into any model-loading errors or preprocessing problems.

Reply 2: I don't see where the inference parameters are set.
