[AI Challenge Camp (Advanced)] Deploying the InsightFace algorithm on the RV1106 for multi-person real-time face recognition: 5. Model Deployment, Part 1: RKNN Inference Test
### 3.1 Preparing the on-board environment
The `RV1106` is a highly integrated IPC vision processor SoC designed for AI applications. It is based on a single-core 32-bit ARM Cortex-A7 with NEON and FPU, and integrates an NPU supporting mixed `INT4 / INT8 / INT16` computation; the RV1106G3 delivers up to 1 TOPS of compute.
#### 1. Building the board system (`buildroot`)
`buildroot` is a simple, efficient, and easily customizable framework for building embedded Linux systems. The `LuckFoxPico-SDK` provides a command-line tool for quickly building a `buildroot` system; the build flow is as follows:
```shell
luckfox@luckfox:~/luckfox-pico$ ./build.sh lunch
You're building on Linux
Lunch menu...pick the Luckfox Pico hardware version:
[0] RV1103_Luckfox_Pico
[1] RV1103_Luckfox_Pico_Mini_A
[2] RV1103_Luckfox_Pico_Mini_B
[3] RV1103_Luckfox_Pico_Plus
[4] RV1106_Luckfox_Pico_Pro_Max
[5] RV1106_Luckfox_Pico_Ultra
[6] RV1106_Luckfox_Pico_Ultra_W
[7] custom
Which would you like? : 3
Lunch menu...pick the boot medium:
[0] SD_CARD
[1] SPI_NAND
Which would you like? : 1
Lunch menu...pick the system version:
[0] Buildroot(Support Rockchip official features)
Which would you like? : 0
Lunching for Default BoardConfig_IPC/BoardConfig-SPI_NAND-Buildroot-RV1103_Luckfox_Pico_Plus-IPC.mk boards...
Running build_select_board succeeded.
luckfox@luckfox:~/luckfox-pico$ ./build.sh
```
> Quick clone of the repository: `git clone https://gitee.com/LuckfoxTECH/luckfox-pico.git`
After obtaining the `LuckFoxPico-SDK` repository, we first need to set up the RV1106-specific 32-bit cross compiler:
```shell
cd tools/linux/toolchain/arm-rockchip830-linux-uclibcgnueabihf/
source env_install_toolchain.sh
```
> This cross compiler will also be used later for on-board application development.
Once the `buildroot` system has been built, it can be flashed with the official tools by following the [official guide](https://wiki.luckfox.com/zh/Luckfox-Pico/Luckfox-Pico-RV1106/Luckfox-Pico-Pro-Max/Luckfox-Pico-SD-Card-burn-image).
Flashing tool: [SocToolKit v1.98](https://files.luckfox.com/wiki/Luckfox-Pico/Software/SocToolKit_v1.98_20240705_01_win.zip)
Official prebuilt firmware: [Baidu Cloud](https://pan.baidu.com/s/1Mhf5JMpkFuZo_TuaGSxBYg?pwd=2sf8)
> The platform variable in the official Luckfox Pico Max firmware image is `rv1103`; if you use that image, set the `target` parameter to `rv1103` for connected-board inference.
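For instance, connected-board inference against a board flashed with that image would pass the target roughly like this (a minimal sketch; the model path is a placeholder):

```python
from rknn.api import RKNN

rknn = RKNN()
rknn.load_rknn('model.rknn')  # placeholder path
# target must match the platform variable baked into the firmware image
rknn.init_runtime(target='rv1103')
```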
#### 2. Configuring the system environment (`librknnmrt.so`)
Because the `rv1106` is a single-core 32-bit processor and no 32-bit build of `rknn-toolkit-lite2` is provided, we cannot run `rknn-toolkit2` or the RKNN Python API on the board; inference on the board must go through the C/C++ RKNN API.
On the board system, confirm that the RKNN runtime library version matches the `rknn-toolkit2` version, then copy `librknnmrt.so` to `/oem/usr/lib` and `rknn_server` to `/oem/usr/bin`, grant execute permission, and start `rknn_server`. See the `rknn-toolkit2` documentation for details.
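For reference, the deployment steps might look like the following (a minimal sketch, assuming the board is reachable over `adb`; the exact source paths inside the rknn-toolkit2/rknpu2 release may differ):

```shell
# On the host: push the runtime library and the server binary to the board
adb push librknnmrt.so /oem/usr/lib/
adb push rknn_server /oem/usr/bin/
# On the board: grant execute permission and start the server
chmod +x /oem/usr/bin/rknn_server
/oem/usr/bin/rknn_server &
```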
### 3.2 RKNN model inference test
`RKNN-Toolkit2` provides the `rknn.api.RKNN` class, which we can use for RKNN model conversion, inference, performance evaluation, and more. Its main capabilities are:
1. Model conversion: converts PyTorch, ONNX, TensorFlow, TensorFlow Lite, Caffe, DarkNet, and other models to the RKNN format.
2. Quantization: quantizes floating-point models to fixed-point models, with support for hybrid quantization.
3. Model inference: dispatches an RKNN model to a specified NPU device and fetches the inference results, or simulates the NPU on the host computer to run the RKNN model.
4. Performance and memory evaluation: runs the RKNN model on a specified NPU device to measure its performance and memory footprint on real hardware.
5. Quantization accuracy analysis: reports the cosine and Euclidean distances between each layer's outputs in the quantized model and in the floating-point model, which helps locate where quantization error arises and suggests ways to improve quantized accuracy.
6. Model encryption: encrypts the entire RKNN model at a chosen security level.
> The list above is for reference only; check what your specific chip actually supports.
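For orientation, a typical conversion-plus-inference session with this class looks roughly like this (a minimal sketch based on the rknn-toolkit2 API; the model path, preprocessing values, and dataset file are placeholders):

```python
from rknn.api import RKNN

rknn = RKNN(verbose=False)
# 1. Configure preprocessing and the target platform
rknn.config(mean_values=[[127.5, 127.5, 127.5]],
            std_values=[[128.0, 128.0, 128.0]],
            target_platform='rv1106')
# 2. Load the source model (ONNX here)
rknn.load_onnx(model='model.onnx')
# 3. Build (quantize) and export the RKNN model
rknn.build(do_quantization=True, dataset='./dataset.txt')
rknn.export_rknn('model.rknn')
# 4. Run on the simulator, or pass target='rv1106' for a connected board
rknn.init_runtime()
# outputs = rknn.inference(inputs=[input_tensor])
rknn.release()
```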
#### 3.2.1 MTCNN
`MTCNN` is a three-stage cascaded face detection model that detects face locations and five facial landmark positions in an image.
The inputs and outputs of each network are:
| Network | Input (img) | offsets | probs | landmarks |
|---|---|---|---|---|
| PNet | (N, 3, H, W) | (N, 4, H, W) | (N, 2, H, W) | / |
| RNet | (N, 3, 24, 24) | (N, 4) | (N, 2) | / |
| ONet | (N, 3, 48, 48) | (N, 4) | (N, 2) | (N, 10) |
> N is the batch_size, H the height, W the width.
> The single-core RV1106 essentially only supports inference with a batch_size of 1.
The model conversion flow is implemented in the script below. After exporting the models, we can call the inference interface for simulated inference or connected-board inference as needed, and evaluate model performance:
```python
import math
import os
import urllib
import traceback
import time
import sys
import numpy as np
import cv2
from rknn.api import RKNN
from onnxruntime import InferenceSession

# ONNX model paths
Pnet_model = '/root/Mtcnn&Arcface/mtcnn/pnet.onnx'
Rnet_model = '/root/Mtcnn&Arcface/mtcnn/rnet.onnx'
Onet_model = '/root/Mtcnn&Arcface/mtcnn/onet.onnx'
# Output RKNN model paths
P_NET_RKNN = 'pnet.rknn'
R_NET_RKNN = 'rnet.rknn'
O_NET_RKNN = 'onet.rknn'
IMG_PATH = '/root/Mtcnn&Arcface/mtcnn/office2.jpg'
DATASET = './dataset.txt'
DATASET_PREFIX = '/root/Mtcnn&Arcface/mtcnn/dataset/'
GENCODE = True
QUANTIZE_ON = True

# if this value is too low the algorithm will use a lot of memory
min_face_size = 15.0
# probability thresholds for the PNet / RNet / ONet stages
thresholds = [0.6, 0.7, 0.8]
# NMS thresholds for the three stages
nms_thresholds = [0.7, 0.7, 0.7]


def build_image_pyramid(img, min_face_size):
    h, w, _ = img.shape
    minl = min(h, w)
    m = 12.0 / min_face_size
    minl *= m
    # generate the image pyramid scales
    scale_list = []
    factor = 0.707
    factor_count = 0
    while minl > 12:
        scale_list.append(m*factor**factor_count)
        minl *= factor
        factor_count += 1
    return scale_list


def convert_to_rknn(rknn, onnx_model, rknn_model, dataset, input_size_list=None):
    # Configure preprocessing: normalization is folded into the RKNN model
    rknn.config(mean_values=[[127.5, 127.5, 127.5]],
                std_values=[[128.0, 128.0, 128.0]],
                target_platform='rv1103')
    # Load the ONNX model
    print(f'--> Loading model: {onnx_model}')
    ret = rknn.load_onnx(model=onnx_model, inputs=['input'], input_size_list=input_size_list)
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')
    # Build the RKNN model (with quantization if enabled)
    print('--> Building model')
    ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=dataset)
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')
    # Export the RKNN model
    print(f'--> Exporting RKNN model to: {rknn_model}')
    ret = rknn.export_rknn(rknn_model)
    if ret != 0:
        print('Export RKNN model failed!')
        exit(ret)
    print('done')


def _generate_bboxes(probs, offsets, scale, threshold):
    """Generate bounding boxes at places where there is probably a face.

    Arguments:
        probs: a float numpy array of shape [h, w].
        offsets: a float numpy array of shape [1, 4, h, w].
        scale: a float number,
            width and height of the image were scaled by this number.
        threshold: a float number.

    Returns:
        a float numpy array of shape [n_boxes, 9].
    """
    stride = 2
    cell_size = 12
    # indices of boxes where there is probably a face
    inds = np.where(probs > threshold)
    if inds[0].size == 0:
        return np.array([])
    # transformations of bounding boxes
    tx1, ty1, tx2, ty2 = [offsets[0, i, inds[0], inds[1]] for i in range(4)]
    offsets = np.array([tx1, ty1, tx2, ty2])
    score = probs[inds[0], inds[1]]
    # P-Net is applied to scaled images,
    # so we need to rescale bounding boxes back
    bounding_boxes = np.vstack([
        np.round((stride*inds[1] + 1.0)/scale),
        np.round((stride*inds[0] + 1.0)/scale),
        np.round((stride*inds[1] + 1.0 + cell_size)/scale),
        np.round((stride*inds[0] + 1.0 + cell_size)/scale),
        score, offsets
    ])
    return bounding_boxes.T


def run_first_stage(image, scale, threshold):
    # resize the image to the network input size for this scale
    height, width, _ = image.shape
    new_height = math.ceil(height * scale)
    new_width = math.ceil(width * scale)
    img_resized = cv2.resize(image, (new_width, new_height))
    img_resized = np.asarray(img_resized, 'float32')
    # normalization is already folded into the model via rknn.config()
    # img_resized = (img_resized - 127.5) / 128.0
    img_resized = np.expand_dims(img_resized, 0)
    img_resized = img_resized.transpose((0, 3, 1, 2))
    # run inference
    # RKNN models need a fixed input size, so each pyramid scale gets its own model
    Pnet = RKNN()
    scale_dataset_path = f'{DATASET_PREFIX}dataset_scale_{scale}.txt'
    scale_rknn_model_path = f'pnet_{new_width}x{new_height}.rknn'
    scale_input_size_list = [[1, 3, math.ceil(img.shape[0]*scale), math.ceil(img.shape[1]*scale)]]
    convert_rknn_and_init(Pnet, Pnet_model, scale_rknn_model_path, scale_dataset_path, scale_input_size_list)
    print('--> Running PNet for scale:', scale)
    outputs = Pnet.inference(inputs=[img_resized], data_format='nchw')
    probs = outputs[1][0, 1, :, :]
    offsets = outputs[0]
    Pnet.release()
    # # inference with onnxruntime instead (for comparison)
    # session = InferenceSession(Pnet_model)
    # outputs = session.run(None, {'input': img_resized})
    # probs = outputs[1][0, 1, :, :]
    # offsets = outputs[0]
    # generate bounding boxes
    boxes = _generate_bboxes(probs, offsets, scale, threshold)
    if len(boxes) == 0:
        return None
    return boxes


def get_image_boxes(bounding_boxes, img, size=24):
    """Cut out boxes from the image.

    Arguments:
        bounding_boxes: a float numpy array of shape [n, 5].
        img: an image as a numpy array of shape [height, width, 3].
        size: an integer, size of cutouts.

    Returns:
        a float numpy array of shape [n, size, size, 3].
    """
    num_boxes = len(bounding_boxes)
    height, width, _ = img.shape
    [dy, edy, dx, edx, y, ey, x, ex, w, h] = correct_bboxes(bounding_boxes, width, height)
    img_boxes = np.zeros((num_boxes, size, size, 3), 'float32')
    for i in range(num_boxes):
        img_box = np.zeros((h[i], w[i], 3), 'uint8')
        img_box[dy[i]:(edy[i] + 1), dx[i]:(edx[i] + 1), :] = \
            img[y[i]:(ey[i] + 1), x[i]:(ex[i] + 1), :]
        # resize to the network input size
        img_box = cv2.resize(img_box, (size, size))
        img_boxes[i] = img_box
    return img_boxes


def nms(boxes, overlap_threshold=0.5, mode='union'):
    """Non-maximum suppression.

    Arguments:
        boxes: a float numpy array of shape [n, 5],
            where each row is (xmin, ymin, xmax, ymax, score).
        overlap_threshold: a float number.
        mode: 'union' or 'min'.

    Returns:
        list with indices of the selected boxes
    """
    if len(boxes) == 0:
        return []
    pick = []
    x1, y1, x2, y2, score = [boxes[:, i] for i in range(5)]
    area = (x2 - x1 + 1.0)*(y2 - y1 + 1.0)
    ids = np.argsort(score)
    while len(ids) > 0:
        last = len(ids) - 1
        i = ids[last]
        pick.append(i)
        ix1 = np.maximum(x1[i], x1[ids[:last]])
        iy1 = np.maximum(y1[i], y1[ids[:last]])
        ix2 = np.minimum(x2[i], x2[ids[:last]])
        iy2 = np.minimum(y2[i], y2[ids[:last]])
        w = np.maximum(0.0, ix2 - ix1 + 1.0)
        h = np.maximum(0.0, iy2 - iy1 + 1.0)
        inter = w * h
        if mode == 'min':
            overlap = inter/np.minimum(area[i], area[ids[:last]])
        elif mode == 'union':
            overlap = inter/(area[i] + area[ids[:last]] - inter)
        ids = np.delete(
            ids,
            np.concatenate([[last], np.where(overlap > overlap_threshold)[0]])
        )
    return pick


def calibrate_box(bboxes, offsets):
    """Transform bounding boxes to be more like true bounding boxes.
    'offsets' is one of the outputs of the nets.

    Arguments:
        bboxes: a float numpy array of shape [n, 5].
        offsets: a float numpy array of shape [n, 4].

    Returns:
        a float numpy array of shape [n, 5].
    """
    x1, y1, x2, y2 = [bboxes[:, i] for i in range(4)]
    w = x2 - x1 + 1.0
    h = y2 - y1 + 1.0
    w = np.expand_dims(w, 1)
    h = np.expand_dims(h, 1)
    # this is what happening here:
    # tx1, ty1, tx2, ty2 = [offsets[:, i] for i in range(4)]
    # x1_true = x1 + tx1*w
    # y1_true = y1 + ty1*h
    # x2_true = x2 + tx2*w
    # y2_true = y2 + ty2*h
    # below is just more compact form of this
    translation = np.hstack([w, h, w, h])*offsets
    bboxes[:, 0:4] = bboxes[:, 0:4] + translation
    return bboxes


def convert_to_square(bboxes):
    """Convert bounding boxes to a square form.

    Arguments:
        bboxes: a float numpy array of shape [n, 5].

    Returns:
        a float numpy array of shape [n, 5],
        squared bounding boxes.
    """
    square_bboxes = np.zeros_like(bboxes)
    x1, y1, x2, y2 = [bboxes[:, i] for i in range(4)]
    h = y2 - y1 + 1.0
    w = x2 - x1 + 1.0
    max_side = np.maximum(h, w)
    square_bboxes[:, 0] = x1 + w*0.5 - max_side*0.5
    square_bboxes[:, 1] = y1 + h*0.5 - max_side*0.5
    square_bboxes[:, 2] = square_bboxes[:, 0] + max_side - 1.0
    square_bboxes[:, 3] = square_bboxes[:, 1] + max_side - 1.0
    return square_bboxes


def correct_bboxes(bboxes, width, height):
    """Crop boxes that are too big and get coordinates
    with respect to cutouts.

    Arguments:
        bboxes: a float numpy array of shape [n, 5],
            where each row is (xmin, ymin, xmax, ymax, score).
        width: a float number.
        height: a float number.

    Returns:
        dy, dx, edy, edx: int numpy arrays of shape [n],
            coordinates of the boxes with respect to the cutouts.
        y, x, ey, ex: int numpy arrays of shape [n],
            corrected ymin, xmin, ymax, xmax.
        h, w: int numpy arrays of shape [n],
            just heights and widths of boxes.
        in the following order:
            [dy, edy, dx, edx, y, ey, x, ex, w, h].
    """
    x1, y1, x2, y2 = [bboxes[:, i] for i in range(4)]
    w, h = x2 - x1 + 1.0, y2 - y1 + 1.0
    num_boxes = bboxes.shape[0]
    x, y, ex, ey = x1, y1, x2, y2
    dx, dy = np.zeros((num_boxes,)), np.zeros((num_boxes,))
    edx, edy = w.copy() - 1.0, h.copy() - 1.0
    ind = np.where(ex > width - 1.0)[0]
    edx[ind] = w[ind] + width - 2.0 - ex[ind]
    ex[ind] = width - 1.0
    ind = np.where(ey > height - 1.0)[0]
    edy[ind] = h[ind] + height - 2.0 - ey[ind]
    ey[ind] = height - 1.0
    ind = np.where(x < 0.0)[0]
    dx[ind] = 0.0 - x[ind]
    x[ind] = 0.0
    ind = np.where(y < 0.0)[0]
    dy[ind] = 0.0 - y[ind]
    y[ind] = 0.0
    return_list = [dy, edy, dx, edx, y, ey, x, ex, w, h]
    return_list = [i.astype('int32') for i in return_list]
    return return_list


def draw(image, boxes, landmarks):
    img_copy = image.copy()
    # draw bounding boxes and landmarks
    for box in boxes:
        x1, y1, x2, y2 = box[:4].astype(int)
        cv2.rectangle(img_copy, (x1, y1), (x2, y2), (255, 0, 0), 2)
    for landmark in landmarks:
        for i in range(5):
            cv2.circle(img_copy, (int(landmark[i]), int(landmark[i + 5])), 2, (0, 255, 0), -1)
    img_rgb = cv2.cvtColor(img_copy, cv2.COLOR_BGR2RGB)
    return img_rgb


def convert_rknn_and_init(rknn, onnx_model, rknn_model, dataset, input_size_list=None, gen_code=GENCODE):
    convert_to_rknn(rknn, onnx_model, rknn_model, dataset, input_size_list)
    if gen_code:
        # generate a C demo project, fed with the first image of the dataset
        with open(dataset, 'r') as f:
            inputs = f.readlines()
        input_list = [line.strip() for line in inputs]
        input = [input_list[0]]
        ret = rknn.codegen(output_path=f'./rknn_app_{rknn_model}', inputs=input, overwrite=True)
        if ret != 0:
            print(f'Generate code for {rknn_model} failed!')
            exit(ret)
        print(f'Generate code for {rknn_model} done')
    ret = rknn.init_runtime()
    if ret != 0:
        print(f'Init runtime environment for {rknn_model} failed!')
        exit(ret)
    print(f'Init runtime environment for {rknn_model} done')


def generate_dataset(bounding_boxes, img, size, dataset_filename, dataset_prefix):
    dataset_path = dataset_prefix + dataset_filename
    with open(dataset_path, 'w') as f:
        for i, img_box in enumerate(get_image_boxes(bounding_boxes, img, size=size)):
            img_path = f'{dataset_prefix}{dataset_filename}_{i}.jpg'
            img_box = cv2.cvtColor(img_box, cv2.COLOR_BGR2RGB)
            cv2.imwrite(img_path, img_box)
            f.write(f'{img_path}\n')
    return dataset_path


def generate_pnet_dataset_for_each_scale(dataset_filename, dataset_prefix, scale_list):
    # generate a PNet quantization dataset for every pyramid scale
    with open(dataset_filename, 'r') as dataset_file:
        image_paths = dataset_file.readlines()
    for scale in scale_list:
        scale_dataset_filename = f'{dataset_prefix}dataset_scale_{scale}.txt'
        with open(scale_dataset_filename, 'w') as scale_dataset_file:
            for image_path in image_paths:
                image_path = image_path.strip()
                img = cv2.imread(image_path)
                if img is None:
                    continue
                h = math.ceil(img.shape[0] * scale)
                w = math.ceil(img.shape[1] * scale)
                img_resized = cv2.resize(img, (w, h))
                resized_image_path = f'{dataset_prefix}img_{w}x{h}.jpg'
                cv2.imwrite(resized_image_path, img_resized)
                scale_dataset_file.write(f'{resized_image_path}\n')
    return True


if __name__ == '__main__':
    # create RKNN objects for RNet and ONet (PNet objects are created per scale)
    rknn_rnet = RKNN(verbose=False)
    rknn_onet = RKNN(verbose=False)

    # load the image and build the image pyramid
    print('--> Load image')
    img = cv2.imread(IMG_PATH)
    img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    scale_list = build_image_pyramid(img, min_face_size)
    print('scales:', ['{:.2f}'.format(s) for s in scale_list])
    print('number of different scales:', len(scale_list))
    input_size_list_Pnet = [[1, 3, math.ceil(img.shape[0]*s), math.ceil(img.shape[1]*s)] for s in scale_list]
    print('input_size_list_Pnet:', input_size_list_Pnet)

    # generate the PNet quantization dataset for each scale
    # generate_pnet_dataset_for_each_scale(DATASET, DATASET_PREFIX, scale_list)
    for scale in scale_list:
        scale_dataset_filename = f'{DATASET_PREFIX}dataset_scale_{scale}.txt'
        with open(scale_dataset_filename, 'w') as scale_dataset_file:
            h = math.ceil(img.shape[0] * scale)
            w = math.ceil(img.shape[1] * scale)
            img_resized = cv2.resize(img, (w, h))
            resized_image_path = f'{DATASET_PREFIX}img_{w}x{h}.jpg'
            cv2.imwrite(resized_image_path, img_resized)
            scale_dataset_file.write(f'{resized_image_path}\n')

    # PNet stage
    print('--> Running PNet')
    bounding_boxes = []
    for scale in scale_list:
        boxes = run_first_stage(img, scale, thresholds[0])
        bounding_boxes.append(boxes)
    bounding_boxes = [i for i in bounding_boxes if i is not None]
    if len(bounding_boxes) == 0:
        print('No bounding boxes found.')
        exit(1)
    bounding_boxes = np.vstack(bounding_boxes)
    print('number of bounding boxes:', len(bounding_boxes))
    img_drawn = draw(img, bounding_boxes, [])
    cv2.imwrite('result_pnet.jpg', img_drawn)
    print('Save results to result_pnet.jpg!')

    print('--> NMS and calibrate bounding boxes')
    # NMS + calibration
    keep = nms(bounding_boxes[:, 0:5], nms_thresholds[0])
    bounding_boxes = bounding_boxes[keep]
    # use the offsets predicted by PNet to calibrate the bounding boxes
    bounding_boxes = calibrate_box(bounding_boxes[:, 0:5], bounding_boxes[:, 5:])
    bounding_boxes = convert_to_square(bounding_boxes)
    bounding_boxes[:, 0:4] = np.round(bounding_boxes[:, 0:4])
    print('number of bounding boxes:', len(bounding_boxes))
    img_drawn = draw(img, bounding_boxes, [])
    cv2.imwrite('result_pnet_1.jpg', img_drawn)

    # generate the RNet quantization dataset
    rnet_dataset = generate_dataset(bounding_boxes, img, 24, 'rnet_dataset.txt', DATASET_PREFIX)
    # convert and initialize the RNet model
    convert_rknn_and_init(rknn_rnet, Rnet_model, R_NET_RKNN, rnet_dataset, input_size_list=[[1, 3, 24, 24]])

    # RNet stage
    print('--> Running RNet')
    img_boxes = get_image_boxes(bounding_boxes, img, size=24)
    # split img_boxes into the RKNN input format (1, 3, 24, 24)
    img_boxes_split = [img_boxes[i:i+1].transpose((0, 3, 1, 2)) for i in range(len(img_boxes))]
    output = [rknn_rnet.inference(inputs=[img_box], data_format='nchw') for img_box in img_boxes_split]
    offsets = np.vstack([out[0] for out in output])
    probs = np.vstack([out[1] for out in output])
    keep = np.where(probs[:, 1] > thresholds[1])[0]
    bounding_boxes = bounding_boxes[keep]
    bounding_boxes[:, 4] = probs[keep, 1].reshape((-1,))
    offsets = offsets[keep]
    print('number of bounding boxes:', len(bounding_boxes))
    img_drawn = draw(img, bounding_boxes, [])
    cv2.imwrite('result_rnet.jpg', img_drawn)
    print('Save results to result_rnet.jpg!')

    print('--> NMS and calibrate bounding boxes')
    # NMS + calibration
    keep = nms(bounding_boxes, nms_thresholds[1])
    bounding_boxes = bounding_boxes[keep]
    bounding_boxes = calibrate_box(bounding_boxes, offsets[keep])
    bounding_boxes = convert_to_square(bounding_boxes)
    bounding_boxes[:, 0:4] = np.round(bounding_boxes[:, 0:4])
    print('number of bounding boxes:', len(bounding_boxes))
    img_drawn = draw(img, bounding_boxes, [])
    cv2.imwrite('result_rnet_1.jpg', img_drawn)
    rknn_rnet.release()

    # generate the ONet quantization dataset
    onet_dataset = generate_dataset(bounding_boxes, img, 48, 'onet_dataset.txt', DATASET_PREFIX)
    # convert and initialize the ONet model
    convert_rknn_and_init(rknn_onet, Onet_model, O_NET_RKNN, onet_dataset, input_size_list=[[1, 3, 48, 48]])

    # ONet stage
    print('--> Running ONet')
    img_boxes = get_image_boxes(bounding_boxes, img, size=48)
    # split img_boxes into the RKNN input format (1, 3, 48, 48)
    img_boxes_split = [img_boxes[i:i+1].transpose((0, 3, 1, 2)) for i in range(len(img_boxes))]
    output = [rknn_onet.inference(inputs=[img_box], data_format='nchw') for img_box in img_boxes_split]
    probs = np.vstack([out[1] for out in output])
    offsets = np.vstack([out[0] for out in output])
    landmarks = np.vstack([out[2] for out in output])
    keep = np.where(probs[:, 1] > thresholds[2])[0]
    bounding_boxes = bounding_boxes[keep]
    bounding_boxes[:, 4] = probs[keep, 1].reshape((-1,))
    offsets = offsets[keep]
    landmarks = landmarks[keep]

    # compute landmark points in image coordinates
    width = bounding_boxes[:, 2] - bounding_boxes[:, 0] + 1.0
    height = bounding_boxes[:, 3] - bounding_boxes[:, 1] + 1.0
    xmin, ymin = bounding_boxes[:, 0], bounding_boxes[:, 1]
    landmarks[:, 0:5] = np.expand_dims(xmin, 1) + np.expand_dims(width, 1) * landmarks[:, 0:5]
    landmarks[:, 5:10] = np.expand_dims(ymin, 1) + np.expand_dims(height, 1) * landmarks[:, 5:10]
    print('number of bounding boxes:', len(bounding_boxes))
    img_drawn = draw(img, bounding_boxes, landmarks)
    cv2.imwrite('result_onet.jpg', img_drawn)
    print('Save results to result_onet.jpg!')

    # NMS + calibration
    bounding_boxes = calibrate_box(bounding_boxes, offsets)
    keep = nms(bounding_boxes, nms_thresholds[2], mode='min')
    bounding_boxes = bounding_boxes[keep]
    landmarks = landmarks[keep]
    print('number of bounding boxes:', len(bounding_boxes))

    # draw the final result
    img_drawn = draw(img, bounding_boxes, landmarks)
    cv2.imwrite('result.jpg', img_drawn)
    print('Save results to result.jpg!')
    rknn_onet.release()
```
The code has two parts: the first converts the `MTCNN` models to `RKNN` models, and the second runs inference with the `RKNN` models.
The core of the conversion part is `rknn.config()` and `rknn.build()`: `rknn.config()` sets the model's preprocessing parameters, and `rknn.build()` builds the RKNN model.
- The config already folds the input normalization into the model, so the input image does not need to be normalized again at inference time.
- The `do_quantization` parameter of `build()` controls whether to quantize, and `dataset` specifies the quantization dataset. The RV1106 does not support non-quantized models, so quantization is mandatory here.
- For the quantization dataset, I preprocess the inference image and the resulting box crops, save them to files, write the file paths into `dataset.txt`, and pass that file to `build()` as the quantization dataset (see the example after the note below).
> The ONNX models I use are essentially Caffe models, so the input color format is BGR; keep this in mind when deploying the application later.
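For reference, the quantization dataset file is just a plain text file with one calibration image path per line; with the `generate_dataset()` helper above it would look something like this (paths are illustrative):

```
/root/Mtcnn&Arcface/mtcnn/dataset/rnet_dataset.txt_0.jpg
/root/Mtcnn&Arcface/mtcnn/dataset/rnet_dataset.txt_1.jpg
/root/Mtcnn&Arcface/mtcnn/dataset/rnet_dataset.txt_2.jpg
```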
The inference part first loads the input image and builds an image pyramid from it. `PNet` runs on every pyramid scale to produce candidate boxes, which are filtered with `NMS` and calibrated. The calibrated boxes are fed to `RNet` to obtain more accurate boxes, followed by another round of `NMS` and calibration. Finally, the calibrated boxes are fed to `ONet`, which produces the final boxes and the five landmarks.
> Because `rknn` models on the `rv1106` require a fixed input size, each pyramid scale needs its own RKNN model. I therefore loop over the scales and generate a separate, correspondingly sized RKNN model for each one.
The test image is shown below:
The inference results of each stage:
PNet:
After NMS and calibration:
RNet:
After NMS and calibration:
ONet:
After NMS and calibration:
These are the `MTCNN` inference results: the model accurately detects the face positions and the five landmark positions in the image.
#### 3.2.2 ArcFace
`ArcFace` is a face recognition model: it maps a face image into a high-dimensional embedding space, and the distance between two embeddings in that space determines whether the two face images belong to the same person.
The model's input is a face image and its output is a 512-dimensional vector.
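As a quick illustration of the comparison step (a minimal sketch; `emb1` and `emb2` stand for two 512-dimensional embeddings produced by ArcFace, and the 0.5 threshold is only a placeholder):

```python
import numpy as np

def cosine_similarity(emb1: np.ndarray, emb2: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    emb1 = emb1 / np.linalg.norm(emb1)
    emb2 = emb2 / np.linalg.norm(emb2)
    return float(np.dot(emb1, emb2))

# Random vectors standing in for real embeddings
emb1, emb2 = np.random.rand(512), np.random.rand(512)
same_person = cosine_similarity(emb1, emb2) > 0.5  # threshold is a placeholder
```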
For this part of the conversion I drew on `zhuxirui`'s work: he converted the `ArcFace` model to an `RKNN` model and provided inference code for it.
> Post: [bbs.eeworld.com.cn/thread-1302820-1-1.html](https://bbs.eeworld.com.cn/thread-1302820-1-1.html)