Embedded Engineer AI Challenge Camp: RV1106 Face Recognition + RKNN Inference Test
Last edited by 冬天的木鱼 on 2025-01-21 08:05.
<p>This test follows the wiki documentation exactly: <a href="https://wiki.luckfox.com/zh/Luckfox-Pico/Luckfox-Pico-RKNN-Test" target="_blank">https://wiki.luckfox.com/zh/Luckfox-Pico/Luckfox-Pico-RKNN-Test</a>.</p>
<p>The host system is Ubuntu 22.04.5 with Python 3.10; the Tsinghua mirror needs to be configured.</p>
<p> </p>
<p>Any errors along the way were caused by my own missteps; following the wiki document step by step raises no major problems.</p>
<p>Step1. Install Miniconda</p>
<p>Step1.1 Download the installer</p>
<p>wget <a href="https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-4.6.14-Linux-x86_64.sh" target="_blank">https://mirrors.tuna.tsinghua.edu.cn/anaconda/miniconda/Miniconda3-4.6.14-Linux-x86_64.sh</a></p>
<p> </p>
<div style="text-align: center;"></div>
<div>Step1.2 Install Miniconda</div>
<div>chmod 777 Miniconda3-4.6.14-Linux-x86_64.sh<br />
bash Miniconda3-4.6.14-Linux-x86_64.sh</div>
<div> </div>
<div style="text-align: center;"></div>
<div>Step1.3 Configure the shell startup file</div>
<div>nano ~/.bashrc<br />
# add the following line at the end of the file:<br />
source ~/miniconda3/bin/activate<br />
# to leave the conda environment later, run:<br />
conda deactivate</div>
<div style="text-align: center;"></div>
<div>Step2. Download rknn-toolkit2</div>
<div>Run: git clone <a href="https://github.com/rockchip-linux/rknn-toolkit2" target="_blank">https://github.com/rockchip-linux/rknn-toolkit2</a></div>
<div style="text-align: center;"> </div>
<div style="text-align: center;"></div>
<div>Step2.1 Install the RKNN-ToolKit2 dependency packages</div>
<div>Run: pip3 install -r rknn-toolkit2/packages/requirements_cp310-1.6.0.txt</div>
<div style="text-align: center;"></div>
<div>Step2.2 Create the RKNN-Toolkit2 conda environment</div>
<div>
<p>Create the RKNN-Toolkit2 development conda environment; the -n flag sets the environment name, and the Python version is pinned to 3.8</p>
<p>conda create -n RKNN-Toolkit2 python=3.8</p>
</div>
<div style="text-align: center;"></div>
<div>Step2.3 Enter the RKNN-Toolkit2 conda environment</div>
<div>conda activate RKNN-Toolkit2</div>
<div> </div>
<div>Step2.4 Get the RKNN-Toolkit2 package (the same repository as Step2; skip if already cloned)
<pre tabindex="0">
git clone <a href="https://github.com/rockchip-linux/rknn-toolkit2.git" target="_blank">https://github.com/rockchip-linux/rknn-toolkit2.git</a>
</pre>
</div>
<div>Step2.5 Install the RKNN-Toolkit2 dependency libraries<br />
pip install tf-estimator-nightly==2.8.0.dev2021122109<br />
pip install -r rknn-toolkit2/packages/requirements_cp38-1.6.0.txt -i <a href="https://pypi.mirrors.ustc.edu.cn/simple/" target="_blank">https://pypi.mirrors.ustc.edu.cn/simple/</a></div>
<div> </div>
<div>Step2.6 Install RKNN-Toolkit2</div>
<div>pip install rknn-toolkit2/packages/rknn_toolkit2-1.6.0+81f21f4d-cp38-cp38-linux_x86_64.whl</div>
<div> </div>
<div>Step2.7 Verify the installation</div>
<div>python<br />
>>> from rknn.api import RKNN</div>
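<p>Beyond the interactive import above, the same check can be scripted. A minimal sketch (a hypothetical helper using only the standard library) that reports whether the toolkit package is importable in the active environment:</p>

```python
import importlib.util


def toolkit_installed(module="rknn.api"):
    # True if the given module (default: RKNN-Toolkit2's python API)
    # can be imported in the current environment.
    try:
        return importlib.util.find_spec(module) is not None
    except ModuleNotFoundError:
        # Raised when the parent package (e.g. "rknn") is missing entirely
        return False
```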
<div> </div>
<div>Testing ONNX model deployment</div>
<div>ONNX model information:</div>
<div>
<div style="text-align: center;"></div>
<p> </p>
</div>
<div>Step3 Face detection: retinaface</div>
<div>Step3.1 Get the retinaface source code</div>
<div>
<pre tabindex="0">
<code class="hljs">git clone <a href="https://github.com/bubbliiiing/retinaface-pytorch.git" target="_blank">https://github.com/bubbliiiing/retinaface-pytorch.git</a>
</code>
</pre>
</div>
<div style="text-align: center;"></div>
<div>Step3.2 Enter the source directory</div>
<div>cd retinaface-pytorch</div>
<div>
<p>Step3.3 Set up the model training environment</p>
<p>conda create -n retinaface python=3.6</p>
<p>Step3.4 Enter the conda virtual environment and install the runtime dependencies</p>
<p>conda activate retinaface<br />
pip install -r requirements.txt</p>
<p> </p>
</div>
<div style="text-align: center;"></div>
<div>The model_data folder holds the trained .pth weight files; pick the weights that use mobilenet as the backbone and export them to .onnx format</div>
<div><strong><span style="color:#c0392b;">Note: after running the steps above, model_data does NOT contain the .pth weight file; it has to be downloaded manually</span></strong></div>
<div style="text-align: center;"></div>
<div>Step3.5 Create the ONNX-export python script <code>export_onnx.py</code> in the project folder</div>
<div>from nets.retinaface import RetinaFace<br />
from utils.config import cfg_mnet<br />
import torch<br />
<br />
model_path = 'model_data/Retinaface_mobilenet0.25.pth'  # weights path<br />
model = RetinaFace(cfg=cfg_mnet, pretrained=False)  # build the model<br />
device = torch.device('cpu')<br />
model.load_state_dict(torch.load(model_path, map_location=device), strict=False)  # load the weights<br />
net = model.eval()<br />
example = torch.rand(1, 3, 640, 640)  # fixed example input<br />
torch.onnx.export(net, example, 'model_data/retinaface.onnx', verbose=True, opset_version=9)  # export</div>
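<p>The script above fixes the network input at 1×3×640×640, so at inference time an arbitrary image has to be scaled and padded (letterboxed) into a 640×640 canvas. A sketch of that arithmetic (a hypothetical helper, not part of the wiki scripts):</p>

```python
def letterbox_params(src_w, src_h, dst=640):
    # Scale factor that fits the source image inside a dst x dst canvas
    # while preserving aspect ratio, plus the resulting size and padding.
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x, pad_y = (dst - new_w) // 2, (dst - new_h) // 2
    return scale, (new_w, new_h), (pad_x, pad_y)
```

Detections predicted in the 640×640 frame are mapped back to the original image by subtracting the padding and dividing by the scale.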
<div> </div>
<div>Step3.6 Run the script to produce the ONNX file</div>
<div>python export_onnx.py</div>
<div style="text-align: center;"></div>
<div>Step4. Face feature extraction: Facenet</div>
<div>Step4.1 Get the facenet source code</div>
<div>git clone <a href="https://github.com/bubbliiiing/facenet-pytorch.git" target="_blank">https://github.com/bubbliiiing/facenet-pytorch.git</a></div>
<div>Note: the screenshot was taken with the retinaface environment still active; conda deactivate should be run first to leave it, and then the git command. That is a bug in the capture.</div>
<div style="text-align: center;"></div>
<div>Step4.2 Enter the source directory</div>
<div>cd facenet-pytorch</div>
<div> </div>
<div>
<div>Step4.3 Set up the model training environment</div>
<div>conda create -n facenet python=3.6</div>
</div>
<div> </div>
<div style="text-align: center;"></div>
<div>Step4.4 Enter the conda virtual environment and install the runtime dependencies</div>
<div>conda activate facenet<br />
pip install -r requirements.txt</div>
<div> </div>
<div style="text-align: center;"></div>
<div>Step4.5 Create the ONNX-export python script <code>export_onnx.py</code> in the project folder</div>
<div> </div>
<div>from nets.facenet import Facenet<br />
import torch<br />
<br />
model_path = 'model_data/facenet_mobilenet.pth'  # weights path<br />
model = Facenet(backbone="mobilenet", mode="predict", pretrained=True)  # build the model<br />
device = torch.device('cpu')<br />
model.load_state_dict(torch.load(model_path, map_location=device), strict=False)  # load the weights<br />
example = torch.rand(1, 3, 160, 160)  # fixed example input<br />
torch.onnx.export(model, example, 'model_data/facenet.onnx', verbose=True, opset_version=9)  # export</div>
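<p>Facenet maps each 1×3×160×160 face crop to a fixed-length embedding, and recognition then reduces to comparing embedding distances against a threshold. A minimal sketch using only the standard library; the 1.1 threshold is an illustrative assumption, not a value from the wiki:</p>

```python
import math


def embedding_distance(a, b):
    # Euclidean (L2) distance between two face embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def is_same_person(a, b, threshold=1.1):
    # Embeddings closer than the threshold are treated as the same face
    return embedding_distance(a, b) < threshold
```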
<div> </div>
<div>Step4.6 Run the script to produce the ONNX file (inside the facenet conda environment)</div>
<div>python export_onnx.py</div>
<div style="text-align: center;"></div>
<div>Step 5. Object detection: YoloV5</div>
<div>
<p>Yolov5 works by extracting image features with a convolutional neural network and predicting a detection for each cell of a grid laid over the image: each prediction gives a bounding-box position, a class, and a confidence score. Finally, non-maximum suppression (NMS) filters and merges heavily overlapping boxes to produce the final detections.</p>
<p> </p>
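<p>The NMS step described above can be sketched in a few lines (a pure-python illustration with hypothetical box/score inputs; boxes are (x1, y1, x2, y2)):</p>

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def nms(boxes, scores, iou_thresh=0.45):
    # Visit boxes from highest to lowest score; keep a box only if it
    # does not overlap any already-kept box beyond the threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```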
<p>Step5.1 Get the Yolov5 source code</p>
<p>git clone <a href="https://github.com/airockchip/yolov5.git" target="_blank">https://github.com/airockchip/yolov5.git</a></p>
</div>
<div>The screenshot is still inside the facenet environment; conda deactivate should be run first to leave it. That is a bug in the capture.</div>
<div> </div>
<div>Step5.2 Enter the Yolov5 source directory</div>
<div>cd yolov5</div>
<div> </div>
<div>Step5.3 Set up the model training environment</div>
<div>conda create -n yolov5 python=3.9</div>
<div><br />
</div>
<div> </div>
<div style="text-align: center;"></div>
<div style="text-align: center;"> </div>
<div>
<div>Step5.4 Enter the conda virtual environment and install the runtime dependencies</div>
<div>conda activate yolov5<br />
pip install -r requirements.txt</div>
</div>
<div style="text-align: center;"></div>
<div>Step5.5 Export the ONNX file from the default weights (inside the yolov5 conda environment)</div>
<div>python export.py --rknpu --weight yolov5s.pt</div>
<div style="text-align: center;"> </div>
<div> </div>
<div>Step6. RKNN application example</div>
<div>Step6.1 Get the example source code</div>
<div>git clone <a href="https://github.com/LuckfoxTECH/luckfox_pico_rknn_example.git" target="_blank">https://github.com/LuckfoxTECH/luckfox_pico_rknn_example.git</a></div>
<div> </div>
<div>Step6.2 Enter the <code>scripts/luckfox_onnx_to_rknn</code> directory</div>
<div>cd luckfox_pico_rknn_example/scripts/luckfox_onnx_to_rknn</div>
<div> </div>
<div>Step 6.3 Enter the RKNN-Toolkit2 conda development environment</div>
<div>conda activate RKNN-Toolkit2</div>
<div> </div>
<div>Step6.4 Model conversion</div>
<div>cd convert</div>
<div>python convert.py ../model/retinaface.onnx ../dataset/retinaface_dataset.txt ../model/retinaface.rknn Retinaface</div>
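<p>The conversion takes four arguments here: the ONNX model, a dataset list file, the output .rknn path, and the model name. The dataset file is simply one image path per line, used by RKNN-Toolkit2 as calibration samples for quantisation; a sketch of generating such a file (hypothetical helper):</p>

```python
from pathlib import Path


def write_dataset_list(image_dir, out_txt, pattern="*.jpg"):
    # Write one calibration-image path per line, the plain-text format
    # expected by RKNN-Toolkit2's quantisation dataset file.
    paths = sorted(str(p) for p in Path(image_dir).glob(pattern))
    Path(out_txt).write_text("\n".join(paths) + "\n")
    return len(paths)
```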
<div> </div>
<div>Step7 rknn_model_zoo application example</div>
<div>Step7.1 Download rknn_model_zoo</div>
<div>git clone <a href="https://github.com/airockchip/rknn_model_zoo.git" target="_blank">https://github.com/airockchip/rknn_model_zoo.git</a></div>
<div> </div>
<div style="text-align: center;"></div>
<div>Step7.2 Get the Yolov5 ONNX model file</div>
<div>cd &lt;rknn_model_zoo Path&gt;/rknn_model_zoo/examples/yolov5/model<br />
chmod a+x download_model.sh<br />
./download_model.sh</div>
<div style="text-align: center;"></div>
<div>Step7.3 Run the model conversion script <code>convert.py</code> in the <code>rknn_model_zoo/examples/yolov5/python</code> directory.<br />
Usage:</div>
<div>conda activate RKNN-Toolkit2<br />
cd &lt;rknn_model_zoo Path&gt;/rknn_model_zoo/examples/yolov5/python<br />
python3 convert.py ../model/yolov5s.onnx rv1106</div>
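<p>For reference, the command line above maps onto roughly this argument layout (a hypothetical reconstruction for illustration; the actual <code>convert.py</code> in the repository is authoritative):</p>

```python
import argparse


def build_parser():
    # Rough reconstruction of the convert.py usage shown above; the real
    # script lives at rknn_model_zoo/examples/yolov5/python/convert.py
    p = argparse.ArgumentParser(description="Convert an ONNX model to RKNN")
    p.add_argument("onnx_model", help="path to the ONNX file, e.g. ../model/yolov5s.onnx")
    p.add_argument("platform", help="target SoC, e.g. rv1106")
    return p
```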
<div> </div>
<div style="text-align: center;"></div>
<p> </p>
<p>After the processing shown below, the models are ported to the development board</p>
<div style="text-align: center;"></div>
<div style="text-align: center;"></div>
<p>Finally, running it produces the result shown in the video below</p>
<p> </p>