#AI挑战营第一站# Training on the MNIST dataset with PyTorch for handwritten digit recognition
<p>Download the MNIST dataset</p>
<pre>
<code># MNIST training set loader, fetches batches of 60 images
self._train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('./data/', train=True, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=60, shuffle=True)
# Test set loader, fetches batches of 500 images
self._test_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('./data/', train=False, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=500, shuffle=True)</code></pre>
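<p>The Normalize values (0.1307,) and (0.3081,) are the commonly quoted mean and standard deviation of the MNIST training pixels, so inputs end up roughly zero-mean with unit variance. As a quick sanity check (a minimal sketch, not part of the original post), one batch can be pulled from an identical loader to confirm its shape:</p>

<pre>
<code># Sanity-check sketch (not in the original post): build the same loader and inspect one batch
import torch
import torchvision

loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('./data/', train=True, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize((0.1307,), (0.3081,))
                               ])),
    batch_size=60, shuffle=True)

imgs, labels = next(iter(loader))
print(imgs.shape)    # torch.Size([60, 1, 28, 28]): 60 grayscale 28x28 images
print(labels.shape)  # torch.Size([60]): one class index (0-9) per image</code></pre>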
<p>Define the network</p>
<pre>
<code># First convolution block
self._conv1_layer = nn.Sequential(
    # Convolution: 1 input channel, 15 output channels, 5x5 kernel
    nn.Conv2d(1, 15, 5),
    # Activation function
    nn.ReLU(),
    # Max pooling: downsampling that keeps the largest value in each window, shrinking the feature map
    nn.MaxPool2d(kernel_size=2, stride=2),
)
self._conv2_layer = nn.Sequential(
    nn.Conv2d(15, 30, 5),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)
self._full_layer = nn.Sequential(
    # Conv outputs are 4-D tensors (N, C, H, W); flatten them for the fully connected layers
    nn.Flatten(),
    nn.Linear(in_features=480, out_features=60),
    nn.ReLU(),
    nn.Linear(in_features=60, out_features=10),
)</code></pre>
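<p>The in_features=480 follows from the shapes: a 28x28 input becomes 24x24 after the first 5x5 convolution, 12x12 after pooling, 8x8 after the second convolution and 4x4 after the second pooling, so each image flattens to 30 x 4 x 4 = 480 values. A short sketch (not in the original post) that pushes a dummy batch through the two conv blocks confirms this:</p>

<pre>
<code># Shape-check sketch (not in the original post): confirm the flattened feature size is 480
import torch
import torch.nn as nn

conv1 = nn.Sequential(nn.Conv2d(1, 15, 5), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2))
conv2 = nn.Sequential(nn.Conv2d(15, 30, 5), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2))

x = torch.randn(1, 1, 28, 28)      # dummy MNIST-sized input
y = conv2(conv1(x))
print(y.shape)                     # torch.Size([1, 30, 4, 4])
print(nn.Flatten()(y).shape)       # torch.Size([1, 480]) -> matches in_features=480</code></pre>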
<p>Check whether GPU training is available</p>
<pre>
<code>if torch.cuda.is_available():
    print("Use CUDA training!")
    self._device = torch.device("cuda")
else:
    print("Use CPU training!")
    self._device = torch.device("cpu")
# Move the model to the selected device so it matches the input tensors
self._cnn.to(self._device)</code></pre>
<p>Training</p>
<pre>
<code>def train(self):
    loss_d = []
    for epoch in range(1, self._epochs + 1):
        self._cnn.train(mode=True)
        for idx, (train_img, train_label) in enumerate(self._train_loader):
            # Copy the batch to the selected device
            train_img = train_img.to(self._device)
            train_label = train_label.to(self._device)
            outputs = self._cnn(train_img)
            # Clear the accumulated gradients
            self._optim.zero_grad()
            loss = self._loss_func(outputs, train_label)
            # Back-propagation
            loss.backward()
            # Update the weights
            self._optim.step()
            # print('Train epoch {}: loss: {:.6f}'.format(epoch, loss.item()))
            loss_d.append(loss.item())
    plt.plot(range(0, len(loss_d)), loss_d)
    plt.show()</code></pre>
<p>Training loss curve</p>
<p>Test loss and accuracy</p>
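<p>The corresponding test loop appears in test() in the complete code below: the network is run over the test set with gradients disabled, the loss of each 500-image batch is recorded, and accuracy is the fraction of samples whose argmax over the 10 output logits matches the label. The key lines are:</p>

<pre>
<code># Excerpt from test() below: accuracy via argmax over the 10 output logits
predictions = torch.argmax(outputs, dim=1)
correct_num += torch.sum(predictions == test_label)
# ... after the loop over the test loader:
acc_num = (correct_num.item() / total_num) * 100</code></pre>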
<p>Recognition results</p>
<p>Saving the .pth and .onnx models</p>
<pre>
<code>def savePthModel(self, pth_name: str):
    torch.save(self._cnn.state_dict(), pth_name)

def saveOnnxModel(self, onnx_name: str):
    # Dummy input defining the export shape; keep it on the same device as the model
    input = torch.randn(1, 1, 28, 28).to(self._device)
    torch.onnx.export(self._cnn, input, onnx_name, verbose=True)</code></pre>
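<p>To double-check the saved files (a minimal sketch, not in the original post; it assumes the onnx package is installed), the state dict can be reloaded into a fresh CNN and the exported graph validated:</p>

<pre>
<code># Verification sketch (not in the original post); assumes the onnx package is installed
import torch
import onnx

model = CNN()
model.load_state_dict(torch.load("model.pth", map_location="cpu"))
model.eval()
with torch.no_grad():
    dummy = torch.randn(1, 1, 28, 28)
    print(model(dummy).argmax(dim=1))  # just confirms the reloaded forward pass runs

onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)   # raises if the exported graph is malformed
print("ONNX model check passed")</code></pre>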
<p>Complete code</p>
<pre>
<code>import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # First convolution block
        self._conv1_layer = nn.Sequential(
            # Convolution: 1 input channel, 15 output channels, 5x5 kernel
            nn.Conv2d(1, 15, 5),
            # Activation function
            nn.ReLU(),
            # Max pooling: downsampling that keeps the largest value in each window
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self._conv2_layer = nn.Sequential(
            nn.Conv2d(15, 30, 5),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self._full_layer = nn.Sequential(
            # Conv outputs are 4-D tensors (N, C, H, W); flatten them for the fully connected layers
            nn.Flatten(),
            nn.Linear(in_features=480, out_features=60),
            nn.ReLU(),
            nn.Linear(in_features=60, out_features=10),
        )

    def forward(self, input):
        # Chain the layers: two convolution blocks, then the fully connected head
        output = self._conv1_layer(input)
        output = self._conv2_layer(output)
        output = self._full_layer(output)
        return output

class Test:
    def __init__(self):
        # MNIST training set loader, fetches batches of 60 images
        self._train_loader = torch.utils.data.DataLoader(
            torchvision.datasets.MNIST('./data/', train=True, download=True,
                                       transform=torchvision.transforms.Compose([
                                           torchvision.transforms.ToTensor(),
                                           torchvision.transforms.Normalize(
                                               (0.1307,), (0.3081,))
                                       ])),
            batch_size=60, shuffle=True)
        # Test set loader, fetches batches of 500 images
        self._test_loader = torch.utils.data.DataLoader(
            torchvision.datasets.MNIST('./data/', train=False, download=True,
                                       transform=torchvision.transforms.Compose([
                                           torchvision.transforms.ToTensor(),
                                           torchvision.transforms.Normalize(
                                               (0.1307,), (0.3081,))
                                       ])),
            batch_size=500, shuffle=True)
        # Number of training epochs
        self._epochs = 3
        self._cnn = CNN()
        # Cross-entropy loss measures the distance between two probability distributions;
        # the smaller it is, the closer the prediction is to the label distribution
        self._loss_func = nn.CrossEntropyLoss()
        # Optimizer
        self._optim = torch.optim.Adam(self._cnn.parameters(), lr=0.01)
        if torch.cuda.is_available():
            print("Use CUDA training!")
            self._device = torch.device("cuda")
        else:
            print("Use CPU training!")
            self._device = torch.device("cpu")
        # Move the model to the selected device so it matches the input tensors
        self._cnn.to(self._device)

    def train(self):
        loss_d = []
        for epoch in range(1, self._epochs + 1):
            self._cnn.train(mode=True)
            for idx, (train_img, train_label) in enumerate(self._train_loader):
                # Copy the batch to the selected device
                train_img = train_img.to(self._device)
                train_label = train_label.to(self._device)
                outputs = self._cnn(train_img)
                # Clear the accumulated gradients
                self._optim.zero_grad()
                loss = self._loss_func(outputs, train_label)
                # Back-propagation
                loss.backward()
                # Update the weights
                self._optim.step()
                # print('Train epoch {}: loss: {:.6f}'.format(epoch, loss.item()))
                loss_d.append(loss.item())
        plt.plot(range(0, len(loss_d)), loss_d)
        plt.show()

    def test(self):
        correct_num = 0
        total_num = 0
        loss_d = []
        self._cnn.train(mode=False)
        with torch.no_grad():
            for idx, (test_img, test_label) in enumerate(self._test_loader):
                test_img = test_img.to(self._device)
                test_label = test_label.to(self._device)
                total_num += test_label.size(0)
                outputs = self._cnn(test_img)
                loss = self._loss_func(outputs, test_label)
                loss_d.append(loss.item())
                # Predicted class is the index of the largest of the 10 logits
                predictions = torch.argmax(outputs, dim=1)
                correct_num += torch.sum(predictions == test_label)
        acc_num = (correct_num.item() / total_num) * 100
        title_str = "Accuracy:" + str(acc_num) + "%"
        plt.title(title_str)
        plt.plot(range(0, len(loss_d)), loss_d)
        plt.show()

    def plotTestResult(self):
        iteration = enumerate(self._test_loader)
        idx, (test_img, test_label) = next(iteration)
        test_img = test_img.to(self._device)
        with torch.no_grad():
            outputs = self._cnn(test_img)
        predictions = torch.argmax(outputs, dim=1)
        fig = plt.figure()
        for i in range(4 * 2):
            plt.subplot(4, 2, i + 1)
            plt.tight_layout()
            # Show the i-th image of the batch (drop the channel dimension for imshow)
            plt.imshow(test_img[i][0].cpu(), cmap='gray', interpolation='none')
            plt.title('real: {}, predict: {}'.format(
                test_label[i].item(), predictions[i].item()
            ))
            plt.xticks([])
            plt.yticks([])
        plt.show()

    def savePthModel(self, pth_name: str):
        torch.save(self._cnn.state_dict(), pth_name)

    def saveOnnxModel(self, onnx_name: str):
        # Dummy input defining the export shape; keep it on the same device as the model
        input = torch.randn(1, 1, 28, 28).to(self._device)
        torch.onnx.export(self._cnn, input, onnx_name, verbose=True)

if __name__ == "__main__":
    mt = Test()
    mt.train()
    mt.test()
    mt.plotTestResult()
    mt.savePthModel("model.pth")
    mt.saveOnnxModel("model.onnx")</code></pre>
<p>Video walkthrough</p>
<p><iframe allowfullscreen="true" frameborder="0" height="450" src="//player.bilibili.com/player.html?bvid=1VE421K781&page=1" style="background:#eee;margin-bottom:10px;" width="700"></iframe><br />
</p>
<p>I really have no interest in Python, so I won't be doing PyTorch, but I hope you'll join in on my post <a href="https://bbs.eeworld.com.cn/thread-1278379-1-1.html">AI到底在搞个“毛儿”</a>.</p>
bigbat posted on 2024-4-19 11:21
I really have no interest in Python, so I won't be doing PyTorch, but I hope you'll join in on my post AI到底在搞个“毛儿” ...
<p>666, “搞个毛儿” indeed</p>
<p>If this gets deployed onto an MCU, how would the C files be generated?</p>
秦天qintian0303 posted on 2024-4-19 23:33
If this gets deployed onto an MCU, how would the C files be generated?
<p>For C there should be other inference frameworks; PyTorch itself definitely won't run there.</p>
<p>Thanks to the OP for sharing this technical content; it is very detailed and practically valuable, well worth studying.</p>
chejm posted on 2024-4-21 21:25
Thanks to the OP for sharing this technical content; it is very detailed and practically valuable, well worth studying.
<p>Thanks for the support, let's keep improving together.</p>
<p>My pick for best contribution goes to the OP... this has been a huge help.</p>
crimsonsnow posted on 2024-4-25 11:12
My pick for best contribution goes to the OP... this has been a huge help.
<p>Hahaha, thanks!</p>
通途科技 posted on 2024-10-29 21:11
Study hard and improve every day; keep it up, everyone, keep it up!!!
<p>Keep it up!</p>