ljg2np posted on 2024-11-5 09:09

A discussion of training CNNs with PyTorch

<p>A CNN can be trained with PyTorch on either the CPU or the GPU.</p>

<p>1. Install a GPU-enabled build of PyTorch;</p>

<p>2. Use a PyTorch function to check whether a GPU is available:</p>

<pre>
g_support = torch.cuda.is_available()
if g_support:
    device = torch.device('cuda:0')
else:
    device = torch.device('cpu')
</pre>

<p>3. Move the model from the CPU to the GPU:</p>

<pre>
net = Net()
net.to(device)
</pre>
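One detail worth stressing: the input tensors must be moved to the same device as the model, or the forward pass raises a device-mismatch error. A minimal runnable sketch (using a stand-in `torch.nn.Linear` model, since the thread's `Net()` is defined elsewhere):

```python
import torch

# Stand-in model for illustration; the thread's Net() would be used instead.
net = torch.nn.Linear(4, 2)

# Pick the device as described above, then move BOTH the model and the data.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
net.to(device)

x = torch.randn(8, 4).to(device)  # inputs must live on the same device as the model
y = net(x)
print(y.shape)  # torch.Size([8, 2])
```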

ljg2np posted on 2024-11-5 09:17

<p>The GPU-availability check can also be written in a single line:</p>

<pre>
device = "cuda" if torch.cuda.is_available() else "cpu"
</pre>


ljg2np posted on 2024-11-5 09:23

<p>CNNs are particularly well suited to image data: they capture the spatial hierarchy in an image, such as edges, textures, and more complex patterns. In general, a CNN is composed of convolutional layers, pooling layers, and fully connected layers.</p>

ljg2np posted on 2024-11-5 09:36

<p>The convolutional layer, the building block of a CNN, is generally characterized by the following parts:</p>

<p>1. the convolution kernel;</p>

<p>2. the stride (which shrinks the output);</p>

<p>3. the padding (which enlarges the output);</p>

<p>4. the feature map.</p>
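How the kernel size, stride, and padding together determine the feature-map size can be sketched with the standard output-size formula, floor((n + 2p - k) / s) + 1, shown here with illustrative numbers:

```python
def conv_output_size(n, k, s=1, p=0):
    """Spatial size of a conv feature map: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# A 32x32 input with a 3x3 kernel, stride 1, padding 1 keeps its size:
print(conv_output_size(32, k=3, s=1, p=1))  # 32
# Stride 2 shrinks the output; padding enlarges it:
print(conv_output_size(32, k=3, s=2, p=1))  # 16
```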

ljg2np posted on 2024-11-5 09:54

<p>Architecture code for a CNN: the Conv2d(), ReLU(), and MaxPool2d() layers perform the convolution, activation, and pooling operations; a fully connected Linear() layer performs the classification; and the layers are combined using the torch.nn.Sequential container.</p>

<pre>
class CNN(torch.nn.Module):
    ...
        self.model = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2),

            ...

            torch.nn.Flatten(),
            torch.nn.Linear(64*4*4, 512),
            torch.nn.ReLU(),
            torch.nn.Linear(512, 10)
        )
</pre>

ljg2np posted on 2024-11-5 14:40

<p>Inheriting from the Module class, build a network with a single 2-D convolutional layer using torch.nn.Conv2d:</p>

<pre>
class MyNet(torch.nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.conv2d = torch.nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        x = self.conv2d(x)
        return x

net = MyNet()
</pre>

ljg2np posted on 2024-11-5 15:53

<p># Example of defining a CNN (the LeNet network):</p>

<pre>
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5)
        self.pool1 = torch.nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv2 = torch.nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5)
        self.fc1 = torch.nn.Linear(in_features=16 * 5 * 5, out_features=120)
        self.fc2 = torch.nn.Linear(in_features=120, out_features=84)
        self.fc3 = torch.nn.Linear(in_features=84, out_features=10)

    def forward(self, x):
        x = self.pool1(torch.nn.functional.relu(self.conv1(x)))
        x = self.pool1(torch.nn.functional.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = torch.nn.functional.relu(self.fc1(x))
        x = torch.nn.functional.relu(self.fc2(x))
        x = self.fc3(x)
        return x
</pre>
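A quick shape check of the LeNet-style definition, assuming 3x32x32 inputs (e.g. CIFAR-10): each 5x5 convolution removes 4 pixels and each pooling halves the size, so 32 -> 28 -> 14 -> 10 -> 5, which matches the 16 * 5 * 5 flatten. A compact Sequential sketch of the same architecture (illustrative, not the exact class above):

```python
import torch

# Sequential sketch of the LeNet-style network, for a shape sanity check.
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 6, kernel_size=5), torch.nn.ReLU(),
    torch.nn.MaxPool2d(2, 2),
    torch.nn.Conv2d(6, 16, kernel_size=5), torch.nn.ReLU(),
    torch.nn.MaxPool2d(2, 2),
    torch.nn.Flatten(),                       # 16 * 5 * 5 = 400 features
    torch.nn.Linear(16 * 5 * 5, 120), torch.nn.ReLU(),
    torch.nn.Linear(120, 84), torch.nn.ReLU(),
    torch.nn.Linear(84, 10),
)

x = torch.randn(4, 3, 32, 32)  # a batch of 4 CIFAR-10-sized images
print(net(x).shape)            # torch.Size([4, 10])
```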

ljg2np posted on 2024-11-6 10:23

<p>Saving and loading a model with PyTorch. To save and reload the entire model:</p>

<pre>
torch.save(net, "abc.pth")
net = torch.load("abc.pth")
</pre>

<p>Alternatively, save only the parameters (the state_dict), then define the model structure and load the parameters into it:</p>

<pre>
torch.save(net.state_dict(), "abc.pth")
net = torch.nn.Sequential(...)
net.load_state_dict(torch.load("abc.pth"))
</pre>
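A runnable sketch of the state_dict workflow, using a tiny stand-in model and a temporary file (the model and filename are illustrative):

```python
import os
import tempfile

import torch

# Save only the parameters, rebuild the structure, then load them back.
net = torch.nn.Sequential(torch.nn.Linear(4, 2))

path = os.path.join(tempfile.mkdtemp(), "abc.pth")
torch.save(net.state_dict(), path)                 # save the parameters only

net2 = torch.nn.Sequential(torch.nn.Linear(4, 2))  # redefine the same structure
net2.load_state_dict(torch.load(path))             # load the parameters into it

# The reloaded model produces identical outputs:
x = torch.randn(3, 4)
print(torch.allclose(net(x), net2(x)))  # True
```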

ljg2np posted on 2024-11-6 14:48

<p>In a CNN, the convolution kernels represent features. The convolution operation in a convolutional layer produces feature maps, which are a linear activation response; a nonlinear activation (ReLU) is then applied, and a pooling layer further reduces the amount of data.</p>
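The conv -> ReLU -> pool pipeline just described can be traced shape by shape in a small sketch (layer sizes are illustrative):

```python
import torch

# Trace how the data volume shrinks through conv -> ReLU -> pool.
x = torch.randn(1, 3, 32, 32)

conv = torch.nn.Conv2d(3, 6, kernel_size=5)  # linear response: feature maps
relu = torch.nn.ReLU()                       # nonlinear activation
pool = torch.nn.MaxPool2d(kernel_size=2)     # reduces the data volume

f = conv(x)
print(f.shape)               # torch.Size([1, 6, 28, 28])
print(pool(relu(f)).shape)   # torch.Size([1, 6, 14, 14])
```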

ljg2np posted on 2024-11-6 15:15

<p>As one of the earliest convolutional neural networks, LeNet has a convolutional unit composed of a convolutional layer, a sigmoid (or ReLU) activation function, and an average (or max) pooling layer.</p>

ljg2np posted on 2024-11-6 15:26

<p>The fully connected (FC) layer combines the features and acts as the classifier in a CNN, mapping them into the sample label space; a fully connected layer can also be implemented by a convolution operation.</p>
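The claim that an FC layer can be implemented by a convolution can be sketched directly: a Linear layer over a flattened C x H x W input is equivalent to a Conv2d whose kernel covers the entire H x W extent (sizes below are illustrative):

```python
import torch

# A Linear over flattened C*H*W features vs. a Conv2d with an H x W kernel.
C, H, W, N = 16, 5, 5, 10
fc = torch.nn.Linear(C * H * W, N)

conv = torch.nn.Conv2d(C, N, kernel_size=(H, W))
# Reuse the FC weights so both layers compute the same function:
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(N, C, H, W))
    conv.bias.copy_(fc.bias)

x = torch.randn(2, C, H, W)
out_fc = fc(x.flatten(1))      # shape (2, 10)
out_conv = conv(x).flatten(1)  # shape (2, 10)
print(torch.allclose(out_fc, out_conv, atol=1e-5))  # True
```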

ljg2np posted on 2024-11-6 15:35

<p>The convolutional layer uses local mappings to shrink the representation of an object along a given feature direction (a projection); the pooling layer (also called the subsampling layer) shrinks the information block further through a specified operation, generating feature maps from the most salient features.</p>

ljg2np posted on 2024-11-6 16:04

<p>A CNN is a feed-forward neural network with a convolutional structure, which reduces the amount of memory a deep network occupies. Its three key mechanisms (local receptive fields, weight sharing, and pooling layers) effectively reduce the number of network parameters and mitigate overfitting.</p>
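A back-of-the-envelope sketch of how much weight sharing saves, using illustrative layer sizes (a 3x32x32 input mapped to six 28x28 feature maps): a dense layer needs one weight per connection, while a convolutional layer reuses one small kernel per output map.

```python
# Dense mapping 3x32x32 -> 6x28x28: one weight per connection, plus biases.
dense_params = (3 * 32 * 32) * (6 * 28 * 28) + 6 * 28 * 28
# Conv mapping with six shared 5x5x3 kernels, plus one bias per map.
conv_params = 5 * 5 * 3 * 6 + 6

print(dense_params)  # 14455392
print(conv_params)   # 456
```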

ljg2np posted on 2024-11-6 16:21

<p>An artificial neural network (ANN) processes information by adjusting the weights between its internal neurons. In a CNN's convolutional layer, each neuron of an output feature map is locally connected to its input; the neuron's input value is obtained as a weighted sum of the local inputs (using the corresponding connection weights) plus a bias. This process is exactly a convolution, which is how the CNN got its name.</p>
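This statement can be verified in a few lines: one output value of a convolutional layer is recomputed by hand as the weighted sum of the local input patch plus the bias (a single-channel sketch with illustrative sizes):

```python
import torch

# One output neuron of a conv layer = weighted sum of a local patch + bias.
conv = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3)
x = torch.randn(1, 1, 5, 5)
out = conv(x)  # shape (1, 1, 3, 3)

# Recompute the top-left output neuron by hand from the local 3x3 patch:
patch = x[0, 0, 0:3, 0:3]
manual = (patch * conv.weight[0, 0]).sum() + conv.bias[0]
print(torch.allclose(out[0, 0, 0, 0], manual, atol=1e-6))  # True
```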

ljg2np posted on 2024-11-8 10:19

<p>PyTorch provides torch.library for extending, testing, and creating operators in the PyTorch core operator library. Several of its functions and what they do:</p>

<p>1. torch.library.custom_op: creates a new custom operator. This decorator wraps a function as a custom operator so that it can interact with PyTorch's various subsystems.</p>

<p>2. torch.library.opcheck: tests whether a custom operator was registered correctly and checks that the operator behaves consistently across devices.</p>

<p>3. torch.library.register_kernel: registers a device-specific implementation (such as CPU or CUDA) for a custom operator.</p>

<p>4. torch.library.register_autograd: registers a backward formula for a custom operator so that gradients are computed correctly during automatic differentiation.</p>

<p>5. torch.library.register_fake: registers a FakeTensor implementation for a custom operator to support the PyTorch compilation APIs.</p>

通途科技 posted on 2024-11-11 05:34
