"Deep Learning": A Simple CNN Model Implementation
<div class='showpostmsg'><div>After working through the theory in the book Deep Learning, I tried a simple CNN implementation.</div><div>A neural network is organized into layers of neurons. There are typically three types of layers:<br />
Input layer: passes the input data to the hidden layers.<br />
Hidden layers: the intermediate layers between input and output; they perform the computations and learn the patterns in the given data.<br />
Output layer: produces the final output based on the hidden layers' computations.</div>
<div>Every connection between neurons carries a weight that represents its importance. During training, these weights are adjusted to minimize the difference between predicted and actual outputs.<br />
Each neuron also has a bias term, which lets the network account for variability in the input that the weights alone do not capture.</div>
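As a minimal sketch of what a single neuron computes (the input, weight, and bias values below are illustrative, not from the article):

```python
import numpy as np

# One neuron: weighted sum of the inputs plus a bias, passed through ReLU.
x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.8, 0.2, -0.5])   # one weight per connection
b = 0.1                          # bias term

z = np.dot(w, x) + b             # weighted sum: 0.4 - 0.2 - 1.0 + 0.1 = -0.7
a = max(0.0, z)                  # ReLU activation: negative sums become 0
print(z, a)
```

Training adjusts `w` and `b`; the activation decides how strongly the neuron "fires" for a given input.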
<div>During training, the network's weights and biases are adjusted to minimize a loss function, updating the parameters in the direction that reduces the error between predicted and actual outputs.</div>
<div>In forward propagation, data flows from the input layer through the hidden layers to the output layer, where the prediction is made. The main goal of forward propagation is to generate a prediction based on the network's current weights and biases.<br />
Backpropagation updates the CNN's weights and biases according to the error computed by the loss function; its main purpose is to optimize the network's parameters so that it learns to make more accurate predictions over time.</div>
<div>Together, these two processes are essential for training a CNN to perform tasks such as image classification, object detection, and image segmentation effectively.</div>
<div>The learning rate controls how much the weights are adjusted relative to the loss gradient during training. It is a key hyperparameter that affects how quickly the network learns.</div>
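A toy gradient-descent step makes the role of the learning rate concrete (the loss and values here are illustrative):

```python
import torch

# One gradient-descent step on a toy loss, showing how the learning rate
# scales the weight update.
w = torch.tensor([2.0], requires_grad=True)
lr = 0.1                      # learning rate

loss = (w - 1.0) ** 2         # toy loss with its minimum at w = 1
loss.backward()               # d(loss)/dw = 2 * (w - 1) = 2.0

with torch.no_grad():
    w -= lr * w.grad          # w: 2.0 -> 2.0 - 0.1 * 2.0 = 1.8
print(w.item())
```

A larger `lr` moves `w` toward the minimum faster but risks overshooting; a smaller one is stabler but slower.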
<div>Neural networks come in different architectures suited to different kinds of data and tasks:<br />
Supervised learning: learning from labeled data to predict or classify new data. Artificial neural networks (ANNs), feed-forward neural networks (FNNs), and convolutional neural networks (CNNs) are common examples.</div>
<div>Unsupervised learning: learning patterns and relationships in data, usually without explicit labels. Autoencoders and generative adversarial networks (GANs) are examples.</div>
<div>A convolutional neural network (CNN) is a supervised learning model that is particularly effective for tasks involving image or video analysis; through a hierarchy of layers it learns to recognize patterns such as edges, textures, and shapes.<br />
Let's take a closer look at CNNs:</div>
<div><strong>1. CNN Architecture:</strong><br />
<strong>i. Convolutional layers:</strong> These layers are the basic building blocks of a CNN; they extract important features such as edges, textures, and shapes from the input data. They involve:<br />
* Convolution operation: a small matrix called a kernel (or filter) slides over the input data (usually an image), performing element-wise multiplication and summing the results to produce a single output pixel in the feature map. Each filter extracts a specific feature from the input, such as an edge, a texture, or a more complex pattern.<br />
* Parameters: these include the filter size (kernel size), the number of filters, and the stride, i.e. how far the filter moves across the input at each step.<br />
* Activation function: typically, ReLU (rectified linear unit) is used as the activation function after the convolution operation.<br />
<strong>ii. Pooling layers:</strong> These layers downsample the feature maps produced by the convolutional layers (reducing dimensions such as width and height). Max pooling and average pooling are the common methods; they retain the most important features while reducing computational cost.<br />
The pool size (kernel size) and stride determine how the pooling is applied across the input.<br />
<strong>iii. Fully connected layers (dense layers):</strong> These layers combine the features learned by the convolutional and pooling layers into a prediction. Every neuron in one layer is connected to every neuron in the next, producing the final output.<br />
Each connection between neurons has its own weight and bias, which are adjusted during training to improve model accuracy.</div>
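The effect of these layers on tensor shapes can be sketched quickly in PyTorch (the channel and filter counts below are illustrative):

```python
import torch
import torch.nn as nn

# A 3x3 convolution with padding=1 keeps the spatial size, while a 2x2
# max pool with stride 2 halves it.
x = torch.randn(1, 3, 32, 32)                      # one 32x32 RGB image

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # 16 filters
pool = nn.MaxPool2d(kernel_size=2, stride=2)

y = conv(x)       # -> (1, 16, 32, 32): more channels, same spatial size
z = pool(y)       # -> (1, 16, 16, 16): spatial size halved
print(y.shape, z.shape)
```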
<div><strong>2. A Worked CNN Example:</strong><br />
Detecting horizontal lines in a 3x3 image with a small filter.</div>
<div>Input image: consider a 3x3 grayscale image matrix (pixel values as integers):</div>
<div>[[1, 2, 3],</div>
<div>[4, 5, 6],</div>
<div>[7, 8, 9]]</div>
<div>Filter (kernel): let's define a simple 2x2 filter that detects horizontal lines. It has positive weights in its top row and negative weights in its bottom row, so it responds to horizontal transitions from light to dark (or dark to light) pixels.</div>
<div>[[1, 1],</div>
<div>[-1, -1]]</div>
<div>Applying the filter: slide the filter over the input image and compute the dot product at each position, starting with the 2x2 sub-region in the top-left corner of the image:</div>
<div>[[1, 2],</div>
<div>[4, 5]]</div>
<div>Compute the product with the filter:</div>
<div>(1 * 1) + (2 * 1) + (4 * -1) + (5 * -1) = 1 + 2 - 4 - 5 = -6</div>
<div>Write the result (-6 in this case) into the corresponding position of the output feature map.</div>
<div>Sliding: move the filter one pixel to the right and compute again.</div>
<div>[[2, 3],</div>
<div>[5, 6]]</div>
<div>(2 * 1) + (3 * 1) + (5 * -1) + (6 * -1) = 2 + 3 - 5 - 6 = -6</div>
<div>Output feature map: after sliding the filter over the entire image, the resulting 2x2 output feature map looks like this:</div>
<div>[[-6, -6],</div>
<div>[-6, -6]]</div>
<div>Each value in the output feature map represents the filter's response to the corresponding sub-region of the input image.</div>
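The worked example above can be reproduced in a few lines of NumPy:

```python
import numpy as np

# Slide the 2x2 horizontal-line filter over the 3x3 image and collect
# the dot products into the 2x2 output feature map.
image = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
kernel = np.array([[1, 1],
                   [-1, -1]])

out = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        patch = image[i:i+2, j:j+2]       # 2x2 sub-region under the filter
        out[i, j] = np.sum(patch * kernel) # element-wise product, then sum
print(out)   # [[-6. -6.]
             #  [-6. -6.]]
```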
<div>A Simple CNN Model</div>
<div>Below is a basic example of how to implement a simple CNN model with PyTorch:</div>
<div><strong>1. Import the Necessary Libraries:</strong></div>
<div>import torch is PyTorch's core library, providing basic tensor operations and neural-network functionality.</div>
<div>import numpy as np is used for numerical operations and array handling as part of data manipulation and preprocessing.</div>
<div>import torchvision and import torchvision.transforms as transforms handle image datasets, pretrained models, and image preprocessing, supporting computer-vision tasks.</div>
<div>import time</div>
<div>import torch</div>
<div>import torch.nn as nn</div>
<div>import numpy as np</div>
<div>from torch.utils.data import random_split, DataLoader</div>
<div>import torch.optim as optim</div>
<div>import torchvision</div>
<div>import torchvision.transforms as transforms</div>
<div>import torch.nn.functional as F</div>
<div><strong>2. Set the Parameters:</strong></div>
<div>The following parameters are essential for configuring the model's training and validation process.</div>
<div>batch_size specifies the number of samples processed in each training iteration and balances training speed against memory usage.</div>
<div>valid_size specifies the fraction of the dataset reserved for validation, so the model can be evaluated on unseen data during training to monitor performance and guard against overfitting.</div>
<div>num_epochs sets how many times the entire dataset passes through the model during training.</div>
<div>num_workers specifies the number of worker processes used for data loading and can be tuned to improve performance in more complex setups.</div>
<div>batch_size = 64</div>
<div>valid_size = 0.2</div>
<div>num_epochs = 20</div>
<div>num_workers = 4</div>
<div><strong>3. Prepare the Dataset:</strong></div>
<div>transform prepares the images before they are used to train the model.</div>
<div>transforms.ToTensor() converts each image from a regular picture format into the structured numeric format (a tensor) that the model can process. transforms.Normalize() standardizes the pixel values, so every image's values fall into a similar range, which helps the model learn better and more consistently.</div>
<div>These steps ensure all images are in a format and range the model can work with effectively.</div>
<div>transform = transforms.Compose(</div>
<div>[transforms.ToTensor(),</div>
<div>transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])</div>
<div><strong>4. Load and Split the Dataset:</strong></div>
<div>Let's download and load the CIFAR10 dataset, which contains images from different classes for training and testing purposes.</div>
<div>full_train_data holds the training images and labels, while test_data contains the test images and labels.</div>
<div>The valid_size parameter specifies that 20% of the training data should be used for validation. This fraction is computed and used to split full_train_data into train_data and valid_data with random_split, so that 80% of the data goes to training and 20% to validation.</div>
<div>The DataLoader class then creates a loader for each dataset: train_loader trains the model on shuffled batches, valid_loader validates it on unshuffled batches, and test_loader evaluates it on the whole test dataset.</div>
<div>full_train_data = torchvision.datasets.CIFAR10('data', train=True, download=True, transform=transform)</div>
<div>test_data = torchvision.datasets.CIFAR10('data', train=False, download=True, transform=transform)</div>
<div>num_train = len(full_train_data)</div>
<div># size of the validation split: 20% of the training data</div>
<div>split = int(np.floor(valid_size * num_train))</div>
<div>train_size = num_train - split # store the train_dataset size (80% in our case)</div>
<div># Random split of the dataset</div>
<div>train_data, valid_data = random_split(full_train_data, [train_size, split])</div>
<div>#train the model using 80% of the dataset</div>
<div>train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True, num_workers=num_workers)</div>
<div># validate the model on a validation set containing 20% of the data</div>
<div>valid_loader = DataLoader(valid_data, batch_size=batch_size, shuffle=False, num_workers=num_workers)</div>
<div>#run the test using the entire dataset</div>
<div>test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=False, num_workers=num_workers)</div>
<div><strong>5. Define the CNN Model:</strong></div>
<div>There are several ways to tune and customize a CNN model depending on test accuracy and requirements. Let's walk through an example model and its layers.</div>
<div>classes lists the categories in the CIFAR-10 dataset, such as 'plane', 'vehicle', 'bird', and so on.</div>
<div>Net defines a convolutional neural network (CNN) model using PyTorch's nn.Module.</div>
<div>The network model includes:</div>
<div>i. Convolutional layers: convolutions are applied to the input image through three consecutive blocks that extract features. Conv2d applies the convolution operation, while BatchNorm2d and ReLU provide batch normalization and the activation function, respectively.</div>
<div>ii. Pooling layers: these reduce the spatial dimensions of the feature maps, improving the model's computational efficiency and reducing its sensitivity to small shifts in the input.</div>
<div>iii. Fully connected layers: these make the final prediction from the features extracted by the convolutional layers. They first expand the flattened features, then condense them to focus on the most important details, and finally produce a list of scores indicating how likely the image is to belong to each class, such as 'cat', 'dog', or 'plane'. In essence, these layers let the network reach a clear, final decision about the image.</div>
<div>iv. Dropout layers: these force the network to learn to work well even when some of its parts are missing, which helps it generalize rather than simply memorize the training data.</div>
<div>v. The forward method: specifies how input data flows through the network: it applies the convolutions, pooling, and activations, reshapes the data, and passes it through the fully connected layers with dropout applied at different stages.</div>
<div>Finally, net = Net() creates an instance of the Net class, initializing the CNN model.</div>
<div>classes = ['plane', 'vehicle', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']</div>
<div># Define the CNN model</div>
<div>class Net(nn.Module):</div>
<div>    def __init__(self):</div>
<div>        super(Net, self).__init__()</div>
<div>        self.conv1 = nn.Sequential(</div>
<div>            nn.Conv2d(3, 32, 3, padding=1),</div>
<div>            nn.BatchNorm2d(32),</div>
<div>            nn.ReLU(),</div>
<div>            nn.MaxPool2d(2, 2)</div>
<div>        )</div>
<div>        self.conv2 = nn.Sequential(</div>
<div>            nn.Conv2d(32, 64, 3, padding=1),</div>
<div>            nn.BatchNorm2d(64),</div>
<div>            nn.ReLU(),</div>
<div>            nn.MaxPool2d(2, 2)</div>
<div>        )</div>
<div>        self.conv3 = nn.Sequential(</div>
<div>            nn.Conv2d(64, 128, 3, padding=1),</div>
<div>            nn.BatchNorm2d(128),</div>
<div>            nn.ReLU(),</div>
<div>            nn.MaxPool2d(2, 2)</div>
<div>        )</div>
<div>        self.fc1 = nn.Linear(128 * 4 * 4, 256)</div>
<div>        self.fc2 = nn.Linear(256, 128)</div>
<div>        self.fc3 = nn.Linear(128, 10)</div>
<div>        self.dropout = nn.Dropout(0.5)</div>
<div>    def forward(self, x):</div>
<div>        x = self.conv1(x)</div>
<div>        x = self.conv2(x)</div>
<div>        x = self.conv3(x)</div>
<div>        x = x.view(-1, 128 * 4 * 4)</div>
<div>        x = self.dropout(F.relu(self.fc1(x)))</div>
<div>        x = self.dropout(F.relu(self.fc2(x)))</div>
<div>        x = self.fc3(x)</div>
<div>        return x</div>
<div>net = Net()</div>
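A quick way to see why fc1 expects 128 * 4 * 4 input features: each conv block keeps the spatial size (3x3 kernel, padding=1) and its 2x2 max pool then halves it, so a 32x32 CIFAR-10 image shrinks 32 → 16 → 8 → 4 while the channels grow 3 → 32 → 64 → 128. A standalone copy of the three conv blocks confirms this:

```python
import torch
import torch.nn as nn

# The three conv blocks from the model above, reproduced standalone.
features = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.MaxPool2d(2, 2),
)

x = torch.randn(1, 3, 32, 32)   # one dummy CIFAR-10-sized image
out = features(x)
print(out.shape)                 # torch.Size([1, 128, 4, 4])
print(out.view(1, -1).shape[1])  # 2048 = 128 * 4 * 4
```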
<div><strong>6. Define the Loss Function and Optimizer:</strong></div>
<div>The criterion measures how good the network's predictions are, and the optimizer adjusts the network's weights based on that measurement to improve its performance.</div>
<div>criterion = nn.CrossEntropyLoss()</div>
<div>optimizer = optim.Adam(net.parameters(), lr=0.001)</div>
<div><strong>7. Training, Validation, and Loss Computation:</strong></div>
<div>valid_loss_min tracks the lowest validation loss seen so far.</div>
<div>The training and validation steps are repeated for each epoch.</div>
<div>Training phase:</div>
<div>I. The network is put into training mode with net.train(); for each batch of data, we clear the old gradients with optimizer.zero_grad().</div>
<div>II. We make predictions with net(data) and compute the loss with criterion(output, target).</div>
<div>III. We adjust the weights to reduce this loss via loss.backward() and optimizer.step(), and accumulate the total training loss with loss.item() * data.size(0).</div>
<div>Validation phase:</div>
<div>I. The network switches to evaluation mode with net.eval().</div>
<div>II. For each batch of validation data, we compute the loss without updating the weights (inside torch.no_grad()); net(data) gives the network's predictions, and criterion(output, target) computes the prediction error for the batch.</div>
<div>III. The total validation loss is accumulated with loss.item() * data.size(0).</div>
<div>Computing the average loss:</div>
<div>I. After each epoch, the code computes the average training and validation loss by dividing the totals by the number of samples in each dataset.</div>
<div>II. The time taken per epoch is measured. The code also checks whether the validation loss improved compared with earlier epochs and, if so, saves the state of the best model to the file net_cifar10.pt.</div>
<div>This process helps track the model's performance, monitor progress, and ensure that the best-performing model is saved.</div>
<div>valid_loss_min = np.inf</div>
<div>for epoch in range(num_epochs):</div>
<div>    start_time = time.time()</div>
<div>    train_loss = 0.0</div>
<div>    valid_loss = 0.0</div>
<div>    # Training</div>
<div>    net.train()</div>
<div>    for data, target in train_loader:</div>
<div>        optimizer.zero_grad()</div>
<div>        output = net(data)</div>
<div>        loss = criterion(output, target)</div>
<div>        loss.backward()</div>
<div>        optimizer.step()</div>
<div>        train_loss += loss.item() * data.size(0)</div>
<div>    # Validation</div>
<div>    net.eval()</div>
<div>    with torch.no_grad():</div>
<div>        for data, target in valid_loader:</div>
<div>            output = net(data)</div>
<div>            loss = criterion(output, target)</div>
<div>            valid_loss += loss.item() * data.size(0)</div>
<div>    # Calculate average loss</div>
<div>    train_loss /= len(train_loader.dataset)</div>
<div>    valid_loss /= len(valid_loader.dataset)</div>
<div>    end_time = time.time()</div>
<div>    epoch_time = end_time - start_time</div>
<div>    print(f'Epoch: {epoch+1}/{num_epochs} | Time: {epoch_time:.3f}s | Training Loss: {train_loss:.4f} | Validation Loss: {valid_loss:.4f}')</div>
<div>    # Save model if validation loss decreases</div>
<div>    if valid_loss <= valid_loss_min:</div>
<div>        print(f'Validation loss decreased ({valid_loss_min:.4f} --> {valid_loss:.4f}). Saving model as net_cifar10.pt')</div>
<div>        torch.save(net.state_dict(), 'net_cifar10.pt')</div>
<div>        valid_loss_min = valid_loss</div>
<div><strong>8. Load and Test the Best Model:</strong></div>
<div>net.load_state_dict(torch.load('net_cifar10.pt')) loads the best saved model from the file net_cifar10.pt.</div>
<div>The variables test_loss, class_correct, and class_total are initialized to track the total loss on the test data and how well the model performs overall and per class.</div>
<div>Testing phase:</div>
<div>I. The model is set to evaluation mode with net.eval(), and torch.no_grad() is used again to stop the model from computing gradients, which saves memory.</div>
<div>II. For each batch of test data, the model makes predictions, computes the loss, and updates the total test loss.</div>
<div>III. Each prediction is then checked for correctness, recording the number of correct predictions and the total number of samples for each class.</div>
<div>Finally, the test results are printed: the average loss and accuracy are computed over the test dataset, and the model's overall performance is reported.</div>
<div># Load the best model</div>
<div>net.load_state_dict(torch.load('net_cifar10.pt'))</div>
<div>print('Finished Training')</div>
<div>test_loss = 0.0</div>
<div>class_correct = [0.0] * 10</div>
<div>class_total = [0.0] * 10</div>
<div>net.eval()</div>
<div>with torch.no_grad():</div>
<div>    for data, target in test_loader:</div>
<div>        output = net(data)</div>
<div>        loss = criterion(output, target)</div>
<div>        test_loss += loss.item() * data.size(0)</div>
<div>        _, pred = torch.max(output, 1)</div>
<div>        correct = pred.eq(target.view_as(pred))</div>
<div>        for i in range(len(target)):</div>
<div>            label = target[i].item()</div>
<div>            class_correct[label] += correct[i].item()</div>
<div>            class_total[label] += 1</div>
<div># Print test results</div>
<div>test_loss /= len(test_loader.dataset)</div>
<div>print(f'Test Loss: {test_loss:.6f}')</div>
<div>overall_accuracy = 100. * np.sum(class_correct) / np.sum(class_total)</div>
<div>print(f'\nTest Accuracy (Overall): {overall_accuracy:.2f}%')</div>
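The class_correct and class_total counters kept in the test loop also allow a per-class accuracy report. A minimal sketch, using made-up counts for two classes rather than real results:

```python
# Per-class accuracy from correct/total counters like those in the test
# loop above. The counts here are illustrative, not measured results.
classes = ['plane', 'vehicle']
class_correct = [780.0, 812.0]
class_total = [1000.0, 1000.0]

for i, name in enumerate(classes):
    if class_total[i] > 0:                                # avoid div by zero
        acc = 100.0 * class_correct[i] / class_total[i]
        print(f'Test Accuracy of {name}: {acc:.2f}%')
```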
<div>After downloading the CIFAR-10 dataset, the model is trained and evaluated on it, producing the log below:</div>
<div>Epoch: 1/20 | Time: 218.234s | Training Loss: 1.6360 | Validation Loss: 1.2467</div>
<div>Validation loss decreased (inf --> 1.2467). Saving model as net_cifar10.pt</div>
<div>Epoch: 2/20 | Time: 223.496s | Training Loss: 1.2740 | Validation Loss: 1.0825</div>
<div>Validation loss decreased (1.2467 --> 1.0825). Saving model as net_cifar10.pt</div>
<div>Epoch: 3/20 | Time: 232.533s | Training Loss: 1.1203 | Validation Loss: 0.9540</div>
<div>Validation loss decreased (1.0825 --> 0.9540). Saving model as net_cifar10.pt</div>
<div>Epoch: 4/20 | Time: 230.899s | Training Loss: 1.0092 | Validation Loss: 0.8614</div>
<div>Validation loss decreased (0.9540 --> 0.8614). Saving model as net_cifar10.pt</div>
<div>Epoch: 5/20 | Time: 231.082s | Training Loss: 0.9349 | Validation Loss: 0.8214</div>
<div>Validation loss decreased (0.8614 --> 0.8214). Saving model as net_cifar10.pt</div>
<div>Epoch: 6/20 | Time: 252.445s | Training Loss: 0.8686 | Validation Loss: 0.8234</div>
<div>Epoch: 7/20 | Time: 234.719s | Training Loss: 0.8168 | Validation Loss: 0.7961</div>
<div>Validation loss decreased (0.8214 --> 0.7961). Saving model as net_cifar10.pt</div>
<div>Epoch: 8/20 | Time: 244.801s | Training Loss: 0.7701 | Validation Loss: 0.7754</div>
<div>Validation loss decreased (0.7961 --> 0.7754). Saving model as net_cifar10.pt</div>
<div>Epoch: 9/20 | Time: 284.708s | Training Loss: 0.7218 | Validation Loss: 0.7546</div>
<div>Validation loss decreased (0.7754 --> 0.7546). Saving model as net_cifar10.pt</div>
<div>Epoch: 10/20 | Time: 255.791s | Training Loss: 0.6918 | Validation Loss: 0.7677</div>
<div>Epoch: 11/20 | Time: 203.933s | Training Loss: 0.6485 | Validation Loss: 0.7009</div>
<div>Validation loss decreased (0.7546 --> 0.7009). Saving model as net_cifar10.pt</div>
<div>Epoch: 12/20 | Time: 393.549s | Training Loss: 0.6176 | Validation Loss: 0.7026</div>
<div>Epoch: 13/20 | Time: 253.282s | Training Loss: 0.5890 | Validation Loss: 0.6831</div>
<div>Validation loss decreased (0.7009 --> 0.6831). Saving model as net_cifar10.pt</div>
<div>Epoch: 14/20 | Time: 284.252s | Training Loss: 0.5553 | Validation Loss: 0.6826</div>
<div>Validation loss decreased (0.6831 --> 0.6826). Saving model as net_cifar10.pt</div>
<div>Epoch: 15/20 | Time: 229.772s | Training Loss: 0.5271 | Validation Loss: 0.6881</div>
<div>Epoch: 16/20 | Time: 257.720s | Training Loss: 0.5061 | Validation Loss: 0.6940</div>
<div>Epoch: 17/20 | Time: 271.851s | Training Loss: 0.4801 | Validation Loss: 0.7251</div>
<div>Epoch: 18/20 | Time: 240.566s | Training Loss: 0.4522 | Validation Loss: 0.6837</div>
<div>Epoch: 19/20 | Time: 243.856s | Training Loss: 0.4357 | Validation Loss: 0.6817</div>
<div>Validation loss decreased (0.6826 --> 0.6817). Saving model as net_cifar10.pt</div>
<div>Epoch: 20/20 | Time: 278.209s | Training Loss: 0.4215 | Validation Loss: 0.7156</div>
<div>Finished Training</div>
<div>Test Loss: 0.712707</div>
<div>Test Accuracy (Overall): 78.11%</div>
<div>This post has walked through a basic CNN implementation in PyTorch. Models like this are commonly used for image classification tasks and can be extended with more sophisticated architectures and techniques to tackle a wide range of deep learning problems.</div>
</div>
<p>The technical knowledge the OP shared is very solid and really helps with understanding CNN models. Thanks for sharing.</p>
<p>The post's formatting doesn't look great, though...</p>