LitchiCheng

  • 2024-11-04
  • Replied to thread: Reading 《动手学深度学习(PyTorch版)》 together - Layers and Blocks

    御坂10032号 posted on 2024-11-4 10:32: Oh man, this softmax is killing me.
    Reply: No need to get stuck grinding through the mathematical derivations.

  • 2024-11-03
  • Replied to thread: Reading 《动手学深度学习(PyTorch版)》 together - RNN-sequence-model

    秦天qintian0303 posted on 2024-11-2 23:18: What type of file does training produce? Can C call it?
    Reply: You can save it as a model, and mainstream inference engines have implementations in many languages; C is just one of them.
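    As an illustration of "save it as a model" (a hedged sketch, not from the original reply; the exact export path depends on the target inference engine), a trained nn.Module can be serialized with torch.save, or exported to TorchScript, which the C++ libtorch runtime can load:

import torch
from torch import nn

net = nn.Sequential(nn.Linear(4, 10), nn.ReLU(), nn.Linear(10, 1))

# 1) Save the weights; reload later from Python with load_state_dict
torch.save(net.state_dict(), "net.params")

# 2) Export a TorchScript module, loadable from C++ via torch::jit::load in libtorch
scripted = torch.jit.trace(net, torch.rand(1, 4))
scripted.save("net.pt")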

  • Replied to thread: Reading 《动手学深度学习(PyTorch版)》 together - RNN-sequence-model

    hellokitty_bean posted on 2024-11-3 08:21: Haven't started reading it carefully yet... will share once I have some free time...
    Reply: kkk

  • 2024-11-02
  • Replied to thread: Reading 《动手学深度学习(PyTorch版)》 together - RNN-sequence-model

    hellokitty_bean posted on 2024-11-2 13:51: Not clear at all... RNN actually has variants too... neither the principles nor the methods are ...
    Reply: This one is straight from the book; the other examples will have to wait for your share, hhh

  • Posted a blog entry: Reading 《动手学深度学习(PyTorch版)》 together - RNN-sequence-model

  • Posted a thread: Reading 《动手学深度学习(PyTorch版)》 together - RNN-sequence-model

    Generate a sine wave over 1000 time steps, with added noise:

import torch
from torch import nn
import matplotlib.pyplot as plt

T = 1000
time = torch.arange(1, T + 1, dtype=torch.float32)
x = torch.sin(0.01 * time) + torch.normal(0, 0.2, (T,))
plt.plot(time.tolist(), x.tolist())
plt.show()

    After training, perform one-step-ahead prediction:

import torch
from torch import nn
from torch.utils import data
from torchvision import transforms
import matplotlib.pyplot as plt

def load_array(data_arrays, batch_size, is_train=True):
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train, num_workers=6)

T = 1000
time = torch.arange(1, T + 1, dtype=torch.float32)
x = torch.sin(0.01 * time) + torch.normal(0, 0.2, (T,))
# plt.plot(time.tolist(), x.tolist())
# plt.show()

tau = 4
# features has shape [996, 4]
features = torch.zeros((T - tau, tau))
for i in range(tau):
    # pick up 996 elements of x, sliding by 1 element each time
    features[:, i] = x[i: T - tau + i]
    # print(features[:, i])
    # print(features[:, i].shape)
labels = x[tau:].reshape((-1, 1))

batch_size, n_train = 16, 600
train_iter = load_array((features[:n_train], labels[:n_train]), batch_size, is_train=True)

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.xavier_uniform_(m.weight)

def get_net():
    net = nn.Sequential(nn.Linear(4, 10), nn.ReLU(), nn.Linear(10, 1))
    net.apply(init_weights)
    return net

class Accumulator:
    def __init__(self, n) -> None:
        self.data = [0.0] * n
    def add(self, *args):
        # args is a tuple
        self.data = [a + float(b) for a, b in zip(self.data, args)]
    def reset(self):
        self.data = [0.0] * len(self.data)
    def __getitem__(self, idx):
        return self.data[idx]

def evaluate_loss(net, data_iter, loss):
    metric = Accumulator(2)
    for X, y in data_iter:
        out = net(X)
        y = y.reshape(out.shape)
        l = loss(out, y)
        metric.add(l.sum(), l.numel())
    return metric[0] / metric[1]

loss = nn.MSELoss(reduction='none')

def train(net, train_iter, loss, epochs, lr):
    trainer = torch.optim.Adam(net.parameters(), lr)
    for epoch in range(epochs):
        for X, y in train_iter:
            trainer.zero_grad()
            l = loss(net(X), y)
            l.sum().backward()
            trainer.step()
        print(f'epoch {epoch + 1}, '
              f'loss: {evaluate_loss(net, train_iter, loss):f}')

net = get_net()
train(net, train_iter, loss, 5, 0.01)

onestep_preds = net(features)
plt.plot(time.tolist(), x.tolist())
plt.plot(time[tau:].tolist(), onestep_preds.tolist())
plt.show()
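    A natural follow-up (not in the original post, a hedged sketch): multi-step prediction, where beyond the training range the model only sees its own predictions as input, so errors compound quickly. It reuses net, x, tau, T, n_train and time from the code above.

import torch
import matplotlib.pyplot as plt

multistep_preds = torch.zeros(T)
multistep_preds[:n_train + tau] = x[:n_train + tau]          # seed with observed values
with torch.no_grad():
    for i in range(n_train + tau, T):
        # predict step i from the previous tau (predicted) values
        multistep_preds[i] = net(multistep_preds[i - tau:i].reshape((1, -1)))
plt.plot(time.tolist(), x.tolist())
plt.plot(time[n_train + tau:].tolist(), multistep_preds[n_train + tau:].tolist())
plt.show()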

  • Replied to thread: Reading 《动手学深度学习(PyTorch版)》 together - Layers and Blocks

    ljg2np posted on 2024-10-31 20:55: It would be great if PyTorch could also support the CPU, otherwise the barrier to entry is too high; surely parallel computing doesn't strictly require a GPU.
    Reply: PyTorch runs on the CPU out of the box.
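    To illustrate the point (a minimal sketch, not part of the original reply): tensors are created on the CPU by default, and the usual idiom only moves them to a GPU if one is available.

import torch

# Tensors live on the CPU unless moved explicitly
x = torch.rand(2, 20)
print(x.device)                                   # cpu

# Device-agnostic pattern: use the GPU only if one is present
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = x.to(device)
print(x.device)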

  • Replied to thread: RV1106 hands-on tutorial: Amazed! A USB camera instantly becomes an AI assistant, easily taking photos and running yolov5 inference with rknn!

    fimai posted on 2024-10-31 14:26: I get an error at runtime, what is the problem? load lable ./model/coco_80_labels_list.txt E RKNN: failed to decode confi ...
    Reply: Probably something like an out-of-bounds access.

  • 2024-10-30
  • Replied to thread: Reading 《动手学深度学习(PyTorch版)》 together - Layers and Blocks

    zxhgll1975 posted on 2024-10-29 22:29: For a DIY electronics hobbyist, good materials and resources are the best teachers. Thanks.
    Reply: Thank you.

  • 2024-10-29
  • Posted a blog entry: Reading 《动手学深度学习(PyTorch版)》 together - Layers and Blocks

  • Posted a thread: Reading 《动手学深度学习(PyTorch版)》 together - Layers and Blocks

    nn.Sequential is the class PyTorch uses to represent a block; it maintains an ordered list of Modules. A fully connected layer is an instance of the Linear class. Calling the model with net(X) to get the output is really shorthand for net.__call__(X). The forward-propagation function chains the blocks together, feeding each block's output to the next block as its input.

    Custom block:

import matplotlib.pyplot as plt
import torch
from torch import nn
from torch.nn import functional as F

net = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
X = torch.rand(2, 20)
print(net(X))

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(20, 256)
        self.out = nn.Linear(256, 10)

    def forward(self, X):
        return self.out(F.relu(self.hidden(X)))

net = MLP()
print(net(X))

    The two outputs differ because the weights are initialized randomly.
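    To make the "ordered list of Modules" point concrete, here is a minimal sketch (an illustration added here, not from the original post) of how a Sequential-style container can be written by hand: children registered in _modules are kept in insertion order and their parameters are discovered automatically.

import torch
from torch import nn

class MySequential(nn.Module):
    def __init__(self, *args):
        super().__init__()
        for idx, module in enumerate(args):
            self._modules[str(idx)] = module      # registered in insertion order

    def forward(self, X):
        for block in self._modules.values():
            X = block(X)                          # each block's output feeds the next block
        return X

net = MySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
print(net(torch.rand(2, 20)))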

  • Replied to thread: Reading 《动手学深度学习(PyTorch版)》 together - Polynomial Regression: Underfitting and Overfitting

    freebsder posted on 2024-10-29 15:25: For random noise you might need to estimate the noise distribution, whether it is white noise or Gaussian noise, and pre-process it first. Even though today's deep networks are powerful, you still ...
    Reply: This is real noise; removing it would mean distorting the data.

  • Replied to thread: Reading 《动手学深度学习(PyTorch版)》 together - Dropout

    freebsder posted on 2024-10-29 15:27: Neural networks have really figured randomness out.
    Reply: Yes.

  • Replied to thread: #AI Challenge Camp Stop 1# Training handwritten digit recognition on the MNIST dataset with PyTorch

    通途科技 posted on 2024-10-29 21:11: Study hard, make progress every day; keep going everyone, keep going myself, let's go!!!
    Reply: Keep it up!

  • 2024-10-28
  • Replied to thread: Reading 《动手学深度学习(PyTorch版)》 together - Dropout

    Jacktang posted on 2024-10-28 07:32: So this book also covers dropout, a technique used in neural network training to reduce overfitting.
    Reply: Yes.

  • 2024-10-27
  • Posted a thread: Reading 《动手学深度学习(PyTorch版)》 together - Dropout

    Another deep dimension of simplicity is smoothness: a function should not react drastically, with large swings in output, to small perturbations of its input. In machine learning and deep neural networks, pursuing smoothness is essential for improving generalization and reducing overfitting; the model must not only fit the training data well but also keep its predictions stable on data it has never seen.

    Dropout is a training technique built on exactly this idea. During forward propagation it not only computes the output of each internal layer but also deliberately injects random noise into those layers, by randomly "dropping" (temporarily ignoring) a fraction of the network's neurons with a given probability. The technique is called dropout because, intuitively, every training iteration appears to drop or switch off some of the neuron connections. Introducing this randomness breaks up fixed patterns or path dependencies the network might otherwise settle into, pushes it to explore more possibilities, and so improves generalization and robustness. By applying dropout throughout training, the network gradually learns to make accurate predictions even when part of the information is missing, which matters a great deal in practice. (A from-scratch sketch of the dropout operation follows the training code below.)

import torch
import torchvision
from torch.utils import data
from torchvision import transforms
import matplotlib.pyplot as plt
from torch import nn

def get_dataloader_workers():
    return 6

def load_data_fashion_mnist(batch_size, resize=None):
    trans = [transforms.ToTensor()]
    if resize:
        trans.insert(0, transforms.Resize(resize))
    trans = transforms.Compose(trans)
    mnist_train = torchvision.datasets.FashionMNIST(root="./data", train=True, transform=trans, download=True)
    mnist_test = torchvision.datasets.FashionMNIST(root="./data", train=False, transform=trans, download=True)
    return (data.DataLoader(mnist_train, batch_size, shuffle=True, num_workers=get_dataloader_workers()),
            data.DataLoader(mnist_test, batch_size, shuffle=False, num_workers=get_dataloader_workers()))

def accurancy(y_hat, y):
    if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:
        y_hat = y_hat.argmax(axis=1)
    cmp = y_hat.type(y.dtype) == y
    return float(cmp.type(y.dtype).sum())

class Accumulator:
    def __init__(self, n) -> None:
        self.data = [0.0] * n
    def add(self, *args):
        self.data = [a + float(b) for a, b in zip(self.data, args)]
    def reset(self):
        self.data = [0.0] * len(self.data)
    def __getitem__(self, idx):
        return self.data[idx]

def evaluate_accurancy(net, data_iter):
    if isinstance(net, torch.nn.Module):
        net.eval()
    metric = Accumulator(2)
    with torch.no_grad():
        for X, y in data_iter:
            metric.add(accurancy(net(X), y), y.numel())
    return metric[0] / metric[1]

def train_epoch_ch3(net, train_iter, loss, updater):
    if isinstance(net, torch.nn.Module):
        net.train()
    metric = Accumulator(3)
    for X, y in train_iter:
        y_hat = net(X)
        l = loss(y_hat, y)
        if isinstance(updater, torch.optim.Optimizer):
            updater.zero_grad()
            l.mean().backward()
            updater.step()
        else:
            l.sum().backward()
            updater(X.shape[0])
        metric.add(float(l.sum()), accurancy(y_hat, y), y.numel())
    # average loss per example, accuracy per example
    return metric[0] / metric[2], metric[1] / metric[2]

def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
    axes.set_xlabel(xlabel)
    axes.set_ylabel(ylabel)
    axes.set_xscale(xscale)
    axes.set_yscale(yscale)
    axes.set_xlim(xlim)
    axes.set_ylim(ylim)
    if legend:
        axes.legend(legend)
    axes.grid()

class Animator:
    def __init__(self, xlabel=None, ylabel=None, legend=None, xlim=None, ylim=None,
                 xscale='linear', yscale='linear', fmts=('-', 'm--', 'g-.', 'r:'),
                 nrows=1, ncols=1, figsize=(3.5, 2.5)):
        if legend is None:
            legend = []
        self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
        if nrows * ncols == 1:
            self.axes = [self.axes, ]
        self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
        self.X, self.Y, self.fmts = None, None, fmts

    def add(self, x, y):
        if not hasattr(y, "__len__"):
            y = [y]
        n = len(y)
        if not hasattr(x, "__len__"):
            x = [x] * n
        if not self.X:
            self.X = [[] for _ in range(n)]
        if not self.Y:
            self.Y = [[] for _ in range(n)]
        for i, (a, b) in enumerate(zip(x, y)):
            if a is not None and b is not None:
                self.X[i].append(a)
                self.Y[i].append(b)
        self.axes[0].cla()
        for x, y, fmt in zip(self.X, self.Y, self.fmts):
            self.axes[0].plot(x, y, fmt)
        self.config_axes()

def train_ch3(net, train_iter, test_iter, loss, num_epochs, updater):
    animator = Animator(xlabel='epoch', xlim=[1, num_epochs], ylim=[0.3, 0.9],
                        legend=['train loss', 'train acc', 'test acc'])
    for epoch in range(num_epochs):
        train_metrics = train_epoch_ch3(net, train_iter, loss, updater)
        test_acc = evaluate_accurancy(net, test_iter)
        animator.add(epoch + 1, train_metrics + (test_acc,))
    train_loss, train_acc = train_metrics
    assert train_loss < 0.5, train_loss
    assert train_acc < 1 and train_acc > 0.7, train_acc
    assert test_acc < 1 and test_acc > 0.7, test_acc

dropout1, dropout2 = 0.2, 0.5
num_epochs, lr, batch_size = 10, 0.5, 256
loss = nn.CrossEntropyLoss(reduction='none')
train_iter, test_iter = load_data_fashion_mnist(batch_size)
net = nn.Sequential(nn.Flatten(),
                    nn.Linear(784, 256), nn.ReLU(), nn.Dropout(dropout1),
                    nn.Linear(256, 256), nn.ReLU(), nn.Dropout(dropout2),
                    nn.Linear(256, 10))

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.normal_(m.weight, std=0.01)

net.apply(init_weights)
trainer = torch.optim.SGD(net.parameters(), lr=lr)
train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
plt.show()
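    To make the idea above concrete, here is a minimal from-scratch sketch of the dropout operation (an illustration added here; the post itself uses nn.Dropout): during training each activation is zeroed with probability dropout, and the survivors are scaled by 1 / (1 - dropout) so the expected value stays unchanged.

import torch

def dropout_layer(X, dropout):
    assert 0 <= dropout <= 1
    if dropout == 1:
        return torch.zeros_like(X)            # drop everything
    if dropout == 0:
        return X                              # keep everything
    mask = (torch.rand(X.shape) > dropout).float()
    return mask * X / (1.0 - dropout)         # rescale so the expectation matches

X = torch.arange(16, dtype=torch.float32).reshape(2, 8)
print(dropout_layer(X, 0.0))
print(dropout_layer(X, 0.5))
print(dropout_layer(X, 1.0))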

  • Posted a blog entry: Reading 《动手学深度学习(PyTorch版)》 together - Dropout

  • Posted a blog entry: Reading 《动手学深度学习(PyTorch版)》 together - Weight Decay

  • Posted a thread: Reading 《动手学深度学习(PyTorch版)》 together - Weight Decay

    Weight decay is used to fight overfitting. It relies on a norm: taking the (squared) Euclidean norm of the weights gives the penalty term Sum(w^2) / 2. To keep the weight vector small, the most common approach is to add this norm as a penalty to the minimization problem, changing the training objective from "minimize the prediction loss on the training labels" to "minimize the sum of the prediction loss and the penalty term". If the weight vector then grows too large, the optimization algorithm shifts its effort toward minimizing the weight norm instead.

    [Formula image: the loss function L(w, b)]

    [Formula image: the training objective, prediction loss plus penalty term, L(w, b) + (lambda / 2) * ||w||^2]

import torch
import torchvision
from torch.utils import data
from torchvision import transforms
import matplotlib.pyplot as plt
from torch import nn

def get_dataloader_workers():
    return 6

class Accumulator:
    def __init__(self, n) -> None:
        self.data = [0.0] * n
    def add(self, *args):
        # args is a tuple
        self.data = [a + float(b) for a, b in zip(self.data, args)]
    def reset(self):
        self.data = [0.0] * len(self.data)
    def __getitem__(self, idx):
        return self.data[idx]

def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
    axes.set_xlabel(xlabel)
    axes.set_ylabel(ylabel)
    axes.set_xscale(xscale)
    axes.set_yscale(yscale)
    axes.set_xlim(xlim)
    axes.set_ylim(ylim)
    if legend:
        axes.legend(legend)
    axes.grid()

class Animator:
    def __init__(self, xlabel=None, ylabel=None, legend=None, xlim=None, ylim=None,
                 xscale='linear', yscale='linear', fmts=('-', 'm--', 'g-.', 'r:'),
                 nrows=1, ncols=1, figsize=(3.5, 2.5)):
        if legend is None:
            legend = []
        self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
        if nrows * ncols == 1:
            self.axes = [self.axes, ]
        self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
        self.X, self.Y, self.fmts = None, None, fmts

    def add(self, x, y):
        if not hasattr(y, "__len__"):
            y = [y]
        n = len(y)
        if not hasattr(x, "__len__"):
            x = [x] * n
        if not self.X:
            self.X = [[] for _ in range(n)]
        if not self.Y:
            self.Y = [[] for _ in range(n)]
        for i, (a, b) in enumerate(zip(x, y)):
            if a is not None and b is not None:
                self.X[i].append(a)
                self.Y[i].append(b)
        self.axes[0].cla()
        for x, y, fmt in zip(self.X, self.Y, self.fmts):
            self.axes[0].plot(x, y, fmt)
        self.config_axes()

def evaluate_loss(net, data_iter, loss):
    metric = Accumulator(2)
    for X, y in data_iter:
        out = net(X)
        y = y.reshape(out.shape)
        l = loss(out, y)
        metric.add(l.sum(), l.numel())
    return metric[0] / metric[1]

def load_array(data_arrays, batch_size, is_train=True):
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train, num_workers=get_dataloader_workers())

def synthetic_data(w, b, num_examples):
    X = torch.normal(0, 1, (num_examples, len(w)))
    y = torch.matmul(X, w) + b
    y += torch.normal(0, 0.01, y.shape)
    return X, y.reshape((-1, 1))

n_train, n_test, num_inputs, batch_size = 20, 100, 200, 5
true_w, true_b = torch.ones((num_inputs, 1)) * 0.01, 0.05
train_data = synthetic_data(true_w, true_b, n_train)
train_iter = load_array(train_data, batch_size)
test_data = synthetic_data(true_w, true_b, n_test)
test_iter = load_array(test_data, batch_size, is_train=False)

def train_concise(wd):
    net = nn.Sequential(nn.Linear(num_inputs, 1))
    for param in net.parameters():
        param.data.normal_()
    loss = nn.MSELoss(reduction='none')
    num_epochs, lr = 100, 0.003
    trainer = torch.optim.SGD([
        {"params": net[0].weight, 'weight_decay': wd},
        {"params": net[0].bias}], lr=lr)
    animator = Animator(xlabel='epochs', ylabel='loss', yscale='log',
                        xlim=[5, num_epochs], legend=['train', 'test'])
    for epoch in range(num_epochs):
        for X, y in train_iter:
            trainer.zero_grad()
            l = loss(net(X), y)
            l.mean().backward()
            trainer.step()
        if (epoch + 1) % 5 == 0:
            animator.add(epoch + 1, (evaluate_loss(net, train_iter, loss),
                                     evaluate_loss(net, test_iter, loss)))
    print('weight', net[0].weight.norm().item())

train_concise(0)
plt.show()

    You can see that with weight decay disabled the test loss barely decreases, which is exactly the overfitting phenomenon.

train_concise(5)

    With weight decay enabled and its strength set to 5, the test loss now decreases steadily as well. Increasing the weight decay further strengthens this effect.
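    For comparison, a hedged from-scratch sketch (not in the original post) that implements the same idea by adding the lambd * Sum(w^2) / 2 penalty to the loss by hand instead of passing weight_decay to the optimizer; it assumes the train_iter and num_inputs defined above.

import torch

def l2_penalty(w):
    # the Sum(w^2) / 2 penalty term described above
    return torch.sum(w.pow(2)) / 2

def train_scratch(lambd):
    w = torch.normal(0, 1, size=(num_inputs, 1), requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    num_epochs, lr = 100, 0.003
    for epoch in range(num_epochs):
        for X, y in train_iter:
            # prediction loss plus the L2 penalty on the weights
            l = ((torch.matmul(X, w) + b - y) ** 2) / 2 + lambd * l2_penalty(w)
            l.sum().backward()
            with torch.no_grad():
                for param in (w, b):
                    param -= lr * param.grad / X.shape[0]
                    param.grad.zero_()
    print('L2 norm of w:', torch.norm(w).item())

train_scratch(3)   # a larger lambd drives the weight norm down harder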

  • Posted a thread: Reading 《动手学深度学习(PyTorch版)》 together - Polynomial Regression: Underfitting and Overfitting

    This post was last edited by LitchiCheng on 2024-10-27 15:15.

    Polynomial regression. Rewriting each term as x^i / i! (x to the i-th power divided by i factorial) avoids the extremely large values that a bare x^i would produce; the plot below compares the two.

import matplotlib.pyplot as plt
import math

num = 3
x = range(0, 1000)
y = []
y1 = []
for i in x:
    y.append(i ** num)
    y1.append((i ** num) / math.factorial(num))
plt.plot(x, y)
plt.plot(x, y1)
plt.show()

    Fitting a degree-3 polynomial: 100 training samples and 100 validation samples. The true polynomial weights are set for a degree-3 polynomial, i.e. 4 weight values, and training runs for 400 epochs.

import torch
import torchvision
from torch.utils import data
from torchvision import transforms
import matplotlib.pyplot as plt
from torch import nn
import numpy as np
import math

def get_dataloader_workers():
    return 6

def accurancy(y_hat, y):
    if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:
        y_hat = y_hat.argmax(axis=1)
    # cmp holds true/false for each prediction
    cmp = y_hat.type(y.dtype) == y
    # count the number of true entries
    return float(cmp.type(y.dtype).sum())

class Accumulator:
    def __init__(self, n) -> None:
        self.data = [0.0] * n
    def add(self, *args):
        # args is a tuple
        self.data = [a + float(b) for a, b in zip(self.data, args)]
    def reset(self):
        self.data = [0.0] * len(self.data)
    def __getitem__(self, idx):
        return self.data[idx]

def evaluate_accurancy(net, data_iter):
    if isinstance(net, torch.nn.Module):
        net.eval()
    metric = Accumulator(2)
    with torch.no_grad():
        for X, y in data_iter:
            metric.add(accurancy(net(X), y), y.numel())
    return metric[0] / metric[1]

def train_epoch_ch3(net, train_iter, loss, updater):
    if isinstance(net, torch.nn.Module):
        print("is instance nn.Module")
        net.train()
    metric = Accumulator(3)
    for X, y in train_iter:
        y_hat = net(X)
        # print(y, y_hat)
        l = loss(y_hat, y)
        if isinstance(updater, torch.optim.Optimizer):
            updater.zero_grad()
            l.mean().backward()
            updater.step()
        else:
            l.sum().backward()
            updater(X.shape[0])
        metric.add(float(l.sum()), accurancy(y_hat, y), y.numel())
        # print(metric[0], metric[1], metric[2])
    # return metric[0] / metric[2], metric[1] / metric[2]

def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
    axes.set_xlabel(xlabel)
    axes.set_ylabel(ylabel)
    axes.set_xscale(xscale)
    axes.set_yscale(yscale)
    axes.set_xlim(xlim)
    axes.set_ylim(ylim)
    if legend:
        axes.legend(legend)
    axes.grid()

class Animator:
    def __init__(self, xlabel=None, ylabel=None, legend=None, xlim=None, ylim=None,
                 xscale='linear', yscale='linear', fmts=('-', 'm--', 'g-.', 'r:'),
                 nrows=1, ncols=1, figsize=(3.5, 2.5)):
        if legend is None:
            legend = []
        self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
        if nrows * ncols == 1:
            self.axes = [self.axes, ]
        self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
        self.X, self.Y, self.fmts = None, None, fmts

    def add(self, x, y):
        if not hasattr(y, "__len__"):
            y = [y]
        n = len(y)
        if not hasattr(x, "__len__"):
            x = [x] * n
        if not self.X:
            self.X = [[] for _ in range(n)]
        if not self.Y:
            self.Y = [[] for _ in range(n)]
        for i, (a, b) in enumerate(zip(x, y)):
            if a is not None and b is not None:
                self.X[i].append(a)
                self.Y[i].append(b)
        self.axes[0].cla()
        for x, y, fmt in zip(self.X, self.Y, self.fmts):
            self.axes[0].plot(x, y, fmt)
        self.config_axes()

def load_array(data_arrays, batch_size, is_train=True):  #@save
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train, num_workers=get_dataloader_workers())

max_degree = 20  # maximum polynomial degree
n_train, n_test = 100, 100
true_w = np.zeros(max_degree)
true_w[0:4] = np.array([5, 1.2, -3.4, 5.6])

features = np.random.normal(size=(n_train + n_test, 1))
np.random.shuffle(features)
poly_features = np.power(features, np.arange(max_degree).reshape(1, -1))
for i in range(max_degree):
    poly_features[:, i] /= math.gamma(i + 1)  # gamma(n) = (n-1)!
labels = np.dot(poly_features, true_w)
labels += np.random.normal(scale=0.1, size=labels.shape)
true_w, features, poly_features, labels = [torch.tensor(x, dtype=torch.float32)
                                           for x in [true_w, features, poly_features, labels]]
# print(features[:2], poly_features[:2, :], labels[:2])

def evaluate_loss(net, data_iter, loss):
    metric = Accumulator(2)
    for X, y in data_iter:
        out = net(X)
        y = y.reshape(out.shape)
        l = loss(out, y)
        metric.add(l.sum(), l.numel())
    return metric[0] / metric[1]

def train(train_features, test_features, train_labels, test_labels, num_epochs=400):
    loss = nn.MSELoss(reduction='none')
    input_shape = train_features.shape[-1]
    net = nn.Sequential(nn.Linear(input_shape, 1, bias=False))
    batch_size = min(10, train_labels.shape[0])
    train_iter = load_array((train_features, train_labels.reshape(-1, 1)), batch_size)
    test_iter = load_array((test_features, test_labels.reshape(-1, 1)), batch_size, is_train=False)
    trainer = torch.optim.SGD(net.parameters(), lr=0.01)
    animator = Animator(xlabel='epoch', ylabel='loss', yscale='log',
                        xlim=[1, num_epochs], ylim=[1e-3, 1e2], legend=['train', 'test'])
    for epoch in range(num_epochs):
        train_epoch_ch3(net, train_iter, loss, trainer)
        if epoch == 0 or (epoch + 1) % 20 == 0:
            animator.add(epoch + 1, (evaluate_loss(net, train_iter, loss),
                                     evaluate_loss(net, test_iter, loss)))
    print('weight:', net[0].weight.data.numpy())

train(poly_features[:n_train, :4], poly_features[n_train:, :4],
      labels[:n_train], labels[n_train:])
plt.show()

    You can see that the learned weights, i.e. the polynomial coefficients, come out essentially the same as the coefficients set at the start, and the loss keeps decreasing.

    Linear function, underfitting. Reason: the data comes from a degree-20 polynomial with four non-zero coefficients; with only two trainable coefficients the model cannot possibly express what four coefficients express, and neither more data nor more training epochs will reduce the loss.

train(poly_features[:n_train, :2], poly_features[n_train:, :2],
      labels[:n_train], labels[n_train:])

    High-degree polynomial, overfitting. The coefficients above degree 3 were never specified and should really be zero, but noise was added to the labels, so there is no clear pattern for them; training fits well anyway, yet there is no way to predict random noise on new data.

train(poly_features[:n_train, :], poly_features[n_train:, :],
      labels[:n_train], labels[n_train:], num_epochs=1000)
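    As a quick check of the factorial rescaling described at the top of the post (a small hedged sketch added for illustration): dividing the highest-degree feature by i! shrinks it by many orders of magnitude, which keeps the optimization numerically well behaved.

import math
import numpy as np

max_degree = 20
x = np.random.normal(size=(5, 1))                                   # a few sample inputs
powers = np.power(x, np.arange(max_degree).reshape(1, -1))          # bare x^i
rescaled = powers / np.array([math.gamma(i + 1) for i in range(max_degree)])  # x^i / i!, since gamma(i+1) = i!
print(np.abs(powers[:, -1]).max(), np.abs(rescaled[:, -1]).max())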


