Cross-Entropy + Building Fully-Connected Layers


    Contents

    1. Computing Entropy
    2. Cross-Entropy
    3. Multi-Class Classification with Cross-Entropy
    4. Building Fully-Connected Layers with nn.Linear
    5. Building Fully-Connected Layers by Subclassing nn.Module

    1. Computing Entropy
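    The Entropy function below implements the Shannon entropy of a discrete distribution p, measured in bits:

    H(p) = -\sum_i p_i \log_2 p_i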

    import torch

    def Entropy(x):
        # Shannon entropy in bits: -sum(p * log2(p))
        y = -(x * torch.log2(x)).sum()
        return y

    a = torch.full([4], 1 / 4)                      # uniform
    b = torch.tensor([0.1, 0.1, 0.1, 0.7])          # moderately peaked
    c = torch.tensor([0.001, 0.001, 0.001, 0.999])  # nearly deterministic

    if __name__ == "__main__":
        y1 = Entropy(a)
        y2 = Entropy(b)
        y3 = Entropy(c)
        print(y1, y2, y3)

    Result:

    tensor(2.) tensor(1.3568) tensor(0.0313)
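    As a sanity check on the first result, the uniform distribution a attains the maximum entropy for four outcomes:

    H(a) = -\sum_{i=1}^{4} \tfrac{1}{4} \log_2 \tfrac{1}{4} = \log_2 4 = 2 \text{ bits}

    The increasingly peaked b and c carry correspondingly less uncertainty.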

    2. Cross-Entropy
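    Cross-entropy measures how well a predicted distribution q matches a target distribution p:

    H(p, q) = -\sum_i p_i \log q_i

    When p is a one-hot label y, this reduces to -\log q_y, the negative log-probability assigned to the true class, which is exactly what F.cross_entropy computes from raw logits.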

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 784)
    w = torch.randn(10, 784)
    logits = x @ w.t()               # raw scores, shape (1, 10)
    pred = F.softmax(logits, dim=1)  # probabilities
    pred_log = torch.log(pred)       # log-probabilities

    # cross_entropy on logits equals nll_loss on log-probabilities
    print(F.cross_entropy(logits, torch.tensor([3])))
    print(F.nll_loss(pred_log, torch.tensor([3])))

    Result:

    tensor(28.7205)
    tensor(28.7205)
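    The two calls agree because F.cross_entropy fuses log_softmax and nll_loss into a single, numerically stable operation; computing torch.log(F.softmax(...)) in two steps, as above, can underflow for extreme logits. A minimal sketch of the equivalence (tensor values here are random and illustrative):

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    logits = torch.randn(1, 10)
    target = torch.tensor([3])

    # F.cross_entropy applies log_softmax and nll_loss internally,
    # so the fused and stepwise versions give the same loss.
    fused = F.cross_entropy(logits, target)
    stepwise = F.nll_loss(F.log_softmax(logits, dim=1), target)
    print(torch.allclose(fused, stepwise))  # True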

    3. Multi-Class Classification with Cross-Entropy

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import datasets, transforms

    batch_size = 200
    learning_rate = 0.01
    epochs = 10

    train_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=True,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=batch_size, shuffle=True)
    test_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=False,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=batch_size, shuffle=True)

    # weights and biases for a 784 -> 200 -> 200 -> 10 MLP
    w1, b1 = torch.randn(200, 784, requires_grad=True), \
             torch.zeros(200, requires_grad=True)
    w2, b2 = torch.randn(200, 200, requires_grad=True), \
             torch.zeros(200, requires_grad=True)
    w3, b3 = torch.randn(10, 200, requires_grad=True), \
             torch.zeros(10, requires_grad=True)

    # initialization (Kaiming)
    torch.nn.init.kaiming_normal_(w1)
    torch.nn.init.kaiming_normal_(w2)
    torch.nn.init.kaiming_normal_(w3)

    def forward(x):
        x = x @ w1.t() + b1
        x = F.relu(x)
        x = x @ w2.t() + b2
        x = F.relu(x)
        x = x @ w3.t() + b3
        x = F.relu(x)  # note: CrossEntropyLoss expects raw logits; this final ReLU is usually omitted
        return x

    optimizer = torch.optim.SGD([w1, b1, w2, b2, w3, b3], lr=learning_rate)
    criteon = nn.CrossEntropyLoss()

    for epoch in range(epochs):
        for batch_idx, (data, target) in enumerate(train_loader):
            data = data.view(-1, 28 * 28)
            logits = forward(data)
            loss = criteon(logits, target)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            if batch_idx % 100 == 0:
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                    100. * batch_idx / len(train_loader), loss.item()))

        test_loss = 0
        correct = 0
        for data, target in test_loader:
            data = data.view(-1, 28 * 28)
            logits = forward(data)
            test_loss += criteon(logits, target).item()

            pred = logits.data.max(1)[1]
            correct += pred.eq(target.data).sum()

        test_loss /= len(test_loader.dataset)
        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, correct, len(test_loader.dataset),
            100. * correct / len(test_loader.dataset)))

    Output:

    Train Epoch: 0 [0/60000 (0%)] Loss: 2.816831
    Train Epoch: 0 [20000/60000 (33%)] Loss: 0.695306
    Train Epoch: 0 [40000/60000 (67%)] Loss: 0.550181

    Test set: Average loss: 0.0018, Accuracy: 8973/10000 (89%)

    Train Epoch: 1 [0/60000 (0%)] Loss: 0.334501
    Train Epoch: 1 [20000/60000 (33%)] Loss: 0.430899
    Train Epoch: 1 [40000/60000 (67%)] Loss: 0.378964

    Test set: Average loss: 0.0014, Accuracy: 9215/10000 (92%)

    Train Epoch: 2 [0/60000 (0%)] Loss: 0.299217
    Train Epoch: 2 [20000/60000 (33%)] Loss: 0.254395
    Train Epoch: 2 [40000/60000 (67%)] Loss: 0.297133

    Test set: Average loss: 0.0012, Accuracy: 9307/10000 (93%)

    Train Epoch: 3 [0/60000 (0%)] Loss: 0.231520
    Train Epoch: 3 [20000/60000 (33%)] Loss: 0.220551
    Train Epoch: 3 [40000/60000 (67%)] Loss: 0.214800

    Test set: Average loss: 0.0011, Accuracy: 9398/10000 (93%)

    Train Epoch: 4 [0/60000 (0%)] Loss: 0.203065
    Train Epoch: 4 [20000/60000 (33%)] Loss: 0.252411
    Train Epoch: 4 [40000/60000 (67%)] Loss: 0.227913

    Test set: Average loss: 0.0010, Accuracy: 9442/10000 (94%)

    Train Epoch: 5 [0/60000 (0%)] Loss: 0.197479
    Train Epoch: 5 [20000/60000 (33%)] Loss: 0.155061
    Train Epoch: 5 [40000/60000 (67%)] Loss: 0.231575

    Test set: Average loss: 0.0009, Accuracy: 9470/10000 (94%)

    Train Epoch: 6 [0/60000 (0%)] Loss: 0.139516
    Train Epoch: 6 [20000/60000 (33%)] Loss: 0.182578
    Train Epoch: 6 [40000/60000 (67%)] Loss: 0.189277

    Test set: Average loss: 0.0009, Accuracy: 9512/10000 (95%)

    Train Epoch: 7 [0/60000 (0%)] Loss: 0.231095
    Train Epoch: 7 [20000/60000 (33%)] Loss: 0.137316
    Train Epoch: 7 [40000/60000 (67%)] Loss: 0.159869

    Test set: Average loss: 0.0008, Accuracy: 9541/10000 (95%)

    Train Epoch: 8 [0/60000 (0%)] Loss: 0.164737
    Train Epoch: 8 [20000/60000 (33%)] Loss: 0.082444
    Train Epoch: 8 [40000/60000 (67%)] Loss: 0.103773

    Test set: Average loss: 0.0008, Accuracy: 9560/10000 (95%)

    Train Epoch: 9 [0/60000 (0%)] Loss: 0.139510
    Train Epoch: 9 [20000/60000 (33%)] Loss: 0.100558
    Train Epoch: 9 [40000/60000 (67%)] Loss: 0.188554

    Test set: Average loss: 0.0007, Accuracy: 9576/10000 (95%)
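    One optional refinement, not in the original script: the test loop above still records operations for autograd. Wrapping evaluation in torch.no_grad() skips gradient bookkeeping, saving memory and time. A minimal sketch, reusing forward, criteon, and test_loader from the script above:

    # Evaluate without gradient tracking; forward, criteon and
    # test_loader are assumed to be defined as in the script above.
    with torch.no_grad():
        test_loss, correct = 0, 0
        for data, target in test_loader:
            data = data.view(-1, 28 * 28)
            logits = forward(data)
            test_loss += criteon(logits, target).item()
            correct += (logits.argmax(dim=1) == target).sum().item()
        test_loss /= len(test_loader.dataset)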

    4. Building Fully-Connected Layers with nn.Linear

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    x = torch.randn(1, 784)

    layer1 = nn.Linear(784, 200)  # nn.Linear(in_features, out_features)
    layer2 = nn.Linear(200, 200)
    layer3 = nn.Linear(200, 10)

    x = layer1(x)
    x = F.relu(x, inplace=True)   # nonlinearity between layers
    x = layer2(x)
    x = F.relu(x, inplace=True)
    x = layer3(x)
    x = F.relu(x, inplace=True)
    print(x.shape)

    Result:

    torch.Size([1, 10])
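    Note that nn.Linear stores its weight with shape (out_features, in_features), which is why the manual version in section 3 computed x @ w.t(). A quick sketch confirming the equivalence:

    import torch
    import torch.nn as nn

    layer = nn.Linear(784, 200)
    x = torch.randn(1, 784)

    print(layer.weight.shape)  # torch.Size([200, 784]), i.e. (out_features, in_features)

    # layer(x) computes x @ W.t() + b, matching section 3's manual layers
    manual = x @ layer.weight.t() + layer.bias
    print(torch.allclose(layer(x), manual))  # True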

    5. Building Fully-Connected Layers by Subclassing nn.Module

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    from torchvision import datasets, transforms

    batch_size = 200
    epochs = 10
    learning_rate = 0.01

    train_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=True,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=batch_size, shuffle=True)
    test_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=False,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=batch_size, shuffle=True)

    class MLP(nn.Module):  # must subclass nn.Module
        def __init__(self):  # the constructor could take layer sizes as arguments; here they are fixed
            super(MLP, self).__init__()
            self.model = nn.Sequential(
                nn.Linear(784, 200),
                nn.ReLU(inplace=True),
                nn.Linear(200, 200),
                nn.ReLU(inplace=True),
                nn.Linear(200, 10),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            x = self.model(x)
            return x

    # training
    device = torch.device('cuda:0')
    net = MLP().to(device)
    optimizer = optim.SGD(net.parameters(), lr=learning_rate)
    criteon = nn.CrossEntropyLoss()

    for epoch in range(epochs):
        for batch_idx, (data, target) in enumerate(train_loader):
            data = data.view(-1, 28 * 28)
            data, target = data.to(device), target.to(device)

            logits = net(data)
            loss = criteon(logits, target)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            if batch_idx % 100 == 0:
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                    100. * batch_idx / len(train_loader), loss.item()))

        test_loss = 0
        correct = 0
        for data, target in test_loader:
            data = data.view(-1, 28 * 28)
            data, target = data.to(device), target.to(device)

            logits = net(data)
            test_loss += criteon(logits, target).item()

            pred = logits.data.max(1)[1]
            correct += pred.eq(target.data).sum()

        test_loss /= len(test_loader.dataset)
        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, correct, len(test_loader.dataset),
            100. * correct / len(test_loader.dataset)))

    Output:

    Train Epoch: 0 [0/60000 (0%)] Loss: 2.309051
    Train Epoch: 0 [20000/60000 (33%)] Loss: 2.110557
    Train Epoch: 0 [40000/60000 (67%)] Loss: 1.602919

    Test set: Average loss: 0.0057, Accuracy: 7220/10000 (72%)

    Train Epoch: 1 [0/60000 (0%)] Loss: 1.223766
    Train Epoch: 1 [20000/60000 (33%)] Loss: 0.843435
    Train Epoch: 1 [40000/60000 (67%)] Loss: 0.826475

    Test set: Average loss: 0.0034, Accuracy: 7994/10000 (79%)

    Train Epoch: 2 [0/60000 (0%)] Loss: 0.755618
    Train Epoch: 2 [20000/60000 (33%)] Loss: 0.544501
    Train Epoch: 2 [40000/60000 (67%)] Loss: 0.517594

    Test set: Average loss: 0.0028, Accuracy: 8441/10000 (84%)

    Train Epoch: 3 [0/60000 (0%)] Loss: 0.611635
    Train Epoch: 3 [20000/60000 (33%)] Loss: 0.511144
    Train Epoch: 3 [40000/60000 (67%)] Loss: 0.603972

    Test set: Average loss: 0.0026, Accuracy: 8738/10000 (87%)

    Train Epoch: 4 [0/60000 (0%)] Loss: 0.565730
    Train Epoch: 4 [20000/60000 (33%)] Loss: 0.513728
    Train Epoch: 4 [40000/60000 (67%)] Loss: 0.529109

    Test set: Average loss: 0.0025, Accuracy: 8881/10000 (88%)

    Train Epoch: 5 [0/60000 (0%)] Loss: 0.525373
    Train Epoch: 5 [20000/60000 (33%)] Loss: 0.470926
    Train Epoch: 5 [40000/60000 (67%)] Loss: 0.517402

    Test set: Average loss: 0.0024, Accuracy: 8978/10000 (89%)

    Train Epoch: 6 [0/60000 (0%)] Loss: 0.505295
    Train Epoch: 6 [20000/60000 (33%)] Loss: 0.457965
    Train Epoch: 6 [40000/60000 (67%)] Loss: 0.552664

    Test set: Average loss: 0.0023, Accuracy: 9052/10000 (90%)

    Train Epoch: 7 [0/60000 (0%)] Loss: 0.635619
    Train Epoch: 7 [20000/60000 (33%)] Loss: 0.365278
    Train Epoch: 7 [40000/60000 (67%)] Loss: 0.370224

    Test set: Average loss: 0.0023, Accuracy: 9117/10000 (91%)

    Train Epoch: 8 [0/60000 (0%)] Loss: 0.400364
    Train Epoch: 8 [20000/60000 (33%)] Loss: 0.452789
    Train Epoch: 8 [40000/60000 (67%)] Loss: 0.415130

    Test set: Average loss: 0.0022, Accuracy: 9188/10000 (91%)

    Train Epoch: 9 [0/60000 (0%)] Loss: 0.464027
    Train Epoch: 9 [20000/60000 (33%)] Loss: 0.385774
    Train Epoch: 9 [40000/60000 (67%)] Loss: 0.410523

    Test set: Average loss: 0.0022, Accuracy: 9183/10000 (91%)
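    A portability note, not part of the original: the script hard-codes device = torch.device('cuda:0') and will fail on machines without a CUDA GPU. A common guard falls back to the CPU:

    import torch

    # Use the GPU when available, otherwise fall back to the CPU,
    # so the same script also runs (more slowly) without CUDA.
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')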

    While the program is running, you can open Task Manager to check GPU utilization.
