Learning neural networks: building a VGG-11 model and training it on FashionMNIST

The VGG-11 model architecture is as follows:

(Figure: the VGG-11 architecture, showing five convolutional blocks followed by three fully connected layers.)

Based on the figure above, design vgg_block:

The code is as follows:

def vgg_block(num_convs, in_channels, out_channels):
    layers = []  # empty list to collect the block's layers
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels, kernel_size=3,
                                padding=1))  # convolutional layer
        layers.append(nn.ReLU())  # activation function
        in_channels = out_channels
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

Args:

1. num_convs: number of convolutional layers in the VGG block

2. in_channels: number of input channels of the first convolutional layer

3. out_channels: number of output channels of each convolutional layer (i.e., the number of kernels per layer)

Returns an nn.Sequential containing the block's layers. A quick shape check is sketched below.
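As a quick sanity check (a minimal sketch assuming the vgg_block definition above; the block sizes and input shape are chosen here for illustration, not taken from the original post), we can pass a dummy tensor through one block and confirm that the max-pooling layer halves the spatial size:

import torch
from torch import nn

# one block: 2 conv layers, 1 -> 64 channels
blk = vgg_block(num_convs=2, in_channels=1, out_channels=64)
X = torch.rand(1, 1, 224, 224)   # dummy batch: N=1, C=1, H=W=224
print(blk(X).shape)              # expected: torch.Size([1, 64, 112, 112])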

Building the VGG network from vgg_block:

The code is as follows:

def vgg(conv_arch):
    '''Build the VGG network by nesting blocks.'''
    vgg_blks = []
    in_channels = 1
    for num_convs, out_channels in conv_arch:
        vgg_blks.append(vgg_block(num_convs, in_channels, out_channels))
        in_channels = out_channels

    return nn.Sequential(*vgg_blks,
                         nn.Flatten(),  # flatten the tensor before the fully connected layers
                         nn.Linear(out_channels*7*7, 4096), nn.ReLU(), nn.Dropout(0.5),
                         nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
                         nn.Linear(4096, 10)
                         )

Args:

conv_arch: the skeleton of the whole VGG network, one entry per vgg_block; each entry is a (num_convs, out_channels) pair. A per-block shape trace is sketched below.
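To see how the spatial resolution shrinks and the channel count grows block by block, a small sketch (assuming the vgg_block and vgg definitions above; variable names are illustrative) can run a dummy 1×1×224×224 input through the network's top-level modules:

import torch

# build the full VGG-11 skeleton and trace shapes module by module
net = vgg(((1, 64), (1, 128), (2, 256), (2, 512), (2, 512)))
X = torch.rand(1, 1, 224, 224)
for layer in net:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape:', X.shape)
# each of the five Sequential blocks halves H and W: 224 -> 112 -> 56 -> 28 -> 14 -> 7,
# which is why the first Linear layer expects out_channels * 7 * 7 input features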

Complete code:

'''Define a custom VGG block and train VGG-11 on FashionMNIST'''
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms



def try_gpu(i=0):
    '''Use a GPU if one is available, otherwise fall back to the CPU.'''
    if torch.cuda.device_count() >= i+1:
        return torch.device(f"cuda:{i}")
    else:
        return torch.device('cpu')

def vgg_block(num_convs, in_channels, out_channels):
    layers = []  # empty list to collect the block's layers
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels, kernel_size=3,
                                padding=1))  # convolutional layer
        layers.append(nn.ReLU())  # activation function
        in_channels = out_channels
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

'''The original VGG network has 5 convolutional blocks: the first two contain one convolutional layer each,
and the last three contain two each. The first block has 64 output channels, and each subsequent block
doubles that number until it reaches 512. With 8 convolutional layers and 3 fully connected layers,
the network is commonly called VGG-11.'''

conv_arch = ((1, 64), (1, 128), (2, 256), (2, 512), (2, 512))
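# Layer count check (worked out from conv_arch above): 1+1+2+2+2 = 8 convolutional layers,
# plus 3 fully connected layers -> 11 weight layers in total, hence the name "VGG-11".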

# Build VGG-11
def vgg(conv_arch):
    '''Build the VGG network by nesting blocks.'''
    vgg_blks = []
    in_channels = 1
    for num_convs, out_channels in conv_arch:
        vgg_blks.append(vgg_block(num_convs, in_channels, out_channels))
        in_channels = out_channels

    return nn.Sequential(*vgg_blks,
                         nn.Flatten(),  # flatten the tensor before the fully connected layers
                         nn.Linear(out_channels*7*7, 4096), nn.ReLU(), nn.Dropout(0.5),
                         nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
                         nn.Linear(4096, 10)
                         )

'''Train the model'''
# FashionMNIST is much simpler than ImageNet, so shrink every channel count by a factor of ratio
ratio = 4
small_conv_arch = [(pair[0], pair[1] // ratio) for pair in conv_arch]  # list comprehension
net = vgg(small_conv_arch).to(try_gpu())
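# With ratio = 4, small_conv_arch works out to (illustrative expansion of the comprehension above):
#   [(1, 16), (1, 32), (2, 64), (2, 128), (2, 128)]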

# Download FashionMNIST; VGG expects 224x224 inputs, so the 28x28 images are resized to 224x224
train_data = torchvision.datasets.FashionMNIST(
    'FashionMNIST', train=True, download=True,
    transform=transforms.Compose([transforms.ToTensor(), transforms.Resize(224)]))
test_data = torchvision.datasets.FashionMNIST(
    'FashionMNIST', train=False, download=True,
    transform=transforms.Compose([transforms.ToTensor(), transforms.Resize(224)]))

# Wrap the datasets in DataLoaders; shuffle the training set each epoch
batch_size = 128
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_data, batch_size=batch_size)


# loss function
loss_fn = nn.CrossEntropyLoss()

# optimizer
lr = 0.05
optimizer = torch.optim.SGD(net.parameters(), lr=lr)

# number of training epochs
epoch = 10
total_train_step = 0
total_test_step = 0

# training loop
for i in range(epoch):
    print(f"Epoch {i+1} starting")
    net.train()  # enable dropout during training
    for data in train_loader:
        imgs, targets = data
        imgs = imgs.to(try_gpu())
        targets = targets.to(try_gpu())
        outputs = net(imgs)
        loss = loss_fn(outputs, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step = total_train_step + 1
        if total_train_step % 100 == 0:
            print(f"training step: {total_train_step}, loss: {loss.item()}")

    # evaluation on the test set
    net.eval()  # disable dropout during evaluation
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        for data in test_loader:
            imgs, targets = data
            imgs = imgs.to(try_gpu())
            targets = targets.to(try_gpu())
            outputs = net(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss = total_test_loss + loss.item()
            accuracy = (outputs.argmax(1) == targets).sum().item()
            total_accuracy = total_accuracy + accuracy

    print(f"Loss on the whole test set: {total_test_loss}")
    print(f"Accuracy on the whole test set: {total_accuracy / len(test_data)}")
    total_test_step = total_test_step + 1




Summary:

1. VGG-11 constructs the network from reusable convolutional blocks. Different VGG models are defined by varying the number of convolutional layers and the number of output channels in each block.

2. Using blocks makes the network definition very concise and makes it easy to design complex networks.

3. In the VGG paper, Simonyan and Zisserman experimented with various architectures. In particular, they found that deep networks of narrow (i.e., 3x3) convolutions are more effective than shallower networks of wider convolutions; the parameter count comparison below illustrates one reason.
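As a rough illustration of point 3 (a back-of-the-envelope sketch, not from the original post; the channel count is chosen arbitrarily): two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution, but with fewer parameters and an extra nonlinearity in between.

from torch import nn

C = 256  # channel count chosen for illustration
two_3x3 = nn.Sequential(nn.Conv2d(C, C, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(C, C, 3, padding=1), nn.ReLU())
one_5x5 = nn.Conv2d(C, C, 5, padding=2)

def count_params(m):
    return sum(p.numel() for p in m.parameters())

print(count_params(two_3x3))  # 2 * (3*3*C*C + C) = 1,180,160
print(count_params(one_5x5))  # 5*5*C*C + C       = 1,638,656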