(3) Common PyTorch Deep Model Architectures for Transfer Learning
Contents
SqueezeNet
Compression strategies
1. Replace 3×3 convolutions with 1×1 convolutions: this single step cuts the parameter count of a convolution operation by 9×;
2. Reduce the number of channels feeding 3×3 convolutions: the cost of a 3×3 convolution scales with 3 × 3 × M × N (where M and N are the channel counts of the input and output feature maps), so the authors shrink M and N to reduce the parameter count;
3. Delay downsampling: the authors argue that larger feature maps carry more information, so downsampling is pushed toward the classification layers.
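As a quick sanity check, the arithmetic behind strategies 1 and 2 can be sketched in a few lines (the channel count 64 below is an arbitrary illustrative value, not from the paper):

```python
# Parameter count of a conv layer (ignoring bias): k*k*M*N,
# where k is the kernel size, M input channels, N output channels.
def conv_params(k, M, N):
    return k * k * M * N

# Strategy 1: swapping a 3x3 kernel for a 1x1 kernel cuts parameters 9x.
assert conv_params(3, 64, 64) // conv_params(1, 64, 64) == 9

# Strategy 2: with the kernel fixed at 3x3, parameters scale with M*N,
# so halving both M and N cuts parameters 4x.
assert conv_params(3, 64, 64) // conv_params(3, 32, 32) == 4
```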
Overall structure
PyTorch (torchvision) ships with SqueezeNet; the following code prints its structure:
Code
from torchvision import models
mod = models.SqueezeNet(version='1_0')  # defaults to the 1.0 version
print(mod)
Version 1.0 structure
SqueezeNet(
(features): Sequential(
(0): Conv2d(3, 96, kernel_size=(7, 7), stride=(2, 2))
(1): ReLU(inplace)
(2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(3): Fire(
(squeeze): Conv2d(96, 16, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
(4): Fire(
(squeeze): Conv2d(128, 16, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
(5): Fire(
(squeeze): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(expand3x3_activation): ReLU(inplace)
)
(6): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(7): Fire(
(squeeze): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(32,128, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(32,128, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(8): Fire(
(squeeze): Conv2d(256,48, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(48,192, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(48,192, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(9): Fire(
(squeeze): Conv2d(384,48, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(48,192, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(48,192, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(10): Fire(
(squeeze): Conv2d(384,64, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(64,256, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(64,256, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(11): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(12): Fire(
(squeeze): Conv2d(512,64, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(64,256, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(64,256, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
)
(classifier): Sequential(
(0): Dropout(p=0.5)
(1): Conv2d(512,1000, kernel_size=(1,1), stride=(1,1))
(2): ReLU(inplace)
(3): AdaptiveAvgPool2d(output_size=(1,1))
)
)
Version 1.1 structure
SqueezeNet(
(features): Sequential(
(0): Conv2d(3,64, kernel_size=(3,3), stride=(2,2))
(1): ReLU(inplace)
(2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(3): Fire(
(squeeze): Conv2d(64,16, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(16,64, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(16,64, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(4): Fire(
(squeeze): Conv2d(128,16, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(16,64, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(16,64, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(6): Fire(
(squeeze): Conv2d(128,32, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(32,128, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(32,128, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(7): Fire(
(squeeze): Conv2d(256,32, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(32,128, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(32,128, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(8): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(9): Fire(
(squeeze): Conv2d(256,48, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(48,192, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(48,192, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(10): Fire(
(squeeze): Conv2d(384,48, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(48,192, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(48,192, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(11): Fire(
(squeeze): Conv2d(384,64, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(64,256, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(64,256, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
(12): Fire(
(squeeze): Conv2d(512,64, kernel_size=(1,1), stride=(1,1))
(squeeze_activation): ReLU(inplace)
(expand1x1): Conv2d(64,256, kernel_size=(1,1), stride=(1,1))
(expand1x1_activation): ReLU(inplace)
(expand3x3): Conv2d(64,256, kernel_size=(3,3), stride=(1,1), padding=(1,1))
(expand3x3_activation): ReLU(inplace)
)
)
(classifier): Sequential(
(0): Dropout(p=0.5)
(1): Conv2d(512,1000, kernel_size=(1,1), stride=(1,1))
(2): ReLU(inplace)
(3): AdaptiveAvgPool2d(output_size=(1,1))
)
)
References
F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size, 2016, arXiv:1602.07360.
VGG16
Model characteristics
1. Uniform structure: every convolution kernel is 3x3, every pooling kernel is 2x2, and every activation is ReLU;
2. Lower computational cost: several consecutive 3x3 convolutions replace the larger kernels of AlexNet (11x11, 7x7, 5x5), keeping the receptive field the same while reducing the number of parameters;
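A short sketch of the arithmetic behind point 2 (the channel count C = 512 is just an illustrative value): n stacked 3x3 convolutions cover the same receptive field as one (2n+1)x(2n+1) kernel, at a fraction of the parameters:

```python
# Receptive field of n stacked 3x3 convolutions with stride 1.
def stacked_rf(n):
    rf = 1
    for _ in range(n):
        rf += 2  # each 3x3 conv adds kernel_size - 1 = 2
    return rf

assert stacked_rf(2) == 5  # two 3x3 convs see a 5x5 area
assert stacked_rf(3) == 7  # three 3x3 convs see a 7x7 area

# Parameter counts (ignoring bias) with C input/output channels:
C = 512
p_stack3 = 3 * (3 * 3 * C * C)  # three stacked 3x3 convs: 27*C*C
p_single7 = 7 * 7 * C * C       # one 7x7 conv: 49*C*C
assert p_stack3 < p_single7
```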
Overall structure
torchvision also ships with VGG16; the following code prints its structure:
Code
Structure