MobileFormer (CVPR 2022) uses an MLP (multi-layer perceptron), so let's take a look at how it works.
It is essentially three fully connected layers: the first two are followed by BatchNorm, while the last one is not.
import torch
from torch import nn

class MLP(nn.Module):
    '''widths = [in_channel, ..., out_channel]; hidden layers use ReLU6.'''
    def __init__(self, widths, bn=True, p=0.5):
        super().__init__()
        self.widths = widths
        layers = []
        # hidden layers: Linear -> (BatchNorm) -> Dropout -> ReLU6
        for n in range(len(widths) - 2):
            block = [nn.Linear(widths[n], widths[n + 1])]
            if bn:
                block.append(nn.BatchNorm1d(widths[n + 1]))
            block += [nn.Dropout(p=p), nn.ReLU6(inplace=True)]
            layers.append(nn.Sequential(*block))
        # final layer: no BatchNorm, no activation
        layers.append(nn.Sequential(nn.Linear(widths[-2], widths[-1]),
                                    nn.Dropout(p=p)))
        self.mlp = nn.Sequential(*layers)

    def forward(self, x):
        return self.mlp(x)
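To see the structure concretely, here is a minimal self-contained sketch of the same idea, built directly with `nn.Sequential` (the widths `[64, 128, 10]` and batch size are illustrative, not from the original post): each hidden layer is a Linear followed by BatchNorm and an activation, and the final layer is a plain Linear with no norm or activation.

```python
import torch
from torch import nn

# Hypothetical widths for illustration: 64 -> 128 -> 10.
widths = [64, 128, 10]
blocks = []
for i in range(len(widths) - 2):
    blocks += [nn.Linear(widths[i], widths[i + 1]),
               nn.BatchNorm1d(widths[i + 1]),   # norm on hidden layers only
               nn.ReLU(inplace=True)]
blocks.append(nn.Linear(widths[-2], widths[-1]))  # output layer: no norm
mlp = nn.Sequential(*blocks)

x = torch.randn(4, 64)   # batch of 4 feature vectors
y = mlp(x)
print(y.shape)           # torch.Size([4, 10])
```

Note that BatchNorm1d expects at least two samples per batch in training mode, which is why the example uses a batch of 4.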
Copyright note: this is a reprinted article and the copyright belongs to the original author.
Original link: https://blog.csdn.net/jacke121/article/details/123968536