How to Implement Attention Mechanisms
This article explains how to implement attention mechanisms, covering a channel attention module and a spatial attention module in PyTorch. Many people run into trouble with these in practice, so let's walk through how to handle them. Read carefully and you should come away with a working understanding!
Channel attention mechanism:
import torch
from torch import nn

class ChannelAttention(nn.Module):
    def __init__(self, in_planes, ratio=16):
        super(ChannelAttention, self).__init__()
        # Global average and max pooling squeeze each channel to a single value
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        # Shared bottleneck MLP implemented as 1x1 convolutions
        self.fc1 = nn.Conv2d(in_planes, in_planes // ratio, 1, bias=False)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Conv2d(in_planes // ratio, in_planes, 1, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = self.fc2(self.relu1(self.fc1(self.avg_pool(x))))
        max_out = self.fc2(self.relu1(self.fc1(self.max_pool(x))))
        out = avg_out + max_out
        return self.sigmoid(out)

if __name__ == '__main__':
    CA = ChannelAttention(32)
    data_in = torch.randn(8, 32, 300, 300)
    data_out = CA(data_in)
    print(data_in.shape)   # torch.Size([8, 32, 300, 300])
    print(data_out.shape)  # torch.Size([8, 32, 1, 1])
Console output:
Loading personal and system profiles took 958 ms.
(base) PS C:\Users\chenxuqi\Desktop\News4cxq\test4cxq> & 'D:\Anaconda3\envs\ssd4pytorch2_2_0\python.exe' 'c:\Users\chenxuqi\.vscode\extensions\ms-python.python-2021.1.502429796\pythonFiles\lib\python\debugpy\launcher' '53813' '--' 'c:\Users\chenxuqi\Desktop\News4cxq\test4cxq\test1.py'
torch.Size([8, 32, 300, 300])
torch.Size([8, 32, 1, 1])
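Note that the module above only produces the per-channel weights of shape (N, C, 1, 1); in a real network you multiply them back onto the feature map, where broadcasting scales every spatial position of a channel by that channel's weight. A minimal sketch of that application step (the helper name is ours, not from the article):

```python
import torch
from torch import nn

class ChannelAttention(nn.Module):
    """Same module as above, reproduced so this sketch runs standalone."""
    def __init__(self, in_planes, ratio=16):
        super(ChannelAttention, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.fc1 = nn.Conv2d(in_planes, in_planes // ratio, 1, bias=False)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Conv2d(in_planes // ratio, in_planes, 1, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = self.fc2(self.relu1(self.fc1(self.avg_pool(x))))
        max_out = self.fc2(self.relu1(self.fc1(self.max_pool(x))))
        return self.sigmoid(avg_out + max_out)

def apply_channel_attention(x, ca):
    # ca(x) has shape (N, C, 1, 1); multiplying broadcasts the weight
    # over the H and W dimensions, rescaling each channel of x.
    return x * ca(x)

ca = ChannelAttention(32)
x = torch.randn(8, 32, 300, 300)
out = apply_channel_attention(x, ca)
print(out.shape)  # torch.Size([8, 32, 300, 300]) -- same shape as the input
```

The attended feature map keeps the input's shape, so the block can be dropped into an existing backbone without changing any downstream layer.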
Spatial attention mechanism:
import torch
from torch import nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super(SpatialAttention, self).__init__()
        assert kernel_size in (3, 7), 'kernel size must be 3 or 7'
        padding = 3 if kernel_size == 7 else 1  # kernel 7 -> padding 3, kernel 3 -> padding 1
        # Fuse the 2-channel (avg, max) map into a single-channel attention map
        self.conv1 = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Pool across the channel dimension to get two H x W descriptor maps
        avg_out = torch.mean(x, dim=1, keepdim=True)
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        x = torch.cat([avg_out, max_out], dim=1)
        x = self.conv1(x)
        return self.sigmoid(x)

if __name__ == '__main__':
    SA = SpatialAttention(7)
    data_in = torch.randn(8, 32, 300, 300)
    data_out = SA(data_in)
    print(data_in.shape)   # torch.Size([8, 32, 300, 300])
    print(data_out.shape)  # torch.Size([8, 1, 300, 300])
Console output:
Loading personal and system profiles took 959 ms.
(ssd4pytorch2_2_0) PS C:\Users\chenxuqi\Desktop\News4cxq\test4cxq> & 'D:\Anaconda3\envs\ssd4pytorch2_2_0\python.exe' 'c:\Users\chenxuqi\.vscode\extensions\ms-python.python-2021.1.502429796\pythonFiles\lib\python\debugpy\launcher' '53827' '--' 'c:\Users\chenxuqi\Desktop\News4cxq\test4cxq\test2.py'
torch.Size([8, 32, 300, 300])
torch.Size([8, 1, 300, 300])
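The two modules above are commonly chained in the CBAM style: apply channel attention first, then spatial attention, each as a multiplicative gate. The article does not show this wrapper, so the `CBAM` class below is a sketch of the usual composition, with compact re-definitions of both modules so the example runs on its own:

```python
import torch
from torch import nn

class ChannelAttention(nn.Module):
    def __init__(self, in_planes, ratio=16):
        super(ChannelAttention, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.fc1 = nn.Conv2d(in_planes, in_planes // ratio, 1, bias=False)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Conv2d(in_planes // ratio, in_planes, 1, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = self.fc2(self.relu1(self.fc1(self.avg_pool(x))))
        max_out = self.fc2(self.relu1(self.fc1(self.max_pool(x))))
        return self.sigmoid(avg_out + max_out)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super(SpatialAttention, self).__init__()
        padding = 3 if kernel_size == 7 else 1
        self.conv1 = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = torch.mean(x, dim=1, keepdim=True)
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        return self.sigmoid(self.conv1(torch.cat([avg_out, max_out], dim=1)))

class CBAM(nn.Module):
    """Sketch of the usual CBAM composition: channel gate, then spatial gate."""
    def __init__(self, in_planes, ratio=16, kernel_size=7):
        super(CBAM, self).__init__()
        self.ca = ChannelAttention(in_planes, ratio)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)  # (N,C,1,1) weights rescale each channel
        x = x * self.sa(x)  # (N,1,H,W) weights rescale each spatial position
        return x

x = torch.randn(2, 32, 16, 16)
y = CBAM(32)(x)
print(y.shape)  # torch.Size([2, 32, 16, 16]) -- shape is preserved
```

Because both gates are sigmoid-weighted multiplications that preserve the input shape, a `CBAM` block can be inserted after any convolutional stage without modifying the rest of the network.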
That concludes this look at how to implement attention mechanisms. Thanks for reading, and I hope you found it useful!