Here is an autoencoder I built from a PyTorch tutorial:
epochs = 1000
from pylab import plt
plt.style.use('seaborn')
import torch.utils.data as data_utils
import torch
import torchvision
import torch.nn as nn
from torch.autograd import Variable
cuda = torch.cuda.is_available()
FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
import numpy as np
import pandas as pd
import datetime as dt
features = torch.tensor(np.array([ [1,2,3],[1,2,3],[100,200,500] ]))
print(features)
batch = 10
data_loader = torch.utils.data.DataLoader(features, batch_size=2, shuffle=False)
encoder = nn.Sequential(nn.Linear(3,batch), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(batch,3), nn.Sigmoid())
autoencoder = nn.Sequential(encoder, decoder)
optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.001)
encoded_images = []
for i in range(epochs):
    for j, images in enumerate(data_loader):
        # images = images.view(images.size(0), -1)
        images = Variable(images).type(FloatTensor)
        optimizer.zero_grad()
        reconstructions = autoencoder(images)
        loss = torch.dist(images, reconstructions)
        loss.backward()
        optimizer.step()
        # encoded_images.append(encoder(images))
        # print(decoder(torch.tensor(np.array([1,2,3])).type(FloatTensor)))
encoded_images = []
for j, images in enumerate(data_loader):
    images = images.view(images.size(0), -1)
    images = Variable(images).type(FloatTensor)
    encoded_images.append(encoder(images))
I can see that the encoded images do indeed have the newly created dimension of 10. To understand the matrix operations going on under the hood, I tried to print the matrix dimensions of encoder and decoder, but shape is not available on nn.Sequential.
How do I print the matrix dimensions of an nn.Sequential?
nn.Sequential is not a "layer" but rather a "container". It can store several layers and manages their execution (along with a few other features).
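For example, here is a minimal sketch (using the encoder defined in your question) that lists the sub-modules stored in the container:
# nn.Sequential keeps its sub-modules in order and calls them one after
# another in forward(); they can be indexed and iterated over directly.
for idx, layer in enumerate(encoder):
    print(idx, layer)
# 0 Linear(in_features=3, out_features=10, bias=True)
# 1 Sigmoid()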
In your case, each nn.Sequential contains a linear layer and a non-linear nn.Sigmoid activation. To get the shape of the weights of the first layer in an nn.Sequential, you can simply do:
encoder[0].weight.shape
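If you want the dimensions of every weight matrix at once, here is a short sketch (assuming the encoder and decoder from your code) that walks over the named parameters of both containers:
# Only the nn.Linear layers have parameters; the Sigmoid activations have none.
for name, module in [("encoder", encoder), ("decoder", decoder)]:
    for param_name, param in module.named_parameters():
        print(name, param_name, tuple(param.shape))
# encoder 0.weight (10, 3)
# encoder 0.bias (10,)
# decoder 0.weight (3, 10)
# decoder 0.bias (3,)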