Keras - trying to get the "logits" - the layer before the softmax activation function



I am trying to get the "logits" out of my Keras CNN classifier. I tried the method suggested here: link.

First, I created two models to check the implementation:

  1. create_CNN_MNIST - a CNN classifier that returns softmax probabilities
  2. create_CNN_MNIST_logits - a CNN with the same layers as in (1), but with one twist in the last layer: the activation function is changed to linear so that it returns logits

Both models use the same MNIST train and test data. I then applied softmax to the logits, and got a different output than from the softmax CNN.

I can't find the problem in my code. Maybe you can suggest another way to extract the "logits" from the model?
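
For reference, one alternative way to recover logits from an already-trained softmax model is to read the penultimate layer's activations and apply the final Dense layer's weights by hand. A minimal sketch, assuming a trained Sequential model shaped like the ones below (`model` and `x` here are placeholders, not part of the original code):

import numpy as np
from tensorflow import keras

def extract_logits(model, x):
    """Compute pre-softmax scores from a trained classifier whose
    last layer is Dense(..., activation='softmax')."""
    # Sub-model that stops at the penultimate layer's output
    feature_model = keras.Model(inputs=model.input,
                                outputs=model.layers[-2].output)
    features = feature_model.predict(x)
    # Apply the final Dense layer's affine map, skipping its activation
    W, b = model.layers[-1].get_weights()
    return features @ W + b  # logits, shape [batch, num_classes]

Because this reuses the exact trained weights, softmax applied to these logits should reproduce the model's own predictions, which avoids training a second model altogether.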

Code:

import numpy as np
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.optimizers import SGD
from tensorflow.keras import utils

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)

def create_CNN_MNIST_logits():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
    model.add(Dense(10, activation='linear'))
    # compile model
    opt = SGD(learning_rate=0.01, momentum=0.9)

    def my_sparse_categorical_crossentropy(y_true, y_pred):
        return keras.losses.categorical_crossentropy(y_true, y_pred, from_logits=True)

    model.compile(optimizer=opt, loss=my_sparse_categorical_crossentropy, metrics=['accuracy'])
    return model

def create_CNN_MNIST():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
    model.add(Dense(10, activation='softmax'))
    # compile model
    opt = SGD(learning_rate=0.01, momentum=0.9)
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model

# load data
X_train = np.load('./data/X_train.npy')
X_test = np.load('./data/X_test.npy')
y_train = np.load('./data/y_train.npy')
y_test = np.load('./data/y_test.npy')

# create models
model_softmax = create_CNN_MNIST()
model_logits = create_CNN_MNIST_logits()

pixels = 28
channels = 1
num_labels = 10
# Reshape to the format the CNN expects: (batch, height, width, channels)
trainX_cnn = X_train.reshape(X_train.shape[0], pixels, pixels, channels).astype('float32')
testX_cnn = X_test.reshape(X_test.shape[0], pixels, pixels, channels).astype('float32')
# Normalize images from 0-255 to 0-1
trainX_cnn /= 255
testX_cnn /= 255
train_y_cnn = utils.to_categorical(y_train, num_labels)
test_y_cnn = utils.to_categorical(y_test, num_labels)

# train the models:
model_logits.fit(trainX_cnn, train_y_cnn, validation_split=0.2, epochs=10,
                 batch_size=32)
model_softmax.fit(trainX_cnn, train_y_cnn, validation_split=0.2, epochs=10,
                  batch_size=32)

In the evaluation phase, I apply softmax on the logits to check whether it gives the same result as the regular model:

# predict
y_pred_softmax = model_softmax.predict(testX_cnn)
y_pred_logits = model_logits.predict(testX_cnn)
# apply softmax to the logits to get the same result as the regular CNN
y_pred_logits_activated = softmax(y_pred_logits)

Now I get different values in y_pred_logits_activated and y_pred_softmax, which leads to a different accuracy on the test set.
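
As a quick sketch of how to quantify that mismatch (reusing the arrays above), one can check element-wise closeness and compare the argmax predictions:

# How far apart are the two probability tensors?
print(np.abs(y_pred_logits_activated - y_pred_softmax).max())
# Do the two models agree on the predicted class?
agree = (y_pred_logits_activated.argmax(axis=1) ==
         y_pred_softmax.argmax(axis=1)).mean()
print('prediction agreement:', agree)

Note that even with a correct softmax, two independently trained models will not match exactly unless their initialization and training are made identical, which is what the answer below addresses.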

Your models are probably training differently; make sure you set the seed before both fit commands so that they are initialized with the same weights and get the same train/val splits. Also, your softmax may be incorrect:

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x)
    # keepdims=True so the per-row sums broadcast over [batch, outputs]
    return e_x / e_x.sum(axis=1, keepdims=True)

This is mathematically equivalent to subtracting the max (https://stackoverflow.com/a/34969389/10475762), and if your matrix has shape [batch, outputs], the axis should be 1.
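
For completeness, a minimal sketch of the seeding advice above (assuming TF >= 2.7, where tf.keras.utils.set_random_seed is available), together with a quick numerical check of the corrected softmax:

import numpy as np
import tensorflow as tf

# Identical seeds -> identical weight init and shuffling for both models
tf.keras.utils.set_random_seed(42)
model_logits = create_CNN_MNIST_logits()
tf.keras.utils.set_random_seed(42)
model_softmax = create_CNN_MNIST()

# Sanity-check the axis=1 softmax on a random [batch, outputs] matrix
x = np.random.randn(4, 10).astype('float32')
np.testing.assert_allclose(softmax(x), tf.nn.softmax(x).numpy(),
                           rtol=1e-5, atol=1e-6)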
