image_dim_ordering - what am I missing here?



EDIT: the problem could not be reproduced with CUDA 8.0 and a Titan X (Pascal).

Using Keras with the tensorflow backend, I have a problem related to image_dim_ordering. When I use image_dim_ordering='th' in the Keras config file everything works well, but when I use 'tf' the training never really improves beyond 0.5 accuracy.

The motivation is that my real-time augmentation is currently very expensive, and I want to get rid of the needless reshaping that the 'th' dim ordering convention forces when moving data to tensorflow.

I tried to recreate the problem with simple code that allows others to reproduce it, which may also help me understand what I am doing wrong here. I am well aware of the different channels/height/width conventions, and at least I think I handled them correctly.
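For concreteness, here is a minimal sketch (plain numpy, variable names are mine) of the two conventions as I understand them:

import numpy as np

# 'th' (Theano) ordering puts channels first: (channels, height, width)
patch_th = np.zeros((1, 100, 100), dtype=np.float32)

# 'tf' (TensorFlow) ordering puts channels last: (height, width, channels)
patch_tf = np.zeros((100, 100, 1), dtype=np.float32)

# moving a patch between the conventions is a transpose, not a plain reshape
assert np.array_equal(np.transpose(patch_th, (1, 2, 0)), patch_tf)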

While I did not fully reproduce my problem in this compact example (perhaps because it is a trivial task), the training results repeatedly differ and are worse in the 'tf' case, even when I try different seed values. Note that in this reproduction code, all the network has to do is distinguish patches that are entirely -1.0 from patches that are entirely 1.0.

This is my '~/.keras/keras.json':

{
    "floatx": "float32",
    "epsilon": 1e-07,
    "backend": "tensorflow",
    "image_dim_ordering": "th"  
}
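To rule out a stale or ignored config file, a quick check (using the same Keras 1.x call that the repro code below relies on) is to print the ordering Keras actually picked up:

from keras import backend as K

# prints 'th' or 'tf', depending on what Keras actually loaded from ~/.keras/keras.json
print(K.image_dim_ordering())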

My tensorflow version is "0.11.0 r0" (it also happens on 0.10), and my Keras is the latest git pull as of today.

With 'th' as image_dim_ordering I get accuracy >= 0.99 by epoch 4 with three different seeds. With 'tf' as the dim ordering, accuracy reaches >= 0.9 much later, only around epoch 24, as you can see in the log below.

Here is self-contained code that should reproduce the problem:

from keras import backend as K
import keras.optimizers
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense, Input
from keras.models import Model
import numpy as np


def make_model(input_dim_size):
    # Build the input shape according to the configured dim ordering:
    # 'tf' is channels-last (height, width, channels),
    # 'th' is channels-first (channels, height, width).
    if K.image_dim_ordering() == 'tf':
        input_shape = (input_dim_size, input_dim_size, 1)
    else:
        input_shape = (1, input_dim_size, input_dim_size)

    img_input = Input(shape=input_shape)

    # Five conv -> relu -> pool blocks (Keras 1.x Convolution2D API).
    x = Convolution2D(64, 5, 5, border_mode='same')(img_input)
    x = Activation('relu')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)
    x = Convolution2D(64, 5, 5, border_mode='same')(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)
    x = Convolution2D(64, 5, 5, border_mode='same')(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)
    x = Convolution2D(128, 5, 5, border_mode='same')(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)
    x = Convolution2D(128, 5, 5, border_mode='same')(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)

    # Fully connected head with heavy dropout, single sigmoid output.
    x = Flatten()(x)
    x = Dense(1024 * 2)(x)
    x = Activation('relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(1024 * 2)(x)
    x = Activation('relu')(x)
    x = Dropout(0.75)(x)
    x = Dense(200)(x)
    x = Activation('relu')(x)
    x = Dropout(0.75)(x)
    x = Dense(1, activation='sigmoid')(x)

    model = Model(img_input, x)
    learning_rate = 0.01
    sgd = keras.optimizers.SGD(lr=learning_rate, momentum=0.9, nesterov=True)
    model.summary()
    model.compile(loss='binary_crossentropy',
                  optimizer=sgd,
                  metrics=['accuracy'])
    return model


np.random.seed(456)


def dummy_generator(mini_batch_size=64, block_size=100):
    # The batch tensor layout also depends on the dim ordering convention.
    if K.image_dim_ordering() == 'tf':
        tensor_X_shape = (mini_batch_size, block_size, block_size, 1)
    else:
        tensor_X_shape = (mini_batch_size, 1, block_size, block_size)

    X = np.zeros(tensor_X_shape, dtype=np.float32)
    y = np.zeros((mini_batch_size, 1))

    while True:
        # Trivial task: even samples are entirely -1.0 (label 0),
        # odd samples are entirely +1.0 (label 1).
        for b in range(mini_batch_size):
            X[b, :, :, :] = (float(b % 2) * 2.0) - 1.0
            y[b, :] = float(b % 2)
        yield X, y


# K.tf is the tensorflow module as imported by the Keras backend.
with K.tf.device('/gpu:2'):
    K.set_session(K.tf.Session(config=K.tf.ConfigProto(allow_soft_placement=True,
                                                       log_device_placement=False)))
    MINI_BATCH_SIZE = 64
    PATCH_SIZE = 100

    model = make_model(PATCH_SIZE)
    gen = dummy_generator(mini_batch_size=MINI_BATCH_SIZE, block_size=PATCH_SIZE)

    # Keras 1.x fit_generator signature: (generator, samples_per_epoch, nb_epoch, ...)
    model.fit_generator(gen, MINI_BATCH_SIZE * 10,
                        100, verbose=1,
                        callbacks=[],
                        validation_data=None,
                        nb_val_samples=None,
                        max_q_size=1,
                        nb_worker=1, pickle_safe=False)
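As a sanity check (a small addition of mine, reusing the names defined above, and assuming I remember the Keras 1.x model.input_shape property correctly), one can confirm that the generator output agrees with the model's expected input shape:

Xb, yb = next(gen)
print('batch shape:', Xb.shape)                 # (64, 100, 100, 1) for 'tf', (64, 1, 100, 100) for 'th'
print('model input shape:', model.input_shape)  # should match, up to the batch dimension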

For the 'tf' case, this is the training log (and it looks very similar with different seeds):

Epoch 1/100
640/640 [==============================] - 1s - loss: 0.6932 - acc: 0.4781     
Epoch 2/100
640/640 [==============================] - 0s - loss: 0.6932 - acc: 0.4938     
Epoch 3/100
640/640 [==============================] - 0s - loss: 0.6921 - acc: 0.5203     
Epoch 4/100
640/640 [==============================] - 0s - loss: 0.6920 - acc: 0.5469     
Epoch 5/100
640/640 [==============================] - 0s - loss: 0.6935 - acc: 0.4875     
Epoch 6/100
640/640 [==============================] - 0s - loss: 0.6941 - acc: 0.4969     
Epoch 7/100
640/640 [==============================] - 0s - loss: 0.6937 - acc: 0.5047     
Epoch 8/100
640/640 [==============================] - 0s - loss: 0.6931 - acc: 0.5312     
Epoch 9/100
640/640 [==============================] - 0s - loss: 0.6923 - acc: 0.5250     
Epoch 10/100
640/640 [==============================] - 0s - loss: 0.6929 - acc: 0.5281     
Epoch 11/100
640/640 [==============================] - 0s - loss: 0.6934 - acc: 0.4953     
Epoch 12/100
640/640 [==============================] - 0s - loss: 0.6918 - acc: 0.5234     
Epoch 13/100
640/640 [==============================] - 0s - loss: 0.6930 - acc: 0.5125     
Epoch 14/100
640/640 [==============================] - 0s - loss: 0.6939 - acc: 0.4797     
Epoch 15/100
640/640 [==============================] - 0s - loss: 0.6936 - acc: 0.5047     
Epoch 16/100
640/640 [==============================] - 0s - loss: 0.6917 - acc: 0.4922     
Epoch 17/100
640/640 [==============================] - 0s - loss: 0.6945 - acc: 0.4891     
Epoch 18/100
640/640 [==============================] - 0s - loss: 0.6948 - acc: 0.5000     
Epoch 19/100
640/640 [==============================] - 0s - loss: 0.6968 - acc: 0.4594     
Epoch 20/100
640/640 [==============================] - 0s - loss: 0.6919 - acc: 0.5391     
Epoch 21/100
640/640 [==============================] - 0s - loss: 0.6904 - acc: 0.5172     
Epoch 22/100
640/640 [==============================] - 0s - loss: 0.6881 - acc: 0.5906     
Epoch 23/100
640/640 [==============================] - 0s - loss: 0.6804 - acc: 0.6359     
Epoch 24/100
640/640 [==============================] - 0s - loss: 0.6470 - acc: 0.8219     
Epoch 25/100
640/640 [==============================] - 0s - loss: 0.4134 - acc: 0.9625     
Epoch 26/100
640/640 [==============================] - 0s - loss: 0.2347 - acc: 0.9953     
Epoch 27/100
640/640 [==============================] - 0s - loss: 0.1231 - acc: 1.0000 

For the 'th' case, this is the training log (and it looks very similar with different seeds):

Epoch 1/100
640/640 [==============================] - 3s - loss: 0.6891 - acc: 0.5594     
Epoch 2/100
640/640 [==============================] - 2s - loss: 0.6079 - acc: 0.7328     
Epoch 3/100
640/640 [==============================] - 2s - loss: 0.3166 - acc: 0.9422     
Epoch 4/100
640/640 [==============================] - 2s - loss: 0.1767 - acc: 0.9969  

I found it suspicious that it runs so fast in the tensorflow case (0s per epoch), but after adding debug prints to the generator, it does seem to be called. I guess this may be related to Keras not needing to reshape anything in the 'tf' case, but 2-3 seconds per epoch in the 'th' case still sounds too long.
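For what it's worth, my understanding (an assumption on my part, not verified against the Keras source) is that with the tensorflow backend and 'th' ordering, every conv input gets transposed to channels-last and back, roughly like this numpy illustration of the per-batch overhead:

import numpy as np

# hypothetical illustration: a 'th' batch (N, C, H, W) must be transposed
# to (N, H, W, C) before tensorflow's conv ops can consume it
batch_th = np.random.rand(64, 1, 100, 100).astype(np.float32)
batch_tf = np.transpose(batch_th, (0, 2, 3, 1))  # repeated for every layer call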

I would greatly appreciate it if anyone could try to reproduce the results I am seeing and help me understand what exactly I am missing :)

This thread is a bit old, but I am replying anyway in case someone faces the same problem.

The error is caused by an inconsistent Keras backend configuration…

{
    "floatx": "float32",
    "epsilon": 1e-07,
    "backend": "tensorflow",
    "image_dim_ordering": "th"  
}

The configuration uses tensorflow as the backend, but Theano's image dimension ordering instead of tensorflow's. Changing image_dim_ordering to 'tf' should solve the problem:

"image_dim_ordering": "tf" 
