I have an ubermodel that uses a submodel as a layer for feature extraction. My code is modular, so I can easily switch which submodel performs feature extraction simply by changing the submodel I specify:
...
elif FEATURE_EXTRACTOR == "VGG16":
    Features = keras.applications.VGG16(
        weights="imagenet",
        pooling=FEATURE_POOLING,
        include_top=False
    )
elif FEATURE_EXTRACTOR == "EfficientNetB0":
    Features = keras_applications_master.keras_applications.efficientnet.EfficientNetB0(
        # ^ Local copy of the official keras-applications repo: https://github.com/keras-team/keras-applications
        # because pip install --upgrade keras doesn't install a version with EfficientNet.
        weights="imagenet",
        include_top=False,
        pooling=FEATURE_POOLING,
        classes=None
    )
...
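For context, a minimal sketch of how the selected submodel might be wrapped into the ubermodel (the input shape, head layers, and NUM_CLASSES below are hypothetical placeholders rather than the actual architecture, and it assumes FEATURE_POOLING is "avg" or "max" so the submodel output is 2D):
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(224, 224, 3))                      # assumed input size
x = Features(inputs)                                     # chosen submodel used as a layer
outputs = Dense(NUM_CLASSES, activation="softmax")(x)    # placeholder classification head
model = Model(inputs=inputs, outputs=outputs)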
My routines for saving and loading the ubermodel and its weights also know which submodel is used for feature extraction:
model.load_weights(submodel_specific_path)
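(For illustration only: a minimal sketch of how such a submodel-specific path might be built and used for both saving and resuming; the directory layout and the get_model_path helper are hypothetical, not the actual routine.)
import os

def get_model_path(feature_extractor):
    # Hypothetical helper: one weights file per feature-extraction submodel.
    return os.path.join("checkpoints", feature_extractor, "ubermodel_weights.h5")

submodel_specific_path = get_model_path(FEATURE_EXTRACTOR)
model.save_weights(submodel_specific_path)   # after the initial training run
model.load_weights(submodel_specific_path)   # to continue training / fine-tune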
For any submodel, I can do an initial training run and save the ubermodel to disk. If I then try to continue training or fine-tune any ubermodel that contains the VGG16 submodel, loading the weights with load_weights works fine. However, when I load_weights on any ubermodel that has the EfficientNet submodel (or, say, keras.applications.xception.Xception), I get the following error:
Traceback (most recent call last):
  File "image_model.py", line 284, in <module>
    model.load_weights(model_path)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\saving.py", line 492, in load_wrapper
    return load_function(*args, **kwargs)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\network.py", line 1227, in load_weights
    reshape=reshape)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\saving.py", line 1294, in load_weights_from_hdf5_group_by_name
    reshape=reshape)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\saving.py", line 861, in preprocess_weights_for_loading
    weights = convert_nested_model(weights)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\saving.py", line 836, in convert_nested_model
    original_backend=original_backend))
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\keras\engine\saving.py", line 980, in preprocess_weights_for_loading
    weights[0] = np.transpose(weights[0], (3, 2, 0, 1))
  File "<__array_function__ internals>", line 6, in transpose
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\numpy\core\fromnumeric.py", line 651, in transpose
    return _wrapfunc(a, 'transpose', axes)
  File "C:\Users\Username\Anaconda3\envs\tensorflow1\lib\site-packages\numpy\core\fromnumeric.py", line 61, in _wrapfunc
    return bound(*args, **kwds)
ValueError: axes don't match array
What am I doing wrong?
I could not recreate your problem, but I found this GitHub issue, which says the error appears when a multi-GPU model is used. So the trick should be to save the original template model rather than the multi-GPU wrapper:
parallel_model = multi_gpu_model(model, gpus=G)
model.save(...)            # save the original template model
instead of
parallel_model = multi_gpu_model(model, gpus=G)
parallel_model.save(...)   # saving the multi-GPU wrapper triggers the error
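A minimal runnable sketch of that pattern (assuming a compiled template model and G available GPUs; the file name is arbitrary):
from keras.utils import multi_gpu_model

G = 2                                        # assumed number of GPUs
parallel_model = multi_gpu_model(model, gpus=G)
parallel_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# ... train with parallel_model.fit(...) ...

# Save the template model, not the multi-GPU wrapper,
# so that load_weights / load_model work later.
model.save('model.h5')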
That said, I was able to successfully create an EfficientNetB0 model, evaluate it, save it, and finally load it back.
Code to build, evaluate, and save the model. Save -
%tensorflow_version 1.x
import keras
from keras.models import Model
from keras.layers import GlobalAveragePooling2D, Dense, Flatten
from keras_efficientnets import EfficientNetB0
model = EfficientNetB0(input_shape=(224, 224, 3), classes=1000, include_top=False, weights='imagenet')
x = model.output
x = GlobalAveragePooling2D()(x)
x = Dense(20, activation='relu')(x)
x = Dense(17, activation='softmax')(x)
model = Model(inputs = model.input, outputs = x)
# summarize model
#model.summary()
# (2) Get Data
import tflearn.datasets.oxflower17 as oxflower17
x, y = oxflower17.load_data(one_hot=True)
# (4) Compile
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x, y, batch_size=64, epochs= 1, verbose=1, validation_split=0.2, shuffle=True)
# evaluate the model
scores = model.evaluate(x, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# save model and architecture to single file
model.save("model.h5py")
print("Saved model to disk")
Output -
TensorFlow 1.x selected.
Using TensorFlow backend.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/helpers/summarizer.py:9: The name tf.summary.merge is deprecated. Please use tf.compat.v1.summary.merge instead.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/helpers/trainer.py:25: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/collections.py:13: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/config.py:123: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/config.py:129: The name tf.add_to_collection is deprecated. Please use tf.compat.v1.add_to_collection instead.
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tflearn/config.py:131: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.
Downloading Oxford 17 category Flower Dataset, Please wait...
100.0% 60276736 / 60270631
('Succesfully downloaded', '17flowers.tgz', 60270631, 'bytes.')
File Extracted
Starting to parse images...
Parsing Done!
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:431: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:438: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
Train on 1088 samples, validate on 272 samples
Epoch 1/1
1088/1088 [==============================] - 203s 187ms/step - loss: 1.6433 - accuracy: 0.5561 - val_loss: 1.9315 - val_accuracy: 0.5074
accuracy: 54.85%
Load the model with load_model and evaluate -
%tensorflow_version 1.x
# load and evaluate a saved model
from numpy import loadtxt
from keras.models import load_model
# load model
model = load_model('model.h5py')
# summarize model
#model.summary()
# (2) Get Data
import tflearn.datasets.oxflower17 as oxflower17
x, y = oxflower17.load_data(one_hot=True)
# evaluate the model
score = model.evaluate(x, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))
Output - the model's accuracy before saving and after loading should match, and it does here -
accuracy: 54.85%
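(Optional sanity check, not part of the original run: if both scripts are executed in the same session, the two evaluations can be compared directly, with scores taken from before saving and score from after loading.)
import numpy as np

assert np.isclose(scores[1], score[1]), "accuracy changed after save/load"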
Save the model with model.save_weights and load it with model.load_weights -
from keras.models import model_from_json
# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
# later...
# load json and create model
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")
# evaluate loaded model on test data
loaded_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
score = loaded_model.evaluate(x, y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
Output -
Saved model to disk
Loaded model from disk
accuracy: 54.85%
Hope this answers your question. Happy Learning.