I have trained a CNN model with input size (None, 256, 256, 1) and saved it with the model.save() method. In this model I used GlobalAveragePooling2D() before the Dense layers, which in turn produces an output of three values. I want to keep the whole model as it is (i.e., the same architecture and weights) but change the number of channels in the input. For example, how do I change the input size from (None, 256, 256, 1) to (None, 256, 256, 2)?
I tried the following, but it raises an AttributeError:
from tensorflow import keras
cnn_model = keras.models.load_model(filepath=model_dir)
cnn_model.input = (None, 256, 256, 2)
AttributeError: Can't set the attribute "input", likely because it conflicts with an existing read-only @property of the object. Please choose a different name.
Do I have to build another model (with the same architecture and weights) from the existing one? If so, how do I do that? Or do I have to train another model starting from this model's weights? (That is undesirable, but I can fall back on it if there is no other way.)
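The usual way to do this is to rebuild the graph on a new Input and copy the weights across, adapting only the first convolution kernel to the new channel count. A minimal sketch, assuming a linear-chain model such as the one described above; `adapt_input_channels` is a hypothetical helper, and averaging/tiling the first kernel (with a rescale so pre-activation magnitudes stay comparable) is one common heuristic, not the only option:

```python
import numpy as np
import tensorflow as tf

def adapt_input_channels(old_model, new_channels):
    """Rebuild a linear-chain CNN with a different input channel count,
    reusing every weight; only the first conv kernel is adapted."""
    old_channels = old_model.input_shape[-1]
    h, w = old_model.input_shape[1:3]

    inputs = tf.keras.Input(shape=(h, w, new_channels))
    x = inputs
    for layer in old_model.layers:
        if isinstance(layer, tf.keras.layers.InputLayer):
            continue
        # Recreate each layer from its config so it builds against the
        # new channel count, then copy the original weights across.
        new_layer = layer.__class__.from_config(layer.get_config())
        x = new_layer(x)
        weights = layer.get_weights()
        if weights and weights[0].shape != new_layer.get_weights()[0].shape:
            # First conv only: average the kernel over the old input
            # channels, tile it to the new count, and rescale so the
            # pre-activation magnitude stays roughly the same.
            kernel = weights[0]  # shape (kh, kw, old_channels, filters)
            mean_k = kernel.mean(axis=2, keepdims=True)
            weights[0] = np.repeat(mean_k, new_channels, axis=2) * (old_channels / new_channels)
        new_layer.set_weights(weights)
    return tf.keras.Model(inputs, x)
```

With `old_channels == 1`, feeding the new model an input whose two channels are copies of the old single channel reproduces the old model's outputs exactly, so the transferred weights are a sensible starting point; a short fine-tuning run on real 2-channel data is still advisable.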
This is easy, but the model should be retrained on the new input values for its predictions to be correct:
- You can create 4-channel input pictures from modified 3-channel ones, but since some colors affect image pattern recognition, verify that it still performs well.
- I recommend keeping the same input format, channels included; some image-handling programs have bugs where they open an image and try to guess whether the body contains only pixel data, or read it without labels and ordering information.
- Use concatenated models with priorities and scores.
Sample concatenated model with priority and score outputs (once you have something that works, keep building on it):
import os
from os.path import exists
import tensorflow as tf
import tensorflow_io as tfio
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
None
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
config = tf.config.experimental.set_memory_growth(physical_devices[0], True)
print(physical_devices)
print(config)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
# Raw strings: otherwise \t in '\train' would be parsed as a tab character
PATH = os.path.join(r'F:\datasets\downloads\Actors\train\Pikaploy', '*.tif')
PATH_2 = os.path.join(r'F:\datasets\downloads\Actors\train\Candidt Kibt', '*.tif')
files = tf.data.Dataset.list_files(PATH)
files_2 = tf.data.Dataset.list_files(PATH_2)
list_file = []
list_file_actual = []
list_label = []
list_label_actual = [ 'Pikaploy', 'Pikaploy', 'Pikaploy', 'Pikaploy', 'Pikaploy', 'Candidt Kibt', 'Candidt Kibt', 'Candidt Kibt', 'Candidt Kibt', 'Candidt Kibt' ]
for file in files.take(5):
    image = tf.io.read_file(file)
    image = tfio.experimental.image.decode_tiff(image, index=0)
    list_file_actual.append(image)
    image = tf.image.resize(image, [32,32], method='nearest')
    list_file.append(image)
    list_label.append(1)

for file in files_2.take(5):
    image = tf.io.read_file(file)
    image = tfio.experimental.image.decode_tiff(image, index=0)
    list_file_actual.append(image)
    image = tf.image.resize(image, [32,32], method='nearest')
    list_file.append(image)
    list_label.append(9)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
DataSet
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
dataset = tf.data.Dataset.from_tensor_slices((
    tf.constant(tf.cast(list_file, dtype=tf.int64), shape=(10, 1, 32, 32, 4), dtype=tf.int64),
    tf.constant(list_label, shape=(10, 1, 1), dtype=tf.int64)))
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Initialize
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
first_input = tf.keras.Input(shape=( 32, 32, 4 ), name="first input" )
second_input = tf.keras.Input(shape=( 32, 32, 4 ), name="second input" )
priority_pred = tf.keras.layers.Normalization(mean=3., variance=2.)(first_input)
priority_pred = tf.keras.layers.Normalization(mean=4., variance=6.)(priority_pred)
priority_pred = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(priority_pred)
priority_pred = tf.keras.layers.MaxPooling2D((2, 2))(priority_pred)
priority_pred = tf.keras.layers.Dense(128, activation='relu')(priority_pred)
priority_pred = tf.keras.layers.Reshape((128, 225))(priority_pred)
priority_pred = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96, return_sequences=True, return_state=False))(priority_pred)
priority_pred = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96))(priority_pred)
priority_pred = tf.keras.layers.Flatten()(priority_pred)
priority_pred = tf.keras.layers.Dense(192, activation='relu')(priority_pred)
priority_pred = tf.keras.layers.Dense(1, name="priority")(priority_pred)
department_pred = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(second_input)
department_pred = tf.keras.layers.MaxPooling2D((2, 2))(department_pred)
department_pred = tf.keras.layers.Dense(128, activation='relu')(department_pred)
department_pred = tf.keras.layers.Reshape((128, 225))(department_pred)
department_pred = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96, return_sequences=True, return_state=False))(department_pred)
department_pred = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96))(department_pred)
department_pred = tf.keras.layers.Flatten()(department_pred)
department_pred = tf.keras.layers.Dense(192, activation='relu')(department_pred)
department_pred = tf.keras.layers.Dense(1, name="department")(department_pred)
# Instantiate an end-to-end model predicting both priority and department
model = tf.keras.Model(
inputs=[first_input, second_input],
outputs=[priority_pred, department_pred],
)
model.summary()
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Optimizer
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
optimizer = tf.keras.optimizers.Nadam(
learning_rate=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-07,
name='Nadam'
)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Loss Fn
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
# Defined for reference but unused below; compile() sets per-output losses instead.
lossfn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False,
    reduction=tf.keras.losses.Reduction.AUTO,
    name='sparse_categorical_crossentropy'
)
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Compile
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model.compile(optimizer=optimizer,
    loss={
        "priority": tf.keras.losses.CategoricalCrossentropy(from_logits=True),
        "department": tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    },
    loss_weights={"priority": 1.0, "department": 0.2},
    metrics=['accuracy'])
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Training
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
priority_targets = tf.constant(list_label, shape=(10, 1, 1), dtype=tf.int64)
dept_targets = tf.constant(list_label, shape=(10, 1, 1), dtype=tf.int64)
history = model.fit(
    {"first input": tf.constant(tf.cast(list_file, dtype=tf.int64), shape=(10, 32, 32, 4), dtype=tf.int64),
     "second input": tf.constant(tf.cast(list_file, dtype=tf.int64), shape=(10, 32, 32, 4), dtype=tf.int64)},
    {"priority": priority_targets, "department": dept_targets},
    batch_size=100, epochs=50)
Output: the matrices are modified and the correct display values are returned; the model uses different priorities, but the 2 outputs are concatenated.
Epoch 49/50
1/1 [==============================] - ETA: 0s - loss: 0.0000e+00 - priority_loss: 0.0000e+00 - department_loss: 0.0000e1/1 [==============================] - 0s 54ms/step - loss: 0.0000e+00 - priority_loss: 0.0000e+00 - department_loss: 0.0000e+00 - priority_accuracy: 0.5000 - department_accuracy: 0.5000
Epoch 50/50
1/1 [==============================] - ETA: 0s - loss: 0.0000e+00 - priority_loss: 0.0000e+00 - department_loss: 0.0000e1/1 [==============================] - 0s 54ms/step - loss: 0.0000e+00 - priority_loss: 0.0000e+00 - department_loss: 0.0000e+00 - priority_accuracy: 0.5000 - department_accuracy: 0.5000
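Note that the zero losses in this log are not a success signal. Both heads end in Dense(1), and CategoricalCrossentropy(from_logits=True) applies a softmax over the class axis; softmax over a single logit is always 1, so the cross-entropy is identically 0 and no gradient ever flows. A quick NumPy check of that arithmetic:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# One logit per sample, mirroring the Dense(1, name="priority") head above.
logits = np.array([[2.7], [-1.3]])
probs = softmax(logits)     # softmax over a single class is always 1
loss = -np.log(probs)       # categorical cross-entropy per sample
print(probs.ravel())        # every probability is exactly 1.0
print(np.abs(loss).ravel()) # every loss term is exactly 0.0
```

With integer labels such as 1 and 9, a non-degenerate setup would use SparseCategoricalCrossentropy with a head of at least 10 units (or remap the labels to 0/1 and use Dense(2)); the unused `lossfn` defined earlier points in that direction.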