For an infinite dataset, is the same data used in every epoch?



In TensorFlow, suppose I have a dataset built from a generator:

dataset = tf.data.Dataset.from_generator(gen...)

This generator yields an infinite stream of non-repeating data (like the digits of an infinite non-recurring decimal).

model.fit(dataset, steps_per_epoch=10000, epochs=5)

Now, over these 5 epochs, is the same data used, i.e. always the first 10000 items from the generator? Rather than items 0-9999 for epoch 1, items 10000-19999 for epoch 2, and so on?

What about the initial_epoch parameter? If I set it to 1, will the model start training from the 10000th item?

model.fit(dataset, steps_per_epoch=10000, epochs=5, initial_epoch=1)

Update: this simple test shows that the dataset is reset on every call to model.fit():

import numpy as np
import tensorflow as tf
from tensorflow.keras import Input, Model

def gen():
    i = 1
    while True:
        yield np.array([[i]]), np.array([[0]])
        i += 1

ds = tf.data.Dataset.from_generator(gen, output_types=(tf.int32, tf.int32)).batch(3)
x = Input(shape=(1, 1))
model = Model(inputs=x, outputs=x)
model.compile('adam', loss=lambda true, pred: tf.reduce_mean(pred))

for i in range(10):
    model.fit(ds, steps_per_epoch=5, epochs=1)

Output:

1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 9ms/step - loss: 8.0000
1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 2ms/step - loss: 8.0000
1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 2ms/step - loss: 8.0000
1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 2ms/step - loss: 8.0000
1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 2ms/step - loss: 8.0000
1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 2ms/step - loss: 8.0000
1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 2ms/step - loss: 8.0000
1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 2ms/step - loss: 8.0000
1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 2ms/step - loss: 8.0000
1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 2ms/step - loss: 8.0000

5 epochs in a single call:

model.fit(ds, steps_per_epoch=5, epochs=5)

Output:

Epoch 1/5
1/5 [=====>........................] - ETA: 0s - loss: 2.0000
5/5 [==============================] - 0s 9ms/step - loss: 8.0000
Epoch 2/5
1/5 [=====>........................] - ETA: 0s - loss: 17.0000
5/5 [==============================] - 0s 2ms/step - loss: 23.0000
Epoch 3/5
1/5 [=====>........................] - ETA: 0s - loss: 32.0000
5/5 [==============================] - 0s 2ms/step - loss: 38.0000
Epoch 4/5
1/5 [=====>........................] - ETA: 0s - loss: 47.0000
5/5 [==============================] - 0s 2ms/step - loss: 53.0000
Epoch 5/5
1/5 [=====>........................] - ETA: 0s - loss: 62.0000
5/5 [==============================] - 0s 2ms/step - loss: 68.0000

No, the data used is different. Keras uses steps_per_epoch to determine the length of each epoch (since the generator has no length), which is how it knows when an epoch ends (and when to run checkpoint callbacks, etc.). Within a single fit() call the dataset is not restarted between epochs, which is why the losses in your second test keep increasing from epoch to epoch.

initial_epoch is just the number displayed for the epoch; it is useful when you want to resume training from a checkpoint (see the fit method documentation), but it has nothing to do with data iteration.

If you pass the same dataset to the model.fit method, it is reset after each function call (thanks to the OP for this information).
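If you do want iteration to continue across separate fit() calls, one possible workaround (a sketch, not an official Keras pattern) is to keep the generator's state outside the generator function, e.g. in a shared itertools.count. from_generator re-invokes the callable each time the dataset is iterated, but the external counter survives the reset, so each pass resumes where the previous one stopped:

```python
import itertools

import numpy as np
import tensorflow as tf

# Shared counter: its state lives outside gen(), so it survives the
# dataset reset that happens on each new iteration (or fit() call).
counter = itertools.count(1)

def gen():
    while True:
        i = next(counter)
        yield np.array([[i]], dtype=np.int32), np.array([[0]], dtype=np.int32)

ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(1, 1), dtype=tf.int32),
        tf.TensorSpec(shape=(1, 1), dtype=tf.int32),
    ),
).batch(3)

# Iterate the dataset twice; the second pass continues past the first
# instead of restarting from 1.
first = [int(x.numpy()[0, 0, 0]) for x, _ in ds.take(2)]
second = [int(x.numpy()[0, 0, 0]) for x, _ in ds.take(2)]
print(first, second)
```

The same idea applies when the dataset is consumed by repeated model.fit() calls instead of take().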
