I am new to machine learning. I have a 1-D signal in the first column of each file, and its corresponding frequency, mean_amplitude, and time saved in the second column. These are input-output pairs for supervised training: for a given 1-D test signal, I need the network to output the frequency, mean_amplitude, and time.
-0.000000000000000000e+00 5.80000
-0.000000000000000000e+00 3.11111
-0.000000000000000000e+00 -1.3666
-0.000000000000000000e+00
-1.366125990000000065e-14
-1.032400010000000034e-13
-6.034000879999999677e-13
-5.719921059999999811e-13
-1.361178959999999947e-12
-9.374413750000000466e-11
-1.666704970000000006e-10
-1.149504050000000062e-09
5.453276159999999863e-10
1.457022949999999906e-09
-5.355599959999999815e-09
-4.683606839999999697e-09
-2.849577019999999957e-09
-1.108899989999999921e-08
-2.849577019999999957e-09
-4.683606839999999697e-09
-5.355599959999999815e-09
1.457022949999999906e-09
5.453276159999999863e-10
-1.149504050000000062e-09
-1.666704970000000006e-10
-9.374413750000000466e-11
-1.361178959999999947e-12
-5.719921059999999811e-13
-6.034000879999999677e-13
-1.032400010000000034e-13
-0.000000000000000000e+00
-0.000000000000000000e+00
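For clarity, one way to parse such a two-column file (the full first column is the signal; the first three rows of the second column are frequency, mean_amplitude, and time) could look like this, sketched here on a miniature inline stand-in for one file rather than an actual path:

```python
import numpy as np
from io import StringIO

# Miniature stand-in for one data file: column 1 is the signal,
# column 2 holds frequency, mean_amplitude, time in the first three rows.
sample = StringIO(
    "-0.0 5.80000\n"
    "-0.0 3.11111\n"
    "-0.0 -1.3666\n"
    "-1.366125990000000065e-14\n"
    "-1.032400010000000034e-13\n"
)

rows = [line.split() for line in sample]
signal = np.array([float(r[0]) for r in rows])       # full first column: the 1-D signal
targets = np.array([float(r[1]) for r in rows[:3]])  # frequency, mean_amplitude, time
print(signal.shape, targets.shape)  # (5,) (3,)
```

Stacking `signal` and `targets` across all 1000 files would then give the `x` and `y` arrays for training.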
In the same format, I have 1000 input-output pairs saved in the attached directory. I want to train an autoencoder-style
network so that it predicts the frequency, mean_amplitude, and time for a new test signal.
I need some advice on how to feed such input-output pairs into an autoencoder.
I found the following code in a Keras tutorial, but I don't know how to adapt it to this kind of data. I hope a machine-learning expert can share some ideas.
from tensorflow.keras import layers
from tensorflow.keras.models import Model

input = layers.Input(shape=(28, 28, 1))
# Encoder
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(input)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
# Decoder
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)
# Autoencoder
autoencoder = Model(input, x)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()
autoencoder.fit(
    x=train_data,
    y=train_data,
    epochs=50,
    batch_size=128,
    shuffle=True,
    validation_data=(test_data, test_data),
)
Below is a simple working model using dummy data:
import tensorflow as tf
signal_input = tf.keras.layers.Input(shape=(1,))
x = tf.keras.layers.Dense(16, activation='relu')(signal_input)
x = tf.keras.layers.Dense(8, activation='relu')(x)
output = tf.keras.layers.Dense(3, activation='linear')(x)
model = tf.keras.models.Model(inputs=signal_input, outputs=output)
model.compile(optimizer='adam', loss='MSE')

signals = tf.random.normal((1000, 1))  # 1000 signals with 1 value each
labels = tf.random.normal((1000, 3))   # 1000 labels: frequency, mean_amplitude, and time
model.fit(x=signals, y=labels, epochs=5, batch_size=8)
Output:
Epoch 1/5
32/32 [==============================] - 0s 1ms/step - loss: 1.0087
Epoch 2/5
32/32 [==============================] - 0s 1ms/step - loss: 0.9856
Epoch 3/5
32/32 [==============================] - 0s 1ms/step - loss: 0.9777
Epoch 4/5
32/32 [==============================] - 0s 1ms/step - loss: 0.9747
Epoch 5/5
32/32 [==============================] - 0s 1ms/step - loss: 0.9733
<keras.callbacks.History at 0x7f4d0909f7d0>
This should give you an idea of how to implement a model for your data.
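Once trained, the same model can be queried for new test signals with `model.predict`. A minimal self-contained sketch (dummy data again, and only one epoch so it runs quickly):

```python
import tensorflow as tf

# Same toy regression model as above, trained briefly on dummy data.
signal_input = tf.keras.layers.Input(shape=(1,))
x = tf.keras.layers.Dense(16, activation='relu')(signal_input)
x = tf.keras.layers.Dense(8, activation='relu')(x)
output = tf.keras.layers.Dense(3, activation='linear')(x)
model = tf.keras.models.Model(inputs=signal_input, outputs=output)
model.compile(optimizer='adam', loss='MSE')
model.fit(tf.random.normal((100, 1)), tf.random.normal((100, 3)),
          epochs=1, verbose=0)

# Predict frequency, mean_amplitude and time for new test signals.
new_signals = tf.random.normal((5, 1))
predictions = model.predict(new_signals, verbose=0)
print(predictions.shape)  # (5, 3): one (frequency, mean_amplitude, time) triple per signal
```

Note that with real signals of length N (your sample file has ~30 values per signal) the input shape would be `(N,)` instead of `(1,)`; the rest of the model stays the same.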