Can we get valid results using only integer encoding?

I am learning deep learning and working with the IMDB dataset, which comes integer-encoded: each review is a sequence of word indices.

Some examples train directly on these integer-encoded sequences without converting them to one-hot vectors.

Is that enough to get valid results? If so, what is the advantage of one-hot encoding?

Here is my code:

from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence

# load the dataset but only keep the top n words, zero the rest
# (this replaces the undefined train_result/test_result variables
# from the original snippet with the standard Keras loader)
top_words = 5000
max_words = 500
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)

# pad every review to a fixed length of max_words integer word ids
X_train = sequence.pad_sequences(X_train, maxlen=max_words)
X_test = sequence.pad_sequences(X_test, maxlen=max_words)
print(X_train[:1])

# create the model: embedding -> 1D convolution -> pooling -> dense classifier
model = Sequential()
model.add(Embedding(top_words, 32, input_length=max_words))
model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

# Fit the model
hist = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=128, verbose=1)

# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

This is what print(X_train[:1]) shows after padding:

[[   0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    0    0    0    0    0    0    0
0    0    0    0    0    0    0    3  284 2452  756    1 3075  194
54 3717   10  757  169 2216    5    1 1906  843   54   52 2732 3403
5 1819    3   34    4   54 1819    5 2532   42  668   23   54  709
52    7    9    2   80  172 3258  265   33    1 1467    4  683    4
11   21  988    1    3  110  631    2    4  321    3 3040  294  284
478   33    1   33   54 4349   33   54  213    2   86   54  516  420
754    1   84    2    8  526  473   63   20  184   20  184   20  184
1138   52    3   23    1 1468  101    3 1850    4   61    6  777   20
237  185   52 3846    5   54  149    7   34    4    1   18   54 4802
929    2    5   98    8   13   17    9    1  993  117  101    3  165
41  653  781    3  286  923 2882    7  210    3  181    5 1743    3
120  814 1630 1517    3 2317 4606 4425    9   43  686    5  744 1018
910  223  136 3782 1585  775 1391 3041  155    3  292    4 2975    2
136  135  120  864   24  869 3655  245  421    1 1803   10    1  120
2    1  261   78 1671   19   43 1288   16    1 1036    5  380    1
1744  121   10    1   84  252   55   51  670    2   24  200   51 1709
1 1256 1469    2    1  217    5 2453  423   79  929   36    9    3
1106    4 2754 4526   14   29   24 2393   74   34 4049   17   42   72
9  365    1   69   41 1804  572   41  559   76   92    2  153  112
11   15  835 1423  136    1   59   15   67    1 1320    5  441    2
733   17    1  688  890    5   26  421   55   23  208    2   31 2070
23    1 2998  136    6  413   44   33   40    7  119    9  668    4
22 3213   40    7  119  151  359    5   25  185]]

And this is the training output:

Epoch 1/20
10103/10103 [==============================] - 5s 523us/step - loss: 0.5812 - acc: 0.6589 - val_loss: 0.1229 - val_acc: 0.9551
Epoch 2/20
10103/10103 [==============================] - 5s 478us/step - loss: 0.1299 - acc: 0.9485 - val_loss: 0.0693 - val_acc: 0.9663
Epoch 3/20
10103/10103 [==============================] - 5s 488us/step - loss: 0.0544 - acc: 0.9824 - val_loss: 0.0589 - val_acc: 0.9775
Epoch 4/20
10103/10103 [==============================] - 5s 488us/step - loss: 0.0258 - acc: 0.9923 - val_loss: 0.0371 - val_acc: 0.9850
Epoch 5/20
10103/10103 [==============================] - 5s 483us/step - loss: 0.0120 - acc: 0.9976 - val_loss: 0.0528 - val_acc: 0.9813
Epoch 6/20
10103/10103 [==============================] - 5s 483us/step - loss: 0.0058 - acc: 0.9991 - val_loss: 0.0464 - val_acc: 0.9850
Epoch 7/20
10103/10103 [==============================] - 5s 482us/step - loss: 0.0032 - acc: 0.9994 - val_loss: 0.0707 - val_acc: 0.9738
Epoch 8/20
10103/10103 [==============================] - 5s 485us/step - loss: 0.0022 - acc: 0.9997 - val_loss: 0.0471 - val_acc: 0.9925
Epoch 9/20
10103/10103 [==============================] - 5s 482us/step - loss: 0.0011 - acc: 0.9998 - val_loss: 0.0698 - val_acc: 0.9775
Epoch 10/20
10103/10103 [==============================] - 5s 481us/step - loss: 6.8280e-04 - acc: 1.0000 - val_loss: 0.0728 - val_acc: 0.9775
Epoch 11/20
10103/10103 [==============================] - 5s 483us/step - loss: 4.8174e-04 - acc: 1.0000 - val_loss: 0.0873 - val_acc: 0.9738
Epoch 12/20
10103/10103 [==============================] - 5s 477us/step - loss: 3.4037e-04 - acc: 1.0000 - val_loss: 0.0674 - val_acc: 0.9813
Epoch 13/20
10103/10103 [==============================] - 5s 478us/step - loss: 2.6164e-04 - acc: 1.0000 - val_loss: 0.0847 - val_acc: 0.9775
Epoch 14/20
10103/10103 [==============================] - 5s 478us/step - loss: 2.0453e-04 - acc: 1.0000 - val_loss: 0.0812 - val_acc: 0.9775
Epoch 15/20
10103/10103 [==============================] - 5s 473us/step - loss: 1.6034e-04 - acc: 1.0000 - val_loss: 0.0831 - val_acc: 0.9775
Epoch 16/20
10103/10103 [==============================] - 5s 469us/step - loss: 1.3443e-04 - acc: 1.0000 - val_loss: 0.0874 - val_acc: 0.9775
Epoch 17/20
10103/10103 [==============================] - 5s 467us/step - loss: 1.1035e-04 - acc: 1.0000 - val_loss: 0.0891 - val_acc: 0.9775
Epoch 18/20
10103/10103 [==============================] - 5s 471us/step - loss: 9.3257e-05 - acc: 1.0000 - val_loss: 0.0956 - val_acc: 0.9775
Epoch 19/20
10103/10103 [==============================] - 5s 465us/step - loss: 7.9740e-05 - acc: 1.0000 - val_loss: 0.0965 - val_acc: 0.9775
Epoch 20/20
10103/10103 [==============================] - 5s 467us/step - loss: 6.7700e-05 - acc: 1.0000 - val_loss: 0.0919 - val_acc: 0.9775
Accuracy: 97.75%

Integer encoding implies an ordinal relationship among the labels, which classification labels usually do not have, so one-hot encoding is used when building classification models. Essentially, one-hot encoding maps discrete values into Euclidean space, one axis per class.

For example, suppose a dataset has three classes: apple, orange, banana. With the integer encoding {apple => 0, orange => 1, banana => 2}, the model could read an ordering into the labels, but you can never meaningfully say that "orange" is greater than "apple". One-hot encoding removes that implied ordering, as the sketch below shows.
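A minimal sketch of the difference using keras.utils.to_categorical (the fruit labels here are illustrative, not from your data):

from keras.utils import to_categorical

# integer encoding: {apple => 0, orange => 1, banana => 2}
labels = [0, 1, 2, 1]

# one-hot encoding gives each class its own axis,
# so no ordering between classes is implied
one_hot_labels = to_categorical(labels, num_classes=3)
print(one_hot_labels)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]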

In your case, the IMDB review dataset is a binary classification problem with two labels: negative and positive. You can treat the prediction as a continuous score: the closer the predicted value is to 1, the more positive the review, and vice versa, so the labels themselves need no one-hot encoding.
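As for the input words, your model already handles this implicitly: looking up an integer word id in an Embedding layer is mathematically the same as multiplying a one-hot vector by the embedding weight matrix, so feeding integer sequences loses nothing. A small numpy sketch (the sizes match your model; the random matrix stands in for the learned weights):

import numpy as np

vocab_size, embed_dim = 5000, 32           # top_words and embedding size from the model above
W = np.random.rand(vocab_size, embed_dim)  # stand-in for the learned embedding matrix

word_id = 284                              # any integer-encoded word
one_hot = np.zeros(vocab_size)
one_hot[word_id] = 1.0

# multiplying the one-hot vector by W ...
via_one_hot = one_hot.dot(W)
# ... selects exactly the same row as a direct integer lookup,
# which is what the Embedding layer does
via_lookup = W[word_id]

assert np.allclose(via_one_hot, via_lookup)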

https://www.quora.com/What-are-good-ways-to-handle-discrete-and-continuous-inputs-together

See also: Why can one-hot encoding improve machine learning performance?
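For completeness, the one-hot examples you may have seen for IMDB usually turn each review into a single multi-hot vector and feed it to Dense layers. A rough sketch of that alternative pipeline (the function and variable names here are illustrative, not from your code):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

def multi_hot(sequences, dim=5000):
    # one row per review; set column j to 1 if word id j occurs in the review
    out = np.zeros((len(sequences), dim))
    for i, seq in enumerate(sequences):
        out[i, seq] = 1.0
    return out

dense_model = Sequential()
dense_model.add(Dense(16, activation='relu', input_shape=(5000,)))
dense_model.add(Dense(1, activation='sigmoid'))
dense_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

Note the trade-off: the multi-hot vector discards word order entirely, while your Embedding + Conv1D model keeps it, which is one reason integer sequences work well here.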
