How does the Tokenizer in TensorFlow handle out-of-vocabulary tokens if I don't provide an oov_token?


from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer()
tokenizer.fit_on_texts(X_train)
encoded_docs = tokenizer.texts_to_sequences(X_train)
padded_sequence = pad_sequences(encoded_docs, maxlen=60)
test_tweets = tokenizer.texts_to_sequences(X_test)
test_padded_sequence = pad_sequences(test_tweets, maxlen=60)

This code runs without any error even though I did not provide the oov_token parameter. I expected test_tweets = tokenizer.texts_to_sequences(X_test) to fail, since X_test contains words that were never seen during fitting.

How does TensorFlow handle out-of-vocabulary words at test time when no oov_token is provided?

By default, OOV words are silently ignored/dropped when oov_token is None:

import tensorflow as tf

tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(['hello world'])
print(tokenizer.word_index)  # {'hello': 1, 'world': 2}
sequences = tokenizer.texts_to_sequences(['hello friends'])
print(sequences)  # [[1]] -- 'friends' is not in the vocabulary, so it is dropped
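For contrast, here is a minimal sketch of the same example with an oov_token supplied (the token string '<OOV>' is an arbitrary choice). The Tokenizer reserves index 1 for the OOV token, and unknown words are mapped to that index instead of being dropped:

```python
import tensorflow as tf

# With oov_token set, the OOV token is assigned index 1,
# and all known words shift up by one.
tokenizer = tf.keras.preprocessing.text.Tokenizer(oov_token='<OOV>')
tokenizer.fit_on_texts(['hello world'])
print(tokenizer.word_index)  # {'<OOV>': 1, 'hello': 2, 'world': 3}

# 'friends' is unseen, so it maps to the OOV index (1)
# rather than disappearing from the sequence.
sequences = tokenizer.texts_to_sequences(['hello friends'])
print(sequences)  # [[2, 1]]
```

This matters for sequence models: without an oov_token, test sequences silently shrink, which can shift word positions relative to training; with it, sequence lengths are preserved.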
