Hugging Face TFBertForSequenceClassification always predicts the same label



TL;DR: My model always predicts the same label and I don't know why. Below is my complete fine-tuning code, in the hope that someone can point out where my mistake is.

I'm using Hugging Face's TFBertForSequenceClassification for a sequence classification task, predicting one of 4 labels for sentences in German text.

I use the bert-base-german-cased model because I don't work with lowercased text only (casing carries more meaning in German than in English).
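Just to illustrate that point (this is not part of the training pipeline): the cased tokenizer does not lowercase its input, so a capitalized German noun and its lowercased form are tokenized differently:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')
# the cased tokenizer keeps the original capitalization of each token
print(tokenizer.tokenize('Hier kommen wir ins Spiel'))
print(tokenizer.tokenize('hier kommen wir ins spiel'))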

I get my input from a CSV file that I built from the annotated corpus I was given. Here's an example:

0       Hier kommen wir ins Spiel Die App Cognitive At...
1       Doch wenn Athlet Lebron James jede einzelne Mu...
2       Wie kann ein Gehirn auf Hochleistung getrimmt ...
3       Wie schafft es Warren Buffett knapp 1000 Wörte...
4       Entfalte dein mentales Potenzial und werde ein...
Name: sentence_clean, Length: 3094, dtype: object

These are my labels, from the same CSV file:

0       e_1
1       e_4
2       e_4
3       e_4
4       e_4

The distinct labels are: e_1, e_2, e_3 and e_4

This is the code I use to fine-tune my model:

import pandas as pd
import numpy as np
import os

# read in data
# sentences_df = pd.read_csv('path/file.csv')

X = sentences_df.sentence_clean
Y = sentences_df.classId
# =============================================================================
# One hot encode labels
# =============================================================================
# integer encode labels
from numpy import array
from numpy import argmax
from sklearn.preprocessing import LabelEncoder

label_encoder = LabelEncoder()
Y_integer_encoded = label_encoder.fit_transform(list(Y))

# one hot encode labels
from sklearn.preprocessing import OneHotEncoder
onehot_encoder = OneHotEncoder(sparse=False)
Y_integer_encoded_reshaped = Y_integer_encoded.reshape(len(Y_integer_encoded), 1)
Y_one_hot_encoded = onehot_encoder.fit_transform(Y_integer_encoded_reshaped)
# train test split
from sklearn.model_selection import train_test_split

X_train_raw, X_test_raw, y_train, y_test = train_test_split(X, Y_one_hot_encoded, test_size=0.20, random_state=42)

# =============================================================================
# Prepare datasets for finetuning
# =============================================================================
import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU') 
tf.config.experimental.set_memory_growth(physical_devices[0], True)
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased') # initialize tokenizer

# tokenize train and test sets
max_seq_length = 128
X_train_tokens = tokenizer(list(X_train_raw),
                           truncation=True,
                           padding=True)
X_test_tokens = tokenizer(list(X_test_raw),
                          truncation=True,
                          padding=True)

# create TF datasets as input for BERT model
bert_train_ds = tf.data.Dataset.from_tensor_slices((
    dict(X_train_tokens),
    y_train
))
bert_test_ds = tf.data.Dataset.from_tensor_slices((
    dict(X_test_tokens),
    y_test
))
# =============================================================================
# setup model and finetune
# =============================================================================
# define hyperparams
num_labels = 4
learning_rate = 2e-5
epochs = 3
batch_size = 16
# create BERT model
bert_categorical_partial = TFBertForSequenceClassification.from_pretrained('bert-base-german-cased', num_labels=num_labels)
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
bert_categorical_partial.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
history = bert_categorical_partial.fit(bert_train_ds.shuffle(100).batch(batch_size),
                                       epochs=epochs,
                                       # batch_size=batch_size,
                                       validation_data=bert_test_ds.shuffle(100).batch(batch_size))

This is the output of the fine-tuning:

Epoch 1/3
155/155 [==============================] - 31s 198ms/step - loss: 8.3038 - accuracy: 0.2990 - val_loss: 8.7751 - val_accuracy: 0.2811
Epoch 2/3
155/155 [==============================] - 30s 196ms/step - loss: 8.2451 - accuracy: 0.2913 - val_loss: 8.9314 - val_accuracy: 0.2779
Epoch 3/3
155/155 [==============================] - 30s 196ms/step - loss: 8.3101 - accuracy: 0.2913 - val_loss: 9.0355 - val_accuracy: 0.2746

Finally, I tried to predict the labels for the test set and to validate the results with a confusion matrix:

from sklearn.metrics import confusion_matrix

X_test_tokens_new = {'input_ids': np.asarray(X_test_tokens['input_ids']),
                     'token_type_ids': np.asarray(X_test_tokens['token_type_ids']),
                     'attention_mask': np.asarray(X_test_tokens['attention_mask']),
                     }
pred_raw = bert_categorical_partial.predict(X_test_tokens_new)
pred_proba = tf.nn.softmax(pred_raw).numpy()
pred = pred_proba[0].argmax(axis=1)
y_true = y_test.argmax(axis=1)
cm = confusion_matrix(y_true, pred)

Output of print(cm):

array([[  0,   0,   0,  41],
       [  2,   0,   0, 253],
       [  2,   0,   0, 219],
       [  6,   0,   0,  96]], dtype=int64)

As you can see, my accuracy is really poor, and when I look at the cm I can see that my model almost exclusively predicts a single label. I've tried everything and run the model several times, but I always get the same result. I know the data I'm using isn't great and I'm only training on roughly 2k labeled sentences. But I have a feeling the accuracy should be higher and, more importantly, the model shouldn't predict just one label 98% of the time, right?
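For reference, here is a minimal sketch (reusing pred and y_true from the code above) that shows the same collapse more directly, by comparing the true class counts in the test set with the predicted class counts:

import numpy as np

# counts per class index 0..3 in the test labels vs. in the model's predictions
print('true label counts:     ', np.bincount(y_true, minlength=4))
print('predicted label counts:', np.bincount(pred, minlength=4))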

I've posted everything I used to run the model in the hope that someone can tell me where it goes wrong. Thanks a lot in advance for your help!

You trained for only a few minutes. That is not enough, even for a pretrained BERT.

Try lowering the learning rate so that your accuracy improves after each epoch (for the first 10 epochs), and train for more epochs (until you see the validation accuracy decreasing for 10 epochs). A rough sketch of such a setup is below.
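A minimal sketch of that setup, reusing the variable names from the question; the concrete learning rate, epoch count and patience are assumptions to illustrate the idea, not tuned values:

import tensorflow as tf

# lower learning rate than the 2e-5 used in the question (value is an assumption)
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-6)
bert_categorical_partial.compile(optimizer=optimizer,
                                 loss='categorical_crossentropy',
                                 metrics=['accuracy'])

# stop once validation accuracy has not improved for 10 epochs, keep the best weights
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy',
                                                  patience=10,
                                                  restore_best_weights=True)

history = bert_categorical_partial.fit(bert_train_ds.shuffle(100).batch(batch_size),
                                       validation_data=bert_test_ds.shuffle(100).batch(batch_size),
                                       epochs=50,  # upper bound; early stopping decides when to quit
                                       callbacks=[early_stopping])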
