How to perform cross-validation with DistilBERT



I built a simple model that uses DistilBERT for text classification. The problem is that I can't figure out how to perform cross-validation during training. My code implementation is provided below.

Can anyone help me implement cross-validation while training?

Thanks in advance.

#Required imports (scikit-learn, TensorFlow, Hugging Face transformers)
from sklearn.model_selection import train_test_split
import tensorflow as tf
from transformers import DistilBertTokenizerFast, TFDistilBertForSequenceClassification

tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')

#Split into Train-Test-Validation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, test_size=0.10, random_state=42)

#Encoding text for train data
train_encoded = tokenizer(X_train, truncation=True, padding=True, return_tensors="tf")
train_data = tf.data.Dataset.from_tensor_slices((dict(train_encoded), y_train))

#Encoding text for validation data
val_encoded = tokenizer(X_val, truncation=True, padding=True, return_tensors="tf")
val_data = tf.data.Dataset.from_tensor_slices((dict(val_encoded), y_val))

#Encoding text for testing data (tokenize before building the dataset)
test_encoded = tokenizer(X_test, truncation=True, padding=True, return_tensors="tf")
test_data = tf.data.Dataset.from_tensor_slices((dict(test_encoded), y_test))

#Load DistilBERT model
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', num_labels=2)
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)  #optimizer was used but never defined
model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=['accuracy'])
model.fit(train_data.batch(16), validation_data=val_data.batch(16), epochs=10)  #batch_size is ignored when a tf.data.Dataset is passed

I would suggest using K-fold validation as your cross-validation strategy!

from sklearn.model_selection import KFold
import numpy as np

#KFold yields index arrays, so X and y must support fancy indexing
X, y = np.array(X), np.array(y)

kf = KFold(n_splits=10, random_state=99, shuffle=True)
for train_index, test_index in kf.split(X, y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, test_size=0.10, random_state=42)

    #Encoding text for train data
    train_encoded = tokenizer(list(X_train), truncation=True, padding=True, return_tensors="tf")
    train_data = tf.data.Dataset.from_tensor_slices((dict(train_encoded), y_train))

    #Encoding text for validation data
    val_encoded = tokenizer(list(X_val), truncation=True, padding=True, return_tensors="tf")
    val_data = tf.data.Dataset.from_tensor_slices((dict(val_encoded), y_val))

    #Encoding text for testing data (tokenize before building the dataset)
    test_encoded = tokenizer(list(X_test), truncation=True, padding=True, return_tensors="tf")
    test_data = tf.data.Dataset.from_tensor_slices((dict(test_encoded), y_test))

    #Load a fresh DistilBERT model for each fold
    model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', num_labels=2)
    optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
    model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=['accuracy'])
    model.fit(train_data.batch(16), validation_data=val_data.batch(16), epochs=10)
    #Get your results and perform analysis
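For that last step, one way to collect results is to record each fold's test accuracy and average it across folds. A minimal sketch; fold_accuracies is an illustrative name, not part of the original code:

fold_accuracies = []  #initialise once, before the KFold loop

#...inside the loop, after model.fit(...):
loss, accuracy = model.evaluate(test_data.batch(16), verbose=0)
fold_accuracies.append(accuracy)

#after the loop, summarise performance across folds
print(f"Mean CV accuracy: {np.mean(fold_accuracies):.4f} +/- {np.std(fold_accuracies):.4f}")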

As an alternative, you can wrap your model so it supports the sklearn API, and then enjoy cross-validation and the dozens of other utilities sklearn provides! A sketch of such a wrapper follows below.
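Here is a minimal sketch of what that wrapping might look like: a custom estimator implementing sklearn's fit/predict interface so that cross_val_score can drive it. This is a sketch under assumptions, not a tested recipe; the class name DistilBertSklearn, the hyperparameter defaults, and the compile call (mirrored from the question's code) are all illustrative.

import numpy as np
import tensorflow as tf
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.model_selection import cross_val_score
from transformers import DistilBertTokenizerFast, TFDistilBertForSequenceClassification

class DistilBertSklearn(BaseEstimator, ClassifierMixin):
    #Hypothetical wrapper exposing DistilBERT through sklearn's estimator API
    def __init__(self, epochs=3, batch_size=16, learning_rate=5e-5):
        self.epochs = epochs
        self.batch_size = batch_size
        self.learning_rate = learning_rate

    def fit(self, X, y):
        #Fresh tokenizer and model on every call, so each CV fold trains from scratch
        self.tokenizer_ = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
        self.model_ = TFDistilBertForSequenceClassification.from_pretrained(
            'distilbert-base-uncased', num_labels=2)
        encoded = self.tokenizer_(list(X), truncation=True, padding=True, return_tensors="tf")
        data = tf.data.Dataset.from_tensor_slices((dict(encoded), y))
        self.model_.compile(
            optimizer=tf.keras.optimizers.Adam(learning_rate=self.learning_rate),
            loss=self.model_.compute_loss, metrics=['accuracy'])
        self.model_.fit(data.batch(self.batch_size), epochs=self.epochs)
        return self

    def predict(self, X):
        encoded = self.tokenizer_(list(X), truncation=True, padding=True, return_tensors="tf")
        logits = self.model_.predict(dict(encoded)).logits
        return np.argmax(logits, axis=-1)

#ClassifierMixin supplies an accuracy-based score(), so cross_val_score works out of the box
scores = cross_val_score(DistilBertSklearn(), np.array(X), np.array(y), cv=5)
print(scores.mean(), scores.std())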
