IndexError: index out of range in self when trying to fine-tune a RoBERTa model after adding special tokens



I am trying to fine-tune a RoBERTa model after adding some special tokens to its tokenizer:

special_tokens_dict = {'additional_special_tokens': ['[Tok1]','[Tok2]']}
tokenizer.add_special_tokens(special_tokens_dict)

When I try to train the model (on CPU), I get this error:

IndexError                                Traceback (most recent call last)
<ipython-input-75-d63f8d3c6c67> in <module>()
50         l = model(b_input_ids, 
51                      attention_mask=b_input_mask,
---> 52                     labels=b_labels)
53         loss,logits = l
54         total_train_loss += l[0].item()
8 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850         # remove once script supports set_grad_enabled
1851         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853 
1854 
IndexError: index out of range in self

P.S. If I comment out the add_special_tokens call, the code works.
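The error happens because add_special_tokens only changes the tokenizer: it now emits the IDs 50265 and 50266 for [Tok1] and [Tok2], but the model's embedding matrix still has only 50265 rows (indices 0 to 50264), so the embedding lookup fails. A minimal sketch of the failure mechanism, using roberta-base's vocabulary size (50265) and hidden size (768):

import torch

# roberta-base's embedding matrix has 50265 rows (indices 0..50264).
emb = torch.nn.Embedding(50265, 768, padding_idx=1)
# A newly added token gets ID 50265, which points past the end:
emb(torch.tensor([50265]))  # IndexError: index out of range in self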

You also need to tell your model that it has to learn vector representations for the two new tokens:

from transformers import RobertaTokenizer, RobertaForQuestionAnswering
t = RobertaTokenizer.from_pretrained('roberta-base')
m = RobertaForQuestionAnswering.from_pretrained('roberta-base')
# roberta-base 'knows' 50265 tokens
print(m.roberta.embeddings.word_embeddings)
special_tokens_dict = {'additional_special_tokens': ['[Tok1]', '[Tok2]']}
t.add_special_tokens(special_tokens_dict)
# We now tell the model that it needs to learn two new tokens:
m.resize_token_embeddings(len(t))
# resize_token_embeddings builds a new embedding layer and (in some
# transformers versions) drops padding_idx, so restore RoBERTa's pad id:
m.roberta.embeddings.word_embeddings.padding_idx = 1
print(m.roberta.embeddings.word_embeddings)

Output:

Embedding(50265, 768, padding_idx=1)
Embedding(50267, 768, padding_idx=1)
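
As a quick sanity check (the sample text below is just an illustration), encoding a string that contains the new tokens and running it through the resized model no longer raises the IndexError:

import torch

ids = t.encode('[Tok1] some text [Tok2]', return_tensors='pt')
print(ids)  # includes 50265 and 50266 for [Tok1] and [Tok2]
with torch.no_grad():
    m(ids)  # forward pass works after resize_token_embeddings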
