How to access the "last_hidden_state" of a code generation model in Hugging Face?



I am trying to obtain the "last_hidden_state" (as described here) of a code generation model. I cannot figure out how to proceed, other than manually downloading each code generation model and checking whether its output keys contain that attribute, using the following code:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot").to(device)

inputs = tokenizer("def hello_world():", return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.keys())
So far I have tried this strategy on CodeParrot and InCoder without success. Is there perhaps a better way to access the values of the hidden layers?

The hidden_states output of CodeGenForCausalLM is already the last_hidden_state of the underlying model. See: link

where hidden_states = transformer_outputs[0] is the output of CodeGenModel (link), and transformer_outputs[0] is the last_hidden_state:

if not return_dict:
    return tuple(v for v in [hidden_states, presents, all_hidden_states, all_self_attentions] if v is not None)

return BaseModelOutputWithPast(
    last_hidden_state=hidden_states,
    past_key_values=presents,
    hidden_states=all_hidden_states,
    attentions=all_self_attentions,
)

Latest update