Torch tensor and input conflict: "Tensor Object Is Not Callable"



Because of the line with "torch.tensor", I get the error "Tensor object is not callable" as soon as I add "input". Does anyone know how I can solve this?

import torch
from torch.nn import functional as F
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
text0 = "In order to"
text = tokenizer.encode("In order to")
input, past = torch.tensor([text]), None

logits, past = model(input, past = past)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(5)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
for i in range(5):
    f = ('Generated {}: {}'.format(i, best_words[i]))
    print(f)

option = input("Pick a Option:")
z = text0.append(option)
print(z)

Stack trace of the error:

TypeError                                 Traceback (most recent call last)
<ipython-input-2-82e8d88e81c1> in <module>()
25 
26 
---> 27 option = input("Pick a Option:")
28 z = text0.append(option)
29 print(z)
TypeError: 'Tensor' object is not callable

The problem is that you have defined a variable named input, which is then used in place of the built-in input function. Just use a different name for the variable and it will work as expected.
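
To see the shadowing in isolation, here is a minimal sketch (the del statement is just one way to get the built-in back; renaming the variable, as in the corrected code below, is the cleaner fix):

import torch

input = torch.tensor([[1, 2, 3]])   # the name input now points at a Tensor
print(callable(input))              # False -> input("...") would raise TypeError

del input                           # drop the shadowing variable
print(callable(input))              # True  -> the built-in input() works again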

Also, Python strings do not have an append method.
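
A quick sketch of the difference (the token ids below are placeholders, not real GPT-2 ids):

text = [100, 200, 300]        # tokenizer.encode(...) returns a list, so .append works
text.append(400)

text0 = "In order to"
# text0.append("option")      # AttributeError: 'str' object has no attribute 'append'
z = text0 + " option"         # build a new string by concatenation instead
print(z)                      # In order to option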

import torch
from torch.nn import functional as F
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
text0 = "In order to"
text = tokenizer.encode("In order to")
myinput, past = torch.tensor([text]), None

logits, past = model(myinput, past = past)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(5)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
for i in range(5):
    f = ('Generated {}: {}'.format(i, best_words[i]))
    print(f)

option = input("Pick a Option:")
z = text0 + ' ' + option
print(z)
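
Note that the code above assumes an older transformers release, where the model call returns a plain tuple and the cache argument is named past. On recent versions (v4 and later) the equivalent calls look roughly like this:

outputs = model(myinput, past_key_values=past)   # returns a ModelOutput, not a tuple
logits = outputs.logits[0, -1]
past = outputs.past_key_values
probabilities = F.softmax(logits, dim=-1)        # pass dim explicitly to silence the warning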
