import pickle

def main(s):
    with open('pipe.dat', 'rb') as fp:
        pipe = pickle.load(fp)
This Python code raises an error:
{
  "errorMessage": "Can't get attribute 'tokenize' on <module '__main__' from '/var/runtime/bootstrap'>",
  "errorType": "AttributeError",
  "stackTrace": [
    "  File \"/var/lang/lib/python3.8/imp.py\", line 234, in load_module\n    return load_source(name, filename, file)\n",
    "  File \"/var/lang/lib/python3.8/imp.py\", line 171, in load_source\n    module = _load(spec)\n",
    "  File \"<frozen importlib._bootstrap>\", line 702, in _load\n",
    "  File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\n",
    "  File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\n",
    "  File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n",
    "  File \"/var/task/hand.py\", line 2, in <module>\n    import SentimentModeling\n",
    "  File \"/var/task/SentimentModeling.py\", line 72, in <module>\n    event = main(s)\n",
    "  File \"/var/task/SentimentModeling.py\", line 37, in main\n    pipe = pickle.load(fp)\n"
  ]
}
I can't figure out how to fix this error.
I built a Lambda layer containing the modules sklearn, numpy, and joblib (SciPy and NumPy are the versions provided by Lambda). The Lambda runtime and the modules are all Python 3.8.
pipe.dat was created like this:
import pickle
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score, classification_report

# tfidf (a TfidfVectorizer) and logistic (a LogisticRegression) are built earlier
pipe = Pipeline([('vect', tfidf), ('clf', logistic)])
pipe.fit(train_x, train_y)
predict_y = pipe.predict(test_x)
print(accuracy_score(test_y, predict_y) * 100)
print(classification_report(test_y, predict_y))
with open('pipe.dat', 'wb') as fp:
    pickle.dump(pipe, fp)
I ran into the same problem using Python 3.8 and pickle with Lambda.
I changed the runtime to 3.6 and pickle started working.
I also used a .pkl file instead of a .dat file.
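One version-related pitfall worth ruling out: pickle protocol 5 was added in Python 3.8, so a file written on 3.8 with pickle.HIGHEST_PROTOCOL cannot be read on a 3.6 runtime. Pinning an explicit, older protocol at dump time avoids that whole class of mismatch (the vocabulary dict below is a made-up stand-in):

```python
import pickle

obj = {"vocab": {"hello": 0, "world": 1}}

# Protocol 4 is readable on every Python >= 3.4, so a file written this
# way on a 3.8 machine can still be unpickled on a 3.6 Lambda runtime.
blob = pickle.dumps(obj, protocol=4)
roundtrip = pickle.loads(blob)
```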
import pickle

def load_pickle_tfidf_vocab():
    with open("tfidf_vocab.pkl", "rb") as f:
        raw_data_tfidf = f.read()
    tfidf_vocab = pickle.loads(raw_data_tfidf)
    return tfidf_vocab
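A quick round trip for that helper, writing a toy vocabulary first so it has something to read (the dict contents are made up for the sketch):

```python
import pickle

def load_pickle_tfidf_vocab():
    with open("tfidf_vocab.pkl", "rb") as f:
        raw_data_tfidf = f.read()
    return pickle.loads(raw_data_tfidf)

# Write a toy vocabulary so the helper has something to load.
with open("tfidf_vocab.pkl", "wb") as f:
    pickle.dump({"hello": 0, "world": 1}, f)

vocab = load_pickle_tfidf_vocab()
```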
I also struggled to package the dependencies for deployment, so I switched to the Cloud9 IDE: I pip-installed everything into a folder and then deployed automatically from the IDE itself.