Here is my Python code:

    import json

    import six
    from google.cloud import language


    def sentiment_local_file(text):
        """Detects sentiment in the local document."""
        language_client = language.Client()
        if isinstance(text, six.binary_type):
            text = text.decode('utf-8')
        with open("abhi.txt", 'r') as fr:
            data = json.loads(fr.read())
        print([data['document']['content']])
        document = language_client.document_from_text(data['document']['content'])
        result = document.annotate_text(include_sentiment=True,
                                        include_syntax=False,
                                        include_entities=False)
        return result
I am trying to send a list of strings for analysis in a single POST request, but it gives an error. This is the text file I am reading (its name, "abhi.txt", is referenced in the code above; the code sample is a function):
{
  "document": {
    "type": "PLAIN_TEXT",
    "language": "EN",
    "content": [
      "pretending to be very busy",
      "being totally unconcerned",
      "a very superior attitude",
      "calm, dignified and affectionate disposition"
    ]
  },
  "encodingType": "UTF8"
}
I have read the documentation and many examples, but I still cannot figure it out.
As far as I know, there is no way to send a list of strings to be analyzed. The returned "sentences" is an array because the Natural Language API breaks the text into individual sentences and analyzes each one.
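Since "content" must be a single string, one workaround is to send a separate request for each string in your list. A minimal sketch of building one request body per string (the phrases below are the ones from the question's file; actually POSTing each body to the analyzeSentiment endpoint is left out):

```python
import json

# Strings from the question's "abhi.txt" file.
phrases = [
    "pretending to be very busy",
    "being totally unconcerned",
    "a very superior attitude",
    "calm, dignified and affectionate disposition",
]

def build_request_body(text):
    """Build one analyzeSentiment request body for a single string."""
    return {
        "document": {
            "type": "PLAIN_TEXT",
            "language": "EN",
            "content": text,  # must be a single string, not a list
        },
        "encodingType": "UTF8",
    }

bodies = [build_request_body(p) for p in phrases]
print(json.dumps(bodies[0], indent=2))
```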
Suppose you send the following request:
{
"document": {
"type": "PLAIN_TEXT",
"content": "Terrible, I did not like the last updated."
}
}
The response might be:
{
"language": "en",
"sentences": [
{
"text": {
"content": "Terrible, I did not like the last updated.",
"beginOffset": -1
},
"sentiment": {
"magnitude": 0.9,
"score": -0.9
}
}
]
}
In the response above there is an array called "sentences", but it has only one element. This happens because the text we sent for analysis contains only one sentence. So here is another example. Request:
{
"document": {
"type": "PLAIN_TEXT",
"content": "Terrible, I did not like the last updated. Also, I would like to have access to old version"
}
}
A possible response would be:
{
"language": "en",
"sentences": [
{
"text": {
"content": "Terrible, I did not like the last updated.",
"beginOffset": -1
},
"sentiment": {
"magnitude": 0.9,
"score": -0.9
}
},
{
"text": {
"content": "Also, I would like to have access to old version",
"beginOffset": -1
},
"sentiment": {
"magnitude": 0,
"score": 0
}
}
]
}
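Once you have a response like the one above (parsed into a Python dict), you can read the per-sentence sentiment directly from the "sentences" array. A small sketch using the second sample response from this answer:

```python
import json

# The two-sentence sample response from above.
response_json = """
{
  "language": "en",
  "sentences": [
    {"text": {"content": "Terrible, I did not like the last updated.",
              "beginOffset": -1},
     "sentiment": {"magnitude": 0.9, "score": -0.9}},
    {"text": {"content": "Also, I would like to have access to old version",
              "beginOffset": -1},
     "sentiment": {"magnitude": 0, "score": 0}}
  ]
}
"""

response = json.loads(response_json)

# One (sentence, score) pair per element of the "sentences" array.
scores = [(s["text"]["content"], s["sentiment"]["score"])
          for s in response["sentences"]]
for content, score in scores:
    print(f"{score:+.1f}  {content}")
```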
In this case, the "sentences" array has two elements. This happens because the text sent has two sentences (separated by a period ".").
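Because the API splits the document on sentence boundaries like this, another option is to join your list of strings into one document using ". " as a separator, send a single request, and match each element of "sentences" back to your original strings by position. This only works if the strings themselves contain no periods — a sketch of just the joining step:

```python
phrases = [
    "pretending to be very busy",
    "being totally unconcerned",
    "a very superior attitude",
]

# Join into one document; the API would then be expected to return one
# "sentences" element per phrase, in the same order (assuming no phrase
# contains a period of its own).
document_text = ". ".join(phrases) + "."
print(document_text)
```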
Hope this helps.