Google Cloud Speech-to-Text with Python: saving the transcription and timestamps to JSON



I'm using the standard solution for speech-to-text with timestamps (see the code below). I know from this post that parameters such as --format=json can be passed to the gcloud command-line tool.

General question: how do I specify these in google.cloud.speech? I can't seem to find any documentation on Google's site about how to do this with Python.

Specific question: my goal right now is to write out a dictionary-style JSON file containing an entry for every word, plus each word's start and end time. I realize I could write a hacky solution, but if an option for this already exists, that would be preferable.

Code

import argparse
import io

def transcribe_file_with_word_time_offsets(speech_file, language):
    """Transcribe the given audio file synchronously and output the word time
    offsets."""
    print("Start")
    from google.cloud import speech
    from google.cloud.speech import enums
    from google.cloud.speech import types
    print("checking credentials")
    client = speech.SpeechClient(credentials=credentials)
    print("Checked")
    with io.open(speech_file, 'rb') as audio_file:
        content = audio_file.read()

    print("audio file read")
    audio = types.RecognitionAudio(content=content)
    print("config start")
    config = types.RecognitionConfig(
        encoding=enums.RecognitionConfig.AudioEncoding.FLAC,
        language_code=language,
        enable_word_time_offsets=True)
    print("Recognizing:")
    response = client.recognize(config, audio)
    print("Recognized")
    for result in response.results:
        alternative = result.alternatives[0]
        print('Transcript: {}'.format(alternative.transcript))
        for word_info in alternative.words:
            word = word_info.word
            start_time = word_info.start_time
            end_time = word_info.end_time
            print('Word: {}, start_time: {}, end_time: {}'.format(
                word,
                start_time.seconds + start_time.nanos * 1e-9,
                end_time.seconds + end_time.nanos * 1e-9))

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument(dest='path', help='Audio file to be recognized')
    args = parser.parse_args()
    transcribe_file_with_word_time_offsets(args.path, 'en-US')

And here is the hacky solution:

...
transcript_dict = {'Word': [], 'start_time': [], 'end_time': []}
for result in response.results:
    alternative = result.alternatives[0]
    print('Transcript: {}'.format(alternative.transcript))
    for word_info in alternative.words:
        word = word_info.word
        start_time = word_info.start_time
        end_time = word_info.end_time
        transcript_dict['Word'].append(word)
        transcript_dict['start_time'].append(
            start_time.seconds + start_time.nanos * 1e-9)
        transcript_dict['end_time'].append(
            end_time.seconds + end_time.nanos * 1e-9)
print(transcript_dict)
...
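For the hacky route, the resulting dictionary can be written straight to a JSON file with the standard library. A minimal sketch, using a sample transcript_dict in the same shape as the one built above:

```python
import json

# Sample dictionary in the same shape as transcript_dict above.
transcript_dict = {
    'Word': ['how', 'old'],
    'start_time': [0.0, 0.3],
    'end_time': [0.3, 0.6],
}

# Write the dictionary out as pretty-printed JSON.
with open('transcript.json', 'w') as f:
    json.dump(transcript_dict, f, indent=2)

# Reading it back round-trips to the same structure.
with open('transcript.json') as f:
    assert json.load(f) == transcript_dict
```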

The protobuf-based solution in the linked question didn't work for me (as of November 2020), but it led me to this comment, which did work for me with the Speech API:

speech.types.RecognizeResponse.to_json(response)
# alternatively
type(response).to_json(response)

from google.cloud import speech_v1 as speech

def transcribe_gcs(gcs_uri):
    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(uri=gcs_uri)
    config = speech.RecognitionConfig(
        language_code="en-US",
    )
    return client.recognize(config=config, audio=audio)

sample_audio_uri = "gs://cloud-samples-tests/speech/brooklyn.flac"
response = transcribe_gcs(sample_audio_uri)
response_json = type(response).to_json(response)

print(response_json)
{
    "results": [
        {
            "alternatives": [
                {
                    "transcript": "how old is the Brooklyn Bridge",
                    "confidence": 0.98314303,
                    "words": []
                }
            ],
            "channelTag": 0
        }
    ]
}
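The JSON string can then be parsed back into plain Python structures with the standard json module to build the word/timestamp dictionary the question asks for. A sketch, assuming word offsets were requested via enable_word_time_offsets (the sample payload below is hypothetical; in the proto JSON mapping, startTime/endTime are serialized as duration strings such as "0.300s"):

```python
import json

# Hypothetical response JSON, shaped like a serialized RecognizeResponse
# with enable_word_time_offsets=True.
response_json = """
{
  "results": [
    {
      "alternatives": [
        {
          "transcript": "how old is the Brooklyn Bridge",
          "confidence": 0.98,
          "words": [
            {"word": "how", "startTime": "0s", "endTime": "0.300s"},
            {"word": "old", "startTime": "0.300s", "endTime": "0.600s"}
          ]
        }
      ]
    }
  ]
}
"""

def words_from_json(payload):
    """Collect word/start/end entries from a serialized RecognizeResponse."""
    data = json.loads(payload)
    out = {'Word': [], 'start_time': [], 'end_time': []}
    for result in data['results']:
        for info in result['alternatives'][0].get('words', []):
            out['Word'].append(info['word'])
            # Strip the trailing "s" from duration strings like "0.300s".
            out['start_time'].append(float(info['startTime'].rstrip('s')))
            out['end_time'].append(float(info['endTime'].rstrip('s')))
    return out

print(words_from_json(response_json))
```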

You can try something like this:

from google.cloud import speech_v1p1beta1 as speech
import proto
client = speech.SpeechClient()
audio = speech.RecognitionAudio(...)
config = speech.RecognitionConfig(...)
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result()
response_dict = proto.Message.to_dict(response)
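Once you have a plain dict, extracting the word timings and dumping them to JSON is straightforward. A sketch, assuming a response_dict whose durations are {'seconds', 'nanos'} mappings (the exact field layout depends on the proto-plus version, so the sample structure here is an assumption):

```python
import json

# Hypothetical shape of proto.Message.to_dict(response).
response_dict = {
    'results': [{
        'alternatives': [{
            'transcript': 'how old is the Brooklyn Bridge',
            'words': [
                {'word': 'how',
                 'start_time': {'seconds': 0, 'nanos': 0},
                 'end_time': {'seconds': 0, 'nanos': 300000000}},
            ],
        }],
    }],
}

def duration_to_seconds(d):
    """Convert a {'seconds', 'nanos'} mapping to a float number of seconds."""
    return d.get('seconds', 0) + d.get('nanos', 0) * 1e-9

# Flatten the nested response into one entry per word.
words = [
    {'word': w['word'],
     'start': duration_to_seconds(w['start_time']),
     'end': duration_to_seconds(w['end_time'])}
    for result in response_dict['results']
    for w in result['alternatives'][0]['words']
]
print(json.dumps(words, indent=2))
```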
