Wav2Vec2ForCTC not initialized from the model checkpoint; pass the 'sampling_rate' argument



My code mostly works (it is listening to and capturing my voice), but it keeps warning me!! The code that keeps giving me these warnings is below. The warnings are:

1)
UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
warnings.warn(
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

Now listening..

2)
It is strongly recommended to pass the ``sampling_rate`` argument to this function. Failing to do so can result in silent errors that might be hard to debug.

import torch
import speech_recognition as sr
import io
from pydub import AudioSegment
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
tokenizer = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
recognizer = sr.Recognizer()
with sr.Microphone(sample_rate=16000) as source:  # open the default microphone as the audio source
    while True:
        audio = recognizer.listen(source)              # capture a phrase from the microphone
        data = io.BytesIO(audio.get_wav_data())        # wrap the raw WAV bytes in a file-like object
        clip = AudioSegment.from_file(data)
        tensor = torch.FloatTensor(clip.get_array_of_samples())
        inputs = tokenizer(tensor, sample_rate=16000, return_tensors="pt", padding="longest").input_values
        logits = model(inputs).logits
        tokens = torch.argmax(logits, dim=-1)          # greedy CTC decoding
        text = tokenizer.batch_decode(tokens)
        print(str(text).lower())

import soundfile as sf
import torch

# read the audio file and its native sample rate, then pass sampling_rate explicitly
ds, samplerate = sf.read(audio_file)
input_values = speech_tokenizer(ds, return_tensors="pt", sampling_rate=samplerate, padding="longest")

Pass sampling_rate to the speech tokenizer to resolve the sampling_rate warning, and make sure your "sampling_rate" is 16000.
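
Applied to the microphone loop from the question, a minimal sketch of the same fix (reusing the tensor, tokenizer and model names from that snippet, and assuming the microphone is opened at 16 kHz) would look like this:

# inside the while loop, after building `tensor` from the microphone audio
inputs = tokenizer(
    tensor,
    sampling_rate=16000,   # note: sampling_rate, not sample_rate
    return_tensors="pt",
    padding="longest",
).input_values
logits = model(inputs).logits
tokens = torch.argmax(logits, dim=-1)
text = tokenizer.batch_decode(tokens)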
