I used to receive output in the "riff-24khz-16bit-mono-pcm" format from the Azure Text-to-Speech API. After some technical changes, the audio we now get back is audio-16khz-128kbitrate-mono-mp3.
Before this change, we played the audio from audioText like this:
String stepTitle = soundData; // Base64-encoded audioText output from Azure
byte[] bytes = stepTitle.getBytes();
Base64.Decoder decoder = Base64.getDecoder();
byte[] decoded = decoder.decode(bytes);
InputStream input = new ByteArrayInputStream(decoded);
AudioInputStream audioInput = null;
try {
    ///////// This line is throwing the exception ////////////////////////
    audioInput = AudioSystem.getAudioInputStream(input);
} catch (UnsupportedAudioFileException | IOException e) {
    e.printStackTrace();
}
AudioFormat audioFormats = new AudioFormat(
        AudioFormat.Encoding.PCM_SIGNED,
        24000,   // sample rate
        16,      // bits per sample
        1,       // channels (mono)
        1 * 2,   // frame size in bytes
        24000,   // frame rate
        false);  // little-endian
As noted above, I get an UnsupportedAudioFileException when obtaining the audio input stream.
I have tried using mp3plugin.jar, but I could not get it to work. Please help!
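Before wiring in a new decoder, it can help to confirm what the decoded bytes actually contain, since `AudioSystem.getAudioInputStream` without an MP3 SPI only recognizes formats like RIFF/WAV. A small sniffing sketch (the helper class and names are mine, not part of any API) distinguishes a RIFF/PCM payload from an MP3 one by its leading bytes:

```java
import java.util.Base64;

public class AudioSniff {
    // Rough container guess from the first bytes of decoded audio:
    // "RIFF" -> WAV (riff-*-pcm), "ID3" tag or 0xFF 0xEx frame sync -> MP3.
    static String sniff(byte[] d) {
        if (d.length >= 4 && d[0] == 'R' && d[1] == 'I' && d[2] == 'F' && d[3] == 'F') return "wav";
        if (d.length >= 3 && d[0] == 'I' && d[1] == 'D' && d[2] == '3') return "mp3";
        if (d.length >= 2 && (d[0] & 0xFF) == 0xFF && (d[1] & 0xE0) == 0xE0) return "mp3";
        return "unknown";
    }

    public static void main(String[] args) {
        // "RIFF" followed by zero padding, Base64-encoded as in the question.
        byte[] decoded = Base64.getDecoder().decode("UklGRgAAAAAA");
        System.out.println(sniff(decoded)); // prints "wav"
    }
}
```

If this prints "mp3" for your payload, the UnsupportedAudioFileException is expected: the stock javax.sound implementation simply does not know the format.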
Following the official TTS REST API documentation, I first exchange my subscription key for an access token, then use that token to convert text to speech. The service returns a byte array of audio data, which I play with JLayer.
Here is my sample:
public class TestTTS {
    static String key = "f1a0ea***********fa5e35";
    static String tokenEndpoint = "https://southeastasia.api.cognitive.microsoft.com/sts/v1.0/issueToken";
    static String serviceEndpoint = "https://southeastasia.tts.speech.microsoft.com/cognitiveservices/v1";

    // Exchange the subscription key for a short-lived access token.
    public static String GetToken() {
        Map<String, String> headers = new HashMap<>();
        headers.put("Ocp-Apim-Subscription-Key", key);
        headers.put("Content-Type", "application/x-www-form-urlencoded");
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpPost post = new HttpPost(new URI(tokenEndpoint));
            for (String name : headers.keySet()) {
                post.setHeader(name, headers.get(name));
            }
            post.setEntity(new StringEntity(""));
            CloseableHttpResponse result = client.execute(post);
            return EntityUtils.toString(result.getEntity());
        } catch (URISyntaxException | IOException e) {
            e.printStackTrace();
        }
        return "";
    }
    // Convert SSML text to speech; returns the raw audio bytes.
    public static byte[] GetAudio(String token) {
        String content = "<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Female' name='en-US-JessaRUS'>Microsoft Speech Service Text-to-Speech API</voice></speak>";
        Map<String, String> headers = new HashMap<>();
        headers.put("Authorization", "Bearer " + token);
        headers.put("Content-Type", "application/ssml+xml");
        headers.put("X-Microsoft-OutputFormat", "audio-16khz-64kbitrate-mono-mp3");
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpPost post = new HttpPost(new URI(serviceEndpoint));
            for (String name : headers.keySet()) {
                post.setHeader(name, headers.get(name));
            }
            post.setEntity(new StringEntity(content));
            CloseableHttpResponse result = client.execute(post);
            return EntityUtils.toByteArray(result.getEntity());
        } catch (URISyntaxException | IOException e) {
            e.printStackTrace();
        }
        return new byte[0];
    }
    public static void main(String[] args) {
        byte[] bytes = GetAudio(GetToken());
        System.out.println(bytes.length);
        try (InputStream is = new ByteArrayInputStream(bytes)) {
            Player player = new Player(is); // javazoom.jl.player.Player (JLayer)
            player.play();
        } catch (IOException | JavaLayerException e) {
            e.printStackTrace();
        }
    }
}
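One detail worth noting about the token exchange: the token issued by the STS endpoint is only valid for about ten minutes, so calling GetToken() for every synthesis request is wasteful. A minimal cache sketch (the class, the 9-minute TTL, and the dummy fetcher are my own additions, not part of the answer above):

```java
import java.util.function.Supplier;

public class TokenCache {
    private final Supplier<String> fetcher;
    private final long ttlMillis;
    private String token;
    private long fetchedAt;

    public TokenCache(Supplier<String> fetcher, long ttlMillis) {
        this.fetcher = fetcher;
        this.ttlMillis = ttlMillis;
    }

    // Return the cached token, fetching a fresh one only when the TTL has elapsed.
    public synchronized String get() {
        long now = System.currentTimeMillis();
        if (token == null || now - fetchedAt > ttlMillis) {
            token = fetcher.get(); // e.g. TestTTS::GetToken
            fetchedAt = now;
        }
        return token;
    }

    public static void main(String[] args) {
        // Hypothetical fetcher standing in for the real GetToken() call.
        TokenCache cache = new TokenCache(() -> "dummy-token", 9 * 60 * 1000);
        System.out.println(cache.get()); // prints "dummy-token"
    }
}
```

Refreshing a minute before the documented expiry avoids ever sending a stale token to the synthesis endpoint.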
For JMF with the mp3plugin, I save the audio data to a temporary file first and then play it:
public static void PlayWithJMF() {
    // Register the JMF mp3plugin decoder so the Player can handle MP3.
    Format input1 = new AudioFormat(AudioFormat.MPEGLAYER3);
    Format input2 = new AudioFormat(AudioFormat.MPEG);
    Format output = new AudioFormat(AudioFormat.LINEAR);
    PlugInManager.addPlugIn(
            "com.sun.media.codec.audio.mp3.JavaDecoder",
            new Format[]{input1, input2},
            new Format[]{output},
            PlugInManager.CODEC
    );
    try {
        byte[] bytes = GetAudio(GetToken());
        System.out.println(bytes.length);
        try (FileOutputStream fos = new FileOutputStream("tmp.mp3")) {
            fos.write(bytes);
            fos.flush();
        }
        File file = new File("tmp.mp3");
        javax.media.Player player = Manager.createPlayer(new MediaLocator(file.toURI().toURL()));
        player.start();
        while (true) {
            // 500 == Controller.Prefetched: the player drops back to this state when playback ends.
            if (player.getState() == Controller.Prefetched) {
                player.close();
                break;
            }
            Thread.sleep(500);
        }
    } catch (IOException | NoPlayerException | InterruptedException e) {
        e.printStackTrace();
    }
}
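A side note on the hard-coded tmp.mp3 above: writing into the working directory can collide across runs and leaves the file behind. A small helper based on File.createTempFile (a sketch of mine, not part of the original answer) keeps the scratch file self-cleaning:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class TempAudio {
    // Write audio bytes to a throwaway .mp3 file and return it.
    // The file is deleted automatically when the JVM exits.
    static File writeTemp(byte[] bytes) throws IOException {
        File f = File.createTempFile("tts-", ".mp3");
        f.deleteOnExit();
        try (FileOutputStream fos = new FileOutputStream(f)) {
            fos.write(bytes);
        }
        return f;
    }

    public static void main(String[] args) throws IOException {
        File f = writeTemp(new byte[]{1, 2, 3});
        System.out.println(f.length()); // prints 3
    }
}
```

In PlayWithJMF this would replace the fixed "tmp.mp3" path, with the returned File handed to the MediaLocator unchanged.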