I'm using the just_audio plugin, and its description lists a feature: reading from a byte stream.
Basically, when I play a file (from a URL), I save the file's bytes, so after that step I want to play it locally.
My question is how to play from a byte stream. Can anyone provide an example of how to do this? I need to put it in my playlist, so it has to be a child of ConcatenatingAudioSource.
The only AudioSource I have found uses a Uri:
final _playlist = ConcatenatingAudioSource(
  children: [
    AudioSource.uri(
      Uri.parse(
          "https://s3.amazonaws.com/scifri-episodes/scifri20181123-episode.mp3"),
      tag: AudioMetadata(
        album: "Science Friday",
        title: "ddddd",
        artwork:
            "https://media.wnyc.org/i/1400/1400/l/80/1/ScienceFriday_WNYCStudios_1400.jpg",
      ),
    )
  ],
);
This is how I save the bytes:
import 'dart:io';
import 'dart:math';

import 'package:http/http.dart' as http;
import 'package:path_provider/path_provider.dart';

Future<void> getBytes() async {
  // `url` is the remote audio url (defined elsewhere in my code).
  Uri uri = Uri.parse(url);
  var rng = Random();
  // Get the temporary directory of the device.
  Directory tempDir = await getTemporaryDirectory();
  // Get the temporary path from the temporary directory.
  String tempPath = tempDir.path;
  // Create a new file in the temporary path with a random file name.
  File file = File('$tempPath/${rng.nextInt(100)}.mp3');
  // Call http.get with the url to fetch the response.
  http.Response response = await http.get(uri);
  // Write the bodyBytes received in the response to the file.
  await file.writeAsBytes(response.bodyBytes);
}
Thanks in advance.
So it seems that you need to create your own class that extends StreamAudioSource.
import 'dart:typed_data';

import 'package:just_audio/just_audio.dart';

class MyJABytesSource extends StreamAudioSource {
  final Uint8List _buffer;

  MyJABytesSource(this._buffer) : super(tag: 'MyAudioSource');

  @override
  Future<StreamAudioResponse> request([int? start, int? end]) async {
    // Returning the stream audio response with the parameters.
    return StreamAudioResponse(
      sourceLength: _buffer.length,
      contentLength: (end ?? _buffer.length) - (start ?? 0),
      offset: start ?? 0,
      stream: Stream.fromIterable([_buffer.sublist(start ?? 0, end)]),
      // Adjust the content type to match your data, e.g. 'audio/mpeg' for MP3.
      contentType: 'audio/wav',
    );
  }
}
Then call it like this:
await thePlayer.setAudioSource(MyJABytesSource(bytes));
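Here `bytes` is a Uint8List. As a minimal sketch of one way to obtain it, assuming it comes from the temporary file written by getBytes above (the `playSavedBytes` helper and `savedFile` parameter are illustrative names, not part of the original code):

import 'dart:io';
import 'dart:typed_data';

import 'package:just_audio/just_audio.dart';

// Hypothetical helper: read the previously saved file back into memory
// and feed it to the byte-based source defined above.
Future<void> playSavedBytes(AudioPlayer thePlayer, File savedFile) async {
  final Uint8List bytes = await savedFile.readAsBytes();
  await thePlayer.setAudioSource(MyJABytesSource(bytes));
}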
You can call thePlayer.play() afterwards, but I prefer to do that in a listener:
// Assumes just_audio is imported with a prefix:
// import 'package:just_audio/just_audio.dart' as ja;
thePlayer.processingStateStream.listen((ja.ProcessingState state) {
  if (state == ja.ProcessingState.ready) {
    // I'm using flutter_cache_manager, and it serves all the files
    // under the same name, which is fine, but I think this is why
    // I need to pause before I can play again.
    // (This applies to tracks after the first; the first plays fine.)
    // You probably won't need to pause, but I'm not sure.
    thePlayer.pause();
    thePlayer.play();
  } else if (state == ja.ProcessingState.completed) {
    // What to do when it completes.
  }
});
The nice thing about this is that you don't actually need the await keyword, which may be useful in real situations. I only put it there to show that this is an async function.
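Since the question asked for the source to be a child of ConcatenatingAudioSource: StreamAudioSource is itself an AudioSource, so it should work as a playlist child just like the uri-based one. A minimal sketch under that assumption (variable names like `playlist`, `thePlayer` and `bytes` are illustrative, taken from the snippets above):

final playlist = ConcatenatingAudioSource(
  children: [
    // The uri-based source from the question.
    AudioSource.uri(
      Uri.parse(
          "https://s3.amazonaws.com/scifri-episodes/scifri20181123-episode.mp3"),
    ),
    // The byte-based source defined above; `bytes` is a Uint8List,
    // e.g. obtained from file.readAsBytes().
    MyJABytesSource(bytes),
  ],
);
await thePlayer.setAudioSource(playlist);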