Android Wear Audio Recorder using the ChannelAPI



I'm trying to build an audio recorder app for Android Wear. Right now, I can capture audio on the watch, stream it to the phone, and save it to a file. However, the resulting audio file has blank or cropped sections.

I found answers to questions related to my problem (link1, link2), but they didn't help.


Here is my code:

First, on the watch side, I open a channel using the ChannelApi and successfully send the audio captured on the watch to the smartphone.

//here are the variables values that I used
//44100Hz is currently the only rate that is guaranteed to work on all devices
//but other rates such as 22050, 16000, and 11025 may work on some devices.
private static final int RECORDER_SAMPLE_RATE = 44100; 
private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
int BufferElements2Rec = 1024; 
int BytesPerElement = 2; 
//start the process of recording audio
private void startRecording() {
    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            RECORDER_SAMPLE_RATE, RECORDER_CHANNELS,
            RECORDER_AUDIO_ENCODING, BufferElements2Rec * BytesPerElement);
    recorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(new Runnable() {
        public void run() {
            writeAudioDataToPhone();
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}
private void writeAudioDataToPhone(){
    short sData[] = new short[BufferElements2Rec];
    ChannelApi.OpenChannelResult result = Wearable.ChannelApi.openChannel(googleClient, nodeId, "/mypath").await();
    channel = result.getChannel();
    Channel.GetOutputStreamResult getOutputStreamResult = channel.getOutputStream(googleClient).await();
    OutputStream outputStream = getOutputStreamResult.getOutputStream();
    while (isRecording) {
        // gets the voice output from microphone to byte format
        recorder.read(sData, 0, BufferElements2Rec);
        try {
            byte bData[] = short2byte(sData);
            outputStream.write(bData);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    try {
        outputStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
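The `short2byte` helper called in the loop above isn't shown; a minimal sketch (assuming little-endian byte order, which is what `ENCODING_PCM_16BIT` produces on Android devices) could look like this, where `PcmUtil` is just an illustrative class name:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmUtil {
    // Convert 16-bit PCM samples to little-endian bytes.
    // Each short becomes two bytes, so the output is twice as long.
    public static byte[] short2byte(short[] samples) {
        ByteBuffer buffer = ByteBuffer.allocate(samples.length * 2)
                .order(ByteOrder.LITTLE_ENDIAN);
        for (short s : samples) {
            buffer.putShort(s);
        }
        return buffer.array();
    }
}
```

Note that reading directly into a `byte[]` with `AudioRecord.read(byte[], int, int)` avoids this conversion (and the per-iteration allocation) entirely.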

Then, on the smartphone side, I receive the audio data from the channel and write it to a PCM file.

public void onChannelOpened(Channel channel) {
    if (channel.getPath().equals("/mypath")) {
        Channel.GetInputStreamResult getInputStreamResult = channel.getInputStream(mGoogleApiClient).await();
        inputStream = getInputStreamResult.getInputStream();
        writePCMToFile(inputStream);
        MainActivity.this.runOnUiThread(new Runnable() {
            public void run() {
                Toast.makeText(MainActivity.this, "Audio file received!", Toast.LENGTH_SHORT).show();
            }
        });
    }
}
public void writePCMToFile(InputStream inputStream) {
    OutputStream outputStream = null;
    try {
        // write the inputStream to a FileOutputStream
        outputStream = new FileOutputStream(new File("/sdcard/wearRecord.pcm"));
        int read = 0;
        byte[] bytes = new byte[1024];
        while ((read = inputStream.read(bytes)) != -1) {
            outputStream.write(bytes, 0, read);
        }
        System.out.println("Done writing PCM to file!");
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (inputStream != null) {
            try {
                inputStream.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        if (outputStream != null) {
            try {
                // outputStream.flush();
                outputStream.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
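As an aside, a raw `.pcm` file with no header won't open in most audio players; prepending a standard 44-byte RIFF/WAV header when writing the file makes it playable. A self-contained sketch (the `WavHeader` class and `build` method are illustrative names, not part of any Android API; parameters are assumed to match the recorder settings above):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavHeader {
    // Build the standard 44-byte RIFF/WAV header for raw PCM data.
    // pcmLength is the payload size in bytes, not including the header.
    public static byte[] build(int pcmLength, int sampleRate, int channels, int bitsPerSample) {
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        int blockAlign = channels * bitsPerSample / 8;
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes());
        b.putInt(36 + pcmLength);         // overall RIFF chunk size
        b.put("WAVE".getBytes());
        b.put("fmt ".getBytes());
        b.putInt(16);                     // fmt sub-chunk size for PCM
        b.putShort((short) 1);            // audio format 1 = uncompressed PCM
        b.putShort((short) channels);
        b.putInt(sampleRate);
        b.putInt(byteRate);
        b.putShort((short) blockAlign);
        b.putShort((short) bitsPerSample);
        b.put("data".getBytes());
        b.putInt(pcmLength);              // data sub-chunk size
        return b.array();
    }
}
```

Writing `build(pcmLength, 44100, 1, 16)` before the PCM bytes would match the `CHANNEL_IN_MONO` / `ENCODING_PCM_16BIT` settings used on the watch.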

What am I doing wrong, or what would you suggest, to get a perfect, gapless audio file on the smartphone?

I noticed that in your code you are reading everything into a short[] array and then converting it to a byte[] array for the Channel API to send. Your code also creates a new byte[] array on every iteration of the loop, which will create a lot of work for the garbage collector. In general, you want to avoid allocations inside loops.

I would allocate one byte[] array at the top and let the AudioRecord class store the audio directly into that byte[] array (just make sure you allocate twice as many bytes as shorts), with code like this:

mAudioTemp = new byte[bufferSize];
int result;
while ((result = mAudioRecord.read(mAudioTemp, 0, mAudioTemp.length)) > 0) {
  try {
    mAudioStream.write(mAudioTemp, 0, result);
  } catch (IOException e) {
    Log.e(Const.TAG, "Write to audio channel failed: " + e);
  }
}
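The benefit of this pattern is independent of Android: a single buffer is allocated once and reused on every iteration, and the return value of `read` is respected so only the bytes actually read are forwarded. The same loop, sketched stream-to-stream in plain Java (`CopyLoop` is a hypothetical name, and a `ByteArrayInputStream` stands in for the `AudioRecord`):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyLoop {
    // Copy using a single reusable buffer: one allocation up front,
    // no per-iteration garbage, and partial reads handled correctly.
    public static long copy(InputStream in, OutputStream out, int bufferSize)
            throws IOException {
        byte[] temp = new byte[bufferSize];
        long total = 0;
        int result;
        while ((result = in.read(temp, 0, temp.length)) > 0) {
            out.write(temp, 0, result);
            total += result;
        }
        return total;
    }
}
```

The phone-side `writePCMToFile` loop in the question already follows this shape; it is the watch-side recording loop that allocates per iteration.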

I also tested this with a 1-second audio buffer, using code like the following, and it worked fine. I'm not sure what the minimum buffer size is before problems start to appear:

int bufferSize = Math.max(
  AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT),
  44100 * 2);
