How to wirelessly stream AVAsset audio from one iOS device to another



I'm building something that streams audio from the iPod library, sends the data over the network or Bluetooth, and plays it back with an audio queue.

Thanks for this question and the code. It helped me a lot.

I have two questions.

  1. What should I send from one device to the other? A CMSampleBufferRef? An AudioBuffer? Its mData? An AudioQueueBuffer? A packet? I don't know.

  2. The app crashed when it finished playing, and I got error (-12733). I just want to know how to handle the error instead of letting it crash. (Check the OSStatus? Stop it when the error occurs?)

    Error: cannot read sample data (-12733)

I'll answer your second question first - don't wait for the app to crash; you can stop pulling audio from the track by checking the number of samples available in the CMSampleBufferRef you're reading. For example (this code is also included in the second half of my answer):

CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
// copyNextSampleBuffer returns NULL once the track is exhausted, so check
// the buffer before asking it for its sample count
if (!sample || (CMSampleBufferGetNumSamples(sample) == 0)) {
  // handle end of audio track here
  return;
}
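As for error handling in general: most Core Audio calls hand you back an OSStatus, and the cleanest habit is to funnel every call through a small checking helper. The CheckError function used throughout the code below is the one popularized by the Learning Core Audio book; if you don't have it handy, a minimal sketch looks like this (the exit(1) is deliberately crude - swap in whatever recovery suits your app):

#include <AudioToolbox/AudioToolbox.h>
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

// Logs the failed operation plus the OSStatus - rendered as a four-char code
// when printable, otherwise as a plain integer - then bails out.
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;

    char errorString[20];
    // see if the status appears to be a four-character code
    *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig(error);
    if (isprint(errorString[1]) && isprint(errorString[2]) &&
        isprint(errorString[3]) && isprint(errorString[4])) {
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    } else {
        // no, format it as a plain integer instead
        sprintf(errorString, "%d", (int)error);
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
    exit(1);
}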

As for your first question, it depends on the kind of audio you're grabbing - it could be in either PCM (uncompressed) or VBR (compressed) format. I won't even bother addressing the PCM part, because sending uncompressed audio data from one phone to another over the network simply isn't smart - it's needlessly expensive and will clog your network bandwidth. So that leaves us with VBR data. For that, you have to send the contents of the AudioBuffer and the AudioStreamPacketDescription you extract from the sample. But then again, it's probably best to explain what I mean in code:

-(void)broadcastSample
{
    [broadcastLock lock];
    CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
    // copyNextSampleBuffer returns NULL at the end of the track, so check
    // the buffer before asking it for its sample count
    if (!sample || (CMSampleBufferGetNumSamples(sample) == 0)) {
        Packet *packet = [Packet packetWithType:PacketTypeEndOfSong];
        packet.sendReliably = NO;
        [self sendPacketToAllClients:packet];
        [sampleBroadcastTimer invalidate];
        if (sample) CFRelease(sample);
        [broadcastLock unlock];  // don't leave the lock held on early return
        return;
    }

    NSLog(@"SERVER: going through sample loop");
    // sanity flag - the data should be ready before we touch it
    Boolean isBufferDataReady = CMSampleBufferDataIsReady(sample);

    CMBlockBufferRef CMBuffer = CMSampleBufferGetDataBuffer(sample);
    AudioBufferList audioBufferList;
    CheckError(CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                   sample,
                   NULL,
                   &audioBufferList,
                   sizeof(audioBufferList),
                   NULL,
                   NULL,
                   kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                   &CMBuffer),
               "could not read sample data");

    const AudioStreamPacketDescription *inPacketDescriptions;
    size_t                              packetDescriptionsSizeOut;
    size_t                              inNumberPackets;
    CheckError(CMSampleBufferGetAudioStreamPacketDescriptionsPtr(sample,
                                                                 &inPacketDescriptions,
                                                                 &packetDescriptionsSizeOut),
               "could not read sample packet descriptions");
    inNumberPackets = packetDescriptionsSizeOut / sizeof(AudioStreamPacketDescription);
    AudioBuffer audioBuffer = audioBufferList.mBuffers[0];

    for (int i = 0; i < inNumberPackets; ++i)
    {
        NSLog(@"going through packets loop");
        SInt64 dataOffset = inPacketDescriptions[i].mStartOffset;
        UInt32 dataSize   = inPacketDescriptions[i].mDataByteSize;

        // ship the network packet we've been filling if this audio packet
        // (plus its description) no longer fits
        size_t packetSpaceRemaining = MAX_PACKET_SIZE - packetBytesFilled - packetDescriptionsBytesFilled;
        size_t packetDescrSpaceRemaining = MAX_PACKET_DESCRIPTIONS_SIZE - packetDescriptionsBytesFilled;
        if ((packetSpaceRemaining < (dataSize + AUDIO_STREAM_PACK_DESC_SIZE)) ||
            (packetDescrSpaceRemaining < AUDIO_STREAM_PACK_DESC_SIZE))
        {
            if (![self encapsulateAndShipPacket:packet packetDescriptions:packetDescriptions packetID:assetOnAirID])
                break;
        }

        // copy this audio packet's bytes, then its re-serialized description
        memcpy((char *)packet + packetBytesFilled,
               (const char *)audioBuffer.mData + dataOffset, dataSize);
        char *serializedDescription =
            [self encapsulatePacketDescription:inPacketDescriptions[i]
                                  mStartOffset:packetBytesFilled];
        memcpy((char *)packetDescriptions + packetDescriptionsBytesFilled,
               serializedDescription, AUDIO_STREAM_PACK_DESC_SIZE);
        free(serializedDescription);  // it was malloc'd for us; avoid leaking it

        packetBytesFilled += dataSize;
        packetDescriptionsBytesFilled += AUDIO_STREAM_PACK_DESC_SIZE;

        // if this is the last packet, then ship it
        if (i == (inNumberPackets - 1)) {
            NSLog(@"woooah! this is the last packet (%d).. so we will ship it!", i);
            if (![self encapsulateAndShipPacket:packet packetDescriptions:packetDescriptions packetID:assetOnAirID])
                break;
        }
    }

    CFRelease(CMBuffer);  // retained on our behalf by ...WithRetainedBlockBuffer
    CFRelease(sample);    // copyNextSampleBuffer hands us a retained buffer
    [broadcastLock unlock];
}
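One assumption the code above makes: readerOutput is an AVAssetReaderTrackOutput created with nil outputSettings, which is what makes copyNextSampleBuffer return the track's native compressed (VBR) packets instead of decoded PCM. In case that part of the setup isn't obvious, here is a minimal sketch (the helper name and the bare-bones error handling are mine, not part of the original code):

#import <AVFoundation/AVFoundation.h>

// Sketch: build a reader that vends the first audio track's packets in their
// stored (compressed) format. The caller keeps strong references to both.
static AVAssetReaderTrackOutput *MakeAudioReaderOutput(AVAsset *asset,
                                                       AVAssetReader **outReader)
{
    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    if (!reader) {
        NSLog(@"could not create asset reader: %@", error);
        return nil;
    }
    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    // nil outputSettings = no decoding; we get the native VBR packets
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:nil];
    [reader addOutput:output];
    [reader startReading];  // copyNextSampleBuffer is valid from this point on
    *outReader = reader;
    return output;
}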

Some of the methods used in the code above are ones you don't need to worry about, such as adding headers to each packet (I was creating my own protocol; you can create your own). See this tutorial for details.
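For reference, this is the wire layout that encapsulateAndShipPacket: below assembles (offsets worked out from the append calls; the 'SNAP' magic and the field order are just this homemade protocol's convention):

// Layout of one outgoing audio packet (byte offsets):
//
//   0..3   'SNAP'                          protocol magic
//   4..7   packetNumber                    running packet counter
//   8..9   PacketTypeAudioBuffer           packet type
//  10..11  0x0000                          filler so later fields stay 4-byte aligned
//  12..15  packetBytesFilled               length of the audio payload
//  16..19  packetDescriptionsBytesFilled   length of the descriptions blob
//  20..    packetID (UTF-8)                must fit within the fixed-size header
//
// the AUDIO_BUFFER_PACKET_HEADER_SIZE header is followed by the audio
// payload, then by the serialized packet descriptions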

- (BOOL)encapsulateAndShipPacket:(void *)source
              packetDescriptions:(void *)packetDescriptions
                        packetID:(NSString *)packetID
{
    // package Packet
    char * headerPacket = (char *)malloc(MAX_PACKET_SIZE + AUDIO_BUFFER_PACKET_HEADER_SIZE + packetDescriptionsBytesFilled);
    appendInt32(headerPacket, 'SNAP', 0);    
    appendInt32(headerPacket,packetNumber, 4);    
    appendInt16(headerPacket,PacketTypeAudioBuffer, 8);   
    // we use this so that we can add int32s later
    UInt16 filler = 0x00;
    appendInt16(headerPacket,filler, 10);    
    appendInt32(headerPacket, packetBytesFilled, 12);
    appendInt32(headerPacket, packetDescriptionsBytesFilled, 16);    
    appendUTF8String(headerPacket, [packetID UTF8String], 20);

    int offset = AUDIO_BUFFER_PACKET_HEADER_SIZE;        
    memcpy((char *)(headerPacket + offset), (char *)source, packetBytesFilled);
    offset += packetBytesFilled;
    memcpy((char *)(headerPacket + offset), (char *)packetDescriptions, packetDescriptionsBytesFilled);
    NSData *completePacket = [NSData dataWithBytes:headerPacket length: AUDIO_BUFFER_PACKET_HEADER_SIZE + packetBytesFilled + packetDescriptionsBytesFilled];        

    NSLog(@"sending packet number %lu to all peers", packetNumber);
    NSError *error;    
    if (![_session sendDataToAllPeers:completePacket withDataMode:GKSendDataReliable error:&error])   {
        NSLog(@"Error sending data to clients: %@", error);
    }   
    // reset the working buffers/counters for the next outgoing packet
    packetBytesFilled = 0;
    packetDescriptionsBytesFilled = 0;
    packetNumber++;
    free(headerPacket);
    return YES;
}
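The appendInt32 / appendInt16 / appendUTF8String helpers above come from the same protocol-building approach as Ray Wenderlich's tutorial (see the notes at the end). If you're writing your own, they boil down to something like this sketch - the only hard requirement is that both ends agree on byte order (big-endian here):

#include <CoreFoundation/CoreFoundation.h>
#include <string.h>

// write a 32-bit value into buffer at the given byte offset, big-endian
static void appendInt32(char *buffer, UInt32 value, int offset)
{
    UInt32 swapped = CFSwapInt32HostToBig(value);
    memcpy(buffer + offset, &swapped, sizeof(swapped));
}

// write a 16-bit value into buffer at the given byte offset, big-endian
static void appendInt16(char *buffer, UInt16 value, int offset)
{
    UInt16 swapped = CFSwapInt16HostToBig(value);
    memcpy(buffer + offset, &swapped, sizeof(swapped));
}

// copy a NUL-terminated UTF-8 string (including the terminator) into buffer
static void appendUTF8String(char *buffer, const char *string, int offset)
{
    memcpy(buffer + offset, string, strlen(string) + 1);
}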
- (char *)encapsulatePacketDescription:(AudioStreamPacketDescription)inPacketDescription
                          mStartOffset:(SInt64)mStartOffset
{
    // mStartOffset is sent as a 32-bit integer rather than the full 64 bits,
    // trimming 4 bytes off each serialized description
    char * packetDescription = (char *)malloc(AUDIO_STREAM_PACK_DESC_SIZE);
    appendInt32(packetDescription, (UInt32)mStartOffset, 0);
    appendInt32(packetDescription, inPacketDescription.mVariableFramesInPacket, 4);
    appendInt32(packetDescription, inPacketDescription.mDataByteSize,8);    
    return packetDescription;
}
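On the receiving end you'll need the inverse of this method to rebuild each AudioStreamPacketDescription before feeding the audio to your queue. A sketch (readInt32 is a hypothetical mirror of appendInt32 that reads a big-endian 32-bit value at the given offset):

// Hypothetical inverse of encapsulatePacketDescription:mStartOffset: -
// rebuilds a description from its 12-byte wire form.
- (AudioStreamPacketDescription)unpackPacketDescription:(const char *)bytes
{
    AudioStreamPacketDescription description;
    description.mStartOffset            = (SInt64)readInt32(bytes, 0);
    description.mVariableFramesInPacket = readInt32(bytes, 4);
    description.mDataByteSize           = readInt32(bytes, 8);
    return description;
}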

Receiving the data:

- (void)receiveData:(NSData *)data fromPeer:(NSString *)peerID inSession:(GKSession *)session context:(void *)context
{
    Packet *packet = [Packet packetWithData:data];
    if (packet == nil)
    {
        NSLog(@"Invalid packet: %@", data);
        return;
    }
    Player *player = [self playerWithPeerID:peerID];
    if (player != nil)
    {
        player.receivedResponse = YES;  // this is the new bit
    } else {
        // assign to the outer variable (don't redeclare it) so the newly
        // created Player is the one handed to serverReceivedPacket: below
        player = [[Player alloc] init];
        player.peerID = peerID;
        [_players setObject:player forKey:player.peerID];
    }
    if (self.isServer)
    {
        [Logger Log:@"SERVER: we just received packet"];   
        [self serverReceivedPacket:packet fromPlayer:player];
    }
    else
        [self clientReceivedPacket:packet];
}
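Once the client has parsed the audio payload and its packet descriptions back out of a PacketTypeAudioBuffer packet, playback comes down to copying them into an AudioQueue buffer and enqueueing it. A rough sketch, assuming audioQueue was created with the stream's AudioStreamBasicDescription and that payload/payloadSize and descriptions/descriptionCount have been parsed from the received packet:

#include <AudioToolbox/AudioToolbox.h>

// enqueue one received packet's worth of VBR audio on the playback queue
AudioQueueBufferRef aqBuffer;
CheckError(AudioQueueAllocateBuffer(audioQueue, payloadSize, &aqBuffer),
           "AudioQueueAllocateBuffer failed");
memcpy(aqBuffer->mAudioData, payload, payloadSize);
aqBuffer->mAudioDataByteSize = payloadSize;
// the packet descriptions are what let the queue play VBR data correctly
CheckError(AudioQueueEnqueueBuffer(audioQueue, aqBuffer,
                                   descriptionCount, descriptions),
           "AudioQueueEnqueueBuffer failed");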

Notes:

  1. There are a lot of networking details I didn't cover here (i.e., in the data-receiving part I used many custom objects without expanding on their definitions), because explaining all of that is beyond the scope of one answer on SO. However, you can follow Ray Wenderlich's excellent tutorial. He takes his time explaining networking principles, and the architecture I used above is taken almost verbatim from him. There is a catch, though (see the next point).

  2. Depending on your project, GKSession may not be suitable (especially if your project is real-time, or if you need more than 2-3 devices connected simultaneously) - it has a lot of limitations. You will have to dig deeper and use Bonjour directly instead. iPhone Cool Projects has a nice quick chapter with a good example of using Bonjour services. It's not as scary as it sounds (and the Apple documentation is kind of overbearing on the subject).

  3. I noticed you use GCD for your multithreading. Again, if you're dealing with real-time audio, you don't want to use advanced frameworks that do the heavy lifting for you (GCD is one of them). For more on this subject, read this excellent article. Also read the prolonged discussion between me and justin in the comments of this answer.

  4. You may want to check out MTAudioProcessingTap, introduced in iOS 6. It could potentially save you some hassle while dealing with AVAssets. I haven't tested this stuff, though; it came out after I had done all my work.

  5. Last but not least, you may want to check out the Learning Core Audio book. It's a widely acknowledged reference on this subject. I remember being just as stuck as you were at the point you asked this question. Core Audio is heavy duty, and it takes time to sink in. SO will only give you pointers; you'll have to take the time to absorb the material yourself, and then things will start to fall into place. Good luck!
