Hi, I want to build an app that does video calling between iOS devices. I have looked at OpenTok and iDoubs, but I want to build it from scratch myself. I searched a lot but couldn't find any solution, so I tried to implement it the way I imagine video chat works. So far I have done the following (by following a streaming tutorial):
- Create an AVCaptureSession and grab the CMSampleBufferRef data in the capture callback:
```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (captureOutput == _captureOutput) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the image buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // Get information about the image
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        // Create a CGImageRef from the CVImageBufferRef
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                        bytesPerRow, colorSpace,
                                                        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);

        // Release the components we no longer need
        CGContextRelease(newContext);
        CGColorSpaceRelease(colorSpace);

        previewImage = [UIImage imageWithCGImage:newImage
                                           scale:1.0
                                     orientation:UIImageOrientationRight];
        CGImageRelease(newImage);

        [uploadImageView performSelectorOnMainThread:@selector(setImage:)
                                          withObject:previewImage
                                       waitUntilDone:YES];

        // Unlock the image buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        [pool drain];

        [self sendMixedData:@"video1"];
    } else if (captureOutput == _audioOutput) {
        dataA = [[NSMutableData alloc] init];

        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL,
            &currentInputAudioBufferList, sizeof(currentInputAudioBufferList),
            NULL, NULL, 0, &blockBuffer);

        for (int y = 0; y < currentInputAudioBufferList.mNumberBuffers; y++) {
            AudioBuffer audioBuffer = currentInputAudioBufferList.mBuffers[y];
            Float32 *frame = (Float32 *)audioBuffer.mData;
            [dataA appendBytes:frame length:audioBuffer.mDataByteSize];
        }
        [self sendMixedData:@"audio"];
    }
}
```
- The sendMixedData method then writes these video/audio bytes to an NSStream:
```objc
// "video1" tag followed by the JPEG bytes
NSData *data = UIImageJPEGRepresentation([self scaleAndRotateImage:previewImage], 1.0);
const uint8_t *videoTag = (const uint8_t *)[@"video1" UTF8String];
[_outStream write:videoTag maxLength:strlen((char *)videoTag)];
[_outStream write:(const uint8_t *)[data bytes] maxLength:[data length]];

// "audio" tag followed by the PCM bytes
const uint8_t *audioTag = (const uint8_t *)[@"audio" UTF8String];
[_outStream write:audioTag maxLength:strlen((char *)audioTag)];
[_outStream write:(const uint8_t *)[dataA bytes] maxLength:[dataA length]];
```
- The bytes are then received in the NSStream delegate method on the receiving device.
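One thing worth noting about this setup: TCP (and an NSStream on top of it) is a byte stream, not a message stream, so a bare "video1" or "audio" tag followed by raw bytes gives the receiver no reliable way to tell where one message ends and the next begins. A common fix is length-prefixed framing: every message carries a 1-byte type tag and a 4-byte big-endian payload length. Here is a minimal sketch in plain C (`frame_encode`/`frame_decode` and the tag values are illustrative names, not part of your code or any existing API):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative type tags for the two payload kinds. */
enum { FRAME_VIDEO = 0x01, FRAME_AUDIO = 0x02 };

/* Writes a 5-byte header (type + big-endian length) followed by the
 * payload into out, which must hold at least 5 + len bytes.
 * Returns the total number of bytes written. */
size_t frame_encode(uint8_t type, const uint8_t *payload, uint32_t len,
                    uint8_t *out) {
    out[0] = type;
    out[1] = (uint8_t)(len >> 24);
    out[2] = (uint8_t)(len >> 16);
    out[3] = (uint8_t)(len >> 8);
    out[4] = (uint8_t)(len);
    memcpy(out + 5, payload, len);
    return 5 + (size_t)len;
}

/* Parses one frame from buf (buflen bytes). On success returns the
 * number of bytes consumed and fills *type/*payload/*len; returns 0
 * if the buffer does not yet hold a complete frame. */
size_t frame_decode(const uint8_t *buf, size_t buflen,
                    uint8_t *type, const uint8_t **payload, uint32_t *len) {
    if (buflen < 5) return 0;
    uint32_t n = ((uint32_t)buf[1] << 24) | ((uint32_t)buf[2] << 16) |
                 ((uint32_t)buf[3] << 8)  |  (uint32_t)buf[4];
    if (buflen < 5 + (size_t)n) return 0;
    *type = buf[0];
    *payload = buf + 5;
    *len = n;
    return 5 + (size_t)n;
}
```

On the sender you would call something like `frame_encode` once per JPEG frame or audio chunk and hand the result to `[_outStream write:maxLength:]`; the receiver can then call `frame_decode` repeatedly and always knows exactly how many bytes belong to the current message.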
The problem is that I don't know whether this is how video chat is actually supposed to work.
I also haven't managed to display the received bytes as video.
I tried sending the "audio" and "video1" tag strings along with the bytes so the receiver can tell whether a message is video or audio, and I also tried without the extra strings. The images are received and displayed correctly, but the audio is badly distorted.
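The audio distortion is consistent with message boundaries being lost: NSStreamEventHasBytesAvailable can fire with any number of bytes, so a PCM chunk may arrive split across several callbacks or glued to JPEG data, and any misalignment of raw Float32 samples turns the audio into noise. Assuming the sender prefixed each message with a 1-byte type tag and a 4-byte big-endian length (an assumption; the code above does not do this yet), the receiver could accumulate incoming bytes and only act on complete messages. A sketch in plain C (`accumulator`, `acc_feed`, `acc_next` are hypothetical names):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Accumulates raw stream bytes and extracts complete messages.
 * Assumed wire format: [1-byte type][4-byte big-endian length][payload]. */
typedef struct {
    uint8_t *buf;
    size_t   used, cap;
} accumulator;

/* Appends a newly received chunk to the accumulator. */
void acc_feed(accumulator *a, const uint8_t *chunk, size_t n) {
    if (a->used + n > a->cap) {
        a->cap = (a->used + n) * 2;
        a->buf = realloc(a->buf, a->cap);
    }
    memcpy(a->buf + a->used, chunk, n);
    a->used += n;
}

/* If a complete message is buffered, copies its payload into *payload
 * (malloc'd, caller frees), sets *type/*len, removes it from the buffer,
 * and returns 1. Returns 0 when more bytes are needed. */
int acc_next(accumulator *a, uint8_t *type, uint8_t **payload, uint32_t *len) {
    if (a->used < 5) return 0;
    uint32_t n = ((uint32_t)a->buf[1] << 24) | ((uint32_t)a->buf[2] << 16) |
                 ((uint32_t)a->buf[3] << 8)  |  (uint32_t)a->buf[4];
    if (a->used < 5 + (size_t)n) return 0;
    *type = a->buf[0];
    *payload = malloc(n);
    memcpy(*payload, a->buf + 5, n);
    *len = n;
    memmove(a->buf, a->buf + 5 + n, a->used - 5 - n);
    a->used -= 5 + n;
    return 1;
}
```

In the stream delegate you would call `acc_feed` with whatever `read:maxLength:` returned, then loop on `acc_next`: a video-tagged payload goes to `UIImage imageWithData:`, an audio-tagged one is enqueued for playback.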
Please tell me whether this is the right way to build a video chat app. If it is, what should I change to make it usable? For example: should I send the audio and video data together rather than separately as in my example? Here I'm using a simple Bonjour tutorial, but how would I achieve the same thing with a real server?
Please point me in the right direction; I'm stuck here.
Thanks. (Sorry for the formatting; I tried but couldn't get it right.)
Video streaming apps use a video codec such as VP8 or H.264, which will beat your JPEG-encoded frames by a wide margin.
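The difference is easy to quantify: a codec only encodes the changes between frames, while motion-JPEG pays the full cost of every frame. A back-of-the-envelope comparison (the JPEG size and codec bitrate used below are rough assumptions, not measurements):

```c
/* Bandwidth (kbit/s) of sending every frame as a standalone JPEG. */
double mjpeg_kbps(double jpeg_bytes_per_frame, double fps) {
    return jpeg_bytes_per_frame * fps * 8.0 / 1000.0;
}

/* How many times more bandwidth motion-JPEG needs than a codec
 * running at codec_kbps. */
double codec_advantage(double jpeg_bytes_per_frame, double fps,
                       double codec_kbps) {
    return mjpeg_kbps(jpeg_bytes_per_frame, fps) / codec_kbps;
}
```

With assumed numbers of ~60 KB per 480p JPEG at 15 fps versus a 500 kbit/s H.264 stream, motion-JPEG needs about 7.2 Mbit/s, roughly 14x the bandwidth, before you even account for the audio.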
You should be able to display the NSData you receive by doing:

```objc
UIImage *image = [UIImage imageWithData:data];
```