I'm creating an app that needs to apply filters to an image in real time. Converting the UIImage to a CIImage and applying the filters are both extremely fast operations, but converting the created CIImage back to a CGImageRef and displaying the image takes a very long time (about 1/5 of a second, which is a lot if real-time editing is the goal).
The image is around 2500 x 2500 pixels, which is most likely part of the problem.
Currently, I'm using:
let image: CIImage //CIImage with applied filters
let eagl = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
let context = CIContext(EAGLContext: eagl, options: [kCIContextWorkingColorSpace : NSNull()])
//this line takes too long for real-time processing
let cg: CGImage = context.createCGImage(image, fromRect: image.extent)
I've considered using CIContext.drawImage():
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
However, I can't find any solid documentation on how to use it, or on whether it would be any faster.
Is there a faster way to display a CIImage on screen (either in a UIImageView, or directly on a CALayer)? I'd like to avoid reducing the image quality too much, since that might be noticeable to the user.
It may be worth considering Metal and displaying the image with an MTKView.
You'll need a Metal device, which can be created with MTLCreateSystemDefaultDevice(). It's used to create both a command queue and a Core Image context. Both of these objects are persistent and quite expensive to instantiate, so ideally they should only be created once:
lazy var commandQueue: MTLCommandQueue =
{
    return self.device!.newCommandQueue()
}()

lazy var ciContext: CIContext =
{
    return CIContext(MTLDevice: self.device!)
}()
You'll also need a color space:
let colorSpace = CGColorSpaceCreateDeviceRGB()!
When rendering the CIImage, you'll need to create a short-lived command buffer:
let commandBuffer = commandQueue.commandBuffer()
You'll want to render the CIImage (let's call it image) into the MTKView's currentDrawable?.texture. If that's bound to targetTexture, the rendering syntax is:
ciContext.render(image,
    toMTLTexture: targetTexture,
    commandBuffer: commandBuffer,
    bounds: image.extent,
    colorSpace: colorSpace)
commandBuffer.presentDrawable(currentDrawable!)
commandBuffer.commit()
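Putting the pieces together, here's a minimal sketch of what an MTKView subclass along these lines might look like (the class name MetalImageView is my own illustration; note that framebufferOnly must be set to false so Core Image can write into the drawable's texture):
import UIKit
import MetalKit

class MetalImageView: MTKView
{
    let colorSpace = CGColorSpaceCreateDeviceRGB()!

    lazy var commandQueue: MTLCommandQueue =
    {
        return self.device!.newCommandQueue()
    }()

    lazy var ciContext: CIContext =
    {
        return CIContext(MTLDevice: self.device!)
    }()

    //the image to display; redraw whenever it changes
    var image: CIImage?
    {
        didSet
        {
            setNeedsDisplay()
        }
    }

    override init(frame frameRect: CGRect, device: MTLDevice?)
    {
        super.init(frame: frameRect, device: device ?? MTLCreateSystemDefaultDevice())
        //Core Image needs to write directly into the drawable's texture
        framebufferOnly = false
        //draw only when the image changes, rather than continuously
        paused = true
        enableSetNeedsDisplay = true
    }

    required init(coder: NSCoder)
    {
        fatalError("init(coder:) has not been implemented")
    }

    override func drawRect(rect: CGRect)
    {
        guard let image = image,
            targetTexture = currentDrawable?.texture else
        {
            return
        }

        let commandBuffer = commandQueue.commandBuffer()

        ciContext.render(image,
            toMTLTexture: targetTexture,
            commandBuffer: commandBuffer,
            bounds: image.extent,
            colorSpace: colorSpace)

        commandBuffer.presentDrawable(currentDrawable!)
        commandBuffer.commit()
    }
}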
I have a working version of this here.
Hope that helps!
Simon
I ended up using the context.drawImage(image, inRect: destinationRect, fromRect: image.extent) method. Here's the image view class I created:
import Foundation
//GLKit must be linked and imported
import GLKit

class CIImageView: GLKView {
    var image: CIImage?
    var ciContext: CIContext?

    //initialize with the frame, and CIImage to be displayed
    //(or nil, if the image will be set using .setRenderImage)
    init(frame: CGRect, image: CIImage?) {
        super.init(frame: frame, context: EAGLContext(API: EAGLRenderingAPI.OpenGLES2))
        self.image = image
        //Set the current context to the EAGLContext created in the super.init call
        EAGLContext.setCurrentContext(self.context)
        //create a CIContext from the EAGLContext
        self.ciContext = CIContext(EAGLContext: self.context)
    }

    //for usage in Storyboards
    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        self.context = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
        EAGLContext.setCurrentContext(self.context)
        self.ciContext = CIContext(EAGLContext: self.context)
    }

    //set the current image to image
    func setRenderImage(image: CIImage) {
        self.image = image
        //tell the processor that the view needs to be redrawn using drawRect()
        self.setNeedsDisplay()
    }

    //called automatically when the view is drawn
    override func drawRect(rect: CGRect) {
        //unwrap the current CIImage
        if let image = self.image {
            //multiply the frame by the screen's scale (ratio of points : pixels),
            //because the following .drawImage() call uses pixels, not points
            let scale = UIScreen.mainScreen().scale
            let newFrame = CGRectMake(rect.minX, rect.minY, rect.width * scale, rect.height * scale)
            //draw the image
            self.ciContext?.drawImage(
                image,
                inRect: newFrame,
                fromRect: image.extent
            )
        }
    }
}
Then, to use it, simply:
let myFrame: CGRect //frame in self.view where the image should be displayed
let myImage: CIImage //CIImage with applied filters
let imageView: CIImageView = CIImageView(frame: myFrame, image: myImage)
self.view.addSubview(imageView)
It also helps to resize the UIImage down to the screen size before converting it to a CIImage; with high-quality images this speeds things up a lot. Just make sure to use the full-size image when actually saving.
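For illustration, here's roughly what that downscaling step could look like (the helper name resizedToScreenBounds and its exact scaling policy are my own sketch, not part of the original answer):
//A sketch of downscaling a UIImage to roughly the screen's bounds before filtering.
//Keep the original image around for full-size export when saving.
func resizedToScreenBounds(image: UIImage) -> UIImage
{
    let bounds = UIScreen.mainScreen().bounds
    //only scale down, never up
    let ratio = min(1, bounds.width / image.size.width, bounds.height / image.size.height)
    let newSize = CGSize(width: image.size.width * ratio, height: image.size.height * ratio)

    UIGraphicsBeginImageContextWithOptions(newSize, false, UIScreen.mainScreen().scale)
    image.drawInRect(CGRect(origin: CGPointZero, size: newSize))
    let resized = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    return resized
}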
And that's it! Then, to update the image in the view:
imageView.setRenderImage(newCIImage)
//note that imageView.image = newCIImage won't work because
//the view won't be redrawn
You can use a GLKView and render with context.drawImage():
let glView = GLKView(frame: superview.bounds, context: EAGLContext(API: .OpenGLES2))
let context = CIContext(EAGLContext: glView.context)
After you've got your processed image, render it:
glView.bindDrawable()
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
glView.display()
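Note that drawImage() works in pixel coordinates, and destinationRect isn't defined in this snippet. As a rough sketch, assuming the image should fill the entire drawable, it could be computed from the view's drawable size:
//destinationRect in pixels, assuming the image should fill the whole drawable
let destinationRect = CGRect(x: 0, y: 0,
    width: CGFloat(glView.drawableWidth),
    height: CGFloat(glView.drawableHeight))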
That's a fairly large image, so that's definitely part of the problem. I'd recommend looking at GPUImage for running single-image filters; you can skip Core Image entirely.
let inputImage: UIImage = //... some image
let stillImageSource = GPUImagePicture(image: inputImage)
let filter = GPUImageSepiaFilter()
stillImageSource.addTarget(filter)
filter.useNextFrameForImageCapture()
stillImageSource.processImage()
//retrieve the filtered result as a UIImage
let filteredImage = filter.imageFromCurrentFramebuffer()