I'm following along with this WWDC talk.
In the talk, the presenter mentions a filter called "CIEdgePreserveUpsampleFilter", which upsamples an image while preserving edges. I'm trying to apply it to my CIImage, and I get either an uninitialized image or a crash.
Here is the code I'm using, including my attempts at applying the filter (which are obviously wrong). I can't find any instructions on how to apply this filter; all I know is that I want its result on my image. I've added comments next to the places where I try to apply it, noting what happens when I do.
func createMask(for depthImage: CIImage, withFocus focus: CGFloat, andScale scale: CGFloat, andSlope slope: CGFloat = 4.0, andWidth width: CGFloat = 0.1) -> CIImage {
    let s1 = slope
    let s2 = -slope
    let filterWidth = 2 / slope + width
    let b1 = -s1 * (focus - filterWidth / 2)
    let b2 = -s2 * (focus + filterWidth / 2)

    let mask0 = depthImage
        .applyingFilter("CIColorMatrix", withInputParameters: [
            "inputRVector": CIVector(x: s1, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: s1, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: s1, w: 0),
            "inputBiasVector": CIVector(x: b1, y: b1, z: b1, w: 0)])
        .applyingFilter("CIColorClamp")
        .applyingFilter("CIEdgePreserveUpsampleFilter") // returns uninitialized image

    let mask1 = depthImage
        .applyingFilter("CIColorMatrix", withInputParameters: [
            "inputRVector": CIVector(x: s2, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: s2, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: s2, w: 0),
            "inputBiasVector": CIVector(x: b2, y: b2, z: b2, w: 0)])
        .applyingFilter("CIColorClamp")

    var combinedMask = mask0.applyingFilter("CIEdgePreserveUpsampleFilter", withInputParameters: ["inputBackgroundImage": mask1]) // complete crash

    if PortraitModel.sharedInstance.filterArea == .front {
        combinedMask = combinedMask.applyingFilter("CIColorInvert")
    }

    let mask = combinedMask.applyingFilter("CIBicubicScaleTransform", withInputParameters: [kCIInputScaleKey: scale])

    return mask
}
The runtime headers and some usage code I've found seem to indicate that CIEdgePreserveUpsampleFilter doesn't take an inputBackgroundImage parameter, but rather inputSmallImage.
See https://gist.github.com/HarshilShah/ca0e18db01ce250fd308ab5acc99a9d0
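If that's the case, a minimal sketch of the call might look like this. It assumes the full-resolution image is the image the filter is applied to and the small mask/depth image is passed in as inputSmallImage; fullResImage and smallMask are placeholder names, and the parameter name comes from the runtime headers and gist above rather than official documentation.

import CoreImage

// Sketch only: upsample a small mask (or depth) image, using the edges of a
// full-resolution image as a guide. The names here are placeholders.
func edgePreserveUpsample(smallMask: CIImage, guidedBy fullResImage: CIImage) -> CIImage {
    // Apply the filter to the full-resolution image and supply the
    // low-resolution mask via "inputSmallImage" (per the runtime headers / gist above).
    return fullResImage.applyingFilter("CIEdgePreserveUpsampleFilter",
                                       withInputParameters: ["inputSmallImage": smallMask])
}

How this slots into createMask depends on which image should guide the upsampling; as written, the function only has the depth image available, so the full-size color image would presumably need to be passed in as well.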