Swift 4: Accessing Variables from DispatchQueue.main (Scope)



I have a CoreML image classification task that takes the "live stream" from the iOS device's [video] camera and runs in the background. Once objects have been recognized and other app logic has run, I want to update some UI labels with the data.

Can someone explain how the DispatchQueue.main.async(execute: { }) call is able to access the variables I've been working with? I assume this is essentially a scoping question?

The code I'm currently using:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    processCameraBuffer(sampleBuffer: sampleBuffer)
}

func processCameraBuffer(sampleBuffer: CMSampleBuffer) {
    let coreMLModel = Inceptionv3()
    if let model = try? VNCoreMLModel(for: coreMLModel.model) {
        let request = VNCoreMLRequest(model: model, completionHandler: { (request, error) in
            if let results = request.results as? [VNClassificationObservation] {
                var counter = 0
                var otherVar = 0
                for item in results[0...9] {
                    if item.identifier.contains("something") {
                        print("some app logic goes on here")
                        otherVar += 10 - counter
                    }
                    counter += 1
                }
                switch otherVar {
                case _ where otherVar >= 10:
                    DispatchQueue.main.async(execute: {
                        let displayVarFormatted = String(format: "%.2f", otherVar / 65 * 100)
                        self.labelPrediction.text = "\(counter): \(displayVarFormatted)%"
                    })
                default:
                    DispatchQueue.main.async(execute: {
                        self.labelPrediction.text = "No result!"
                    })
                }
            }
        })

        if let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            do {
                try handler.perform([request])
            } catch {
                print(error.localizedDescription)
            }
        }
    }
}

It's the self.labelPrediction.text = "" line inside the switch statement that's causing the problem. That value is currently always 0.

This isn't a DispatchQueue problem. Starting from processCameraBuffer(sampleBuffer:), your code updates the UI before the results arrive.

To fix this, you need to use an escaping closure. Your function should look like this:

func processCameraBuffer(sampleBuffer: CMSampleBuffer, completion: @escaping (Int, String) -> Void) {
    // 2.
    let request = VNCoreMLRequest(model: model, completionHandler: { (request, error) in
        // ...counting logic as before...
        DispatchQueue.main.async(execute: {
            // 3. Note: otherVar should be a floating-point value here; with an
            // Int, otherVar / 65 truncates to 0 for any value below 65.
            let displayVarFormatted = String(format: "%.2f", otherVar / 65 * 100)
            completion(counter, displayVarFormatted)
        })
    })
    // ...create the handler and perform the request as before...
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // 1.
    processCameraBuffer(sampleBuffer: sampleBuffer) { counter, displayVarFormatted in
        /*
         This closure is executed from
         completion(counter, displayVarFormatted)
         */
        // 4.
        self.labelPrediction.text = "\(counter): \(displayVarFormatted)%"
    }
}

From here on, variable scope isn't the problem. What you need to handle is the asynchronous task:

  1. The capture happens.
  2. processCameraBuffer is called and the VNCoreMLRequest is performed.
  3. You get the data and execute processCameraBuffer's completion block via completion().
  4. The label is updated.
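The same hand-off can be sketched without Vision or UIKit. Here, classify and labelText are hypothetical stand-ins for the request and the label; the completion fires only after the background work finishes, which is exactly why the label stops reading 0. (In the real app the completion call would be wrapped in DispatchQueue.main.async; this command-line demo calls it from the background queue and uses a semaphore only so it can wait for the result.)

```swift
import Dispatch
import Foundation

// Hypothetical stand-in for the Vision request: it produces its result
// asynchronously on a background queue, like VNCoreMLRequest's handler.
func classify(completion: @escaping (Int, String) -> Void) {
    DispatchQueue.global().async {
        let counter = 3            // pretend classification results
        let otherVar = 26.0        // Double, so the division below doesn't truncate
        let formatted = String(format: "%.2f", otherVar / 65 * 100)
        // In the app: DispatchQueue.main.async { completion(...) } before
        // touching the UI. Here we stay on the background queue so the
        // demo's blocked main thread can't deadlock the main queue.
        completion(counter, formatted)
    }
}

var labelText = ""                 // stand-in for labelPrediction.text
let done = DispatchSemaphore(value: 0)

classify { counter, displayVarFormatted in
    // 4. Runs only after the "classification" has finished.
    labelText = "\(counter): \(displayVarFormatted)%"
    done.signal()
}

done.wait()                        // block only so this demo can print
print(labelText)                   // 3: 40.00%
```

If labelText were read immediately after calling classify, it would still be empty, which mirrors the always-0 label in the question.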
