Capturing a still image while using MLKit and Camera2 for face detection



I am developing a face detection feature using Camera2 and MLKit.

In the performance tips section of the developer guide, they say to capture images in ImageFormat.YUV_420_888 format when using the Camera2 API, which I do.

Then, in the face detector section, they recommend using an image with dimensions of at least 480x360 pixels for real-time face recognition, which is also my case.

OK, let's go! Here is my code, which works well:

private fun initializeCamera() = lifecycleScope.launch(Dispatchers.Main) {
    // Open the selected camera
    cameraDevice = openCamera(cameraManager, getCameraId(), cameraHandler)
    val previewSize = if (isPortrait) {
        Size(RECOMMANDED_CAPTURE_SIZE.width, RECOMMANDED_CAPTURE_SIZE.height)
    } else {
        Size(RECOMMANDED_CAPTURE_SIZE.height, RECOMMANDED_CAPTURE_SIZE.width)
    }
    // Initialize an image reader which will be used to display a preview
    imageReader = ImageReader.newInstance(
        previewSize.width, previewSize.height, ImageFormat.YUV_420_888, IMAGE_BUFFER_SIZE)
    // Retrieve preview's frame and run detector
    imageReader.setOnImageAvailableListener({ reader ->
        lifecycleScope.launch(Dispatchers.Main) {
            val image = reader.acquireNextImage()
            logD { "Image available: ${image.timestamp}" }
            faceDetector.runFaceDetection(image, getRotationCompensation())
            image.close()
        }
    }, imageReaderHandler)
    // Create the list of Surfaces where the camera will output frames
    val targets = listOf(viewfinder.holder.surface, imageReader.surface)
    // Start a capture session using our open camera and list of Surfaces where frames will go
    session = createCaptureSession(cameraDevice, targets, cameraHandler)
    val captureRequest = cameraDevice.createCaptureRequest(
        CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(viewfinder.holder.surface)
        addTarget(imageReader.surface)
    }
    // This will keep sending the capture request as frequently as possible until the
    // session is torn down or session.stopRepeating() is called
    session.setRepeatingRequest(captureRequest.build(), null, cameraHandler)
}

Now, I would like to take a still image... and this is my problem, because ideally I want:

  • A full resolution image, or at least one bigger than 480x360
  • Saved in JPEG format

The Camera2Basic sample demonstrates how to capture an image (the Video and SlowMotion samples are crashing), and the MLKit sample uses such an old Camera API!! Fortunately, I have succeeded in mixing those samples to develop my feature, but I failed to capture a still image with a different resolution.

I think I have to stop the preview session in order to recreate one dedicated to image capture, but I am not sure...

What I did is the following, but it is capturing the image in 480x360:

session.stopRepeating()
// Unset the image reader listener
imageReader.setOnImageAvailableListener(null, null)
// Initialize a new image reader which will be used to capture still photos
// imageReader = ImageReader.newInstance(768, 1024, ImageFormat.JPEG, IMAGE_BUFFER_SIZE)
// Start a new image queue
val imageQueue = ArrayBlockingQueue<Image>(IMAGE_BUFFER_SIZE)
imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireNextImage()
    logD { "[Still] Image available in queue: ${image.timestamp}" }
    if (imageQueue.size >= IMAGE_BUFFER_SIZE - 1) {
        imageQueue.take().close()
    }
    imageQueue.add(image)
}, imageReaderHandler)
// Create the list of Surfaces where the camera will output frames
val targets = listOf(viewfinder.holder.surface, imageReader.surface)
val captureRequest = createStillCaptureRequest(cameraDevice, targets)
session.capture(captureRequest, object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult) {
        super.onCaptureCompleted(session, request, result)
        val resultTimestamp = result.get(CaptureResult.SENSOR_TIMESTAMP)
        logD { "Capture result received: $resultTimestamp" }
        // Set a timeout in case the captured image is dropped from the pipeline
        val exc = TimeoutException("Image dequeuing took too long")
        val timeoutRunnable = Runnable {
            continuation.resumeWithException(exc)
        }
        imageReaderHandler.postDelayed(timeoutRunnable, IMAGE_CAPTURE_TIMEOUT_MILLIS)
        // Loop in the coroutine's context until an image with a matching timestamp comes
        // We need to launch the coroutine context again because the callback is done in
        // the handler provided to the `capture` method, not in our coroutine context
        @Suppress("BlockingMethodInNonBlockingContext")
        lifecycleScope.launch(continuation.context) {
            while (true) {
                // Dequeue images while timestamps don't match
                val image = imageQueue.take()
                if (image.timestamp != resultTimestamp)
                    continue
                logD { "Matching image dequeued: ${image.timestamp}" }
                // Unset the image reader listener
                imageReaderHandler.removeCallbacks(timeoutRunnable)
                imageReader.setOnImageAvailableListener(null, null)
                // Clear the queue of images, if there are any left
                while (imageQueue.size > 0) {
                    imageQueue.take().close()
                }
                // Compute EXIF orientation metadata
                val rotation = getRotationCompensation()
                val mirrored = cameraFacing == CameraCharacteristics.LENS_FACING_FRONT
                val exifOrientation = computeExifOrientation(rotation, mirrored)
                logE { "captured image size (w/h): ${image.width} / ${image.height}" }
                // Build the result and resume progress
                continuation.resume(CombinedCaptureResult(
                    image, result, exifOrientation, imageReader.imageFormat))
                // There is no need to break out of the loop, this coroutine will suspend
            }
        }
    }
}, cameraHandler)
}

If I uncomment the new ImageReader instantiation, I get the following exception:

java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!

Can someone help me?

The IllegalArgumentException:

java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!

...obviously refers to the surface of the newly created imageReader, which was never passed to createCaptureSession().
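
The usual fix is to register every output Surface with the session when the session is created: one YUV reader for detection, one JPEG reader for stills, plus the preview surface. The still request then simply targets the JPEG reader, with no need to tear the session down. Below is a minimal sketch in Java, matching the answer's snippets; identifiers such as previewSurface, stateCallback, session and cameraHandler are placeholders, and the sizes are illustrative:

```java
// Configure ALL output surfaces up front: a CaptureRequest may only target
// surfaces that were passed to createCaptureSession().
ImageReader yuvReader = ImageReader.newInstance(
        480, 360, ImageFormat.YUV_420_888, 3);   // detection/preview frames
ImageReader jpegReader = ImageReader.newInstance(
        1024, 768, ImageFormat.JPEG, 3);         // full-size stills

List<Surface> targets = Arrays.asList(
        previewSurface, yuvReader.getSurface(), jpegReader.getSurface());
cameraDevice.createCaptureSession(targets, stateCallback, cameraHandler);

// Repeating preview request: preview + YUV reader only.
CaptureRequest.Builder preview =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
preview.addTarget(previewSurface);
preview.addTarget(yuvReader.getSurface());
session.setRepeatingRequest(preview.build(), null, cameraHandler);

// Still capture: target the JPEG reader; the repeating request can keep running.
CaptureRequest.Builder still =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
still.addTarget(jpegReader.getSurface());
session.capture(still.build(), null, cameraHandler);
```

Whether three concurrent streams at these sizes are supported depends on the device's hardware level, so the guaranteed stream combinations should be checked before relying on this.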


Meanwhile (using CameraX) this is done differently; see CameraFragment.kt...

Issue #197: Firebase face detection API issue while using the CameraX API;

there might soon be a sample app matching your use case.

ImageReader is sensitive to the choice of format and/or the combination of usage flags. The documentation states that some format combinations may not be supported. With some Android devices (possibly older phone models) you might find that the IllegalArgumentException is not thrown using the JPEG format. But that does not help much, since you want something versatile.

What I have done in the past is to use the ImageFormat.YUV_420_888 format (which will be supported by the hardware and by ImageReader). This format does not contain pre-optimizations that would prevent the application from accessing the image via the internal plane arrays. I notice you are already using it successfully in your initializeCamera() method.

You can then extract the image data from the frame you want:

Image.Plane[] planes = img.getPlanes();
ByteBuffer buffer = planes[0].getBuffer(); // plane buffers are direct: array() would throw
byte[] data = new byte[buffer.remaining()];
buffer.get(data);

Then create the still image via a Bitmap, using JPEG compression, PNG, or whatever encoding you choose:

ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] imageBytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
ByteArrayOutputStream out2 = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 75, out2);
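
Note that YuvImage above expects the data in NV21 layout (the full Y plane followed by interleaved V/U bytes), whereas a YUV_420_888 Image keeps Y, U and V in separate planes, so they must be packed first. A minimal sketch of that packing, assuming pixelStride == 1 and no row padding (real Image planes may have both, in which case their strides must be honored):

```java
// Pack separate Y, U, V plane arrays into NV21: Y plane first, then interleaved V/U.
// Assumes pixelStride == 1 and no row padding; real Image planes may differ.
class Nv21Packer {
    static byte[] toNv21(byte[] y, byte[] u, byte[] v) {
        byte[] nv21 = new byte[y.length + u.length + v.length];
        System.arraycopy(y, 0, nv21, 0, y.length);
        for (int i = 0; i < v.length; i++) {
            nv21[y.length + 2 * i] = v[i];     // NV21 stores V before U
            nv21[y.length + 2 * i + 1] = u[i];
        }
        return nv21;
    }
}
```

The resulting array can be handed to the YuvImage constructor in place of `data` in the snippet above.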
