CameraX ML Kit gives error "java.lang.IllegalStateException: Image is already closed"



I want to build a real-time image classifier using Google ML Kit and the CameraX API. I am using CameraX's Preview and Analysis use cases. It throws the following error:

2020-07-27 01:17:18.145 11009-11009/com.example.camerax_automl W/System.err: Caused by: java.lang.IllegalStateException: Image is already closed
2020-07-27 01:17:18.145 11009-11009/com.example.camerax_automl W/System.err:     at android.media.Image.throwISEIfImageIsInvalid(Image.java:68)
2020-07-27 01:17:18.145 11009-11009/com.example.camerax_automl W/System.err:     at android.media.ImageReader$SurfaceImage$SurfacePlane.getBuffer(ImageReader.java:832)
2020-07-27 01:17:18.145 11009-11009/com.example.camerax_automl W/System.err:     at com.google.mlkit.vision.common.internal.ImageConvertUtils.zza(com.google.mlkit:vision-common@@16.0.0:139)
2020-07-27 01:17:18.145 11009-11009/com.example.camerax_automl W/System.err:     at com.google.mlkit.vision.common.internal.ImageConvertUtils.convertToUpRightBitmap(com.google.mlkit:vision-common@@16.0.0:89)
2020-07-27 01:17:18.145 11009-11009/com.example.camerax_automl W/System.err:     at com.google.mlkit.vision.common.internal.ImageConvertUtils.getUpRightBitmap(com.google.mlkit:vision-common@@16.0.0:10)
2020-07-27 01:17:18.145 11009-11009/com.example.camerax_automl W/System.err:     at com.google.mlkit.vision.label.automl.internal.zzo.zza(com.google.mlkit:image-labeling-automl@@16.0.0:16)
2020-07-27 01:17:18.145 11009-11009/com.example.camerax_automl W/System.err:     at com.google.mlkit.vision.label.automl.internal.zzo.run(com.google.mlkit:image-labeling-automl@@16.0.0:60)
2020-07-27 01:17:18.145 11009-11009/com.example.camerax_automl W/System.err:     at com.google.mlkit.vision.common.internal.MobileVisionBase.zza(com.google.mlkit:vision-common@@16.0.0:23)
2020-07-27 01:17:18.146 11009-11009/com.example.camerax_automl W/System.err:     at com.google.mlkit.vision.common.internal.zzb.call(com.google.mlkit:vision-common@@16.0.0)
2020-07-27 01:17:18.146 11009-11009/com.example.camerax_automl W/System.err:     at com.google.mlkit.common.sdkinternal.ModelResource.zza(com.google.mlkit:common@@16.0.0:26)
2020-07-27 01:17:18.146 11009-11009/com.example.camerax_automl W/System.err:    ... 9 more

Here I am using a TextureView for the preview and a TextView to display the classification result. I have also placed the .tflite model in the assets folder and added the required dependencies (a sketch of the dependency block follows).
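For reference, a minimal dependency setup for this combination might look like the following. The ML Kit artifact and version (image-labeling-automl 16.0.0) come from the stack trace above; the CameraX coordinates and alpha version are assumptions based on the pre-beta API surface (PreviewConfig, ImageAnalysisConfig) this code uses:

// app/build.gradle -- versions other than ML Kit 16.0.0 (from the stack trace) are assumptions
dependencies {
    // CameraX, pre-beta API surface (PreviewConfig / ImageAnalysisConfig)
    implementation "androidx.camera:camera-core:1.0.0-alpha06"
    implementation "androidx.camera:camera-camera2:1.0.0-alpha06"

    // ML Kit AutoML image labeling (standalone SDK)
    implementation "com.google.mlkit:image-labeling-automl:16.0.0"
}

My code is shown below: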

public class MainActivity extends AppCompatActivity {
    private int REQUEST_CODE_PERMISSIONS = 101;
    private final String[] REQUIRED_PERMISSIONS = new String[]{"android.permission.CAMERA"};
    TextureView textureView;
    ImageButton imgbutton;
    //LinearLayout linear1;
    TextView text1;

    // AutoML objects
    AutoMLImageLabelerLocalModel localModel =
            new AutoMLImageLabelerLocalModel.Builder()
                    .setAssetFilePath("model/manifest.json")
                    // or .setAbsoluteFilePath(absolute file path to manifest file)
                    .build();

    AutoMLImageLabelerOptions autoMLImageLabelerOptions =
            new AutoMLImageLabelerOptions.Builder(localModel)
                    .setConfidenceThreshold(0.0f)  // Evaluate your model in the Firebase console
                                                   // to determine an appropriate value.
                    .build();
    ImageLabeler labeler = ImageLabeling.getClient(autoMLImageLabelerOptions);

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        textureView = findViewById(R.id.view_finder);
        imgbutton = findViewById(R.id.imgCapture);
        text1 = findViewById(R.id.textView2);
        if (allPermissionsGranted()) {
            startCamera();
        } else {
            ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS, REQUEST_CODE_PERMISSIONS);
        }
    }

    private void startCamera() {
        CameraX.unbindAll();
        Rational aspectRatio = new Rational(textureView.getWidth(), textureView.getHeight());
        Size screen = new Size(textureView.getWidth(), textureView.getHeight()); // size of the screen
        PreviewConfig pConfig = new PreviewConfig.Builder()
                .setTargetAspectRatio(aspectRatio)
                .setTargetResolution(screen)
                .build();
        Preview preview = new Preview(pConfig);
        preview.setOnPreviewOutputUpdateListener(new Preview.OnPreviewOutputUpdateListener() {
            @Override
            public void onUpdated(Preview.PreviewOutput output) {
                ViewGroup parent = (ViewGroup) textureView.getParent();
                parent.removeView(textureView);
                parent.addView(textureView, 0);
                textureView.setSurfaceTexture(output.getSurfaceTexture());
                updateTransform();
            }
        });
        ImageAnalysisConfig imconfig = new ImageAnalysisConfig.Builder()
                .setTargetAspectRatio(aspectRatio)
                .setTargetResolution(screen)
                .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
                .build();
        final ImageAnalysis analysis = new ImageAnalysis(imconfig);

        analysis.setAnalyzer(new ImageAnalysis.Analyzer() {
            @Override
            public void analyze(ImageProxy image, int rotationDegrees) {
                Image img = image.getImage();
                if (image.getImage() == null) {
                    Log.d("Null", "Image is Null");
                } else {
                    InputImage img1 = InputImage.fromMediaImage(img, rotationDegrees);
                    labeler.process(img1)
                            .addOnSuccessListener(new OnSuccessListener<List<ImageLabel>>() {
                                @Override
                                public void onSuccess(List<ImageLabel> labels) {
                                    // Task completed successfully
                                    for (ImageLabel label : labels) {
                                        String text = label.getText();
                                        float confidence = label.getConfidence();
                                        int index = label.getIndex();
                                        text1.setText(text + " " + confidence);
                                    }
                                }
                            })
                            .addOnFailureListener(new OnFailureListener() {
                                @Override
                                public void onFailure(@NonNull Exception e) {
                                    // Task failed with an exception
                                    // ...
                                    e.printStackTrace();
                                }
                            });
                }
                image.close();
            }
        });
        CameraX.bindToLifecycle((LifecycleOwner) this, analysis, preview);
    }

    private void updateTransform() {
        Matrix mx = new Matrix();
        float w = textureView.getMeasuredWidth();
        float h = textureView.getMeasuredHeight();
        float cX = w / 2f;
        float cY = h / 2f;
        int rotationDgr;
        int rotation = (int) textureView.getRotation();
        switch (rotation) {
            case Surface.ROTATION_0:
                rotationDgr = 0;
                break;
            case Surface.ROTATION_90:
                rotationDgr = 90;
                break;
            case Surface.ROTATION_180:
                rotationDgr = 180;
                break;
            case Surface.ROTATION_270:
                rotationDgr = 270;
                break;
            default:
                return;
        }
        mx.postRotate((float) rotationDgr, cX, cY);
        textureView.setTransform(mx);
    }

    private boolean allPermissionsGranted() {
        for (String permission : REQUIRED_PERMISSIONS) {
            if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {
                return false;
            }
        }
        return true;
    }
}

What am I doing wrong here?

You must not close the image before it has been processed. Closing the image signals that you are done with it and triggers the next frame captured by the camera to be delivered to your app for processing.

Processing takes time; it is not instantaneous.

labeler.process(img1)
        .addOnSuccessListener(new OnSuccessListener<List<ImageLabel>>() {
            @Override
            public void onSuccess(List<ImageLabel> labels) {
                // Close the image
                image.close();
                ...
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Close the image
                image.close();
                ...
            }
        });
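Note that image here is the ImageProxy parameter of analyze(); because the anonymous listeners capture it, it must be effectively final, which it is as long as analyze() never reassigns it.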

Instead of duplicating image.close() in two separate listeners, you can use an OnCompleteListener, which is called once the task finishes, whether it succeeded or failed. I have also added img.close() for good housekeeping and to avoid potential errors of a similar nature.

Error:


labeler.process(image)
        .addOnSuccessListener(new OnSuccessListener<List<ImageLabel>>() {
            @Override
            public void onSuccess(List<ImageLabel> labels) {
                // Task completed successfully
                Log.i(TAG, "labeler task successful");
                // Do something with the labels
                // ...
            }
        }).addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                Log.i(TAG, "labeler task failed with Error:" + e);
            }
        });
// Closing here races the asynchronous labeler task and causes the exception
img.close();
image.close();

No error:

labeler.process(image)
        .addOnSuccessListener(new OnSuccessListener<List<ImageLabel>>() {
            @Override
            public void onSuccess(List<ImageLabel> labels) {
                // Task completed successfully
                Log.i(TAG, "labeler task successful");
                // Do something with the labels
                // ...
            }
        }).addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                Log.i(TAG, "labeler task failed with Error:" + e);
            }
        }).addOnCompleteListener(new OnCompleteListener<List<ImageLabel>>() {
            @Override
            public void onComplete(@NonNull Task<List<ImageLabel>> task) {
                // Runs after success or failure: safe to release the frame now
                img.close();
                image.close();
            }
        });
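Applied to the analyzer from the question, the corrected analyze() would look roughly like this (a sketch under the same assumptions as the question's code; OnCompleteListener and Task come from com.google.android.gms.tasks). image.close() moves from the tail of analyze() into the OnCompleteListener, and the null branch now closes the frame itself so CameraX keeps delivering frames:

analysis.setAnalyzer(new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(ImageProxy image, int rotationDegrees) {
        Image img = image.getImage();
        if (img == null) {
            Log.d("Null", "Image is Null");
            // Nothing was handed to the labeler, so release the frame here
            image.close();
            return;
        }
        InputImage img1 = InputImage.fromMediaImage(img, rotationDegrees);
        labeler.process(img1)
                .addOnSuccessListener(new OnSuccessListener<List<ImageLabel>>() {
                    @Override
                    public void onSuccess(List<ImageLabel> labels) {
                        for (ImageLabel label : labels) {
                            text1.setText(label.getText() + " " + label.getConfidence());
                        }
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        e.printStackTrace();
                    }
                })
                .addOnCompleteListener(new OnCompleteListener<List<ImageLabel>>() {
                    @Override
                    public void onComplete(@NonNull Task<List<ImageLabel>> task) {
                        // Only close once ML Kit is done with the frame; closing
                        // also lets CameraX deliver the next frame to analyze()
                        image.close();
                    }
                });
    }
});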

Each use case should be set up on its own thread, to avoid interfering with the processing of the others.

For the use cases:

  • Barcode scanning
  • Text recognition

private fun setsAnalyzersAsUseCase(): ImageAnalysis {
    val analysisUseCase = ImageAnalysis.Builder()
        .build()
    if (BARCODE_SCANNING_ENABLED) {
        analysisUseCase.setAnalyzer(
            Executors.newSingleThreadExecutor()
        ) { imageProxy ->
            processImageWithBarcodeScanner(imageProxy = imageProxy)
        }
    }

    if (TEXT_RECOGNITION_ENABLED) {
        analysisUseCase.setAnalyzer(
            Executors.newSingleThreadExecutor()
        ) { imageProxy ->
            processImageWithTextRecognition(imageProxy = imageProxy)
        }
    }
    return analysisUseCase
}

GL-
