Calling an Intent from a Runnable crashes the app in Android Studio



I'm fairly new to Android. I'm trying to start an Intent from a Runnable, but the app keeps crashing. I tried wrapping the call in a Handler, but nothing worked.

Edit: Added a Logcat screenshot. I noticed the error shown is "This is not a userdebug build". Screenshot link: https://drive.google.com/open?id=1XEyWnjiXm7X6gzx84wTXx6R_vJN9srWl

runInBackground(
                new Runnable() {
                    @Override
                    public void run() {
                        if (conditionHasMet) { // pseudocode: some condition is true
                            // go to the other activity
                            Intent intent = new Intent(getBaseContext(), CameraActivity.class);
                            startActivity(intent);
                        }
                    }
                });

The snippet below is my full code; I actually took it from the TensorFlow object detection example.

@Override
    protected void processImage() {
        final Button admin_see_questions = (Button) findViewById(R.id.button);
        final KonfettiView konfettiView = (KonfettiView) findViewById(R.id.konfettiView);
        ++timestamp;
        final long currTimestamp = timestamp;
        byte[] originalLuminance = getLuminance();
        tracker.onFrame(
                previewWidth,
                previewHeight,
                getLuminanceStride(),
                sensorOrientation,
                originalLuminance,
                timestamp);
        trackingOverlay.postInvalidate();
        // No mutex needed as this method is not reentrant.
        if (computingDetection) {
            readyForNextImage();
            return;
        }
        computingDetection = true;
        LOGGER.i("Preparing image " + currTimestamp + " for detection in bg thread.");
        rgbFrameBitmap.setPixels(getRgbBytes(), 0, previewWidth, 0, 0, previewWidth, previewHeight);
        if (luminanceCopy == null) {
            luminanceCopy = new byte[originalLuminance.length];
        }
        System.arraycopy(originalLuminance, 0, luminanceCopy, 0, originalLuminance.length);
        readyForNextImage();
        final Canvas canvas = new Canvas(croppedBitmap);
        canvas.drawBitmap(rgbFrameBitmap, frameToCropTransform, null);
        // For examining the actual TF input.
        if (SAVE_PREVIEW_BITMAP) {
            ImageUtils.saveBitmap(croppedBitmap);
        }
        final Handler mHandler = new Handler(getMainLooper());
        final Button bt1 = (Button) findViewById(R.id.button2);
        runInBackground(
                new Runnable() {
                    @Override
                    public void run() {
                        LOGGER.i("Running detection on image " + currTimestamp);
                        final long startTime = SystemClock.uptimeMillis();
                        final List<Classifier.Recognition> results = detector.recognizeImage(croppedBitmap);
                        lastProcessingTimeMs = SystemClock.uptimeMillis() - startTime;
                        cropCopyBitmap = Bitmap.createBitmap(croppedBitmap);
                        final Canvas canvas = new Canvas(cropCopyBitmap);
                        final Paint paint = new Paint();
                        paint.setColor(Color.RED);
                        paint.setStyle(Style.STROKE);
                        paint.setStrokeWidth(2.0f);
                        float minimumConfidence = MINIMUM_CONFIDENCE_TF_OD_API;
                        switch (MODE) {
                            case TF_OD_API:
                                minimumConfidence = MINIMUM_CONFIDENCE_TF_OD_API;
                                break;
                            case MULTIBOX:
                                minimumConfidence = MINIMUM_CONFIDENCE_MULTIBOX;
                                break;
                            case YOLO:
                                minimumConfidence = MINIMUM_CONFIDENCE_YOLO;
                                break;
                        }
                        final List<Classifier.Recognition> mappedRecognitions =
                                new LinkedList<Classifier.Recognition>();
                        for (final Classifier.Recognition result : results) {
                            final RectF location = result.getLocation();
                            if (location != null && result.getConfidence() >= minimumConfidence) {
                                canvas.drawRect(location, paint);
                                //Toast.makeText(DetectorActivity.this, result.getTitle(), Toast.LENGTH_LONG).show();
                                cropToFrameTransform.mapRect(location);
                                result.setLocation(location);
                                mappedRecognitions.add(result);
                                //found = true;
                                Log.i("err", "Transiting to other intent");
                                Intent intent = new Intent(DetectorActivity.this, display.class);
                                startActivity(intent);
                            }
                        }
                        tracker.trackResults(mappedRecognitions, luminanceCopy, currTimestamp);
                        trackingOverlay.postInvalidate();
                        requestRender();
                        computingDetection = false;
                    }
                });
}

Try replacing getBaseContext() with YourActivity.this.

 if (conditionHasMet) { // pseudocode: some condition is true
        // go to the other activity
        Intent intent = new Intent(YourActivity.this, CameraActivity.class);
        startActivity(intent);
        }

If that doesn't work, please paste your Logcat here.
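Another thing worth trying, since the crash happens inside a background Runnable: post the activity launch back to the main thread instead of calling startActivity() directly from the worker. Below is a minimal sketch of that pattern, reusing the DetectorActivity and display class names from the question; conditionHasMet stands in for your real condition, and runInBackground is the helper from the TensorFlow demo.

```java
// Inside DetectorActivity. A Handler bound to the main looper lets the
// background Runnable hand the launch over to the UI thread.
final Handler mainHandler = new Handler(Looper.getMainLooper());

runInBackground(new Runnable() {
    @Override
    public void run() {
        // ... detection work on the background thread ...
        if (conditionHasMet) { // placeholder for your real condition
            mainHandler.post(new Runnable() {
                @Override
                public void run() {
                    // Runs on the main thread with the Activity as context.
                    Intent intent = new Intent(DetectorActivity.this, display.class);
                    startActivity(intent);
                }
            });
        }
    }
});
```

Also note that in your full code the Intent is fired inside the for loop over results, so it can be launched once per recognized object on every processed frame; guarding it with a boolean flag (like the commented-out `found`) would prevent the activity from being started repeatedly.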
