Real-time DFT on the Android camera view using OpenCV



I have been trying to implement an Android application that applies a DFT directly to the camera view. Researching on Stack Overflow, I found the following topics:

SOLVED - Load an image into a Mat and display it after DFT processing

Converting OpenCv DCT to Android

I also tried a different solution that uses JNI: http://allaboutee.com/2011/11/12/discrete-fourier-transform-in-android-with-opencv/

From that, I was able to put together my main activity code:
package ch.hepia.lsn.opencv_native_androidstudio;
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
import android.view.SurfaceView;
import android.view.WindowManager;
import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import android.hardware.Camera;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;
public class MainActivity extends Activity implements CameraBridgeViewBase.CvCameraViewListener2 {
    private static final String TAG = "OCVSample::Activity";
    private CameraBridgeViewBase mOpenCvCameraView;
    private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS: {
                    Log.i(TAG, "OpenCV loaded successfully");
                    mOpenCvCameraView.enableView();
                }
                break;
                default: {
                    super.onManagerConnected(status);
                }
            }
        }
    };
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Load ndk built module, as specified
        // in moduleName in build.gradle
        System.loadLibrary("native");
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        setContentView(R.layout.activity_main);
        mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.main_surface);
        mOpenCvCameraView.setVisibility(SurfaceView.VISIBLE);
        mOpenCvCameraView.setCvCameraViewListener(this);
    }
    @Override
    public void onPause() {
        super.onPause();
        disableCamera();
    }
    @Override
    public void onResume() {
        super.onResume();
        if (!OpenCVLoader.initDebug()) {
            Log.d(TAG, "Internal OpenCV library not found. Using OpenCV Manager for initialization");
            OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_0_0, this, mLoaderCallback);
        } else {
            Log.d(TAG, "OpenCV library found inside package. Using it!");
            mLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
        }
    }
    public void onDestroy() {
        super.onDestroy();
        disableCamera();
    }
    public void disableCamera() {
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView();
    }
    public void onCameraViewStarted(int width, int height) {
    }
    public void onCameraViewStopped() {
    }
    private Mat getDFT(Mat singleChannel) {
        singleChannel.convertTo(singleChannel, CvType.CV_64FC1);
        int m = Core.getOptimalDFTSize(singleChannel.rows());
        int n = Core.getOptimalDFTSize(singleChannel.cols());
        // Expand the input image to the optimal DFT size,
        // padding the borders with zero values
        Mat padded = new Mat(new Size(n, m), CvType.CV_64FC1);
        Core.copyMakeBorder(singleChannel, padded, 0, m - singleChannel.rows(), 0,
                n - singleChannel.cols(), Core.BORDER_CONSTANT);
        List<Mat> planes = new ArrayList<Mat>();
        planes.add(padded);
        planes.add(Mat.zeros(padded.rows(), padded.cols(), CvType.CV_64FC1));
        Mat complexI = Mat.zeros(padded.rows(), padded.cols(), CvType.CV_64FC2);
        Mat complexI2 = Mat.zeros(padded.rows(), padded.cols(), CvType.CV_64FC2);
        Core.merge(planes, complexI); // add a zero imaginary plane to the expanded image
        Core.dft(complexI, complexI2);
        // Compute the magnitude and switch to logarithmic scale:
        // log(1 + sqrt(Re(DFT(I))^2 + Im(DFT(I))^2))
        Core.split(complexI2, planes); // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
        Mat mag = new Mat(planes.get(0).size(), planes.get(0).type());
        Core.magnitude(planes.get(0), planes.get(1), mag);
        Mat magI = mag;
        Mat magI2 = new Mat(magI.size(), magI.type());
        Mat magI3 = new Mat(magI.size(), magI.type());
        Mat magI4 = new Mat(magI.size(), magI.type());
        Mat magI5 = new Mat(magI.size(), magI.type());
        // Switch to logarithmic scale: log(1 + magnitude)
        Core.add(magI, Mat.ones(padded.rows(), padded.cols(), CvType.CV_64FC1),
                magI2);
        Core.log(magI2, magI3);
        // Crop the spectrum to an even number of rows and columns
        Mat crop = new Mat(magI3, new Rect(0, 0, magI3.cols() & -2,
                magI3.rows() & -2));
        magI4 = crop.clone();
        // rearrange the quadrants of Fourier image so that the origin is at the
        // image center
        int cx = magI4.cols() / 2;
        int cy = magI4.rows() / 2;
        Rect q0Rect = new Rect(0, 0, cx, cy);
        Rect q1Rect = new Rect(cx, 0, cx, cy);
        Rect q2Rect = new Rect(0, cy, cx, cy);
        Rect q3Rect = new Rect(cx, cy, cx, cy);
        Mat q0 = new Mat(magI4, q0Rect); // Top-Left - Create a ROI per quadrant
        Mat q1 = new Mat(magI4, q1Rect); // Top-Right
        Mat q2 = new Mat(magI4, q2Rect); // Bottom-Left
        Mat q3 = new Mat(magI4, q3Rect); // Bottom-Right
        Mat tmp = new Mat(); // swap quadrants (Top-Left with Bottom-Right)
        q0.copyTo(tmp);
        q3.copyTo(q0);
        tmp.copyTo(q3);
        q1.copyTo(tmp); // swap quadrant (Top-Right with Bottom-Left)
        q2.copyTo(q1);
        tmp.copyTo(q2);
        Core.normalize(magI4, magI5, 0, 255, Core.NORM_MINMAX);
        Mat realResult = new Mat(magI5.size(), CvType.CV_8UC1);
        magI5.convertTo(realResult, CvType.CV_8UC1);
        return realResult;
    }
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        return getDFT(inputFrame.gray());
    }
}

But the problem is that I still get this error:

07-03 22:46:46.205 13700-28322/ch.hepia.lsn.opencv_native_androidstudio A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x10 in tid 28322 (Thread-9802)

I suspect this happens because of some processing limitation, since I simply copied code that worked for other users on single still images.

My questions are:

  • How can I check whether this error is due to a processing limitation?

  • Is there any other way to implement this using OpenCV or another library?

Thank you.

I have ported the code originally posted to Android Studio (from Eclipse) and OpenCV 3.1.0. I believe the Core.add() function has a problem in this version of OpenCV - see the post here.

Using the suggested Core.addWeighted() instead, I can at least get the DFT to display, but before long it runs out of memory. I suspect that functions such as split() also use add() internally, so I think we need to wait for OpenCV to fix this issue.
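As a minimal sketch of that workaround (assuming the rest of getDFT() stays as posted), the Core.add() call that shifts the spectrum before taking the log can be replaced with Core.addWeighted(), which computes dst = src1*alpha + src2*beta + gamma:

```java
// Inside getDFT(), replace:
//     Core.add(magI, Mat.ones(padded.rows(), padded.cols(), CvType.CV_64FC1), magI2);
// with an equivalent addWeighted() call. Passing gamma = 1.0 adds the constant
// directly, so the temporary ones-matrix is no longer needed:
Core.addWeighted(magI, 1.0, magI, 0.0, 1.0, magI2); // magI2 = magI*1.0 + magI*0.0 + 1.0
Core.log(magI2, magI3); // log(1 + magnitude), as before
```

This is a fragment of the Android activity above, so it only runs inside getDFT() with the OpenCV native library loaded.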

The code I posted can be improved to make better use of resources: for example, allocate the key Mats statically instead of per frame, avoid calling size() repeatedly, and reduce the number of Mats allocated overall. You can also shrink the captured image, since on more modern phones (I used a Samsung S6) the Mats get very large, so use

mOpenCvCameraView.setMaxFrameSize(176, 152);

or some other more manageable size.
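For instance (a sketch; mPadded is an illustrative field name, not from the original code), the size cap goes in onCreate() before the view is enabled, and per-frame Mats can be allocated once in onCameraViewStarted() and released in onCameraViewStopped():

```java
// In onCreate(), after findViewById():
mOpenCvCameraView.setMaxFrameSize(176, 152); // cap frames before the view is enabled

// Reuse one padded buffer across frames instead of allocating inside getDFT():
private Mat mPadded;

public void onCameraViewStarted(int width, int height) {
    int m = Core.getOptimalDFTSize(height);
    int n = Core.getOptimalDFTSize(width);
    mPadded = new Mat(new Size(n, m), CvType.CV_64FC1); // allocated once
}

public void onCameraViewStopped() {
    if (mPadded != null) mPadded.release(); // free the native memory promptly
}
```

The same pattern applies to the other intermediate Mats (complexI, magI2, and so on) in getDFT().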

If you want to reduce the number of frames processed, keep a static counter, increment it on every captured frame, and only call getDFT() when the counter is divisible by 5 or 10, so that only every 5th or 10th frame is processed.
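That counter can be sketched as a small helper (the class and method names here are illustrative, not part of the original code):

```java
// Decides whether to process the current frame; only every interval-th
// frame passes, which replaces the static counter described above.
public class FrameSkipper {
    private final int interval;
    private int frameCount = 0;

    public FrameSkipper(int interval) {
        this.interval = interval;
    }

    // True for frames 0, interval, 2*interval, ...
    public boolean shouldProcess() {
        return (frameCount++ % interval) == 0;
    }
}
```

In onCameraFrame() you would call shouldProcess() and, for skipped frames, return inputFrame.gray() (or the last DFT result) instead of calling getDFT().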
